
Paper session 1: Fairness

09:45
When your only tool is a hammer: ethical limitations of computational fairness solutions in healthcare machine learning

ABSTRACT. The implications of using machine learning (ML) tools that risk propagation of pernicious bias (that is, reflecting societal inequality) are of tremendous concern. The implications are even greater in healthcare, where social determinants of health may independently contribute to healthcare inequalities. Although the mainstream perception appears to be that bias has arisen de novo and is attributable to ML per se, there is ample evidence to indicate that bias in ML reflects real-world patterns of social inequality. Given that ML-related techniques involve learning from associations within these extant, biased data, these applications require targeted attention to their ethical development and implementation to minimize the risk of unintended consequences stemming from propagation of bias. In this work, we briefly describe the range of ‘algorithmic fairness’ solutions offered within the fair ML field and how they operationalize and define ‘fairness.’ We explore how the efficacy of these solutions is likely highly limited in the field of healthcare ML by elucidating epistemic, empirical, and ethical considerations. Finally, we focus on how contributions from feminist critiques of science may inform a more ethically defensible path forward, and conclude with a set of recommendations for addressing bias in healthcare ML.

10:00
Normative Principles for Evaluating Fairness in Machine Learning

ABSTRACT. There are many incompatible ways to measure fair outcomes for machine learning algorithms. The goal of this paper is to characterize rates of success and error across protected groups (race, gender, sexual orientation) as a distribution problem, and describe the possible solutions to this problem according to different normative principles from moral and political philosophy. These normative principles include: Consequentialism, Intent-Based and Compensation-Based Egalitarianism, Libertarianism, and Desert-Based Theories. Each principle will be applied to a sample risk-assessment classifier to demonstrate the philosophical arguments underlying different sets of fairness metrics.
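
As a concrete point of reference (not taken from the paper), the sketch below shows one common way per-group success and error rates are computed for a binary risk classifier; the function name and toy data are illustrative assumptions. The paper's normative principles concern how disparities in rates like these should then be weighed.

    import numpy as np

    def group_rates(y_true, y_pred, group):
        """Per-group true positive and false positive rates for a binary classifier."""
        rates = {}
        for g in np.unique(group):
            mask = group == g
            yt, yp = y_true[mask], y_pred[mask]
            # TPR: share of actual positives predicted positive;
            # FPR: share of actual negatives predicted positive
            tpr = yp[yt == 1].mean() if (yt == 1).any() else float("nan")
            fpr = yp[yt == 0].mean() if (yt == 0).any() else float("nan")
            rates[g] = {"TPR": tpr, "FPR": fpr}
        return rates

    # Toy data: a hypothetical risk classifier scored separately on groups "A" and "B"
    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])
    group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
    print(group_rates(y_true, y_pred, group))

Whether the resulting gaps in TPR or FPR across groups constitute unfairness, and what would count as an acceptable remedy, is exactly what the competing normative principles in the paper adjudicate.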

10:15
Biased Priorities, Biased Outcomes: Three Recommendations for Ethics-oriented Data Annotation Practices

ABSTRACT. In this paper, we analyze the relation between biased data-driven outcomes and practices of data annotation for vision models, by placing them in the context of the market economy. Understanding data annotation as a sense-making process, we investigate which goals are prioritized by decision-makers throughout the annotation of datasets. Following a qualitative design, the study is based on 24 interviews with relevant actors and extensive participatory observations, including several weeks of fieldwork at two companies dedicated to data annotation for machine learning in Buenos Aires, Argentina and Sofia, Bulgaria. We argue that market-oriented values prevail over socially responsible approaches on the basis of three corporate priorities that inform work practices in this field: profit, standardization, and opacity. Finally, we introduce three elements, namely transparency, education, and regulations, aimed at developing ethics-oriented data annotation practices that could help prevent biased outcomes.

10:30
CERTIFAI: A common framework to provide explanations and analyse the fairness and robustness of black-box models

ABSTRACT. Concerns within the machine learning community and external pressures from regulators over the vulnerabilities of machine learning algorithms have spurred on the fields of explainability, robustness, and fairness. Often, issues in explainability, robustness, and fairness are confined to their specific sub-fields, and few tools exist for model developers to simultaneously build their modeling pipelines in a transparent, accountable, and fair way. This can lead to a bottleneck on the model developer’s side as they must juggle multiple methods to evaluate their algorithms. In this paper, we present a single framework for analyzing the robustness, fairness, and explainability of a classifier. The framework, which is based on the generation of counterfactual explanations through a custom genetic algorithm, is flexible, model-agnostic, and does not require access to model internals. The framework allows the user to calculate robustness and fairness scores for individual models and to generate explanations for individual predictions which provide a means for actionable recourse (changes to an input that help obtain a desired outcome). This is the first time that a unified tool has been developed to address three key issues pertaining to building a responsible artificial intelligence system.
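
As a rough illustration of the counterfactual-based idea the abstract describes, and not the authors' CERTIFAI implementation, the sketch below evolves candidate counterfactuals for a black-box classifier with a very simple genetic loop; the fitness function, mutation scheme, and all names are illustrative assumptions.

    import numpy as np

    def counterfactual_ga(predict, x, target, n_pop=50, n_gen=100, sigma=0.3, seed=0):
        """Evolve a point close to x whose black-box prediction equals `target`."""
        rng = np.random.default_rng(seed)
        pop = x + rng.normal(0.0, sigma, size=(n_pop, x.size))  # initial candidates around x

        def fitness(p):
            # Distance to x, heavily penalized when the desired class is not reached
            return np.linalg.norm(p - x, axis=1) + np.where(predict(p) == target, 0.0, 1e3)

        for _ in range(n_gen):
            survivors = pop[np.argsort(fitness(pop))[: n_pop // 2]]                # keep the fittest half
            children = survivors + rng.normal(0.0, sigma, size=survivors.shape)    # mutate survivors
            pop = np.vstack([survivors, children])
        return pop[np.argmin(fitness(pop))]

    # Usage with a toy black-box classifier (decision rule: sum of features > 1.0)
    predict = lambda X: (X.sum(axis=1) > 1.0).astype(int)
    x = np.array([0.2, 0.3])                      # currently predicted as class 0
    cf = counterfactual_ga(predict, x, target=1)  # nearby input predicted as class 1
    print(cf, predict(cf[None, :]))

The returned point doubles as an explanation (what would need to change) and as a basis for distance-based robustness or fairness comparisons across inputs and groups, which is the role counterfactuals play in the framework described above.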