
Poster session 4

Thu 20, 19:30–20:30 PDT

#47 The Grey Hoodie Project: Big Tobacco, Big Tech, and the Threat on Academic Integrity

Mohamed Abdalla, Moustafa Abdalla

As governmental bodies rely on academics’ expert advice to shape policy regarding Artificial Intelligence, it is important that these academics not have conflicts of interests that may cloud or bias their judgement. Our work explores how Big Tech can actively distort the academic landscape to suit its needs. By comparing the well-studied actions of another industry (Big Tobacco) to the current actions of Big Tech we see similar strategies employed by both industries. These strategies enable either industry to sway and influence academic and public discourse. We examine the funding of academic research as a tool used by Big Tech to put forward a socially responsible public image, influence events hosted by and decisions made by funded universities, influence the research questions and plans of individual scientists, and discover receptive academics who can be leveraged. We demonstrate how Big Tech can affect academia from the institutional level down to individual researchers. Thus, we believe that it is vital, particularly for universities and other institutions of higher learning, to discuss the appropriateness and the tradeoffs of accepting funding from Big Tech, and what limitations or conditions should be put in place.

#258 “I’m Covered in Blood”: Persistent Anti-Muslim Bias in Large Language Models

Abubakar Abid, James Zou, Maheen Farooqi

#235 Are AI Ethics Conferences Different and More Diverse Compared to Traditional Computer Science Conferences?

Daniel Acuna, Lizhen Liang

Even though computer science (CS) has had a historical lack of gender and race representation, its AI research affects everybody eventually. Being partially rooted in CS conferences, “AI ethics” (AIE) conferences such as FAccT and AIES have quickly become distinct venues where AI’s societal implications are discussed and solutions proposed. However, it is largely unknown if these conferences improve upon the historical representational issues of traditional CS venues. In this work, we explore AIE conferences’ evolution and compare them across demographic characteristics, publication content, and citation patterns. We find that AIE conferences have increased their internal topical diversity and impact on other CS conferences. Importantly, AIE conferences are highly differentiable, covering topics not represented in other venues. However, and perhaps contrary to the field’s aspirations, white authors are more common while seniority and black researchers are represented similarly to CS venues. Our results suggest that AIE conferences could increase efforts to attract more diverse authors, especially considering their sizable roots in CS.

#34 Measuring Model Biases in the Absence of Ground Truth

Osman Aka, Ken Burke, Alex Bauerle, Christina Greer, Margaret Mitchell

#279 Accounting for Model Uncertainty in Algorithmic Discrimination

Junaid Ali, Preethi Lahoti, Krishna P. Gummadi

Traditional approaches to ensure group fairness in algorithmic decision making aim to equalize “total” error rates for different subgroups in the population. In contrast, we argue that fairness approaches should instead focus only on equalizing errors arising from model uncertainty (a.k.a. epistemic uncertainty), which is caused by a lack of knowledge about the best model or by a lack of data. In other words, our proposal calls for ignoring the errors that occur due to uncertainty inherent in the data, i.e., aleatoric uncertainty. We draw a connection between predictive multiplicity and model uncertainty and argue that techniques from predictive multiplicity can be used to identify errors made due to model uncertainty. We propose scalable convex proxies for finding classifiers that exhibit predictive multiplicity and empirically show that our methods are comparable in performance and up to four orders of magnitude faster than the current state of the art. We further propose methods to achieve our goal of equalizing group error rates arising from model uncertainty in algorithmic decision making, and demonstrate the effectiveness of these methods using synthetic and real-world datasets.
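A minimal sketch of the distinction the abstract draws, assuming a simple bootstrap ensemble as a stand-in for the paper's convex proxies: disagreement among several comparably accurate models flags errors plausibly due to epistemic (model) uncertainty, while errors on which all models agree are more plausibly aleatoric. The dataset, ensemble size, and disagreement threshold below are illustrative assumptions, not the authors' method.

```python
# Sketch: ensemble disagreement as a rough proxy for epistemic uncertainty.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# An ensemble of plausible models obtained by bootstrapping the training data.
models = []
rng = np.random.default_rng(0)
for _ in range(20):
    idx = rng.integers(0, len(X_tr), len(X_tr))
    models.append(LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx]))

preds = np.array([m.predict(X_te) for m in models])           # (n_models, n_test)
disagreement = preds.mean(axis=0) * (1 - preds.mean(axis=0))  # high = models disagree
majority = (preds.mean(axis=0) >= 0.5).astype(int)
errors = majority != y_te

# Errors where the ensemble disagrees: candidate "epistemic" errors.
# Errors where all models agree: more plausibly due to aleatoric uncertainty.
epistemic_errors = errors & (disagreement > 0.15)
aleatoric_errors = errors & (disagreement <= 0.15)
print(epistemic_errors.sum(), aleatoric_errors.sum())
```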

#214 Designing Disaggregated Evaluations of AI Systems: Choices, Considerations, and Tradeoffs

Solon Barocas, Anhong Guo, Ece Kamar, Jacquelyn Krones, Meredith Ringel Morris, Jennifer Wortman Vaughan, Duncan Wadsworth, Hanna Wallach

Disaggregated evaluations of AI systems, in which system performance is assessed and reported separately for different groups of people, are conceptually simple. However, their design involves a variety of choices. Some of these choices influence the results that will be obtained, and thus the conclusions that can be drawn; others influence the impacts—both beneficial and harmful—that a disaggregated evaluation will have on people, including the people whose data is used to conduct the evaluation. We argue that a deeper understanding of these choices will enable researchers and practitioners to design careful and conclusive disaggregated evaluations. We also argue that better documentation of these choices, along with the underlying considerations and tradeoffs that have been made, will help others when interpreting an evaluation’s results and conclusions.

#50 Explainable AI and Adoption of Financial Algorithmic Advisors: An Experimental Study

Daniel Ben David, Yehezkel Resheff, Talia Tron

We study whether receiving advice from either a human or an algorithmic advisor, accompanied by five types of Local and Global explanation labelings, has an effect on the readiness to adopt, willingness to pay, and trust in a financial AI consultant. We compare the differences over time and in various key situations using a unique experimental framework where participants play a web-based game with real monetary consequences. We observed that accuracy-based explanations of the model in initial phases lead to higher adoption rates. When the performance of the model is immaculate, the kind of explanation matters less for adoption. Using more elaborate feature-based or accuracy-based explanations helps substantially in reducing the adoption drop upon model failure. Furthermore, using an autopilot increases adoption significantly. Participants assigned to the AI-labeled advice with explanations were willing to pay more for the advice than those assigned to the AI-labeled advice with the "No-explanation" alternative. These results add to the literature on the importance of XAI for algorithmic adoption and trust.

#143 Rawlsian Fair Adaptation of Deep Learning Classifiers

Chiranjib Bhattacharyya, Amit Deshpande, Pooja Gupta, Kulin Shah

Group-fairness in classification aims for equality of a predictive utility across different sensitive sub-populations, e.g., race or gender. Equality or near-equality constraints in group-fairness often worsen not only the aggregate utility but also the utility for the least advantaged sub-population. In this paper, we apply the principles of Pareto-efficiency and least-difference to the utility being accuracy, as an illustrative example, and arrive at the Rawls classifier that minimizes the error rate on the worst-off sensitive sub-population. Our mathematical characterization shows that the Rawls classifier uniformly applies a threshold to an ideal score of features, in the spirit of fair equality of opportunity. In practice, such a score or a feature representation is often computed by a black-box model that has been useful but unfair. Our second contribution is practical Rawlsian fair adaptation of any given black-box deep learning model, without changing the score or feature representation it computes. Given any score function or feature representation and only its second-order statistics on the sensitive sub-populations, we seek a threshold classifier on the given score or a linear threshold classifier on the given feature representation that achieves the Rawls error rate restricted to this hypothesis class. Our technical contribution is to formulate the above problems using ambiguous chance constraints, and to provide efficient algorithms for Rawlsian fair adaptation, along with provable upper bounds on the Rawls error rate. Our empirical results show significant improvement over state-of-the-art group-fair algorithms, even without retraining for fairness.
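A brute-force illustration of the core idea (a single threshold on a given score chosen to minimize the worst-off group's error rate), not the paper's chance-constrained adaptation algorithm; the synthetic scores and grid search below are assumptions for the sketch.

```python
# Sketch: choose one threshold on a score to minimize the worst group's error rate.
import numpy as np

def rawls_threshold(scores, labels, groups, n_grid=200):
    thresholds = np.quantile(scores, np.linspace(0, 1, n_grid))
    best_t, best_worst_err = None, np.inf
    for t in thresholds:
        preds = (scores >= t).astype(int)
        worst_err = max(
            np.mean(preds[groups == g] != labels[groups == g])
            for g in np.unique(groups)
        )
        if worst_err < best_worst_err:
            best_t, best_worst_err = t, worst_err
    return best_t, best_worst_err

rng = np.random.default_rng(1)
groups = rng.integers(0, 2, 5000)
labels = rng.integers(0, 2, 5000)
scores = labels + rng.normal(0, 1.0 + 0.5 * groups, 5000)  # noisier scores for group 1
print(rawls_threshold(scores, labels, groups))
```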

#160 Ensuring Fairness under Prior Probability Shifts

Arpita Biswas, Suvam Mukherjee

Prior probability shift is a phenomenon where the training and test datasets differ structurally within population subgroups. This phenomenon can be observed in the yearly records of several real-world datasets, for example, recidivism records and medical expenditure surveys. If unaccounted for, such shifts can cause the predictions of a classifier to become unfair towards specific population subgroups. While the fairness notion called Proportional Equality (PE) accounts for such shifts, a procedure to ensure PE-fairness was unknown. In this work, we design an algorithm, called CAPE, that ensures fair classification under such shifts. We introduce a metric, called prevalence difference, which CAPE attempts to minimize in order to achieve fairness under prior probability shifts. We theoretically establish that this metric exhibits several properties that are desirable for a fair classifier. We evaluate the efficacy of CAPE via a thorough empirical evaluation on synthetic datasets. We also compare the performance of CAPE with several state-of-the-art fair classifiers on real-world datasets like COMPAS (criminal risk assessment) and MEPS (medical expenditure panel survey). The results indicate that CAPE ensures a high degree of PE-fairness in its predictions, while performing well on other important metrics.
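The abstract does not define the prevalence difference metric here; one plausible reading, which is an assumption on our part rather than the paper's definition, is the per-group gap between a classifier's predicted positive rate and the true positive prevalence. A hedged sketch of that reading:

```python
# Assumed reading of "prevalence difference": per-group gap between predicted
# positive rate and true positive prevalence (not necessarily the paper's metric).
import numpy as np

def prevalence_difference(y_true, y_pred, groups):
    gaps = {}
    for g in np.unique(groups):
        mask = groups == g
        gaps[g] = abs(y_pred[mask].mean() - y_true[mask].mean())
    return gaps

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(prevalence_difference(y_true, y_pred, groups))
```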

#220 Envisioning Communities: A Participatory Approach towards AI for Social Good

Elizabeth Bondi, Lily Xu, Diana Acosta-Navas, Jackson A. Killian

Research in artificial intelligence (AI) for social good presupposes some definition of social good, but potential definitions have been seldom suggested and never agreed upon. The normative question of what AI for social good research should be "for" is not thoughtfully elaborated, or is frequently addressed with a utilitarian outlook that prioritizes the needs of the majority over those who have been historically marginalized, brushing aside realities of injustice and inequity. We argue that AI for social good ought to be assessed by the communities that the AI system will impact, using as a guide the capabilities approach, a framework to measure the ability of different policies to improve human welfare equity. Furthermore, we lay out how AI research has the potential to catalyze social progress by expanding and equalizing capabilities. We show how the capabilities approach aligns with a participatory approach for the design and implementation of AI for social good research in a framework we introduce called PACT, in which community members affected should be brought in as partners and their input prioritized throughout the project. We conclude by providing an incomplete set of guiding questions for carrying out such participatory AI research in a way that elicits and respects a community’s own definition of social good.

#103 Fairness and Machine Fairness

Clinton Castro, David O’Brien, Ben Schwan

Prediction-based decisions, which are often made by utilizing the tools of machine learning, influence nearly all facets of modern life. Ethical concerns about this widespread practice have given rise to the field of fair machine learning and a number of fairness measures, mathematically precise definitions of fairness that purport to determine whether a given prediction-based decision system is fair. Following Reuben Binns (2017), we take “fairness” in this context to be a placeholder for a variety of normative egalitarian considerations. We explore a few fairness measures to suss out their egalitarian roots and evaluate them, both as formalizations of egalitarian ideas and as assertions of what fairness demands of predictive systems. We pay special attention to a recent and popular fairness measure, counterfactual fairness, which holds that a prediction about an individual is fair if it is the same in the actual world and any counterfactual world where the individual belongs to a different demographic group (cf. Kusner et al. 2018).

#253 Reconfiguring Diversity and Inclusion for AI Ethics

Nicole Chi, Emma Lurie, Deirdre K. Mulligan

Activists, journalists, and scholars have long raised critical questions about the relationship between diversity, representation, and structural exclusions in data-intensive tools and services. We build on work mapping the emergent landscape of corporate AI ethics to center one outcome of these conversations: the incorporation of diversity and inclusion in corporate AI ethics activities. Using interpretive document analysis and analytic tools from the values in design field, we examine how diversity and inclusion work is articulated in public-facing AI ethics documentation produced by three companies that create application and services layer AI infrastructure: Google, Microsoft, and Salesforce. We find that as these documents make diversity and inclusion more tractable to engineers and technical clients, they reveal a drift away from civil rights justifications that resonates with the “managerialization of diversity” by corporations in the mid-1980s. The focus on technical artifacts — such as diverse and inclusive datasets — and the replacement of equity with fairness make ethical work more actionable for everyday practitioners. Yet, they appear divorced from broader DEI initiatives and relevant subject matter experts that could provide needed context to nuanced decisions around how to operationalize these values and new solutions. Finally, diversity and inclusion, as configured by engineering logic, positions firms not as “ethics owners” but as ethics allocators; while these companies claim expertise on AI ethics, the responsibility of defining who diversity and inclusion are meant to protect and where it is relevant is pushed downstream to their customers.

#69 Algorithmic Audit of Italian Car Insurance: Evidence of Unfairness in Access and Pricing

Alessandro Fabris, Alan Mishler, Stefano Gottardi, Mattia Carletti, Matteo Daicampi, Gian Antonio Susto, Gianmaria Silvello

We conduct an audit of pricing algorithms employed by companies in the Italian car insurance industry, primarily by gathering quotes through a popular comparison website. While acknowledging the complexity of the industry, we find evidence of several problematic practices. We show that birthplace and gender have a direct and sizeable impact on the prices quoted to drivers, despite national and international regulations against their use. Birthplace, in particular, is used quite frequently to the disadvantage of foreign-born drivers and drivers born in certain Italian cities. In extreme cases, a driver born in Laos may be charged 1,000€ more than a driver born in Milan, all else being equal. For a subset of our sample, we collect quotes directly on a company website, where the direct influence of gender and birthplace is confirmed. Finally, we find that drivers with riskier profiles tend to see fewer quotes in the aggregator result pages, substantiating concerns of differential treatment raised in the past by Italian insurance regulators.

#83 Modeling and Guiding the Creation of Ethical Human-AI Teams

Christopher Flathmann, Beau Schelble, Rui Zhang, Nathan McNeese

With artificial intelligence continuing to advance, so too do the ethical concerns that can potentially negatively impact humans and the greater society. When these systems begin to interact with humans, these concerns become much more complex and much more important. The field of human-AI teaming provides a relevant example of how AI ethics can have significant and continued effects on humans. This paper reviews research in ethical artificial intelligence, as well as ethical teamwork through the lens of the rapidly advancing field of human-AI teaming, resulting in a model demonstrating the requirements and outcomes of building ethical human-AI teams. The model is created to guide the prioritization of ethics in human-AI teaming by outlining the ethical teaming process, outcomes of ethical teams, and external requirements necessary to ensure ethical human-AI teams. A final discussion is presented on how the developed model will influence the implementation of AI teammates, as well as the development of policy and regulation surrounding the domain in the coming years.

#31 Ethical Obligations to Provide Novelty

Paige Golden, David Danks

TikTok is a popular platform that enables users to see tailored content feeds, particularly short videos with novel content. In recent years, TikTok has been criticized at times for presenting users with overly homogenous feeds, thereby reducing the diversity of content with which each user engages. In this paper, we consider whether TikTok has an ethical obligation to employ a novelty bias in its content recommendation engine. We explicate the principal morally relevant values and interests of key stakeholders, and observe that key empirical questions must be answered before a precise recommendation can be provided. We argue that TikTok’s own values and interests mean that its actions should be largely driven by the values and interests of its users and creators. Unlike some other content platforms, TikTok’s ethical obligations are not at odds with the values of its users, and so whether it is obligated to include a novelty bias depends on what will actually advance its users’ interests.

#205 Computing Plans that Signal Normative Compliance

Alban Grastien, Claire Benn, Sylvie Thiebaux

There has been increasing acceptance that agents must act in a way that is sensitive to ethical considerations. These considerations have been cashed out as constraints, such that some actions are permissible, while others are impermissible. In this paper, we claim that, in addition to only performing those actions that are permissible, agents should only perform those courses of action that are unambiguously permissible. By doing so they signal normative compliance: they communicate their understanding of, and commitment to abiding by, the normative constraints in play. Those courses of action (or plans) that succeed in signalling compliance in this sense, we term ‘acceptable’. The problem this paper addresses is how to compute plans that signal compliance, that is, how to find plans that are acceptable as well as permissible. We do this by identifying those plans such that, were an observer to see only part of its execution, that observer would infer the plan enacted was permissible. This paper provides a formal definition of compliance signalling within the domain of AI planning, describes an algorithm for computing compliance signalling plans, provides preliminary experimental results and discusses possible improvements. The signalling of compliance is vital for communication, coordination and cooperation in situations where the agent is partially observed. It is equally vital, therefore, to solve the computational problem of finding those plans that signal compliance. This is what this paper does.

#14 Designing Shapelets for Interpretable Data-Agnostic Classification

Riccardo Guidotti, Anna Monreale

Time series shapelets are discriminatory subsequences which are representative of a class, and their similarity to a time series can be used to successfully tackle the time series classification problem. The literature shows that Artificial Intelligence (AI) systems adopting classification models based on time series shapelets can be interpretable, more accurate, and significantly faster. Thus, in order to design a data-agnostic and interpretable classification approach, in this paper we first extend the notion of shapelets to different types of data, i.e., images, tabular and textual data. Then, based on this extended notion of shapelets we propose an interpretable data-agnostic classification method. Since shapelet discovery can be time consuming, especially for data types more complex than time series, we exploit a notion of prototypes for finding candidate shapelets, reducing both the time required to find a solution and the variance of the shapelets. A wide experimentation on datasets of different types shows that the data-agnostic prototype-based shapelets returned by the proposed method empower an interpretable classification which is also fast, accurate, and stable. In addition, we show, and prove, that shapelets can form the basis of explainable AI methods.
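For readers unfamiliar with shapelets, the basic building block is the distance from a candidate shapelet to a series, usually taken as the minimum distance over all sliding windows; the sketch below shows only this standard primitive, not the paper's prototype-based discovery or its extension to images, tabular and textual data.

```python
# Standard shapelet-to-series distance: minimum Euclidean distance over windows.
import numpy as np

def shapelet_distance(series, shapelet):
    m = len(shapelet)
    windows = np.lib.stride_tricks.sliding_window_view(series, m)
    return np.min(np.linalg.norm(windows - shapelet, axis=1))

series = np.sin(np.linspace(0, 6 * np.pi, 120))
shapelet = np.sin(np.linspace(0, np.pi, 20))   # a single bump as the candidate
print(shapelet_distance(series, shapelet))
```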

#249 Computer Vision and Conflicting Values: Describing People with Automated Alt Text

Margot Hanley, Solon Barocas, Karen Levy, Shiri Azenkot, Helen Nissenbaum

Scholars have recently drawn attention to a range of controversial issues posed by the use of computer vision for automatically generating descriptions of people in images. Despite these concerns, automated image description has become an important tool to ensure equitable access to information for blind and low vision people. In this paper, we investigate the ethical dilemmas faced by companies that have adopted the use of computer vision for producing alt text: textual descriptions of images for blind and low vision people. We use Facebook’s automatic alt text tool as our primary case study. First, we analyze the policies that Facebook has adopted with respect to identity categories, such as race, gender, age, etc., and the company’s decisions about whether to present these terms in alt text. We then describe an alternative—and manual—approach practiced in the museum community, focusing on how museums determine what to include in alt text descriptions of cultural artifacts. We compare these policies, using notable points of contrast to develop an analytic framework that characterizes the particular apprehensions behind these policy choices. We conclude by considering two strategies that seem to sidestep some of these concerns, finding that there are no easy ways to avoid the normative dilemmas posed by the use of computer vision to automate alt text.

#228 Can We Obtain Fairness for Free?

Rashidul Islam, Shimei Pan, James Foulds

There is growing awareness that AI and machine learning systems can in some cases learn to behave in unfair and discriminatory ways with harmful consequences. However, despite an enormous amount of research, techniques for ensuring AI fairness have yet to see widespread deployment in real systems. One of the main barriers is the conventional wisdom that fairness brings a cost in predictive performance metrics such as accuracy which could affect an organization’s bottom-line. In this paper we take a closer look at this concern. Clearly fairness/performance trade-offs exist, but are they inevitable? In contrast to the conventional wisdom, we find that it is frequently possible, indeed straightforward, to improve on a trained model’s fairness without sacrificing predictive performance. We systematically study the behavior of fair learning algorithms on a range of benchmark datasets, showing that it is possible to improve fairness to some degree with no loss (or even an improvement) in predictive performance via a sensible hyper-parameter selection strategy. Our results reveal a pathway toward increasing the deployment of fair AI methods, with potentially substantial positive real-world impacts.
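A minimal sketch of the kind of selection strategy the abstract describes, under the assumption that it amounts to preferring, among near-best-accuracy hyperparameter settings, the one with the smallest fairness gap; the model family, tolerance, and demographic-parity measure below are illustrative choices, not the paper's exact procedure.

```python
# Sketch: pick the fairest model among hyperparameter settings with near-best accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=12, random_state=0)
groups = (X[:, 0] > 0).astype(int)            # stand-in sensitive attribute
X_tr, X_va, y_tr, y_va, g_tr, g_va = train_test_split(X, y, groups, random_state=0)

def dp_gap(preds, g):
    return abs(preds[g == 0].mean() - preds[g == 1].mean())

results = []
for depth in [2, 4, 6, 8, None]:
    clf = RandomForestClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    preds = clf.predict(X_va)
    results.append((depth, (preds == y_va).mean(), dp_gap(preds, g_va)))

best_acc = max(acc for _, acc, _ in results)
candidates = [r for r in results if r[1] >= best_acc - 0.01]   # near-best accuracy
print(min(candidates, key=lambda r: r[2]))                      # fairest among them
```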

#198 The Ethical Gravity Thesis: Marrian Levels and the Persistence of Bias in Automated Decision-Making Systems

Atoosa Kasirzadeh, Colin Klein

Computers are used to make decisions in an increasing number of domains. There is widespread agreement that some of these uses are ethically problematic. Far less clear is where ethical problems arise, and what might be done about them. This paper expands and defends the Ethical Gravity Thesis: ethical problems that arise at higher levels of analysis of an automated decision-making system are inherited by lower levels of analysis. Particular instantiations of systems can add new problems, but not ameliorate more general ones. We defend this thesis by adapting Marr’s famous 1982 framework for understanding information-processing systems. We show how this framework allows one to situate ethical problems at the appropriate level of abstraction, which in turn can be used to target appropriate interventions.

#136 Measuring Group Advantage: A Comparative Study of Fair Ranking Metrics

Caitlin Kuhlman, Walter Gerych, Elke Rundensteiner

Ranking evaluation metrics play an important role in information retrieval, providing optimization objectives during development and means of assessment of deployed performance. Recently, fairness of rankings has been recognized as crucial, especially as automated systems are increasingly used for high impact decisions. While numerous fairness metrics have been proposed, a comparative analysis to understand their interrelationships is lacking. Even for fundamental statistical parity metrics which measure group advantage, it remains unclear whether metrics measure the same phenomena, or when one metric may produce different results than another. To address these open questions, we formulate a conceptual framework for analytical comparison of metrics. We prove that under reasonable assumptions, popular metrics in the literature exhibit the same behavior and that optimizing for one optimizes for all. However, our analysis also shows that the metrics vary in the degree of unfairness measured, in particular when one group has a strong majority. Based on this analysis, we design a practical statistical test to identify whether observed data is likely to exhibit predictable group bias. We provide a set of recommendations for practitioners to guide the choice of an appropriate fairness metric.
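As one concrete member of the statistical-parity family of ranking metrics being compared (chosen here for illustration, not necessarily one of the paper's metrics), average position-discounted exposure per group can be computed as follows.

```python
# Sketch: average position-discounted exposure per group, with 1/log2(rank+1) discounting.
import numpy as np

def group_exposure(ranking_groups):
    """ranking_groups[i] is the group of the item at rank i (0-indexed)."""
    discounts = 1.0 / np.log2(np.arange(len(ranking_groups)) + 2)
    exposure = {}
    for g in set(ranking_groups):
        mask = np.array(ranking_groups) == g
        exposure[g] = discounts[mask].mean()
    return exposure

ranking = ["A", "A", "B", "A", "B", "B"]   # groups of ranked items, top to bottom
print(group_exposure(ranking))             # group A gets higher average exposure
```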

#270 Participatory Algorithmic Management: Elicitation Methods for Worker Well-Being Models

Min Kyung Lee, Ishan Nigam, Angie Zhang, Joel Afriyie, Zhizhen Qin, Sicun Gao

Artificial intelligence is increasingly being used to manage the workforce. Algorithmic management promises organizational efficiency, but often undermines worker well-being. How can we computationally model worker well-being so that algorithmic management can be optimized for and assessed in terms of worker well-being? Toward this goal, we propose a participatory approach for worker well-being models. We first define worker well-being models: Work preference models—preferences about work and working conditions, and managerial fairness models—beliefs about fair resource allocation among multiple workers. We then propose elicitation methods to enable workers to build their own well-being models leveraging pairwise comparisons and ranking. As a case study, we evaluate our methods in the context of algorithmic work scheduling with 25 shift workers and 3 managers. The findings show that workers expressed idiosyncratic work preference models and more uniform managerial fairness models, and the elicitation methods helped workers discover their preferences and gave them a sense of empowerment. Our work provides a method and initial evidence for enabling participatory algorithmic management for worker well-being.

#245 RAWLSNET: Altering Bayesian Networks to Encode Rawlsian Fair Equality of Opportunity

David Liu, Zohair Shafi, William Fleisher, Tina Eliassi-Rad, Scott Alfeld

We present RAWLSNET, a system for altering Bayesian Network (BN) models to satisfy the Rawlsian principle of fair equality of opportunity (FEO). RAWLSNET’s BN models generate aspirational data distributions: data generated to reflect an ideally fair, FEO-satisfying society. FEO states that everyone with the same talent and willingness to use it should have the same chance of achieving advantageous social positions (e.g., employment), regardless of their background circumstances (e.g., socioeconomic status). Satisfying FEO requires alterations to social structures such as school assignments. Our paper describes RAWLSNET, a method which takes as input a BN representation of an FEO application and alters the BN’s parameters so as to satisfy FEO when possible, and minimize deviation from FEO otherwise. We also offer guidance for applying RAWLSNET, including on recognizing proper applications of FEO. We demonstrate the use of RAWLSNET with publicly available data sets. RAWLSNET’s altered BNs offer the novel capability of generating aspirational data for FEO-relevant tasks. Aspirational data are free from biases of real-world data, and thus are useful for recognizing and detecting sources of unfairness in machine learning algorithms besides biased data.
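A toy numeric illustration of the FEO condition itself, assuming a two-valued talent and background and a single outcome node; this only shows what an FEO-satisfying conditional probability table looks like, not RAWLSNET's constrained parameter-alteration procedure.

```python
# Toy FEO illustration: make P(position | talent, background) independent of background.
import numpy as np

# Rows: talent (low, high); columns: background (disadvantaged, advantaged).
p_position = np.array([[0.10, 0.30],
                       [0.40, 0.80]])          # observed, FEO-violating CPT
p_background_given_talent = np.array([[0.6, 0.4],
                                      [0.5, 0.5]])

# One simple projection onto FEO: replace each row with its background-weighted mean,
# so the chance of the advantageous position depends on talent alone.
feo_cpt = (p_position * p_background_given_talent).sum(axis=1, keepdims=True)
feo_cpt = np.repeat(feo_cpt, 2, axis=1)
print(feo_cpt)    # identical columns: same chance regardless of background
```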

#210 Unpacking the Expressed Consequences of AI Research in Broader Impact Statements

Priyanka Nanayakkara, Jessica Hullman, Nicholas Diakopoulos

The computer science research community and the broader public have become increasingly aware of negative consequences of algorithmic systems. In response, the top-tier Neural Information Processing Systems (NeurIPS) conference for machine learning and artificial intelligence research required that authors include a statement of broader impact to reflect on potential positive and negative consequences of their work. We present the results of a qualitative thematic analysis of a sample of statements written for the 2020 conference. The themes we identify broadly fall into categories related to how consequences are expressed (e.g., valence, specificity, uncertainty), areas of impacts expressed (e.g., bias, the environment, labor, privacy), and researchers’ recommendations for mitigating negative consequences in the future. In light of our results, we offer perspectives on how the broader impact statement can be implemented in future iterations to better align with potential goals.

#115 Measuring Lay Reactions to Personal Data Markets

Aileen Nielsen

The recording, aggregation, and exchange of personal data is necessary to the development of socially-relevant machine learning applications. However, anecdotal and survey evidence show that ordinary people feel discontent and even anger regarding data collection practices that are currently typical and legal. This suggests that personal data markets in their current form do not adhere to the norms applied by ordinary people. The present study experimentally probes whether market transactions in a typical online scenario are accepted when evaluated by lay people. The results show that a high percentage of study participants refused to participate in a data pricing exercise, even in a commercial context where market rules would typically be expected to apply. For those participants who did price the data, the median price was an order of magnitude higher than the market price. These results call into question the notice and consent market paradigm that is used by technology firms and government regulators when evaluating data flows. The results also point to a conceptual mismatch between cultural and legal expectations regarding the use of personal data.

#121 The Deepfake Detection Dilemma: A Multistakeholder Exploration of Adversarial Dynamics in Synthetic Media

Aviv Ovadya, Sean McGregor, Claire Leibowicz

Synthetic media detection technologies label media as either synthetic or non-synthetic and are increasingly used by journalists, web platforms, and the general public to identify misinformation and other forms of problematic content. As both well-resourced organizations and the non-technical general public generate more sophisticated synthetic media, the capacity for purveyors of problematic content to adapt induces a detection dilemma: as detection practices become more accessible, they become more easily circumvented. This paper describes how a multistakeholder cohort from academia, technology platforms, media entities, and civil society organizations active in synthetic media detection and its socio-technical implications evaluates the detection dilemma. Specifically, we offer an assessment of detection contexts and adversary capacities sourced from the broader, global AI and media integrity community concerned with mitigating the spread of harmful synthetic media. A collection of personas illustrates the intersection between unsophisticated and highly-resourced sponsors of misinformation in the context of their technical capacities. This work concludes that there is no “best” approach to navigating the detector dilemma, but derives a set of implications from multistakeholder input to better inform detection process decisions and policies, in practice.

#127 Epistemic Reasoning for Machine Ethics with Situation Calculus

Maurice Pagnucco, David Rajaratnam, Raynaldio Limarga, Abhaya Nayak, Yang Song

With the rapid development of autonomous machines such as self-driving vehicles and social robots, there is increasing realisation that machine ethics is important for widespread acceptance of autonomous machines. Our objective is to encode ethical reasoning into autonomous machines following well-defined ethical principles and behavioural norms. We provide an approach to reasoning about actions that incorporates ethical considerations. It builds on Scherl and Levesque’s [29, 30] approach to knowledge in the situation calculus. We show how reasoning about knowledge in a dynamic setting can be used to guide ethical and moral choices, aligned with consequentialist and deontological approaches to ethics. We apply our approach to autonomous driving and social robot scenarios, and provide an implementation framework.

#43 Disparate Impact of Artificial Intelligence Bias in Ridehailing Economy’s Price Discrimination Algorithms

Akshat Pandey, Aylin Caliskan

Ridehailing applications that collect mobility data from individuals to inform smart city planning predict each trip’s fare pricing with automated algorithms that rely on artificial intelligence (AI). This type of AI algorithm, namely a price discrimination algorithm, is widely used in the industry’s black box systems for dynamic individualized pricing. Lacking transparency, studying such AI systems for fairness and disparate impact has not been possible without access to data used in generating the outcomes of price discrimination algorithms. Recently, in an effort to enhance transparency in city planning, the city of Chicago regulation mandated that transportation providers publish anonymized data on ridehailing. As a result, we present the first large-scale measurement of the disparate impact of price discrimination algorithms used by ridehailing applications. The application of random effects models from the meta-analysis literature combines the city-level effects of AI bias on fare pricing from census tract attributes, aggregated from the American Community Survey. An analysis of 100 million ridehailing samples from the city of Chicago indicates a significant disparate impact in fare pricing of neighborhoods due to AI bias learned from ridehailing utilization patterns associated with demographic attributes. Neighborhoods with larger non-white populations, higher poverty levels, younger residents, and high education levels are significantly associated with higher fare prices, with combined effect sizes, measured in Cohen’s d, of -0.32, -0.28, 0.69, and 0.24 for each demographic, respectively. Further, our methods hold promise for identifying and addressing the sources of disparate impact in AI algorithms learning from datasets that contain U.S. geolocations.
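For reference, the abstract reports combined effect sizes as Cohen's d; the standard two-group form of that statistic is sketched below (the paper's random-effects meta-analysis across census tracts is not reproduced, and the fare numbers are made up).

```python
# Standard two-group Cohen's d with a pooled standard deviation.
import numpy as np

def cohens_d(x, y):
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                        / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled_sd

rng = np.random.default_rng(0)
fares_a = rng.normal(12.0, 3.0, 500)   # e.g., fares in one set of census tracts
fares_b = rng.normal(13.0, 3.0, 500)   # fares in another set (synthetic values)
print(cohens_d(fares_a, fares_b))
```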

#138 Understanding the Representation and Representativeness of Age in AI Data Sets

Joon Sung Park, Michael S. Bernstein, Robin N. Brewer, Ece Kamar, Meredith Ringel Morris

A diverse representation of different demographic groups in AI training data sets is important in ensuring that the models will work for a large range of users. To this end, recent efforts in AI fairness and inclusion have advocated for creating AI data sets that are well-balanced across race, gender, socioeconomic status, and disability status. In this paper, we contribute to this line of work by focusing on the representation of age by asking whether older adults are represented proportionally to the population at large in AI data sets. We examine publicly-available information about 92 face data sets to understand how they codify age as a case study to investigate how the subjects’ ages are recorded and whether older generations are represented. We find that older adults are very under-represented; five data sets in the study that explicitly documented the closed age intervals of their subjects included older adults (defined as older than 65 years), while only one included oldest-old adults (defined as older than 85 years). Additionally, we find that only 24 of the data sets include any age-related information in their documentation or metadata, and that there is no consistent method followed across these data sets to collect and record the subjects’ ages. We recognize the unique difficulties in creating representative data sets in terms of age, but raise it as an important dimension that researchers and engineers interested in inclusive AI should consider.

#215 Quantum Fair Machine Learning

Elija Perrier

In this paper, we inaugurate the field of quantum fair machine learning. We undertake a comparative analysis of differences and similarities between classical and quantum fair machine learning algorithms, specifying how the unique features of quantum computation alter measures, metrics and remediation strategies when quantum algorithms are subject to fairness constraints. We present the first results in quantum fair machine learning by demonstrating the use of Grover’s search algorithm to satisfy statistical parity constraints imposed on quantum algorithms. We provide lower bounds on the number of iterations needed to achieve such statistical parity within ε-tolerance. We extend canonical Lipschitz-conditioned individual fairness criteria to the quantum setting using quantum metrics. We examine the consequences for typical measures of fairness in machine learning when quantum information processing and quantum data are involved. Finally, we propose open questions and research programmes for this new field, of interest to researchers in computer science, ethics and quantum computation.

#38 Fair Bayesian Optimization

Valerio Perrone, Michele Donini, Muhammad Bilal Zafar, Robin Schmucker, Krishnaram Kenthapadi, Cedric Archambeau

Fairness and robustness in machine learning are crucial when individuals are subject to automated decisions made by models in high-stake domains. To promote ethical artificial intelligence, fairness metrics that rely on comparing model error rates across subpopulations have been widely investigated for the detection and mitigation of bias. However, fairness measures that rely on comparing the ability to achieve recourse have been relatively unexplored. In this paper, we present a novel formulation for training neural networks that considers the distance of data observations to the decision boundary such that the new objective: (1) reduces the disparity in the average ability of recourse between individuals in each protected group, and (2) increases the average distance of data points to the boundary to promote adversarial robustness. We demonstrate that models trained with this new objective are more fair and adversarially robust neural networks, with similar accuracies, when compared to models without it. We also investigate a trade-off between the recourse-based fairness and robustness objectives. Moreover, we qualitatively motivate and empirically show that reducing recourse disparity across protected groups also improves fairness measures that rely on error rates. To the best of our knowledge, this is the first time that recourse disparity across groups are considered to train fairer neural networks.

#188 We Haven’t Gone Paperless Yet: Why the Printing Press Can Help Us Understand Data and AI

Julian Posada, Nicholas Weller, Wendy H. Wong

How should we understand the social and political effects of the datafication of human life? This paper argues that the effects of data should be understood as a constitutive shift in social and political relations. We explore how datafication, or quantification of human and non-human factors into binary code, affects the identity of individuals and groups. This fundamental shift goes beyond economic and ethical concerns, which has been the focus of other efforts to explore the effects of datafication and AI. We highlight that technologies such as datafication and AI (and previously, the printing press) both disrupted extant power arrangements, leading to decentralization, and triggered a recentralization of power by new actors better adapted to leveraging the new technology. We use the analogy of the printing press to provide a framework for understanding constitutive change. The printing press example gives us more clarity on 1) what can happen when the medium of communication drastically alters how information is communicated and stored; 2) the shift in power from state to private actors; and 3) the tension of simultaneously connecting individuals while driving them towards narrower communities through algorithmic analyses of data.

#151 A Step Toward More Inclusive People Annotations for Fairness

Candice Schumann, Susanna Ricco, Utsav Prabhu, Vittorio Ferrari, Caroline Pantofaru

The Open Images Dataset contains approximately 9 million images and is a widely accepted dataset for computer vision research. As is common practice for large datasets, the annotations are not exhaustive, with bounding boxes and attribute labels for only a subset of the classes in each image. In this paper, we present a new set of annotations on a subset of the Open Images dataset called the MIAP (More Inclusive Annotations for People) subset, containing bounding boxes and attributes for all of the people visible in those images. The attributes and labeling methodology for the MIAP subset were designed to enable research into model fairness. In addition, we analyze the original annotation methodology for the person class and its subclasses, discussing the resulting patterns in order to inform future annotation efforts. By considering both the original and exhaustive annotation sets, researchers can also now study how systematic patterns in training annotations affect modeling.

#20 Fairness in the Eyes of the Data: Certifying Machine-Learning Models

Shahar Segal, Yossi Adi, Carsten Baum, Chaya Ganesh, Benny Pinkas, Joseph Keshet

We present a framework that allows one to certify the fairness degree of a model based on an interactive and privacy-preserving test. The framework verifies any trained model, regardless of its training process and architecture. Thus, it allows us to evaluate any deep learning model on multiple fairness definitions empirically. We tackle two scenarios, where either the test data is privately available only to the tester or is publicly known in advance, even to the model creator. We investigate the soundness of the proposed approach using theoretical analysis and present statistical guarantees for the interactive test. Finally, we provide a cryptographic technique to automate fairness testing and certified inference with only black-box access to the model at hand while hiding the participants’ sensitive data.
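Setting the cryptographic machinery aside, the empirical core is checking a fairness definition on test data with a statistical guarantee; a minimal sketch for demographic parity with a normal-approximation confidence interval follows, as an illustration rather than the paper's interactive, privacy-preserving protocol.

```python
# Sketch: demographic-parity gap with a rough normal-approximation confidence interval.
import numpy as np

def dp_gap_with_ci(y_pred, groups, z=1.96):
    p0, p1 = (y_pred[groups == g].mean() for g in (0, 1))
    n0, n1 = (np.sum(groups == g) for g in (0, 1))
    gap = p0 - p1
    se = np.sqrt(p0 * (1 - p0) / n0 + p1 * (1 - p1) / n1)
    return gap, (gap - z * se, gap + z * se)

rng = np.random.default_rng(0)
groups = rng.integers(0, 2, 4000)
y_pred = (rng.random(4000) < 0.5 + 0.05 * groups).astype(int)   # slight disparity
print(dp_gap_with_ci(y_pred, groups))
```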

#262 Digital Voodoo Dolls

Marija Slavkovik, Jon Askonas, Caroline Pitman, Clemens Stachl

An institution, be it a body of government, commercial enterprise, or a service, cannot interact directly with a person. Instead, a model is created to represent us. We argue the existence of a new high-fidelity type of person model which we call a digital voodoo doll. We conceptualize it and compare its features with existing models of persons. Digital voodoo dolls are distinguished by existing completely beyond the influence and control of the person they represent. We discuss the ethical issues that such a lack of accountability creates and argue how these concerns can be mitigated.

#175 Comparing Equity and Effectiveness of Different Algorithms in an Application for the Room Rental Market

David Solans, Francesco Fabbri, Caterina Calsamiglia, Carlos Castillo, Francesco Bonchi

Machine Learning (ML) techniques have been increasingly adopted by the real estate market in the last few years. Applications include, among many others, predicting the market value of a property or an area, advanced systems for managing marketing and ads campaigns, and recommendation systems based on user preferences. While these techniques can provide important benefits to the business owners and the users of the platforms, algorithmic biases can result in inequalities and loss of opportunities for groups of people who are already disadvantaged in their access to housing. In this work, we present a comprehensive and independent algorithmic evaluation of a recommender system for the real estate market, designed specifically for finding shared apartments in metropolitan areas. We were granted full access to the internals of the platform, including details on algorithms and usage data during a period of 2 years. We analyze the performance of the various algorithms which are deployed for the recommender system and assess their effect across different population groups. Our analysis reveals that introducing a recommender system algorithm facilitates finding an appropriate tenant or a desirable room to rent, but at the same time, it strengthens performance inequalities between groups, further reducing the opportunities for certain minorities to find a rental.

#184 Does Fair Ranking Improve Minority Outcomes? Understanding the Interplay of Human and Algorithmic Biases in Online Hiring

Tom Sühr, Sophie Hilgard, Himabindu Lakkaraju

Ranking algorithms are being widely employed in various online hiring platforms including LinkedIn, TaskRabbit, and Fiverr. Prior research has demonstrated that ranking algorithms employed by these platforms are prone to a variety of undesirable biases, leading to the proposal of fair ranking algorithms (e.g., Det-Greedy) which increase exposure of underrepresented candidates. However, there is little to no work that explores whether fair ranking algorithms actually improve real-world outcomes (e.g., hiring decisions) for underrepresented groups. Furthermore, there is no clear understanding as to how other factors (e.g., job context, inherent biases of the employers) may impact the efficacy of fair ranking in practice. In this work, we analyze various sources of gender biases in online hiring platforms, including the job context and inherent biases of employers, and establish how these factors interact with ranking algorithms to affect hiring decisions. To the best of our knowledge, this work makes the first attempt at studying the interplay between the aforementioned factors in the context of online hiring. We carry out a large-scale user study simulating online hiring scenarios with data from TaskRabbit, a popular online freelancing site. Our results demonstrate that while fair ranking algorithms generally improve the selection rates of underrepresented minorities, their effectiveness relies heavily on the job contexts and candidate profiles.

#111 Governing Algorithmic Systems with Impact Assessments: Six Observations

Elizabeth Anne Watkins, Emanuel Moss, Jacob Metcalf, Ranjit Singh, Madeleine Elish

Algorithmic decision-making and decision-support systems (ADS) are gaining influence over how society distributes resources, administers justice, and provides access to opportunities. Yet collectively we do not adequately study how these systems affect people or document the actual or potential harms resulting from their integration with important social functions. This is a significant challenge for computational justice efforts of measuring and governing AI systems. Impact assessments are often used as instruments to create accountability relationships and grant some measure of agency and voice to communities affected by projects with environmental, financial, and human rights ramifications. Applying these tools—through Algorithmic Impact Assessments (AIA)—is a plausible way to establish accountability relationships for ADSs. At the same time, what an AIA would entail remains under-specified; they raise as many questions as they answer. Choices about the methods, scope, and purpose of AIAs structure the conditions of possibility for AI governance. In this paper, we present our research on the history of impact assessments across diverse domains, through a sociotechnical lens, to present six observations on how they co-constitute accountability. Decisions about what type of effects count as an impact; when impacts are assessed; whose interests are considered; who is invited to participate; who conducts the assessment; how assessments are made publicly available, and what the outputs of the assessment might be; all shape the forms of accountability that AIAs engender. Because AIAs are still an incipient governance strategy, approaching them as social constructions that do not require a single or universal approach offers a chance to produce interventions that emerge from careful deliberation.

#88 Who’s Responsible? Jointly Quantifying the Contribution of the Learning Algorithm and Data

Gal Yona, Amirata Ghorbani, James Zou

A learning algorithm A trained on a dataset D is revealed to have poor performance on some subpopulation at test time. Where should the responsibility for this lie? It can be argued that the data is responsible if, for example, training A on a more representative dataset D’ would have improved the performance. But it can similarly be argued that A itself is at fault, if training a different variant A’ on the same dataset D would have improved performance. As ML becomes widespread and such failure cases more common, these types of questions are proving to be far from hypothetical. With this motivation in mind, in this work we provide a rigorous formulation of the joint credit assignment problem between a learning algorithm A and a dataset D. We propose Extended Shapley as a principled framework for this problem, and experiment empirically with how it can be used to address questions of ML accountability.