
Poster session 5

Fri 21, 09:00–10:00 PDT

#47 The Grey Hoodie Project: Big Tobacco, Big Tech, and the Threat on Academic Integrity

Mohamed Abdalla, Moustafa Abdalla

As governmental bodies rely on academics’ expert advice to shape policy regarding Artificial Intelligence, it is important that these academics not have conflicts of interests that may cloud or bias their judgement. Our work explores how Big Tech can actively distort the academic landscape to suit its needs. By comparing the well-studied actions of another industry (Big Tobacco) to the current actions of Big Tech we see similar strategies employed by both industries. These strategies enable either industry to sway and influence academic and public discourse. We examine the funding of academic research as a tool used by Big Tech to put forward a socially responsible public image, influence events hosted by and decisions made by funded universities, influence the research questions and plans of individual scientists, and discover receptive academics who can be leveraged. We demonstrate how Big Tech can affect academia from the institutional level down to individual researchers. Thus, we believe that it is vital, particularly for universities and other institutions of higher learning, to discuss the appropriateness and the tradeoffs of accepting funding from Big Tech, and what limitations or conditions should be put in place.

#57 Monitoring AI Services for Misuse

Seyyed Ahmad Javadi, Chris Norval, Richard Cloete, Jatinder Singh

Given the surge in interest in AI, we now see the emergence of Artificial Intelligence as a Service (AIaaS). AIaaS entails service providers offering remote access to ML models and capabilities ‘at arm’s length’, through networked APIs. Such services will grow in popularity, as they enable access to state-of-the-art ML capabilities ‘on demand’, ‘out of the box’, at low cost and without requiring training data or ML expertise. However, there is much public concern regarding AI. AIaaS raises particular considerations, given there is much potential for such services to be used to underpin and drive problematic, inappropriate, undesirable, controversial, or possibly even illegal applications. A key way forward is for service providers to monitor their AI services to identify potential situations of problematic use. Towards this, we elaborate the potential for ‘misuse indicators’ as a mechanism for uncovering patterns of usage behaviour warranting consideration or further investigation. We introduce a taxonomy for describing these indicators and their contextual considerations, and use exemplars to demonstrate the feasibility of analysing AIaaS usage to highlight situations of possible concern. We also seek to draw more attention to AI services and the issues they raise, given AIaaS’ increasing prominence, and the general calls for the more responsible and accountable use of AI.
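
To make the idea concrete, here is a minimal sketch of what one such ‘misuse indicator’ could look like in code: flagging callers whose query volume to an AI service exceeds a threshold within a time window. The indicator, the request-log format, the threshold, and the window are illustrative assumptions on our part; the paper proposes a taxonomy of indicators rather than this specific rule.

    from collections import defaultdict

    def volume_indicator(request_log, window_seconds=3600, threshold=1000):
        """request_log: iterable of (caller_id, unix_timestamp) pairs (hypothetical format).
        Flags callers exceeding `threshold` requests within any `window_seconds` window."""
        by_caller = defaultdict(list)
        for caller, ts in request_log:
            by_caller[caller].append(ts)
        flagged = set()
        for caller, times in by_caller.items():
            times.sort()
            start = 0
            for end, t in enumerate(times):
                # shrink the window from the left until it spans at most window_seconds
                while t - times[start] > window_seconds:
                    start += 1
                if end - start + 1 > threshold:
                    flagged.add(caller)
                    break
        return flagged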

#34 Measuring Model Biases in the Absence of Ground Truth

Osman Aka, Ken Burke, Alex Bauerle, Christina Greer, Margaret Mitchell

#137 Ethical Implementation of Artificial Intelligence to Select Embryos in In Vitro Fertilization

Michael Anis Mihdi Afnan, Cynthia Rudin, Vincent Conitzer, Julian Savulescu, Abhishek Mishra, Yanhe Liu, Masoud Afnan

AI has the potential to revolutionize many areas of healthcare. Radiology, dermatology, and ophthalmology are some of the areas most likely to be impacted in the near future, and they have received significant attention from the broader research community. But AI techniques are now also starting to be used in in vitro fertilization (IVF), in particular for selecting which embryos to transfer to the woman. The contribution of AI to IVF is potentially significant, but it must be made carefully and transparently, as the ethical issues are significant, in part because this field involves creating new people. We first give a brief introduction to IVF and review the use of AI for embryo selection. We discuss concerns with the interpretation of the reported results from scientific and practical perspectives. We then consider the broader ethical issues involved. We discuss in detail the problems that result from the use of black-box methods in this context and advocate strongly for the use of interpretable models. Importantly, there have been no published trials of clinical effectiveness, a problem in both the AI and IVF communities, and we therefore argue that clinical implementation at this point would be premature. Finally, we discuss ways for the broader AI community to become involved to ensure scientifically sound and ethically responsible development of AI in IVF.

#93 Beyond Reasonable Doubt: Improving Fairness in Budget-Constrained Decision-Making Using Confidence Thresholds

Michiel Bakker, Duy Patrick Tu, Krishna Gummadi, Alex Pentland, Kush Varshney, Adrian Weller

Prior work on fairness in machine learning has focused on settings where all the information needed about each individual is readily available. However, in many applications, further information may be acquired at a cost. For example, when assessing a customer’s creditworthiness, a bank initially has access to a limited set of information but progressively improves the assessment by acquiring additional information before making a final decision. In such settings, we posit that a fair decision maker may want to ensure that decisions for all individuals are made with a similar expected error rate, even if the features acquired for the individuals are different. We show that a set of carefully chosen confidence thresholds can not only effectively redistribute an information budget according to each individual’s needs, but also serve to address individual and group fairness concerns simultaneously. Finally, using two public datasets, we confirm the effectiveness of our methods and investigate the limitations.
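
As a rough illustration of the mechanism (a sketch under our own assumptions, not the authors’ implementation), features can be acquired one at a time until the classifier’s confidence clears a chosen threshold, so that the information budget flows to the individuals whose predictions remain most uncertain; the predict_proba and acquire_feature callables below are hypothetical.

    def classify_under_budget(predict_proba, acquire_feature, n_features, threshold=0.9):
        """predict_proba(features) -> probability of the positive class given the
        features gathered so far; acquire_feature(i) -> value of feature i.
        Both are caller-supplied (hypothetical) callables."""
        features = {}
        p = predict_proba(features)              # prediction before acquiring anything
        for i in range(n_features):
            if max(p, 1 - p) >= threshold:       # confident enough: stop spending budget
                break
            features[i] = acquire_feature(i)     # pay the cost of one more feature
            p = predict_proba(features)
        return p >= 0.5, len(features)           # decision and budget actually spent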

#63 Person, Human, Neither: The Dehumanization Potential of Automated Image Tagging

Pınar Barlas, Kyriakos Kyriakou, Styliani Kleanthous, Jahna Otterbacher

Following the literature on dehumanization via technology, we audit six proprietary image tagging algorithms (ITAs) for their potential to perpetuate dehumanization. We examine the ITAs’ outputs on a controlled dataset of images depicting a diverse group of people for tags that indicate the presence of a human in the image. Through an analysis of the (mis)use of these tags, we find that there are some individuals whose ‘humanness’ is not recognized by an ITA, and that these individuals are often from marginalized social groups. Finally, we compare these findings with the use of the ‘face’ tag, which can be used for surveillance, revealing that people’s faces are often recognized by an ITA even when their ‘humanness’ is not. Overall, we highlight the subtle ways in which ITAs may inflict widespread, disparate harm, and emphasize the importance of considering the social context of the resulting application.
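
A toy version of the tag check at the heart of the audit might look as follows; the tag vocabulary and tagger output format are hypothetical, and real ITAs return far richer responses.

    HUMAN_TAGS = {"person", "human", "people", "man", "woman", "child"}

    def audit_tags(tags):
        """tags: list of strings returned by an image tagging algorithm for one image."""
        tags = {t.lower() for t in tags}
        has_human = bool(tags & HUMAN_TAGS)
        face_without_human = "face" in tags and not has_human
        return {"human_recognized": has_human, "face_without_human": face_without_human}

    print(audit_tags(["face", "smile", "outdoor"]))
    # {'human_recognized': False, 'face_without_human': True}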

#214 Designing Disaggregated Evaluations of AI Systems: Choices, Considerations, and Tradeoffs

Solon Barocas, Anhong Guo, Ece Kamar, Jacquelyn Krones, Meredith Ringel Morris, Jennifer Wortman Vaughan, Duncan Wadsworth, Hanna Wallach

Disaggregated evaluations of AI systems, in which system performance is assessed and reported separately for different groups of people, are conceptually simple. However, their design involves a variety of choices. Some of these choices influence the results that will be obtained, and thus the conclusions that can be drawn; others influence the impacts—both beneficial and harmful—that a disaggregated evaluation will have on people, including the people whose data is used to conduct the evaluation. We argue that a deeper understanding of these choices will enable researchers and practitioners to design careful and conclusive disaggregated evaluations. We also argue that better documentation of these choices, along with the underlying considerations and tradeoffs that have been made, will help others when interpreting an evaluation’s results and conclusions.
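
A minimal sketch of the core computation, assuming a simple accuracy metric and pre-assigned group labels (both illustrative choices, and only one of the many design decisions the paper discusses):

    from collections import defaultdict

    def disaggregated_accuracy(y_true, y_pred, groups):
        correct, total = defaultdict(int), defaultdict(int)
        for yt, yp, g in zip(y_true, y_pred, groups):
            total[g] += 1
            correct[g] += int(yt == yp)
        return {g: correct[g] / total[g] for g in total}

    print(disaggregated_accuracy([1, 0, 1, 1], [1, 0, 0, 1], ["A", "A", "B", "B"]))
    # {'A': 1.0, 'B': 0.5}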

#122 Automating Procedurally Fair Feature Selection in Machine Learning

Clara Belitz, Lan Jiang, Nigel Bosch

In recent years, machine learning has become more common in everyday applications. Consequently, numerous studies have explored issues of unfairness against specific groups or individuals in the context of these applications. Much of the previous work on unfairness in machine learning has focused on the fairness of outcomes rather than process. We propose a feature selection method inspired by fair process (procedural fairness) in addition to fair outcome. Specifically, we introduce the notion of unfairness weight, which indicates how heavily to weight unfairness versus accuracy when measuring the marginal benefit of adding a new feature to a model. Our goal is to maintain accuracy while reducing unfairness, as defined by six common statistical definitions. We show that this approach demonstrably decreases unfairness as the unfairness weight is increased, for most combinations of metrics and classifiers used. A small subset of all the combinations of datasets (4), unfairness metrics (6), and classifiers (3), however, demonstrated relatively low unfairness initially. For these specific combinations, neither unfairness nor accuracy was affected as the unfairness weight changed, demonstrating that this method does not reduce accuracy unless there is also an equivalent decrease in unfairness. We also show that this approach selects unfair features and sensitive features for the model less frequently as the unfairness weight increases. As such, this procedure is an effective approach to constructing classifiers that both reduce unfairness and are less likely to include unfair features in the modeling process.
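
The greedy selection loop below is a hedged sketch of this idea: a candidate feature’s marginal benefit is its accuracy gain minus the unfairness it adds, scaled by the unfairness weight w. The evaluate_accuracy and evaluate_unfairness callables are hypothetical stand-ins, not the authors’ code.

    def select_features(candidates, evaluate_accuracy, evaluate_unfairness, w=1.0):
        """Greedy forward selection. evaluate_accuracy / evaluate_unfairness take a
        list of features and return cross-validated scores (hypothetical callables)."""
        selected = []
        best_acc, best_unf = evaluate_accuracy(selected), evaluate_unfairness(selected)
        remaining = list(candidates)
        while remaining:
            # marginal benefit: accuracy gained minus w times unfairness added
            benefits = {f: (evaluate_accuracy(selected + [f]) - best_acc)
                           - w * (evaluate_unfairness(selected + [f]) - best_unf)
                        for f in remaining}
            best_f = max(benefits, key=benefits.get)
            if benefits[best_f] <= 0:            # no candidate improves the trade-off
                break
            selected.append(best_f)
            remaining.remove(best_f)
            best_acc, best_unf = evaluate_accuracy(selected), evaluate_unfairness(selected)
        return selected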

#143 Rawlsian Fair Adaptation of Deep Learning Classifiers

Chiranjib Bhattacharyya, Amit Deshpande, Pooja Gupta, Kulin Shah

Group-fairness in classification aims for equality of a predictive utility across different sensitive sub-populations, e.g., race or gender. Equality or near-equality constraints in group-fairness often worsen not only the aggregate utility but also the utility for the least advantaged sub-population. In this paper, we apply the principles of Pareto-efficiency and least difference, taking accuracy as an illustrative utility, and arrive at the Rawls classifier that minimizes the error rate on the worst-off sensitive sub-population. Our mathematical characterization shows that the Rawls classifier uniformly applies a threshold to an ideal score of features, in the spirit of fair equality of opportunity. In practice, such a score or a feature representation is often computed by a black-box model that has been useful but unfair. Our second contribution is practical Rawlsian fair adaptation of any given black-box deep learning model, without changing the score or feature representation it computes. Given any score function or feature representation and only its second-order statistics on the sensitive sub-populations, we seek a threshold classifier on the given score, or a linear threshold classifier on the given feature representation, that achieves the Rawls error rate restricted to this hypothesis class. Our technical contribution is to formulate the above problems using ambiguous chance constraints, and to provide efficient algorithms for Rawlsian fair adaptation, along with provable upper bounds on the Rawls error rate. Our empirical results show significant improvement over state-of-the-art group-fair algorithms, even without retraining for fairness.
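
For intuition, the brute-force sketch below computes a Rawls-style threshold on a given score by minimizing the worst group’s error rate over candidate thresholds; it illustrates the objective only and omits the paper’s chance-constrained formulation and guarantees.

    import numpy as np

    def rawls_threshold(scores, labels, groups):
        """Return the threshold on `scores` minimizing the maximum per-group error rate."""
        scores, labels, groups = map(np.asarray, (scores, labels, groups))
        best_t, best_worst_err = None, np.inf
        for t in np.unique(scores):
            preds = (scores >= t).astype(int)
            worst_err = max(np.mean(preds[groups == g] != labels[groups == g])
                            for g in np.unique(groups))
            if worst_err < best_worst_err:       # keep the threshold with the best worst case
                best_t, best_worst_err = t, worst_err
        return best_t, best_worst_err            # best_worst_err approximates the Rawls error rate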

#69 Algorithmic Audit of Italian Car Insurance: Evidence of Unfairness in Access and Pricing

Alessandro Fabris, Alan Mishler, Stefano Gottardi, Mattia Carletti, Matteo Daicampi, Gian Antonio Susto, Gianmaria Silvello

We conduct an audit of pricing algorithms employed by companies in the Italian car insurance industry, primarily by gathering quotes through a popular comparison website. While acknowledging the complexity of the industry, we find evidence of several problematic practices. We show that birthplace and gender have a direct and sizeable impact on the prices quoted to drivers, despite national and international regulations against their use. Birthplace, in particular, is used quite frequently to the disadvantage of foreign-born drivers and drivers born in certain Italian cities. In extreme cases, a driver born in Laos may be charged 1,000€ more than a driver born in Milan, all else being equal. For a subset of our sample, we collect quotes directly on a company website, where the direct influence of gender and birthplace is confirmed. Finally, we find that drivers with riskier profiles tend to see fewer quotes in the aggregator result pages, substantiating concerns of differential treatment raised in the past by Italian insurance regulators.

#249 Computer Vision and Conflicting Values: Describing People with Automated Alt Text

Margot Hanley, Solon Barocas, Karen Levy, Shiri Azenkot, Helen Nissenbaum

Scholars have recently drawn attention to a range of controversial issues posed by the use of computer vision for automatically generating descriptions of people in images. Despite these concerns, automated image description has become an important tool to ensure equitable access to information for blind and low vision people. In this paper, we investigate the ethical dilemmas faced by companies that have adopted the use of computer vision for producing alt text: textual descriptions of images for blind and low vision people. We use Facebook’s automatic alt text tool as our primary case study. First, we analyze the policies that Facebook has adopted with respect to identity categories, such as race, gender, age, etc., and the company’s decisions about whether to present these terms in alt text. We then describe an alternative—and manual—approach practiced in the museum community, focusing on how museums determine what to include in alt text descriptions of cultural artifacts. We compare these policies, using notable points of contrast to develop an analytic framework that characterizes the particular apprehensions behind these policy choices. We conclude by considering two strategies that seem to sidestep some of these concerns, finding that there are no easy ways to avoid the normative dilemmas posed by the use of computer vision to automate alt text.

#104 The Earth Is Flat and the Sun Is Not a Star: The Susceptibility of GPT-2 to Universal Adversarial Triggers

Hunter Heidenreich, Jake Williams

This work considers universal adversarial triggers, a method of adversarially disrupting natural language models, and asks whether such triggers can be used to affect both the topic and the stance of conditional text generation models. Considering four “controversial” topics, this work demonstrates success at identifying triggers that cause the GPT-2 model to produce text about targeted topics as well as influence the stance the text takes towards the topic. We show that, while the more fringe topics are more challenging to identify triggers for, they do appear to more effectively discriminate aspects like stance. We view this both as an indication of the dangerous potential for controllability and, perhaps, a reflection of the nature of the disconnect between conflicting views on these topics, something that future work could use to question the nature of filter bubbles and whether they are reflected within models trained on internet content. In demonstrating the feasibility and ease of such an attack, this work seeks to raise awareness that neural language models are susceptible to this influence, even if the model is already deployed and adversaries lack internal model access, and advocates immediate safeguarding against this type of adversarial attack in order to prevent potential harm to human users.

#228 Can We Obtain Fairness for Free?

Rashidul Islam, Shimei Pan, James Foulds

There is growing awareness that AI and machine learning systems can in some cases learn to behave in unfair and discriminatory ways with harmful consequences. However, despite an enormous amount of research, techniques for ensuring AI fairness have yet to see widespread deployment in real systems. One of the main barriers is the conventional wisdom that fairness brings a cost in predictive performance metrics such as accuracy which could affect an organization’s bottom line. In this paper we take a closer look at this concern. Clearly, fairness/performance trade-offs exist, but are they inevitable? In contrast to the conventional wisdom, we find that it is frequently possible, indeed straightforward, to improve on a trained model’s fairness without sacrificing predictive performance. We systematically study the behavior of fair learning algorithms on a range of benchmark datasets, showing that it is possible to improve fairness to some degree with no loss (or even an improvement) in predictive performance via a sensible hyper-parameter selection strategy. Our results reveal a pathway toward increasing the deployment of fair AI methods, with potentially substantial positive real-world impacts.
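
In practice, the abstract’s suggestion amounts to a selection rule of roughly the following form (our paraphrase, not the authors’ exact procedure): among validated hyper-parameter configurations whose accuracy is within tolerance of the best, prefer the fairest one.

    def pick_fair_config(results, accuracy_tolerance=0.0):
        """results: list of dicts like {"params": ..., "accuracy": float, "fairness": float},
        where higher fairness is better (hypothetical structure for illustration)."""
        best_acc = max(r["accuracy"] for r in results)
        admissible = [r for r in results if r["accuracy"] >= best_acc - accuracy_tolerance]
        return max(admissible, key=lambda r: r["fairness"])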

#254 Towards Equity and Algorithmic Fairness in Student Grade Prediction

Weijie Jiang, Zachary Pardos

Equity of educational outcome and fairness of AI with respect to race have been topics of increasing importance in education. In this work, we address both with empirical evaluations of grade prediction in higher education, an important task to improve curriculum design, plan interventions for academic support, and offer course guidance to students. With fairness as the aim, we trial several strategies for both label and instance balancing to attempt to minimize differences in algorithm performance with respect to race. We find that an adversarial learning approach, combined with grade label balancing, achieved by far the fairest results. With equity of educational outcome as the aim, we trial strategies for boosting predictive performance on historically underserved groups and find success in sampling those groups in inverse proportion to their historic outcomes. With AI-infused technology supports increasingly prevalent on campuses, our methodologies fill a need for frameworks to consider performance trade-offs with respect to sensitive student attributes and allow institutions to instrument their AI resources in ways that are attentive to equity and fairness.
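
The inverse-proportion sampling idea can be sketched as follows; the grade scale and normalization are illustrative assumptions, and the paper’s exact scheme may differ.

    def inverse_outcome_weights(historic_mean_grade_by_group):
        """Sampling weights proportional to 1 / historic mean grade, normalized to sum to 1."""
        inv = {g: 1.0 / m for g, m in historic_mean_grade_by_group.items()}
        total = sum(inv.values())
        return {g: v / total for g, v in inv.items()}

    print(inverse_outcome_weights({"group_a": 3.5, "group_b": 2.5}))
    # group_b, with lower historic grades, receives the larger sampling weight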

#246 AI and Shared Prosperity

Katya Klinova, Anton Korinek

Future advances in AI that automate away human labor may have stark implications for labor markets and inequality. This paper proposes a framework to analyze the effects of specific types of AI systems on the labor market, based on how much labor demand they will create versus displace, while taking into account that productivity gains also make society wealthier and thereby contribute to additional labor demand. This analysis enables ethically-minded companies creating or deploying AI systems as well as researchers and policymakers to take into account the effects of their actions on labor markets and inequality, and therefore to steer progress in AI in a direction that advances shared prosperity and an inclusive economic future for all of humanity.

#171 Ethical Data Curation for AI: An Approach Based on Feminist Epistemology and Critical Theories of Race

Susan Leavy, Eugenia Siapera, Barry O’Sullivan

The potential for bias embedded in data to lead to the perpetuation of social injustice through Artificial Intelligence (AI) necessitates an urgent reform of data curation practices for AI systems, especially those based on machine learning. Without appropriate ethical and regulatory frameworks there is a risk that decades of advances in human rights and civil liberties may be undermined. This paper proposes an approach to data curation for AI, grounded in feminist epistemology and informed by critical theories of race and feminist principles. The objective of this approach is to support critical evaluation of the social dynamics of power embedded in data for AI systems. We propose a set of fundamental guiding principles for ethical data curation that address the social construction of knowledge, call for inclusion of subjugated and new forms of knowledge, support critical evaluation of theoretical concepts within data and recognise the reflexive nature of knowledge. In developing this ethical framework for data curation, we aim to contribute to a virtue ethics for AI and ensure protection of fundamental and human rights.

#79 Risk Identification Questionnaire for Unintended Bias in Machine Learning Development Lifecycle

Michelle Seng Ah Lee, Jatinder Singh

Unintended biases in machine learning (ML) models have the potential to introduce undue discrimination and exacerbate social inequalities. The research community has proposed various technical and qualitative methods intended to assist practitioners in assessing these biases. While frameworks for identifying the risks of harm due to unintended biases have been proposed, they have not yet been operationalised into practical tools to assist industry practitioners. In this paper, we link prior work on bias assessment methods to phases of a standard organisational risk management process (RMP), noting a gap in measures for helping practitioners identify bias-related risks. Targeting this gap, we introduce a bias identification methodology and questionnaire, illustrating its application through a real-world, practitioner-led use case. We validate the need and usefulness of the questionnaire through a survey of industry practitioners, which provides insights into their practical requirements and preferences. Our results indicate that such a questionnaire is helpful for proactively uncovering unexpected bias concerns, particularly where it is easy to integrate into existing processes, and facilitates communication with non-technical stakeholders. Ultimately, the effective end-to-end management of ML risks requires a more targeted identification of potential harm and its sources, so that appropriate mitigation strategies can be formulated. Towards this, our questionnaire provides a practical means to assist practitioners in identifying bias-related risks.

#275 Fair Equality of Chances: Fairness for Statistical Prediction-Based Decision-Making

Michele Loi, Anders Herlitz, Heidari Hoda

This paper presents a fairness principle that can be used to evaluate decision-making based on predictions. We propose that a decision rule for decision-making based on predictions is unfair when the individuals directly subjected to the implications of the decision do not enjoy fair equality of chances. We define fair equality of chances as obtaining if and only if the individuals who are equal with respect to the features that justify inequalities in outcomes have the same statistical prospects of being benefited or harmed, irrespective of their morally irrelevant traits. The paper characterizes – in a formal way – the way in which luck can be allowed to impact outcomes without the process being unfair. This fairness principle can be used to evaluate decision-making based on predictions, a kind of decision-making that is becoming increasingly important to analyze in light of the growing prevalence of algorithmic decision-making. It can be used to evaluate decision-making rules based on different normative theories, and is compatible with a broad range of normative views according to which inequalities due to brute luck can be fair.
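
One way to formalize the principle (our reading, in notation the paper does not necessarily use) is as a conditional independence requirement: letting D denote the features that justify inequalities in outcomes, A the morally irrelevant traits, and B the event of being benefited (or harmed) by the decision,

    \[
      \Pr\bigl(B = b \mid D = d,\ A = a\bigr) \;=\; \Pr\bigl(B = b \mid D = d,\ A = a'\bigr)
      \qquad \text{for all } b,\ d,\ a,\ a',
    \]

that is, conditional on the justifiers D, an individual’s statistical prospects B do not depend on their morally irrelevant traits A.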

#280 Towards Accountability in the Use of Artificial Intelligence for Public Administrations

Michele Loi, Matthias Spielkamp

We argue that the phenomena of distributed responsibility, induced acceptance, and acceptance through ignorance constitute instances of imperfect delegation when tasks are delegated to computationally-driven systems. Imperfect delegation challenges human accountability. We hold that both direct public accountability via public transparency and indirect public accountability via transparency to auditors in public organizations can be both instrumentally ethically valuable and required as a matter of deontology from the principle of democratic self-government. We analyze the regulatory content of 16 guideline documents about the use of AI in the public sector, by mapping their requirements to those of our philosophical account of accountability. We conclude that while some guidelines refer to processes that amount to auditing, the debate would benefit from more clarity about the nature of auditors’ entitlement and the goals of auditing, in order to develop ethically meaningful standards against which different forms of auditing can be evaluated and compared.

#177 How Do the Score Distributions of Subpopulations Influence Fairness Notions?

Carmen Mazijn, Jan Danckaert, Vincent Ginis

Automated decisions based on trained algorithms influence human life in an increasingly far-reaching way. In recent years, it has become clear that these decisions are often accompanied by bias and unfair treatment of different subpopulations. Meanwhile, several notions of fairness circulate in the scientific literature, with trade-offs between profit and fairness and between fairness metrics among themselves. Based on both analytical calculations and numerical simulations, we show in this study that some profit-fairness trade-offs and fairness-fairness trade-offs depend substantially on the underlying score distributions given to subpopulations and we present two complementary perspectives to visualize this influence. We further show that higher symmetry in scores of subpopulations can significantly reduce the trade-offs between fairness notions within a given acceptable strictness, even when sacrificing expressiveness. Our exploratory study may help to understand how to overcome the strict mathematical statements about the statistical incompatibility of certain fairness notions.
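
The following small simulation is in the spirit of the study (with arbitrary distributions and threshold chosen purely for illustration): it draws scores for two subpopulations, applies a single decision threshold, and compares the resulting demographic-parity and equal-opportunity gaps.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_group(mean_shift, n=100_000, base_rate=0.5):
        y = rng.random(n) < base_rate                      # true outcomes
        scores = rng.normal(mean_shift + y.astype(float))  # scores shifted up for positives
        return scores, y

    def rates(scores, y, threshold):
        pred = scores >= threshold
        return pred.mean(), pred[y].mean()                 # selection rate, true positive rate

    (sA, yA), (sB, yB) = simulate_group(0.0), simulate_group(0.5)
    selA, tprA = rates(sA, yA, threshold=1.0)
    selB, tprB = rates(sB, yB, threshold=1.0)
    print(f"demographic parity gap: {abs(selA - selB):.3f}")
    print(f"equal opportunity gap:  {abs(tprA - tprB):.3f}")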

#115 Measuring Lay Reactions to Personal Data Markets

Aileen Nielsen

The recording, aggregation, and exchange of personal data is necessary to the development of socially-relevant machine learning applications. However, anecdotal and survey evidence show that ordinary people feel discontent and even anger regarding data collection practices that are currently typical and legal. This suggests that personal data markets in their current form do not adhere to the norms applied by ordinary people. The present study experimentally probes whether market transactions in a typical online scenario are accepted when evaluated by lay people. The results show that a high percentage of study participants refused to participate in a data pricing exercise, even in a commercial context where market rules would typically be expected to apply. For those participants who did price the data, the median price was an order of magnitude higher than the market price. These results call into question the notice and consent market paradigm that is used by technology firms and government regulators when evaluating data flows. The results also point to a conceptual mismatch between cultural and legal expectations regarding the use of personal data.

#188 We Haven’t Gone Paperless Yet: Why the Printing Press Can Help Us Understand Data and AI

Julian Posada, Nicholas Weller, Wendy H. Wong

How should we understand the social and political effects of the datafication of human life? This paper argues that the effects of data should be understood as a constitutive shift in social and political relations. We explore how datafication, or quantification of human and non-human factors into binary code, affects the identity of individuals and groups. This fundamental shift goes beyond economic and ethical concerns, which have been the focus of other efforts to explore the effects of datafication and AI. We highlight that technologies such as datafication and AI (and previously, the printing press) both disrupted extant power arrangements, leading to decentralization, and triggered a recentralization of power by new actors better adapted to leveraging the new technology. We use the analogy of the printing press to provide a framework for understanding constitutive change. The printing press example gives us more clarity on 1) what can happen when the medium of communication drastically alters how information is communicated and stored; 2) the shift in power from state to private actors; and 3) the tension of simultaneously connecting individuals while driving them towards narrower communities through algorithmic analyses of data.

#159 The Theory, Practice, and Ethical Challenges of Designing a Diversity-Aware Platform for Social Relations

Laura Schelenz, Ivano Bison, Matteo Busso, Amalia de Götzen, Daniel Gatica-Perez, Fausto Giunchiglia, Lakmal Meegahapola, Salvador Ruiz-Correa

Diversity-aware platform design is a paradigm that responds to the ethical challenges of existing social media platforms. Available platforms have been criticized for minimizing users’ autonomy, marginalizing minorities, and exploiting users’ data for profit maximization. This paper presents a design solution that centers the well-being of users. It presents the theory and practice of designing a diversity-aware platform for social relations. In this approach, the diversity of users is leveraged in a way that allows like-minded individuals to pursue similar interests or diverse individuals to complement each other in a complex activity. The end users of the envisioned platform are students, who participate in the design process. Diversity-aware platform design involves numerous steps, of which two are highlighted in this paper: 1) defining a framework and operationalizing the "diversity" of students, 2) collecting "diversity" data to build diversity-aware algorithms. The paper further reflects on the ethical challenges encountered during the design of a diversity-aware platform.

#20 Fairness in the Eyes of the Data: Certifying Machine-Learning Models

Shahar Segal, Yossi Adi, Carsten Baum, Chaya Ganesh, Benny Pinkas, Joseph Keshet

We present a framework that allows one to certify the fairness degree of a model based on an interactive and privacy-preserving test. The framework verifies any trained model, regardless of its training process and architecture. Thus, it allows us to evaluate any deep learning model on multiple fairness definitions empirically. We tackle two scenarios, where either the test data is privately available only to the tester or is publicly known in advance, even to the model creator. We investigate the soundness of the proposed approach using theoretical analysis and present statistical guarantees for the interactive test. Finally, we provide a cryptographic technique to automate fairness testing and certified inference with only black-box access to the model at hand while hiding the participants’ sensitive data.
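
Stripped of the cryptographic and privacy-preserving machinery, the black-box check underlying such a certificate can be sketched as follows; the demographic-parity criterion, tolerance, and interface are illustrative assumptions.

    def certify_demographic_parity(predict, test_inputs, groups, tolerance=0.05):
        """predict: black-box callable returning 0/1 decisions; groups: sensitive
        attribute value per test input (hypothetical interface)."""
        decisions = {}
        for x, g in zip(test_inputs, groups):
            decisions.setdefault(g, []).append(predict(x))
        selection = {g: sum(v) / len(v) for g, v in decisions.items()}
        gap = max(selection.values()) - min(selection.values())
        return gap <= tolerance, gap             # certificate passes iff the gap is small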

#262 Digital Voodoo Dolls

Marija Slavkovik, Jon Askonas, Caroline Pitman, Clemens Stachl

An institution, be it a body of government, commercial enterprise, or a service, cannot interact directly with a person. Instead, a model is created to represent us. We argue the existence of a new high-fidelity type of person model which we call a digital voodoo doll. We conceptualize it and compare its features with existing models of persons. Digital voodoo dolls are distinguished by existing completely beyond the influence and control of the person they represent. We discuss the ethical issues that such a lack of accountability creates and argue how these concerns can be mitigated.

#240 Skilled and Mobile: Survey Evidence of AI Researchers’ Immigration Preferences

Remco Zwetsloot, Baobao Zhang, Noemi Dreksler, Lauren Kahn, Markus Anderljung, Allan Dafoe, Michael C. Horowitz

Countries, companies, and universities are increasingly competing over top-tier artificial intelligence (AI) researchers. Where are these researchers likely to immigrate and what affects their immigration decisions? We conducted a survey (n = 524) of the immigration preferences and motivations of researchers that had papers accepted at one of two prestigious AI conferences: the Conference on Neural Information Processing Systems (NeurIPS) and the International Conference on Machine Learning (ICML). We find that the U.S. is the most popular destination for AI researchers, followed by the U.K., Canada, Switzerland, and France. A country’s professional opportunities stood out as the most common factor that influences immigration decisions of AI researchers, followed by lifestyle and culture, the political climate, and personal relations. The destination country’s immigration policies were important to just under half of the researchers surveyed, while around a quarter noted current immigration difficulties to be a deciding factor. Visa and immigration difficulties were perceived to be a particular impediment to conducting AI research in the U.S., the U.K., and Canada. Implications of the findings for the future of AI talent policies and governance are discussed.