
Fri 7, Spotlight (w/coffee)

15:30 – 16:00
15:30
Ethics of Food Recommender Applications

ABSTRACT. The recent unprecedented popularity of food recommender applications has raised several issues related to the ethical, societal and legal implications of relying on these applications. In this paper, in order to assess the relevant ethical issues, we rely on the principles emerging across the AI & Ethics community and define them in a context-specific way. Considering that the popular food recommender systems (F-RS) on the European market (YUKA, Fairtrade, Shopgun, etc.) cannot be regarded as personalised F-RS, we show how the mere lack of this feature shifts the relevance of the focal ethical concerns. We identify the major challenges and propose a scheme for how explicit ethical agendas should be explained. We also argue that a multi-stakeholder approach is indispensable to producing long-term benefits for all stakeholders. Given the argumentative nature of the paper, we limit ourselves to pointing to further research directions that could build on the ethical desiderata defined in this paper from an AI-architectural and, more importantly, from a legal perspective.

15:32
AI and Holistic Review: Informing Human Reading in College Admissions

ABSTRACT. College admissions in the United States are carried out by a human-centered method of evaluation known as holistic review, which typically involves reading original narrative essays submitted by each applicant. The legitimacy and fairness of holistic review, which gives human readers significant discretion over determining each applicant’s fitness for admission, has repeatedly been challenged in courtrooms and the public sphere. Using a unique corpus of 283,676 application essays submitted to a large, selective, state university system between 2015 and 2016, we assess the extent to which applicant demographic characteristics can be inferred from application essays. We find that a relatively interpretable classifier (logistic regression) was able to predict gender and household income with high levels of accuracy. Findings suggest that auditing data might be useful in informing holistic review, and perhaps other evaluative systems, by checking potential bias in human or computational readings.
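
The auditing idea described in this abstract can be illustrated with a minimal sketch. The pipeline below is not the authors' implementation; it simply shows how a bag-of-words logistic regression might be fit to essay text to test whether a demographic attribute is recoverable. The `texts` and `labels` inputs are hypothetical placeholders for the essay corpus and a binary attribute such as gender.

```python
# Minimal sketch of a demographic-inference audit on application essays.
# Hypothetical inputs: `texts` is a list of essay strings, `labels` a binary
# demographic attribute. Not the authors' actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def audit_predictability(texts, labels):
    """Return held-out AUC for predicting a demographic label from essay text."""
    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.2, random_state=0, stratify=labels
    )
    vectorizer = TfidfVectorizer(max_features=20000, ngram_range=(1, 2))
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vectorizer.fit_transform(X_train), y_train)
    scores = clf.predict_proba(vectorizer.transform(X_test))[:, 1]
    return roc_auc_score(y_test, scores)  # near 0.5 means the attribute is not recoverable
```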

15:34
More Than “If Time Allows”: The Role of Ethics in AI Education

ABSTRACT. Even as public pressure mounts for technology companies to consider societal impacts of products, industries and governments in the AI race are demanding technical talent. To meet this demand, universities clamor to add technical artificial intelligence (AI) and machine learning (ML) courses into the computing curriculum. But how are societal and ethical considerations part of this landscape? We explore two pathways for ethics content in AI education: (1) standalone AI ethics courses, and (2) integrating ethics into technical AI courses. For both pathways, we ask: What is being taught? As we train computer scientists who will build and deploy AI tools, how are we training them to consider the consequences of their work? In this exploratory work, we qualitatively analyzed 31 standalone AI ethics classes from 22 U.S. universities and 20 AI/ML technical courses from 12 U.S. universities to understand which ethics-related topics professors include in courses. We identify and categorize topics in AI ethics education, share notable practices, and note omissions. Our analysis will help AI educators identify what topics should be taught and create scaffolding for developing future AI ethics education.

15:36
A Deontic Logic for Programming Rightful Machines

ABSTRACT. A “rightful machine” is an explicitly moral, autonomous machine agent whose behavior conforms to principles of justice and the positive public law of a legitimate state. In this paper, I set out some basic elements of a deontic logic appropriate for capturing conflicting legal obligations for purposes of programming rightful machines. Justice demands that the prescriptive system of enforceable public laws be consistent, yet statutes or case holdings may often describe legal obligations that contradict; moreover, even fundamental constitutional rights may come into conflict. I argue that a deontic logic of the law should not try to work around such conflicts but, instead, identify and expose them so that the rights and duties that generate inconsistencies in public law can be explicitly qualified and the conflicts resolved. I then argue that a credulous, non-monotonic deontic logic can describe inconsistent legal obligations while meeting the normative demand for consistency in the prescriptive system of public law. I propose an implementation of this logic via a modified form of “answer set programming,” which I demonstrate with some simple examples.
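
The paper's proposal is a modified answer set programming encoding, which is not reproduced here. The toy sketch below, in Python rather than ASP, only illustrates the credulous idea the abstract describes: when obligations conflict, each maximal conflict-free subset is kept as a separate extension ("answer set"), and a duty is credulously obligatory if it survives in at least one extension. The example obligations and conflict relation are hypothetical.

```python
# Toy illustration (not the paper's ASP encoding): credulous reasoning over
# conflicting obligations by enumerating maximal conflict-free subsets.
from itertools import combinations

obligations = {"keep_contract", "break_contract_to_prevent_harm"}   # hypothetical duties
conflicts = {frozenset({"keep_contract", "break_contract_to_prevent_harm"})}

def consistent(subset):
    """A subset is consistent if it contains no conflicting pair."""
    return not any(pair <= subset for pair in conflicts)

def maximal_consistent_subsets(obls):
    subsets = [set(c) for r in range(len(obls), -1, -1)
               for c in combinations(obls, r) if consistent(set(c))]
    return [s for s in subsets
            if not any(s < t for t in subsets)]  # keep only the maximal ones

extensions = maximal_consistent_subsets(obligations)
credulously_obligatory = set().union(*extensions) if extensions else set()
print(extensions)                 # two extensions, one per conflicting duty
print(credulously_obligatory)     # each duty holds in some extension
```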

15:38
Why Reliabilism Is not Enough: Epistemic and Moral Justification in Machine Learning

ABSTRACT. Epistemology is the systematic philosophical examination of knowledge and is concerned with the nature of knowledge and how we acquire it (Lewis 1996). Amongst philosophers, there is consensus that for a mental state to count as a knowledge state it must minimally be a justified, true belief. If we presume, for the sake of this paper, that machine learning can be a source of knowledge, then it makes sense to wonder what kind of justification it involves. Prima facie, one might think that machine learning is epistemologically inscrutable (Selbst and Barocas 2018). After all, we don’t usually have access to the black box in which models make decisions. Thus it might appear that machine learning decisions qua knowledge don’t have sufficient justification to count as knowledge. One might think this is because the models don’t appear to have evidence or accessible reasons for their output. We suggest that this underlies the widespread interest in explainable or interpretable AI within the research community as well as the general public. Despite this inscrutability, machine learning is being deployed in human-consequential domains at a rapid pace. How do we reconcile this seeming justificatory black box with the wide application of machine learning? We argue that, in general, people adopt implicit reliabilism regarding machine learning. Reliabilism is an epistemological theory of epistemic justification according to which a belief is warranted if it has been produced by a reliable process or method (Goldman 2012). In this paper, we explore what this means in the ML context. We then suggest that, in certain high-stakes domains with moral consequences, reliabilism does not provide another kind of necessary justification: moral justification.

15:40
“The Global South is everywhere but also always somewhere”: National policy narratives and AI justice

ABSTRACT. There is more attention than ever before on the social implications of AI. In contrast to universalized paradigms of ethics and fairness, there is a move towards critical work that situates AI within the frame of social justice and human rights (“AI justice”). The geographical location of much of this critique in the West could, however, be engendering its own blind spots. AI’s global supply chain (data, labour, computation power, natural resources) today replicates geopolitical inequities, and the continued subordination of Global South countries. This paper draws attention to recent official policy narratives from India and the United Nations Conference on Trade and Development (UNCTAD) aimed at influencing the role and place of these regions in the global political economy of AI. The flaws in these policies do not take away from the urgency of acknowledging colonial histories and the questions they raise of redistributive justice. Without a deliberate effort at initiating that conversation, it is inevitable that mainstream discourse on AI justice will grow parallel to (and potentially undercut) demands emanating from Global South governments and communities.

15:42
Arbiter: A Domain-Specific Language for Ethical Machine Learning

ABSTRACT. The widespread deployment of machine learning models in high-stakes decision making scenarios requires a code of ethics for machine learning practitioners. We identify four of the primary components required for the ethical practice of machine learning: transparency, fairness, accountability, and reproducibility. We introduce Arbiter, a domain-specific programming language for machine learning practitioners that is designed for ethical machine learning. Arbiter provides a notation for recording how machine learning models will be trained, and we show how this notation can encourage the four described components of ethical machine learning.
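
The Arbiter notation itself is defined in the paper and is not reproduced here. The sketch below is not Arbiter syntax; it is only a hypothetical Python stand-in for the same general idea: recording up front how a model will be trained (data source, excluded features, seed, evaluation protocol) so that the record itself supports transparency and reproducibility. All field names and values are placeholders.

```python
# Hypothetical stand-in for a training "declaration" (not Arbiter's actual syntax):
# the training plan is recorded as data before any model is fit, so it can be audited.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TrainingDeclaration:
    dataset: str                                             # where the data comes from
    target: str                                              # what is being predicted
    excluded_features: list = field(default_factory=list)    # e.g., protected attributes
    model_family: str = "logistic_regression"
    random_seed: int = 0                                      # fixed seed for reproducibility
    evaluation: str = "5-fold cross-validation"

decl = TrainingDeclaration(
    dataset="loans_2019.csv",          # hypothetical file name
    target="default",
    excluded_features=["race", "gender"],
)
print(json.dumps(asdict(decl), indent=2))  # auditable record of the training plan
```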

15:44
Should Artificial Intelligence Governance be Centralised? Design Lessons from History

ABSTRACT. Can effective international AI governance remain fragmented, or do we need a centralised international organisation for AI? We draw on the history of other international regimes to identify trade-offs in centralising AI governance. Some of these speak for centralising governance: (1) prevention of forum shopping; (2) policy coordination and political power; (3) efficiency and participation. Others speak for decentralisation: (4) slowness and brittleness; (5) the mutual supportiveness of decentralised approaches; (6) a lower buy-in threshold. Given these lessons, we conclude with two core recommendations. First, the outcome will depend on the details: a well-designed centralised regime would be optimal, but locking in an inadequate structure may be a fate worse than fragmentation. Second, for now, fragmentation will likely persist. This should be closely monitored to see if it is self-organising or simply inadequate.

15:46
Robot Rights? Let’s talk about human welfare instead

ABSTRACT. The ‘robot rights’ debate, and its related question of ‘robot responsibility’, invokes some of the most polarized positions in AI ethics. While some advocate for granting robots rights on a par with human beings, others, in stark opposition, argue that robots are not deserving of rights but are objects that should be our slaves. Grounded in post-Cartesian philosophical foundations, we argue not just to deny robots ‘rights’, but to deny that robots, as artifacts emerging out of and mediating human being, are the kinds of things that could be granted rights in the first place. Once we see robots as mediators of human being, we can understand how the ‘robot rights’ debate is focused on first-world problems, at the expense of urgent ethical concerns such as machine bias and machine-elicited human labour exploitation, both of which impact society’s least privileged individuals. We conclude that, if human being is our starting point and human welfare is the primary concern, the negative impacts emerging from machinic systems, as well as the lack of responsibility taken by the people designing, buying and deploying such machines, remain the only relevant ethical discussion in AI.

15:48
Toward Implementing the ADC Model of Moral Judgment in Autonomous Vehicles

ABSTRACT. Autonomous vehicles (AVs) and the accidents they are involved in attest to the urgent need to consider the ethics of AI. The question dominating the discussion has been whether we want AVs to behave in a ‘selfish’ or utilitarian manner. Rather than considering modeling self-driving cars on a single moral system like utilitarianism, one possible way to approach programming for AI would be to reflect recent work in neuroethics. The Agent-Deed-Consequence (ADC) model provides a promising account while also lending itself well to implementation in AI. The ADC model explains moral judgments by breaking them down into positive or negative intuitive evaluations of the Agent, Deed, and Consequence in any given situation. These intuitive evaluations combine to produce a judgment of moral acceptability. This explains the considerable flexibility and stability of human moral judgment that has yet to be replicated in AI. This paper examines the advantages and disadvantages of implementing the ADC model and how the model could inform future work on ethics of AI in general.
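
As a rough, hypothetical sketch of the combination step described above (not the authors' formalisation), the ADC evaluation can be thought of as three signed intuitive scores that are aggregated into an overall acceptability judgment. The weights and threshold below are placeholders.

```python
# Hypothetical sketch of the ADC combination step (not the paper's formalisation):
# positive/negative evaluations of Agent, Deed and Consequence are aggregated
# into a single moral-acceptability judgment. Weights and threshold are placeholders.
def adc_judgment(agent, deed, consequence,
                 weights=(1.0, 1.0, 1.0), threshold=0.0):
    """Each input is an intuitive evaluation in [-1, 1]; returns (score, verdict)."""
    w_a, w_d, w_c = weights
    score = w_a * agent + w_d * deed + w_c * consequence
    return score, ("acceptable" if score >= threshold else "unacceptable")

# Example: well-intentioned agent, rule-violating deed, good consequence.
print(adc_judgment(agent=0.8, deed=-0.5, consequence=0.6))  # (0.9, 'acceptable')
```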

15:50
Artificial Intelligence and Indigenous Perspectives: Protecting and Empowering Intelligent Human Beings

ABSTRACT. Today the societal influence of Artificial Intelligence (AI) is significantly widespread and continues to raise novel human rights concerns. As ‘control’ is increasingly ceded to AI systems, and potentially Artificial General Intelligence (AGI), humanity may be facing an identity crisis sooner rather than later, whereby the notion of ‘intelligence’ no longer remains solely our own. This paper characterizes the problem in terms of an emerging responsibility gap and loss of control and proposes a relational shift in our attitude towards AI. This shift can potentially be achieved through value alignment by incorporating Indigenous perspectives into AI development. The value of Indigenous perspectives has not been canvassed widely in the literature and becomes clear when considering the existence of well-developed epistemologies adept at accounting for the non-human, a task that defies Western anthropocentrism. Accommodating non-human AI by considering it as part of our network is a step towards building a symbiotic relationship with AI. It is argued that in order to co-exist, as AGI potentially questions our fundamental notions of what it means to have human rights, we find assistance in well-tested Indigenous traditions such as the Hawaiian (kānaka maoli) and Lakota ontologies.

15:52
The Windfall Clause: Distributing the Benefits of AI for the Common Good

ABSTRACT. As the transformative potential of AI has become increasingly salient as a matter of public and political interest, there has been growing discussion about the need to ensure that AI broadly benefits humanity. This in turn has spurred debate on the social responsibilities of large technology companies to serve the interests of society at large. In response, ethical principles and codes of conduct have been proposed to meet the escalating demand for this responsibility to be taken seriously. As yet, however, few institutional innovations have been suggested to translate this responsibility into legal commitments which apply to companies positioned to reap large financial gains from the development and use of AI. This paper offers one potentially attractive tool for addressing such issues: the Windfall Clause, which is an ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits. By this we mean an early commitment that profits that a firm could not earn without achieving fundamental, economically transformative breakthroughs in AI capabilities will be donated to benefit humanity broadly, with particular attention towards mitigating any downsides from deployment of windfall-generating AI.
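
To make the mechanism concrete, here is a small hypothetical computation of such a commitment. The tiers and percentages below are placeholders for illustration only, not the schedule proposed in the paper.

```python
# Hypothetical windfall-donation schedule (placeholder tiers, not the paper's):
# a firm commits ex ante to donate a rising share of profits above set thresholds.
TIERS = [  # (profit threshold in $bn, marginal donation rate above that threshold)
    (10.0, 0.01),
    (100.0, 0.20),
    (1000.0, 0.50),
]

def windfall_donation(profit_bn: float) -> float:
    """Donation owed (in $bn), applying each rate only to profits above its tier."""
    owed = 0.0
    for i, (threshold, rate) in enumerate(TIERS):
        upper = TIERS[i + 1][0] if i + 1 < len(TIERS) else float("inf")
        if profit_bn > threshold:
            owed += rate * (min(profit_bn, upper) - threshold)
    return owed

print(windfall_donation(50.0))    # 0.4  -> 1% of the $40bn above the $10bn tier
print(windfall_donation(500.0))   # 80.9 -> 0.9 from the first tier plus 80.0 from the second
```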

15:54
A Fairness-aware Incentive Scheme for Federated Learning

ABSTRACT. In federated learning (FL), the federation crowdsources data owners to contribute their local data, leveraging privacy-preserving technologies in order to build a federated model. The resulting model can achieve better performance than one trained on local data alone. However, in FL, participants incur costs when contributing their local datasets to FL models. The training and commercialization of the models will take time. Thus, there will be some delays before the federation has enough budget to pay back the participants. This temporary mismatch between contributions and rewards has not been accounted for by existing payoff-sharing schemes. To address this limitation, we propose the Federated Learning Incentivizer (FLI) payoff-sharing scheme in this paper. The scheme dynamically divides a given budget in a context-aware manner among data owners in a federation by jointly maximizing the collective utility while minimizing the inequality among the data owners, in terms of the payoff gained by them and the waiting time for receiving payoffs. Extensive experimental comparisons with five state-of-the-art payoff-sharing schemes show that FLI is the most attractive to high-quality data owners and achieves the highest expected revenue for a data federation.
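
The FLI scheme itself is specified in the paper. The sketch below is only a simplified, hypothetical illustration of the bookkeeping problem it addresses: a limited budget arrives over time, each instalment is split among data owners in proportion to their contributions, and the federation tracks what each owner is still owed so that payoff inequality and waiting time can be monitored. Contribution values and budgets are made up.

```python
# Simplified, hypothetical sketch of paying data owners from a budget that arrives
# over time (not the FLI scheme itself): each instalment is split in proportion to
# contributions, and the amount still owed to each owner is tracked.
def proportional_split(contributions, budget):
    """Split `budget` in proportion to each owner's contribution."""
    total = sum(contributions.values())
    return {owner: budget * c / total for owner, c in contributions.items()}

contributions = {"owner_a": 5.0, "owner_b": 3.0, "owner_c": 2.0}   # hypothetical
entitlements = {k: 10 * v for k, v in contributions.items()}       # total owed to each
paid = {k: 0.0 for k in contributions}

for instalment in [20.0, 30.0, 50.0]:            # budget becomes available over time
    for owner, amount in proportional_split(contributions, instalment).items():
        paid[owner] += amount
    still_owed = {k: entitlements[k] - paid[k] for k in paid}
    print(instalment, still_owed)                # unpaid balances shrink each round
```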