Paper session 4: Ethics on the surface

14:45
When Trusted Black Boxes Don’t Agree: Incentivizing Iterative Improvement and Accountability in Critical Software Systems

ABSTRACT. Software is increasingly used to direct and manage critical aspects of our lives, from how we get our news, to how we find a spouse, to how we navigate the streets of our cities. Beyond personal decisions, software plays a key role in regulated areas like housing, hiring and credit, and in major public arenas like criminal justice and elections. Anyone who develops software knows how easily unintended defects arise. Bugs enter systems at design time, during implementation and during deployment. Preventing, finding and fixing these flaws is a key focus of industrial software development efforts as well as academic research in software engineering. In this paper, we discuss flaws in the larger socio-technical decision-making processes in which critical black-box software systems are approved, chosen, deployed and trusted. We use criminal justice software, specifically probabilistic genotyping (PG) software, as a concrete example. We describe how PG software systems designed to do the same job produce different results, and discuss the impact of these differences on how the results are presented in court. We propose concrete changes to the socio-technical decision-making processes surrounding the use of PG software that could incentivize debugging and improvements in the accuracy, fairness and reliability of these systems.
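To make the abstract's central observation concrete, here is a deliberately simplified, single-locus likelihood-ratio sketch. It is not the authors' method and not any real PG product (real systems use far more elaborate continuous models); the allele frequencies and drop-out rates are hypothetical. It only shows how two analyses of the same evidence can diverge purely because they assume different drop-out rates:

    # Toy single-locus likelihood-ratio calculation, illustrating how two
    # "black box" analyses of the same evidence can disagree when they make
    # different modeling assumptions (here, the allele drop-out rate).
    # A simplified sketch with hypothetical numbers, not any real PG system.

    def likelihood_ratio(p_alleles, suspect, evidence_allele, dropout):
        """LR for evidence showing only `evidence_allele` at one locus.

        p_alleles: dict of population allele frequencies (assumed values)
        suspect:   the suspect's two alleles at this locus
        dropout:   probability that a present allele goes undetected
        """
        a = evidence_allele
        # Hp: the suspect contributed the DNA.
        if suspect[0] == suspect[1] == a:
            p_hp = 1 - dropout ** 2          # homozygote: at least one copy seen
        elif a in suspect:
            p_hp = (1 - dropout) * dropout   # heterozygote: one seen, one dropped
        else:
            p_hp = 0.0
        # Hd: an unknown random person contributed. Sum over genotypes that
        # could produce exactly {a} under the same drop-out model.
        p_hd = p_alleles[a] ** 2 * (1 - dropout ** 2)            # aa homozygote
        for x, p_x in p_alleles.items():
            if x != a:
                p_hd += 2 * p_alleles[a] * p_x * (1 - dropout) * dropout
        return p_hp / p_hd

    freqs = {"A": 0.10, "B": 0.25, "C": 0.65}   # hypothetical frequencies
    for d in (0.05, 0.30):                       # two systems' drop-out assumptions
        lr = likelihood_ratio(freqs, ("A", "B"), "A", d)
        print(f"drop-out model d={d}: LR = {lr:.1f}")

With these toy numbers, the same evidence and the same suspect yield likelihood ratios of roughly 2.6 and 4.5 depending on the drop-out assumption, a disagreement that matters when LRs are reported as evidentiary strength in court.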

15:00
An Empirical Approach to Capture Moral Uncertainty in Ethical AI

ABSTRACT. As AI Systems become increasingly autonomous, they are expected to engage in complex moral decision-making. To guide such processes, both theoretical and empirical solutions have been sought. In this research, we integrate the two lines of thought to address moral reasoning in AI Systems. We reconceptualize a metanormative framework for decision-making under moral uncertainty within the Discrete Choice Analysis domain, and we operationalize it through a latent class choice model. The discrete choice analysis-based formulation of the metanormative framework is both theory-rooted and practical, as it captures moral uncertainty through a small set of latent classes. To illustrate our approach, we conceptualize a society in which AI Systems are in charge of making policy choices. In the proof of concept, two AI Systems make policy choices on behalf of a society: one uses a baseline, morally certain model, while the other uses a morally uncertain model. We observe cases in which the two AI Systems disagree about the policy to be chosen, which we believe indicates the relevance of moral uncertainty.
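A minimal sketch of the kind of latent class setup the abstract describes, with all attributes, taste weights, class shares and policies invented for illustration: each latent class plays the role of a moral stance, the morally certain baseline commits to the single most probable class, and the morally uncertain agent mixes logit choice probabilities over classes, as a latent class choice model does.

    import numpy as np

    # Hypothetical policy attributes: [total welfare gain, equality gain]
    policies = np.array([[1.00, 0.00],    # policy X
                         [0.55, 1.00]])   # policy Y

    # Two latent "moral" classes with assumed taste weights and shares.
    # Class 0 weights aggregate welfare; class 1 weights equality.
    betas  = np.array([[2.0, 0.5],
                       [0.5, 3.0]])
    shares = np.array([0.6, 0.4])         # assumed class membership probabilities

    def logit_probs(beta):
        """Multinomial logit choice probabilities under one class."""
        v = policies @ beta
        e = np.exp(v - v.max())           # stabilized softmax
        return e / e.sum()

    # Morally certain baseline: commit to the single most likely class.
    certain_choice = np.argmax(logit_probs(betas[np.argmax(shares)]))

    # Morally uncertain agent: latent class model, mixing choice
    # probabilities over classes weighted by class shares.
    mixed = sum(s * logit_probs(b) for s, b in zip(shares, betas))
    uncertain_choice = np.argmax(mixed)

    print("certain agent picks:  ", "XY"[certain_choice])
    print("uncertain agent picks:", "XY"[uncertain_choice], "with probs", mixed.round(3))

With these toy parameters the certain agent picks policy X while the mixture-weighted agent picks policy Y, reproducing in miniature the kind of disagreement the abstract reports.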

15:15
Ethics for AI Writing: The Importance of Rhetorical Context

ABSTRACT. Implicit in any rhetorical interaction—between humans or between humans and machines—are ethical codes, including a purpose code that provides the reason for our interacting in the first place. These ethical understandings are a key part of rhetorical context, the social situation in which communication happens but also the engine that drives communicative interaction. Such codes are usually invisible to AI writing systems because they do not typically exist in the databases the systems use to produce discourse. Can AI writing systems learn to learn rhetorical context, particularly the implicit codes for communication ethics? We see evidence that some systems do address issues of rhetorical context, at least in rudimentary ways. But we critique the information transfer communication model supporting many AI writing systems, arguing for a social context model that accounts for what is “not there” in the data set but that is critical for the production of meaningful, significant, and ethical communication. We offer two ethical principles to guide design of AI writing systems: transparency about machine presence and critical data awareness, a methodological reflexivity about omissions in the data that need to be provided by a human agent or accounted for in machine learning.