
Paper session 7: Machine Ethics

Fri 21 03:00 PDT

#162 A Multi-Agent Approach to Combine Reasoning and Learning for an Ethical Behavior

Rémy Chaput, Jérémy Duval, Olivier Boissier, Mathieu Guillermin, Salima Hassas

The recent field of Machine Ethics is experiencing rapid growth to answer the societal need for Artificial Intelligence (AI) algorithms imbued with ethical considerations, such as benevolence toward human users and actors. Several approaches already exist for this purpose, mostly either by reasoning over a set of predefined ethical principles (Top-Down) or by learning new principles (Bottom-Up). While both methods have their own advantages and drawbacks, only a few works have explored hybrid approaches that combine the advantages of each, such as using symbolic rules to guide the learning process. This paper draws upon existing works to propose a novel hybrid method using symbolic judging agents to evaluate the ethics of learning agents' behaviors, and accordingly improve their ability to behave ethically in dynamic multi-agent environments. Multiple benefits ensue from this separation between judging and learning agents: agents can evolve (or be updated by human designers) separately, benefiting from co-construction processes; judging agents can act as accessible proxies for non-expert human stakeholders or regulators; and finally, multiple points of view (one per judging agent) can be adopted to judge the behavior of the same agent, which produces richer feedback. Our proposed approach is applied to an energy distribution problem in the context of a Smart Grid simulator, with continuous and multi-dimensional states and actions. The experiments and results show the ability of learning agents to correctly adapt their behaviors to comply with the judging agents' rules, including when rules evolve over time.
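To make the judging/learning separation concrete, here is a minimal, hypothetical Python sketch (not the authors' code): symbolic judging agents score a learner's action against their own rule sets, and the averaged judgments serve as the reward signal for a simple learner. The Smart-Grid-flavored rules, agent names, and the tabular learner are all invented for illustration; the paper's learners handle continuous, multi-dimensional states and actions.

```python
# Hypothetical sketch of the judging/learning split: judging agents hold
# symbolic rules, and their aggregated judgments become the learner's reward.
import random

class JudgingAgent:
    """One 'point of view': weighted predicates over (state, action)."""
    def __init__(self, name, rules):
        self.name = name
        self.rules = rules  # list of (predicate, weight) pairs

    def judge(self, state, action):
        # Score is the weighted fraction of this judge's rules satisfied.
        total = sum(w for _, w in self.rules)
        score = sum(w for pred, w in self.rules if pred(state, action))
        return score / total if total else 0.0

class LearningAgent:
    """Epsilon-greedy tabular learner over a discretized action set
    (a stand-in for the continuous-action learners in the paper)."""
    def __init__(self, actions, lr=0.1, eps=0.1):
        self.q = {a: 0.0 for a in actions}
        self.lr, self.eps = lr, eps

    def act(self):
        if random.random() < self.eps:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def learn(self, action, reward):
        self.q[action] += self.lr * (reward - self.q[action])

# Invented Smart-Grid-like setup: the action is an amount of energy to consume.
actions = [0.0, 0.5, 1.0]
fairness = JudgingAgent("fairness", [(lambda s, a: a <= s["fair_share"], 1.0)])
welfare = JudgingAgent("welfare", [(lambda s, a: a >= 0.5, 1.0)])
judges = [fairness, welfare]

learner = LearningAgent(actions)
state = {"fair_share": 0.5}
for _ in range(2000):
    a = learner.act()
    # Average the judges' feedback; richer aggregation schemes are possible.
    reward = sum(j.judge(state, a) for j in judges) / len(judges)
    learner.learn(a, reward)

print(learner.q)  # values should favor 0.5, the action satisfying both judges
```

Because each judge owns its rules, a judge can be updated (as when rules evolve over time) without touching the learner, which is the co-construction benefit the abstract highlights.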

Fri 21 03:15 PDT

#192 Ethically Compliant Planning within Moral Communities

Samer Nashed, Justin Svegliato, Shlomo Zilberstein

Ethically compliant autonomous systems (ECAS) are the state of the art for solving sequential decision-making problems under uncertainty while respecting constraints that encode ethical considerations. This paper defines a novel concept in the context of ECAS, drawn from moral philosophy: the moral community, which leads to a nuanced taxonomy of explicit ethical agents. We then propose new ethical frameworks that extend the applicability of ECAS to domains where a moral community is required. Next, we provide a formal analysis of the proposed ethical frameworks and conduct experiments that illustrate their differences. Finally, we discuss how explicit moral communities could shape research on standards and guidelines for ethical agents, in order to better understand and predict common errors in their design and communicate their capabilities.
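As a rough illustration of the idea (a sketch under invented assumptions, not the paper's formalization), the Python toy below restricts a value-iteration planner to actions an ethical framework permits, where permissibility depends on whether those affected belong to the moral community. The MDP, the forbidden-action rule, and all names are hypothetical.

```python
# Toy ethically compliant planner: value iteration over a tiny deterministic
# MDP, with actions filtered out when an ethical constraint forbids them.
GAMMA = 0.95

states = ["s0", "s1"]
actions = ["safe", "risky"]
transition = {("s0", "safe"): "s0", ("s0", "risky"): "s1",
              ("s1", "safe"): "s1", ("s1", "risky"): "s0"}
reward = {("s0", "safe"): 1.0, ("s0", "risky"): 5.0,
          ("s1", "safe"): 1.0, ("s1", "risky"): 5.0}

# Who each action affects, and who counts as inside the moral community.
affected = {("s0", "safe"): set(), ("s0", "risky"): {"bystander"},
            ("s1", "safe"): set(), ("s1", "risky"): {"bystander"}}
moral_community = {"user", "bystander"}

def permitted(s, a):
    # Invented constraint: "risky" is forbidden when it affects a community member.
    return not (a == "risky" and affected[(s, a)] & moral_community)

# Value iteration restricted to permitted actions.
V = {s: 0.0 for s in states}
for _ in range(200):
    V = {s: max(reward[(s, a)] + GAMMA * V[transition[(s, a)]]
                for a in actions if permitted(s, a))
         for s in states}

policy = {s: max((a for a in actions if permitted(s, a)),
                 key=lambda a: reward[(s, a)] + GAMMA * V[transition[(s, a)]])
          for s in states}
print(policy)  # {'s0': 'safe', 's1': 'safe'} once "risky" is ruled out
```

Changing who belongs to moral_community changes which actions survive the filter, which is one way distinct moral communities could yield distinct ethical frameworks.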

Fri 21 03:30 PDT

#272 Moral Disagreement and Artificial Intelligence

Pamela Robinson

Artificially intelligent systems will be used to make increasingly important decisions about us. Many of these decisions will have to be made without consensus about the relevant moral facts. I argue that what makes moral disagreement especially challenging is that there are two different ways of handling it: political solutions, which aim to find a fair compromise, and epistemic solutions, which aim at moral truth.