The Moral Machine Experiment: 40 Million Decisions and the Path to Universal Machine Ethics
Iyad Rahwan and Edmond Awad, MIT
Invited talk, AI, 9:00am-10:00am, Feb. 2nd, 2018
We describe the Moral Machine, an internet-based serious game exploring the multi-dimensional ethical dilemmas faced by autonomous vehicles. The game enabled us to gather 40 million decisions from 3 million people in 200 countries/territories. We report the various preferences estimated from this data and document interpersonal differences in the strength of these preferences. We also report cross-cultural ethical variation and uncover major clusters of countries exhibiting substantial differences along key moral preferences. These differences correlate with modern institutions, but also with deep cultural traits. We discuss how these three layers of preferences can help progress toward global, harmonious, and socially acceptable principles for machine ethics.
AI, Civil Rights, and Civil Liberties: Can Law Keep Pace with Technology?
Carol Rose, ACLU
Invited talk, AI and law, 4:30pm-5:30pm, Feb. 2nd
At the dawn of this era of human-machine interaction, human beings have an opportunity to fundamentally shape the ways in which machine learning will expand or contract the human experience, both individually and collectively. As efforts to develop guiding ethical principles and legal constructs for human-machine interaction move forward, how do we address not only what we do with AI, but also the question of who gets to decide and how? Are guiding principles of Liberty and Justice for All still relevant? Does a new era require new models of open leadership and collaboration around law, ethics, and AI?
The Great AI/Robot Jobs Scare: reality of automation fear redux
Richard Freeman (Harvard University)
Invited talk, AI and jobs, 9:00am-10:00am, Feb. 3rd
This talk will consider the impact of AI/robots on employment, wages, and the future of work more broadly. We argue that we should focus on policies that make AI/robotics technology broadly inclusive in terms of both consumption and ownership, so that billions of people can benefit from higher productivity and get on the path to the coming age of intolerable abundance.
AI Decisions, Risk, and Ethics: Beyond Value Alignment
Patrick Lin, California Polytechnic State University
Invited talk, AI and philosophy, 4:30pm-5:30pm, Feb. 3rd, 2018
When we think about the values AI should have in order to make right decisions and avoid wrong ones, there's a large but hidden third category to consider: decisions that are not-wrong but also not-right. This is the grey space of judgment calls, where just having good values might not help as much as you'd think. I'll use autonomous cars as my case study, with lessons for broader AI: ethical dilemmas can arise in everyday scenarios such as lane positioning and navigation, not just in crazy crash scenarios. This is the space where one good value might conflict with another good value, and there's no "right" answer or even broad consensus on an answer; so it's important to recognize these hard cases, which mark potential limits in the study of AI ethics.