Paper session 3: The Politics of AI

Wed 19 14:00 PDT

#218 Artificial Intelligence and the Purpose of Social Systems

Sebastian Benthall, Jake Goldenfein

The law and ethics of Western democratic states have their basis in liberalism. This extends to regulation and ethical discussion of technology and businesses doing data processing. Liberalism relies on the privacy and autonomy of individuals, their ordering through a public market, and, more recently, a measure of equality guaranteed by the state. We argue that these forms of regulation and ethical analysis are largely incompatible with the techno-political and techno-economic dimensions of artificial intelligence. By analyzing liberal regulatory solutions in the form of privacy and data protection, regulation of public markets, and fairness in AI, we expose how the data economy and artificial intelligence have transcended liberal legal imagination. Organizations use artificial intelligence to exceed the bounded rationality of individuals and of each other. This has led to the private consolidation of markets and an unequal hierarchy of control operating mainly for the purpose of shareholder value. An artificial intelligence will be only as ethical as the purpose of the social system that operates it. Inspired by the science of artificial life as an alternative to artificial intelligence, we consider data intermediaries: sociotechnical systems composed of individuals associated around collectively pursued purposes. An attention cooperative, which prioritizes its incoming and outgoing data flows, is one model of a social system that could form and maintain its own autonomous purpose.

Wed 19 14:15 PDT

#76 Hard Choices and Hard Limits in Artificial Intelligence

Bryce Goodman

Artificial intelligence (AI) is supposed to help us make better choices. Some of these choices are small, like what route to take to work or what music to listen to. Others are big, like what treatment to administer for a disease or how long to sentence someone for a crime. If AI can assist with these big decisions, we might think it can also help with hard choices: cases where the alternatives are neither better, worse, nor equal, but on a par. The aim of this paper, however, is to show that this view is mistaken: the fact of parity shows that there are hard limits on AI in decision making, and choices that AI cannot, and should not, resolve.

Wed 19 14:30 PDT

#180 Emergent Unfairness: Normative Assumptions and Contradictions in Algorithmic Fairness-Accuracy Trade-Off Research

A. Feder Cooper, Ellen Abrams

Across machine learning (ML) sub-disciplines, researchers make explicit mathematical assumptions in order to facilitate proof-writing. We note that, specifically in the area of fairness-accuracy trade-off optimization scholarship, similar attention is not paid to the normative assumptions that ground this approach. Such assumptions presume that 1) accuracy and fairness are in inherent opposition to one another, 2) strict notions of mathematical equality can adequately model fairness, 3) it is possible to measure the accuracy and fairness of decisions independent of historical context, and 4) collecting more data on marginalized individuals is a reasonable solution to mitigate the effects of the trade-off. We argue that such assumptions, which are often left implicit and unexamined, lead to inconsistent conclusions: while the intended goal of this work may be to improve the fairness of machine learning models, these unexamined, implicit assumptions can in fact result in emergent unfairness. We conclude by suggesting a concrete path forward toward a potential resolution.