You are browsing the site of a past edition of the AIES conference (2020).

Paper session 7: Policy and Governance

Policy versus Practice: Conceptions of Artificial Intelligence

ABSTRACT. The recent flood of concern around issues such as social biases implicit in algorithms, the economic impacts of artificial intelligence (AI), and potential existential threats posed by the development of AI technology motivates consideration of regulatory action to forestall or constrain certain developments in the fields of AI and machine learning. However, definitional ambiguity hampers conversation about these urgent topics of public concern. Legal and regulatory interventions require agreed-upon definitions, but consensus around a definition of AI has been elusive, especially in policy conversations. With an eye towards practical working definitions and a broader understanding of positions on these issues, we use a series of surveys and a review of published policy documents to examine variation in how researchers and policy-makers conceive of AI. We find that while AI researchers tend to favor definitions of AI that emphasize technical functionality, policy-makers favor definitions that emphasize comparison to human thinking and behavior. We point out that definitions that adhere closely to the functionality of AI systems are more inclusive of technologies in use today, whereas definitions that emphasize human-like capabilities are most applicable to hypothetical future technologies. As a result of this gap, ethical and regulatory efforts may emphasize concern about future technologies over pressing issues with existing deployed technologies.

U.S. Public Opinion on the Governance of Artificial Intelligence

ABSTRACT. Artificial intelligence (AI) has wide societal implications, yet social scientists are only beginning to study public attitudes toward the technology. Existing studies find that the public’s trust in institutions can play a major role in shaping the regulation of emerging technologies. Using a large-scale survey (N=2000), we examined Americans’ perceptions of 13 AI governance challenges as well as their trust in governmental, corporate, and multistakeholder institutions to responsibly develop and manage AI. While Americans perceive all 13 AI governance challenges to be important for tech companies and governments to manage, they have only low to moderate trust in these institutions to manage AI applications.

The AI Liability Puzzle and a Fund-Based Work-Around

ABSTRACT. Certainty around the regulatory environment is crucial to facilitating responsible AI innovation and its social acceptance. However, the existing legal liability system is ill-suited to assigning responsibility where potentially harmful conduct and/or the harm itself are unforeseeable, yet some instantiations of AI and/or the harms they may trigger are not foreseeable in the legal sense. The unpredictability of how courts would handle such cases makes the risks involved in investing in and using AI incalculable, creating an environment that is not conducive to innovation and may deprive society of some of the benefits AI could provide. To tackle this problem, we propose drawing insights from financial regulatory best practices and establishing a system of AI guarantee schemes. We envisage this system forming part of the broader market-structuring regulatory framework, with the primary function of providing a readily available, clear, and transparent funding mechanism to compensate claims that are either extremely hard or impossible to realize via conventional litigation. We propose at least partial industry funding, with the precise funding arrangements depending on whether the scheme would pursue other potential policy goals.

What’s Next for AI Ethics, Policy, and Governance? A Global Overview

ABSTRACT. Since 2016, more than 80 AI ethics documents – including codes, principles, frameworks, and policy strategies – have been produced by corporations, governments, and NGOs. In this paper, we examine three topics of importance related to our ongoing empirical study of ethics and policy issues in these emerging documents. First, we review possible challenges associated with the relative homogeneity of the documents’ creators. Second, we provide a novel typology of motivations to characterize both obvious and less obvious goals of the documents. Third, we discuss the varied impacts these documents may have on the AI governance landscape, including what factors are relevant to assessing whether a given document is likely to be successful in achieving its goals.