
Paper session 6: Responsible Design and Development

Thu 20 17:00 PDT

#5 Co-Design and Ethical Artificial Intelligence for Health: Myths and Misconceptions

Joseph Donia, Jay Shaw

Applications of artificial intelligence / machine learning (AI/ML) are dynamic and rapidly growing, and although multi-purpose, are particularly consequential in health care. One strategy for anticipating and addressing ethical challenges related to AI/ML for health care is co-design, or the involvement of end users in design. Co-design has a diverse intellectual and practical history, however, and has been conceptualized in many different ways. Moreover, the unique features of AI/ML introduce challenges to co-design that are often underappreciated. This review summarizes the research literature on involvement in health care and design and, informed by critical data studies, examines the extent to which co-design as commonly conceptualized is capable of addressing the range of normative issues raised by AI/ML for health. We suggest that AI/ML technologies have amplified existing challenges related to co-design and created entirely new ones. We outline five co-design ‘myths and misconceptions’ related to AI/ML for health that form the basis for future research and practice. We conclude by suggesting that the normative strength of a co-design approach to AI/ML for health can be considered at three levels: technological, health care system, and societal. We also suggest research directions for a ‘new era’ of co-design capable of addressing these challenges.

Thu 20 17:15 PDT

#173 Reflexive Design for Fairness and Other Human Values in Formal Models

Benjamin Fish, Luke Stark

Algorithms and other formal models purportedly incorporating human values like fairness have grown increasingly popular in computer science. In response to sociotechnical challenges in the use of these models, designers and researchers have taken widely divergent positions on how formal models incorporating aspects of human values should be used: encouraging their use, moving away from them, or ignoring the normative consequences altogether. In this paper, we seek to resolve these divergent positions by identifying the main conceptual limits of formal modeling and developing four reflexive values (value fidelity, appropriate accuracy, value legibility, and value contestation) vital for adequately incorporating human values into formal models. We then provide a brief methodology for reflexively designing formal models that incorporate human values.

Thu 20 17:30 PDT

#219 Machine Learning Practices Outside Big Tech: How Resource Constraints Hinder Responsible Development

Aspen Hopkins, Serena Booth

Practitioners from diverse occupations and backgrounds are increasingly using machine learning (ML) methods. Nonetheless, studies of ML practitioners typically draw their populations from Big Tech and academia, as researchers have easier access to these communities. Through this selection bias, past research often excludes the broader, less-resourced ML community—for example, practitioners working at startups, at non-tech companies, and in the public sector. These practitioners share many of the same ML development difficulties and ethical conundrums as their Big Tech counterparts; however, their experiences are subject to additional, under-studied challenges stemming from deploying ML with limited resources, increased existential risk, and a lack of access to in-house research teams. We contribute a qualitative analysis of 17 interviews with stakeholders from organizations that are less represented in prior studies. We uncover a number of tensions which are introduced or exacerbated by these organizations’ resource constraints—tensions between privacy and ubiquity, resource management and performance optimization, and access and monopolization. Increased academic focus on these practitioners can support a more holistic understanding of ML limitations and help define a research agenda that facilitates responsible ML development for all.