Invited talks

Sat 8th, 10:55 am – 11:45 am

Title: How to Put the Data Subject’s Sovereignty into Practice. Ethical Considerations and Governance Perspectives

Peter Dabrock (Friedrich-Alexander University Erlangen-Nuremberg / German Ethics Council)
Chair: TBD
Abstract:

Ethical considerations and governance approaches to AI are at a crossroads. Either one tries to convey the impression that one can bring back a status quo ante of our given "onlife" era, or one accepts responsible involvement in a digital world in which informational self-determination can no longer be safeguarded and fostered through the old-fashioned data protection principles of informed consent, purpose limitation, and data economy. The main focus of the talk is on how, under the given conditions of AI and machine learning, data sovereignty (interpreted as controllability [not control (!)] of the data subject over the use of her data throughout the entire data processing cycle) can be strengthened without hindering the innovation dynamics of the digital economy and the social cohesion of fully digitized societies. In order to put this approach into practice, the talk combines a presentation of the concept of data sovereignty put forward by the German Ethics Council with recent research trends in effectively applying the AI ethics principles of explainability and enforceability.

Short Bio:

Professor Peter Dabrock is the Chair of the German Ethics Council and Chair for Ethics in the Department of Theology at the University of Erlangen-Nuremberg (Germany). After several academic positions as researcher, Assistant, Associate, and Full Professor in Bochum and Marburg (1995–2010), he has held the Chair of Systematic Theology (Ethics) at the University of Erlangen-Nuremberg since October 2010. Beyond serving on many high-level national and international advisory bodies in academia, church, and society, including the European Group on Ethics in Science and New Technologies (2011–2016), he has been an appointed member of the German Ethics Council since 2012 and its elected Chairperson since 2016. Since 2017 he has also been an appointed member of ACATECH (German National Academy of Science and Engineering). Dabrock has published several books and more than 200 articles, with special focus on the ethics of emerging technologies, the ethics of the life sciences, social justice, and sexual ethics.

Fri 7th, 5:15 am – 6:15 am

Title: The AI-development connection – a view from the South

Anita Gurumurthy (IT for Change)
Chair: TBD
Abstract:

The socialisation of Artificial Intelligence and the reality of an intelligence economy mark an epochal moment. The impacts of AI are now systemic – restructuring economic organisation and value chains, public sphere architectures, and sociality. These shifts carry deep geo-political implications, reinforcing historical exclusions and power relations and disrupting the norms and rules that hold ideas of equality and justice together.

At the centre of this rapid change is the intelligent corporation and its obsessive pursuit of data. Directly impinging on bodies and places, the de facto rules forged by the intelligent corporation are disenfranchising the already marginal subjects of development. Using trade deals to liberalise data flows, tighten trade secret rules and enclose AI-based innovation, Big Tech and their political masters have effectively taken away the economic and political autonomy of states in the global south. Big Tech’s impunity extends to a brazen exploitation – enslaving labour through data over-reach and violating female bodies to universalise data markets.

Thinking through the governance of AI requires new frameworks that can grapple with the fraught questions of data ownership, data sovereignty, economic democracy, and institutional ethics, in a global world with local aspirations. Any effort towards norm development in this domain will need to see the geo-economics of digital intelligence and the geo-politics of development ideologies as two sides of the same coin.

Short Bio:

Anita Gurumurthy is executive director of IT for Change, an NGO that works on digital technologies and social justice. At IT for Change, Anita undertakes research and policy advocacy on the platform economy, data for development, and feminist frameworks of the digital, from a southern perspective.

Thu 6th, 7:00 pm – 8:30 pm (at NYU Cantor Film Center)

Title: Computerize the Race Problem? Why We Must Plan for a Just AI Future

Charlton McIlwain (New York University)
Chair: TBD
Abstract:

1960s civil rights and racial justice activists tried to warn us about our technological ways, but we didn’t hear them talk. The so-called wizards who stayed up late ignored or dismissed black voices, calling out from street corners to pulpits, union halls to the corridors of Congress. Instead, the men who took the first giant leaps towards conceiving and building our earliest “thinking” and “learning” machines aligned themselves with industry, government, and their elite science and engineering institutions. Together, they conspired to make those fighting for racial justice the problem that their new computing machines would be designed to solve. And solve that problem they did, through color-coded, automated, and algorithmically driven indignities and inhumanities that thrive to this day. But what if yesterday’s technological elite had listened to those Other voices? What if they had let them into their conversations, their classrooms, their labs, boardrooms, and government task forces to help determine what new tools to build, how to build them and – most importantly – how to deploy them? What might our world look like today if the advocates for racial justice had been given the chance to frame the day’s preeminent technological question for the world and ask, “Computerize the Race Problem?” Better yet, what might our AI-driven future look like if we ask ourselves this question today?

Short Bio:

Author of the new book Black Software: The Internet & Racial Justice, from the AfroNet to Black Lives Matter, Charlton McIlwain is Vice Provost for Faculty Engagement & Development at New York University and Professor of Media, Culture, and Communication. His work focuses on the intersections of computing technology, race, inequality, and racial justice activism. In addition to Black Software, McIlwain has authored the article “Racial Formation, Inequality & the Political Economy of Web Traffic” in the journal Information, Communication & Society, and co-authored, with Deen Freelon and Meredith Clark, the recent report Beyond the Hashtags: Ferguson, #BlackLivesMatter, and the Online Struggle for Offline Justice. He recently testified before the U.S. House Committee on Financial Services about the impacts of automation and artificial intelligence on the financial services sector.

Sat 8th, 4:30 pm – 5:30 pm

Title: From Bad Users and Failed Uses to Responsible Technologies: A Call to Expand the AI Ethics Toolkit

Gina Neff (Oxford Internet Institute, University of Oxford)
Chair: TBD
Abstract:

Recent advances in artificial intelligence applications have sparked scholarly and public attention to the challenges of the ethical design of technologies. These conversations about ethics have been targeted largely at technology designers and concerned with helping to build better and fairer AI tools and technologies. This approach, however, addresses only a small part of the problem of responsible use, and it will not be adequate for describing or redressing the problems that will arise as more types of AI technologies are more widely used.
Many of the tools being developed today have potentially enormous and historic impacts on how people work, how society organises, stores, and distributes information, where and how people interact with one another, and how people’s work is valued and compensated. And yet our ethical attention has looked at a fairly narrow range of questions about expanding the access to, fairness of, and accountability for existing tools. Instead, I argue that scholars should develop much broader questions about the reconfiguration of societal power, for which AI technologies form a crucial component.
This talk will argue that AI ethics needs to expand its theoretical and methodological toolkit in order to move away from prioritizing notions of good design that privilege the work of good and ethical technology designers. Instead, using approaches from feminist theory, organization studies, and science and technology studies, I argue for expanding how we evaluate uses of AI. This approach begins with the assumption of socially informed technological affordances, or “imagined affordances,” shaping how people understand and use technologies in practice. It also gives centrality to the power of social institutions in shaping technologies-in-practice.

Short Bio:

Professor Gina Neff is a Senior Research Fellow at the Oxford Internet Institute and the Department of Sociology at the University of Oxford. Science called her book Self-Tracking, co-authored with Dawn Nafus (MIT Press, 2016), “excellent,” and a reviewer in the New York Review of Books called it “easily the best book I’ve come across on the subject,” about “the tremendous power given to already powerful corporations when people allow companies to peer into their lives through data.” Her book about the rise of internet industries in New York City, Venture Labor: Work and the Burden of Risk in Innovative Industries (MIT Press, 2012), won the 2013 American Sociological Association’s Communication and Information Technologies Best Book Award. Her next book, Building Information: How Teams, Companies and Industries Make New Technologies Work, is co-authored with Carrie Sturts Dossick, with whom she directed the Collaboration, Technology and Organizations Practices Lab at the University of Washington. A leader in the new area of “human-centred data science,” Professor Neff heads a new project on the organizational challenges companies face when using AI for decision making.
She holds a Ph.D. in sociology from Columbia University, where she is a faculty affiliate at the Center on Organizational Innovation. Professor Neff has held fellowships at the British Academy, the Institute for Advanced Study, and Princeton University’s Center for Information Technology Policy. Her writing for the general public appears in Wired, Slate, and The Atlantic, among other outlets. As a member of the University of Oxford’s Innovation Forum, she advises the university’s entrepreneurship policies. She is the responsible technology advisor to GMG Ventures, a venture capital firm investing in digital news, media, and entertainment companies. She is a strategic advisor on AI to the Women’s Forum for the Economy & Society and leads the Minderoo Foundation’s working group on responsible AI. She serves on the steering committee for the Reuters Institute for the Study of Journalism, the advisory board of Data & Society, and the academic council for AI Now, and is on the Royal Society’s high-level expert commission on online information.

Fri 7th, 8:45 am – 9:45 am

Title: Machines Judging Humans: The Promise and Perils of Formalizing Evaluative Criteria

Frank Pasquale (University of Maryland)
Chair: TBD
Abstract:

Over the past decade, algorithmic accountability has become an important concern for social scientists, computer scientists, journalists, and lawyers [1]. Exposés have sparked vibrant debates about algorithmic sentencing. Researchers have exposed tech giants showing women ads for lower-paying jobs, discriminating against the aged, deploying deceptive dark patterns to trick consumers into buying things, and manipulating users toward rabbit holes of extremist content. Public-spirited regulators have begun to address algorithmic transparency and online fairness, building on the work of legal scholars who have called for technological due process, platform neutrality, and nondiscrimination principles.

This policy work is just beginning, as experts translate academic research and activist demands into statutes and regulations. Lawmakers are proposing bills requiring basic standards of algorithmic transparency and auditing. We are starting down a long road toward ensuring that AI-based hiring practices and financial underwriting are not used if they have a disparate impact on historically marginalized communities. And just as this “first wave” of algorithmic accountability research and activism has targeted existing systems, an emerging “second wave” of algorithmic accountability has begun to address more structural concerns. Both waves will be essential to ensure a fairer, and more genuinely emancipatory, political economy of technology. Second-wave work is particularly important when it comes to illuminating the promise and perils of formalizing evaluative criteria.

Short Bio:

Frank Pasquale, JD, MPhil, Piper & Marbury Professor of Law at the University of Maryland, is an expert on the law of big data, predictive analytics, artificial intelligence, and algorithms. He is the author of The Black Box Society (Harvard University Press, 2015) and has served as a member of the Council for Big Data, Ethics, and Society.