Associate Professor, UNC Law, Founding Director,
AI Decision-Making Research Program
Automated decision-making has been shown to produce unintentionally biased results. Could a Rawlsian approach to justice offer both a deeper explanation of this phenomenon and a way forward?
Dr. Ajunwa is an Associate Professor at the University of North Carolina School of Law and an Adjunct Associate Professor at the Kenan-Flagler Business School, where she is a Rethinc. lab Fellow. She is also the Founding Director of the Artificial Intelligence Decision-Making Research (AI-DR) Program at UNC Law. Professor Ajunwa is a 2019 recipient of the NSF CAREER Award and a 2018 recipient of the Derrick A. Bell Award from the Association of American Law Schools (AALS). Previously, she was an Associate Professor in the Labor Relations, Law, and History Department of Cornell University’s Industrial and Labor Relations School (ILR), where she received the Junior Faculty Champion Award from Cornell University and earned tenure in 2020. She has been a Faculty Associate at the Berkman Klein Center at Harvard University since 2017. Dr. Ajunwa’s research interests lie at the intersection of law and technology, with a particular focus on the ethical governance of workplace technologies. Her forthcoming book, “The Quantified Worker,” which examines the role of technology in the workplace and its effects on management practices as moderated by employment law, will be published by Cambridge University Press. Dr. Ajunwa is a Founding Board Member of the Labor Tech Research Network, an international group of scholars committed to research on the ethics of AI used in the workplace and for labor.
Co-Founder, Black in AI
Despite the rising number of papers and other work discussing issues of fairness in AI, the structural issues at the root of these problems remain unaddressed and unconfronted by the academic community, leaving members of the communities treated as “subjects” of these studies to take the fall. This talk will discuss the changes this community needs to advocate for if we are truly going to move beyond fairness rhetoric and work toward meaningful change.
Until her recent firing, Timnit Gebru co-led the Ethical Artificial Intelligence research team at Google, working to reduce the potential negative impacts of AI. Timnit earned her doctorate at Stanford University in 2017 and completed a postdoc at Microsoft Research NYC on the FATE team. She is also the co-founder of Black in AI, a community for sharing ideas, fostering collaborations, and discussing initiatives to increase the presence of Black people in the field of Artificial Intelligence.
Associate Professor, Princeton University
Machine learning research culture is driven by benchmark datasets to a greater degree than most other research fields. But the centrality of datasets also amplifies the harms associated with data, including privacy violations and the underrepresentation or erasure of some populations. This has stirred a much-needed debate on the ethical responsibilities of dataset creators and users. I argue that clarity in this debate requires taking a step back to better understand the benefits of the dataset-driven approach. I show that benchmark datasets play at least six distinct roles, and that the potential harms depend on which roles a dataset plays. By understanding this relationship, we can mitigate the harms while preserving what is scientifically valuable about the prevailing approach.
Arvind Narayanan is an Associate Professor of Computer Science at Princeton. He leads the Princeton Web Transparency and Accountability Project, which has helped uncover how companies collect and use our personal information and how they deploy “dark patterns” to manipulate users. Narayanan has published foundational empirical work showing how machine learning reflects cultural stereotypes. He co-created a Massive Open Online Course and textbook on Bitcoin and cryptocurrency technologies that has been used in over 150 courses worldwide. Narayanan is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE), a two-time recipient of the Privacy Enhancing Technologies Award, and a three-time recipient of the Privacy Papers for Policy Makers Award.