
Student Papers

Fri 21, 13:00 – 14:00 PDT

The Coloniality of Latin American Data Work

Julian Posada

How should we understand the social and political effects of the datafication of human life? This paper argues that the effects of data should be understood as a constitutive shift in social and political relations. We explore how datafication, or the quantification of human and non-human factors into binary code, affects the identity of individuals and groups. This fundamental shift goes beyond the economic and ethical concerns that have been the focus of other efforts to explore the effects of datafication and AI. We highlight that technologies such as datafication and AI (and previously, the printing press) both disrupted extant power arrangements, leading to decentralization, and triggered a recentralization of power by new actors better adapted to leveraging the new technology. We use the analogy of the printing press to provide a framework for understanding constitutive change. The printing press example gives us more clarity on 1) what can happen when the medium of communication drastically alters how information is communicated and stored; 2) the shift in power from state to private actors; and 3) the tension of simultaneously connecting individuals while driving them towards narrower communities through algorithmic analyses of data.

Designing effective and accessible consumer protections against unfair treatment in markets where automated decision making….

Linda Przhedetsky

The use of data-driven Automated Decision Making (ADM) to determine access to products or services in competitive markets can enhance or limit access to equality and fair treatment. In cases where essential services such as housing, energy, and telecommunications are accessed through a competitive market, consumers who are denied access to one or more of these services may not be able to find a suitable alternative that matches their needs, budget, and unique circumstances. Being denied access to an essential service such as electricity or housing can be an issue of life or death. Competitive essential services markets therefore illuminate the ways that using ADM to determine access to products or services, if not balanced by appropriate consumer protections, can cause significant harm. My research explores existing and emerging consumer protections that are effective in preventing consumers from being harmed by ADM-facilitated decisions in essential services markets.

AIES Student Track Application: Algorithmic Fairness and Economic Insecurity

Pegah Nokhiz

Training for Implicit Norms in Deep Reinforcement Learning Agents through Adversarial Multi-Objective Reward Optimization

Markus Peschl

Empowering the “common citizen” in a world filled with AI-based products against overpowering by private or state interests

Clàudia Figueras

While AI systems become more pervasive, their social impact is increasingly hard to measure. To help mitigate possible risks and guide practitioners towards more responsible design, diverse organizations have released AI ethics frameworks. However, it remains unclear how ethical issues are dealt with in the everyday practices of AI developers. To this end, we have carried out an exploratory empirical study, interviewing AI developers working for Swedish public organizations to understand how ethics are enacted in practice. Our analysis found that several AI ethics issues are not consistently tackled, and AI systems are not fully recognized as part of a broader sociotechnical system.

Causality in Neural Networks – An Extended Abstract

Abbavaram Gowtham Reddy

Causal reasoning is the main learning and explanation tool used by humans. AI systems should possess causal reasoning capabilities to be deployed in the real world with trust and reliability. Introducing the ideas of causality to machine learning helps in providing better learning and explainable models. Explainability and causal disentanglement are important aspects of any machine learning model. Causal explanations are required to trust a model's decisions, and learning causally disentangled representations is important for transfer learning applications. We exploit ideas from causality in deep learning models to achieve better, causally explainable models that are useful for fairness, disentangled representation learning, and related applications.

To Scale: The Universalist and Imperialist Narrative of Big Tech

Jessica de Jesus de Pinho Pinhal

I am currently working on a research project on scaling and universalism as epistemic values. I showed how the claim that algorithms can scale over dimensions such as time, space, complexity, and domains is rationally fallacious. Now, I wish to explore how power can explain why this narrative, while rationally ungrounded, is still spreading and gaining influence, with some disastrous consequences.

Examining Religion Bias in AI Text Generators

Deepa Muralidhar

One of the biggest reasons artificial intelligence (AI) faces backlash is the presence of inherent biases in AI software. Deep learning algorithms use data fed into the systems to find patterns and draw conclusions that are used to make application decisions. Patterns in the data fed into machine learning algorithms have revealed that the resulting decisions have biases embedded within them. Algorithmic audits can certify that the software is making responsible decisions. These audits verify standards centered on AI principles such as explainability, accountability, and human-centered values such as fairness and transparency, to increase trust in the algorithm and the software systems that implement it.