
Paper session 2: AI & the Social Sciences

Wed 19 11:30 PDT

#70 On the Validity of Arrest as a Proxy for Offense: Race and the Likelihood of Arrest for Violent Crimes

Riccardo Fogliato, Alice Xiang, Zachary Lipton, Alexandra Chouldechova

Re-offense risk is considered in decision-making at many stages of the criminal justice system, from pre-trial, to sentencing, to parole. To aid decision-makers in their assessments, institutions increasingly rely on algorithmic risk assessment instruments (RAIs). These tools assess the likelihood that an individual will be arrested for a new criminal offense within some time window following their release. However, since not all crimes result in arrest, RAIs do not directly assess the risk of re-offense. Furthermore, disparities in the likelihood of arrest can potentially lead to biases in the resulting risk scores. Several recent validations of RAIs have therefore focused on arrests for violent offenses, which are viewed as being more accurate and less biased reflections of offending behavior. In this paper, we investigate biases in violent arrest data by analyzing racial disparities in the likelihood of arrest for White and Black violent offenders. We focus our study on 2007–2016 incident-level data of violent offenses from 16 US states as recorded in the National Incident-Based Reporting System (NIBRS). Our analysis shows that the magnitude and direction of the racial disparities depend on various characteristics of the crimes. In addition, our investigation reveals large variations in arrest rates across geographical locations and offense types. We discuss the implications of the observed disconnect between re-arrest and re-offense in the context of RAIs and the challenges around using NIBRS data to correct for this sampling bias.
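For readers unfamiliar with the quantity at issue, the "likelihood of arrest" here is the share of reported violent incidents that result in an arrest, broken out by offender race and other incident characteristics. A minimal sketch of that computation on toy incident-level records (the column names and values below are illustrative, not the paper's actual NIBRS schema or results) might look like:

```python
# Hypothetical sketch: arrest rate by offender race and offense type
# on toy incident-level data. Columns and values are illustrative only,
# not NIBRS's actual schema or the paper's findings.
import pandas as pd

incidents = pd.DataFrame({
    "offender_race": ["White", "Black", "White", "Black", "White", "Black"],
    "offense_type":  ["assault", "assault", "robbery", "robbery", "assault", "robbery"],
    "arrest_made":   [1, 0, 0, 1, 1, 1],  # 1 if the incident led to an arrest
})

# Share of reported incidents ending in an arrest, by race and offense type.
arrest_rates = (incidents
                .groupby(["offender_race", "offense_type"])["arrest_made"]
                .mean()
                .rename("arrest_rate"))
print(arrest_rates)
```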

Wed 19 11:45 PDT

#209 Watching the Watchers: Estimating the Prevalence of Surveillance Cameras across the United States with Street View Data

Hao Sheng, Keniel Yao, Sharad Goel

The use of video surveillance in public spaces, both by government agencies and by private citizens, has attracted considerable attention in recent years, particularly in light of rapid advances in face-recognition technology. But it has been difficult to systematically measure the prevalence and placement of cameras, hampering efforts to assess the implications of surveillance on privacy and public safety. Here we present a novel approach for estimating the spatial distribution of surveillance cameras: applying computer vision algorithms to large-scale street view image data. Specifically, we build a camera detection model and apply it to 1.6 million street view images sampled from 10 large U.S. cities and 6 other major cities around the world, with positive model detections verified by human experts. After adjusting for the estimated recall of our model, and accounting for the spatial coverage of our sampled images, we are able to estimate the density of surveillance cameras visible from the road. Across the 16 cities we consider, the estimated number of surveillance cameras per linear kilometer ranges from 0.1 (in Seattle) to 0.9 (in Seoul). In a detailed analysis of the 10 U.S. cities, we find that cameras are concentrated in commercial, industrial, and mixed zones, and in neighborhoods with higher shares of non-white residents, a pattern that persists even after adjusting for land use. These results help inform ongoing discussions on the use of surveillance technology, including its potential disparate impacts on communities of color.
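The density estimate described above hinges on two corrections: inflating verified detections by the model's estimated recall, and normalizing by the road length actually covered by the sampled images. As a rough illustration only (the function name, recall value, and coverage figures below are hypothetical, not taken from the paper), the adjustment could be sketched as:

```python
# Hypothetical sketch of a recall-adjusted camera-density estimate.
# All names and numbers are illustrative, not the paper's actual values.

def estimated_camera_density(verified_detections: int,
                             model_recall: float,
                             sampled_road_km: float) -> float:
    """Estimate cameras visible from the road per linear kilometer.

    verified_detections: model detections confirmed by human review
    model_recall: estimated fraction of visible cameras the model finds
    sampled_road_km: total road length covered by the sampled images
    """
    # Inflate the verified count to account for cameras the model missed,
    # then normalize by the road length covered by the sample.
    return (verified_detections / model_recall) / sampled_road_km


# Made-up inputs: 450 verified detections, 0.60 recall, 3,000 km of
# sampled road gives an estimate of 0.25 cameras per kilometer.
print(estimated_camera_density(450, 0.60, 3_000))
```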

Wed 19 12:00 PDT

#238 Algorithmic Hiring in Practice: Recruiter and HR Professional’s Perspectives on AI Use in Hiring

Lan Li, Tina Lassiter, Joohee Oh, Min Kyung Lee

The increasing adoption of AI-enabled hiring software raises questions about how Human Resource (HR) professionals use the software in practice and about its consequences. We interviewed 15 recruiters and HR professionals who used AI-enabled hiring software for two decision-making processes in hiring: sourcing and assessment. For both, AI-enabled software allowed the efficient processing of candidate data, thus providing the ability to introduce or advance candidates from broader and more diverse pools. For sourcing, it can serve as a useful learning resource to find candidates. However, a lack of trust in data accuracy and an inadequate level of control over algorithmic candidate matches can create reluctance to embrace it. For assessment, its implementation varied across companies depending on the industry and the hiring scenario. Its inclusion may redefine HR professionals' job content as it automates or augments pieces of the existing hiring process. Our research highlights the importance of understanding the contextual factors that shape how algorithmic hiring is practiced in organizations.