The increasing prevalence of artificial intelligence technologies in the United States has generated a new kind of redlining risk through algorithmic bias. President Biden’s recent executive order provides one of several opportunities to curb this risk.
by Kristina Lorch, '24/'25 for Annotations Blog
Municipal opportunity hoarding, fines and fees, and local secession are long-standing and well-documented forms of redlining, a term that has come to encompass any practice contributing to racial segregation and socioeconomic inequality in housing, in the United States or in any other community where such practices take hold. These practices have long shaped individuals’ physical geography: where one works, lives, and plays. Activists, advocates, and policymakers, in turn, have long worked to combat their segregationist effects.
Today, the increasing prevalence of artificial intelligence (AI) technologies in the United States has generated a new kind of redlining risk through algorithmic bias. President Joseph R. Biden’s recent executive order on safe, secure, and trustworthy AI is one important step in combating historical bias and contemporary “technological redlining”—that is, racial segregation and marginalization through technology.
AI and Technological Redlining
Over the last several years, AI technologies have promised to make life easier and cheaper for individuals, companies, and governments alike. Advocates often celebrate these technologies for lowering costs and removing implicit and explicit biases from decision-making loops. Yet the same AI systems have created serious problems across sectors by perpetuating group separation through algorithmic bias.
Algorithmic bias arises when an AI system uses “unrepresentative or incomplete training data or [relies] on flawed information that reflects historical inequalities.” Because AI systems depend on their human developers’ and users’ instructions and inputs, incorrect or incomplete data can lead to systematically skewed, in other words biased, outputs. These individually biased judgments can then accumulate into community- and society-wide biases against populations that have been historically marginalized. UCLA professor Safiya Noble refers to the unfair effects perpetuated by black-boxed algorithms as “technological redlining.” This virtual version of the centuries-long physical practice of segregating racial groups manifests “when algorithms produce inequitable outcomes and replicate known inequalities, leading to the systematic exclusion” of racial minorities.
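To make that mechanism concrete, the short Python sketch below trains a naive home-valuation rule on a dataset in which one neighborhood group is badly underrepresented, then compares the model’s average error for each group. It is an illustration only: the group labels, sample sizes, and price figures are invented, and no study cited in this post used this code.

```python
import random
import statistics

random.seed(0)

def make_sales(group, n):
    """Simulate home sales; the size-to-price relationship differs between
    the two hypothetical neighborhood groups."""
    rate = 300 if group == "A" else 180           # dollars per square foot
    sales = []
    for _ in range(n):
        size = random.uniform(800, 2500)          # square feet
        price = rate * size + random.gauss(0, 20_000)
        sales.append((size, price, group))
    return sales

# Unrepresentative training data: group B is heavily under-sampled.
train = make_sales("A", 950) + make_sales("B", 50)

# A naive "valuation model": a single pooled price-per-square-foot estimate.
pooled_rate = statistics.mean(price / size for size, price, _ in train)

# Evaluate on balanced test data and compare errors by group.
test = make_sales("A", 500) + make_sales("B", 500)
for group in ("A", "B"):
    errors = [abs(pooled_rate * size - price)
              for size, price, g in test if g == group]
    print(f"group {group}: mean absolute valuation error ${statistics.mean(errors):,.0f}")
```

Because the pooled rate is dominated by group A’s sales, the model’s errors for group B come out systematically larger, which is the individual-level skew described above.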
While U.S. law has prohibited redlining for decades, curbing technological redlining is particularly challenging because its discriminatory effects are generated virtually, through online platforms and often proprietary software, rather than through physical acts. As a result, attribution and remediation are complex.
AI’s Implications in Industry and the Public Sector
When unmonitored and unregulated, algorithms used in financial services, online advertising, and certain law enforcement tools may generate biased outputs. In housing and financial services, researchers have found that automated valuation models, introduced both to enable housing transactions during the COVID-19 pandemic and to mitigate previously documented bias in in-person appraisals, produced larger errors in property sale prices in majority-Black neighborhoods than in majority-white neighborhoods (although these errors were not only downwardly biased).
Another study found that borrowers who refinance their student loans through companies that use education data may pay more for the same loan if they attended a Historically Black College or University (HBCU) than if they did not. For example, comparing five-year loan applications at the financial technology company Upstart and controlling for all other “inputs,” the study found that a hypothetical Howard University graduate was charged $3,499 more than a comparable NYU graduate.
In a different study on online advertisement delivery, Harvard professor Latanya Sweeney used queries of 2,184 racially associated first names to demonstrate statistically significant ad discrimination. On one search engine, first names more commonly associated with Black individuals than with white individuals were 25% more likely to generate advertisements offering arrest records.
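For readers unfamiliar with what “statistically significant” means in a claim like this, the sketch below applies a standard two-proportion z-test to made-up counts. The counts, and the choice of this particular test, are assumptions for illustration only; Sweeney’s paper describes its own methodology.

```python
import math

def two_proportion_z_test(hits_a, n_a, hits_b, n_b):
    """Test whether two groups of searches trigger a given ad at different rates."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical counts: arrest-record ads shown for searches of names more
# associated with Black vs. white individuals (numbers invented for illustration,
# chosen so one group sees the ad 25% more often).
rate_black, rate_white, z, p = two_proportion_z_test(600, 1000, 480, 1000)
print(f"ad rate (Black-associated names): {rate_black:.0%}")
print(f"ad rate (white-associated names): {rate_white:.0%}")
print(f"z = {z:.2f}, p = {p:.2g}")
```

With a gap this large across this many queries, the p-value is far below conventional significance thresholds, meaning the disparity is very unlikely to be a product of chance alone.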
AI-informed systems that generate property values, loan terms, and online search results all have tangible effects on the financial assets and socioeconomic opportunities available to individuals of different races and backgrounds. When consumers, businesses, and regulators cannot tell how such systems generate their outputs, yet take those outputs as “truth,” they undermine AI’s equalizing potential and hinder the public’s ability to understand and mitigate bias.
The adoption of unregulated AI tools in law enforcement and the criminal justice system also risks perpetuating racial and ethnic bias. The New York Times reported that all three people known to have been arrested based on a faulty facial recognition match were Black men. These individual injustices can reemerge at a societal level. The U.S. National Institute of Standards and Technology studied 189 commercially available facial recognition systems (including systems from Microsoft, the biometric technology company Cognitec, and a Chinese AI company) and found that most of them misidentified Black and Asian faces between 10 and 100 times more frequently than white faces.
Predictive policing, in which software ingests and analyzes prior crime reports, arrest records, and other data points to predict where crime is most likely to occur, similarly increases the threat of over-policing in areas where criminal activity has historically been recorded. Without additional policy interventions, reliance on smart policing technologies such as facial recognition or geofencing risks concentrating law enforcement activity in historically marginalized communities and sustaining long-standing racial and socioeconomic inequalities.
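The over-policing dynamic can be illustrated with a toy feedback loop. The Python sketch below models no real product and uses invented numbers: patrols are allocated in proportion to previously recorded incidents, and because more patrols generate more records, an initial disparity in recorded crime persists year after year even though the two neighborhoods have identical underlying crime rates.

```python
import random

random.seed(1)

# Two hypothetical neighborhoods with the SAME underlying crime rate,
# but neighborhood "B" starts with more historically recorded incidents.
true_crime_rate = {"A": 0.10, "B": 0.10}
recorded = {"A": 20, "B": 60}      # historical arrest/report counts
TOTAL_PATROLS = 100

for year in range(1, 6):
    total = sum(recorded.values())
    # "Predictive" allocation: patrols proportional to past recorded incidents.
    patrols = {n: round(TOTAL_PATROLS * recorded[n] / total) for n in recorded}
    # More patrols mean more of the (equal) underlying crime gets recorded.
    for n in recorded:
        detected = sum(random.random() < true_crime_rate[n] for _ in range(patrols[n]))
        recorded[n] += detected
    print(f"year {year}: patrols={patrols}, cumulative recorded={recorded}")
```

The simulation never learns that the two neighborhoods are equally safe; the historical record, not the underlying reality, drives where enforcement goes.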
Operationalizing Biden’s Executive Order
President Biden’s executive order outlines several general strategies to promote the safety, security, and trustworthiness of AI. Notably, the order mandates:
- Evaluating how federal agencies collect and use commercially available information to understand and mitigate privacy risks arising from the use of AI
- Providing tailored guidance to landlords, federal benefits programs, and federal contractors on combating algorithmic bias
- Facilitating training, technical assistance, and coordination between the Department of Justice and federal civil rights offices for investigating and prosecuting AI-related civil rights violations
- Developing best practices on the use of AI throughout the lifecycle of criminal investigations and adjudications
While operationalizing these strategies to mitigate algorithmic bias will take time, research has identified many promising places to start. To unleash AI’s potential benefits responsibly, AI developers can add more representative inputs to decision-making systems, implement accuracy standards, and maintain continuous human oversight. To earn the public’s trust, developers should hire diverse teams to build decision-making code, conduct internal bias audits, and release equity impact assessments. And to ensure accountability, Congress and state governments must pass complementary regulation so that AI, particularly systems with significant impacts on individuals’ civil rights and access to critical services, receives continuous oversight and course correction. Such measures will help curb technological redlining and allow AI to achieve its potential as an enabling technology for all.
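As one concrete example of the internal bias audits mentioned above, the sketch below computes two commonly used disparity checks, approval-rate differences and false-negative-rate differences, over a hypothetical decision log. The field names, groups, and figures are invented for illustration; a real audit would be considerably more involved.

```python
from collections import defaultdict

def bias_audit(decisions):
    """decisions: dicts with 'group', 'approved' (bool), and 'qualified' (bool,
    a ground-truth label). Reports each group's approval rate and the rate at
    which qualified applicants are wrongly denied (false negatives)."""
    by_group = defaultdict(list)
    for d in decisions:
        by_group[d["group"]].append(d)
    report = {}
    for group, rows in by_group.items():
        qualified = [r for r in rows if r["qualified"]]
        denied_qualified = [r for r in qualified if not r["approved"]]
        report[group] = {
            "approval_rate": sum(r["approved"] for r in rows) / len(rows),
            "false_negative_rate": (len(denied_qualified) / len(qualified)
                                    if qualified else None),
        }
    return report

# Hypothetical decision log.
log = [
    {"group": "X", "approved": True,  "qualified": True},
    {"group": "X", "approved": True,  "qualified": False},
    {"group": "X", "approved": False, "qualified": True},
    {"group": "Y", "approved": False, "qualified": True},
    {"group": "Y", "approved": False, "qualified": True},
    {"group": "Y", "approved": True,  "qualified": True},
]

for group, metrics in bias_audit(log).items():
    print(group, metrics)
```

Large gaps between groups on either metric would flag a system for closer review, the kind of continuous oversight and course correction described above.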
Meet the Author: Kristina Lorch
Kristina is an MPA/JD student at Princeton’s School of Public and International Affairs and the University of Virginia School of Law. She is particularly interested in how emerging trends and technologies affect international and national security law in the United States. She holds a bachelor’s degree in Government from Harvard University. The views expressed here are her own and do not necessarily reflect those of any employer.