This is the third and final entry in a series analyzing technology policy issues (the gig economy, online speech, and algorithmic bias) under the Biden administration.

Algorithmic Bias in the Public and Private Sector

Private companies, federal agencies, and law enforcement are increasingly using artificial intelligence (AI) and machine learning to evaluate information. According to the National Security Commission on Artificial Intelligence, AI refers to the “ability of a computer system to solve problems and to perform tasks that would otherwise require human intelligence.” AI-powered systems may be faster and more accurate than humans, but as a result of flawed datasets and design, they can still discriminate and exhibit bias.

AI consists of a series of algorithms: step-by-step “instructions” for solving a problem. Algorithmic decision-making refers to the process of feeding input data into such a system to generate a score, choice, or other output, which is then used to render a decision such as a classification, prioritization, or sort.
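To make these terms concrete, here is a toy decision pipeline in Python. The features, weights, and threshold are entirely hypothetical, invented for illustration; they are not drawn from any real scoring system.

```python
# Illustrative only: a toy algorithmic decision-making pipeline.
# Feature names, weights, and the threshold are all hypothetical.

def score_applicant(applicant: dict) -> float:
    """Step 1: combine the input data into a single numeric score."""
    weights = {"income": 0.4, "years_employed": 0.3, "debt_ratio": -0.3}
    return sum(weights[k] * applicant[k] for k in weights)

def decide(applicant: dict, threshold: float = 1.0) -> str:
    """Step 2: turn the score into a decision (here, a classification)."""
    return "approve" if score_applicant(applicant) >= threshold else "deny"

print(decide({"income": 3.2, "years_employed": 4, "debt_ratio": 1.5}))  # approve
```

Every choice embedded in such a pipeline, from which features to use to where to set the threshold, is a point where bias can enter.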

Although algorithms are inherently methodical processes, a 2016 study by researchers at Boston University and Microsoft Research demonstrated how they can still discriminate. After training a natural language processing model on Google News text, the researchers tasked it with completing analogies. The model produced gender stereotypes to an alarming extent because it learned from flawed data: although the technology itself contained no prejudice, it mimicked the human bias present in its training dataset.
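This analogy test is straightforward to reproduce with public tools. The sketch below assumes the pretrained word2vec-google-news-300 vectors distributed through gensim's downloader (a sizable download); it is a reproduction sketch, not the study's original code.

```python
# A minimal sketch of the analogy test, using gensim and pretrained
# Google News word2vec vectors (roughly a 1.6 GB download).
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")

# "man is to computer_programmer as woman is to ___?"
# The model answers by vector arithmetic: computer_programmer - man + woman.
# (Multi-word vocabulary entries are joined with underscores.)
print(vectors.most_similar(positive=["woman", "computer_programmer"],
                           negative=["man"], topn=3))
```

The top-ranked completions mirror gender stereotypes in the news corpus, which is exactly the behavior the study documented.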

Algorithmic decision-making is used for a range of purposes, including hiring, personal finance, and policing. Thus, when algorithmic bias occurs, it can have significant effects on social and economic opportunities.

This section focuses on the impact of algorithmic bias in both the private and public sector:

  • In the private sector, AI-powered tools assist professionals in sorting and decision-making. Recruiters use automated processes to expedite applicant screening, interviewing, and selection. Algorithms are similarly used in financial services for credit risk assessment and underwriting.
  • In the public sector, police use computerized facial recognition for identification. This technology confirms a person’s identity by detecting a human face in a photo or video and analyzing its physical attributes (see the sketch after this list). Accuracy varies depending on the subject’s race and gender.
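As referenced in the list above, the sketch below illustrates the basic detect-encode-compare pipeline using the open-source face_recognition library. The filenames are hypothetical placeholders, and police systems run different, typically proprietary, software; this only shows the general approach.

```python
# A sketch of the detect-encode-compare pipeline behind face matching,
# using the open-source face_recognition library.
import face_recognition

known = face_recognition.load_image_file("known_person.jpg")  # hypothetical file
probe = face_recognition.load_image_file("video_frame.jpg")   # hypothetical file

# Detect each face and encode its physical attributes as a
# 128-dimensional vector.
known_enc = face_recognition.face_encodings(known)[0]
probe_enc = face_recognition.face_encodings(probe)[0]

# Compare the encodings; `tolerance` is the match threshold. How it
# interacts with image quality and demographics drives the accuracy
# disparities noted above.
is_match = face_recognition.compare_faces([known_enc], probe_enc, tolerance=0.6)[0]
print("match" if is_match else "no match")
```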

In both applications, a complete and representative dataset is necessary to avoid algorithmic biases and inaccuracies. Policymakers are weighing the benefits and risks of algorithmic decision-making, while simultaneously addressing civil rights and privacy considerations.

Biden-Harris Administration

President Joe Biden and Vice President Kamala Harris support greater regulations for AI-powered systems. In their view, algorithms can be conduits for racial prejudice and amplify inequalities. To that end, the Biden administration will be focused on eliminating racial disparities perpetuated by technology.

During the campaign, then-candidate Biden promised to create a new public credit reporting and scoring division within the Consumer Financial Protection Bureau to “minimize racial disparities.” According to the Biden-Harris campaign website, he plans to address “algorithms used for credit scoring [and their] discriminatory impact… by accepting non-traditional sources of data like rental history and utility bills to establish credit.”

While serving as a U.S. senator, Vice President Harris was an outspoken critic of algorithmic bias. She co-sponsored the Justice in Policing Act and sent letters to several federal agencies about the dangers of facial recognition technology. The letters, sent to the Equal Employment Opportunity Commission, Federal Trade Commission, and the Federal Bureau of Investigation, asked officials to clarify how they were “addressing the potentially discriminatory impacts of facial analysis technologies.”

In a 2019 speech, Vice President Harris cautioned that “there is a real need to be very concerned about [artificial intelligence and machine learning]… how being built into it is racial bias.” She also noted that “unlike the racial bias that all of us can pretty easily detect when you get stopped in a department store or while you’re driving, the bias that is built into technology will not be easy to detect.”

On January 15th, then-President-elect Biden named Alondra Nelson deputy director for science and society at the White House Office of Science and Technology Policy. Nelson, a sociologist who has studied the social impact of emerging technologies, has stated that “we have a responsibility to work together to make sure that our science and technology reflects us.”

Current State of Regulation

Federal Trade Commission (FTC)

While artificial intelligence and machine learning pose new challenges for existing regulatory frameworks, automated decision-making has existed for years. The FTC enforces federal consumer protection laws such as the Fair Credit Reporting Act (1970) and the Equal Credit Opportunity Act (1974), both of which regulate automated decision-making systems in the financial services industry.

In recent years, the FTC has issued guidance for businesses that use algorithmic systems, including a 2016 report and a 2020 blog post.

Congressional Proposals

Several bills have been proposed in recent years to mitigate algorithmic bias. They include the following:

  • Algorithmic Accountability Act (2019): The bill was introduced by Senators Cory Booker (D‑NJ), Ron Wyden (D‑OR), and Representative Yvette Clarke (D‑NY). According to Senator Wyden, the bill would have required “companies to study the algorithms they use, identify bias in these systems and fix any discrimination or bias they find.”
  • Consumer Online Privacy Rights Act (2019): The bill, sponsored by Senator Maria Cantwell (D‑WA), would have established new requirements for companies that use algorithmic decision-making to process data.
  • Justice in Policing Act (2020): The bill was sponsored by then-Senator Kamala Harris (D‑CA), Senator Cory Booker (D‑NJ), and Representatives Karen Bass (D‑CA) and Jerrold Nadler (D‑NY). It would have been the first federal restriction on facial recognition technology.
  • Facial Recognition and Biometric Technology Moratorium Act (2020): The bill was sponsored by Senators Edward Markey (D‑MA) and Jeff Merkley (D‑OR), along with Representatives Pramila Jayapal (D‑WA) and Ayanna Pressley (D‑MA). It would have established a five-year moratorium on police use of facial recognition technology and is set to be reintroduced this year.

State Proposals

Lawmakers in Illinois, New Jersey, Washington, and California have also proposed bills to regulate algorithmic systems.

In 2017, New York City passed Local Law 49, the first law in the United States to tackle algorithmic bias and discrimination. Local Law 49 established the Automated Decision Systems Task Force to monitor city use of algorithmic decision-making and provide recommendations. The twenty-member task force has faced criticism from legal experts for failing to fully define “automated decision system.” Members have also blamed city officials for denying them access to critical information needed to make recommendations. New York University professor Julia Stoyanovich told The Verge that she “would not have signed for this task force if [she] knew [it] was just a formal sort of exercise.”

Jurisdictions across the country have banned government use of facial recognition. In Illinois, a 2008 law, the Biometric Information Privacy Act (BIPA), has been used to curtail facial recognition. Illinois residents sued Clearview AI under BIPA for building its facial recognition software from billions of social media photos scraped without permission. The company subsequently canceled all contracts in the state and promised to work exclusively with government entities.

Addressing Algorithmic Bias

Policymakers can take steps to help locate and mitigate algorithmic bias. Since the effects of such discrimination vary across sectors, ethical and regulatory responses should be proportionate to the impact.

AI has the potential to positively impact personal finance and employment. At this juncture, however, companies face competing legal obligations that make bias even harder to detect. Laws such as the Civil Rights Act of 1964 and the Equal Credit Opportunity Act incentivize companies to ignore protected class characteristics such as age, race, and sex, even though this information would improve an algorithm’s accuracy and is necessary to measure bias in the first place. Such requirements were written with human bias in mind and do not effectively attenuate algorithmic bias. To address discrimination in AI-powered tools, policymakers should reevaluate existing regulations and enable companies to train and audit their algorithms with full and complete information.
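One widely used audit is the “four-fifths rule,” which compares selection rates across groups and therefore requires knowing group membership. The sketch below uses fabricated decision records purely for illustration.

```python
# A sketch of a disparate-impact check (the "four-fifths rule"):
# compare selection rates across groups. These records are fabricated;
# a real audit would use actual decision logs.

decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def selection_rate(group: str) -> float:
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

# A ratio below 0.8 is the conventional flag for possible disparate impact.
ratio = selection_rate("B") / selection_rate("A")
print(f"selection-rate ratio: {ratio:.2f}")  # 0.50 in this toy data
```

Note that this check cannot be run at all without the group labels that current rules push companies to discard.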

Algorithmic bias in law enforcement facial recognition tools presents greater challenges. Facial recognition technology is trained on billions of photos and videos, often repurposed and used without consent. The technology is also relatively immature: it emerged as the software of choice for police only within the past few years, despite major flaws. (Accuracy depends on the quality of the image as well as on the subject’s own characteristics.) Last year, police wrongfully arrested three men after facial recognition software misidentified them. Given the significant risks posed by facial analysis tools, more transparency and oversight are needed to prevent abuse and civil liberties violations.
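One concrete form such oversight could take is routine measurement of error rates by demographic group, in the spirit of NIST's face recognition vendor tests. In the sketch below, the groups, similarity scores, and threshold are all fabricated for illustration.

```python
# Illustrative only: checking whether a face matcher's false-match rate
# differs across demographic groups. All data here is fabricated.

THRESHOLD = 0.6  # similarity at or above this counts as a "match"

# (group, similarity score, whether the pair truly is the same person)
trials = [
    ("group_1", 0.72, False), ("group_1", 0.41, False), ("group_1", 0.88, True),
    ("group_2", 0.65, False), ("group_2", 0.67, False), ("group_2", 0.91, True),
]

for group in ("group_1", "group_2"):
    impostor_scores = [s for g, s, same in trials if g == group and not same]
    false_matches = sum(s >= THRESHOLD for s in impostor_scores)
    print(group, "false-match rate:", false_matches / len(impostor_scores))
```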

The bias issues associated with facial recognition systems have prompted calls to ban police use of the technology, and lawmakers in a handful of jurisdictions have implemented such bans. While the potential abuse and misuse of facial recognition raises significant civil liberties concerns, outright bans are not the best policy.

Rather than ban facial recognition, lawmakers should consider conditioning police use of the technology on a set of safeguards that protect civil liberties, including prohibitions on real-time capability, accuracy requirements, and restrictions on what data can be queried. Currently, no police department in the United States has implemented these policies. Although the vast majority of policing in the United States is handled at the state and local level, the federal government can nonetheless condition grants on best practices related to surveillance technology, including facial recognition systems.