Agencies Issue Joint Statement Against Discrimination and Bias in Automated Systems

On April 25, 2023, four federal agencies issued a joint statement reiterating their commitment to enforcing fairness, equality, and justice as the emerging use of automated systems, including those sometimes marketed as “artificial intelligence” or “AI,” increasingly affects civil rights, fair competition, consumer protection, and equal opportunity. The Civil Rights Division of the United States Department of Justice, the Consumer Financial Protection Bureau, the Federal Trade Commission, and the U.S. Equal Employment Opportunity Commission released the joint statement outlining their enforcement efforts to protect the public from bias in automated systems and artificial intelligence.

According to the agencies, automated systems offer the promise of advancement but also have the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes. Potential discrimination in automated systems may arise from several sources, including problems with:

  • Data and Datasets - Automated system outcomes can be skewed by unrepresentative or imbalanced datasets, datasets that incorporate historical bias, or datasets that contain other types of errors.

  • Model Opacity and Access - “Black box” systems often make it difficult for developers, businesses, and individuals to know whether an automated system is fair.

  • Design and Use - Developers do not always understand or account for the contexts in which private or public entities will use their automated systems, so systems may be designed on the basis of flawed assumptions.

The CFPB, for its part, has taken steps to protect consumers, including:

The agencies reiterated their intent to monitor the development and use of automated systems and promote responsible innovation.

Read the CFPB’s press release here.

The joint statement can be found here.

Treasury Announces 2023 De-Risking Strategy

FinCEN Publishes Its 2022 Year in Review