White House Releases AI Bill of Rights Blueprint

On October 4, 2022, the White House Office of Science and Technology Policy (“OSTP”) released its Blueprint for an AI Bill of Rights (“Blueprint”) to make “automated systems work for the American people.”

In the release, the White House stated, “[a]mong the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public.” These tools are often “used to limit our opportunities and prevent our access to critical resources or services[,]” by producing discriminatory or biased results and contributing to invasions of privacy.

The White House states that “[t]hese outcomes are deeply harmful—but they are not inevitable.” The Blueprint is intended to “protect[] all people from these threats—and uses technologies in ways that reinforce our highest values.”

The Blueprint identifies five principles to serve as a “handbook” “that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.”

Five Principles:

Safe and Effective Systems

Automated Systems should, among other things:

  • “be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system”

  • undergo testing before being released to the public, have risk identification and mitigation tools to prevent unsafe outcomes, and should be monitored on an ongoing basis to ensure they are operating as intended

  • be designed “to proactively protect you from harms stemming from unintended, yet foreseeable, uses or impacts of automated systems”

Algorithmic Discrimination Protections

Consumers should not “face discrimination” by algorithms contained in these automated systems, meaning “unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.”

The designers, developers, and deployers of these systems “should take proactive and continuous measures” such as:

  • equity assessments

  • “use of representative data and protection against proxies for demographic features”

  • ensuring that the design of these systems is accessible to people with disabilities

  • disparity testing and mitigative efforts before deployment and on a continuous basis

  • deploying oversight

Data Privacy

Algorithms should not subject consumers to “abusive data practices,” such as violations of privacy. According to the Blueprint, to protect against these potentially unlawful privacy practices, designers, developers, and deployers should:

  • engage in data collection in accordance with reasonable expectations;

  • minimize data collection practices such that only data that is “strictly necessary for the specific context is collected;”

  • comply with consumer preferences for the “collection, use, access, transfer, and deletion” of their personal information;

  • make design choices that help consumers exercise their rights, including selecting default privacy settings that favor consumer privacy;

  • present consent requests that are “brief, . . . understandable in plain language, and give [consumers] agency over data collection and the specific context of use;”

  • subject any surveillance technologies deployed by the system to “heightened oversight that includes at least pre-deployment assessment of their potential harms and scope limits to protect privacy and civil liberties.”

Notice and Explanation

Designers, developers, and deployers of automated systems should notify consumers that an automated system is being used, and consumers should understand how and why the system contributes to outcomes that affect them. This notice should be “generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible.”

Human Alternatives, Consideration, and Fallback

Consumers should be able to opt out of the use of automated systems “in favor of a human alternative” who can assist in remedying any issues incurred. This human consideration “should be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and should not impose an unreasonable burden on the public.”
