FTC Warns Against Over-Reliance on AI for Combatting Online Harm
On June 16, 2022, the Federal Trade Commission (“FTC”) issued a Report to Congress on Combatting Online Harms Through Innovation[1]. In its report, the FTC warned companies not to “over-rely” on artificial intelligence (“AI”).
The report is the result of a request from Congress in the 2021 Appropriations Act for the FTC to analyze whether and how AI “may be used to identify, remove, or take any other appropriate action necessary to address” online harms such as “content that is deceptive, fraudulent, manipulated, or illegal[.]”
The FTC concluded that the use of AI for these purposes could be counterproductive, as these tools can produce inaccurate, biased, or discriminatory results. Additionally, regardless of the strength of an organization’s cybersecurity measures, AI systems remain susceptible to cyber attacks, which can lead to the unauthorized disclosure of private information as well as data manipulation.
The FTC recommends that organizations using AI to combat online harms utilize the following approaches to minimize risks associated with AI systems:
Maintain human intervention and monitoring over the use and decisions of AI tools.
Ensure AI use is “meaningfully transparent” such that AI decisions are “explainable and contestable, especially when people’s rights are involved or when personal data is being collected or used.”
Establish accountability both for data practices and results. This includes “meaningful appeal and redress mechanisms for consumers and others . . . and the use of independent audits and algorithmic impact assessments (AIAs).”
Hold the data scientists who build AI tools, and their employers, responsible for both the inputs and the outputs of those systems. This responsibility includes retaining diverse teams and avoiding the use of “training data and classifications that reflect existing societal and historical inequities.”
Platforms should “use the range of interventions at their disposal, such as tools that slow the viral spread or otherwise limit the impact of certain harmful content.” These interventions include, among others, “limiting ad targeting options, downranking, labeling, or inserting interstitial pages with respect to problematic content.”
“[G]ive individuals the ability to use AI tools to limit their personal exposure to certain harmful or otherwise unwanted content.” Such tools include the ability to “block certain kinds of sensitive or harmful content” or the use of middleware, which is “a tailored, third-party content moderation system that would ride atop and filter the content shown on a given platform” (a rough illustration of this idea appears in the first sketch after this list).
Grant smaller platforms and other organizations access to AI tools that operate as intended to combat these online harms and do not produce unfair or biased results. Such access is currently uncommon, as effective AI tools tend to be “developed and deployed by several large technology companies as proprietary items.”
Utilize key complementary measures to AI systems, such as the use of “authentication tools to identify the source of particular content and whether it has been altered.”
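The report itself does not specify how middleware would be implemented. Purely as an illustration of the concept referenced above, the following minimal Python sketch (with a made-up Post type and a hypothetical user blocklist, neither drawn from the report) shows a third-party filter sitting between a platform’s feed and the user and suppressing items the user has chosen to avoid.

```python
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str


# Hypothetical user preferences: phrases this user has chosen to filter out.
BLOCKED_KEYWORDS = {"crypto giveaway", "miracle cure"}


def middleware_filter(feed: list[Post]) -> list[Post]:
    """Return only the posts that do not match the user's blocked keywords.

    A real middleware service would use far more sophisticated classifiers;
    this simply illustrates moderation layered on top of a platform's feed.
    """
    filtered = []
    for post in feed:
        if any(keyword in post.text.lower() for keyword in BLOCKED_KEYWORDS):
            continue  # suppress content the user opted to avoid
        filtered.append(post)
    return filtered


if __name__ == "__main__":
    feed = [
        Post("alice", "Check out this miracle cure for everything!"),
        Post("bob", "Here are my vacation photos."),
    ]
    for post in middleware_filter(feed):
        print(post.author, "-", post.text)
```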
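Likewise, the report does not prescribe particular authentication tools. As a rough sketch of the underlying idea, the example below assumes a publisher tags content with an HMAC code at publication time so that a recipient holding the same key can later check whether the content was altered; real provenance systems would typically rely on public-key digital signatures rather than a shared secret.

```python
import hashlib
import hmac

# Hypothetical shared secret held by the content publisher (an assumption for
# this sketch); production systems would use public-key signatures instead.
PUBLISHER_KEY = b"example-secret-key"


def tag_content(content: bytes) -> str:
    """Produce an authentication code for the publisher's original content."""
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()


def verify_content(content: bytes, tag: str) -> bool:
    """Check whether the content still matches the tag issued at publication."""
    expected = hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)


if __name__ == "__main__":
    original = b"Official statement: the event is scheduled for Friday."
    tag = tag_content(original)

    print(verify_content(original, tag))                            # True: unaltered
    print(verify_content(b"Official statement: cancelled.", tag))   # False: altered
```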