NIST Releases Concept Paper Analyzing AI Risk Management Framework
On December 14, 2021, the National Institute of Standards and Technology ("NIST") released a concept paper addressing the Artificial Intelligence Risk Management Framework ("AI RMF"), incorporating comments and ideas from a NIST Request for Information and a workshop on the issue held in October 2021.
Intended Audience
The intended primary audiences of the concept paper are:
1) people who are responsible for designing or developing AI systems;
2) people who are responsible for using or deploying AI systems;
3) people who are responsible for evaluating or governing AI systems; and
4) people who experience potential harm or inequities from risks that are newly introduced or amplified by AI systems.
Framework Attributes
According to NIST, all AI RMF attributes should:
1) Be consensus-driven and developed and regularly updated through an open, transparent process. All stakeholders should have the opportunity to contribute to and comment on the AI RMF development.
2) Be clear. Use plain language that is understandable by a broad audience, including senior executives, government officials, NGO leadership, and, more broadly, those who are not AI professionals, while still providing sufficient technical depth to be useful to practitioners. The AI RMF should allow for communication of AI risks across an organization, with customers, and with the public at large.
3) Provide common language and understanding to manage AI risks. The AI RMF should provide a taxonomy, terminology, definitions, metrics, and characterizations for aspects of AI risk that are common and relevant across sectors.
4) Be easily usable. Enable organizations to manage AI risk through desired actions and outcomes. Be readily adaptable as part of an organization’s broader risk management strategy and processes.
5) Be appropriate for both technology-agnostic (horizontal) and context-specific (vertical) use cases, making it useful to a wide range of perspectives, sectors, and technology domains.
6) Be risk-based, outcome-focused, cost-effective, voluntary, and non-prescriptive. It should provide a catalog of outcomes and approaches to be used voluntarily, rather than a set of one-size-fits-all requirements.
7) Be consistent or aligned with other approaches to managing AI risks. The AI RMF should, when possible, take advantage of and foster greater awareness of existing standards, guidelines, best practices, methodologies, and tools for managing AI risks as well as illustrate the need for additional, improved resources. It should be law- and regulation-agnostic to support organizations' abilities to operate under applicable domestic and international legal or regulatory regimes.
8) Be a living document. The AI RMF should be capable of being readily updated as technology, understanding, and approaches to AI trustworthiness and uses of AI change and as stakeholders learn from implementing AI risk management generally and this framework, in particular.
Proposed Structure
The proposed structure for the AI RMF is composed of three components:
1) Core,
“The Core provides a granular set of activities and outcomes that enable an organizational dialogue about managing AI risk.”
The Core should include three elements:
I. functions,
“Functions organize AI risk management activities at their highest level to establish the context and enumerate, assess, treat, monitor, review, and report risk.”
II. categories, and
“Categories are the subdivisions of a function into groups of outcomes closely tied to programmatic needs and particular activities.”
III. subcategories.
“Subcategories further divide a category into specific outcomes of technical and/or management activities.”
2) Profiles, and
“Profiles enable users to prioritize AI-related activities and outcomes that best meet an organization’s values, mission, or business needs and risks.”
3) Implementation Tiers
“Implementation Tiers support decision-making and communication about the sufficiency of organizational processes and resources, including engineering tools and infrastructure and engineers with appropriate AI expertise, to manage AI risks deemed appropriate for the organization or situation and achieve outcomes and activities in the Profile(s).”
Public Comment
Feedback that NIST receives on its paper “will inform further development of this approach and the first draft of the AI RMF for public comment.” Feedback can be sent to AIframework@nist.gov before January 25, 2022.
NIST is requesting input on the following questions:
1) Is the approach described in this concept paper generally on the right track for the eventual AI RMF?
2) Are the scope and audience (users) of the AI RMF described appropriately?
3) Are AI risks framed appropriately?
4) Will the structure – consisting of Core (with functions, categories, and subcategories), Profiles, and Tiers – enable users to appropriately manage AI risks?
5) Will the proposed functions enable users to appropriately manage AI risks?
6) What, if anything, is missing?
If you have any questions or concerns about NIST's concept paper, please contact Kennedy Sutherland.