NIST Risk Management Framework and Guidance Addressing Bias in AI – The National Law Review

As more companies develop and/or deploy artificial intelligence (AI), it is important to consider risk management and best practices for addressing issues such as bias in AI. The National Institute of Standards and Technology (NIST) recently published a draft of its AI Risk Management Framework (Framework) and Guidance on Countering Bias in AI (Guidance). The voluntary framework addresses risks in the design, development, use, and evaluation of AI systems. The guidance provides considerations for developing and using AI in a trustworthy and responsible manner, including, in particular, proposed governance processes to combat bias.

Who should pay attention?

The framework and guidance are useful to anyone designing, developing, using, or evaluating AI technologies. The language is designed to be understandable to a wide audience, including senior executives and those who are not AI experts. At the same time, both documents contain technical depth, so they will also be useful to practitioners. The framework is designed to scale to organizations of any size, public or private, across sectors, as well as to national and international organizations.

What is NIST, and what are these publications?

NIST is part of the US Department of Commerce and was founded in 1901. In 2020, Congress directed NIST to develop a framework for AI risk management. Elham Tabassi, chief of staff at NIST’s Information Technology Laboratory and coordinator of the agency’s AI work, says: “We developed this blueprint with extensive input from both the private and public sectors, knowing how rapidly AI technologies are being developed and deployed and how much there is to learn about the associated benefits and risks.” The framework considers approaches to developing trustworthiness characteristics including accuracy, explainability and interpretability, reliability, privacy, robustness, security, and mitigation of unintended and/or malicious uses.

In summary, the framework includes the following points:

  1. Technical characteristics, socio-technical characteristics, and guiding principles;

  2. Governance, including risk mapping, measurement, and management; and

  3. A practical guide.

The guidance addresses three important points:

  1. Describes the stakes and challenges of bias in artificial intelligence and provides examples of how and why it can undermine public trust;

  2. Identifies three categories of bias in AI – systemic, statistical, and human – and describes how and where they contribute to harm; and

  3. Describes three major challenges to mitigating bias – datasets, testing and evaluation, and human factors – and offers preliminary guidance for overcoming them.

Why is AI governance important?

Governance processes affect almost every aspect of managing AI. Governance encompasses administrative procedures and standard operating policies, but it also extends to the organizational processes and cultural competencies that directly affect those involved in training, deploying, and monitoring AI systems. Monitoring systems and channels of recourse help end users flag incorrect or potentially harmful results and hold the responsible parties accountable. It is also important that written policies and procedures address key roles, responsibilities, and processes at each stage of the AI lifecycle. Clear documentation supports the systematic implementation of policies and procedures and standardizes how an organization manages bias.

AI and Bias

The harmful effects of AI are not limited to individuals or organizations; they can quickly spread to society at large. The scale and speed of potential AI harms make them a unique risk. NIST highlights that machine learning processes and the data used to train AI software are susceptible to both human and systemic bias, and that bias influences how AI is developed and used. Systemic biases can emanate from institutions that disadvantage certain groups, for example through discrimination against people on the basis of race. Human bias can arise when people draw biased conclusions or inferences from data. When human, systemic, and computational biases combine, they can amplify one another and lead to serious harm. To address these issues, the NIST authors advocate a “sociotechnical” approach to bias in AI. This approach combines sociology and technology, acknowledging that AI operates in a larger social context and that efforts to eliminate bias must therefore go beyond purely technical measures.
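To make the notion of statistical (computational) bias concrete, the following minimal Python sketch is illustrative only – the toy data, the function name, and the choice of metric are assumptions, not part of the NIST framework or guidance. It computes one simple fairness measure, the gap in favorable-outcome rates between demographic groups, of the kind an organization might monitor during testing and evaluation:

    # Illustrative sketch only: toy data and the metric choice are assumptions,
    # not drawn from the NIST framework or guidance.
    from collections import defaultdict

    def demographic_parity_gap(groups, predictions):
        """Largest difference in favorable-prediction rates across groups."""
        totals, positives = defaultdict(int), defaultdict(int)
        for group, pred in zip(groups, predictions):
            totals[group] += 1
            positives[group] += pred
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    # Toy predictions: 1 = model recommends a favorable outcome.
    groups = ["A", "A", "A", "B", "B", "B"]
    preds = [1, 1, 0, 1, 0, 0]

    gap, rates = demographic_parity_gap(groups, preds)
    print(rates)               # roughly {'A': 0.67, 'B': 0.33}
    print(f"gap = {gap:.2f}")  # 0.33 - a large gap can signal statistical bias

A metric like this captures only the statistical dimension; as NIST emphasizes, systemic and human biases call for the broader sociotechnical measures described above.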

AI and State Privacy Laws

Upcoming state privacy laws also address AI activity. Beginning in 2023, comprehensive privacy laws in California, Virginia, and Colorado will govern AI, profiling, and other forms of automated decision-making, including a right for consumers to opt out of certain processing of their personal information by AI and similar processes. Organizations should also prepare to provide information about the logic of automated decision-making processes in response to access requests.

What’s next?

NIST is accepting public comments on the AI framework through April 29 and is hosting a public workshop on March 29–31. NIST’s request for comments on the framework, with more information, is available here; the guidance is available here. NIST is also planning a series of public workshops over the coming months aimed at producing a technical report on addressing AI bias and connecting the guidance to the framework. More details about the workshops will follow shortly. A second draft of the framework, incorporating the comments received by April 29, is expected this summer or fall.

