Using the NIST AI Risk Management Framework // Applied AI Meetup October 2023
TLDR
In this session, research scientist R. Schwarz from NIST's Information Technology Laboratory discusses the AI Risk Management Framework (AI RMF). The AI RMF is a voluntary, non-regulatory framework for managing AI risks with a rights-preserving approach. Schwarz covers the framework's development, its purpose of aligning AI practice with societal values, and its practical application. She also highlights the framework's role in fostering a culture of responsible AI practice, the challenges of AI risk management, and the contributions of diverse AI actors across the AI lifecycle. The talk concludes with an invitation to join a public working group focused on generative AI, emphasizing the collective effort in shaping trustworthy AI technologies.
Takeaways
- 📚 R. Schwarz is a research scientist at NIST, a federal agency under the Department of Commerce, focusing on advancing technologies and developing standards to make technology more trustworthy.
- 🛡️ NIST's AI risk management framework (AI RMF) is a voluntary, non-regulatory approach to managing AI risks, emphasizing the protection of individual rights and aligning with societal values.
- 📈 The framework's 'rights-preserving approach' means that the development and use of AI should prioritize individual rights alongside shared societal values.
- 🔧 The framework is designed to be practical and adaptable to the evolving tech landscape, with an expected stability of 3-5 years before updates.
- 🔄 The AI RMF Playbook is a continuously updated companion piece to the framework, providing practical guidance and recommendations for organizations.
- 🔍 NIST defines AI systems broadly, including generative AI, and focuses on the development of standards and metrics for trustworthy AI characteristics.
- 🤖 The framework emphasizes the importance of a multi-disciplinary approach to AI, involving various 'AI actors' across the AI lifecycle to anticipate harms and impacts.
- 🔑 The AI RMF identifies seven trustworthy characteristics for AI systems: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.
- 🏢 The framework is not just for tech companies; it's intended for any organization that deals with AI, encouraging a culture of responsible AI practice and use.
- 🌐 NIST is actively involved in international standards coordination for AI, working with various stakeholders to identify and develop critical AI standards.
Q & A
What is the primary mission of NIST?
-NIST's mission is to promote U.S. innovation and industrial competitiveness by advancing critical and emerging technologies and developing standards and metrics that strengthen measurement science and make technology more secure, usable, interoperable, robust, and reliable.
What is the AI Risk Management Framework (AI RMF)?
-The AI Risk Management Framework is a voluntary, non-regulatory framework for managing risks posed by AI technology. It adopts a rights-preserving approach, prioritizing the protection of individual rights in the development and use of AI.
How often is the AI RMF Playbook updated?
-The AI RMF Playbook is a living companion document, revised approximately twice a year to incorporate feedback and new developments in the field of AI.
What are the seven trustworthy characteristics of AI systems mentioned in the framework?
-The seven trustworthy characteristics are: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.
How does the AI RMF define risk?
-Risk is defined as the probability of an event occurring and the magnitude or degree of consequences that would result from that event, which can be positive or negative, representing opportunities or threats.
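This definition can be made concrete with a minimal scoring sketch. The function below is illustrative only, not part of the AI RMF: the function name and the consequence scale are assumptions introduced here.

```python
# Illustrative sketch of the AI RMF's risk definition:
# risk = probability of an event x magnitude of its consequences.
# The 1-5 magnitude scale is an assumption; a signed magnitude lets the
# same formula express opportunities (positive) and threats (negative).

def risk_score(probability: float, magnitude: float) -> float:
    """Composite risk: event likelihood (0-1) times consequence magnitude."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    return probability * magnitude

# A likely but mild harm can outrank a rare but larger one:
print(risk_score(0.8, 2))  # 1.6
print(risk_score(0.1, 4))  # 0.4
```

In practice, as the talk notes, neither term is easy to estimate for AI systems deployed in real-world contexts, which is part of what makes AI risk measurement hard.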
What is the significance of the AI life cycle in the AI RMF?
-The AI life cycle is significant as it highlights the iterative nature of AI development and deployment. It emphasizes the importance of test and evaluation, validation, and verification throughout the life cycle, ensuring that AI systems are designed, developed, and deployed with a focus on trustworthiness and responsible use.
What are the four functions of the AI RMF core?
-The four functions of the AI RMF core are Govern, Map, Measure, and Manage. These functions organize risk management activities, enabling organizations to move from principles to practice.
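The grouping of activities under the four core functions can be sketched as a simple data structure. The activity names below are invented examples for illustration, not the framework's official categories or subcategories.

```python
# Illustrative sketch only: organizing risk-management activities under
# the AI RMF core's four functions (Govern, Map, Measure, Manage).
# Activity names are assumptions, not official AI RMF subcategories.
from dataclasses import dataclass

@dataclass
class CoreFunction:
    name: str
    activities: list[str]

core = {
    f.name: f
    for f in [
        CoreFunction("Govern", ["set risk-tolerance policy", "assign accountability"]),
        CoreFunction("Map", ["document intended use and context", "identify affected AI actors"]),
        CoreFunction("Measure", ["track trustworthy characteristics", "test in deployment context"]),
        CoreFunction("Manage", ["prioritize and treat risks", "plan incident response"]),
    ]
}

for name, fn in core.items():
    print(f"{name}: {', '.join(fn.activities)}")
```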
How does the AI RMF address the challenges of AI risk management?
-The AI RMF addresses challenges by focusing on risk identification, mapping, and measuring, and then managing those risks in line with established policies. It emphasizes the need for interdisciplinary approaches, testing and evaluation in real-world contexts, and communication across the AI life cycle.
What is the role of the AI RMF in fostering a culture of responsible AI practice and use?
-The AI RMF fosters a culture of responsible AI practice and use by providing a framework that helps organizations align their internal culture and practices with intended aims and shared societal values. It encourages responsible practices throughout the AI life cycle and emphasizes the importance of governance in managing risks.
How can organizations get involved in the development of profiles under the AI RMF?
-Organizations can get involved in the development of profiles by participating in public working groups, sharing real-life examples of how they implement the framework, and collaborating with others in their industry to build guidance that can be used across sectors.
Outlines
📚 Introduction to NIST and AI Risk Management Framework
R. Schwarz, a research scientist at the National Institute of Standards and Technology (NIST), introduces the organization's role in promoting U.S. innovation and competitiveness. NIST is known for developing standards and metrics to ensure technology is secure and trustworthy. Schwarz's work focuses on the Trustworthy and Responsible AI team, where she contributed to the AI Risk Management Framework (AI RMF) and its companion playbook. The AI RMF, developed under a Congressional mandate, is a voluntary and non-regulatory framework aimed at managing AI risks with a rights-preserving approach, emphasizing the protection of individual rights. The framework is designed to adapt to the evolving tech landscape and includes the development of an AI RMF Playbook, which is continuously updated based on feedback.
🔍 Deep Dive into AI Risk Management Framework
The AI RMF is a dynamic tool designed to help organizations manage the risks associated with AI technologies. It adopts a rights-preserving approach, ensuring individual rights are prioritized in AI development and use. The framework is not a static checklist but a guide to align organizational practices with societal values. It encourages a practical approach that evolves with AI technology. The AI RMF has been developed through extensive public feedback, involving over 240 organizations from various sectors. The framework operationalizes seven trustworthy characteristics of AI systems, which go beyond accuracy to include safety, security, resilience, and fairness, among others. It also addresses the challenges of AI risk management, such as measuring risk in real-world contexts and setting risk tolerance levels that align with organizational objectives and legal requirements.
🌐 AI Life Cycle and the Importance of AI Actors
The AI RMF incorporates an AI life cycle approach, emphasizing the iterative nature of AI development and the importance of continuous test, evaluation, validation, and verification. The framework adapts the life cycle from the OECD, focusing on socio-technical dimensions and the involvement of various AI actors. These actors include not only the people directly involved in the AI life cycle but also end-users, civil society organizations, and researchers. The framework stresses the shared responsibility of all AI actors in designing, developing, and deploying trustworthy AI systems. It also highlights the need for external voices to be included from the outset to ensure a comprehensive consideration of potential impacts.
🛠️ The AI RMF Core: Functions and Application
The core of the AI RMF is structured around four functions: Govern, Map, Measure, and Manage. These functions provide a systematic approach for organizations to move from principles to practice in AI risk management. The Govern function focuses on fostering a risk culture within an organization, setting policies and procedures that align with societal values and legal requirements. The Map function establishes the context for framing AI-related risks, ensuring organizations have the necessary expertise to identify and evaluate contextual factors. The Measure function involves setting up objective and scalable measures to track the trustworthy characteristics of AI systems. The Manage function is about allocating resources to treat identified risks, responding to AI system failures, and communicating negative impacts.
🔗 AI RMF Profiles and Future Directions
Profiles are a key component of the AI RMF, allowing organizations to share real-life examples of implementing the framework. These profiles can be industry-specific or cross-sectoral, providing guidance that can be used within a company or across an industry. The AI RMF also includes a roadmap for future activities, prioritizing the alignment with international standards, expanding test and evaluation efforts, and developing the NIST Trustworthy and Responsible AI Resource Center. This resource center serves as a hub for foundational content, technical documents, and community engagement.
🤖 Addressing Unintended Consequences and Generative AI
The AI RMF is designed to address unintended consequences and emerging properties in AI systems, particularly in generative AI. A public working group with over 2,600 participants is focused on developing best practices for managing generative AI risks. The framework encourages interdisciplinary approaches and testing in real-world contexts to identify and manage unforeseen risks. It also includes mechanisms for decommissioning systems and managing incidents swiftly when necessary.
🏢 Organizational Adoption and Incentives for the AI RMF
The voluntary nature of the AI RMF provides organizations with the flexibility to adopt as much of the framework as they find beneficial. The development of the framework involved extensive input from various stakeholders, including private industry, academia, and government agencies. While there may be challenges for small and medium-sized organizations, the framework aims to provide a culture change that can lead to better risk management and cost savings. The AI RMF is applicable across various sectors, including research, and its adoption can be a strategic decision for organizations to ensure responsible AI practices.
Keywords
💡NIST
💡AI Risk Management Framework (AI RMF)
💡Trustworthiness
💡Socio-Technical Approach
💡Governance
💡Risk Tolerance
💡AI Life Cycle
💡Generative AI
💡Responsible AI
💡Interdisciplinarity
💡Unforeseen Risks
Highlights
Introduction of R. Schwarz, a research scientist at NIST focusing on trustworthy and responsible AI.
NIST's role in advancing technologies and developing standards to ensure technology trustworthiness.
The AI Risk Management Framework (AI RMF) is a voluntary, non-regulatory approach to managing AI risks.
AI RMF adopts a rights-preserving approach, prioritizing individual rights in AI development and use.
The framework is designed to adapt to the evolving tech landscape and is not a static checklist.
AI RMF was developed with broad feedback from over 240 organizations across various sectors.
Key terminology defined in the framework, including the concept of AI systems and their scope.
Risk defined as the probability of an event and its potential consequences, which can be positive or negative.
The seven trustworthy characteristics of AI systems outlined in the framework.
AI risk management is complex due to the difficulty in measuring risk and varying risk tolerance.
The AI life cycle is iterative and includes socio-technical approaches to risk management.
AI actors across the life cycle share responsibility for designing, developing, and deploying trustworthy AI systems.
The AI RMF Core's four functions: Govern, Map, Measure, and Manage, facilitate the transition from principles to practice.
The AI RMF Playbook provides practical recommendations and references for implementing the framework.
Profiles are used to share real-life examples of implementing the framework in various sectors.
NIST's road map for the future includes international standards alignment and expanded test and evaluation efforts.
The NIST Trustworthy and Responsible AI Resource Center serves as a hub for AI risk management resources.
Engagement with the generative AI public working group, which has over 2,600 participants from diverse backgrounds.