NIST Launches ARIA: A New Initiative to Advance AI Sociotechnical Testing and Evaluation

The National Institute of Standards and Technology (NIST) has launched a new program, Assessing Risks and Impacts of AI (ARIA), aimed at evaluating the societal implications of artificial intelligence (AI) systems. ARIA seeks to understand how AI interacts with society by testing systems in simulated real-world scenarios, providing an assessment of AI's functionality beyond controlled laboratory settings.

Key Objectives of ARIA

  1. Comprehensive Risk Assessment: ARIA will help organizations and individuals evaluate whether AI technologies are valid, reliable, safe, secure, private, and fair.
  2. Supporting Trustworthy AI: The program’s findings will bolster the U.S. AI Safety Institute’s efforts to develop trustworthy AI systems.
  3. Operationalizing AI Risk Management: Building on NIST’s AI Risk Management Framework, ARIA aims to create new methodologies and metrics for assessing AI’s real-world impacts.

Strategic Vision

U.S. Commerce Secretary Gina Raimondo highlighted the importance of testing AI in realistic scenarios to fully understand its societal impacts. Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio emphasized that ARIA is designed to meet real-world needs as AI technology continues to grow.

Reva Schwartz, the ARIA program lead at NIST’s Information Technology Lab, noted that measuring AI’s impact involves assessing the broader context, including human interactions with AI in everyday use. This holistic approach will provide a more complete understanding of AI’s net effects.
The ARIA program represents a significant step toward ensuring AI technologies are safe, secure, and trustworthy. By expanding the scope of AI evaluation to include real-world interactions, NIST is laying the groundwork for AI systems that integrate positively into societal contexts.