Rethinking Safety Certification for Autonomous Systems


February 19, 2026

Through a Multidisciplinary University Research Initiative (MURI) project led by UT Austin, researchers are examining the foundations needed to make dynamic certification practical. The work brings together expertise in controls, formal methods, machine learning, human factors, robotics, and systems engineering from six universities.

Rather than proposing a single tool or standard, the research focuses on three interconnected directions:

  • Specification and alignment: Developing methods to capture safety expectations from multiple stakeholders, including designers, operators, and users, and to reason about how those expectations change over time.
  • Verification and learning: Creating verification techniques that interact with learning-based components, enabling systems to adapt while maintaining quantifiable safety margins.
  • Extrapolation and adaptation: Understanding how safety guarantees degrade outside previously tested conditions and how systems can reason about and respond to unforeseen situations. 
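The second direction, verification that interacts with learning-based components while preserving quantifiable margins, is often realized in the literature as a runtime "safety shield" that filters a learned controller's actions. The sketch below is a hypothetical illustration under simple assumptions (a 1D vehicle, a known maximum braking rate, and stopping distance as the safety margin); it is not a method proposed by the project.

```python
# Hypothetical sketch: a runtime safety shield that overrides a learned
# controller's action whenever the resulting state could no longer brake to
# a stop before a fixed boundary. All constants are illustrative.

A_MAX = 2.0     # assumed maximum braking deceleration (m/s^2)
DT = 0.1        # control period (s)
X_WALL = 100.0  # position of the boundary the vehicle must not cross (m)

def stopping_distance(v):
    """Distance needed to brake from speed v to a stop at A_MAX."""
    return v * v / (2.0 * A_MAX)

def shield(x, v, a_proposed):
    """Pass a_proposed through if, after one step, the vehicle can still
    stop before X_WALL; otherwise command maximum braking."""
    v_next = max(0.0, v + a_proposed * DT)
    x_next = x + v * DT
    if x_next + stopping_distance(v_next) <= X_WALL:
        return a_proposed   # learned action is certified safe this step
    return -A_MAX           # override: brake at the maximum rate
```

The learned component is free to adapt, because the quantifiable margin is enforced by the filter rather than baked into the policy, which is one way verification and learning can interact without re-certifying the whole system after every update.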

A central theme across these directions is managing the co-evolution of autonomous behavior, operational context, and human expectations of safety.

The effort also examines how developers, regulators, and operators might interact more continuously to ensure safety. One motivating analogy is the staged evaluation used in clinical trials, where systems are initially evaluated in limited contexts and gradually expanded as evidence accumulates, without presupposing any specific regulatory framework.
