Formal Analysis of AI-Based Autonomy: From Modeling to Runtime Assurance
Hazem Torfah, Sebastian Junges, Daniel J. Fremont, and Sanjit A. Seshia. Formal Analysis of AI-Based Autonomy: From Modeling to Runtime Assurance. In 21st International Conference on Runtime Verification (RV), pp. 311–330, Lecture Notes in Computer Science 12974, Springer, 2021.
Abstract
Autonomous systems are increasingly deployed in safety-critical applications and increasingly rely on high-performance components based on artificial intelligence (AI) and machine learning (ML). Runtime monitors play an important role in raising the level of assurance in AI/ML-based autonomous systems by ensuring that the autonomous system stays safe within its operating environment. In this tutorial, we present VerifAI, an open-source toolkit for the formal design and analysis of systems that include AI/ML components. VerifAI supports a variety of use cases, including formal modeling of the autonomous system and its environment, automatic falsification of system-level specifications as well as other simulation-based verification and testing methods, automated diagnosis of errors, and automatic specification-driven parameter and component synthesis. In particular, we describe the use of VerifAI for generating runtime monitors that capture the safe operational environment of systems with AI/ML components. We illustrate the advantages and applicability of VerifAI in real-life applications using a case study from the domain of autonomous aviation.
BibTeX
@inproceedings{torfah-rv21,
  author    = {Hazem Torfah and Sebastian Junges and Daniel J. Fremont and Sanjit A. Seshia},
  title     = {Formal Analysis of {AI}-Based Autonomy: From Modeling to Runtime Assurance},
  booktitle = {21st International Conference on Runtime Verification (RV)},
  series    = {Lecture Notes in Computer Science},
  volume    = {12974},
  pages     = {311--330},
  publisher = {Springer},
  year      = {2021},
}