Learning Monitorable Operational Design Domains for Assured Autonomy

Hazem Torfah, Carol Xie, Sebastian Junges, Marcell Vazquez-Chanlatte, and Sanjit A. Seshia. Learning Monitorable Operational Design Domains for Assured Autonomy. In Proceedings of the International Symposium on Automated Technology for Verification and Analysis (ATVA), October 2022.

Download

[pdf] 

Abstract

AI-based autonomous systems increasingly rely on machine learning (ML) components to perform a variety of complex tasks in perception, prediction, and control. The use of ML components is projected to grow, and with it the concern about using these components in systems that operate in safety-critical settings. To guarantee the safe operation of autonomous systems, it is important to run an ML component in its operational design domain (ODD), i.e., the conditions under which using the component does not endanger the safety of the system. Building safe and reliable autonomous systems that may use machine-learning-based components therefore calls for automated techniques to systematically capture the ODD of systems. In this paper, we present a framework for learning runtime monitors that capture the ODDs of black-box systems. A runtime monitor of an ODD predicts, based on a sequence of monitorable observations, whether the system is about to exit the ODD. We particularly investigate the learning of optimal monitors based on counterexample-guided refinement and conformance testing. We evaluate the applicability of our approach on a case study from the domain of autonomous driving.
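
To make the interface described in the abstract concrete, the minimal Python sketch below illustrates the kind of object an ODD monitor is: it consumes a stream of monitorable observations and flags an imminent ODD exit, and it is learned in a counterexample-guided refinement loop with conformance testing. All names, signatures, and the training and testing callables are hypothetical placeholders introduced for illustration; they are not the paper's implementation.

# Illustrative sketch only; every identifier below is a hypothetical placeholder
# and is not taken from the paper or its artifact.

from dataclasses import dataclass, field
from typing import Callable, List, Optional, Sequence, Tuple

Observation = Tuple[float, ...]        # one monitorable observation (e.g., sensor features)
Trace = Sequence[Observation]          # a finite sequence of observations
LabeledTrace = Tuple[Trace, bool]      # label: True iff the system exits the ODD after the trace


@dataclass
class OddMonitor:
    """Runtime monitor: consumes observations and predicts an imminent ODD exit."""
    predictor: Callable[[Trace], bool]
    history: List[Observation] = field(default_factory=list)

    def update(self, obs: Observation) -> bool:
        """Feed one observation; return True if the system is about to exit the ODD."""
        self.history.append(obs)
        return self.predictor(self.history)


def learn_monitor(
    train: Callable[[List[LabeledTrace]], Callable[[Trace], bool]],
    find_counterexample: Callable[[Callable[[Trace], bool]], Optional[LabeledTrace]],
    seed_data: List[LabeledTrace],
    max_rounds: int = 10,
) -> OddMonitor:
    """Counterexample-guided refinement loop (sketch): train a candidate predictor,
    run conformance testing against the black-box system, and refine the candidate
    with any mispredicted trace until none is found or the budget is exhausted."""
    data = list(seed_data)
    predictor = train(data)
    for _ in range(max_rounds):
        counterexample = find_counterexample(predictor)
        if counterexample is None:
            break                      # candidate passed conformance testing
        data.append(counterexample)    # refine with the counterexample trace
        predictor = train(data)
    return OddMonitor(predictor)

In this sketch, the user supplies the learner (train) and the conformance tester (find_counterexample); the loop structure is the part that mirrors the counterexample-guided refinement idea mentioned in the abstract.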

BibTeX

@InProceedings{torfah-atva22,
  author    = {Hazem Torfah and Carol Xie and Sebastian Junges and Marcell Vazquez-Chanlatte and Sanjit A. Seshia},
  title     = {Learning Monitorable Operational Design Domains for Assured Autonomy},
  booktitle = {Proceedings of the International Symposium on Automated Technology for Verification and Analysis (ATVA)},
  month     = {October},
  year      = {2022},
  OPTpages  = {20--34},
  abstract  = {AI-based autonomous systems increasingly rely on machine learning (ML) components to perform a variety of complex tasks in perception, prediction, and control. The use of ML components is projected to grow, and with it the concern about using these components in systems that operate in safety-critical settings. To guarantee the safe operation of autonomous systems, it is important to run an ML component in its operational design domain (ODD), i.e., the conditions under which using the component does not endanger the safety of the system. Building safe and reliable autonomous systems that may use machine-learning-based components therefore calls for automated techniques to systematically capture the ODD of systems. In this paper, we present a framework for learning runtime monitors that capture the ODDs of black-box systems. A runtime monitor of an ODD predicts, based on a sequence of monitorable observations, whether the system is about to exit the ODD. We particularly investigate the learning of optimal monitors based on counterexample-guided refinement and conformance testing. We evaluate the applicability of our approach on a case study from the domain of autonomous driving.},
}
