Counterexample-Guided Synthesis of Perception Models and Control

Shromona Ghosh, Yash Vardhan Pant, Hadi Ravanbakhsh, and Sanjit A. Seshia. Counterexample-Guided Synthesis of Perception Models and Control. In American Control Conference (ACC), pp. 3447–3454, IEEE, 2021.

Download

[pdf] 

Abstract

Recent advances in learning-based perception systems have led to drastic improvements in the performance of robotic systems like autonomous vehicles and surgical robots. These perception systems, however, are hard to analyze, and errors in them can propagate to cause catastrophic failures. In this paper, we consider the problem of synthesizing safe and robust controllers for robotic systems that rely on complex perception modules for feedback. We propose a counterexample-guided synthesis framework that iteratively builds simple surrogate models of the complex perception module and enables us to find safe control policies. The framework uses a falsifier to find counterexamples, i.e., traces of the system that violate a safety property, and extracts from them the information needed to efficiently model the perception module and the errors in it. These models are then used to synthesize controllers that are robust to perception errors. If the resulting policy is not safe, we gather new counterexamples. By repeating this process, we eventually find a controller that keeps the system safe even when perception fails. We demonstrate our framework on two scenarios in simulation, namely lane keeping and automatic braking, and show that it generates controllers that are safe, as well as a simpler model of the deep neural network-based perception system that provides meaningful insight into its operation.
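
To make the structure of the loop concrete, below is a minimal, self-contained toy sketch of the counterexample-guided iteration the abstract describes, instantiated on an invented one-dimensional automatic-braking example. The dynamics, the noise model, the surrogate (a worst-case perception error bound), the synthesis rule, and the sampling-based falsifier are all illustrative assumptions for exposition; they are not the paper's models or algorithms.

    import random

    # Toy instantiation of the counterexample-guided loop (all models invented).
    DECEL = 5.0   # braking deceleration (m/s^2), assumed
    DT = 0.05     # simulation time step (s)

    def perceive(d):
        """Stand-in 'complex' perception: distance estimate with range-dependent noise."""
        return d + random.uniform(-0.15, 0.15) * d

    def simulate(threshold, v0, d0=50.0):
        """Run one episode of the closed loop; return the trace of
        (true_distance, perceived_distance) pairs and a safety flag."""
        d, v, trace = d0, v0, []
        while v > 0 and d > 0:
            d_hat = perceive(d)
            trace.append((d, d_hat))
            if d_hat <= threshold:            # controller: brake below threshold
                v = max(0.0, v - DECEL * DT)
            d -= v * DT
        return trace, d > 0                   # safe iff stopped short of the obstacle

    def falsify(threshold, trials=500):
        """Search (here, by random sampling) for a trace violating safety."""
        for _ in range(trials):
            trace, safe = simulate(threshold, v0=random.uniform(5.0, 20.0))
            if not safe:
                return trace
        return None

    def fit_error_bound(counterexamples):
        """Surrogate perception-error model: worst estimation error seen so far."""
        errors = [abs(d - d_hat) for tr in counterexamples for d, d_hat in tr]
        return max(errors, default=0.0)

    def synthesize(error_bound, v_max=20.0):
        """Braking threshold robust to the modeled error: worst-case stopping
        distance padded by the perception error bound."""
        return v_max ** 2 / (2 * DECEL) + error_bound

    def cegis(max_iters=20):
        counterexamples = []
        for i in range(max_iters):
            e = fit_error_bound(counterexamples)       # 1. model perception errors
            threshold = synthesize(e)                  # 2. synthesize robust controller
            cex = falsify(threshold)                   # 3. look for a counterexample
            if cex is None:
                print(f"iter {i}: safe; error bound {e:.2f}, threshold {threshold:.2f}")
                return threshold
            counterexamples.append(cex)                # 4. refine with the new trace
        raise RuntimeError("no safe controller found within iteration budget")

    if __name__ == "__main__":
        cegis()

In the paper's setting the surrogate, falsifier, and synthesis step are far richer; the sketch shows only the skeleton of the loop: fit a surrogate with its error model from counterexamples, synthesize a controller robust to the modeled errors, falsify, and repeat until no violating trace is found.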

BibTeX

@inproceedings{ghosh-acc21,
  author    = {Shromona Ghosh and
               Yash Vardhan Pant and
               Hadi Ravanbakhsh and
               Sanjit A. Seshia},
  title     = {Counterexample-Guided Synthesis of Perception Models and Control},
  booktitle = {American Control Conference (ACC)},
  pages     = {3447--3454},
  publisher = {{IEEE}},
  year      = {2021},
  abstract  = {Recent advances in learning-based perception systems have led to drastic improvements in the performance of robotic systems like autonomous vehicles and surgical robots. These perception systems, however, are hard to analyze, and errors in them can propagate to cause catastrophic failures. In this paper, we consider the problem of synthesizing safe and robust controllers for robotic systems that rely on complex perception modules for feedback. We propose a counterexample-guided synthesis framework that iteratively builds simple surrogate models of the complex perception module and enables us to find safe control policies. The framework uses a falsifier to find counterexamples, i.e., traces of the system that violate a safety property, and extracts from them the information needed to efficiently model the perception module and the errors in it. These models are then used to synthesize controllers that are robust to perception errors. If the resulting policy is not safe, we gather new counterexamples. By repeating this process, we eventually find a controller that keeps the system safe even when perception fails. We demonstrate our framework on two scenarios in simulation, namely lane keeping and automatic braking, and show that it generates controllers that are safe, as well as a simpler model of the deep neural network-based perception system that provides meaningful insight into its operation.},
}
