Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems

David Dalrymple, Joar Skalse, Yoshua Bengio, Stuart Russell, Max Tegmark, Sanjit Seshia, Steve Omohundro, Christian Szegedy, Ben Goldhaber, Nora Ammann, Alessandro Abate, Joe Halpern, Clark W. Barrett, Ding Zhao, Tan Zhi-Xuan, Jeannette Wing, and Joshua B. Tenenbaum. Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems. CoRR, abs/2405.06624, 2024.

Abstract

Ensuring that AI systems reliably and robustly avoid harmful or dangerous behaviours is a crucial challenge, especially for AI systems with a high degree of autonomy and general intelligence, or systems used in safety-critical contexts. In this paper, we will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI. The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees. This is achieved by the interplay of three core components: a world model (which provides a mathematical description of how the AI system affects the outside world), a safety specification (which is a mathematical description of what effects are acceptable), and a verifier (which provides an auditable proof certificate that the AI satisfies the safety specification relative to the world model). We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them. We also argue for the necessity of this approach to AI safety, and for the inadequacy of the main alternative approaches.
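
To make the interplay of the three components concrete, below is a minimal, hypothetical Python sketch of the architecture the abstract describes: a world model over a small finite state space, a probabilistic safety specification, and a toy verifier that emits a certificate. All names, signatures, and the one-step check are illustrative assumptions, not the paper's API or the authors' actual method.

# Hypothetical illustration of the three GS AI components (world model,
# safety specification, verifier); names and signatures are assumptions,
# not the paper's API.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class WorldModel:
    # Maps a (state, action) pair to a distribution over next states,
    # represented as {next_state: probability}.
    transition: Callable[[str, str], Dict[str, float]]

@dataclass
class SafetySpecification:
    # Predicate marking which states are acceptable, plus the minimum
    # probability with which acceptability must hold.
    is_acceptable: Callable[[str], bool]
    threshold: float

@dataclass
class Certificate:
    # Auditable summary of whether the policy meets the spec relative to
    # the world model, and with what worst-case probability.
    holds: bool
    worst_case_probability: float

def verify(policy: Callable[[str], str],
           model: WorldModel,
           spec: SafetySpecification,
           initial_states: List[str]) -> Certificate:
    # Toy verifier: exhaustive one-step check over a finite set of states.
    worst = 1.0
    for state in initial_states:
        action = policy(state)
        dist = model.transition(state, action)
        p_safe = sum(p for nxt, p in dist.items() if spec.is_acceptable(nxt))
        worst = min(worst, p_safe)
    return Certificate(holds=worst >= spec.threshold, worst_case_probability=worst)

# Example usage with a two-state toy world.
model = WorldModel(transition=lambda s, a: {"safe": 0.99, "unsafe": 0.01})
spec = SafetySpecification(is_acceptable=lambda s: s == "safe", threshold=0.95)
cert = verify(policy=lambda s: "noop", model=model, spec=spec, initial_states=["start"])
print(cert)  # Certificate(holds=True, worst_case_probability=0.99)

A real GS AI verifier would of course operate over far richer world models and produce machine-checkable proof certificates rather than a single numeric bound; the sketch only mirrors the division of responsibilities among the three components.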

BibTeX

@article{gsai-arxiv24,
  author       = {David Dalrymple and
                  Joar Skalse and
                  Yoshua Bengio and
                  Stuart Russell and
                  Max Tegmark and
                  Sanjit Seshia and
                  Steve Omohundro and
                  Christian Szegedy and
                  Ben Goldhaber and
                  Nora Ammann and
                  Alessandro Abate and
                  Joe Halpern and
                  Clark W. Barrett and
                  Ding Zhao and
                  Tan Zhi{-}Xuan and
                  Jeannette Wing and
                  Joshua B. Tenenbaum},
  title        = {Towards Guaranteed Safe {AI:} {A} Framework for Ensuring Robust and Reliable {AI} Systems},
  journal      = {CoRR},
  volume       = {abs/2405.06624},
  year         = {2024},
  abstract = {Ensuring that AI systems reliably and robustly avoid harmful or dangerous behaviours is a crucial challenge, especially for AI systems with a high degree of autonomy and general intelligence, or systems used in safety-critical contexts. In this paper, we will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI. The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees. This is achieved by the interplay of three core components: a world model (which provides a mathematical description of how the AI system affects the outside world), a safety specification (which is a mathematical description of what effects are acceptable), and a verifier (which provides an auditable proof certificate that the AI satisfies the safety specification relative to the world model). We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them. We also argue for the necessity of this approach to AI safety, and for the inadequacy of the main alternative approaches.},
}
