A Learning Based Approach to Control Synthesis of Markov Decision Processes for Linear Temporal Logic Specifications

Dorsa Sadigh, Eric S. Kim, Samuel Coogan, Shankar Sastry, and Sanjit A. Seshia. A Learning Based Approach to Control Synthesis of Markov Decision Processes for Linear Temporal Logic Specifications. In Proceedings of the 53rd IEEE Conference on Decision and Control (CDC), pp. 1091–1096, December 2014.

Abstract

We propose a method to synthesize a control policy for a Markov decision process (MDP) such that the resulting traces of the MDP satisfy a linear temporal logic (LTL) property. We construct a product MDP that incorporates a deterministic Rabin automaton generated from the desired LTL property. The reward function of the product MDP is defined from the acceptance condition of the Rabin automaton. This construction allows us to apply techniques from learning theory to the problem of synthesis for LTL specifications even when the transition probabilities are not known a priori. We prove that our method is guaranteed to find a controller that satisfies the LTL property with probability one if such a policy exists, and a case study in traffic control suggests empirically that our method produces reasonable control strategies even when the LTL property cannot be satisfied with probability one.
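
Illustrative Sketch

To make the construction concrete, below is a minimal, self-contained Python sketch of the pipeline the abstract describes: build the product of an MDP with a deterministic automaton, reward visits to accepting automaton states, and run tabular Q-learning against a simulator whose transition probabilities the learner never reads directly. Everything here is an assumption made for illustration: the toy two-state MDP, the automaton for "infinitely often p" (a Buchi condition, i.e. a one-pair Rabin condition with an empty avoid set), the +1 acceptance reward, and the helper names (product_step, automaton_step) are invented and are not the paper's case study, reward construction, or algorithm.

import random
from collections import defaultdict

# Toy MDP: the learner interacts with it only through sampling, mirroring
# the setting where transition probabilities are not known a priori.
mdp_states = ["s0", "s1"]
actions = ["a", "b"]
P = {  # P[(state, action)] = list of (next_state, probability)
    ("s0", "a"): [("s0", 0.5), ("s1", 0.5)],
    ("s0", "b"): [("s0", 1.0)],
    ("s1", "a"): [("s1", 1.0)],
    ("s1", "b"): [("s0", 0.9), ("s1", 0.1)],
}
label = {"s0": set(), "s1": {"p"}}  # atomic propositions holding per state

# Deterministic automaton for "infinitely often p": state q1 is visited
# infinitely often exactly when p holds infinitely often.
def automaton_step(q, props):
    return "q1" if "p" in props else "q0"

accepting = {"q1"}

def product_step(state, q, action):
    """One step of the product MDP: sample the MDP, advance the automaton."""
    succs, probs = zip(*P[(state, action)])
    s_next = random.choices(succs, weights=probs)[0]
    q_next = automaton_step(q, label[s_next])
    # Reward +1 on visits to an accepting state; this simple scheme is a
    # stand-in for the paper's Rabin-acceptance-based reward definition.
    reward = 1.0 if q_next in accepting else 0.0
    return s_next, q_next, reward

# Tabular Q-learning over the product state space (MDP state, automaton state).
Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.95, 0.1
s, q = "s0", "q0"
for _ in range(20000):
    a = random.choice(actions) if random.random() < eps else \
        max(actions, key=lambda act: Q[(s, q, act)])
    s2, q2, r = product_step(s, q, a)
    best_next = max(Q[(s2, q2, act)] for act in actions)
    Q[(s, q, a)] += alpha * (r + gamma * best_next - Q[(s, q, a)])
    s, q = s2, q2

# Extract a memoryless policy on the product; on this toy instance it
# should prefer actions that drive the system into s1 (where p holds).
policy = {(st, qa): max(actions, key=lambda act: Q[(st, qa, act)])
          for st in mdp_states for qa in ("q0", "q1")}
print(policy)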

BibTeX

@inproceedings{sadigh-cdc14,
  author    = {Dorsa Sadigh and Eric S. Kim and Samuel Coogan and Shankar Sastry and Sanjit A. Seshia},
  title     = {A Learning Based Approach to Control Synthesis of Markov Decision Processes for Linear Temporal Logic Specifications},
  booktitle = {Proceedings of the 53rd IEEE Conference on Decision and Control (CDC)},
  month     = {December},
  year      = {2014},
  pages     = {1091--1096},
}
