Learning Branching Heuristics for Propositional Model Counting

Pashootan Vaezipoor, Gil Lederman, Yuhuai Wu, Chris J. Maddison, Roger B. Grosse, Sanjit A. Seshia, and Fahiem Bacchus. Learning Branching Heuristics for Propositional Model Counting. In Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI), pp. 12427–12435, AAAI Press, February 2021.

Abstract

Propositional model counting, or #SAT, is the problem of computing the number of satisfying assignments of a Boolean formula. Many problems from different application areas, including many discrete probabilistic inference problems, can be translated into model counting problems to be solved by #SAT solvers. Exact #SAT solvers, however, often do not scale to industrial-size instances. In this paper, we present Neuro#, an approach for learning branching heuristics to improve the performance of exact #SAT solvers on instances from a given family of problems. We show experimentally that our method reduces the step count on similarly distributed held-out instances and generalizes to much larger instances from the same problem family. It achieves these results on a number of problem families with very different structures. In addition to step-count improvements, Neuro# can also achieve orders-of-magnitude wall-clock speedups over the vanilla solver on larger instances in some problem families, despite the runtime overhead of querying the model.
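
To make the setting concrete, the Python sketch below shows the interface the paper's approach targets: an exact counter whose variable-branching decision is delegated to a replaceable callback. This is not Neuro# itself (which learns a neural heuristic on top of a state-of-the-art exact solver), and the names count_models, simplify, and most_frequent are illustrative, not from the paper; it is a bare DPLL-style counter written under those assumptions.

    from collections import Counter
    from typing import Callable, FrozenSet, List, Optional, Set

    Clause = FrozenSet[int]  # literals as signed ints; -3 means "not x3"

    def simplify(clauses: List[Clause], lit: int) -> Optional[List[Clause]]:
        """Assign `lit` true: drop satisfied clauses, shrink the rest.
        Returns None if an empty clause (a conflict) is produced."""
        out = []
        for c in clauses:
            if lit in c:
                continue                  # clause satisfied, drop it
            reduced = c - {-lit}          # remove the falsified literal
            if not reduced:
                return None               # conflict on this branch
            out.append(frozenset(reduced))
        return out

    def count_models(clauses: Optional[List[Clause]], free: Set[int],
                     branch: Callable[[List[Clause], Set[int]], int]) -> int:
        """Count satisfying assignments over `free` variables; `branch`
        picks the next variable to split on (the pluggable heuristic)."""
        if clauses is None:
            return 0                      # conflict: no models below here
        if not clauses:
            return 2 ** len(free)         # all clauses satisfied; rest are free
        var = branch(clauses, free)
        rest = free - {var}
        return (count_models(simplify(clauses, var), rest, branch) +
                count_models(simplify(clauses, -var), rest, branch))

    def most_frequent(clauses: List[Clause], free: Set[int]) -> int:
        """A classical hand-coded baseline: branch on the variable that
        occurs in the most clauses."""
        counts = Counter(abs(lit) for c in clauses for lit in c)
        return max(free, key=lambda v: counts.get(v, 0))

    # (x1 or x2) and (not x1 or x3) has 4 of the 8 possible models.
    formula = [frozenset({1, 2}), frozenset({-1, 3})]
    print(count_models(formula, {1, 2, 3}, most_frequent))  # prints 4

A production counter adds unit propagation, component decomposition, and caching on top of this recursion, but the branching interface stays the same; replacing most_frequent with a learned policy is the substitution the paper studies.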

BibTeX

@inproceedings{vl-aaai21,
  author    = {Pashootan Vaezipoor and
               Gil Lederman and
               Yuhuai Wu and
               Chris J. Maddison and
               Roger B. Grosse and
               Sanjit A. Seshia and
               Fahiem Bacchus},
  title     = {Learning Branching Heuristics for Propositional Model Counting},
  booktitle = {Thirty-Fifth {AAAI} Conference on Artificial Intelligence (AAAI)},
  pages     = {12427--12435},
  publisher = {{AAAI} Press},
  year      = {2021},
  month     = feb,
  abstract  = {Propositional model counting, or #SAT, is the problem of computing the number of satisfying assignments of a Boolean formula. Many problems from different application areas, including many discrete probabilistic inference problems, can be translated into model counting problems to be solved by #SAT solvers. Exact #SAT solvers, however, often do not scale to industrial-size instances. In this paper, we present Neuro#, an approach for learning branching heuristics to improve the performance of exact #SAT solvers on instances from a given family of problems. We show experimentally that our method reduces the step count on similarly distributed held-out instances and generalizes to much larger instances from the same problem family. It achieves these results on a number of problem families with very different structures. In addition to step-count improvements, Neuro# can also achieve orders-of-magnitude wall-clock speedups over the vanilla solver on larger instances in some problem families, despite the runtime overhead of querying the model.},
}
