CS294-4 Project

·         The project distribution directory (updated as of 10/19/04). The distribution also contains instructions on how to build and run the prover.

·         Hand-in instructions:

·         The result of your project should be either an ML file containing your module, or a directory if you need more than one module. You should check these into the project CVS repository. Make sure that the names you select for your modules are unique.

·         There are instructions in doc/kettle.tex on how to add your module to the build process (change configure.ac, Makefile.in, etc.). Adam will change the engine to ensure that only the satisfiability procedures for which an "--enable" argument is passed are run. This should make it possible to build all the modules together and run them selectively.

·         Everybody should also add their regression suite to the test directory (either in separate subdirectories or by adding to the existing arith, equality, etc., subdirectories).

·         You must add your tests to the Kettle regression suite by editing the end of test/testkettle.pl, following the model you see there for “arith”. Put your tests in separate groups and make sure to pass the right --enable arguments. You can then run the tests with “./testkettle --run --group=mytests”.

·         Team assignments so far:

1.      Adam: congruence closure + integration

2.      Gilad, Dave: simplex

3.      AJ, Bill: satisfiability procedure for pointer-based data structures

4.      Vinayak, Arindam:  SUP-INF

5.      Manu: Arrays

6.      Jeremy, Matt: randomized arithmetic (or Simplex for a second group)

7.      Evan: perhaps arrays, or congruence closure or arrays for a second prover

8.      Donald: SAT-based for pointers

9.      Marco Zenaro: Omega procedure

10.  others: still thinking

The class project will be an implementation of a Nelson-Oppen theorem prover with capabilities for equality (using the congruence closure procedure) and arithmetic (using either Simplex or Shostak's procedure, or both). The test cases for this prover will be both synthetic and generated using BLAST.
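To make the interaction between the satisfiability procedures concrete, here is a rough sketch, in OCAML, of the kind of interface a theory-specific procedure might export to the Nelson-Oppen core. All of the type and value names below are made up for illustration; the interface actually used by the prover is the one in the Kettle skeleton.

(* A hypothetical interface for one theory's satisfiability procedure in a
   Nelson-Oppen combination.  The names and types are illustrative only;
   consult the Kettle skeleton for the real interface. *)

type var = string                      (* placeholder for prover variables *)

type literal =                         (* placeholder literal representation *)
  | Eq  of var * var                   (* x = y *)
  | Neq of var * var                   (* x <> y *)
  | Le  of var * int                   (* x <= c, for the arithmetic procedures *)

module type SAT_PROCEDURE = sig
  type state                           (* the procedure's internal state *)

  val name : string                    (* e.g. "arith" or "equality" *)
  val init : unit -> state             (* a fresh, empty conjunction *)

  (* Assert one literal of this theory; None means the conjunction of
     literals asserted so far has become unsatisfiable. *)
  val assert_lit : state -> literal -> state option

  (* Equalities between shared variables that this procedure has deduced and
     must broadcast to the other procedures (the Nelson-Oppen step). *)
  val implied_equalities : state -> (var * var) list
end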

Everybody registered for more than 1 unit in the class is required to take part in the project. Your effort on this project must be commensurate with the number of units you are taking. 

General guidelines for selecting the teams

Please let me know soon how you want to split into two teams. Each team will implement a prover. Within each team you should also split into subgroups of 1 or 2 students. Here is a potential work distribution. The projects marked with an asterisk (*) include a more significant element of research (i.e., I am not certain what the actual algorithm is, or what its characteristics are). Of course, you will not be penalized for choosing one of these projects and then failing to produce a perfect result.

·         Congruence closure: 1 person, also responsible for the integration of the various satisfiability procedures. This latter task should be very easy given the existing skeleton, but I think there should be somebody in charge of it. This procedure is discussed in detail in Nelson’s thesis. (A small sketch of the union-find core appears after this list.)

·         Simplex: 2 persons. This procedure is discussed in detail in Nelson’s thesis.

·         Shostak's and/or SUP-INF satisfiability procedure: 1 or 2 persons. If we have too many people in the team, you may want to implement both of these. In that case, I would like your report to include a comparison in terms of code complexity and performance.  (* for proof generation)

·         Randomized procedure for arithmetic: 2 persons. This is described in the CADE2003 paper by Gulwani. What is missing from that description is the procedure to produce proofs. This will essentially entail de-randomizing the algorithm. I am quite curious how/if this can be done easily. I am also interested in the performance you are getting. (an obvious *)

·         Arrays: 1 person. This procedure is also described in Nelson’s thesis. The procedure is simple but expensive; I expect that some heuristics will work for many common cases. Ideally, the person who does this will find a way to generate proof obligations (test cases), perhaps from real code (a sketch of the read-over-write rule appears after this list). (* for the heuristics)

·         Pointer-based data structures: perhaps 2 people. Scott McPeak will describe a simple formalism for specifying pointer-based data structures using simple conditional axioms. Either Knuth-Bendix completion or an extension of congruence closure can be used to solve goals. *

·         SAT-based procedure, perhaps for the pointer-based data structures: 2 people (an obvious *)

·         A satisfiability procedure for a logic with transitive closure, which could be very useful for pointer-based data structures. In such a logic, one should be able to say that the variable x reaches the contents of y, or that the lists pointed to by x and y do not reach a common element. (*)

·         You are free to suggest other procedures that would fit the model of the prover.
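As promised in the congruence-closure item above, here is a minimal union-find in OCAML, the data structure at the heart of that procedure. This is only a sketch under invented names: the real procedure (see Nelson’s thesis) must also merge terms whose arguments have become equal, and record proofs.

(* A minimal union-find with path compression.  All names are made up for
   illustration; the full congruence-closure procedure also needs the
   "congruence" step (merging f(x) with f(y) once x = y) and proof recording. *)

type node = { mutable parent : int }           (* -1 marks a root *)

let make n = Array.init n (fun _ -> { parent = -1 })

let rec find uf i =
  if uf.(i).parent < 0 then i
  else begin
    let r = find uf uf.(i).parent in
    uf.(i).parent <- r;                        (* path compression *)
    r
  end

let union uf i j =
  let ri = find uf i and rj = find uf j in
  if ri <> rj then uf.(ri).parent <- rj

let equal uf i j = find uf i = find uf j

(* Asserting x0 = x1 and x1 = x2 lets us conclude x0 = x2. *)
let () =
  let uf = make 3 in
  union uf 0 1;
  union uf 1 2;
  assert (equal uf 0 2)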
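And, as mentioned in the arrays item, the key fact in that theory is the read-over-write rule: select(store(a, i, v), j) equals v when i = j, and select(a, j) when i <> j. The tiny rewriter below (again with made-up names) applies only the syntactically obvious case; the full procedure in Nelson’s thesis case-splits on i = j versus i <> j.

(* A made-up term representation and a rewriter that applies the
   read-over-write rule when the two indices are syntactically identical. *)

type term =
  | Var    of string                     (* an array, index, or value variable *)
  | Store  of term * term * term         (* store(a, i, v) *)
  | Select of term * term                (* select(a, i) *)

let rec simplify t =
  match t with
  | Select (Store (_, i, v), j) when i = j -> simplify v
  | Select (a, i)   -> Select (simplify a, simplify i)
  | Store (a, i, v) -> Store (simplify a, simplify i, simplify v)
  | Var _ -> t

(* Example: select(store(a, k, x), k) simplifies to x. *)
let () =
  let a = Var "a" and k = Var "k" and x = Var "x" in
  assert (simplify (Select (Store (a, k, x), k)) = x)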

Each theorem prover must have at least congruence closure and some arithmetic, and must pass (most of) the tests that are packaged with the code.

With the breakdown above, each person would have to write and test on the order of 1000 lines of code.

OCAML

The project will be implemented in the OCAML dialect of ML. The distribution and a lot of information about OCAML can be found at http://pauillac.inria.fr/ocaml/. If you already have experience with ML, you should be able to get up to speed by skimming the manual (focus on the core language, the built-in primitives, and the standard library). There are also other libraries. The project Makefile also links in the unix library, but you should not need it.

Those without prior experience programming in ML should be able to learn it fairly quickly by reading the manual and the examples provided in the project files. Start with the core language (you can skip sections 1.8 and 1.9). You do not need to understand the OO facilities, and you need to understand only basic facts about modules.

Differences from Standard ML/NJ

·         Many small syntactic differences; hard to say whether for better or worse. What is worse, however, is the very unhelpful parser, which will basically tell you only that there is a syntax error, with no further help in figuring out why.

·         Mutable records. Very convenient (a small example appears after this list).

·         Objects and classes (although the project is not set up to use them)

·         A nice assert statement that reports the source position of a failed assertion

·         Significant differences in the standard library

·         A debugger with backstep capability (works on Unix/Linux only and is integrated with Emacs). You can easily do without this.

·         Good integration with C

·         Good profiler

·         Faster compilation and reasonably fast compiled code

·         Not quite as powerful a compilation manager

·         Very portable in its bytecode incarnation. A few platforms (e.g., x86) have native compilers as well, but don't be afraid of using the bytecode: it is only 2-3 times slower than the native-code version.
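To illustrate two of the items above (mutable record fields and the position-reporting assert), here is a tiny, self-contained example; the counter type is, of course, made up.

(* Mutable record fields, updated in place with <-, and OCAML's assert,
   which raises Assert_failure with the file name, line, and character
   position of the failing assertion. *)

type counter = { mutable count : int }

let bump c = c.count <- c.count + 1

let () =
  let c = { count = 0 } in
  bump c;
  bump c;
  assert (c.count = 2);          (* passes silently *)
  print_endline "ok"
  (* assert (c.count = 3) would raise Assert_failure ("example.ml", ...) *)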

Watch for

·         In general, sorting through syntax errors will be somewhat painful in the beginning.

·         Make sure you read section 3.5 in the manual. It explains how the batch compiler expects ML modules to be mapped to source files.  
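As a concrete example of the mapping described in section 3.5: the contents of a source file become a module named after the file, with the first letter capitalized. The file names below are made up.

(* File myproc.ml: everything in it forms the module Myproc; no explicit
   module declaration is needed. *)
let trivially_equal a b = (a = b)

(* File main.ml: other files refer to the unit above simply as Myproc. *)
let () = assert (Myproc.trivially_equal 1 1)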

OCAML has both a bytecode compiler (written in C and thus very portable) and native code compilers for x86. Our Makefile uses the bytecode compiler by default. Run "make RELEASE=1" to use the native code compiler.

 


Last modified: 05/14/00 06:24 PM by necula@cs