RESOLUTE – A Simple Proof Format for SMT-LIB

Although the SMT-LIB format defines a command for obtaining proofs, it does not prescribe a format for these proofs. The only requirement is that the result follows the grammar rule for s-expr. This document explains the proof format RESOLUTE, which is the low-level proof format returned by the SMT solver SMTInterpol.

SMTInterpol can internally switch between different proof levels. The low-level proof format described in this document is activated by (set-option :proof-level lowlevel). Its goal is to be easy to check. There are also the proof levels clauses (which adds all input clauses as oracles) and full (which uses the internal proof rules together with oracle clauses).

We have a proof checker for these proofs.

Resolution proofs

The proof is given as a resolution proof. Each subproof proves the validity of a clause, where a clause is a list of literals and a literal is a positive or negative atom. An atom is an SMT-LIB term of sort Bool, in other words, an SMT-LIB formula. In particular, input formulas are seen as unit clauses with a single positive atom. We use + t to denote the positive literal for the term t and - t for the negative literal. As usual, a positive literal represents the fact that the term is true, a negative literal that the term is false, and a clause represents the fact that at least one of its literals holds.

A proof is built from the resolution rule, axioms, and assumptions. A proof is valid if the side conditions of every axiom are fulfilled and it proves the empty clause. The main rule is the resolution rule:

    C1 ∨ + t        C2 ∨ - t
   ───────────────────────────
             C1 ∨ C2

The concrete syntax for the resolution rule is (res t proof1 proof2) where t is the pivot atom (an SMT-LIB term), proof1 a proof for a clause C1, and proof2 a proof for a clause C2. The proof (res t proof1 proof2) is then a proof for a new clause containing all literals from C1 \ {+ t} and C2 \ {- t}, where C \ {l} denotes the clause C with any occurrence of the literal l removed. A clause is seen as a set of literals, so literals appearing in both C1 and C2 appear only once in the conclusion.
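
For example, if proof1 proves the clause (+ a + b) and proof2 proves the clause (- a + c), where a, b, and c stand for arbitrary Boolean terms, then

(res a proof1 proof2)

proves the clause (+ b + c).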

Furthermore, there are several axioms for the different theories. In particular, the core theory defines the axioms for the logical operators. As an example, the axiom (or+ 1 (or t0 t1 t2)), where t0, t1, and t2 are arbitrary Boolean terms, is a proof for the valid clause (+ (or t0 t1 t2) - t1).

Facts from the SMT-LIB script can be introduced using the syntax (assume term), which is a proof of the unit clause (+ term) provided the SMT-LIB script contains the assertion (assert term). A proof of the unsatisfiability of an SMT-LIB input script is a proof of the empty clause that only uses the assumptions appearing in the SMT-LIB script.
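
As a minimal illustration, for an input script that consists only of the assertion (assert false), the following term is already a complete proof of unsatisfiability: (assume false) proves the unit clause (+ false), the axiom false- proves (- false), and resolving them on the pivot false yields the empty clause.

(res false (assume false) false-)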

Axioms for core theory

The meaning of logical operators is expressed by a set of core axioms that are available in every SMT-LIB theory. The axioms explicitly support associative SMT-LIB operators like and, or, => with more than two arguments.

; axioms are written as name-of-axiom: (clause)
; logical operators
true+: (+ true)                      false-: (- false)
 not+: (+ (not p0) + p0)               not-: (- (not p0) - p0)
 and+: (+ (and p0 … pn) - p0 … - pn)   and-: (- (and p0 … pn) + pi)
  or+: (+ (or p0 … pn) - pi)            or-: (- (or p0 … pn) + p0 … + pn)
  =>+: (+ (=> p0 … pn) +/- pi)          =>-: (- (=> p0 … pn) - p0 … + pn)
  =+1: (+ (= p0 p1) + p0 + p1)          =-1: (- (= p0 p1) + p0 - p1)
  =+2: (+ (= p0 p1) - p0 - p1)          =-2: (- (= p0 p1) - p0 + p1)
 xor+: (+(xor l1) +(xor l2) -(xor l3)) xor-: (-(xor l1) -(xor l2) -(xor l3))
    where each term in l1,l2,l3 occurs an even number of times.
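
For example, for arbitrary Boolean terms a, b, c (the argument index is zero-based):

(and- 1 (and a b c)):  (- (and a b c) + b)
(or+ 2 (or a b c)):    (+ (or a b c) - c)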

; quantifiers
forall+: (+ (forall x (F x)) - (F (choose x (not (F x)))))
forall-: (- (forall x (F x)) + (F t))
exists+: (+ (exists x (F x)) - (F t))
exists-: (- (exists x (F x)) + (F (choose x (F x))))

; equality axioms
     refl: (+ (= t t))                    symm: (+(= t0 t1) -(= t1 t0))
    trans: (+(= t0 tn) -(= t0 t1) … -(= tn-1 tn))
     cong: (+(= (f t0 … tn) (f t0' … tn')) -(= t0 t0') … -(= tn tn'))
       =+: (+ (= t0 … tn) -(= t0 t1) … -(= tn-1 tn))
       =-: (- (= t0 … tn) +(= ti tj))
distinct+: (+ (distinct t0 … tn) +(= t0 t1) … +(= t0 tn) … +(= tn-1 tn))
distinct-: (- (distinct t0 … tn) -(= ti tj))   where i != j

; ite, define-fun, annotations
  ite1: (+(= (ite t0 t1 t2) t1) - t0)   ite2: (+(= (ite t0 t1 t2) t2) + t0)
  del!: (+(= (! t :annotation…) t))
expand: (+(= (f t0 … tn) (let ((x0 t0)…(xn tn)) d)))
   where f is defined by (define-fun f ((x0 τ0)…(xn τn)) τ d)

A detailed explanation of the axioms will be given in the section Core Axioms below.

Syntactic extensions

Conceptually a resolution proof is just a big proof term applying the resolution rule res to axioms and assumptions. However, there are a few important extensions to concisely express large proofs.

let-terms

A large proof may use the same large term multiple times. These large terms can be shared between different clauses in the proof. To express proofs in a concise manner, we support the let syntax from SMT-LIB to bind terms to variables. Such let terms are seen as syntactic sugar and are equal to their expanded form, i.e., no proof rule is required to expand a let. In particular, the resolution rule will also remove the pivot literal from the proved sub-clause if the two are only identical after let expansion.
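
For example, assuming the input script contains (assert (or p1 p2)), the following two proofs of the clause (+ p1 + p2) are interchangeable, since the let binding is expanded before pivot literals are matched:

(let ((x (or p1 p2)))
  (res x (assume x) (or- x)))

(res (or p1 p2) (assume (or p1 p2)) (or- (or p1 p2)))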

To bind a proof to a variable, the let-proof keyword is used with the same syntax as let. This can be used to avoid repeating the same subproof multiple times. The proof variable may then be used wherever a proof is expected, e.g., as an argument of the resolution rule.

The let-proof syntax is also useful for expressing the proofs in a linear form. A solver may choose to print its proof on the fly by printing each learned clause and binding it to a variable until it derives the empty clause:

(let-proof ((C1 (assume …)))
(let-proof ((C2 (res … C1 …)))
…
 C100 ; empty clause
)…)

Function definitions

The proof format also supports custom defined functions that are only used inside the proof. This can be useful for auxiliary functions that are created by Skolemization or by CNF conversion of quantified formulas. The syntax for this is ((define-fun f ((x0 τ0)…(xn τn)) d) subproof) where subproof may use the function symbol f and the cong and expand axioms for this function symbol.
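
As an illustration (aux and p are hypothetical names), the following skeleton defines an auxiliary function locally; inside the subproof, the axiom (expand (aux p)) proves the clause (+ (= (aux p) (let ((x p)) (not x)))), which after let expansion is (+ (= (aux p) (not p))).

((define-fun aux ((x Bool)) (not x))
  (let-proof ((D (expand (aux p))))
    …))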

Extended resolution

The proof calculus trivially supports extended resolution, where a new literal p is introduced that represents (and p1 p2) and the three clauses (- p1 - p2 + p), (- p + p1), and (- p + p2) are added. This is facilitated by the let syntax and the axioms for logical operators as follows:

(let ((p (and p1 p2)))
  (let-proof ((C1 (and+ (and p1 p2)))
              (C2 (and- 0 (and p1 p2)))
              (C3 (and- 1 (and p1 p2))))
    …))

Resolution proofs are usually only refutation complete, i.e., they can prove the empty clause if the conjunction of the input clauses is unsatisfiable. However, the addition of the core axioms makes the calculus complete in the sense that for every valid SMT-LIB formula t there is a proof that proves the unit clause (+ t).

The proof calculus does not support RUP proofs directly. They need to be converted to applications of res by explicitly listing the clauses and pivot terms used in the unit propagation subproof. More importantly, DRAT proofs are not supported and there is no syntax for deleting clauses. Instead, they need to be converted to extended resolution (which blows up the proof quadratically). On the other hand, this simplifies the proof checker, which would otherwise need to rediscover the necessary clauses and subproofs.

Example Proof: eq-diamond2

We give an example proof for the following simple benchmark from the SMT-LIB benchmark set.

(set-option :produce-proofs true)
(set-logic QF_UF)
(declare-sort U 0)
(declare-fun x0 () U)
(declare-fun y0 () U)
(declare-fun z0 () U)
(declare-fun x1 () U)
(declare-fun y1 () U)
(declare-fun z1 () U)
(assert (and (or (and (= x0 y0) (= y0 x1)) (and (= x0 z0) (= z0 x1))) (not (= x0 x1))))
(check-sat)
(get-proof)

A valid output for this benchmark would be the following.

unsat
(let ((t1 (= x0 y0))
      (t2 (= y0 x1))
      (t3 (= x0 z0))
      (t4 (= z0 x1))
      (t5 (= x0 x1)))
(let ((t6 (and t1 t2))
      (t7 (and t3 t4))
      (t8 (not t5)))
(let ((t9 (or t6 t7)))
(let ((t10 (and t9 t8)))
(let-proof ((C0 (assume t10)))
(let-proof ((C1 (res t9 (res t10 C0 (and- 0 t10)) (or- t9))))
(let-proof ((C2 (res t8 (res t10 C0 (and- 1 t10)) (not- t8))))
(let-proof ((C3 (and- 0 t6)))
(let-proof ((C4 (and- 1 t6)))
(let-proof ((C5 (and- 0 t7)))
(let-proof ((C6 (and- 1 t7)))
(let-proof ((C7 (res t5 (res t1 C3 (res t2 C4 (trans x0 y0 x1))) C2)))
    (res t5 (res t6 (res t7 C1 (res t4 C6 (res t3 C5 (trans x0 z0 x1)))) C7) C2)
))))))))))))

As mentioned earlier, the proof mostly follows the syntax of SMT-LIB terms. In particular it uses let and let-proof to bind expressions and subproofs to variables.

Introducing names for subterms

The first ten lines define names for the sub-terms of the formula from bottom to top. Note that t10 is the asserted formula in the input file (after expansion of the let-bound variables). It is not required to introduce an identifier for every subterm, but it is recommended in order to keep the proof small.

CNF conversion

Then C0 is defined, which is an input clause. Using the assume rule an input formula can be proved as a unit clause. As usual a clause is a list of literals, which are atoms with positive or negative polarity. In our proof calculus the atoms are SMT-LIB terms of sort Bool. So the input formula is represented by a unit clause containing one positive atom, namely the asserted term.

Every proof rule produces a proof for a clause. The assume rule produces the proof that the unit clause ( + t10 ) holds. Clauses are written by listing the literals in parentheses, and the + indicates that the literal has positive polarity. This proof is then assigned to the variable C0 so it can be referenced in later proofs.

The next step is to convert the input formula into CNF. We use C1–C7 for the clauses resulting from the conversion (or more precisely, for the proofs of the clauses of the CNF). So let's look at how this is done here. The input formula t10 is

(and (or t6 t7) (not t5))

Thus, we want to split it into C1: (+ t6 + t7) and C2: (- t5). To split the and-formula, we use the and-elimination rule. In this example the axiom (and- 0 t10) is used to prove t9 from t10. Note that t10 is just an abbreviation for (and t9 t8) and that t9 is the 0th argument. So the axiom (and- 0 t10) proves the tautology clause (- t10 + t9), which states that t9 is implied by t10. By resolving this clause with the input formula t10, we get a proof for t9. The resolution is written as (res t10 C0 (and- 0 t10)), where t10 is the pivot literal, C0 the proof of a clause containing it positively, and (and- 0 t10) a proof of a clause containing it negatively. The result is a proof for ( + t9 ).

The next step is to eliminate (or t6 t7), that is t9, using the or-elimination rule. The axiom (or- t9) proves the tautology clause ( - t9 + t6 + t7 ), and resolving it with the previous proof on the pivot t9 we finally obtain C1, which is a proof for (+ t6 + t7).

(let-proof ((C1 (res t9 (res t10 C0 (and- 0 t10)) (or- t9))))

Similarly we use the other and-elimination rule (and- 1 t10) and the not-elimination rule (not- t8) to obtain from C0 the clause C2 that proves (- t5).

(let-proof ((C2 (res t8 (res t10 C0 (and- 1 t10)) (not- t8))))

Furthermore, we add the clauses of the Plaisted–Greenbaum encoding, C3–C6. These clauses are just the and-elimination clauses, but this time for (and t1 t2) and (and t3 t4). This gives us the full CNF of the input:

C1: (+ t6 + t7)
C2: (- t5)
C3: (- t6 + t1)
C4: (- t6 + t2)
C5: (- t7 + t3)
C6: (- t7 + t4)

SMT solving

The only unit clause, C2, propagates - t5. We assume the DPLL engine now decides the literal + t6 and propagates + t1 and + t2 by unit propagation. Then the theory solver finds a conflict involving transitivity:

(trans x0 y0 x1): (- t1 - t2 + t5)

By definition t1 is (= x0 y0), t2 is (= y0 x1) and t5 is (= x0 x1), so the transitivity axiom for x0 = y0 = x1 has exactly this form.

CDCL now explains the conflict to obtain C7: (- t6)

(let-proof ((C7 (res t5 (res t1 C3 (res t2 C4 (trans x0 y0 x1))) C2)))

Now unit propagation yields - t6, + t7, + t3, and + t4, and again a theory conflict is found:

(trans x0 z0 x1): (- t3 - t4 + t5)

The explanation yields the empty clause.

(res t5 (res t6 (res t7 C1 (res t4 C6 (res t3 C5 (trans x0 z0 x1)))) C7) C2)

This concludes the proof.

Grammar

The grammar for proofs uses the grammar for ⟨numeral⟩, ⟨term⟩, ⟨sort⟩, ⟨symbol⟩, ⟨sorted_var⟩, ⟨attribute⟩ from the SMT-LIB standard, version 2.6. It is given below:

⟨term_or_proof⟩ ::= ⟨term⟩ | ⟨proof⟩
⟨polarity⟩ ::= + | -
⟨literal⟩ ::= ⟨polarity⟩ ⟨term⟩
⟨clause⟩ ::= ( ⟨literal⟩* )
⟨proof⟩ ::= (assume ⟨term⟩)
          | (res ⟨term⟩ ⟨proof⟩ ⟨proof⟩)
          | (let ((⟨symbol⟩ ⟨term⟩)⁺) ⟨proof⟩)
          | (let-proof ((⟨symbol⟩ ⟨proof⟩)⁺) ⟨proof⟩)
          | ⟨symbol⟩
          | ((define-fun ⟨symbol⟩ (⟨sorted_var⟩⁺) ⟨term⟩) ⟨proof⟩)
          | ((declare-fun ⟨symbol⟩ (⟨sort⟩⁺) ⟨sort⟩) ⟨proof⟩)
          | (oracle ⟨clause⟩ ⟨attribute⟩*)
          | false- | true+
          | (not+ ⟨term⟩)
          | (not- ⟨term⟩)
          | (and+ ⟨term⟩)
          | (and- ⟨numeral⟩ ⟨term⟩)
          | (or+ ⟨numeral⟩ ⟨term⟩)
          | (or- ⟨term⟩)
          | (=>+ ⟨numeral⟩ ⟨term⟩)
          | (=>- ⟨term⟩)
          | (=+1 ⟨term⟩)
          | (=+2 ⟨term⟩)
          | (=-1 ⟨term⟩)
          | (=-2 ⟨term⟩)
          | (xor+ (⟨term⟩⁺) (⟨term⟩⁺) (⟨term⟩⁺))
          | (xor- (⟨term⟩⁺) (⟨term⟩⁺) (⟨term⟩⁺))
          | (forall+ ⟨term⟩)
          | (forall- (⟨term⟩⁺) ⟨term⟩)
          | (exists+ (⟨term⟩⁺) ⟨term⟩)
          | (exists- ⟨term⟩)
          | (refl ⟨term⟩)
          | (symm ⟨term⟩ ⟨term⟩)
          | (trans ⟨term⟩⁺)
          | (cong ⟨term⟩ ⟨term⟩)
          | (ite1 ⟨term⟩)
          | (ite2 ⟨term⟩)
          | (=+ ⟨term⟩)
          | (=- ⟨numeral⟩ ⟨numeral⟩ ⟨term⟩)
          | (distinct+ ⟨term⟩)
          | (distinct- ⟨numeral⟩ ⟨numeral⟩ ⟨term⟩)
          | (del! ⟨term⟩ ⟨attribute⟩*)
          | (expand ⟨term⟩)

In our setting, an atom is an SMT-LIB term. A literal is a term with a polarity + or -. A clause is a set of literals, written as a sequence of literals enclosed in parentheses.

Every proof proves a clause. The whole proof given by an SMT solver as the solution should prove the empty clause. A proof may be built from subproofs, and for this purpose a subproof may be bound to a variable symbol so that it can be used multiple times efficiently.

The first rule, (assume t) where t is a term, proves the unit clause (+ t). Its side condition (checked by the proof checker) is that it was asserted in the input problem by an SMT-LIB assert command.

The rule (res t p1 p2) is the resolution rule, the main workhorse of this proof format. Here p1 is a proof that proves some clause C1 and p2 a proof that proves some clause C2. The side condition is that + t occurs in C1 and - t occurs in C2 (the order of the arguments p1 and p2 is important). The resulting proof term proves the clause (C1 \ {+ t}) ∪ (C2 \ {- t}).

The let binder let ((x t)) p binds the value of t to a symbol x. Here t can be an arbitrary term. The symbol x can then be used in terms as a free variable. Similarly, the let binder let-proof ((x p1)) p binds a proof p1 to the variable x, and x can be referenced in p using the ⟨symbol⟩ rule. The scope of x is the proof p, and the resulting proof proves the same clause as p.

The oracle rule can be used to introduce clauses that cannot be easily explained. It should be used as a last resort, if a full low-level proof is too tedious, or if the axioms for the theory are missing. The proof-checker allows any oracle clause but will warn about these clauses.

The remaining rules are the low-level axioms of the proof format. Each of them creates a tautological clause. They are given in the next section.

Core Axioms

The core axioms are ⟨proof⟩ objects that prove tautological clauses. Their inputs are usually ⟨term⟩s. We distinguish the following categories.

Logical axioms – elimination and introduction

For every logical operator we have the usual elimination and introduction rules. In our calculus they take the form of a clause where the premises have negative polarity and the conclusion has positive polarity. They are also identical to the clauses that the Tseitin encoding gives. The following lists the axioms and the corresponding clauses proved by them. In this list t, t0, … denote terms and n, i, j denote numerals. The elimination rules are indicated by - and the introduction rules by +.

(false-):         ( - false )
(true+):          ( + true )
(not+ (not t)):         ( +(not t) + t )
(not- (not t)):         ( -(not t) - t )
(and+ (and t0 … tn)):   ( +(and t0 … tn) - t0 … - tn)
(and- i (and t0 … tn)): ( -(and t0 … tn) + ti)
(or+ i (or t0 … tn)):   ( +(or t0 … tn) - ti)
(or- (or t0 … tn)):     ( -(or t0 … tn) + t0 … + tn)
(=>+ i (=> t0 … tn)):   ( +(=> t0 … tn) + ti)        (for i < n)
(=>+ n (=> t0 … tn)):   ( +(=> t0 … tn) - tn)
(=>- (=> t0 … tn)):     ( -(=> t0 … tn) - t0 … - tn-1 + tn)
(=+1 (= t0 t1)):        ( +(= t0 t1) + t0 + t1)      (only for Boolean t0/t1)
(=+2 (= t0 t1)):        ( +(= t0 t1) - t0 - t1)      (only for Boolean t0/t1)
(=-1 (= t0 t1)):        ( -(= t0 t1) + t0 - t1)      (only for Boolean t0/t1)
(=-2 (= t0 t1)):        ( -(= t0 t1) - t0 + t1)      (only for Boolean t0/t1)

All rules containing i have the implicit constraint 0 <= i <= n.

Equality rules

We support the usual axioms of reflexivity, symmetry, transitivity (with arbitrarily long chains), and congruence:

(refl t):              ( +(= t t) )
(symm t0 t1):          ( +(= t0 t1) -(= t1 t0) )
(trans t0 … tn):       ( +(= t0 tn) -(= t0 t1) … -(= tn-1 tn) )  (for n >= 2)
(cong (f t0 … tn) (f t'0 … t'n)):
                       ( +(= (f t0 … tn) (f t'0 … t'n))
                         -(= t0 t'0) … -(= tn t'n))
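
For example, assuming the input script asserts (= a b) for some terms a and b and a unary function f, congruence and resolution derive the equality of the function applications:

(res (= a b) (assume (= a b)) (cong (f a) (f b)))
; proves (+ (= (f a) (f b)))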

Additionally we have elimination and introduction rules for equality chains and for distinct:

(=+ (= t0 … tn)):               ( +(= t0 … tn) -(= t0 t1) … -(= tn-1 tn))
(=- i j (= t0 … tn)):           ( -(= t0 … tn) +(= ti tj))
(distinct+ (distinct t0 … tn)): ( +(distinct t0 … tn)
                                  +(= t0 t1) … +(= t0 tn)
                                  +(= t1 t2) … +(= t1 tn)
                                  …
                                  +(= tn-1 tn))
(distinct- i j (distinct t0 … tn)): ( -(distinct t0 … tn) -(= ti tj))      (for i != j)

The rules containing i, j have the implicit constraint 0 <= i <= n and 0 <= j <= n. For = we have n >= 2; for distinct we have n >= 1.

ite rules

To support ite terms, we have two simple axioms:

(ite1 (ite c t e)):  ( + (= (ite c t e) t) - c)
(ite2 (ite c t e)):  ( + (= (ite c t e) e) + c)
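
For example, for arbitrary terms x and y, resolving against the true+ axiom shows that an ite with condition true selects its then-branch:

(res true true+ (ite1 (ite true x y)))
; proves (+ (= (ite true x y) x))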

xor rules

For efficient xor reasoning we allow arbitrary split and reordering of the xor arguments between three xor terms. The rules are

(xor+ (t̅0) (t̅1) (t̅2)): ( +(xor t̅0) +(xor t̅1) -(xor t̅2) )
(xor- (t̅0) (t̅1) (t̅2)): ( -(xor t̅0) -(xor t̅1) -(xor t̅2) )

where t̅0, t̅1, t̅2 are three non-empty sequences of terms that in total contain every term an even number of times. Furthermore, if one of the sequences t̅i consists of only a single term ti, then the clause contains the atom ti instead of the corresponding xor-term. For example:

(xor+ (t0 t1 t2) (t1) (t2 t0)): ( + (xor t0 t1 t2) + t1 -(xor t2 t0) )
(xor- (t t) (t t) (t t)):       ( - (xor t t) )

Note that the last rule uses the fact that a clause is a set of literals, so multiple occurrences of literals are implicitly removed.

Quantifier instantiation and Skolemization

The rules for quantifier introduction and elimination bind the quantified variables either by an arbitrary term or use the choose operator (the Hilbert epsilon operator).

(forall+ (forall ((x0 X0) … (xn Xn)) F)):
   ( +(forall ((x0 X0) … (xn Xn)) F)
     -(let ((x0 (choose (x0 X0) (not (forall ((x1 X1) … (xn Xn)) F)))))
       (let ((x1 (choose (x1 X1) (not (forall ((x2 X2) … (xn Xn)) F)))))
        …
         (let ((xn (choose (xn Xn) (not F)))) F)…)) )
(forall- (t0 … tn) (forall ((x0 X0) … (xn Xn)) F)):
   ( -(forall ((x0 X0) … (xn Xn)) F) +(let ((x0 t0) … (xn tn)) F) )
(exists+ (t0 … tn) (exists ((x0 X0) … (xn Xn)) F)):
   ( +(exists ((x0 X0) … (xn Xn)) F) -(let ((x0 t0) … (xn tn)) F) )
(exists- (exists ((x0 X0) … (xn Xn)) F)):
   ( -(exists ((x0 X0) … (xn Xn)) F)
     +(let ((x0 (choose (x0 X0) (exists ((x1 X1) … (xn Xn)) F))))
       (let ((x1 (choose (x1 X1) (exists ((x2 X2) … (xn Xn)) F))))
        …
         (let ((xn (choose (xn Xn) F))) F)…)) )

In forall- and exists+, Xi is the type of ti.
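
As a concrete instance (with a hypothetical predicate P and constant c of sort Int), instantiating a universally quantified formula with the term c gives:

(forall- (c) (forall ((x Int)) (P x))):
   ( -(forall ((x Int)) (P x)) +(let ((x c)) (P x)) )

After let expansion the second literal is simply + (P c).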

Miscellaneous rules

For every symbol defined by define-fun we have a rule that expands the function definition. Assume f is defined as

(define-fun f ((x0 X0) … (xn Xn)) X t)

then the expand rule for f is

(expand (f t0 … tn)): ( + (= (f t0 … tn)
                               (let ((x0 t0) … (xn tn)) t)))

Note that the :named attribute is implicitly expanded to define-fun according to the standard and therefore the expand rule can also be used for named terms.

For some internal functions, the expand rule is also applicable. For every function with the LEFTASSOC, RIGHTASSOC, CHAINABLE, or PAIRWISE attribute and more than two arguments, expand will apply the transformation given by the SMT-LIB standard, e.g.:

(expand (or t0 … tn)): ( + (= (or t0 … tn)
                              (or … (or t0 t1) … tn)))
(expand (=> t0 … tn)): ( + (= (=> t0 … tn)
                              (=> t0 … (=> tn-1 tn) …)))
(expand (< t0 … tn)):  ( + (= (< t0 … tn)
                              (and (< t0 t1) … (< tn-1 tn))))

Moreover, for every internal function in LIRA that takes Real arguments, integer arguments are automatically cast to Real. This is also supported by the expand rule, even if the function is not used with more than two arguments.

(expand (+ i0 r1 i2 r3)): ( + (= (+ i0 r1 i2 r3)
                                 (+ (to_real i0) r1 (to_real i2) r3)))

We also have a simple rule to delete attributes from input terms

(del! t :attributes…): ( + (= (! t :attributes…) t) )

Equality between terms

A proof calculus needs to define which terms are intrinsically equal. This equality relation needs to be decidable, but it should also be simple to allow an efficient implementation of proof checkers. To reach a consensus on when a proof is correct, we need to formally define this intrinsic equality.

Equality is used by the proof checker in two places. The first is the side condition of the assume rule, which states that the assumed term must be equal to a formula asserted in the input problem. The second is the set operations of the resolution rule, which require checking that the pivot literal is contained in a clause, removing it from the clause, and removing duplicate literals when forming the union in the resolvent.

In type theory, equality is usually defined by α,β,δ,…-reduction. Here we define our own reduction rules in the form of a rewrite system. Two terms are equal if they rewrite to the same canonical term.

The rules are

x → t                where x in the current context is bound to t by a let
(let (...) t) → t    where t has no variable bound by let
|symbol| → symbol    where symbol is a valid SMT-LIB symbol
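
For example (with placeholder symbols a and f), the following terms all rewrite to the same canonical term (f a a) and are therefore considered identical by the proof checker:

(let ((x a)) (f x x))  →  (f a a)
(|f| a a)              →  (f a a)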

Discussion

The above rules are kept minimal to keep the canonicalization of terms simple. The let rules are necessary for an efficient proof representation. The |symbol| rule should help solvers whose internal representation may not differentiate between quoted and unquoted symbols.

Note that any change to these rules can make existing proofs invalid. If new rules are added, some resolution steps may become superfluous, causing an error message that the pivot literal was not found in the antecedent clause. This could be solved by making the resolution rule more lenient, but such a change would make debugging faulty proofs much harder, as then only the final emptiness check for the proved clause fails.

Clashing variable names

For the rewrite system above variable names can clash. As an example consider the following term.

(exists ((x Int)) (let ((y x)) (exists ((x Int)) (not (= x y)))))

When rewriting the above term using the let rules and replacing y by its definition x, the proof validator needs to remember that this x is bound by the outer quantifier and not by the inner one. In particular, the above term is not equal (not even equivalent) to the term

(exists ((x Int)) (exists ((x Int)) (not (= x x))))

Extensions for other theories

When adding more theories, we need to add more axioms to support these theories. Here we will look into some of them.

Array Theory

We add the McCarthy axioms and an axiom for extensionality. We choose the axiom with an explicit @diff operator that returns some index where two arrays differ, if they differ.

(selectstore1 a i v):   ( + (= (select (store a i v) i) v) )
(selectstore2 a i j v): ( + (= (select (store a i v) j) (select a j)) + (= i j))
(extdiff a b):  ( + (= a b) - (= (select a (@diff a b)) (select b (@diff a b))))

For supporting const arrays one additional axiom is needed:

(const v i): ( + (= (select (@const v) i) v) )
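
For example, assuming the input script asserts (not (= i j)), the second read-over-write axiom combined with the not- axiom derives that a store at index i does not affect the value read at index j:

(res (= i j)
     (selectstore2 a i j v)
     (res (not (= i j)) (assume (not (= i j))) (not- (not (= i j)))))
; proves (+ (= (select (store a i v) j) (select a j)))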

Linear Arithmetic

For arithmetic we need to manipulate polynomials. Polynomials are represented by an SMT-LIB term of the following form.

  (+ (* c1 t11 ...t1m) ... (* cn tn1 ... tnm))

Here c1, …, cn are constants (NUMERALs, DECIMALs, or integer/real numbers in canonical form). The multi-sets (ti1 … tim) are pairwise different, and the head symbol of tij is neither + nor *. All ci, tij have the same sort (Real or Int). The constant ci can be omitted if it is 1 (except if there is no ti1 … tim), and * resp. + is omitted if it has only one argument. The zero polynomial is represented by 0.

When multiplying polynomials or adding them together the usual rules of distributivity, commutativity and associativity are applied to bring them into the above normal form again.
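
For example, the polynomial 2x + 3yz + 5 over integer terms x, y, z (placeholder names) is represented as

(+ (* 2 x) (* 3 y z) 5)

where a coefficient of 1 would be dropped and the constant summand 5 is written without a leading *.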

We have four rules that manipulate these polynomials. The first two perform polynomial addition and multiplication:

(poly+ a1 .. an a):  ( +(= (+ a1 ... an) a) )
  where a1+...+an = a
(poly* a1 .. an a):  ( +(= (* a1 ... an) a) )
  where a1*...*an = a

Here a1, …, an, a are polynomials. The side condition a1+...+an = a states that adding the polynomials a1 through an yields a polynomial equal to a, i.e., one with the same monomials and the same coefficients, possibly differing in the order of the terms. Similarly for a1*...*an = a.
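
For instance, with a placeholder integer term x, the following instances are valid, because 2x+1 plus 3x equals 5x+1 and 2 times x times 3 equals 6x:

(poly+ (+ (* 2 x) 1) (* 3 x) (+ (* 5 x) 1)):
    ( +(= (+ (+ (* 2 x) 1) (* 3 x)) (+ (* 5 x) 1)) )
(poly* 2 x 3 (* 6 x)):
    ( +(= (* 2 x 3) (* 6 x)) )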

Then we have one rule for to_real:

(to_real a):  ( +(= (to_real a) a') )

where a is a polynomial (+ ... (* ci ...tij...)) with integer terms and a' is the polynomial (+ ... (* ci' ...(to_real tij)...)) where ci' is the real representation of the integer ci, i.e., an integer NUM is replaced with NUM.0 and (- NUM) is replaced with (- NUM.0). Every other term t is replaced by (to_real t).
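
For example, with a placeholder integer term x:

(to_real (+ (* 2 x) 3)):
    ( +(= (to_real (+ (* 2 x) 3)) (+ (* 2.0 (to_real x)) 3.0)) )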

The heart of the proof system is the following rule proving the inconsistency between multiple inequalities by using Farkas’ Lemma. (Note that this is only complete for linear arithmetic).

(farkas c1 (<=? a1 b1) ... cn (<=? an bn)): ( -(<=? a1 b1) ... -(<=? an bn) )

where ci are positive integers, <=? stands for <, <=, or =, and ai, bi are polynomials. The weighted sum of these polynomials, c1*(a1-b1) + ... + cn*(an-bn), must be a (rational or integer) constant c with c >= 0. If c = 0, then at least one inequality must be strict. If some inequalities are Int and some are Real, all inequalities are implicitly converted to Real by converting all coefficients in ai/bi to Real and replacing all terms t in ai/bi with (to_real t).
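
For example, the inequalities x < y and y <= x (for placeholder arithmetic terms x and y) are jointly inconsistent. Taking both Farkas coefficients as 1, the weighted sum (x - y) + (y - x) is the constant 0 and the first inequality is strict, so the side condition holds:

(farkas 1 (< x y) 1 (<= y x)):  ( -(< x y) -(<= y x) )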

The other axioms work with arbitrary terms and do not require adding or multiplying polynomials:

(trichotomy a b):   ( +(< a b) +(= a b) +(< b a) )
(total a b):        ( +(<= a b) +(< b a) )
(>def a b):         ( +(= (> a b) (< b a)) )
(>=def a b):        ( +(= (>= a b) (<= b a)) )
(/def a b1 ... bn): ( +(= a (* b1 ... bn (/ a b1 ... bn))) +(= b1 0) ... +(= bn 0) )
(-def a):           ( +(= (- a) (* (- 1) a)))
(-def a b1 ... bn): ( +(= (- a b1 ... bn) (+ a (* (- 1) b1) ... (* (- 1) bn))) )
(abs-def x):        ( +(= (abs x) (ite (< x 0) (- x) x)) )

The only side condition is that the terms in the clause type-check. For integer reasoning we use the following axiom, which states that there is no integer between c and c+1:

(total-int a c):    ( +(<= a c) +(<= (c+1) a) )

where c is an integer constant (NUMERAL or negated NUMERAL) and (c+1) is the SMT-LIB representation of that constant plus one. The term a is an arbitrary term of sort Int. Remark: an alternative would be to restrict the axiom to c=0: (total-int a): ( +(<= a 0) +(<= 1 a) )
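
For example, for a placeholder integer term a:

(total-int a 2):  ( +(<= a 2) +(<= 3 a) )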

Also we need the following axioms for integer and mixed integer/real reasoning:

(to_int-low x):      ( +(<= (to_real (to_int x)) x) )
(to_int-high x):     ( +(<  x (+ (to_real (to_int x)) 1.0)) )
(div-low x d):       ( +(<= (* d (div x d)) x) +(= d 0) )
(div-high x d):      ( +(<  x (+ (* d (div x d)) (abs d))) +(= d 0) )
(mod-def x d):       ( +(= (mod x d) (- x (* d (div x d)))) +(= d 0) )
(divisible-def c x): ( +(= ((_ divisible c) x) (= x (* c (div x c)))) )
(expand (is_int x)): ( +(= (is_int x) (= x (to_real (to_int x)))) )

Note that in (divisible-def c x) the constant c must be a numeral larger than zero, to make the term ((_ divisible c) x) syntactically correct.

Note that the axiom farkas is the only axiom with negated literals. It can be used in a resolution proof to rewrite a positive literal into the corresponding negative literal. On the other hand, the axioms total and total-int can be used to rewrite a negative literal into the corresponding positive literal.
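
For example, if a proof P proves a clause containing the negative literal -(<= a b), then resolving with the total axiom replaces that literal by the positive literal +(< b a):

(res (<= a b) (total a b) P)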

Most proof rules do not use the symbols -, /, >=, >. A solver should first rewrite them using the definition rules and then only work with <=, <, *, +.

Data Types

For reasoning about data types, the following rules are used

(dt_project seli a1 ... an):       ( +(= (seli (cons a1 ... an)) ai) )
(dt_cons cons x):                  ( -((_ is cons) x) +(= (cons (sel1 x) ... (seln x)) x) )
(dt_test cons (cons a1 ... an)):   ( +((_ is cons) (cons a1 ... an)) )
(dt_test cons' (cons a1 ... an)):  ( -((_ is cons') (cons a1 ... an)) )   where cons' != cons
(dt_exhaust x):                    ( +((_ is cons1) x) ... +((_ is consn) x) )
(dt_acyclic (cons ... x ...)):     ( -(= (cons ... x ...) x) )
   where (cons ... x ...) is a term dag containing the term x and on the path
   from the root to x only constructor functions appear.
(dt_match (match t ((p1 x1) c1) ...)):
   ( +(= (match t ((p1 x1) c1) ...)
         (ite ((_ is p1) t) (let ((x1 (sel1 t))) c1)
           ...)))