Pravda - sequent calculus

On this page, you will learn about sequent calculus. Just like in natural deduction, sequent calculus is about manipulating sequents, but now there can be more than one formula on the right-hand side. For instance, p, q |- q, r is a sequent. Intuitively, Gamma |- Delta means that you can prove at least one of the formulas in Delta from the hypotheses in Gamma. The set Gamma can be empty, meaning that one of the formulas in Delta can be proved without any hypotheses; Delta can also be empty, meaning that the hypotheses in Gamma are contradictory (they prove false). This is only an introduction though, and sequent calculus is much richer than what we will see here! If you are interested, don't hesitate to take a look at the Wikipedia page, as well as this thorough document by David Baelde about more theoretical aspects of this proof system.
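In the usual notation (a sketch; the exact symbols may differ from the ones rendered on this page), the example sequent decomposes as:

```latex
\underbrace{p,\ q}_{\Gamma\ \text{(hypotheses)}} \ \vdash\ \underbrace{q,\ r}_{\Delta\ \text{(conclusions)}}
```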

In propositional logic

Axioms

There are three ways you can start a proof. First, and most intuitively, any sequent having the same proposition on both the left and the right can be proved. Second, any sequent having false as a hypothesis can be proved. Third, and symmetrically, having true among the conclusions is sufficient for a sequent to be proved.
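In the standard presentation of classical sequent calculus (system LK), these three axioms can be written as follows; the rule names (ax, bottom-left, top-right) are a common convention and may differ from the ones used here:

```latex
\frac{}{\Gamma, A \vdash A, \Delta}\;(\mathrm{ax})
\qquad
\frac{}{\Gamma, \bot \vdash \Delta}\;(\bot_L)
\qquad
\frac{}{\Gamma \vdash \top, \Delta}\;(\top_R)
```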

Other rules

There are two rules for each connective of propositional logic: a "left" one and a "right" one, depending on whether the formula having this connective at its root appears as a hypothesis or as a conclusion.

Not
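In the standard LK presentation, the two rules for negation move the negated formula to the other side of the sequent (rule names are a common convention):

```latex
\frac{\Gamma \vdash A, \Delta}{\Gamma, \lnot A \vdash \Delta}\;(\lnot_L)
\qquad
\frac{\Gamma, A \vdash \Delta}{\Gamma \vdash \lnot A, \Delta}\;(\lnot_R)
```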

Or
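In the standard multi-conclusion presentation, a disjunction on the left splits the proof into two cases, while on the right both disjuncts can simply be kept as conclusions:

```latex
\frac{\Gamma, A \vdash \Delta \qquad \Gamma, B \vdash \Delta}{\Gamma, A \lor B \vdash \Delta}\;(\lor_L)
\qquad
\frac{\Gamma \vdash A, B, \Delta}{\Gamma \vdash A \lor B, \Delta}\;(\lor_R)
```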

And
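Dually, in the standard presentation a conjunction on the left keeps both conjuncts as hypotheses, while on the right it splits the proof into two branches:

```latex
\frac{\Gamma, A, B \vdash \Delta}{\Gamma, A \land B \vdash \Delta}\;(\land_L)
\qquad
\frac{\Gamma \vdash A, \Delta \qquad \Gamma \vdash B, \Delta}{\Gamma \vdash A \land B, \Delta}\;(\land_R)
```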

Implication
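The standard rules for implication read as follows; note how the right rule simply moves the antecedent to the left of the sequent:

```latex
\frac{\Gamma \vdash A, \Delta \qquad \Gamma, B \vdash \Delta}{\Gamma, A \to B \vdash \Delta}\;(\to_L)
\qquad
\frac{\Gamma, A \vdash B, \Delta}{\Gamma \vdash A \to B, \Delta}\;(\to_R)
```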

Examples of proofs

Proofs in sequent calculus are generally way simpler and more straightforward than in natural deduction. For instance, try to show p or (not p) (the law of excluded middle) in sequent calculus.

Try to prove the following De Morgan's law in sequent calculus: not (p or q) -> (not p) and (not q)

Try to prove the contraposition law in sequent calculus: (p -> q) -> (not q -> not p)

Try to prove Peirce's law: ((p -> q) -> p) -> p

Try to prove the definition of negation using implication: (not p -> (p -> bottom)) and ((p -> bottom) -> (not p))
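As a worked example, here is a short derivation of the excluded middle in the standard multi-conclusion LK rules (the rule names are a common convention, not necessarily the ones used on this page):

```latex
\frac{\dfrac{\dfrac{}{p \vdash p}\;(\mathrm{ax})}{\vdash p, \lnot p}\;(\lnot_R)}{\vdash p \lor \lnot p}\;(\lor_R)
```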

While the proofs are not that much shorter than in natural deduction, you can see that they are much more guided by the shape of the formulas. This is in part due to the subformula property of the sequent calculus: each formula appearing in the premises of a rule is a subformula of a formula appearing in its conclusion.

This observation almost directly gives a proof search algorithm. This might tickle something in your brain if you know a few notions of complexity theory: we know for sure that the validity problem for propositional logic is coNP-complete, so it would be an enormous breakthrough to show that it is in P thanks to a hypothetical polynomial-time proof search algorithm.

However, be reassured: this will not happen with sequent calculus, because the proof search algorithm we just found has exponential complexity in the worst case. For example, try to prove the following (quite boring) sequent: |- p and q, p and not q, not p and q, not p and not q

First-order logic

First order rules

Just like for the propositional rules, we have a right and a left rule for each of the existential and universal quantifiers. This gives four rules in total.

Existential quantifier

When the formula is on the right, you can instantiate the variable with any term. Note, however, that the substitution must not capture variables: an application of the rule where the substituted term contains a variable that gets bound is invalid. You can always get around this issue by renaming bound variables. On the left, in contrast, you must keep the variable as it is. Be careful, though: you cannot apply this rule if the variable appears free anywhere in the premise sequent.
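In the standard LK presentation, the two rules read as follows, where A[t/x] denotes the (capture-avoiding) substitution of the term t for x, and the side condition on the left rule forbids x from occurring free in the rest of the sequent:

```latex
\frac{\Gamma \vdash A[t/x], \Delta}{\Gamma \vdash \exists x\, A, \Delta}\;(\exists_R)
\qquad
\frac{\Gamma, A \vdash \Delta}{\Gamma, \exists x\, A \vdash \Delta}\;(\exists_L)\quad x \notin \mathrm{FV}(\Gamma, \Delta)
```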

Universal quantifier

The rules and their restrictions are symmetric here: on the left you can instantiate the variable with any term, while on the right the variable must not appear free in the rest of the sequent.
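In the standard presentation, this gives (with the same notation for substitution and the same kind of freshness side condition as for the existential quantifier):

```latex
\frac{\Gamma, A[t/x] \vdash \Delta}{\Gamma, \forall x\, A \vdash \Delta}\;(\forall_L)
\qquad
\frac{\Gamma \vdash A, \Delta}{\Gamma \vdash \forall x\, A, \Delta}\;(\forall_R)\quad x \notin \mathrm{FV}(\Gamma, \Delta)
```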

First examples of first-order proofs

Try to prove the following formula: (forall x P(x)) -> (exists x P(x))

Try to prove the distributivity of the universal quantifier over conjunction: forall x (P(x) and Q(x)) -> (forall x P(x)) and (forall x Q(x))

Try to prove the duality of the existential and universal quantifiers: not (forall x P(x)) -> exists x (not P(x))

Though those rules may seem satisfying, they are not sufficient to prove everything. Take a look at the following formula: exists x (forall y (P(y) -> P(x))). When we try to prove it using only the previous rules, we fail quite quickly. But this formula is indeed valid (try to convince yourself of this!). A way to solve this problem is to add new structural rules, called contraction rules.

Contraction rules

As before, there is a left contraction rule as well as a right one. They are pretty straightforward: each lets you duplicate a formula on its side of the sequent. Note that their introduction completely shatters any hope of getting a terminating proof search algorithm for first-order sequents, since it is a priori very hard to know when to apply them, and how many times. This was to be expected, though, since first-order logic is not decidable.
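In the standard presentation, the two contraction rules are:

```latex
\frac{\Gamma, A, A \vdash \Delta}{\Gamma, A \vdash \Delta}\;(c_L)
\qquad
\frac{\Gamma \vdash A, A, \Delta}{\Gamma \vdash A, \Delta}\;(c_R)
```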

First-order proofs using contraction rules

Now equipped with these new rules, try to prove the previous formula: exists x (forall y (P(y) -> P(x))) (tip: you only have to apply contraction once!)

Try to prove the drinker's paradox: in a bar, there is always a person such that, if this person drinks, then everyone drinks.
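As a sketch, here is one possible bottom-up derivation of the first formula, read from the goal upward; the names c, y and z are arbitrary choices for illustration, and the rule names follow the usual LK conventions:

```latex
\begin{align*}
& \vdash \exists x\, \forall y\, (P(y) \to P(x))
  && \text{apply } c_R \\
& \vdash \exists x\, \forall y\, (P(y) \to P(x)),\ \exists x\, \forall y\, (P(y) \to P(x))
  && \exists_R \text{ with } x := c \text{ on the first copy} \\
& \vdash \forall y\, (P(y) \to P(c)),\ \exists x\, \forall y\, (P(y) \to P(x))
  && \forall_R \text{ (fresh } y\text{)} \\
& \vdash P(y) \to P(c),\ \exists x\, \forall y\, (P(y) \to P(x))
  && \to_R \\
& P(y) \vdash P(c),\ \exists x\, \forall y\, (P(y) \to P(x))
  && \exists_R \text{ with } x := y \text{ (renaming the bound } y \text{ to } z\text{)} \\
& P(y) \vdash P(c),\ \forall z\, (P(z) \to P(y))
  && \forall_R \text{ (fresh } z\text{)} \\
& P(y) \vdash P(c),\ P(z) \to P(y)
  && \to_R \\
& P(y),\ P(z) \vdash P(c),\ P(y)
  && \text{ax on } P(y)
\end{align*}
```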

Sandbox

In the following box, you can write any proof you want: