# Section 8: Proofs

The inference rules that we have covered so far suffice to prove whatever is provable in a given environment. (The above claim itself requires a proof, but we do not discuss it here; it falls into the area of metalogic, that is, knowledge about logic. Our interest here is in the practical aspects of proving.)

Any justification of a formula in MIZAR may take the form of a proof. In essence, proofs are only a syntactic variation on recording the application of inference rules, one which offers some convenient shortcuts in conducting arguments. The syntax of proofs is similar to the syntax of reasonings.

 CompactStatement   ::= [ then ] Proposition SimpleJustification | Proposition Proof

 Proof   ::= proof Reasoning end

## 8.1: II, NI, PbyC, UI Rules Revisited

Using examples, we show how the proof syntax is used for recording the II, NI, PbyC, and UI rules.

### 8.1.1: Proof of an Implication

The proof form of the II rule is identical to the now version of the II rule (see section 6.1). Indeed, what goes between now and end now goes between proof and end, and altogether we save a label and a by reference.

α implies β proof

assume A: α;
.
.
.
thus β Justification
end

Let us look at the proof form of the example from section 6.1. (Note the place where we announce the rule name.)

```
environ
given x being PERSON;
Ax1: P[x] implies Q[x];
Ax2: Q[x] implies R[x];
begin

P[x] implies R[x] proof            == Rule: II
assume A: P[x];
1a: Q[x] by A, Ax1;      == Rule: IE
thus R[x] by 1a, Ax2           == Rule: IE
end
```

Whatever we can prove with a proof we can also justify using the now reasoning. There is a psychological difference, though. To use a proof we have to state first what we are proving, while in the case of the now reasoning we can do it later, if we wish, when we state explicitly what has been proven. (Certainly, in both cases the MIZAR checker will tell us whether our statements are justified.)

For example, let us prove this theorem about a PERSON called x.

( (P[x] implies Q[x]) & (Q[x] implies R[x]) & P[x] ) implies R[x]

We do it both ways, presenting the proof first.

```
environ
given x being PERSON;
begin

== The proof version

( (P[x] implies Q[x]) & (Q[x] implies R[x]) & P[x] ) implies R[x]
proof         == Rule: II
assume A: (P[x] implies Q[x]) & (Q[x] implies R[x]) & P[x];
1: P[x] by A;                == CE
2: P[x] implies Q[x] by A;   == CE
3: Q[x] by 1, 2;             == IE
4: Q[x] implies R[x] by A;   == CE
thus R[x] by 3,4;                   == IE
end;

== The now version

1: now
assume A: (P[x] implies Q[x]) & (Q[x] implies R[x]) & P[x];
1: P[x] by A;                == CE
2: P[x] implies Q[x] by A;   == CE
3: Q[x] by 1, 2;             == IE
4: Q[x] implies R[x] by A;   == CE
thus R[x] by 3,4;                   == IE
end;

( (P[x] implies Q[x]) & (Q[x] implies R[x]) & P[x] ) implies R[x]
by 1;     == II
```

The justification of the main proposition is in both cases via hypothetical reasoning (Implication Introduction).

### 8.1.2: Proof of a Negation

Instead of applying the NI rule as presented in section 6.2.2 we may present the following proof:

not α proof

assume Contra: α;
.
.
.
thus contradiction Justification
end

Compare the proof and the now versions of the example from section 6.2.2.

```
environ
given glass being CONTAINER;
given juice being DRINK;
A1: Full[glass] implies Fresh[juice];
A2: not Fresh[juice];
begin       == We show that: not Full[glass]

== The proof version

not Full[glass] proof                 == NI
assume A: Full[glass];
1: Fresh[juice] by A, A1;  == IE
thus contradiction by 1,A2;       == ContrI
end;

== The now version

S: now
assume A: Full[glass];
1: Fresh[juice] by A, A1;  == IE
thus contradiction by 1,A2;       == ContrI
end;
not Full[glass] by S                  == NI
```

### 8.1.3: Proof by Contradiction

There is also a full correspondence between the PbyC rule and the proof version of reasoning by contradiction.

α proof

assume Contra: not α;
.
.
.
thus contradiction Justification
end

The example from section 6.2.2 takes the following proof form.

```
environ
given glass being CONTAINER;
begin

(not Full[glass] implies Full[glass]) implies Full[glass]
proof                                                       == II
assume a: not Full[glass] implies Full[glass];
S: now                                           == NI
assume b: not Full[glass];
1: Full[glass] by a,b;                == MP
thus contradiction by 1,b;                   == ContrI
end;
T: not not Full[glass] by S;                     == NI
thus Full[glass] by T                                   == NE
end;
```

### 8.1.4: Universal Introduction

The following proof schema is fully analogous to the UI rule.

for x being T holds α proof

let x be T;
.
.
.
thus α Justification
end

And now, employing a proof, the example from section 7.7 looks as follows:

```
environ
given Fred being PERSON;
given Hello being STATEMENT;

KnightSpeak: for y being PERSON holds
for s being STATEMENT holds
Knight[y] & Tells[y,s] implies True[s];

begin

for y being PERSON holds
Knight[y] & Tells[y,Hello] implies True[Hello]               == UI
proof
let y be PERSON;
1: for s being STATEMENT holds
Knight[y] & Tells[y,s] implies True[s] by KnightSpeak; == UE
thus Knight[y] & Tells[y,Hello] implies True[Hello] by 1;     == UE
end
```

## 8.2: Proof Structures

We have seen above how the hypothetical reasoning (II), Negation Introduction (NI), and Universal Introduction (UI) rules take on the shape of a proof. Thus we know how to write a proof for a formula with an implication, a negation, or a universal quantifier as the main connective. We also know how to prove an arbitrary formula by contradiction. MIZAR allows us to write a proof for an arbitrary formula, even if its structure makes it suitable for a SimpleJustification on the basis of some rule of inference.

### 8.2.1: Conjunction

In order to prove a conjunction (directly) we have to demonstrate the truth of all its conjuncts. One can always use the CI rule. But one can employ the following proof structure.

α1 & α2 & . . . & αn proof
.
thus α1 Justification;
.
thus α2 Justification;
.
.
.
thus αn Justification
end

Usually, there is no point in employing this proof structure---applying the CI rule results in a shorter text.
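As an illustration, here is a sketch in the style of the earlier examples (the predicates P and Q and the axioms A1, A2 are hypothetical) showing both ways of justifying the same conjunction:

```
environ
given x being PERSON;
A1: P[x];
A2: Q[x];
begin

== The proof structure: one thus per conjunct.
P[x] & Q[x] proof
thus P[x] by A1;
thus Q[x] by A2
end;

== The shorter version using the CI rule.
P[x] & Q[x] by A1, A2;        == CI
```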

### 8.2.2: Implication

The hypothetical reasoning forms the direct proof structure for a conditional formula. At this point we would like to mention the idea of an indirect proof. You may note the affinity between the II rule and the Modus Ponens rule. One can wonder whether there is a similar proof structure corresponding to the Modus Tollens rule. We would like to permit the following, indirect, structure for a proof of a conditional:

α implies β proof

assume not β;
.
.
.
thus not α Justification
end

From the Modus Tollens rule we can sense that such a structure is right. And this is the case. Unfortunately, MIZAR does not recognize it as a proper structure for proving a conditional. Too bad! But we can always find a way around the problem and do the indirect proof with the now reasoning as follows:

A: now
assume not β;
.
.
.
thus not α Justification
end;
B: not β implies not α by A;
α implies β by B == Reverse Implication
== MIZAR will accept the following as well:
α implies β by A == No single rule suffices

All proofs that are not by contradiction and are not indirect are called direct proofs.

### 8.2.3: Disjunction

The direct proof of a single disjunction looks quite `indirect', although it is not. We already know (from the example in section 6.5) which conditional formula is equivalent to a given disjunction. MIZAR permits the following as a proof of a disjunction.

α or β proof

assume not α;
.
.
.
thus β Justification
end

Unfortunately, we cannot start with the negation of the second disjunct. But we can always call on the now reasoning for help.
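For example, with hypothetical predicates P and Q and an assumed axiom Ax, the now reasoning lets us argue from the negation of the second disjunct and then state the disjunction. This is a sketch; whether the checker accepts the final step in a single by depends on its deductive power:

```
environ
given x being PERSON;
Ax: not Q[x] implies P[x];
begin

S: now
assume A: not Q[x];
thus P[x] by A, Ax;        == IE
end;
P[x] or Q[x] by S;
```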

### 8.2.4: Equivalence

The direct proof of an equivalence must take the following form:

α iff β proof

.
.
.
thus α implies β Justification;
.
.
.
thus β implies α Justification;
end

Unfortunately, the order of these thus'es matters to MIZAR.
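For instance, again with hypothetical predicates and axioms, a direct proof of an equivalence concludes the two implications in this order:

```
environ
given x being PERSON;
Ax1: P[x] implies Q[x];
Ax2: Q[x] implies P[x];
begin

P[x] iff Q[x] proof
thus P[x] implies Q[x] by Ax1;
thus Q[x] implies P[x] by Ax2
end;
```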

## 8.3: Omitting Proofs

The correctness of a formula α may be established by SimpleJustification without writing an explicit proof for it. Whenever we get the MIZAR processor to accept

α by References

then we can also justify α using the following proof structure:

α proof
.
.
.
thus α by References
end

Since the latter is longer, we will prefer the former way of justification. Note how the above proof structure is similar to the one we discussed for conjunction in 8.2.1. Indeed, any formula can be seen as a conjunction with exactly one conjunct, no matter how strange that sounds.

## 8.4: Incorporating Proofs

A proof nested within another proof can be incorporated directly into the encompassing proof. We start with an example, which is a variation of one of the proofs from section 6.5.

```
environ
given S being STUDENT;
given M being Money;
begin
(Need[M] implies Work[S] & Save[M])
implies
(Need[M] implies Work[S]) & (not Save[M] implies not Need[M])
proof                                                             == II
assume A:( Need[M] implies Work[S] & Save[M] );

== At this point we have to prove the consequent of the original formula.
== Since it is a conjunction, we do it in two separate thus'es.

thus Need[M] implies Work[S] proof                            == II
assume A1: Need[M];
1a: Work[S] & Save[M] by A, A1;                 == IE
thus Work[S] by 1a;                                    == CE
end;

thus not Save[M] implies not Need[M] proof                    == II
assume A1: not Save[M];
2a: Need[M] implies Save[M] proof                == II
assume A2: Need[M];
2b: Work[S] & Save[M] by A, A2;    == IE
thus Save[M] by 2b;                       == CE
end;
thus not Need[M] by A1, 2a;                             == MT
end;
end;
```

This proof can still be refined by incorporating the last part of the conclusion into the main proof.

```
environ
given S being STUDENT;
given M being Money;
begin
(Need[M] implies Work[S] & Save[M])
implies
(Need[M] implies Work[S]) & (not Save[M] implies not Need[M])
proof                                                             == II
assume A:( Need[M] implies Work[S] & Save[M] );

== At this point we have to prove the consequent of the original formula.
== It is a conjunction. The first conjunct is concluded by a thus.

thus Need[M] implies Work[S] proof                            == II
assume A1: Need[M];
1a: Work[S] & Save[M] by A, A1;                 == IE
thus Work[S] by 1a;                                    == CE
end;

== At this point the second conjunct has to be concluded.
== Since it is an implication we incorporate the proof of the implication
== into the main proof.
== We are really doing the implication introduction from here on.

assume A1: not Save[M];

== What remains to be proven at this point is: not Need[M]

2a: now                                                == II
assume A2: Need[M];
2b: Work[S] & Save[M] by A, A2;            == IE
thus Save[M] by 2b;                               == CE
end;

thus not Need[M] by A1, 2a;                                   == MT
end;
```

The key to understanding all this is the question: What remains to be proven in a proof? It is a formula. Let us introduce a name for it: that which still remains to be proven in a proof we call the current thesis. Initially, right after the proof, the current thesis is the original formula that we are proving. Each assumption (assume) and conclusion (thus) that we make affects the thesis. The precise explanation of what the current thesis is at each point is pretty obscure; therefore, we will rely more on your intuition.

Assume we are at the beginning of a proof. The current thesis is the original formula. If we go through an assumption (assume), then it looks like we are applying hypothetical reasoning, and the current thesis had better be an implication. After the assumption, the current thesis is the consequent of the previous thesis. And now, if the current thesis is an implication, we may expect another assumption; if not, we have to conclude (thus) whatever remains to be proven. At any point in a proof you can assume the negation of the current thesis, which means that you are switching to a proof by contradiction.

MIZAR has been equipped with a special contraption to talk about the current thesis.

## 8.5: thesis

We know the fundamental proof structures for propositional formulas. We also know that we can use a proof by contradiction. For a conditional formula we can present an indirect reasoning. And we know how to structure a proof to get a universally quantified formula.

The previous example will be used to introduce the special formula named thesis. In a proof, thesis denotes the current thesis, that is, the formula that remains to be concluded in order to complete the innermost proof. The proof that can always be omitted (section 8.3) can then be rewritten (although there is no need to write it in the first place) as:

α proof
.
.
.
thus thesis by References
end

Outside of a proof, thesis must not be used, as it is meaningless there. It is quite natural to use thesis in hypothetical reasoning.

α implies β proof
.             == thesis is here: α implies β
assume α;
.             == thesis is here: β
.
.
thus thesis Justification
.             == thesis is here: not contradiction
.             == which does not have to be concluded by a thus
end

It is quite convenient to use thesis in a proof by contradiction, as it may shorten our proofs.

α proof
.             == thesis is here: α
assume not thesis;
.             == thesis is here: contradiction
.
.
thus thesis Justification
.             == thesis is here: not contradiction
.             == which does not have to be concluded by a thus
end

It is interesting to see the behaviour of thesis when doing universal introduction. Let us look at an example.

```
environ
1: for p being Person holds Knight[p] implies Wise[p];
2: for p being Person holds Wise[p] implies Rich[p];
begin

for p being Person holds Knight[p] implies Rich[p] proof      == UI
let P be Person;
== thesis   <=>   Knight[P] implies Rich[P]
thus thesis proof                                          == II
== thesis   <=>   Knight[P] implies Rich[P]
assume a: Knight[P];
== thesis   <=>   Rich[P]
b: Wise[P] by 1, a;                         == UE, IE
c: Rich[P] by 2, b;                         == UE, IE
thus thesis by c                                   == RE
end;   == thesis   <=>  not contradiction
end;
== The inner proof can be shortened.

for p being Person holds Knight[p] implies Rich[p] proof      == UI
let P be Person;
== thesis   <=>   Knight[P] implies Rich[P]
thus thesis proof                                          == II
== thesis   <=>   Knight[P] implies Rich[P]
assume a: Knight[P];
== thesis   <=>   Rich[P]
b: Wise[P] by 1, a;                         == UE, IE
thus thesis by 2, b;                               == UE, IE
end;   == thesis   <=>  not contradiction
end;
== The inner proof can be incorporated entirely.

for p being Person holds Knight[p] implies Rich[p] proof      == UI
let P be Person;
== thesis   <=>   Knight[P] implies Rich[P]
== Since the thesis is a conditional we can use
== hypothetical reasoning to get it.
assume a: Knight[P];                                       == II
== thesis   <=>   Rich[P]
b: Wise[P] by 1, a;                                      == UE, IE
thus thesis by 2, b
end;
```

## 8.6: MIZAR Problems on Knights and Knaves

### 8.6.1: Puzzle 1

```
environ

given A, B being PERSON;

Opp: for a being PERSON holds Knight[a] iff not Knave[a];

== These two statements are a translation of the sentence, said by A,
== " I am a knave or B is a knight"

A1:  Knight[A] implies ( Knave[A] or Knight[B] );
A1': Knave[A] implies not ( Knave[A] or Knight[B] );

begin

== The question is what are A and B?

== First we argue that A must be a knight.

C1: Knight[A]
proof
assume Contra: not Knight[A];
C1: Knave[A] by Contra, Opp;
C2: not ( Knave[A] or Knight[B] ) by C1, A1';
C3: not Knave[A] by C2;
C4: Knight[A] by C3, Opp;
thus contradiction by Contra, C4;
end;

== Now we determine what B is.

D1: Knight[B]
proof
S1: Knave[A] or Knight[B] by C1, A1;
S2: not Knave[A] by C1, Opp;
thus thesis by S1, S2;
end;
```

### 8.6.2: Puzzle 2

```
environ

given A, B, C being PERSON;

Opp: for a being PERSON holds Knight[a] iff not Knave[a];

== These two are a translation of the sentence, said by A,
== "All of us are knaves."
A1:  Knight[A] implies ( Knave[A] & Knave[B] & Knave[C] );
A1':  Knave[A] implies not ( Knave[A] & Knave[B] & Knave[C] );

== These two are a translation of the sentence, said by B,
== "Exactly one of us is a knight."
B1: Knight[B] implies ( (Knight[A] & Knave[B] & Knave[C]) or
(Knave[A] & Knight[B] & Knave[C]) or
(Knave[A] & Knave[B] & Knight[C]) );
B1': Knave[B] implies not ( (Knight[A] & Knave[B] & Knave[C]) or
(Knave[A] & Knight[B] & Knave[C]) or
(Knave[A] & Knave[B] & Knight[C]) );

begin

== The question is what are A, B and C?

== First we argue that A must be a knave.
A2: Knave[A]
proof
assume Contra: not Knave[A];
1: Knight[A] by Contra, Opp;
2: Knave[A] by 1, A1;
thus contradiction by Contra, 2;
end;

== A useful sentence:
B2: ( (Knight[A] & Knave[B] & Knave[C]) or
(Knave[A] & Knight[B] & Knave[C]) or
(Knave[A] & Knave[B] & Knight[C]) )
implies not Knave[B] by B1';

== Then we argue that if C is a knight then B must be a knight.
C2: Knight[C] implies Knight[B]
proof
assume S1: Knight[C];
S2: Knave[A] & Knight[C] by S1, A2;
S3: not Knave[B]
proof
assume Contra: Knave[B];
31: Knave[A] & Knave[B] & Knight[C]
by S2, Contra;
32: not Knave[B] by B2, 31;
thus contradiction by Contra, 32;
end;
thus Knight[B] by Opp, S3;
end;

== Then we argue that if C is a knave then B must still be a knight.
C3: Knave[C] implies Knight[B]
proof
assume S1: Knave[C];
thus Knight[B]
proof
assume Contra: not Knight[B];
1: Knave[B] by Opp, Contra;
2: Knave[A] & Knave[B] & Knave[C] by A2, S1, 1;
3: not Knave[A] by A1', 2;
thus contradiction by 2, 3;
end;
end;

== A tautology:
C4: Knight[C] or not Knight[C];
C5: Knight[C] or Knave[C] by C4, Opp;

== No matter what C is, B is a Knight.
B3: Knight[B] by C2, C3, C5;

C6: Knave[C] by A1, A1', B1, B1', B3, C5;

Conclusion: Knave[A] & Knight[B] & Knave[C] by A2, B3, C6;
```

### 8.6.3: A theorem about Knights and Knaves

```
environ

== Opp says that every person is either a Knight or a Knave, but
== not both
Opp: for a being PERSON holds Knight[a] iff not Knave[a];

== Veracity says that a person is a knight if and only if every statement
== they say is true.
Veracity: for a being PERSON holds
Knight[a] iff ( for s being STATEMENT holds
Tells[a,s] implies True[s] );

begin

== Theorem1 says that if some person, p, is a knave then there is at
== least one statement s, uttered by p, that is not true
== But it does not say that every statement of p is false. Can this
== latter assertion be proven with the above environment?

Theorem1: for p being PERSON holds
Knave[p] implies ( ex s being STATEMENT st
Tells[p,s] & not True[s] )
proof
let p be PERSON;

== We have to prove the implication.
assume A1: Knave[p];

== Now we have to prove the consequent.
C0: not Knight[p] by A1, Opp;

C1: Knight[p] iff ( for s being STATEMENT holds
Tells[p,s] implies True[s] ) by Veracity;

C2: not ( for s being STATEMENT holds
Tells[p,s] implies True[s] ) by C1, C0;

C3: ex s being STATEMENT st not
( Tells[p,s] implies True[s] ) by C2;

consider t being STATEMENT such that
C4: not ( Tells[p,t] implies True[t] ) by C3;

C5: Tells[p,t] & not True[t] by C4;

thus thesis by C5;
end;
```

### 8.6.4: Three more theorems

```
environ

given Hello being STATEMENT;
given Fred being PERSON;

== Hello is a true statement.
Fact1: True[Hello];

== Fred always lies.
Fact2: for s being STATEMENT holds Tells[Fred,s] implies not True[s];

== Opp says that every person is either a Knight or a Knave, but
== not both.

Opp: for a being PERSON holds Knight[a] iff not Knave[a];

== Veracity says that a person is a knight if and only if every statement
== they say is true

Veracity: for a being PERSON holds
Knight[a] iff ( for s being STATEMENT holds
Tells[a,s] implies True[s] );

== Consistency says that if at least one of the things a person says is
== false then everything they say is false.

Consistency: for a being PERSON holds (
(ex s being STATEMENT st Tells[a,s] & not True[s]) implies
(for s being STATEMENT holds Tells[a,s] implies not True[s]) );

begin

== Theorem2 says that Fred cannot say hello.
Theorem2: not Tells[Fred,Hello] proof
== Specialize Fact2 to the statement Hello.
C0: Tells[Fred, Hello] implies not True[Hello] by Fact2;
thus thesis by C0, Fact1;
end;

== Theorem3 says that if Fred says nothing at all, then he is a knight!
Theorem3: (for s being STATEMENT holds not Tells[Fred,s])
implies Knight[Fred] proof
assume A1: (for s being STATEMENT holds not Tells[Fred,s]);
C0: for s being STATEMENT holds Tells[Fred,s] implies True[s]
proof
let s be STATEMENT;
C1: not Tells[Fred,s] by A1;
thus thesis by C1;
end;
C2: Knight[Fred] iff ( for s being STATEMENT holds
Tells[Fred,s] implies True[s] ) by Veracity;
thus thesis by C0, C2;
end;

== Theorem4 says that if some person, a, is a knave then every
== statement s uttered by a is false.

Theorem4: for p being PERSON holds
Knave[p] implies ( for s being STATEMENT holds
Tells[p,s] implies not True[s] )
proof
let p be PERSON;

== Now we have to prove the implication.
assume A1: Knave[p];

== Now we have to prove the consequent.
C0: not Knight[p] by A1, Opp;

C1: Knight[p] iff ( for s being STATEMENT holds
Tells[p,s] implies True[s] ) by Veracity;

C2: not ( for s being STATEMENT holds
Tells[p,s] implies True[s] ) by C1, C0;

C3: ex s being STATEMENT st not
( Tells[p,s] implies True[s] ) by C2;

consider t being STATEMENT such that
C4: not ( Tells[p,t] implies True[t] ) by C3;

C5: Tells[p,t] & not True[t] by C4;

C6: ex s being STATEMENT st Tells[p,s] & not True[s] by C5;

C7: (ex s being STATEMENT st Tells[p,s] & not True[s]) implies
(for s being STATEMENT holds Tells[p,s] implies not True[s])
by Consistency;

thus thesis by C6,C7;
end;
```

### 8.6.5: And even more theorems

```
== Here is the intended interpretation of the predicates Tells, Likes, and
== Conspire.
== Tells[Person, Statement] - Tells[a,s] means person a tells statement s.
== Likes[Person, Person] - Likes[a,b] means person a likes person b.
== Conspire[Person, Person, Person] - Conspire[x,y,z] means both x and y
==    do not like z.

environ
== These reserve statements save us the trouble of specifying
== the sort of the variables being quantified.
reserve x, y, z for Person;
reserve r, s, t for Statement;

given A, B, C, D, E, F, G being Person;
given Lie, Promise, Blabla, Insult, Prayer being Statement;

a1:     for x holds Knave[x] or Knight[x] or Normal[x];
a1':    for x holds Knave[x] implies not (Knight[x] or Normal[x]);
a1'':   for x holds Knight[x] implies not (Knave[x] or Normal[x]);
a1''':  for x holds Normal[x] implies not (Knight[x] or Knave[x]);

a2:     for x, y holds Likes[x, y] & Knight[x] implies Knave[y];
a2':    for x, y holds Likes[x, y] & Knave[x] implies not Normal[y];
a2'':   for x, y holds not Likes[x, y] & Normal[x] implies Knight[y];

a3:     for x, y, z holds Conspire[x, y, z] implies
not (Likes[x, z] or Likes[y, z]);
a3':    for x, y, z holds
not Likes[x, z] & not Likes[y, z] &
not (Likes[z, x] or Likes[z, y])
implies Conspire[x, y, z];

a4:     for x, s holds Knight[x] & Tells[x, s] implies True[s] & s <> Insult;
a4':    for x, s holds Knave[x] & Tells[x, s] implies False[s] or s = Blabla;
a4'':   for x holds Normal[x] implies Tells[x, Promise];

a5:     for x being Person holds Likes[x, D];
a5':    not False[Prayer];
a5'':   Prayer <> Blabla & Promise <> Blabla;
a5''':  for x holds Knave[x] implies not Likes[E, x];

a6:     Likes[F, G] & Likes[G, F];
a6':    Normal[F];

begin

Exercise1:

not Knave[G] proof
1: Normal[F] by a6';
2: Likes[G, F] & Knave[G] implies not Normal[F] by a2';
3: not ( Likes[G, F] & Knave[G] ) by 1, 2;
4: not Likes[G, F] or not Knave[G] by 3;
thus not Knave[G] by a6, 4;
end;

Exercise2:

not Conspire[F, F, G] proof
1: Likes[F, G] by a6;
2: Conspire[F, F, G] implies not (Likes[F, G] or Likes[F, G]) by a3;
thus thesis by 1,2;
end;

Exercise3:

not Conspire[A, E, D] proof
1: Likes[A, D] by a5;
2: Conspire[A, E, D] implies not (Likes[A, D] or Likes[E, D]) by a3;
thus thesis by 1,2;
end;

Exercise4:

not Normal[C] & Tells[C, Promise]
implies True[Promise] or False[Promise] proof
assume A: not Normal[C] & Tells[C, Promise];
1: Knave[C] or Knight[C] by A, a1;
2: Knight[C] implies True[Promise] by A, a4;
3: Knave[C] implies False[Promise] by A, a4', a5'';
thus thesis by 1,2,3;
end;

Exercise5:

(Likes[E, A] iff Likes[E, B]) & Knave[B] & Normal[E] & Tells[A, Promise]
implies True[Promise]   proof
assume A: (Likes[E, A] iff Likes[E, B]) & Knave[B] & Normal[E] &
Tells[A, Promise];

1: Knave[B] implies not Likes[E, B] by a5''';
2: not Likes[E, B] by A, 1;
3: not Likes[E, A] by A, 2;

4: Knight[A] by 3, A, a2'';
5: Knight[A] implies True[Promise] by a4, A, a5'';

thus thesis by 5, 4;
end;

Exercise6:

Likes[B, A] & Knight[B] & Tells[A, Promise]
implies False[Promise]  proof
1: Likes[B, A] & Knight[B] implies Knave[A] by a2;
2: Knave[A] & Tells[A, Promise] implies False[Promise] by a4', a5'';
thus thesis by 1,2;
end;

Exercise7:

not Tells[D, Promise] implies Knight[D] proof
assume A: not Tells[D, Promise];
1: not Normal[D] by a4'', A;
2: Likes[E, D] by a5;
3: Knave[D] implies not Likes[E,D] by a5''';
4: not Knave[D] by 2, 3;
thus thesis by 1, 4, a1;
end;

Exercise8:

Knave[B] & Knave[C] & Conspire[A, B, E] & Conspire[A, C, E]
implies Conspire[B, C, E]       proof
assume A: Knave[B] & Knave[C] & Conspire[A, B, E] & Conspire[A, C, E];
1: not Likes[E, B] & not Likes[E, C] by A, a5''';
2: not Likes[B, E] by a3, A;
3: not Likes[C, E] by a3, A;
thus thesis by 1, 2, 3, a3';
end;

Exercise9:

Conspire[D, A, E] & Tells[E, Insult]
implies not (Normal[D] & Normal[A]) proof
assume A: Conspire[D, A, E] & Tells[E, Insult];
1: not Likes[D, E] & not Likes [A, E] by A, a3;
2: Knight[E] implies Insult <> Insult by A, a4;
3: not Knight[E] by 2;
4: not Normal[A] by a2'', 1, 3;
5: not Normal[D] by a2'', 1, 3;
thus thesis by 4, 5;
end;

Exercise10:

Tells[A, Prayer] & not Tells[A, Blabla] & not Tells[A, Promise]
implies Knight[A]       proof
assume A: Tells[A, Prayer] & not Tells[A, Blabla] & not Tells[A, Promise];
1: not False[Prayer] & not ( Prayer = Blabla )
implies not (Knave[A] & Tells[A,Prayer]) by a4';
2: not Knave[A] or not Tells[A, Prayer] by 1, a5', a5'';
3: not Knave[A] by 2, A;
4: not Normal[A] by a4'', A;
thus thesis by 3, 4, a1;
end;
```