Now that we have demonstrated the rules of replacement and additional derived rules, we can explore doing proofs in SL without truth tables. Why should we abandon truth tables for another method? In the last chapter, we saw that truth tables can become quite complicated when more than two statements are being evaluated. What if we have more than two or three statements, and we want to derive conclusions? We could use truth tables, but we would need sixteen, thirty-two, or exponentially more possible cases of truth values, which would be needlessly confusing. By learning a simplified method of proof that relies on rules already demonstrated by truth tables, we can eliminate using T and F altogether.

Let us say that we know several expressions are given as true, such as (A > B) and (B > C), and we want to determine whether or not we can arrive at a particular conclusion, such as (A > C). We could prove that in all possible cases the Hypothetical Syllogism is true, but we could simply assume a single case, in which (A > B) and (B > C) are known to be true, and attempt to derive (A > C) from this single case, using the rules of replacement and derived rules we also know to be true. Consider that if I know (A > B), and (B > C), and (C > D), and (D > E), and (E > F), there should be a way of concluding that (A > F) without determining each truth value for each expression for sixty-four possible cases. This way, we can take a set of several complex expressions, such as A, (~A > (B ^ D)), (D v E), F, (G > H), etc., and draw further conclusions from any complicated set without determining all possible truth values for each expression.

A deductive proof always starts with one or more premises, with a bar underneath to show where the given and assumed premises end. Then, beneath the bar, each line is a conclusion derived from the previous premises and conclusions using a rule, which must be justified in standard notation.

There are three types of rules for doing deductive proofs in SL. First is **reiteration**, repeating something that has already been determined. This is even simpler than the first truth table we used to demonstrate negation. We are going to assume that a single simple premise, A, is true, and from this prove that A, our premise, is true:

Notice that we give each line of proof, including premises and each conclusion, a number to the left of the proof such that we can refer to it later, and we give each conclusion we derive from the premises a justification. In this case, we reiterated our only premise, A, and symbolized this justification as R1, a reiteration of line 1.
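Although no proof assistant appears in this book, reiteration can also be sketched as a one-line check in the Lean 4 proof assistant (a supplementary illustration, not the book's Fitch-style notation; the hypothesis h plays the role of line 1):

```lean
-- Reiteration: from the premise A (here the hypothesis h), conclude A.
example (A : Prop) (h : A) : A := h
```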

When my mother, aunt and uncle were very young and going on car trips, they would sing, “We’re here, because we’re here, because we’re here, because we’re here…” to pass the time and annoy their parents. This is also an example of reiteration.

Other than reiteration, all additional rules involve either **introduction** or **elimination**. Each of these rules, based on one or several previous lines, introduces or eliminates a connective, and each corresponds to an element we demonstrated in previous chapters with truth tables, whether a connective, a rule of replacement, or a derived rule. Here, to begin, is the rule for **conjunction introduction**, adding two lines together to introduce an AND:

Notice that with two premises, A and B, we can determine A ^ B. Our justification is that we can introduce (I) an AND (^) given lines 1 and 2. You can introduce an AND from lines that are derived from the premises. Let us say that our premises are A, B and C, and we want to derive (A ^ B) ^ C as our conclusion.

Notice that we first introduced an AND, then reiterated line 3 (C), and then introduced another AND to derive our desired conclusion. Some might say that line 5 is redundant, as we could have introduced an AND given lines 3 and 4 rather than reiterate line 3 as line 5, but we do this to show each step of our work carefully. It is better to use lines 23, 24, and 25 to derive line 26 than it is to use line 2, line 16, and line 25 to derive line 26. Reiteration may seem redundant, but it keeps our work simple. This makes deductive proof easier than truth tables, which sometimes require glancing back and forth between similar-looking columns, a practice that easily leads to mistakes.
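The same three-premise derivation can be checked in the Lean 4 proof assistant, where ∧ plays the role of the book's ^ (a supplementary sketch, not part of the SL system):

```lean
-- Conjunction introduction twice: from A, B, and C, derive (A ∧ B) ∧ C.
example (A B C : Prop) (h1 : A) (h2 : B) (h3 : C) : (A ∧ B) ∧ C :=
  And.intro (And.intro h1 h2) h3
```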

If we know A ^ B, we can also eliminate the conjunction, leaving us with either A or B alone. Here is the rule for both types of **conjunction elimination**:

If we started from the premise (A ^ B) ^ C, and we wanted to derive A for our conclusion, we would have to first eliminate C, leaving A ^ B, and then eliminate B, leaving A. While this may seem obvious and redundant, keep in mind that we will be dealing with many variables and complex expressions, and will need to carefully show our work at each step.
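As a supplementary check, the same pair of eliminations can be written in Lean 4, with ∧ standing in for ^:

```lean
-- Conjunction elimination twice: drop C from (A ∧ B) ∧ C, then drop B.
example (A B C : Prop) (h : (A ∧ B) ∧ C) : A :=
  h.left.left
```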

Disjunction introduction also seems silly in its simple form, but in complex proofs it can be quite useful. If we know A, we know that A v B, or A v C, or A v (~F > G), or A v anything else. If I know that my name is Roberto, then I know that my name is Roberto OR I am a purple chicken. This would mean that if we then proved my name is not Roberto, we would prove I am a purple chicken; but if we start knowing my name is Roberto and introduce a disjunction, we will not be able to prove my name is not Roberto without contradicting ourselves. This will prove useful when we are using conditional introduction and negation introduction and elimination, as we will see later in this chapter.
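In Lean 4, with ∨ in place of the book's v, disjunction introduction is a single step (a supplementary sketch, not SL notation):

```lean
-- Disjunction introduction: knowing A suffices for A ∨ B, whatever B may be.
example (A B : Prop) (h : A) : A ∨ B := Or.inl h
```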

**Disjunction elimination** works like the Material Conditional. If we know A v B, and we know ~A, then we know B. Similarly, if we know A v B, and we know ~B, we know A. Here is the rule for both types of disjunction elimination:
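One of the two forms can be sketched in Lean 4, where ∨ and ¬ correspond to the book's v and ~ (a supplementary check, not part of the SL system):

```lean
-- Disjunction elimination: from A ∨ B and ¬A, derive B.
example (A B : Prop) (h1 : A ∨ B) (h2 : ¬A) : B :=
  h1.elim (fun ha => absurd ha h2) (fun hb => hb)
```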

**Conditional introduction** involves adding another hypothetical level to a proof, another vertical and horizontal line to the right of the original vertical line of proof. As we have seen, any line that can be derived to the right of the original vertical line is true given that the premises are true. This means that our proof is already hypothetical. It may be that the premises are wrong, but if we work by the rules, step by step, our proof is valid even if the premises are wrong. Just like we supposed that there could be a hypothetical universe in which puppies are evil, we can add any number of hypothetical proofs within hypothetical proofs to show what would be the case if we start from any set of premises.

Notice that we start with the single premise, A. We then suppose, hypothetically, that B, introducing another vertical and horizontal line to indicate that we are on another hypothetical plane of existence within our proof. We do not know that B is true, given A, our original premise, but we are seeing what we could determine is true if we suppose that B. Because we know our original premise A, we can reiterate it. We can then introduce a conditional, B > A. In the same way, given the premise A v B, we can suppose ~A, derive B by disjunction elimination, and conclude ~A > B. Notice that in doing so we are not proving ~A, nor are we proving B; we are proving that, given the premise A v B, IF ~A, then B. Notice also that our justification uses a dash between line numbers, indicating a continuous range, rather than numbers separated by commas.
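Both conditionals described above can be checked in Lean 4, where → corresponds to the book's > (a supplementary sketch; in Lean, a hypothetical supposition becomes the argument of a function):

```lean
-- From the premise A, supposing B and reiterating A justifies B → A.
example (A B : Prop) (h : A) : B → A := fun _ => h

-- From the premise A ∨ B, supposing ¬A and deriving B justifies ¬A → B.
example (A B : Prop) (h : A ∨ B) : ¬A → B :=
  fun hna => h.elim (fun ha => absurd ha hna) (fun hb => hb)
```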

**Conditional elimination** does not require introducing another hypothetical level to our proof. If we know A, and we know A > B, then we know B. Here is the rule for conditional elimination. Let us say that we know A, B and (A ^ B) > C, and we want to derive C. First we must use conjunction introduction to combine A and B to get A ^ B, and then use conditional elimination to arrive at C:
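The derivation of C from A, B, and (A ^ B) > C can be checked in Lean 4 (→ for >, ∧ for ^; a supplementary sketch):

```lean
-- Conjunction introduction followed by conditional elimination.
example (A B C : Prop) (h1 : A) (h2 : B) (h3 : A ∧ B → C) : C :=
  h3 (And.intro h1 h2)
```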

We can use conditional introduction and elimination to prove the Hypothetical Syllogism. Given both of the premises A > B and B > C, we can hypothetically suppose A, derive B through conditional elimination because of A > B (line 1) and A (line 3), then similarly derive C through conditional elimination because of B > C (line 2) and B (line 4). We can then conclude, by conditional introduction, that A > C, given lines 3 through 5:
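The same derivation of the Hypothetical Syllogism can be checked in Lean 4, where the hypothetical supposition of A becomes a function argument (→ corresponds to the book's >):

```lean
-- Hypothetical Syllogism: suppose A, derive B from A → B, then C from B → C.
example (A B C : Prop) (h1 : A → B) (h2 : B → C) : A → C :=
  fun ha => h2 (h1 ha)
```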

**Biconditional introduction** combines two conditionals together, which requires us to first have presumed or derived both conditionals, and then make two separate hypothetical suppositions:

Notice that in the justification for line 7, the biconditional introduction and our conclusion, there are two sets of numbers, each containing a dash and separated by a comma. While it seems redundant to create two separate hypothetical suppositions when we could simply justify the biconditional introduction with the premises, lines 1 and 2, remember that proofs can get complicated and can involve many expressions and steps.
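In Lean 4, where ↔ corresponds to the book's <>, the two conditionals combine in one step (a supplementary check, not SL notation):

```lean
-- Biconditional introduction from the two presumed conditionals.
example (A B : Prop) (h1 : A → B) (h2 : B → A) : A ↔ B :=
  Iff.intro h1 h2
```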

**Biconditional elimination** works just like conditional elimination, except that we can not only derive B given A and A <> B, but can also derive A given B and A <> B, as the biconditional works like a conditional in both directions:
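Both directions can be checked in Lean 4, with ↔ in place of <> (a supplementary sketch):

```lean
-- Biconditional elimination works in both directions.
example (A B : Prop) (h1 : A ↔ B) (h2 : A) : B := h1.mp h2
example (A B : Prop) (h1 : A ↔ B) (h2 : B) : A := h1.mpr h2
```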

When we studied connectives and truth tables, we covered negation first, but in proofs, we cover negation last, as it always involves a hypothetical supposition like we use in both conditional and biconditional introduction.

**Negation introduction** is also known by its Latin name, reductio ad absurdum, reduction to absurdity. Given that contradictions are impossible and necessarily false in formal logic, if we arrive at a contradiction such that an expression and its negation are both true, then we know that at least one of the premises must be false or that the rules were improperly applied. This means that if we make a hypothetical supposition, follow the rules properly, and arrive at a contradiction, we can prove that the supposition is necessarily false, given our premises. Let us say that we know both A and B > ~A. From these premises, we can use negation introduction to prove ~B:
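The proof of ~B from A and B > ~A can be checked in Lean 4, where ¬ corresponds to ~ and the supposition of B becomes a function argument (a supplementary sketch):

```lean
-- Negation introduction: supposing B yields both ¬A and A, a contradiction.
example (A B : Prop) (h1 : A) (h2 : B → ¬A) : ¬B :=
  fun hb => (h2 hb) h1
```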

Notice the use of reiteration in line 5. Typically, negation introduction and elimination involve reiteration, bringing in something previously presumed or derived to contradict something derived within a hypothetical supposition. With negation introduction, we can even demonstrate the impossibility of contradiction (admittedly, already presupposed) without any premises:
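The premise-free proof can be checked in Lean 4 (¬ for ~, ∧ for ^; a supplementary sketch):

```lean
-- The impossibility of contradiction, demonstrated without any premises.
example (A : Prop) : ¬(A ∧ ¬A) :=
  fun h => h.right h.left
```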

**Negation elimination** is essentially the same as negation introduction, though one starts by supposing a negated expression and, through the contradiction, eliminates the negation:
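In Lean 4, this direction is a genuinely classical step, which is one reason negation elimination is worth distinguishing from negation introduction (a supplementary sketch; Classical.byContradiction belongs to Lean's core library):

```lean
-- Negation elimination: from ¬¬A, conclude A (requires classical reasoning).
example (A : Prop) (h : ¬¬A) : A :=
  Classical.byContradiction h
```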

We can use negation introduction to demonstrate Modus Tollens:

To do this, we must assume A > B, then hypothetically suppose ~B, and then, within our hypothetical supposition, hypothetically suppose A. This means that we are creating a possible universe within a possible universe, a dream within a dream (much like the movie Inception). In this supposition within a supposition, we can derive B, as we know A > B, our original premise, which we introduce into the supposition within our supposition. Then, with reiteration, we introduce ~B from our first supposition into our second supposition within a supposition. Remember, we have proved neither ~B nor A; we have merely supposed ~B and then supposed A within our supposing of ~B. Now we have a contradiction within our second supposition within a supposition, so we can introduce a negation, having proved that, within our first supposition, ~A. Finally, we can conclude that, given our premise A > B, we can deduce ~B > ~A.
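The whole derivation of Modus Tollens collapses neatly in Lean 4, where each nested supposition becomes a nested function argument (→ for >, ¬ for ~; a supplementary sketch):

```lean
-- Modus Tollens: suppose ¬B, then suppose A; B follows from A → B,
-- contradicting ¬B, so ¬A within the first supposition, hence ¬B → ¬A.
example (A B : Prop) (h : A → B) : ¬B → ¬A :=
  fun hnb => fun ha => hnb (h ha)
```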

**Using Rules of Replacement and Derived Rules**

Now that we have covered reiteration, as well as introduction and elimination for each of the connectives, we can have a new appreciation for the rules of replacement and derived rules. We can use them in deductive proofs to save ourselves a great deal of work. For instance, rather than go through the motions of our last proof, we can simply use **Modus Tollens** as a rule, symbolized as MT in our justification, and have a two-step proof:

This becomes even more useful when we allow ourselves to use rules of replacement and derived rules to transform expressions within expressions. Let us say that we know (A > B) v C, and we want to derive (~B > ~A) v C. We have not proven A > B, as it remains a possibility, but we can nevertheless transform this part of the whole expression using Modus Tollens:

Similarly, we can use Commutativity (Com), DeMorgan’s Theorem (DeM), and the Material Conditional (MC) to transform expressions and expressions within expressions, justifying ourselves with a single line. **Commutativity** allows us to reverse the order of AND, OR and biconditionals:
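Both commutations for AND and OR can be checked in Lean 4 (∧ for ^, ∨ for v; a supplementary sketch):

```lean
-- Commutativity: the order of a conjunction or disjunction can be reversed.
example (A B : Prop) (h : A ∧ B) : B ∧ A := And.intro h.right h.left
example (A B : Prop) (h : A ∨ B) : B ∨ A := h.elim Or.inr Or.inl
```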

**DeMorgan’s Theorem** allows us to exchange ~(A v B) with ~A ^ ~B, as well as exchange ~(A ^ B) with ~A v ~B:
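The first exchange can be checked in Lean 4 (¬, ∨, and ∧ for ~, v, and ^; a supplementary sketch):

```lean
-- DeMorgan's Theorem: from ¬(A ∨ B), derive ¬A ∧ ¬B.
example (A B : Prop) (h : ¬(A ∨ B)) : ¬A ∧ ¬B :=
  And.intro (fun ha => h (Or.inl ha)) (fun hb => h (Or.inr hb))
```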

Notice that we had to put ~A ^ ~B in parentheses to keep the expressions equivalent. The **Material Conditional** allows us to exchange A > B with ~A v B, as well as exchange A v B with ~A > B:
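Both exchanges of the Material Conditional can be checked in Lean 4 (→ for >, ∨ for v, ¬ for ~; a supplementary sketch):

```lean
-- Material Conditional: ¬A ∨ B justifies A → B, and A ∨ B justifies ¬A → B.
example (A B : Prop) (h : ¬A ∨ B) : A → B :=
  fun ha => h.elim (fun hna => absurd ha hna) (fun hb => hb)
example (A B : Prop) (h : A ∨ B) : ¬A → B :=
  fun hna => h.elim (fun ha => absurd ha hna) (fun hb => hb)
```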

Notice that B and A are switched in the last example, but the rule still applies. Our two derived rules, the Hypothetical Syllogism (HS) and the Dilemma (Dil), allow us to derive a particular conclusion based on several conclusions, justifying this by referring to several lines. The **Hypothetical Syllogism** allows us to derive A > C given we have presumed or derived A > B and B > C:

Derived rules do not allow us to transform expressions within expressions. They do, however, allow us to transform expressions composed of expressions. For example:

The **Dilemma** allows us to derive C given we have A v B, A > C and B > C:

Notice that in the justification for line 4, there are three numbers, separated by commas. Similar to the Hypothetical Syllogism, we can transform expressions of expressions with the Dilemma. The lines used for justification also do not have to be in any particular order:
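The Dilemma can be checked in a single step in Lean 4, since Or.elim is exactly this rule (∨ for v, → for >; a supplementary sketch):

```lean
-- The Dilemma: from A ∨ B, A → C, and B → C, conclude C.
example (A B C : Prop) (h1 : A ∨ B) (h2 : A → C) (h3 : B → C) : C :=
  h1.elim h2 h3
```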

**Exercise 6.1: For each proof below, provide a justification including both rule and line number(s) for each step. Remember, you need to justify each and every line besides the premises and hypothetical premises.**

**Exercise 6.2: Given each set of premises, derive the conclusion with a deductive proof.**