In the English language, double negation isn’t allowed. If I told you to not not read this post, it would cause unnecessary confusion because I could have just told you to read this post.

If we wish to break this down into propositional logic, let \(A\) represent the proposition that you read this post. Then the first sentence is \(\neg\neg A\) while the second sentence is just \(A\). In a certain regard, these two sentences mean the same thing, but at the same time they don’t.

Let’s operate in the logic where every propositional statement is a boolean. Everything is either true or false. This means that we can directly represent the negation of a proposition as a bool->bool function:

(* negation as a boolean flip *)
fun neg (b : bool) : bool =
    if b then false else true

This function simply flips the boolean value from true to false or from false to true. Now if we have two negations like \(\neg\neg A\), it is the same as flipping a light switch twice: we end up back in the original state \(A\). This means that \(\neg\neg A\) is directly equal to \(A\).
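As a quick sanity check in code, we can compose neg with itself; the result behaves exactly like the identity on booleans (a small sketch, where dneg is just an illustrative name and not from any library):

(* double negation: flip the switch twice and we are back where we started *)
fun dneg (b : bool) : bool = neg (neg b)

(* dneg true evaluates to true, and dneg false evaluates to false *)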

The system of logic where we allow all propositions to be represented by booleans is known as classical logic. With classical logic, we get the infamous law of excluded middle, or LEM, which tells us exactly that every propositional statement is a boolean: if it’s not true, then it’s definitely false. This is denoted as \(\forall A. A\lor \neg A\). We also obtain all of De Morgan’s laws for free, as they are simply manipulations of booleans.
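In this boolean world, such laws can even be verified by brute force, since there are only finitely many cases to check (another small sketch; lem and de_morgan are illustrative names):

(* LEM as a boolean fact: b orelse neg b evaluates to true for both booleans *)
fun lem (b : bool) : bool = b orelse neg b

(* one of De Morgan's laws, checked by comparing both sides for a given pair *)
fun de_morgan (a : bool, b : bool) : bool =
    neg (a andalso b) = (neg a orelse neg b)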

The LEM rule isn’t the only axiom that gives us classical logic. Other equally strong rules include double negation elimination, which is \(\forall A. (\neg\neg A) \to A\); Peirce’s law, which is \(\forall P Q. ((P\to Q) \to P) \to P\); the contrapositive, which is \(\forall P Q. (P \to Q) \leftrightarrow (\neg Q \to \neg P)\); and what is known as proof by contradiction: assume the opposite, \(\neg A\), arrive at a contradiction \(\bot\), and thus derive \(A\). It is a reasonable exercise to prove that these are all equivalent, meaning you can derive each of them from any of the others.
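As a taste of one such derivation: suppose we have LEM and want double negation elimination. Given \(\neg\neg A\), apply LEM to get either \(A\) or \(\neg A\). In the first case we are done. In the second case, \(\neg\neg A\) applied to \(\neg A\) yields \(\bot\), and from \(\bot\) we may conclude \(A\) (using the ex falso rule discussed later in this post).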

However, classical logic doesn’t make sense in many ways. Imagine two kids playing around:

Kid 1: I bet you're too short to touch that tree branch!
Kid 2: Nawha. I definitely can.
Kid 1: Oh yeah? Prove it to me!
Kid 2: Well assume for the sake of contradiction that I was unable to reach the tree branch...

Kid 1 here is going to be very disappointed. Even if Kid 2 were able to provide a successful proof, they never actually went ahead and showed that they could jump high enough. There is no real evidence; they had to actually show they could by jumping.

The reason this proof by contradiction is lacking is that there is no computational content associated with classical logic proofs. Constructive logic (or intuitionistic logic) is logic based on actual constructed evidence. If you want to prove you can reach that tree branch, you need to show you can by jumping and reaching it. Likewise, if you wish to prove something, you must provide computational evidence in the form of an algorithm that you can run.

In constructive logic, propositions are not booleans! We can easily convert a boolean \(B\) into a proposition via \(B = true\), but there is no way to convert propositions into booleans. Failure to prove a statement true isn’t sufficient to prove that it is false, nor is the reverse true. It simply means you didn’t try hard enough, or your system isn’t capable of proving the statement. Because of this, we can no longer define negation as flipping a boolean; instead we define it as syntactic sugar for the formula \(A \to \bot\).
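Under the propositions-as-types reading, we can sketch this definition of negation as a type rather than a boolean flip. SML has no truly empty datatype, so the recursive void below is the usual stand-in with no constructible values; void and 'a neg are illustrative names, not part of any library:

(* a type with no constructible values plays the role of falsehood *)
datatype void = Void of void

(* negation of a proposition 'a is a function from 'a into falsehood;
   this is a type, distinct from the boolean neg function above *)
type 'a neg = 'a -> void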

In constructive logic, all of our propositions are born directly from our axiomatic system. The GEB book provides an excellent depiction of this in the form of a tree: axioms are the original source of judgements, and these axioms interact with each other to develop new proofs of judgements, branching out just like a tree. But the tree is not expansive enough to cover everything (otherwise there wouldn’t be any research problems left to solve), so there are propositions out there that we just don’t know are provable or refutable.

At the same time, you can grow a second tree branching out from the negated forms of the axioms. This branching determines which propositions do not hold, or more specifically, constructs refutations of these propositions. If our system is sound, then there will be no overlap between the two trees.

For certain kinds of axiomatic systems, these two trees end up branching so far that they completely cover the entire plane of existence. This means that for every instance of the problem, we have a decision procedure: an algorithm for determining whether that instance is provable or refutable. Such a problem is known as a decidable problem, and for it we recover the law of excluded middle within constructive logic, since we have algorithms to construct either case of the boolean. This raises the question: what benefit is there to classical logic that isn’t found in constructive logic?
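To make "decidable" concrete under the same propositions-as-types reading, here is a minimal sketch reusing the hypothetical void and 'a neg from earlier; SML can’t really carry proof terms, so treat this as an analogy rather than a formalization:

(* a constructive disjunction: the value records which side holds and carries its evidence *)
datatype ('a, 'b) either = INL of 'a | INR of 'b

(* a proposition 'p is decidable if we can always compute either evidence for it
   or a refutation of it; this is exactly LEM, restricted to this one proposition *)
type 'p decidable = unit -> ('p, 'p neg) either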

The first thought is that maybe undecidable problems might be important. Some indeed are, and a list of them can be found on Wikipedia here. Ones that caught my attention are the halting problem, which is used a lot in computability theory, and type inference for System F, whose undecidability means we must type annotate our programs. I don’t know whether classical logic can prove anything useful about these propositions, but even if there were such a classical proof, it wouldn’t help us at all. Why? Because solutions to these problems are only useful to us if we can run them as an algorithm and apply them to solve more problems.

Another argument for classical logic is that it can prove more things. What are these more things? They are precisely the things that come from representing propositions as booleans in a general setting: the law of excluded middle, De Morgan’s laws, and things of this nature. However, maybe it proves too many things. A classical argument against classical logic is that not all propositions should be assigned a true or false value. For example, the liar paradox proposition of “this statement is false” can be neither true nor false due to its self-referential nature. The negation of this statement would be “this statement is true,” which is definitely true. If we are in the classical setting, this would mean that the proposition “this statement is false” is false, simply because it is the negation of a true statement. More than that, there are several judgements where it doesn’t even make logical sense to assign a truth value. For example, the judgement “banana” is neither true nor false; it simply is.

Beyond this, we can sorta represent classical logic in constructive logic anyway using the double negation translation or the continuation-passing style translation. There will probably be another blog post about this later, but for now simply know that we choose to represent the classical logic idea of proofs not having computational content by including shortcut mechanisms to jump around within our proofs. This just means we have computational forms of expressing no computational content.
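To hint at what that translation looks like in code: classically proving \(A\) becomes constructively proving \(\neg\neg A\), i.e. producing a function that takes a continuation expecting an \(A\) and eventually feeds it one. A minimal sketch, again reusing the hypothetical void type from above:

(* double negation translation: a "classical" A is a computation that can satisfy
   any continuation expecting an A, written ((A -> void) -> void) *)
type 'a classical = ('a -> void) -> void

(* embedding an ordinary constructive proof of A into its classical form *)
fun return (x : 'a) : 'a classical = fn k => k x

Getting a plain \(A\) back out of 'a classical is what requires a control operator that can jump around, which is the shortcut mechanism alluded to above.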

Another point in favor of classical logic is that it is sometimes much easier to prove things with. Of course it is, as you aren’t going through the effort of actually finding a computational witness to go along with your proof. One kind of classical proof that struck me as being particularly useful is the one utilizing expected value over a probability distribution. If something has an expected value of 0, which we can prove via properties of expected value, then there must exist an outcome that produces a nonnegative value. This proves the existence of such an object, which can be some crazy combinatorial thing, but it doesn’t tell us what the actual object is. Ultimately, classical logic might be a good starting point for finding interesting things to prove constructively. It serves as a quick sanity check before investing the time and effort to do things the right way and go find the right instances.
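To spell the expected value argument out a little (a sketch of the standard reasoning, not tied to any particular combinatorial object): if a random quantity \(X\) takes values \(x_1, \dots, x_n\) with probabilities \(p_1, \dots, p_n\) and \(E[X] = \sum_i p_i x_i = 0\), then the \(x_i\) cannot all be negative, since the \(p_i\) are nonnegative and sum to 1; hence some outcome has \(x_i \ge 0\). The proof guarantees such an outcome exists without ever pointing at which one it is.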

Now, is constructive logic the logic we should be using, or is it itself too strong? The issue that some people have with constructive logic is the ex falso sequitur quodlibet rule. This is Latin for something along the lines of “from falsehood, derive anything,” and it works exactly like that: upon reaching a contradiction \(\bot\), we are able to conclude anything. This is also known as the principle of explosion because a single contradiction explodes into a proof of everything. This is a very interesting safety mechanism, because if we are able to derive \(\bot\) without any assumptions, then our system is unsound, and so any proof is simply gibberish and isn’t proving anything. The rule is soundly used when there are hypotheses that contradict each other: because we assumed enough things to create an inconsistent context, we can reach any conclusion we want under those assumptions. Within this particular scope we have an unsound system, but in the global scope our system is sound.
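As with everything in constructive logic, this rule has a computational reading. In code it is the elimination rule for the empty type: a function out of a type with no values never actually has to produce anything. A minimal sketch, reusing the hypothetical void type from earlier:

(* ex falso / empty elimination: from an impossible value, produce a value of any
   type at all; the inferred type is void -> 'a, and since no value of void can
   ever be built, this function can never actually be called at runtime *)
fun absurd (Void v) = absurd v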

Why is this an issue? Well, consider the constructive proof of “all cows on the moon are blue.” First assume we have a cow on the moon. This causes a contradiction because there are no cows on the moon. From this contradiction we are able to conclude that the cow is blue. This proof is not wrong, but it introduces entirely arbitrary information from the contradiction. There is no connection between the fact that the cow is on the moon and the fact that the cow is blue. We could have just as easily replaced the “cow is blue” part with something equally ridiculous, such as “this blog is interesting.”
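In code, that proof is just absurd wearing a costume (a sketch with made-up names: the argument type void stands for the premise “there is a cow on the moon,” which we are treating as uninhabited, and the result type string stands in for “the cow is blue”):

(* given the impossible premise, conclude whatever we like about the cow's color *)
fun moon_cows_are_blue (cow_on_the_moon : void) : string = absurd cow_on_the_moon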

Minimal logic is the logic you get when you remove the principle of explosion from constructive logic. This gives a considerably more direct way of expressing proofs, because we are unable to introduce things that aren’t mentioned anywhere else. We would instead have a proof of the statement “all cows on the moon are… well, that’s a contradiction.”

In some sense, this allows both the formulas \(A\) and \(\neg A\) to exist simultaneously, because we don’t derive anything from a contradiction. As such, minimal logic is a paraconsistent logic, one which rejects the principle of explosion, and these paraconsistent logics might yield interesting results when dealing with things such as the liar paradox or quantum superposition, where things are both true and false simultaneously.

As for whether this is better or worse than constructive logic, I don’t really know. Even though the principle of explosion is confusing to see in code, it does have computational content associated with it via the empty sum elimination rule. If we have a contradiction, it makes sense that nothing makes sense, so all proofs are valid. Does it really hurt to add in this rule? Maybe a future post will let us know more if I find something out.

This is where you get differences in thought, where some people call themselves “classical mathematicians,” others call themselves “constructive computer scientists,” and still others call themselves “minimalist philosophers.” It does tend to shake out this way, with mathematicians more likely to use classical logic than computer scientists, since computer science is primarily about computation. But in the end none of these labels really matter, because they’re all crazy mad scientists studying weird symbols.

So after reading this post, what rules do you think you should have in a logical system? Do you think propositions as booleans are the way to go? Are you against eliminating contradictions? Regardless of what you choose to believe, I hope we’ll all keep chugging along producing more proofs.