Incompressibility and Incompleteness
One exciting feature of Kolmogorov complexity is that it has an incompleteness result as a relatively straightforward consequence. The proof is a little technical for us, but I'll sketch the idea. To do the proof correctly, we would need to make rigorous the idea of a lexicographic ordering of all of the outputs of the theory, and also clarify how we can measure the information that describes such an ordering. But the basic idea is similar to the following:
- Suppose we have a theory T of Kolmogorov complexity n. This basically means that we have made a version of our theory in the form of a compact program that prints theorems. Call this program P; P of course has complexity n.
- We suppose that we can list the outputs of our theory in lexicographic order. (There will likely be no effective procedure for this; but this assumption does not get us in trouble, because we aim to show that even if we could do this, we would get a contradiction.)
- Suppose our theory has a predicate meaning that a given string is random. So, Random(x) is a claim in our theory that description x is random. Of course, this description is an output of the theory and will be written in the alphabet of our theory.
- Consider all those descriptions of some very great length, call it l, such that if one of them, call it k, were Chaitin Random, then C(k) >> C(T). That is, the complexity of that k would be very much greater than the complexity of T. (Because there are many Chaitin Random strings of every length, we know that there is such a string k in the alphabet used by the theory T.)
- Since by supposition our theory can say Random(x) for any output x of the theory that is Random, we would then be able to determine, in a lexicographic ordering of the outputs of our theory of length l, the first output k such that Random(k). (I use "output," "description," and "string" interchangeably here.)
- But now we have a contradiction. The information that a description is of length l is not very big (about log(l) bits); and our theory T is of complexity C(T). So with information of complexity C(T), plus the information needed to say "of length l," we could use our theory to identify the first random description of length l. This is the description we called k.
- But that means we can identify (and so print, if we wanted) k using just T plus a little extra information. But then the complexity of k is at most the complexity of T plus this information about l, up to a constant: C(k) <= C(T) + log(l) + c. (A code sketch of this argument follows this list.)
- This contradicts our assumption that C(k) >> C(T).
- We conclude that the error is the assumption that our theory could determine whether any output of the theory was Random. In particular, the theory cannot determine whether an output more complex than the theory is Chaitin Random.
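To make the shape of this argument concrete, here is a minimal sketch in Python. It is not runnable against any real theory: `enumerate_theorems` and `parse_random_claim` are hypothetical helpers, standing in for a program that prints the theorems of T one by one and a parser that recognizes theorems of the form Random(k).

```python
# A minimal sketch of the argument above, under stated assumptions:
# enumerate_theorems() yields the theorems of T one at a time, and
# parse_random_claim(theorem) returns the string k when the theorem has
# the form Random(k), and None otherwise. Both are hypothetical.

def find_random_string(enumerate_theorems, parse_random_claim, l):
    """Return the first string k of length l such that T proves Random(k)."""
    for theorem in enumerate_theorems():
        k = parse_random_claim(theorem)
        if k is not None and len(k) == l:
            return k  # T claims k is random, yet we have just pinned k down

# The search program is short: a constant amount of code, plus a
# description of T's axioms, plus about log2(l) bits to write down l.
# So if it returns k, then C(k) <= C(T) + log(l) + c, contradicting the
# supposition that C(k) >> C(T).
```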
The Strength of Theories
- This incompleteness result is very general and powerful, and can be roughly stated in the following way: every theory T has a complexity n. There are then questions about objects whose complexity exceeds n that may not be answerable by T. In particular, there are questions about properties whose complexity exceeds n. (There may be questions you can answer about more complex objects -- a simple theory can tell us whether a very complex number is odd or even, for example. We are interested instead in the fact that there are not only complex objects but that these objects can have complex properties. Being Chaitin Random, the property used in our quick sketch of a proof above, is by definition relatively complex.)
- Note that there are, in mathematics at least, infinitely many possible objects (for example, relations between numbers) of complexity exceeding that of any particular theory!
- One of the limitations here is that we cannot determine precisely the Kolmogorov complexity of things much more complex than our existing theories. For an arbitrary long string, the Kolmogorov complexity of that string is typically effectively approximable from above, but not effectively computable. (But, please note, the Kolmogorov complexity is still an objective feature!) A small illustration of this kind of approximation follows the quotation below.
- G. Chaitin has a nicely bold way of describing the implications of this (here, you might read "no structure" to mean "no reducible structure"):
I thought that maybe the problem is larger and Godel and Turing were just the tip of the iceberg. Maybe things are much worse and what we really have here in pure mathematics is randomness. In other words, maybe sometimes the reason you can't prove something is not because you're stupid or you haven't worked on it long enough, the reason you can't prove something is because there's nothing there! Sometimes the reason you can't solve a mathematical problem isn't because you're not smart enough, or you're not determined enough, it's because there is no solution because maybe the mathematical question has no structure, maybe the answer has no pattern, maybe there is no order or structure that you can try to understand in the world of pure mathematics. Maybe sometimes the reason that you don't see a pattern or structure is because there is no pattern or structure! (Exploring Randomness)
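As a small, hedged illustration of approximation from above: any off-the-shelf compressor yields an upper bound on the Kolmogorov complexity of a string (up to the constant cost of the decompressor), even though no program can compute the true value for arbitrary strings. The choice of zlib here is just for convenience; any lossless compressor would do.

```python
import os
import zlib

def complexity_upper_bound(s: bytes) -> int:
    """An upper bound on Kolmogorov complexity: the length in bytes of a
    zlib-compressed description of s. K(s) <= this value + c, where the
    constant c covers the fixed decompression routine."""
    return len(zlib.compress(s, 9))

print(complexity_upper_bound(b"a" * 1000))       # highly patterned: tiny bound
print(complexity_upper_bound(os.urandom(1000)))  # typical random bytes: ~1000
```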
An Example of Hard Complexity: Chaitin-Omega
- Gregory Chaitin has described a very complex number which he calls "Omega" (since we've discussed other omegas, we'll call this Chaitin-Omega).
- Chaitin-Omega is the halting probability of programs for some particular universal Turing machine (and so, in effect, of all programs).
- We need one restriction: we rule out that any working program is the extension of another working program. (That is, the set of programs is prefix-free; this guarantees that the probabilities below sum to a number between 0 and 1.)
- Chaitin-Omega gives the probability that a randomly generated program will halt. (You sum 2^(-|p|) over every program p that halts, where |p| is the length of p in bits. A sketch of approximating this sum by "dovetailing" follows Chaitin's quotation below.)
- Chaitin-Omega is completely incompressible! You cannot figure it out, except maybe a few bits' worth, and then only by dumb luck.
- Chaitin has described a problem in elementary number theory that is equivalent to knowing Chaitin-Omega.
- Chaitin describes Omega thus:
Physicists feel comfortable with randomness, but this is the black or white world of pure mathematics --- how is this possible, how can it be? Each of these bits is well-defined, it's a specific 0 or a 1, because [Chaitin-Omega] is a specific real number once I fix the universal Turing machine or the programming language that I'm dealing with. But it turns out that the right way to think about each bit is that it's not black or white, it's not that it's a 0 or a 1, it's so well balanced, it's so delicately balanced, that it's grey!
Here's another way to put it. Let's go back to Leibniz. What's the idea of mathematics? The normal idea is that if something is true, it's true for a reason --- Leibniz! --- if something is true it's true for a reason. Now in pure math, the reason that something is true is called a proof, and the job of the mathematician is to find proofs, to find the reason something is true. But the bits of this number [Chaitin-Omega], whether they're 0 or 1, are mathematical truths that are true for no reason, they're true by accident! And that's why we will never know what these bits are.
In other words, it's not just that Hilbert was a little bit wrong. It's not just that the normal notion of pure mathematics is a little bit wrong, that there are a few small holes, that there are a few degenerate cases like "This statement is unprovable". It's not that way! It's much, much worse than that! There are extreme cases where mathematical truth has no structure at all, where it's maximally unknowable, where it's completely accidental, where you have mathematical truths that are like coin tosses, they're true by accident, they're true for no reason. That's why you can never prove whether individual bits of [Chaitin-Omega] are 0 or are 1, because there is no reason that individual bits are 0 or 1! That's why you can't find a proof. (Exploring Randomness)
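Here is the dovetailing sketch promised above, showing the sense in which Chaitin-Omega can be approximated from below. `run_for_steps` is hypothetical: it stands for an interpreter for some fixed prefix-free universal machine, answering whether a bit-string program halts within a given number of steps. The catch, and the point, is that no stage of the approximation tells you how close you are, so the individual bits of Omega remain unknowable.

```python
from itertools import product

def omega_lower_bound(run_for_steps, stages: int) -> float:
    """Approximate Chaitin-Omega from below by dovetailing.
    run_for_steps(p, n) is a hypothetical interpreter: True just in case
    the bit-string program p halts within n steps on the fixed machine."""
    halted = set()  # programs observed to halt so far
    omega = 0.0
    for n in range(1, stages + 1):
        # Stage n: try every bit string of length <= n for n steps.
        for length in range(1, n + 1):
            for bits in product("01", repeat=length):
                p = "".join(bits)
                # Prefix-freeness: anything extending a halted program is
                # not a valid program (this also skips programs already counted).
                if any(p.startswith(q) for q in halted):
                    continue
                if run_for_steps(p, n):
                    halted.add(p)
                    omega += 2.0 ** -len(p)  # each halting p contributes 2^(-|p|)
    return omega  # creeps upward toward Omega, without ever announcing how close
```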
Mathematics as a (contingent) Science?
- What does this mean for reason, and for the special case of reason par excellence, mathematics?
- One way we can interpret the limitations that we've seen is that we will have to add information to our theories whenever we are trying to reason about things more complex than our theory.
- If you cannot prove P, but P looks likely or at least very useful, you may just need to assume P and work with it (perhaps abandoning P later if it turns out to lead to contradictions).
- But this now blurs the line between pure reason and the fallible methods of empirical science! Mathematics so practiced has strong analogies with science: we assume some things and work with them because of their power to explain, but we remain open to revision in the future! Our theories become pragmatic and fallible!
In-principle limits
We have focused on one kind of reason: formal reason (logic and math). We have had a special interest in effective procedures, which the Church-Turing thesis claims are all and only what a Turing machine can do. We have seen four kinds of in-principle (that is, inescapable) limits:
- Godel's First Incompleteness Theorem: For theories of sufficient strength, such as arithmetic with multiplication, there are unprovable truths, or the system is inconsistent.
- Godel's Second Incompleteness Theorem: For theories of sufficient strength, such as arithmetic with multiplication, we cannot prove the consistency of the system from within the system.
- The Halting Problem: There is no effective procedure to identify all and only the effective procedures. Equivalently: there is no program that can determine, for an arbitrary program, whether that program will halt. (Here we assume the Church-Turing thesis: that the effective procedures are all and only what a universal Turing machine can do.) A sketch of the diagonal argument follows this list.
- The Incompressibility Result: Each theory has a descriptive complexity; and the theory cannot determine whether an arbitrary output of the theory that is of complexity greater than the complexity of the theory is random.
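As a closing illustration, here is the classic diagonal argument behind the Halting Problem, sketched in Python under an impossible assumption: suppose someone hands us a total function `halts(program, input)` that always answers the halting question correctly.

```python
# Sketch of the diagonal argument. The function halts is the impossible
# ingredient: a total, always-correct halting oracle that we suppose,
# for contradiction, someone has implemented.

def make_diagonal(halts):
    def diagonal(program):
        if halts(program, program):
            while True:  # predicted to halt on its own source, so loop forever
                pass
        else:
            return       # predicted to loop on its own source, so halt at once
    return diagonal

# Now feed diagonal its own source: if halts(diagonal, diagonal) returns
# True, diagonal loops forever; if False, diagonal halts immediately.
# Either way the oracle is wrong, so no such halts function can exist.
```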