Continued from

(Part I - Introduction)

(Part II - Paradoxes in Math)

(Part III - Turns out you can't do anything about the paradoxes)

Alan Turing (1912 - 1954)

Last time I blogged on this topic, I discussed Kurt Gödel's proof that no consistent mathematical system of more than trivial complexity can be complete, or more concisely: math is filled with an infinity of paradoxes. This turns out to have some important implications for Computer Science. Although we usually think about computers in terms of real-life engineered machinery, they're also creatures of mathematics. Just like integers, or particular sets of numbers, you can write proofs about computers. One of the most important minds in formalizing and reasoning about computers in the abstract was Alan Turing: inventor of the Turing machine and the Turing test, and an important contributor to the cracking of the Enigma code.


Turing reasoned about a computer that's come to be known as a Turing machine. It's a rather impractical device, consisting of a machine that reads and writes symbols on a long strip of paper. You program the machine by telling it what action to take when it sees a symbol; for example: if you read the letter 'a', replace it with a 'b' and slide the tape one symbol backwards. Interestingly, anything you can program the computer you're reading this blog post on to do, you could also program on a Turing machine. Google or Windows 7 could be run on a Turing machine. It might take decades to get a result out, but that's not important in theory. The computational power, the set of programs you could write, is the same.
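To make this concrete, here's a minimal Turing machine simulator sketched in Python. The rule format, state names, and example program are my own inventions for illustration, not any standard notation:

```python
def run_tm(tape, rules, state="start", pos=0, max_steps=10_000):
    """Simulate a Turing machine. `rules` maps (state, symbol) to
    (symbol_to_write, move, next_state), where move is -1 (left),
    +1 (right), or 0 (stay). The machine stops in the "halt" state."""
    cells = dict(enumerate(tape))        # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")     # "_" stands for a blank cell
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A tiny program in the same flavor as the rule described above:
# scan right, replacing every 'a' with 'b', and halt at the first blank.
rules = {
    ("start", "a"): ("b", +1, "start"),
    ("start", "b"): ("b", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}
print(run_tm("abab", rules))   # prints "bbbb"
```

The `max_steps` cap is a practical concession: as the next sections discuss, there's no general way to know in advance whether a machine will ever halt on its own.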

Turing equivalent machines are the most powerful ones we know of, likely the most powerful possible. Computers built to take advantage of weird quantum rules could be simulated in a Turing machine. Our brains can probably be simulated in a Turing machine. In fact, it's hypothesized that the whole universe could be simulated in a Turing machine, a hypothesis I'd give lots of credence. None of this is practical: our universe might die and get reborn trillions of times while you wait for the program to complete, but the interesting point is it would eventually complete.

Which brings us again to the limits of knowledge. If there are questions a Turing machine could not possibly answer, then that's it for those questions. There's no reason to believe humans could figure out the answer, or that even the universe acting as one giant brain could solve them. And there are such questions. And they aren't even that complicated.

Consider the following program:

while (x > 3) x = x + 1;

print(x);

That says: for any input x, as long as x is greater than 3, keep adding 1 to it; once it's 3 or less, print the number. So if you enter 2 it would print out 2. What if you run it on 4? Well, 4 > 3, so we go again with 5. Then 6. Then 7. The number keeps getting larger: it'll never drop to 3 or below. Your computer will think and think and think and never return an answer. This is often what's happened when a program you're running freezes up. It's easy to see that making a number bigger and bigger when really you need it to be smaller just isn't going to work. This program doesn't "halt".
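You can watch this happen in Python. The step cap below is my addition so the function actually returns; the real program would spin forever on any input above 3:

```python
def run_program(x, max_steps=100):
    """The loop from the text, with an artificial step cap so we can
    observe its behavior (without the cap, x > 3 never terminates)."""
    steps = 0
    while x > 3:
        x = x + 1
        steps += 1
        if steps >= max_steps:
            return None   # gave up: the loop shows no sign of halting
    return x

print(run_program(2))   # prints 2: halts immediately
print(run_program(4))   # prints None: x only grows, so the cap cuts it off
```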

Given an arbitrary program and its input, can we figure out if it's going to halt? Imagine we had a program that does this. Then we could build another program that does the opposite of its input: you give it a halting program and it runs forever; you give it a program that runs forever and it halts. What would it do if you passed in its own source code? It would run forever if it halts... but if it halts, that means it must run forever... but wait, it halted... No matter how it acts, it's by definition doing the wrong thing. Thus a program that figures out whether any other program halts on some input cannot logically exist (see here for a fuller explanation of the proof). As you can see, this is a very similar problem to Russell's paradox, which was the seed for understanding that math is incomplete. Computer Science is part of math, and follows the same rules as the rest of it.
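The argument can be sketched in code. Assume, for the sake of contradiction, that someone hands us a working `halts` function; the names and shape here are mine, and the whole point is that no such function can actually be written:

```python
def halts(program_source, input_data):
    """HYPOTHETICAL: return True iff the given program halts on the
    given input. The proof shows this function cannot exist, so this
    body is a placeholder."""
    raise NotImplementedError

def contrary(program_source):
    """Do the opposite of whatever `halts` predicts for a program
    fed its own source code."""
    if halts(program_source, program_source):
        while True:   # predicted to halt? loop forever instead
            pass
    else:
        return        # predicted to loop forever? halt at once

# Feeding `contrary` its own source code forces the contradiction:
# whatever `halts` answers, `contrary` does the opposite, so no
# correct `halts` can exist.
```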

Alan Turing was a hugely influential man in Computer Science. Beyond that, by playing an important role in breaking Germany's secret codes, he was among the most important men in winning World War II. How did the U.K. thank him for his contributions to science and national security? In 1952 he fell afoul of 'gross indecency' laws that outlawed homosexuality and was given the choice of imprisonment or probation conditional on taking chemicals to reduce his libido (they also caused him to grow breasts). His security clearances were revoked and he was barred from continuing his cryptographic work. In 1954 he took his own life, eating a cyanide-laced apple. In 2009 Gordon Brown apologized for his nation's treatment of Turing.

Turing Photo distributed under Creative Commons
