Monday, March 15, 2010

Bangs, Bounces, Freezes, Crunches

The fabric of space-time is expanding in every direction. The distant galaxies are all receding from our view: the more distant the galaxy, the faster the retreat. Someday the night sky will be black, as even the closest galaxies (if any stars still burn on in them) race away faster than the light that brings us their news. This is not a violation of Einstein's prohibition against moving faster than light: that rule applies only to matter and energy, not to space-time itself.

What's going on? Empty space is growing around us. I've written about this before, using the analogy of a universe on a ripple spreading out across a pond. Physicists aren't sure what's driving the expansion (much less its acceleration), so they give the cause the mysterious moniker "dark energy". If there really are mysterious bearers of this force, dark-trons you might call them, they account for roughly 74% of the energy/mass in the universe.

Physicists used to believe in a Big Crunch (some still do, I'm sure), where the universe gets pulled back together into a point, a reverse Big Bang. This led naturally to the Big Bounce theory, where immediately after the Big Crunch you've got a new Big Bang. The universe would cycle endlessly (although possibly slowly winding down...), and life would begin again and again and again.

I recall wondering about this as a child, and what it meant for humanity's future. It seemed to mark a fixed end to our days: no matter how successful our civilization became, no matter how many star systems we colonized, it would face extinction in the Big Crunch. Sure, a new universe might spring to life, but how could we get there? You can't outrun shrinking space, as there's nowhere else to run to. There could be hope in retreating out of space-time for a few million years, but such a task would require entirely unknown laws of physics. From a relativistic standpoint, we're doomed in the Big Crunch model.

But then physicists observed that the universe is not only expanding, it's expanding at an ever-accelerating rate. It shows no sign of pulling back in for a Big Crunch. So another theory for the end of days took center stage: the Big Freeze. In an expanding universe, there is less and less energy per unit volume. Someday a single photon zipping across the empty expanse that once housed our solar system will make it an unusually warm region of the universe. Again, not much hope for mankind: this story doesn't end with perpetual rebirths of the universe that we could hitch a ride on; it ends with order decaying into a vast expanse of nothingness.

So humanity must die. If it comforts you, we're probably talking billions of years. And anyway, worrying about humanity's end in terms of the universe's death is a bit like declining dessert on the Titanic... but then, there's another theory worth considering...

String Theory has given rise to mathematical models that suggest our universe may not be all there is. In these theories, we live on a brane (derived from "membrane"), a self-contained universe floating in a larger reality. Reusing the metaphor from an earlier post, we would be analogous to a civilization living on a ripple in a lake: there are other ripples, and there may even be other lakes. Space-time would be a material we live on, and the energy for the Big Bang would have come from a collision with another sheet of reality.

And suddenly, there's hope again. If space is large enough, it may contain many separate universes, different realities created by different big bangs. These would be unimaginably far away, but if you sit in your spaceship for sufficient aeons you could visit. And even if there's just one, given enough time a new collision will occur. Really, it could happen at any moment: you never know when the space around you is suddenly going to erupt with the energy of trillions of suns. It's amazing how successful physics is at introducing new things for us to worry about (quantum vacuum collapse is a fun one, for its combination of utter devastation and quantum weirdness).

So what does this all mean for us? If we can survive sufficiently long into the Big Freeze, and then survive a universe being born around us, we can keep going as a civilization indefinitely. It seems like a harsh journey for our bodies, but if we programmed the patterns of our DNA into sturdier matter, it could recreate us once the new universe is born and has grown into a more hospitable place. Ideally nanobots would survive the high-energy wave as the new universe passes over them, but if that won't work we could leave behind patterns of energy to get swept up in the new universe. These would interact at a quantum level with the new matter so that it evolves in a pattern we wished, eventually recreating some simple robot tasked with rebuilding humanity.

As our understanding of physics improves I'll keep you up to date, but the current prognosis is that an eternal civilization is possible (however catastrophically unlikely).

Wednesday, March 10, 2010

Stone, Bronze, Iron, Steel...

I find it interesting that we use materials to name the earlier reaches of time: the Iron Age, the Bronze Age... Plenty of other variables could be used to divide history, but the materials used to build technology are a very significant, and in particular a visible, choice. What would you call the current age? For a while we went by the Nuclear Age, but that was far from a revolutionary change: nuclear energy is just another power source, anonymous through our electric grid. Perhaps fusion will deserve an age, driving us across the galaxy, but that's all in the future.

Nuclear points the way to electricity, and the selection of the Electronic Age feels appropriate. That ephemeral bolt of energy, so recently understood, is a nice nod to the unprecedented growth in scientific knowledge we've seen. Plentiful power, shipped all across the landscape, revolutionized life as fully as anything since agriculture. It's been transforming us since the later acts of the Industrial Revolution, now getting a second run at revolution with the advent of computers and the Internet.

If this is the Electronic Age, what comes next? I suspect an appropriate name will be the Carbon Age, when that plentiful element bends to our whim. After as fundamental a character as the electron, it does feel like a step backwards to move up in size to an element, but what can be done? The quark, the gluon, the photon: they're such wispy, enigmatic things. No, carbon is my candidate, bridging the gap between the macro- and the micro-scale.

For one thing, there's this structure, the carbon nanotube: a sheet of carbon atoms rolled up into a seamless cylinder.

Unroll it and you've got graphene; these materials have pretty unbelievable properties. If you want to build an elevator to outer space, the carbon nanotube is just about your only option for the tether. And it now looks like it's going to have a role powering nanotechnology. Researchers coated carbon nanotubes in explosives, then ignited one end. While an explosion normally radiates energy in all directions, the nanotube caught the heat and channeled it down its length. The first cool thing that happened was that the heat, moving unidirectionally, traversed the nanotube 10,000 times faster than in a regular explosion. Power is energy delivered per unit time, so a faster explosion is a more powerful explosion.
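
To put rough numbers on that (a back-of-the-envelope sketch in Python; the energy and baseline burn time below are made-up placeholders, and only the 10,000x figure comes from the research):

# Same energy released faster means more power.
energy_joules = 1.0                         # assumed energy stored in the coating
normal_burn_s = 1e-3                        # assumed duration of an ordinary burn
nanotube_burn_s = normal_burn_s / 10_000    # the reported 10,000x speed-up

power_normal = energy_joules / normal_burn_s
power_nanotube = energy_joules / nanotube_burn_s
print(power_nanotube / power_normal)        # -> 10000.0: same energy, 10,000x the power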

On its own, this would be pretty cool and useful. But something else happened: this wave of heat managed to catch hold of the electrons in the nanotube. You could visualize the electrons as buoys floating in the ocean: waves pass them by and they bob up and down in place. But if a large enough wave (a tidal wave, perhaps) were to flow past, the buoys would get caught up in the motion and wash away. It turns out this heat wave did just that, driving the electrons out the other end of the nanotube. This was something scientists didn't expect could happen. The explosive-coated nanotube generated about 100 times the electricity of a battery, by weight. Additionally, while a battery slowly loses energy as it sits unused, there's no obvious reason the nanotube couldn't hold on to its electrical potential for decades, if not millennia.
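
For a rough sense of scale (my own sketch; the lithium-ion figure is an assumption on my part, while the 100x multiplier is the one reported):

battery_wh_per_kg = 150                        # assumed 2010-era lithium-ion cell
nanotube_wh_per_kg = battery_wh_per_kg * 100   # the reported ~100x by weight
print(nanotube_wh_per_kg)                      # -> 15000 Wh/kg under these assumptions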

The downside is that the nanotube is not easily reusable like a battery: to generate more energy, a new coating of explosives would need to be applied (or potentially pumped into the nanotube as a gas). It's also not clear whether you could scale this up to, say, power your house. But we've already got solutions for powering the macroscopic world: this breakthrough is revolutionary for what it could allow in the microscopic realm. Traditional engines do not scale well downwards. Being able to generate electricity at the atomic scale is the first step to being able to construct things on the atomic scale: another small step towards nanobots. Being able to instruct agents to work on the atomic scale could be the key to revolutionizing manufacturing, scanning and understanding our brains, stopping cancer and heart attacks, and even colonizing space. Carbon is a great building block, easy to structure into all different shapes, and will likely play a major role in future miniaturization.

Which is half of why I believe the next age will be the Carbon Age. The other reason is a nod to our status as carbon-based lifeforms. Our understanding of DNA continues to grow, and the technology to read and manipulate genes is plummeting in price. Scientists are already starting to custom-build bacteria; someday you may fill your car with gasoline derived from oil that a custom-built bacterium produced. If we can bend life itself to our whim, creating never-before-seen creations to serve our needs, we'll have entered a new phase in human evolution. This is a topic I'll revisit in more depth later.

Goodbye Electronic Age, hello Carbon Age.

Monday, March 8, 2010

The Limits of Knowledge, Part IV

Last time I blogged on this topic, I discussed Kurt Gödel's proof that no consistent mathematical system of more than trivial complexity can be complete, or more concisely: math contains true statements that can never be proven. This turns out to have some important implications for Computer Science. Although we usually think about computers in terms of real-life engineered machinery, they're also creatures of mathematics. Just like the integers, or particular sets of numbers, computers are things you can write proofs about. One of the most important minds in formalizing and reasoning about computers in the abstract was Alan Turing: inventor of the Turing machine and the Turing test, and an important contributor to the cracking of the Enigma code.

Turing reasoned about a computer that's come to be known as a Turing machine. It's a rather impractical device: a machine that reads and writes symbols on a long strip of paper. You program the machine by telling it what action to take when it sees a given symbol; for example: if you read the letter 'a', replace it with a 'b' and slide the tape one symbol backwards. Interestingly, anything you can program the computer you're reading this blog post on to do, you could program on a Turing machine. Google or Windows 7 could be run on a Turing machine. It might take decades to get a result out, but that's not important in theory: the computational power, the programs you could write, are the same.
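
To make that concrete, here's a minimal Turing machine simulator in Python (my own sketch, not Turing's formalism verbatim): the transition table says, for each state and symbol read, what to write, which way to move the head, and which state to enter next.

def run(transitions, input_tape, state="start", head=0, max_steps=10_000):
    tape = dict(enumerate(input_tape))   # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")     # "_" marks a blank cell
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# A tiny program in the spirit of the example above: replace every 'a'
# with a 'b', sweeping right; halt on the blank past the end of the input.
rules = {
    ("start", "a"): ("b", "R", "start"),
    ("start", "b"): ("b", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run(rules, "abab"))   # prints "bbbb_"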

Turing-equivalent machines are the most powerful ones we know of, and likely the most powerful possible. Computers built to take advantage of weird quantum rules could be simulated on a Turing machine. Our brains can probably be simulated on a Turing machine. In fact, it's been hypothesized that the whole universe could be simulated on a Turing machine, a hypothesis to which I'd give a lot of credence. None of this is practical: our universe might die and get reborn trillions of times while you wait for the program to complete, but the interesting point is that it would eventually complete.

Which brings us again to the limits of knowledge. If there are questions a Turing machine could not possibly answer, then that's it for those questions: there's no reason to believe humans could figure out the answer, or that even the universe acting as one giant brain could solve them. And there are such questions. They aren't even that complicated.

Consider the following program:

while x > 3:
    x = x + 1
print(x)

That says: for any input x, as long as x is greater than 3, keep adding 1 to it; once it isn't, print it. So if you enter 2, it would print out 2. What if you run it on 4? Well, 4 > 3, so we go again with 5. Then 6. Then 7. The number keeps getting larger: it will never drop to 3 or below. Your computer will think and think and think and never return an answer (this is often what's happened when a program you're running freezes up). It's easy to see that making a number bigger and bigger, when really you need it to get smaller, just isn't going to work. This program doesn't "halt".

Given an arbitrary program and its input, can we figure out whether it's going to halt? Imagine we had a program that does this. Then we could build another program that does the opposite of its input: you give it a halting program and it runs forever; you give it a program that runs forever and it halts. What would it do if you passed in its own source code? It would run forever if it halts... but if it halts, that means it must run forever... but wait, it halted... No matter how it acts, it's by definition doing the wrong thing. Thus a program that figures out whether any other program halts on some input cannot logically exist (see here for a fuller explanation of the proof). As you can see, this is a very similar problem to Russell's paradox, which was the seed for understanding that math is incomplete. Computer Science is part of math, and follows the same rules as the rest of it.
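
Here's the shape of that argument as a Python sketch. The halts function is hypothetical; the whole point is that it cannot actually be written.

def halts(program, data):
    # Hypothetical oracle: True iff program(data) would halt.
    # The proof shows no such function can exist.
    ...

def contrary(program):
    if halts(program, program):   # if the program would halt run on itself...
        while True:               # ...loop forever instead
            pass
    else:
        return                    # otherwise, halt immediately

# Does contrary(contrary) halt? If halts(contrary, contrary) is True,
# contrary loops forever; if False, it halts immediately. Either way the
# oracle gave the wrong answer, so no such halts() can exist.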

Alan Turing was a hugely influential man in Computer Science. Beyond that, in playing an important role in breaking Germany's secret codes, he was among the most important men in winning World War II. How did the U.K. thank him for his contributions to science and national security? In 1952 he fell afoul of 'gross indecency' laws that outlawed homosexuality, and was given the choice between imprisonment and probation conditional on taking chemicals to reduce his libido (they also caused him to grow breasts). His security clearances were revoked and he was barred from continuing his cryptographic work. In 1954 he took his own life, eating a cyanide-laced apple. In 2009 Gordon Brown apologized for his nation's treatment of Turing.

[Photo of Alan Turing, distributed under Creative Commons]

Wednesday, March 3, 2010

Patently Silly

If you follow tech news you already know this, but if not: Apple is suing HTC (maker of various Google Android phones) for patent infringement. Twenty patents are named in the lawsuit (Engadget has a discussion of what each means). Naturally, the blogosphere is abuzz with discussions of what this means for HTC, Apple, and Google.

In short, while HTC is the defendant, conventional wisdom states the suit is more about Google than anything else. The iPhone is the current dominant smartphone, but Google's efforts are gaining steam and may eventually unseat Apple's reigning champ. The patents cover a lot of ground, but the basic point of contention is touch-based control of a phone (specifically, multitouch). Why HTC? It's a major producer of Google's Android phones, and it probably doesn't own enough patents of its own to launch a counterattack. It may be the first in a series of lawsuits, or it may end here.

In older news, Apple is also the defendant in patent lawsuits: Nokia has filed suit alleging that the iPhone violates 10 of its patents. Apple counter-sued in that case with its own patents, and the matter has yet to be resolved.

Which brings us to the interesting use of patents in the tech industry. HTC, by failing to patent sufficient quantities of ideas, has left itself vulnerable in a bizarre game being played by major companies. Many seemingly obvious ideas have been patented: one of the patents Apple is asserting covers using gestures to unlock a phone; another covers having a screen scroll when you wave your finger and 'bounce' when it hits the bottom; a famous software patent is Amazon's 'one click' method for purchasing a product from a webpage without re-entering shipping and billing information. The major technology companies have accrued impressive quantities of patents.

Some software patents are wielded by 'patent trolls', who aren't trying to defend any product, just extort money from other companies. The big companies tend to use their patent portfolios not for lawsuits but as a deterrent. When Nokia sued Apple for violating its patents on crucial cellphone technology, Apple counter-sued with its own basic patents Nokia was likely to be violating. If two major companies got into a serious patent war (IBM vs. GE, or Apple vs. Microsoft), the results could be catastrophic. Huge swathes of products could be expelled from the market. It's probably impossible to ship any cell phone without some cross-licensing: a 'world war' of patents would eliminate entire product classes, some as fundamental as the operating system. It's generally believed that companies are too self-interested to actually let such an ominous situation arise, but to be fair, a similar sentiment existed before World War I.

While patents were designed to promote innovation, they seem to be having the opposite effect in software. Hence some major efforts to reform or eliminate software patents (see in particular 'in re Bilski'). Understanding a piece of software can require expertise in the field and days of time, neither of which the average patent examiner has. Additionally, software is much closer to the realm of ideas: it's much easier to describe an idea for a program than a way of, say, fighting AIDS. Which is not to say that there isn't important research that needs to be funded in software: there is. Microsoft, to pick a specific example, has an extremely well-regarded research department into which it pumps lots of money. But the problems I mentioned earlier, along with the rapid pace of change inherent in technology, have created an environment of weaponized patents. Besides creating costs in fighting off patent trolls and removing useful products from the market, there is the standing threat of a patent war that takes out whole product categories.

The problem is ultimately the one-size-fits-all nature of patents. I'm by no means opposed to patents, but the dynamics of software are totally different from the dynamics of pharmaceuticals. Whereas a drug can take hundreds of millions of dollars to develop, a software patent can be thought up in an afternoon. Whereas drugs are patented individually, software is made up of tens of thousands of algorithms working together, any one of which is subject to patent. And none of that takes into account the speed of advance in software. If search had been patented, 2010 could be the first year we'd be able to use anything but good old Archie (sorry, Google).

The problem with articles on complex topics like patent law is that they're hard to conclude. What to say? The software industry would probably be stronger without any patents, but that's by no means a certainty, and more nuanced solutions would require additional pages to discuss. The one positive is that 20 years isn't a terribly long time (compare with copyright, which, if Congress continues to lengthen it at the rate it has, goes on forever). Yes, many software patents are questionable, not to mention the patents on your genes (that's right, someone owns a couple of those), but at least in 20 years they expire and the ideas pass into the public domain. At least we'll have a free market eventually.