One estimate puts the number of robots dwelling among us at 11 million (as of mid-2010). If computers serve as a guide, we can expect that number to increase rapidly as the years go by. Were the number of robots to increase at a rate of 10% a year, we'd have more robots than the present-day human population by 2080. Self-replicating robots could cut that time frame dramatically.
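That 2080 figure is just compound-growth arithmetic. Here's a minimal sketch of the calculation, assuming the 10% rate holds and the human population stays at roughly its 2010 level (both are assumptions for illustration, not data from the estimate):

```python
import math

ROBOTS_2010 = 11_000_000       # estimated robot count as of mid-2010
HUMANS_2010 = 6_900_000_000    # rough 2010 world population, assumed static
GROWTH = 0.10                  # assumed 10% annual growth in the robot count

# Years until robots outnumber humans: solve 11e6 * 1.1^n >= 6.9e9 for n.
years = math.ceil(math.log(HUMANS_2010 / ROBOTS_2010) / math.log(1 + GROWTH))
print(2010 + years)            # -> 2078, i.e. right around 2080
```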
Knowing that 11 million automated beings hide in factories and homes across this globe makes me a little worried. That's not an inconsequential army. Granted, almost all of those bots are highly specialized factory workers who couldn't lift a gun if you wrote the software for them, but are we entering the era of the robot, and consequently, the threat of robot uprisings? When should we really start worrying, and what should we do if a mechanical judgment day comes?
The first step is, of course, precaution. The best way to avoid a robotic uprising is not to train them to kill. We may have missed the boat on that one. At least the militaries of the world realize the importance of having a human make the shoot/don't-shoot decision. Still, that's just a single safeguard away from malfunction or tampering. The lack of strategic coordination among the robots also mitigates the threat, which for now puts an evil villain with programming skills in the loop. You should be safe in the near term.
In the long term, who's to say? Of course, not all uprisings are created equal. The first question is whether the robots have a shot at winning. Without access to heavy weaponry, extensive infrastructure support, and all-terrain movement, the uprising should turn out to be an inconvenience, not an Armageddon. What if we're not so lucky? I'd still divide the outcomes into a few categories: bad, terrible, really-really-terrible, and not-all-bad. Let's hope for the last.
Robotic Uprising, the bad kind
This would be a low-intelligence, but highly effective attack by the robots. The most probable cause is haywire nuclear missile control, but a roboticized military turning against us isn't inconceivable. It's bad, because civilization gets destroyed. It's not terrible, because a clock starts ticking as soon as the attack begins. Robots break down. Nuclear arms are used up. As long as some humans living in remote locations make it a year or two, humanity can be rebooted. Even if we don't make it, the life left behind will have a chance to evolve its way to intelligence again. The most vicious nuclear onslaught is likely to leave life behind, if only in the depths of the sea. If surviving life evolves intelligence quickly, they'll even have the ruins of our civilization to learn from.
Robotic Uprising, the terrible kind
Much worse would be self-perpetuating robots with a dislike for biological sentience. Self-repairing robots running on solar energy could see to it that no new life takes hold as long as the sun keeps shining. Here, a small band of humans couldn't hold out past the end of days to see a new dawn: once the automatons take over, it's over. This scenario requires a much more sophisticated robotic ecosystem. The lesson, though, is that once we have self-perpetuating robots, we're moving into dangerous territory.
Robotic Uprising, the really-really-terrible kind
The last scenario had all life, or at least all life more advanced than a fish, being eradicated from the Earth forever. Is there really a more dismal scenario? Yup. I see it playing out much like the terrible version, with one crucial difference: robotic seeder ships. While we're still all buddy-buddy with the robots, we start constructing robotic settlement ships. They're built to fly to distant star systems, maybe construct a city and incubate a few humans when they arrive. Then they mine the local system, and set about building new probes to colonize new worlds. We'll explore and settle the stars!
Until the robots decide they don't like biological intelligence anymore. Then the seeder ships are re-purposed to seek it out and destroy it. In the "terrible" scenario we blotted out life on Earth, but we're just one of a trillion or more planets. In this scenario we set in motion an orchestrated effort to destroy all life anywhere. Whoops. Truly, a disaster of the greatest magnitude. If such a scenario seems at all possible, I hope we'll stockpile a planet-destroying supply of fission, or antimatter, bombs. Better that the Earth be turned into dust than risk eradicating life everywhere. Of course, once the seeder ships are out there, it might already be too late...
Robotic Uprisings: not all bad
I saved the least bad for last, so as to not end on the down note of universal extinction. I believe that the proper way to evaluate the consequences of any disaster is in terms of life, and in particular, intelligent life. The destruction of, say, a nation would be a tragedy, certainly. But in the grand scheme of things it's a small blip. The miracle of Earth is first, life, and second, intelligence. The degree to which robots muck that up is the degree to which the uprising is bad, or really-really-terrible.
And the thing that makes life special is evolution. The biosphere is constantly changing, and usually improving. We went from microbes blindly reacting to environmental cues, through varying degrees of braininess, to the human mind, capable of unlocking cosmological secrets, capable of creative invention. Someday we may build a machine that's at least as capable of evolving as ourselves, where generation after self-replicating generation of machine is more intelligent, more creative, better than the last. Such a system could very well leave humanity behind, achieving intellect unlike anything we currently imagine. If such an improving, super-human creation were to turn against humanity, it'd just be Darwinism, in a way. Competition among species is as old as time, and if we create a truly superior species, it might out-compete us in the end. I see nothing wrong with fighting such a creature: we need not go gently into that dark night. But, unlike the other scenarios, mutual destruction would no longer be necessary. As long as life, be it biological or mechanical, is improving itself, things are well enough in the world. If we achieve any degree of artificial intelligence, I suspect we'll eventually reach this mastery of the art. The important thing, then, is seeing to it that we don't let the robots get out of hand, at least until that time.
Monday, December 13, 2010
Wednesday, December 8, 2010
An Ion Blast from Low Orbit
Sometimes I start to write about a topic, and every few sentences find myself resisting getting pulled down some new tangent. These topics can be approached in so many different ways, it's hard not to keep writing until a book comes out. Attempting to expose the backstory for the original point I wanted to make, I'll find myself repeatedly backing up, deciding there's something else I should tell the reader first.
WikiLeaks is the topic du jour. Why has its release of diplomatic cables caused so much more furious a response than anything else it's leaked? What are the legal and moral implications of the event? How is everyone reacting, and where's it all leading?
But what I originally wanted to talk about is the fact that Anonymous is attacking the credit card companies.
Anonymous is hard to characterize, exactly. It's a loose affiliation of computer-savvy individuals who've taken on such organizations as Scientology and the RIAA. They fight against censorship, against perceived injustices perpetrated by major corporations against defenseless individual citizens. And lately they have shifted their attention from antipiracy organizations to the financial behemoths behind the economy.
While previous leaks by WikiLeaks have created some controversy, the fallout over the release of classified diplomatic cables is somewhat unprecedented. Officials in governments throughout the world have been falling over each other to condemn the organization in the strongest words possible. It's a terrorist organization whose members should be assassinated, according to some. Arguably in response to this rhetoric, companies with links to WikiLeaks have been quick to sever them.
Amazon dropped its hosting of the site. PayPal froze its account, refusing any new contributions. MasterCard and Visa have also banned any payments to WikiLeaks through their systems. The effect of all this is to deny the organization access to capital at the same time it's facing major technical attacks and legal battles. It's an odd situation, legally. WikiLeaks has not been formally accused of any crimes, but the attempts to remove it from existence are not really government actions. Is MasterCard in the right to decide that certain organizations don't deserve access to donations? This is one of those tangents I'm going to avoid going down.
Anonymous has sided with WikiLeaks, arguing that the companies are in the wrong for trying to cut off a whistle-blowing organization: regardless of the moral questions around the appropriateness of this set of leaks, it's not the government's or private industry's role to silence undesired speech, even the revelation of secrets. So it's attacking, in its own peculiar form. As a primarily internet-based group, it fights through the dissemination and stopping of information. Some see it as an illegal mob, others as modern activists: another tangent.
The primary weapon Anonymous uses is the DDoS, a distributed denial-of-service attack that brings websites down. A computer server can only handle so many requests for a webpage at a time. By running code to, in essence, refresh a webpage all day, you can slow the page down. By running that code on thousands of computers, you can block entry by anyone. So for much of today the MasterCard and Visa websites (but not their transaction-processing servers) were down.
The code in question is called the "Low Orbit Ion Cannon", and can be downloaded by anyone. I find people's participation very interesting. A DDoS attack is a crime. Participating could mean a multiyear jail sentence. Do they not know? Not care? Figure that in such a large crowd they won't be singled out? The victim can easily log the requests coming in, and through subpoenas find out who the attackers were. But despite the risks, real or perceived or ignored, people download the code and let their computers bring down credit card websites.
Thinking about it, I've realized they may be protected by an odd ally: the virus. It's easy to view the computer as just an extension of the self, but it is not necessarily only in our control. A traditional DDoS is not a group affair, but the tool of virus writers. A DDoS requires many computers spread out across many networks. Virus-ridden machines will often spring silently to life to attack a distant server, without the owner noticing anything except perhaps a slower-than-usual internet. These botnets are almost certainly also involved in the attack against the credit card companies.
So how do you know who was attacking, and who just had a secretive virus buried in their machine? I suspect computer forensics could tell, but after the first round of trials for this, that would change. Activists would just visit unsafe sites, download trojan-laced programs, knowing that they were helping the attack while retaining plausible deniability.
This technique doesn't stop there, either. There have been horror stories over the years of viruses that pull kiddy porn onto your machine. I suspect these are written by the purveyors of such filth to avoid having to host the content themselves: much safer to let anonymous infected computers handle that risk. But if it's not already used as a screen, I suspect it will be eventually: a virus that downloads inappropriate material to your machine for you. Thousands will be infected unknowingly, a few will seek out the computer infection for the files, and how do you ever tell the two groups apart?
Identity theft was just the start. With computers in between us, it becomes impossible to test for intention. As more of our lives go online, as more crimes are committed in the digital ether, detection of crime may become the easy part of law. The hard part would be figuring out who the computer committed the crime for.
Friday, December 3, 2010
Comcast and L3
I always read the comments on online news. I think it's a wonderful view into the process of society moving. Interactions, the spread of ideas: it's all stored in 1's and 0's for us to examine.
The internet feels nebulous, but it's built on very real hardware spread across the world. No one entity owns the internet; instead, millions of companies and billions of people each own a little piece. The science and technology of the connections, the mathematics of the virtual information speeding down copper wires, the economic transactions between billions that make it all work: it's an interesting topic, but in many ways an esoteric one. Like most other things in modern society, a small group of people specialize deeply in each aspect, and make it work.
So it's interesting when the small details are brought to the broader attention of the public. This happened recently in an ongoing dispute between Comcast and L3. Each presented a different set of facts to the public, and in online news and the related comments sections I've watched as consensuses formed. First, there was anger at Comcast, and accusations that this was about stopping Netflix from replacing cable TV. Then Comcast's arguments were considered, and the discussion turned towards absolving Comcast. Maybe L3 was in the wrong here? People taught each other about peering and CDNs, short arguments were developed explaining each side, and last I saw people were solidifying their views, the debate seeming to be won by the L3 side, but with moderate disagreement, as is always true in a controversy.
The very short of it is that Comcast, provider of internet to residential areas, and L3, who owns heavy-duty connections between far-flung ISPs, had a peering agreement. This said that since they were sending about the same amount of data to each other, there was no need to calculate detailed bills. It was a wash; they carried each other's data for free. Netflix contracted L3 to stream movies for it, Comcast claimed L3 now owed it money, and L3 said Comcast is just trying to block Netflix.
It's murky, because there are two types of data transfers. If a Comcast customer sends an email to Verizon, and it travels across the L3 network, L3 gets paid: Comcast is paid by its customer and Verizon by its customer, but neither of those payments covers the carriage in the middle, so L3 charges for the transit. In contrast, if Comcast and Verizon are directly connected, it's not clear either should pay. Even if Comcast is sending far more emails to Verizon, it's in both companies' interest to make sure the email gets through. If not, both have angry customers.
So in that regard, Comcast is sending data across L3, but L3 is supplying Comcast with data its customers explicitly requested. Comcast argues that the previous Netflix provider did pay them for that delivery, and that L3 is looking for an unfair advantage. L3, and before it Akamai, are acting as CDNs, Content Distribution Networks.
But again, the term is ambiguous. In one sense, both L3 and Akamai distribute content, so they're in the same business. But they did it dramatically differently. Akamai essentially paid Comcast to host the content, acting as a middleman. L3 already has networks across the country, so it just places the computers on its own network. In a sense, the computers just move a few miles. But now the data moves around like the rest of the content on the internet; a middleman was just removed.
It's these subtle distinctions that define the situation. Is a CDN any form of distributed content hosting, or is it a specific way of doing it, purchasing service from your customers' ISP? Is "peering" just traffic that passes across a network on its way elsewhere, or does it include traffic destined for the other network? But these distinctions have the potential to dramatically change who makes billions and who goes under, what sort of internet you get and at what cost, even whether things like high-fidelity video conferencing will change business or not.
It's been heartening to watch thousands of people online all unraveling, at least a little, the interconnections and implications of the topic.
Sunday, October 3, 2010
Live Forever and Die Young
Life predates death, but just barely. At some point in the dim past, in a small pond or in the depth of the ocean, a twiggy bit of proto-RNA, or a fat globule stuffed with hydrocarbons, appeared. It was random chance, an unusual reaction, but with that life had begun. And like all familiar forms of life, this ancestor set off to make a copy of itself.
It succeeded. It may have succeeded a few times, starting a small extended family with total ignorance about death. Mostly, they lacked any organelles remotely capable of conceptualizing anything, especially abstract concepts like death. But there was also a lack of practical experience at play.
Then, death swirled his metaphorical bony finger in that pond, announcing his arrival. Perhaps torsion in the water ripped a creature in two. Perhaps the reproductive act failed, leaving two molecules trapped in a deathly embrace. But just moments after life, death was here.
Odds are pretty good that life and death will depart together, the last creature exiting, hand in hand, with the last death. Entropy isn't the swiftest foe, but it's a dedicated one. Still, the rules of death are amenable to change. As long as a star is fusing elements together, there's energy a life can use to extend its existence.
The younger among us may live to see the day when medical technology possesses the power to banish non-accidental death. Futurist Ray Kurzweil believes even the middle-aged will make it to that victory. Even if it comes centuries from now, some civilization is going to have to face the social repercussions of practical immortality. Our bodies do an impressive job of keeping themselves repaired; a little help from nanotechnology, and the genetic engineering away of certain 'kill switches' in our genes, could push our maximum age back arbitrarily far.
The "replacement rate" for humans is currently somewhere around 2.3 children per woman. If every woman had 2.3 children, our population would remain constant. As we prevent childhood deaths and infertility, the number approaches 2. When we have more children than this, our population grows, and on a finite world we will eventually reach some maximum capacity for supporting life.
But the replacement rate assumes a constant rate of death as well. If nobody dies, there's no one to replace, so the replacement rate falls to 0. Adding in voluntary death and accidents, a few babies become necessary, but a very small number compared to our biological drive to reproduce. An immortal civilization will face very hard choices about how to handle birth and death, putting a choice once dictated by fate into human hands.
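To make that collapse concrete, here's a toy bit of arithmetic (my own sketch, nothing rigorous): once the population hits a hard cap, the number of births a society can permit each year is exactly the number of deaths, and curing aging shrinks that budget dramatically. The cap and mortality rates below are rough assumptions.

```python
POPULATION_CAP = 7_000_000_000   # assumed hard cap, roughly today's population

def annual_birth_budget(annual_death_rate):
    """Births per year that keep a capped population exactly constant."""
    return int(POPULATION_CAP * annual_death_rate)

print(annual_birth_budget(0.008))    # today-ish mortality: ~56 million births a year
print(annual_birth_budget(0.0005))   # accidents and choice only: ~3.5 million
print(annual_birth_budget(0.0))      # true immortality: zero
```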
The most pleasant solution is pushing off the population cap. Immortality supplemented with the colonization of space could see each generation being sent to new, untapped worlds. Kurzweil predicts that the ability to upload the mind into a simulated Eden will avoid overpopulation issues. If the mind is just a physical pattern, and that pattern can be reproduced in bits, computers could provide a massive location for storing generations past, a location where the living could interact freely with the 'dead', removing the sense of loss from bodily expiration.
But it's not clear that either of those solutions will come to pass, or provide a long-term solution to geometric population growth. Absent new worlds, real or simulated, to house our forefathers, population balance will need to be provided by society. The fairest solution may be abstention: the technology to stop death exists, but isn't used. Or if the allure of an extra century or three of life is too great, life extension could be provided up to a point. At some predefined age, death is mandated.
Practically, the solution seems fair. Overpopulation is averted, and while the government will eventually have you killed, you'd have achieved a longer-than-natural life. Still, mandated death dates have a chilly, dystopian feel to them. I suspect not everyone will go gently into that good night. And in a way it's just another form of abstention, possessing life-giving technology but refusing to hand it out.
What if we wanted to allow immortality, perhaps for the few? It'd be an interesting experiment at any rate: what would an 800-year-old mind in a youthful body think about? Would they be like a species above us, masters of endless knowledge and skills? Or would their brain fill up, and their abilities fail to exceed our own? Would they become trusted advisers, fonts of first-hand knowledge from centuries of civilization, or just a burden and source of jealousy for the rest?
Will we turn to the free market for a solution? Auctioning off slots to the next century? A free-market proponent could argue that wealth corresponds with an individual's contribution to society, and that those who contribute most deserve more time on Earth, but the solution strikes me as too distasteful for adoption. While we already portion off life-saving technology by wealth, I'm inclined to believe this discrepancy has a limit, and that the populace won't accept an immortal ruling class lightly.
There's one solution that frightens me most, because it feels so fair on the face of it. Immortal despots, death panels and brains encoded into 1's and 0's are very science fiction. They don't strike me as the sorts of solutions people would actually choose. But what if the 'replacement rate' was made more concrete? What if immortality is granted only as long as you have no children? With the birth of a first child, some clock could be set, perhaps just by the removal of age-defying treatments. The parent could live a traditional full life: 60 years, perhaps, to see the children grow up and make their own lives, before they need to pass on. Then death becomes a choice, a trade-off for a deep biological need, and a moral obligation to give other lives a chance at realization.
Sounds fair, doesn't it, or at least as fair as is likely when immortality is real? But what sort of world would this create? I suspect lots of people wouldn't wait much longer than they do now. Some would experience life for centuries, or longer, only passing on when they'd experienced all they wanted to experience in this life. But then there's the edge of the bell curve...
A few people would choose never to have children, preferring self-preservation to reproduction. Perhaps sociopaths, having no concern for others, would see no need to create new life. The intensely narcissistic, the ego-maniacal, the greedy, never seeing an advantage in giving up their self-important lives, would persist generation to generation. The well-adjusted will accept death, sooner or later, as an integral part of being. Even if the self-absorbed make up a minuscule fraction of the population, each generation would add a few more to their numbers. The good die, the bad live on. The good die, the bad live on. Then one day birth is all but gone, as only the self-obsessed are left. Such a well-meaning solution. Such a dreary end.
There's a theory that the reason we can't find signs of alien life is a tendency for civilizations to self-destruct at some point in their evolution. This idea seems less popular now; it was much more visceral during the threats of the Cold War. I'd always dismissed it somewhat out of hand, as more a projection of fears than a reasonable theory. But in thinking about life extension, I can start to see the sort of seed that could rip societies apart. What if the technology was ubiquitous and strict rules could not be placed on its use? What if the solutions all require despotic control, the sort antithetical to a civilization's advancement? Will we adjust to the siren song of immortality? Do any species?
Thursday, September 9, 2010
Do We Have Gravity Backwards?
Electromagnetic radiation allows us to see the universe. From radio waves as long as football fields, up through the colorful wavelengths of visible light and on to high-powered X-rays and gamma rays, electromagnetic radiation is our principal tool for observation. Even when we switch to magnets or touch, we've just swapped one form of electromagnetism for another.
What if some forms of matter don't interact with electromagnetic radiation? A very clear pane of glass is transparent to the colors we see, but still opaque to other wavelengths. What if a structure was totally invisible? You could pass through it without ever knowing it was there. The only evidence would be the slight tug of gravity.
A globule of the stuff on Earth might evade detection forever, presumably drifting down into the molten core of the planet for an even better hiding spot. But on galactic scales the impact of the gravity would be visible on other matter. And astronomers have detected just that. Galaxies with insufficient mass to mathematically hold together do anyway. When you tally the things we see against the gravity we measure, they don't add up. Science has named this discrepancy dark matter. The leading candidate explanation right now is exotic particles that just don't interact with light.
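To give a flavor of the mismatch, here's a simplified back-of-the-envelope sketch (not the actual measurements): a star orbiting outside most of a galaxy's visible mass M at radius r should circle at roughly v = sqrt(GM/r), so speeds should drop toward the edge. Observed rotation curves stay roughly flat instead, and dark matter is the unseen extra mass invoked to explain the difference. The galaxy numbers below are rough, assumed values.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
KPC = 3.086e19       # one kiloparsec, m

def keplerian_speed(enclosed_mass_kg, radius_m):
    """Circular orbital speed expected from the visible mass alone: v = sqrt(GM/r)."""
    return math.sqrt(G * enclosed_mass_kg / radius_m)

visible_mass = 1e11 * M_SUN   # rough visible (stars + gas) mass of a Milky-Way-like galaxy
for r_kpc in (8, 16, 32):
    v_kms = keplerian_speed(visible_mass, r_kpc * KPC) / 1000
    print(f"r = {r_kpc:2d} kpc -> expected ~{v_kms:.0f} km/s")
# Expected speeds fall off as 1/sqrt(r); measured curves hover near ~220 km/s far out.
```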
Einstein taught us that gravity is a distortion of spacetime by a massive object. The universe is like a trampoline: when something is placed on it, it bends the structure around it. Another object placed down will tend to roll towards the first object, obeying the attractive force of gravity (As an aside I've never really liked this way of explaining gravity. Why does the trampoline distort around an object? Gravity is pulling the object down towards the Earth, and the trampoline is in the way. We explain what gravity does by alluding to gravity).
There's an alternative explanation I've come up with. We see regular matter at the bottom of gravity wells, in exact proportion to the strength of the gravity, and as the matter moves the gravity does too. So we say that the matter distorts space, creating the gravity. What if that's backwards? What if spacetime is curved all on its own, and matter just pools in the low places? That is, what if protons, electrons and the whole gang don't bend space around them, but just follow the existing grooves? You'd still see lots of matter in very dense places, but the cause and effect would be reversed. Gravitational distortions wouldn't follow a sun around; the sun would roll around to keep inside the distortion. Dark matter stops being a mysterious particle, and just becomes a gravity well that hasn't been totally filled.
Physical theories need tests. The most straightforward one I can think of is looking for cases where a massive body is caught in the grip of an even larger one: a star passing by a black hole, perhaps. If the gravity well travels too close to the black hole, it may fall in. But if it skirts by ever so closely, ejecting around the other side, the energy pooled in the star might fall in while the gravity well continued on its way. Dark matter being produced by supernovae might also be a clue, with the great explosion ejecting mass from the gravity well. Finer measurements of gravity might allow us to use more practically sized objects.
Monday, September 6, 2010
Happy Labor Day
On September 4, 1882, Thomas Edison turned on the world's first commercial electric power plant, lighting one square mile of Lower Manhattan. The following morning would mark the first observation of Labor Day in the United States.
From peasant uprisings, through strikes and political lobbying, labor disputes have been part of the fabric of civilization since its inception. But from the industrial revolution onwards they took a different tone. Work took a different tone. Man could now wield power external to his body.
Animals and fire served that role first, but the introduction of water wheels and the mastery of steam power greatly bolstered the force with which man could reshape his surroundings. Edison's electric station turned the electron to man's bidding, one more ally in reshaping the universe.
One of the defining questions of civilization is 'how should things be shared when want exceeds supply?' In a sense, this is the foundational question in politics and in economics. The basic answer we seem to always return to is: you deserve reward proportional to your contribution to the process of obtaining it.
But with mastery of the elements, and the introduction of repeatable, predictable, automatable processes, the balance of labor shifted. Within a factory, the ability of a man to produce was magnified immensely. Suddenly, measuring contribution becomes difficult.
Does the man toiling in a field not yet touched by automation suddenly deserve a much smaller portion of the society's goods? And what of the machinery's contribution? Does the man working the machine deserve the riches of the machine, or does the machine's inventor? Or the man who arranged to have the machine built?
The industrial revolution was a liquid period in human civilization. The rules were changing, new forces were at play. The labor movement was a series of physical battles fought between people to set the new rules of society. As the heat of change cooled, society started recrystallizing, new social norms in place for how to manage an economy in a world with electricity.
The story of the labor movement is filled with victories: the weekend, the slowly shortening workweek, rights for workers and comfortable wages. But in a sense, the entrepreneurs and investors won. They were the ones seen as setting progress in motion. Society came down in favor of the idea that wealth mostly belongs to the inventor, to the machine owner, to the one organizing the work.
Thirty-four years before Edison's plant opened, before Labor Day was observed, Karl Marx published the Communist Manifesto. Marx believed workers would unite and claim ownership of the fruits of labor, destroying social strata and autocratic economic governance. That hasn't happened. Karl Marx's mistake, I'd argue, was believing that the 'crystallization' of society would keep proceeding at the rate it was during his life. That soon man would have invented what he was going to invent, that life would settle into the pastoral routine of earlier ages, when each generation relived their parents' lives. Perhaps it will someday, and he was just early. But change is innate to complex organisms like a society. Technology can greatly disrupt established orders, but we manage to do so on our own even without it.
What Marx saw coming was a day when each generation looked exceedingly similar to the one before it. What would business be in such an environment? Each adult would have his role: keeping this machine functioning, seeing that coal was delivered from this mine to these factories. Businesses would not rise and fall, but plod along with predictable returns each year. In such an environment, of what use is the entrepreneur, the investor, the executive? The system today elevates them, treasures their ability to navigate changing times, but that role is lost in a predictable environment.
His call for the workers of the world to unite was never answered. Communist states appeared, but the brotherhood of the worker never overcame the social force of national identity. There was no need to question why the grandchildren of businessmen deserved such power, because technology kept everything changing and new inventions created new wealth. If anything, change has been accelerating.
Labor Day celebrates the efforts of men and women in ensuring the quality of life of the worker, the right of the worker to exist as a powerful political and social entity. It's a story of struggle, requiring an antagonist, a role filled sometimes fairly, sometimes not, by the executive. What's ahead in labor relations? I don't expect the struggle to ever end, really. Incredibly intelligent AI might achieve peace in labor relations, ending man's need to worry about such things. We may create a Matrix for ourselves, a simulation of a world without scarcity, man-made gardens of Eden. But more strongly I suspect that our civilization will stretch out through the stars, and each passing millennium will see new battlefields for Labor Day remembrances.
Monday, August 9, 2010
Computers: Slightly Less Magic Than They Could've Been?
Our suspicions may have been confirmed: P may not have equaled NP all along.
The P=NP question has been the seminal problem in theoretical computer science since forever ago (for the computer value of forever: 40 years give or take). To simplify it past any resemblance of truth, P is basically the group of all 'easy' problems, and NP is the group of all 'easy to check' problems. If I gave you a computer program and told you it could always win at chess, how would you go about confirming that's true? Not very easily. But if I told you there's a 3,000 mile car trip that visits every state capital in America, and told you the roads to take, you'd just have to count up the miles to see if it's true. Proving a computer program does something is not in NP, but proving that there's a path of some length between cities is. The question, then, is: if there's an easy way to confirm a solution is true, can you easily come up with the solution too? (If I give you a map, can you come up with the shortest path between all the state capitals?) Computer scientists have suspected 'no', but demonstrating this has been fraught with difficulty.
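To make the asymmetry concrete, here's a toy sketch in Python. The cities and mileages are invented, and this isn't meant to mirror any real proof technique: checking a claimed route is one quick pass over the list, while finding the best route means trying every ordering, which blows up as you add cities.

```python
# A made-up mini road trip: verifying a claimed route is easy,
# finding the best one by brute force is not.
from itertools import permutations

distances = {                      # invented, symmetric mileages
    ("A", "B"): 300, ("A", "C"): 500, ("A", "D"): 400,
    ("B", "C"): 250, ("B", "D"): 350, ("C", "D"): 200,
}

def dist(x, y):
    return distances.get((x, y)) or distances[(y, x)]

def route_length(route):
    return sum(dist(a, b) for a, b in zip(route, route[1:]))

def verify(route, claimed_max):
    """The 'easy to check' direction: just add up the miles."""
    return route_length(route) <= claimed_max

def shortest_route(cities):
    """The 'hard to solve' direction: try all (n-1)! orderings."""
    start, rest = cities[0], cities[1:]
    return min(((start,) + p for p in permutations(rest)), key=route_length)

print(verify(("A", "B", "C", "D"), 800))     # True: 300 + 250 + 200 = 750 miles
print(shortest_route(("A", "B", "C", "D")))  # brute-force search over orderings
```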
People try to prove this all the time, and they fail. Vinay Deolalikar apparently emailed a 100 page proof to a group of respected researchers, and the paper eventually ended up on the Internet. Then the Internet became abuzz, an article was written on Slashdot, and tweets of 'Deolalikar' skyrocketed past any previous level he'd experienced. Daniel Lemire raised a good question: why? What about this proof has gotten such discussion, as opposed to countless other attempts at a proof? That's a really interesting question in general.
News transmission requires two participants: someone to tell the news and someone to hear it. You're hearing, or rehearing, this news from me. All in all, I'll probably end up telling 15 or so individuals about this event. Each of you may pass the news on to someone new, or let this trail of the story of Deolalikar's proof attempt end.
Coincidentally, just like in my last post linking auto-immune diseases, SEO, and planted evidence, there's an illness connection here. The study of news transmission began in the study of epidemics. Instead of giving you a disease, I'm giving you a fact. Sorry, antibiotics won't help this time. Maybe enough alcohol, consumed very quickly, will wipe the fact from your mind? On the plus side, P=NP would have been great news for robots, so Computer Science is arguably less likely to kill you now.
What was discovered in epidemics is that the crucial value for deciding if a sickness will turn into a pandemic is the average number of people a sick person passes the disease on to before dying or getting better. If they average, say, 0.9 people, then the disease will dwindle and vanish. But if the average is, say, 1.2 people, then it'll explode through the population, each day the legions of sick increasing. Same thing with zombies. Same thing with news and memes.
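Here's a rough simulation of that threshold, a branching process with made-up numbers rather than anything fit to a real outbreak or news cycle: each carrier of the story passes it on to a Poisson-distributed number of new people with mean R. Below 1 it fizzles; above 1 it tends to blow up.

```python
# Toy branching process: how big does an outbreak of a story get?
import numpy as np

rng = np.random.default_rng(0)

def outbreak_size(r, generations=25):
    carriers, total = 1, 1
    for _ in range(generations):
        # each current carrier tells a Poisson(r) number of new people
        carriers = int(rng.poisson(r, carriers).sum())
        total += carriers
        if carriers == 0:
            break
    return total

for r in (0.9, 1.2):
    sizes = [outbreak_size(r) for _ in range(5000)]
    print(f"R = {r}: average size {np.mean(sizes):.1f}, largest {max(sizes)}")
```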
The P=NP? proof is a particularly good example of this. The knowledge of this paper started out just in Deolalikar's head. He gave the paper to a few other researchers. If the proof had been poorly done, it would have ended there. But it was exciting enough that the researchers wanted to share it with > 1 other person each. There was an initial outbreak of this knowledge, eventually reaching Slashdot.
Slashdot, having a vast readership, is particularly adept at transmitting an idea. For an instant, the average number of people told skyrocketed with its introduction. The analogy is that the virus has reached the water supply, or the zombies have entered New York City. There was a phase change in outreach.
Sometimes news sources provide stories that don't excite us, that don't have a viral spread > 1. These peter out of the public discourse. But the blog posts and tweets since the Slashdot story demonstrate again that there's a viral nature to this story. And here I am telling you.
Viral spread isn't a constant value, as eventually the susceptible are all infected. And not everyone is susceptible, by genes or geographic isolation. Or in this case, interests. Not all the readers of this blog are computer science aficionados, so this is more likely to be the end of the line than on the computer science-centric blogs I've read this story on. It still might have another major outbreak if it reaches a mainstream news source. Or alternatively, if other theoreticians can't find any problems with the proof, the story will 'mutate' and grow even more viral. Million dollar prizes always make good stories.
Returning to Dan's question, what made the story viral? P=NP is about as pop-culture as unsolved mathematics can be. The proof is apparently plausible. Releasing the paper by email is novel and interesting. And "I am pleased to announce a proof that P is not equal to NP, which is attached in 10pt and 12pt fonts" is a great way to announce one of the pinnacle proofs of our time.
One of the interesting things about viral spread is how small the critical value is. 0.95 people per iteration is a non-story, 1.05 can spread across the world. Thus it may be that the font comment was enough to push it across, and without it we'd have had to wait for more actual confirmation of the proof before hearing about it.
I find networks to be an endlessly interesting topic, and this is a great practical example. It also shows a news source acting as a facilitator of a story with wide appeal rather than pushing a story because of the source's reach. It would be interesting to measure which news sources respond to and create viral spread, and which just push ideas that couldn't hold up in an organic global discussion.
As cool as a P=NP world would be, best of luck to Deolalikar; I hope you've earned your way into the pantheon of computer science greats.
Saturday, August 7, 2010
It was him.
The human body is equipped with a remarkably effective defense system. Once a dangerous substance gets into your body, it still needs to push its way past cell membranes laced with Defensins, proteins designed to keep bad things out. While the invader struggles to get in, it is liable to be eaten by Granulocytes and Macrophages, or marked for attack by a passing B cell. Even once infiltration is achieved, the body will seek out the compromised cells and destroy them with Killer T and NK Cells. Infections and illnesses come, but the body almost always wins in the end.
But such a powerful defensive weapon has a cost, as HIV reveals. A powerful immune system can be turned against the body, destroying what it was built to protect. Some allergies exhibit a similar phenomenon, where the immune system gets overeager in destroying foreign particles. A lack of defenses leaves a system vulnerable to attack, but excessively capable defense systems are liable to be misused by outside forces.
The lesson doesn't just apply to the body. Google is caught in a war against 'black hat' search engine optimization. Do you ever see spam in the comments of an article? The spammers often don't expect you to click on the link to purchase cheap knock-off purses, but they do expect Google to see the link and return the target website higher in search results. Huge swathes of the internet are made up of dummy webpages, computer generated, intended only to look legitimate so the links to a real webpage are followed by Google and considered more authoritative.
When these tricks work, Google is left redirecting its users to scams and overpriced stores. So it works hard to identify these tricks and nullify them. If Google catches you gaming its system, it will remove you from its indexes, forever cutting you off from the biggest provider of pageviews around. The threat of banishment from its indexes is a very effective deterrent.
It also creates a powerful weapon to be abused. If you can't rank your website higher, causing your competitors to get kicked out of the index is the next best thing. Performing black hat search engine optimizations on another website can trigger Google's defenses against an innocent target, bringing advantages to the perpetrator. This is making Google's job harder.
And then there's law. Allegations of police officers planting drugs on suspects have a long history. It's difficult to catch someone in the act of a crime like drug use, so possession is used as a reasonable marker of intent and criminalized. But possession can be faked by an enemy, turning the sizable defensive weapon of the criminal justice system to twisted ends. Very recently, after 8 months of being abused and ostracized, a man was cleared of possession of child pornography after an employee of his bragged about planting the photos on his computer. In this case justice was eventually served, but it's certainly the case that not every framer is foolish enough to leave behind a trail or brag about his crime.
The improper triggering of defensive mechanisms is, to some extent, unavoidable. The cost of letting an attack succeed in any setting can be too great to sustain. Thus a balance must be found that will inevitably include false positives, be it the creation of allergies or the imprisonment of the innocent. Still, it's a very important lesson to remember that evidence is easy enough to manufacture. When the penalties for a crime can ruin a life, especially based on the ultimately circumstantial evidence of possession of outlawed goods, some extra care must be taken. Things aren't always what they seem.
Saturday, July 24, 2010
The Flipbook Universe
The universe is a very active place. From the collision of stars and fiery supernovas down to the effervescent fizz of virtual particles that make up the smallest filaments of reality, things are always moving.
The dynamic nature of the universe is almost a given...and yet, is it real? Consider instead the flip book universe:
The flip-book, besides being a strategy for getting through a particularly dry lecture, serves as an interesting model for understanding the universe. It's a two-dimensional reality that exists within three, the third dimension making the passage of time possible. Each page is static, unchanging. But if you flip the book, creating varying perspectives on the pages, you can observe the arrow of time working its magic.
There are other ways to construct the same phenomenon. The movie projector shines light through a series of static images to project them onto another object. Precursors like the kinetoscope would often rotate a cylinder of images, showing one at a time through a slit.
The same phenomenon could be creating you and me. Are we really dynamic, three dimensional objects, or are we just fixed features on a higher dimensional rigid object, spinning on its axis, creating the illusion of passing time? When the universe completes its rotation, will we be back where we started, ready for yet another pass through an unchanging story of life, the universe and everything?
I'm not sure that there are explicit, scientific tests you could run to ever answer the question. The idea may be trapped forever in the realm of philosophy. For any layer of dynamic activity could just be a transcription on a higher dimension. And how could the sketched character, unchanging moment to moment, leave his page to observe the full sketch book?
If it's true, though, I look forward to repeating my experiences with you the next time reality completes its cycle.
Thursday, June 24, 2010
Pawns and Processors, Bytes and Bishops
In a game famous for its prodigies, Magnus Carlsen of Norway stands out as something special. He was the third youngest chess grandmaster ever (age 13). Youngest #1 player ever (age 19). At 19 he also holds the third highest Elo rating ever, putting him with the top chess players of all time (although Wikipedia informs me that score is subject to inflation and thus debatable). Apparently you don't usually hit peak chess ability until 25 at the earliest, so being the best in the world at 19 promises a long, dominating career.
Why do I bring him up? Because his childhood spent mastering chess included lots of time playing against computers. Carlsen is obviously one of those rare individuals that excel at some task far beyond the abilities of others. But could screen time have contributed to his almost unique skill? It's an answer we'll have to wait and watch for as new generations of chess players enter the world stage. Carlsen could be a fluke, or a precursor of what's to come.
The argument for software being a major cause of Carlsen goes like this: natural ability is not enough to be the best in the world. It's a requirement, but insufficient. Chess greats throughout history have put countless hours into studying chess: memorizing moves, playing games. This is true of any skill: you need to paint lots of pictures, write lots of words, hit lots of fastballs to make it big. Natural aptitude is a clay that gets formed into a specialized machine with repetitions and practice.
But not all practice is equal. Joe Mauer, 3 time A.L. batting champ, perfected his swing against a contraption his dad invented that would drop a ball at his eye level. With so little time between seeing the ball and the ball passing his waist, Joe had to develop an exceptionally quick swing. More generally, how often do strong high school sports programs exist for decades at a time? Partly, elite programs attract transfers; partly, the talent in a program encourages everyone to perform at a higher level.
As a child, I played chess against my mother. For a long time she'd beat me every game. Eventually I surpassed her, winning most (but not all) of our matches. And then I stopped improving. It wasn't until years later that I started playing chess on Yahoo that I noticed myself improving again. I had found a challenge.
If you practice a competitive skill against players below your level, it's much harder to improve. Failures teach us. If you only see 50 mph fastballs, how do you learn to hit a 95 mph one? You're not developing the reaction time, the muscle memory, or the poise to do so. So it is with a chess prodigy. Eventually you surpass your parents, local children, then local adults. You can travel the world playing tournaments against the current generation of greats, but there are only so many such games you can get in. Often a chess prodigy will find a skilled player as a mentor, but the truly great will surpass even the mentor.
Enter the computer, capable of calculating dozens of moves in advance. The computer can now play on par with or better than any human in the world. In the coming years, that advantage will only grow. Any child with a computer now has access both to talented players across the world via the internet and to their own personal AI chess master. There is no need to arrange games against other greats, or to travel far to play: from the comfort of your room you can pit yourself against the best.
As computers soar in their processing power, defeating all comers, they threaten to trivialize chess. But simultaneously they offer us the opportunity to reach previously unattainable levels of skill. And chess could just be the start. From math to literature, sports to debate, engineering to leadership, computers may soon give us the opportunity to prove ourselves in challenges previously unknown. Carlsen is a wonderfully brilliant young man. He may also be a pioneer.
Wednesday, June 16, 2010
The other solution to the Prisoner's Dilemma
Sometimes I'll sit down and blog the thoughts that come to mind. Other times I'm thinking interesting ideas and happen to be near the computer. Then there are the topics that bounce around my head for months, where I intend to write something, but it always feels so...daunting. Eventually you'll get my theory of negative mass, but...not yet.
Of course when you wait long enough somebody else might present your idea first. Thanks "Saturday Morning Breakfast Cereal." Besides making me laugh, you've now gotten me to write a blog post. (I'm sure this idea has been presented before that, but I'm not about to go 'research' the history of it. Independently arrived at by at least me and SMBC. All I need to know. Hurrahs for not being in academia).
So read the comic if you haven't already.
I've seriously been meaning to blog that for months. A slightly different take, naturally, but the idea of Jesus as fundamentally presenting an alternative solution to the prisoner's dilemma. In our pro-capitalist society we've come to accept the turncoat solution to the prisoner's dilemma as the correct one. It's the reason it's a famous thought experiment. Game theory says you should rat out your compatriot. But I find it interesting how many ways Jesus comes out against that solution.
'Turn the other cheek' is one of the fundamental lessons Jesus taught, and I feel like Christianity has lost sight of that. The impulse to fight evil is strong. It's the natural reaction. Half of the solutions to the prisoner's dilemma involve the party trying to do good losing out to the party being mean. So the solution where you're both mean, and thus can't take advantage of each other is tempting. But ultimately, isn't the cooperative solution best?
"It's easier for a camel to go through the eye of a needle than for a rich man to enter the Kingdom of God." I know this religion can be a bit critical of science, but you don't need very advanced physics to understand that won't happen. Even the poor in America are proportionally fairly rich. I have to wonder if it bothers Americans that Jesus has essentially told us we aren't getting in to heaven. The devout must assume they'll push the camel through. Good luck.
But again, from the Prisoner's Dilemma standpoint, it makes sense. You don't get rich in the cooperative column. You get taken advantage of. You get fleeced, your money taken. You build up wealth in the turncoat column. That's the basis of game theory, of objectivism. The idea that things will go well for everyone if you always look out just for yourself. But Jesus' point, and it's a valuable one, is that looking at your own self gain is not the point. Look at the global good. That's what you should optimize.
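For anyone who hasn't seen the game spelled out, here's the standard payoff table in a few lines of Python (the numbers are the usual textbook convention, years in prison, so lower is better). Whatever the other player does, betraying improves your own outcome, which is exactly the game-theoretic advice; the lowest combined sentence, though, comes from both staying silent.

```python
# Standard prisoner's dilemma payoffs: (my years, their years). Lower is better.
PAYOFFS = {
    ("silent", "silent"): (1, 1),
    ("silent", "betray"): (10, 0),
    ("betray", "silent"): (0, 10),
    ("betray", "betray"): (5, 5),
}

for me in ("silent", "betray"):
    for them in ("silent", "betray"):
        mine, theirs = PAYOFFS[(me, them)]
        print(f"I {me:6}, they {them:6}: my sentence {mine:2}, total {mine + theirs:2}")

# Betraying always shortens *my* sentence (0 < 1 and 5 < 10),
# but the smallest total sentence (2 years) comes from mutual silence.
```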
If someone hungry steals your bread, isn't that a net positive in the world? Now two people can eat. The concern is always that this line of thinking leads to the immoral getting an advantage. People so often oppose social safety nets because some percentage might take advantage of them and not work. So? Just because someone's bad, why are they less deserving of comforts and rewards?
It's a weird way to think, but when enough people think that way things are better for everyone. If you can accept good things happening to bad people, and bad things happening to good people, you can see to it that better things happen to people in general. I find it interesting to think of heaven and hell not as brimstone and harps waiting on the other side of the graveyard, but as the future. If we're good, our children can live in peace and sustenance. If we're bad, they'll have to eke out a living from a scourged shell of a planet. And how do we get to the heaven-as-a-good-future? By ignoring our personal desires, ignoring the overwhelming urge to mete out justice, and just doing best by the world. Two thousand years later I think we're still misunderstanding Jesus. Do unto your neighbor as you would have him do unto you.
Sunday, June 13, 2010
Putting the fun in fungible
Money is a classic example of a fungible resource. That means you don't particularly care which twenty dollar bill is yours, or even if you've got two tens instead. Houses are not fungible: you can't just arbitrarily trade two houses and expect both participants to be happy. If I lend someone five dollars, I don't expect them to retrieve that particular bill when they pay me back. If I lend them "Catcher in the Rye", though, I don't expect to be returned "Alice in Wonderland" or even another, heavily highlighted copy.
This is all nice, and helps the economy run smoothly. But the fungible nature of money can make thinking about money deceptive. Five dollars is five dollars, but five dollars spent in one place is not equivalent to five dollars spent elsewhere. The money is fungible, the act of spending money is not.
Think about a decision to send a spacecraft to the moon, at a cost of perhaps a billion dollars. At that price, we could probably buy every human in the world a serving of rice for a week. But does that mean the spacecraft took 50 billion meals of rice away from the world? No, of course not. Rice production and spacecraft production are such disparate worlds that money spent in one shouldn't really affect the other very significantly.
What did spending the billion dollars on the spacecraft cause? Well, oil, steel and electronic elements were probably the biggest physical expenditures. Thus a little less steel might exist for sale on the market. That in turn may have inched its price up just a fraction, and perhaps a few people in the long run ended up holding off buying a new car. Electricity was certainly used up heating and lighting factories and offices, powering computers where the design occurred. Because electricity is also fungible it's hard to point to one particular thing the spacecraft effort impacted. But perhaps higher energy costs ultimately caused a handful of families to keep the thermostat a little lower that winter. Resources like electricity and steel have limited availability, so in one sense choosing to use them for one task removes their availability for another task. But you can't convert rice into steel to meet one need with another.
Which is all a simplification, of course. Oil use could increase demand for alternative fuel sources like ethanol which could use up corn, which could increase demand for rice. Or the electricity used by the offices might just come out of excess capacity they had just in case, increasing efficiency in the system and thus being 'free'.
When you've got a billion dollar project, the priciest part is usually the humans. Thus the investment in a spacecraft is less significant in terms of the allocation of steel than it is in the allocation of minds. The money goes to scientists to design and build this craft. Perhaps it'll improve employment rates a tiny bit, but it seems likely most of the people involved in building a spacecraft could find other work. So the project's biggest impact might be taking minds away from other tasks.
One line of thinking holds that even though none of the scientists would have been rice farmers, they might have been, say, a manager at an industrial factory. So now a line worker becomes the manager, opening up a position a farmer's son fills instead of working the farm. But even if there is some truth to this, every subsequent link lessens the impact.
People are the ultimate in non-fungible resources. We're all different. Perhaps one of the scientists involved in the spacecraft effort would have worked designing cars instead, pushing fuel efficiency up a tiny notch. Or maybe another would have been a manager, and done a poor job at it, causing rippling effects in stock prices and resource usage. The existence of a billion dollars worth of work for aerospace engineers might influence the next generation in college, producing more in that field and less in, say, astronomy. That might influence very gently our understanding of the cosmos. And it might turn out to be an inefficient allocation if fewer spaceships end up being built in the next generation.
And we could do the same exercise for buying fifty billion servings' worth of rice. How does land usage change? Nitrogen usage? Transportation of the rice? Who do you need to farm and distribute the rice, and what would they have done otherwise? Although each project costs $1 billion, they aren't fungible in how they impact the world.
It's easy to just visualize the economy as a ledger. A dollar is spent here, fifty dollars of oil are burnt there. Because dollars are such an easy abstraction to think about, it becomes tempting to equate things with the same cost. A second car or a vacation can be equivalent decisions in terms of cost, but they can ripple through the economy in very different ways.
You'll never predict all the ways an individual economic action will ripple through time. But it is important to keep in mind when thinking about money. People often complain about spending money on NASA when people are hungry in the world. But the two aren't trivially connected. Cutting NASA doesn't end world hunger. Similarly, the recent financial crisis has revealed that a company's bottom line isn't necessarily proportional to its positive impact on the world. Money is an invaluable abstraction, but it's good to sometimes take a step back and think about the millions of moving parts involved when you buy a pack of gum from the store.
Wednesday, May 12, 2010
How To Teach Machines Creativity
Kasparov said of Deep Blue, after losing to it at chess, that he sometimes "saw deep intelligence and creativity in the machine's moves." Partially, this was an accusation that the IBM programmers weren't playing fairly, but it was fundamentally a testament to the quality of play Deep Blue could bring to chess.
Of course, it wasn't really exhibiting creativity. The biggest advantage a chess playing computer has is simply the depth it can think ahead. It chose its moves because it knew they'd prevent Kasparov from checkmating it in the foreseeable future, not from any deep insight into the strategies of the board.
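To make "the depth it can think ahead" concrete, here's a minimal game-tree search in Python. It's a plain negamax over the toy game of Nim rather than anything resembling Deep Blue's actual engine, but the effect is the same: the shallow searcher sees nothing special, while the deeper one finds the forced win.

```python
# Negamax over one-pile Nim (take 1-3 stones; whoever takes the last stone wins).
def moves(pile):
    return [take for take in (1, 2, 3) if take <= pile]

def negamax(pile, depth):
    """Score for the player to move: +1 forced win, -1 forced loss, 0 unknown."""
    if pile == 0:
        return -1          # opponent just took the last stone
    if depth == 0:
        return 0           # search horizon reached
    return max(-negamax(pile - take, depth - 1) for take in moves(pile))

def best_move(pile, depth):
    return max(moves(pile), key=lambda take: -negamax(pile - take, depth - 1))

for depth in (1, 3, 7):    # only the deeper search spots the winning move (take 2)
    print(depth, best_move(10, depth), negamax(10, depth))
```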
I heard another anecdote from a professor, once, about an experiment in robotic movement. A (computer) mouse was tied to the back of a small, Roomba-like robot. The goal was to train it to avoid walls. Whenever the mouse rolled forward, the robot would get a 'reward'. Whenever it rolled backwards, it would be 'penalized'. These were just numbers being plugged into an algorithm, but they acted analogously to the dopamine in your brain. The robot was supposed to learn to navigate in such a way that it wouldn't have to back up, something like traveling in a circle through the room. The researchers left the robot running overnight to learn.
When they returned the next morning they were surprised to find the robot in a corner, rocking back and forth. Not the expected result. Some investigation revealed that the robot had found its way onto a rug. Whenever it moved backwards, the rug's bristles would jam up the mouse wheel. It had freedom to roll in the other direction, however. The rug let it avoid the negative feedback, circumventing the expected rules and giving it a constant euphoria as it rocked back and forth in place.
The solution seems creative. Really, though, it was just dumb luck. The robot didn't reason out that the rug might help, it blindly ended up on the rug, where it happened to take a couple simple steps that seemed positive. It tried a few repetitions, decided this was the best it was going to find, and went with it.
So how could we incorporate a more 'real' form of creativity into AI? I believe it's all about explicitly measuring creativity. Deep Blue was 'rewarded' based on victories and losses. The robot was rewarded for moving forward. To create a creative machine, you have to reward creativity.
Let's use music as an example. What if we wanted to train a computer to compose music? First: a quick machine learning lesson:
Genetic Algorithms evolve a solution to a problem. For the music example, you'd take thousands of completely random musical scores, and listen to them (or ideally, evaluate them mathematically to start). They'd all sound bad, but hopefully a couple have some redeeming feature: a rhythm, or some short snippet that sounded good. After ordering them by quality, you take the best and create a new generation, just like life does. There's a few mutations thrown in (changing notes), but mostly you use genetic crossover: take two of the musical scores, cut them up and interleave them.
If you repeat this, generation after generation, the quality of the music will increase. Short snippets of good music greatly increase the odds an individual song will pass its genetic (musical?) material to the next generation, so those good snippets grow in number. Eventually music evolves.
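Here's what that loop looks like in code, as a minimal sketch rather than a real composition system: a 'score' is just a list of pitches, and a crude preference for small melodic steps stands in for the human listener.

```python
# Bare-bones genetic algorithm: evolve note lists toward "smoother" melodies.
import random

LENGTH, POP_SIZE, GENERATIONS, MUTATION_RATE = 16, 200, 300, 0.05

def random_score():
    return [random.randint(60, 72) for _ in range(LENGTH)]   # MIDI-ish pitches

def fitness(score):
    # Stand-in for a listener: reward small steps, penalize big leaps.
    return -sum(abs(a - b) for a, b in zip(score, score[1:]))

def crossover(a, b):
    cut = random.randint(1, LENGTH - 1)
    return a[:cut] + b[cut:]

def mutate(score):
    return [random.randint(60, 72) if random.random() < MUTATION_RATE else note
            for note in score]

population = [random_score() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)     # order by quality
    parents = population[:POP_SIZE // 2]           # the best get to reproduce
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

print(max(population, key=fitness))                # the evolved "melody"
```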
But of course, this isn't creative. You end up with a population of songs at the end that all sound alike, as they all share most of their genetic material now. The process is creative like evolution is creative (are animals art?) but you don't have an artificial mind that can create arbitrary songs.
So we turn our attention to genetic programming. This still uses the same life-based mechanisms to evolve a population, but now you're evolving a population of computer programs. You write random computer programs, in this case programs that take some input and output a song. At first none of the computer programs do what you want (or anything, usually. Random code isn't terribly useful), but evolution still kicks in. With a large enough population, enough generations, and somebody listening to the god-awful racket this would produce, you'd eventually end up with a computer composer. Would it be creative?
Probably not. My prediction is that while the eventual eComposer would produce passable music, it would be very self-derivative. Once it finds a formula that works, why deviate? Deviating from your best stuff is only likely to get you a lower evaluation, and then you don't get any children. Best to play it safe, keep pumping out the same bass line, and keep your bytes replicating.
So what do you do? You evaluate each composer not just on quality, but on their creativity. You don't just ask for their best piece, you ask for 10 pieces, and make sure they're all good enough and different enough from each other. We usually only care about the best answer from our machine buddies, but I think that's wrong. To be right consistently, to be right when things change dramatically, you need to be able to come up with lots of different candidate solutions. It seems obvious in terms of music, but I expect this would help in lots of different machine learning domains.
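One standard way to bake that into the reward, continuing the GA sketch above, is fitness sharing: discount a score's raw quality by how much it crowds the rest of the population. This isn't exactly the ten-pieces scheme described above, just a well-known diversity trick pointing in the same direction.

```python
# Fitness sharing: near-duplicates drag each other's scores down.
def similarity(a, b):
    # Fraction of positions where two scores play the exact same note.
    return sum(x == y for x, y in zip(a, b)) / len(a)

def shared_fitness(score, population, weight=2.0):
    crowding = sum(similarity(score, other) for other in population)
    return fitness(score) - weight * crowding      # fitness() from the GA above

# Selection then sorts on shared_fitness instead of raw fitness:
# population.sort(key=lambda s: shared_fitness(s, population), reverse=True)
```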
Were I in grad school, I'd write a paper about this. I'd throw together some simple problems, and see how penalizing for lack of depth in suggested solutions changes the long term performance of GP-produced programs. I'd draw some graphs, write up a conclusion suggesting further research, and find some journal to print it. But instead, not being a grad student, I've decided to publish the idea (sans research) via blog. When/If I do end up back in school, I can look back at my blog and start pumping out the papers. Or alternatively, Internet, you're welcome to do the work and take the credit if you find this first. Especially if you're a robot reading this as you consume all of humanity's written word (I'm looking at you, Google)...
Of course, it wasn't really exhibiting creativity. The biggest advantage a chess playing computer has is simply the depth it can think ahead. It chose its moves because it knew they'd prevent Kasparov from checkmating it in the foreseeable future, not from any deep insight into the strategies of the board.
I heard another anecdote from a professor, once, about an experiment in robotic movement. A (computer) mouse was tied to the back of a small, roomba like robot. The goal was to train it to avoid walls. Whenever the mouse rolled forward, the robot would get a 'reward'. Whenever it rolled backwards, it would be 'penalized'. These were just numbers being plugged into an algorithm, but they acted analogously to the dopamine in your brain. The robot was supposed to learn to navigate in such a way that it wouldn't have to back up, something like traveling in a circle through the room. The researchers left the robot running overnight to learn.
When they returned the next morning they were surprised to find the robot in a corner, rocking back and forth. Not the expected result. Some investigation revealed that the robot had found its way onto a rug. Whenever it moved backwards, the rug's bristles would jam up the mouse wheel. It had freedom to roll in the other direction, however. The rug let it avoid the negative feedback, circumventing the expected rules and giving it a constant euphoria as it rocked back and forth in place.
The solution seems creative. Really, though, it was just dumb luck. The robot didn't reason out that the rug might help, it blindly ended up on the rug, where it happened to take a couple simple steps that seemed positive. It tried a few repetitions, decided this was the best it was going to find, and went with it.
So how could we incorporate a more 'real' form of creativity into AI? I believe it's all about explicitly measuring creativity. Big Blue was 'rewarded' based on victories and losses. The robot was rewarded for moving forward. To create a creative machine, you have to reward creativity.
Let's use music as an example. What if we wanted to train a computer to compose music? First: a quick machine learning lesson:
Genetic Algorithms evolve a solution to a problem. For the music example, you'd take thousands of completely random musical scores, and listen to them (or ideally, evaluate them mathematically to start). They'd all sound bad, but hopefully a couple have some redeeming feature: a rhythm, or some short snippet that sounded good. After ordering them by quality, you take the best and create a new generation, just like life does. There's a few mutations thrown in (changing notes), but mostly you use genetic crossover: take two of the musical scores, cut them up and interleave them.
If you repeat this, generation after generation, the quality of the music will increase. Short snippets of good music greatly increase the odds an individual song will pass its genetic (musical?) material to the next generation, so those good snippets grow in number. Eventually music evolves.
But of course, this isn't creative. You end up with a population of songs at the end that all sound alike, as they all share most of their genetic material now. The process is creative like evolution is creative (are animals art?) but you don't have an artificial mind that can create arbitrary songs.
So we turn our attention to genetic programming. This still uses the same life-based mechanisms to evolve a population, but now you're evolving a population of computer programs. You write random computer programs, in this case programs that take some input and output a song. At first none of the computer programs do what you want (or anything, usually. Random code isn't terribly useful), but evolution still kicks in. With a large enough population, enough generations, and somebody listening to the god-awful racket this would produce, you'd eventually end up with a computer composer. Would it be creative?
Probably not. My prediction is that while the eventual eComposer would produce passable music, it would be very self-derivative. Once it finds a formula that works, why deviate? Deviating from your best stuff is only likely to get you a lower evaluation, and then you don't get any children. Best to play it safe, keep pumping out the same bass line, and keep your bytes replicating.
So what do you do? You evaluate each composer not just on quality, but on their creativity. You don't just ask for their best piece, you ask for 10 pieces, and make sure they're all good enough and different enough from each other. We usually only care about the best answer from our machine buddies, but I think that's wrong. To be right consistently, to be right when things change dramatically, you need to be able to come up with lots of different candidate solutions. It seems obvious in terms of music, but I expect this would help in lots of different machine learning domains.
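As a rough sketch of what that could look like (my own formulation, not an established algorithm), you could ask each composer for several pieces and score it on both the quality of its weakest piece and how different the pieces are from one another:

```python
def piece_distance(a, b):
    """Crude dissimilarity between two scores: the fraction of positions
    where the notes differ. Anything smarter would also work."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def creative_fitness(composer, score_fitness, n_pieces=10, diversity_weight=5.0):
    """Reward quality *and* spread. `composer(i)` is assumed to return the
    composer's i-th piece as a list of notes; the weight is a free parameter."""
    pieces = [composer(i) for i in range(n_pieces)]
    quality = min(score_fitness(p) for p in pieces)   # every piece must hold up
    pairs = [(a, b) for i, a in enumerate(pieces) for b in pieces[i + 1:]]
    diversity = sum(piece_distance(a, b) for a, b in pairs) / len(pairs)
    return quality + diversity_weight * diversity
```

A composer that keeps reprinting its one greatest hit now scores no better than that single piece, while one that can produce ten decent, distinct pieces pulls ahead.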
Were I in grad school, I'd write a paper about this. I'd throw together some simple problems and see how penalizing a lack of depth in suggested solutions changes the long-term performance of GP-produced programs. I'd draw some graphs, write up a conclusion suggesting further research, and find some journal to print it. But instead, not being a grad student, I've decided to publish the idea (sans research) via blog. When/if I do end up back in school, I can look back at my blog and start pumping out the papers. Or alternatively, Internet, you're welcome to do the work and take the credit if you find this first. Especially if you're a robot reading this as you consume all of humanity's written word (I'm looking at you, Google)...
Thursday, May 6, 2010
Am I You?
A very early memory of mine is staring, head back, up at a tree. I remember, in a fuzzy way, the dichotomy between the detail in each individual leaf and the whole, indivisible mass of green. I was being babysat by an aunt; my cousin was in his room for some transgression. I think it was summer: I remember being impressed by the warmth and beauty around me.
It's not a remarkable memory for the vista; I've seen that tree hundreds of times. Rather, it stayed with me because it was the first time my mind stumbled across a mystery that's been with me ever since. I was outside, observing manicured nature. My cousin was in his room, probably fuming at his mother. But why was my experience of the world the one looking at the tree? My cousin must certainly have this same first-person experience, feeling one individual's pain, beholding one individual's sight. I knew I was me; it's basically a tautology. I didn't doubt I was the individual named Paul Barba, but it felt so arbitrary that I shouldn't be my cousin, or my neighbor, or any of 6 billion other individuals.
It's a hard question to ask, because there's a simple question obscuring a deeper one. I am me, because I have to be somebody. That's just how the world works: each person is an individual. There's no explanation necessary, and any perceived asymmetry is just a lack of perspective on my part. But at the same time, the huge divide between Paul's experiences and everyone else's, from my point of view, was troubling as a child.
Ultimately, I think the question is one of consciousness. That was the asymmetry in the world I was detecting, but too young to really understand. Everything else is physical. My cousin, the tree: they were all physical objects describable with reference to atomic patterns. But what about this perception of the world I had? This self-understanding mind, conscious of the passage of time around it? We still don't know how to understand it, really, in terms of electrons and photons. Maybe it's the work of a soul. Maybe solipsism is truth, and there is a fundamental divide between me and everyone else. But I'm inclined to believe it's ultimately a matter of matter and energy.
I was Paul at age 7. I'm Paul now, and if it's meant to be I'll be Paul 50 years hence. There's this continuous stream of awareness that links the experiences of that child staring at a tree with the young adult typing on a laptop. I don't doubt that I was experiencing that tree.
But so much has changed: my brain has grown and reformed its patterns since then. Neuroplasticity tells us that the patterns in our brains are constantly changing. Most of my cells have died since then, replaced by new generations. You could argue that twin brothers at 7 are more similar to each other than the 7-year-old and 40-year-old versions of the same person are. But the 40-year-old and the 7-year-old share a linked experience of being one person...
Which leads me to believe that it's just the uninterrupted existence of Paul that leads me to connect my current experiences with that child's experiences. I've got memories, but those aren't the same as a real experience. Seven-year-old Paul experienced consciousness, 24-year-old Paul is experiencing consciousness: everybody is. Memories, opinions, the chemicals rushing past our neurons are just incidentals. My perception of me, my intelligence and opinions, is less fundamental than this experience of looking out of two eyes: that's what I relate to as me, more deeply than the realities of a single moment.
Which all, ultimately, leads me to believe that what I would most deeply connect to as myself, the core of being, that thing experiencing emotions and visions and the pinch of the cheeks during a smile and the wind blowing through your hair: that wasn't limited to the boy looking at a tree. Whatever natural phenomenon leads me to perceive the world instead of blindly reacting to physical laws is at work in everybody. Thus while at one level I am Paul, I also believe I'm everyone else, everyone who will exist, at least until humanity dies off or evolves past my experiences. It doesn't feel true, at a level. I still feel, and am, closed off from every other experience. But that division of the world into individual pockets of consciousness doesn't mean they aren't all, in an important way, the same. When I die, these thoughts will be gone, these experiences forgotten, but experiencing and thinking will persist. What I thought was my own first-person experience will continue peering out some billions of pairs of eyes.
Sunday, May 2, 2010
A Few Thoughts on Creating
I find it much easier to start a project than to see it through. I don't think anybody maintains enthusiasm through the whole of a large undertaking, but it seems to be more of a shortcoming for me than for most. It's got to have something to do with my preference for ideas: I like the new. I like working out the interactions in a complex system. I like mapping a story arc, or drawing a mock-up. Once that part is done, though, and it's time to turn ideas into realities, I start losing focus. The siren songs of new ideas beckon.
Hence two weeks without a new post.
It's easy to be productive when you've got the enthusiasm for it. Some blog posts stream effortlessly through my fingers and into the electronic aether. But sometimes it has to be forced out. It's a phenomenon I don't really understand. It's easy to run out of creative juices in any activity. Sometimes you just stare at a blank sheet of paper, or type away at code despite your brain pleading for television or a walk. Getting away from it, playing a video game or reading a book, can help. But consuming media just as often leaves me more lethargic and uninterested than ever.
I feel obligated to create things in a way I don't exactly understand. Having a talent feels like an injunction to use it. The ephemeral nature of life pushes and pulls in this: in the brief time I have on Earth, shouldn't I do something impressive with it? But at the same time another voice asks: with the brief time I have, shouldn't I be enjoying life?
Which is perhaps the core of it: creative activities should be enjoyable. The brain is a massively parallel machine that makes countless evaluations in the process of creating. What topics to move on to? Which to cut? What's the most effective placement of the object in a sentence? Are there too many adverbs? But without enthusiasm, threads of the brain get distracted or quiet. Less brain mass is focused on the creation, and the creation suffers for it. And it seems that maintaining this divided attention, the conscious realm pushing out sentences but the unconscious dallying on other topics, drains us.
My brain is grasping for a concluding idea, something to wrap this all up, some larger context these thoughts fit into. But there isn't anything obvious coming to mind. Creating is hard. Motivation is hard. Thinking about either too deeply gets into weird existential questions that haven't ever been very helpful to me in creating the motivation. But perhaps the key is that while the creative tides ebb, they rise as well. My motivation for blogging, for writing, for programming has been weak lately, but that's just one phase in a larger creative cycle. I'm blogging again, after all.
Sunday, April 18, 2010
Thou Shalt Not Infringe Intellectual Properties
I was at the Lupa Zoo last weekend, which was a lot of fun. It's always a pleasure to see and feed exotic animals. Stationed throughout the zoo were boxes with bags of peanuts and crackers. There was a little sign asking for $2 or $5 (depending on size), and informing us "Don't Steal." The honor system was at work.
What's interesting to me is that the food was ultimately destined for the animals' bellies, whether via my hand or some employee's. In a sense, we weren't buying the physical food, we were buying the experience of feeding the animals. The gift shop wasn't based on the honor system: there would be an actual, real expense to stealing a stuffed animal. The profit from the feed bags certainly helps keep the zoo going, but if an individual stole a bag who wouldn't have purchased one otherwise, the zoo doesn't really lose anything.
These economics are the same as noncommercial copyright violations (AKA piracy). And seeing the parallels has really convinced me that the honor system is best in both situations. The new Intellectual Property Czar had a request for comments recently, and the media industries weighed in with their hopes for our future: censorship of the Internet, spyware on our computers to detect any unethical behavior, federal cops enforcing these edicts. Free wifi hotspots will be a thing of the past. YouTube may be as well. For one thing, this opens us up to abuses (will the ability to freely spy on everything any American does with his computer be limited to downloading music? Australian censorship is already being used to prevent access to information about euthanasia). But even if you trust our government, it legitimizes the actions of nations like Iran and China, who use this sort of information to capture and torture human rights activists.
Groups like the RIAA and MPAA like to present our options as either accepting censorship and surveillance, or just letting our entertainment industries die. But the idea that laws are the only way to influence behavior is a scary one. If I hadn't paid for the bag of feed, I wouldn't have been fined $2 million; I've simply been taught, absent any law, that stealing is wrong. Cigarettes are a bad choice, but we let people make that choice. Plenty feel premarital sex, or at the very least adultery, is wrong, but we don't punish either of those with fines or jail time. Adultery in particular seems far more hurtful to another human being than downloading music, but we don't legislate against it. Why?
Because we previously understood that it isn't the courts' role to dish out vengeance for every little wrong. Especially when an action occurs between two consenting adults (as piracy does), the violations to our freedoms necessary to enforce the law are far too burdensome to be worthwhile. Instead, we have another tactic: we teach children the difference between right and wrong. How about instead of spying, censorship and lawsuits, we just teach our children how buying things lets the producers keep producing? And if the occasional freeloader declines, or if a family struggling to feed themselves takes a movie they couldn't afford, or we download a movie because our original DVD has broken, who cares? Of the Ten Commandments, I only count three we legislate. Do business models deserve a place above the Ten Commandments?
I recommend we all acknowledge piracy is bad, and then give up on the hunt to eradicate it from the earth.
Sunday, April 11, 2010
The iPhone: A Programmer's View
Apple has been in the news a lot lately with the release of the iPad, details of the next iteration of the iPhone OS, a patent lawsuit against HTC, and some notable changes to their developer agreements. Right now, they dominate the tablet and computer-like smart-phone market, and it looks like they're trying to get away with the same sort of behavior Microsoft pulled against them. Unfortunately I think they've learned the wrong lessons. The OS industry in the 1980's didn't have the depth of competition the smart-phone market has today. The line between phones, tablets and computers is blurring, which will make it harder to monopolize any one domain. And Microsoft succeeded in its bullying because it had buy-in from two crucial groups: the consumer, and the developer. Apple is doing well with the former, but antagonizing the latter.
Windows has over 90% market share and has done even better in the past. Why? A large part of it is applications. Almost any piece of software you can find will run on Windows. If you like video games, or need a particular piece of business software, Windows will run it. Cross-platform development has improved recently, but even today you'll find many more Windows exclusives than on any other system.
So the consumers go where the software is, and that in turn drives the programmers to support Windows. With 90% of the market, your software doesn't stand much chance if it won't run on Windows. You end up with a feedback loop: consumers go where the apps are, the apps are written for the OS the consumer uses. If you cut either side, you're in danger. For all its failings, Microsoft did well enough keeping the consumer content, and did an excellent job of giving developers what they wanted.
Microsoft packaged QBasic with Windows until fairly recently; it was my first exposure to programming. They've developed one of the best development environments out there, and give out a fully functional free version. You can write software for Windows without spending a penny, and Microsoft demands no licensing fees for you to sell it.
Contrast this with Apple. To release an iPhone application you need to pay $99 up front, then give Apple a cut of your profits. After a great deal of effort producing the application, Apple needs to approve it, and there are many tales on the internet of benign apps getting rejected. If Apple doesn't like it for any reason, your development effort is sunk. While you can program in literally hundreds of languages for Windows (or even write your own), Apple now restricts you to 3.
When Apple rejected Flash on the iPhone/iPad, I was surprised. But the action was understandable: Flash would be a hole in the App Store model, another way of distributing content without Apple's approval or (more cynically) without Apple getting its cut. Adobe responded as many other programming languages have: by writing a compiler that turns Flash code into iPhone code. You program in a human-readable, high-level language. A compiler turns this into the 1's and 0's the computer can read. Each operating system has its own dialect of 1's and 0's, but there's no reason you can't compile into any of them. This seemed like a reasonable solution: the apps would now be indistinguishable from any other app. They would go through Apple's store, through its approval process, and Apple would get its cut. Because the 1's and 0's are essentially the same no matter what language they were originally written in, there would no longer be any obvious difference between a Flash app and a C app.
Apple has said no. The latest iteration of the agreement developers need to sign to write for the iPhone or iPad has been updated: you must write your programs in C, C++ or Objective-C. Objective-C, born in 1986, is used extensively on Macs and iPhones, but almost nowhere else. C, born in 1972, was once all the rage; it's still heavily used for systems and embedded work, but it's rarely the first choice for application development these days. C++, born in 1979, is a very popular (but complicated) language. The youngest is as old as me, and these three represent just a small slice of the language paradigms that exist. Most programmers have some language they like best, and it's usually not any of these anymore. These are all slow languages to develop in: newer ones let you produce working code much faster. And given all the software already written in other languages, there are lots of programs that could have easily been ported to the iPad, but now won't be.
The idea is to force developers to commit exclusively to the Apple universe. With a modern language you could easily develop for every major smartphone and every major tablet at once. Apple seems to hope that by taking away these cross-platform choices, developers will give up on Android, or Windows, and build iPhone exclusives. But I highly doubt that'll happen, at least not with the sort of developers you want to attract. By taking away languages programmers want to write in, by taking away the ability to easily port something you wrote for Windows to the iPad, and by all the other anti-developer actions Apple has taken, I suspect most will just write for something else. Apple has gotten consumer buy-in, but if the developers leave, the consumers will too. Will you still want an iPhone if nobody's writing apps for it? Apple's throwing its weight around because it has an early lead, which worked well for Microsoft. But Microsoft never used its development community as fodder for its corporate battles.
Perhaps times have changed. Perhaps there are enough developers out there that you can push away most of the community and still have all the software you need. As programs continue migrating into web-apps, maybe the battle over natively running apps will stop mattering so much. But I've got a feeling pundits will be pointing at this action in the years to come as the moment the iPhone jumped the shark.
Saturday, April 3, 2010
That's no space station...it's a moon!
I posted previously about a future energy source: solar panels in space. Without an atmosphere to get in the way, and without that whole "day and night" thing, solar panels can easily absorb 300% of the energy they would on Earth. Because the energy would be constant, we could avoid building wasteful methods of storing energy for night or cloudy days. Overall, it's a very promising technology.
But there are downsides: specifically, cost. Shooting things into space is not cheap. The best figure I could find puts bringing a US ton of matter into space at just under $10m. That would decline if we sent more things into space: it's far more expensive to build individual shuttles than to mass-produce the launching mechanisms. But even at a quarter the cost, the economics of these space panels are questionable. You might not get as much sunlight on Earth, but space travel is a pricey proposition. Thus while these space panels may someday form a viable energy source, we're probably not ready yet.
But there's a better option, I've realized. Space solar panels work so well because of the lack of an atmosphere: well, the moon lacks one as well. Solar panels are usually constructed of silicon, which turns out to be the second most prevalent element in the moon's crust. Instead of building solar panels here on Earth and tossing them out of our gravity well, we could just construct the solar panels on the moon. This turns it from a question of cheap space travel to a question of extraterrestrial construction. Any complicated machinery would be constructed here on Earth, then rocketed to the moon. There, cousins of the Mars rovers would shovel moon dust into little self-contained factories. Solar panels would come out, be laid in grids across the lunar surface, and hooked up to a microwave generator that would beam plentiful energy back to Earth. We'd have to keep sending new robots and factories as they break (at least in the short term), but beyond that the solar panel fields could grow and grow and grow. Plentiful energy for all!
And it would, I suspect, have to be for all. Space is one thing, but moon-based construction is going to be a thorny political issue. Who owns the land on the moon? The first person to start using it? And would it be rational for America (if we're the ones building the Lunar Solar Fields) to switch to a pure solar energy society while China continues burning coal? No, I suspect it makes far more sense to get everybody over to this climate-friendly energy source as soon as possible. It would require an unprecedented degree of global cooperation, which worries me. But if we could find an agreeable way to distribute the energy, we could move over to a vastly more environmentally friendly energy source in the very near future. It's a tricky engineering problem: constructing factories in an inhospitable environment with minimal direct human interaction. But it's not something that strikes me as beyond our current means. It shouldn't require terribly advanced robotics, or major advances in solar panel construction. Someday you may look up at the moon and see a little splotch of black, and in the following decades that black would grow until our great-grandchildren look up at the sky and can dimly make out a great spherical solar panel, orbiting the planet, providing energy more plentiful than anything we've ever known.
Monday, March 15, 2010
Bangs, Bounces, Freezes, Crunches
The fabric of space-time is expanding in every direction. All the stars in the night sky are receding from our view: the more distant the star, the faster the retreat. Someday the night sky will be black, as even the closest star (if any still burn on) is racing away faster than the light that brings us its news. This is not a violation of Einstein's prohibition against moving faster than light: that rule only applies to matter and energy, not to space-time itself.
What's going on? Empty space is growing around us. I've written about this before, using the analogy of a universe on a ripple spreading out across a pond. Physicists aren't sure what's causing the expansion of the universe, so they give it the mysterious moniker "dark energy". If there really are mysterious bearers of this force, dark-trons you might call them, they account for 74% of the energy/mass in the universe.
Physicists used to believe in a Big Crunch (some still do, I'm sure) where the Universe gets pulled back together into a point, a reverse Big Bang. This led naturally to the Big Bounce theory, where immediately after the Big Crunch you've got a new Big Bang. The universe would cycle endlessly (although possibly slowly winding down...), and life would begin again and again and again.
I recall wondering about this as a child, and what it meant for humanity's future. It seemed to mark a fixed end to our days. No matter how successful our civilization is, no matter how many star systems we colonize, it would face extinction in the Big Crunch. Sure, a new universe might spring to life, but how could we get there? You can't outrun space shrinking, as there's nowhere else to run to. There could be hope, retreating out of space time for a few million years, but such a task would require entirely unknown laws of physics. From a relativistic standpoint we're doomed in the Big Crunch model.
But then physicists observed that not only is the universe expanding, it's expanding at an ever accelerating rate. It shows no sign of pulling back in for a Big Crunch. So another theory for the end of days took center stage: the Big Freeze. In an expanding Universe, there is less and less energy per unit volume. Someday a single photon zipping across an empty expanse that once housed our solar system would be an unusually warm region of the Universe. Again, not much hope for mankind: this story doesn't end with perpetual rebirths of the Universe we could hitch a ride onto; it ends with order decaying into a vast expanse of nothingness.
So humanity must die. If it comforts you, we're probably talking billions of years of time. And anyways, worrying about humanity's end in terms of the universe's death is a bit like declining dessert on the Titanic...but then, there's another theory worth considering...
String Theory has given rise to mathematical models that suggest our universe may not be all there is. In these physical theories, we live on a brane (derived from membrane), a self-contained universe floating in a larger reality. Reusing the metaphor from an earlier post, we would be analogous to a civilization living on a ripple in a lake. There are other ripples; there may even be other lakes. Spacetime would be a material we live on, but the energy for the Big Bang would have come from a collision with another sheet of reality.
And suddenly, there's hope again. If space is large enough, it may contain many separate universes, different realities created by different big bangs. These would be unimaginably far away, but if you sit in your spaceship for sufficient aeons you could visit. And even if there's just one, given enough time a new collision will occur. Really, this could happen at any moment: you never know when the space around you is suddenly going to erupt with the energy of trillions of suns. It's amazing how successful physics is at introducing new things for us to worry about (quantum vacuum collapse is a fun one for its combination of utter devastation and quantum weirdness).
So what's this all mean for us? If we can survive sufficiently long into the Big Freeze, and then survive a universe being born around us, we can keep going as a civilization indefinitely. It seems like a harsh journey for our bodies, but if we programmed the patterns of our DNA into stronger matter, it could recreate us once the new universe is born and has grown up into a more hospitable place. Ideally nanobots would survive the high-energy wave as the new universe passes over them, but if that won't work we could leave patterns of energy to get swept up in the new universe. These would interact at a quantum level with the new matter, so it evolves in a pattern we wish, eventually recreating some simple robot tasked with rebuilding humanity.
As our understanding of physics improves I'll keep you up to date, but the current prognosis is that an eternal civilization is possible (however catastrophically unlikely).
Wednesday, March 10, 2010
Stone, Bronze, Iron, Steel...
I find it interesting that we use materials to name the earlier reaches of time: the Iron Age, the Bronze Age... Plenty of other variables could be used to divide history, but the materials used to build technology are a very significant, and in particular visible, choice. What would you call the current age? We once went by the Nuclear Age, but that was far from a revolutionary change. Nuclear energy is just another power source, anonymous through our electric grid. Perhaps fission will deserve an age, driving us across the galaxy, but that's all in the future.
Nuclear points the way to electricity, and the selection of the Electronic Age feels appropriate. That ephemeral bolt of energy, so recently understood, is a nice nod to the unprecedented growth in scientific knowledge we've seen. Plentiful power, shipped all across the landscape, revolutionized life as fully as anything since agriculture. It's been transforming us since the later acts of the Industrial Revolution, now getting a second run at revolution with the advent of computers and the Internet.
If this is the Electronic Age, what comes next? I suspect an appropriate name will be the Carbon Age, when that plentiful element bends to our whim. After as fundamental a character as the electron, it does feel like a step backwards to move up in size to an element, but what can be done? The quark, the gluon, the photon: they're such wispy, enigmatic things. No, carbon is my candidate, bridging the gap between the macro- and the micro-scale.
For one thing, there's a structure called the carbon nanotube: a sheet of carbon atoms rolled into a seamless cylinder.
Unroll it and you've got graphene; these materials have pretty unbelievable properties. If you want to build an elevator to outer space, the carbon nanotube is just about your only option for the tether. And it now looks like it's going to have a role powering nanotechnology. Researchers coated carbon nanotubes in explosives, then ignited one end. While an explosion normally radiates energy in all directions, the nanotube caught the heat and channeled it down its length. The first cool thing that happened was that the heat, moving uni-directionally, traversed the nanotube 10,000 times faster than in a regular explosion. From our old friend F=ma, a faster explosion is a more powerful explosion.
On its own, this would be pretty cool and useful. But something else happened: this wave of heat managed to catch hold of the electrons in the nanotube. You could visualize the electrons as buoys floating in the ocean: waves pass them by and they bob up and down in place. But if a large enough wave (a tidal wave, perhaps) were to flow past, the buoys would get caught up in the motion and wash away. It turns out this heat wave did just that, driving the electrons out the other end of the nanotube. This was something scientists didn't expect could happen. The explosive-coated nanotube generated about 100 times the electricity of a battery, by weight. Additionally, while a battery slowly loses energy as it sits unused, there's no obvious reason the nanotube couldn't hold on to its electrical potential for decades if not millennia.
The downside is that the nanotube is not easily reusable like a battery: to generate more energy, a new coating of explosives would need to be applied (or potentially pumped into the nanotube as a gas). It's also not clear whether you could scale this up to, say, power your house. But we've already got solutions for powering the macroscopic world: this breakthrough is revolutionary for what it could allow in the microscopic realm. Traditional engines do not scale well downwards. Being able to generate electricity at the atomic scale is the first step to being able to construct things on the atomic scale: another small step towards nanobots. Being able to instruct agents to work on the atomic scale could be the key to revolutionizing manufacturing, scanning and understanding our brains, stopping cancer and heart attacks, and even colonizing space. Carbon is a great building block, easy to structure into all different shapes, and will likely play a major role in future miniaturization.
Which is half of why I believe the next age will be the Carbon Age. The other reason is a nod to our status as carbon-based lifeforms. Our understanding of DNA continues to grow, and the technology to read and manipulate genes is plummeting in price. Scientists are already starting to custom-build bacteria; you may someday fill your car with gasoline derived from oil that a custom-built bacterium produced. If we can bend life itself to our whim, creating never-before-seen organisms to serve our needs, we'll have entered a new phase in human evolution. This is a topic I'll revisit in more depth later.
Goodbye Electronic Age, hello Carbon Age.