September 08, 2004 11:37 PM

Bruce Sterling on The Singularity

Some years ago, Vernor Vinge came up with an interesting observation.

At some point in the next few decades, we're going to be able to build artificial intelligences that are comparable to human beings in intellectual power. Moore's Law being what it is, soon thereafter, we'll be able to build AIs that are smarter than people, and pretty soon after that, those AIs will be building yet further AIs that are far smarter than people, and so forth.
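
To give a rough sense of what "soon thereafter" might mean, here is a back-of-the-envelope illustration (my own hypothetical numbers, not Vinge's): if hardware capability keeps doubling every eighteen months or so after machines reach human parity, then a decade later the hardware would be roughly a hundred times more capable.

    # Rough illustration of Moore's Law compounding (illustrative figures, not Vinge's):
    # assume capability doubles every 18 months once human-level hardware exists.
    doubling_period_years = 1.5
    years_after_parity = 10
    growth_factor = 2 ** (years_after_parity / doubling_period_years)
    print("Capability after a decade: roughly %.0fx human parity" % growth_factor)
    # prints roughly 102x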

It is possible that before we learn how to build AIs, we'll first learn how to perform "intelligence amplification" or "IA", augmenting human brains with electronics or other mechanisms to produce intelligences that are better than human. Such amplified humans would be able to work on improving the amplification technologies, which may also lead to massively superhuman intelligences.

It is possible that the first superhuman intelligences will merely be faster versions of human intelligence, implemented by simulating the human brain on a very fast hardware platform. Vinge calls this "weak" superhumanity, but it is still potentially quite impressive. K. Eric Drexler, in his fantastic (but somewhat dated) book "Engines of Creation" (also available online), presents a mechanism for simulating a human brain, using a conservative nanotechnological design, that would run about a million times faster than a human brain. Such a being could perform a century's worth of engineering work in less than an hour. Presumably such minds could improve their own hardware designs with breathtaking speed. Drexler's design is a pure gedankenexperiment: no one is likely to ever build the precise construct he describes, but since there is solid evidence that it could be built, it tells us that such a construct is at least possible, even if far better designs could be made.
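
The arithmetic behind that claim is easy to check (a back-of-the-envelope sketch using the round numbers above, not Drexler's own worked figures):

    # Back-of-the-envelope check of the "century in under an hour" claim:
    # a mind running a million times faster than a human brain compresses
    # a century of subjective work into well under an hour of wall-clock time.
    hours_per_century = 100 * 365.25 * 24   # roughly 876,600 hours
    speedup = 10 ** 6                       # the millionfold figure cited above
    wall_clock_hours = hours_per_century / speedup
    print("About %.2f hours of wall-clock time" % wall_clock_hours)
    # prints about 0.88 hours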

Vinge notes that once there are intelligences that are substantially smarter than people, and which rapidly become smarter still, the world will change beyond all human comprehension. The limits of human intelligence will no longer constrain the speed of technological progress, and humans will no longer be the apex of our civilization.

Vinge wrote a famous essay some years ago on this topic, coining the term "The Singularity" for it. Once superhuman intelligence appears, our models of the future and our ability to predict what lies ahead get irreparably ruptured. No dog, however clever, will ever understand integral calculus, and it is equally unlikely that humans would understand the science and technologies of beings far smarter than we are. (Vinge's essay is very well written — I encourage people to give it a read.)

Vinge notes in his essay (as of 1993) that he would be surprised if such changes happened before 2005 or much later than 2030, but the dates are immaterial in my opinion. Whether such events happen in ten years or in a hundred years, the impact will be the same, and thirty years or a century are both a blink of an eye in the context of the whole of human history.

Do I believe Vinge? Very much so. Human intelligence is the result of physical processes taking place in the brain, and we will thus someday be able to simulate those processes with machines. We will likely also design machines that produce the same effect by different means, much as cars are not like horses but also provide transportation. To claim that we could never gain such abilities is to claim that human intelligence arises from a supernatural "soul" of some sort, and I see such overwhelming evidence against that claim that I cannot give it even passing credence. That which arises from a physical process we can eventually simulate and understand, and that which we can simulate and understand we can improve. Whether we enter the post-human era today, tomorrow or in two centuries is immaterial — it will happen eventually if we don't kill ourselves off first.

This brings us to the topic of Bruce Sterling.

Sterling has recently made vague attacks on Vinge's arguments in two public fora. One such attack was a speech he gave to the Long Now Foundation (available here). Today, I was pointed at an opinion piece in Wired with much the same content.

Here's an excerpt from the Wired essay:

A singularity looks great in special f/x, but is there any substance in the idea? When Vinge first posed the problem, he was concerned that the imminent eruption in artificial intelligence would lead to ubermenschen of unfathomable mental agility. More than a decade later, we still can't say with any precision what intelligence is, much less how to build it. If you fail to define your terms, it is easy to divide by zero and predict infinite exponential evolution. Sure, computers might someday awaken into something resembling human consciousness, but we have no metrics to describe that awakening and thus no objective way to recognize it if it happens. How would you test a claim like that?

Sterling completely misrepresents Vinge's essay on the Singularity. Vinge made no claims to understand intelligence, and his argument does not require that we understand it precisely. Vinge never claimed that such breakthroughs would have happened by now, and his argument in no way requires a particular timetable. He made no claims about "infinite exponential evolution", either.

"Consciousness" is also a red herring. Asking "how would you test a claim like that" is clearly the wrong question to ask — Vinge's claim is not about "consciousness" and there is no need to test the "consciousness" of the superhuman intelligences. We will know if they are more intelligent than us by their actions, such as building constructs we cannot understand, and whether they are "conscious" or not is immaterial to the argument.

Sterling's tone throughout is laden with indirection. He doesn't ever come out and say "I think the Singularity is implausible for the following reasons" — much like astrologers or the Oracle of Delphi, he avoids making specific claims and thus can't be found to be obviously wrong.

The comments he does make, though, seem stunningly off the mark:

Even if machines remain inert and dumb, we still might provoke a singularity by giving humans a superboost. This notion is catnip for the techno-intelligentsia: "Wow, if we brainy geeks were even more like we already are, we'd be godlike!" Check out the biographies of real-life geniuses, though - Newton, Goethe, da Vinci, Einstein - and you find vulnerable mortals who have difficulty maintaining focus. If the world were full of da Vincis, we'd all be quarrelsome, gay, left-handed Italians who couldn't finish a painting.

Glib, but I hardly see what it has to do with Vinge's argument at all. Either minds are a physical phenomenon, and gedankenexperiments such as Drexler's point to ways that we might produce faster (and possibly "better") minds than our own, or they aren't physical phenomena and cannot be understood or simulated. Perhaps Sterling claims the mind does not arise from a physical phenomenon, though that would seem to be solidly contradicted by the science of our day. Perhaps he believes artificial intelligence research is forever doomed to fail even if the mind arises from physical phenomena, though I see little reason to assume that either. Perhaps he truly believes that all superhuman intelligences would be crippled by Attention Deficit Disorder, but that is a pretty implausible claim, and he certainly gives no evidence for it. Perhaps he finds the idea of people exploring this avenue of research distasteful or perhaps he hates smart people (the "brainy geeks" comment seemed a bit anti-intellectual), but any such distaste doesn't appear to have any relevance to whether Vinge is right or not.

Unfortunately, Sterling makes no arguments in any of these directions. He merely insinuates. Since he's fairly non-specific about what he's claiming, one can't be completely sure what he believes.

What Sterling lacks in specificity, however, he makes up for in irrelevant and fairly bizarre side commentary, such as this:

More likely yet, we live in a dull, self-satisfied, squalid eddy in history, blundering around with no concept of progress and no sense of direction. We have no idea what we really want from our own lives or from society. And no Moore's law rising majestically on any 2-D graph is ever going to make us magnificent or spiritual when we lack the will, vision, and appetite for spiritual magnificence.

None of this, of course, intersects with Vinge's arguments in the slightest. It is a complete non sequitur.

Even though Sterling's final paragraphs are in no way relevant to his claims about the idea of the Singularity, I still must take issue with them. I don't see our society making "no progress" or being particularly "squalid". Frankly, it is amazing how much we've done even in the last couple of decades to reduce poverty, disease and other human ills. Virtually any objective measure one chooses to pick, from life expectancy among the poorest 20% of the population to the number of people living without indoor plumbing, will show that pretty clearly.

I also have to admit that I have no particular desire in my life for the "spiritual". If by "spiritual" he means religion, I have no belief in the supernatural, and no desire to see society waste more of its time on such flim-flam. If by "spiritual" he means that not enough people share his particular tastes in art or architecture, well, a person who truly appreciates human freedom does not deny others the right to their own taste.

Of course, as I've noted, since Sterling is extremely vague, it is hard to know what he means with any precision. What I can say, though, is that he appears to have failed to make a coherent case against the idea of the Singularity.


Posted by Perry E. Metzger | Categories: Science & Technology