Thursday, March 29, 2012

The Singularity With Vernor Vinge

This interview is of interest because it addresses two issues presently staring us in the face. One is human longevity and the other is the advent of artificial intelligence.

I will also share my own thoughts on these topics.

Longevity is coming, and from the Noah report we can establish that a thousand years of biological life is quite achievable.  Our own sense of our biological options suggests that this could actually be the case.  We already know that specific replacement is almost upon us for all parts of our bodies, including even parts of a damaged brain.

What has not yet been addressed is a method to fully back up the brain itself and to rewrite that backup onto a new brain, effectively inducing the rewiring needed to match the abilities of the previous brain.  Most likely this will turn out to be either quite impractical or simply unethical.  Thus the backup will in fact cast off the biological body and use artificial constructs to express itself in the physical world.

I also sense that the mature biological entity will mostly engage organically in the world, allowing itself the full interaction of nature. It makes excellent sense to spend part of every day attending to the needs of the growing plants in one's abode.  It should be done as a child and certainly once one has stepped away from any consuming task.  In this way an individual can find healthy pleasure in each and every day for thousands of years. Life is never boring if one is paying attention.

That leaves us the issue of so-called super intelligence.  First off, such intelligence is highly overrated and easily augmented with smart machinery that is always prompting the user.  It all raises the question of what requires super intelligence in the first place.  Speed is irrelevant as that is only an assist.  It matters little as long as the right answer is acquired in a timely manner.

In fact, even in desperate circumstances, the mind will freeze unless it is preprogrammed to do otherwise, as the military knows so well.

It is also true that the bulk of humanity does not avail itself of sufficient training to become proficient in the wide range of tasks that can be expected of it.  Yet facing a long life, it is easy to establish cycles of improvement that bring every individual to a superior level.

Real intelligence includes the capacity to imagine a different future, and that appears possible for some human beings, although even that needs to be addressed in a logical manner.

Fabricating that capacity will be difficult and will properly encompass a greater level of integration than most imagine.  My own discoveries over time ultimately relied on the development of a prepared mind and on accepting a long cycle between the definition of the problem itself and a final resolution.  I have grown patient. :-)

Vernor Vinge Is Optimistic About the Collapse of Civilization

  • By Geek's Guide to the Galaxy 
  • March 21, 2012, 6:00 am
  • Categories: Books and Comics, sci-fi


Vernor Vinge's latest novel is called The Children of the Sky.

Noted author and futurist Vernor Vinge is surprisingly optimistic when it comes to the prospect of civilization collapsing.

“I think that [civilization] coming back would actually be a very big surprise,” he says in this week’s episode of the Geek’s Guide to the Galaxy podcast. “The difference between us and us 10,000 years ago is … we know it can be done.”


Vinge has a proven track record of looking ahead. His 1981 novella True Names was one of the first science fiction stories to deal with virtual reality, and he also coined the phrase “The Technological Singularity” to describe a future point at which technology creates intelligences beyond our comprehension. The term is now in wide use among futurists.

But could humanity really claw its way back after a complete collapse? Haven’t we plundered the planet’s resources in ways that would be impossible to repeat?

“I disagree with that,” says Vinge. “With one exception — fossil fuels. But the stuff that we mine otherwise? We have concentrated that. I imagine that ruins of cities are richer ore fields than most of the natural ore fields we have used historically.”

That’s not to say the collapse of civilization is no big deal. The human cost would be horrendous, and there would be no comeback at all if the crash leaves no survivors. A ravaged ecosphere could stymie any hope of rebuilding, as could a disaster that destroys even the ruins of cities.

“I am just as concerned about disasters as anyone,” says Vinge. “I have this region of the problem that I’m more optimistic about than some people, but overall, avoiding existential threats is at the top of my to-do list.”

Read our complete interview with Vernor Vinge below, in which he talks about living to be 100,000, how the space program could endanger Earth, and how the Technological Singularity might unfold. Or listen to the interview in Episode 56 of the Geek’s Guide to the Galaxy podcast, which also features a chat with Caribbean-born science fiction author Tobias S. Buckell.

Wired: You’re famous for coining the phrase, “The Technological Singularity.” How did you first come up with that?

Vernor Vinge: I used that term first, I think, at an artificial intelligence conference in 1982. Actually, it was a conference with Marvin Minsky, the famous A.I. researcher, and several science fiction writers were on the panel — Robert Sheckley and Jim Hogan. I made the observation that if we got human-level artificial intelligence, that would certainly be a world-shaking event, and if we got superhuman-level intelligence, then what happened afterward would be fundamentally unintelligible.

In the past, when some new invention came along, it generally had all sorts of unexpected consequences, but those consequences could be understood. The example I like to use is that if you had a magical time machine and you could bring Mark Twain forward into the 21st century, you could explain our world to him and he would understand it quite quickly. He’d come up to speed in a day or two, and he would probably have a very good time with it. On the other hand, if you tried to do that explanatory experiment with a goldfish, there’s no way you could explain our world to a goldfish in a way that would be meaningful, as it is to us humans.

That is a consequence of this particular type of progress — that is, in making creatures that are smarter than humans. And I think it was probably even as I was talking on this panel, it occurred to me that the term for that was a little bit like with a black hole. There are only a few types of information you can get out of a black hole — in general relativity — and this was sort of a social or a technological example of the same sort of thing. Now, the particular idea of super-intelligence — not just A.I. but superhuman intelligence A.I. — is intrinsic in stuff that had been going on back at least to the ‘50s, and the notion that it would be something that would not be understandable was probably lurking out there too. I think the only thing I said on that panel that made a special difference was the term, which I think highlighted the situation.

Wired: What are some of the scenarios for how the Singularity might unfold?

Vinge: I think there are all sorts of different paths to the Singularity, at least five pretty different paths. I think they’re going to be all mixed together, but it still helps to think about them separately because it makes them easier to track. For instance, there’s classical artificial intelligence. You just build a big machine and hope you can figure out some way to make it very, very smart. Or one that is very much in a lot of people’s minds now, I think, is simply that the internet plus the people on the internet — the internet, its computers, its support software, its server farms, and then billions of human beings — those together could come to constitute a superhuman entity that would qualify as giving us a Singularity.

Another path to the Singularity that in many ways is the most attractive — and actually was also the topic of the first science fiction story I ever wrote that sold — is the notion of “intelligence amplification,” which is that we get user interfaces with computers that are so transparent to us that it’s like the computer is what David Brin calls our “neo-neocortex.” What’s nice about that is that we actually get to be direct participants, and in that particular case, when I say that the post-Singularity world is unintelligible, well, yeah, it is unintelligible to the likes of you and me, but it would not be unintelligible to the participants that are using intelligence amplification. I have a friend in robotics that I brought this up with long, long ago, and he said, “Well, Vernor, I really don’t have any argument with the claims you’re making about what’s going to happen, except this business about it being unintelligible — it’s not unintelligible if you are riding the curve of increasing intelligence.” And then he smiled and said, “And I intend to ride that curve.”

There are at least two other possibilities. One is simply bio-science raising human intelligence by enhancing our memory and enhancing our ability to think clearly, and then I think there’s one that is becoming more evident but is sort of off-stage, and that is the notion of a “Digital Gaia,” a sort of internet under the internet that consists of all the networked embedded microprocessors in the world. The Digital Gaia is certainly the most alien of the different possibilities. In fact, I sort of like to trot it out to give an example of something that’s pretty obviously very strange and hard to understand. Imagine something like the world becoming its own database, where reality itself wakes up. Actually, more than anything else it looks like some sort of implementation of animism. So that particular possibility, Digital Gaia, to me is certainly the most alien and in some ways the most nervous-making, because if the world woke up then a lot of our common sense about the world would not be valid anymore. Karl Schroeder had a great book that discussed this sort of possibility, and that was his novel Ventus.

Wired: Which works of science fiction do you think have featured the best treatment of the Singularity?

Vinge: Probably the most courageous walkthrough into the Singularity was Accelerando by Charles Stross. He actually follows the development, I think, from the 2010s through the 2070s. He also said that by the time they got to the 2070s, he was no longer seriously claiming that what he’s describing would be like the post-Singular world. I suspect that comment was related to the notion that after several decades of this, things would be seriously beyond what a writer could understand in our era, and what the readers of our era would understand.

Wired: As a retired math professor, how useful do you think mathematical models are for predicting the future?

Vinge: There are a lot of different things that go under the name “mathematical models.” Moore’s Law is an observation about the past that’s turned around as an extrapolation about the future. There are a lot of different things that are mathematical models, and my attitude toward them is very cautious. I think one of the most important nonfiction books so far this century is Nassim Taleb’s The Black Swan. But I fear that what’s happening with that book is a lot of people give it lip service. “Oh, yeah, Taleb really has a good point in The Black Swan about not trusting certain sorts of models.” The thing is, there are mathematical models that are so seductively attractive that even though people recognize that they are not workable, they still go and use them because they’re so easy to use and they give such definite answers. So that’s a book I recommend for everybody to read, and it illustrates fundamental problems with dealing with models when you’re also dealing with people.
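
To make the Moore’s Law point concrete, here is a minimal Python sketch (my own illustration, not something from the interview) of how an observation about the past gets turned around into an extrapolation about the future: fit a doubling period to two historical transistor counts, then project the curve forward. The chip figures and the projection year are illustrative assumptions.

```python
import math

# Approximate transistor counts for two widely cited Intel chips
# (illustrative figures, not data from the interview).
past = [(1971, 2_300), (2011, 2_600_000_000)]

(y0, n0), (y1, n1) = past
# Solve n1 = n0 * 2**((y1 - y0) / T) for the doubling period T.
doubling_years = (y1 - y0) / math.log2(n1 / n0)  # comes out near 2 years

def extrapolate(year: int) -> float:
    """Turn the past observation around: project the trend to a future year."""
    return n0 * 2 ** ((year - y0) / doubling_years)

print(f"fitted doubling period: {doubling_years:.2f} years")
print(f"projected count for 2031: {extrapolate(2031):,.0f}")
```

The model hands back a crisp number for any year you ask about, which is exactly the seduction Vinge describes; nothing in the fit says the trend must continue.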

Wired: Ray Kurzweil has gotten a lot of attention recently for his optimism about extending human life spans. What do you think about his predictions?

Vinge: First of all, I’m all for human life extension. In The Singularity is Near, I think, he has a nice discussion of the situation that a lot of essayists have where they say, “Oh, we really don’t want that. A wise and philosophical person realizes that life needs to be limited, and that’s a good thing,” these essayists say. He does a good job of criticizing that point of view, and I certainly agree with that. Furthermore, I think that a human lifespan of a thousand years with post-Singularity technology is easily doable. I think a lifespan of a thousand years would actually — Singularity aside — do human society and human nature a great deal of good, and I don’t think it is that difficult; it probably can even be achieved without having a Technological Singularity.

With life spans of 10,000 to 100,000 years, you begin to look at what’s involved, the humans that are involved, and how capable a human mind is of absorbing variety. Larry Niven had a story many years ago called “The Ethics of Madness,” in which — it’s not the main point of the story, I don’t think, or the main point of the action — the story includes the notion of a person who lives to be 100,000 or 200,000 years old. It is really scary what they are like in the last 100,000 years or so. It raises some questions about what it means to be alive. It’s really not what you would want. This is a different sort of complaint than the complaint of all these people that say, “Oh, humans were not meant to live more than a hundred years or so.”

The complaint or the criticism here is that the human mind has a certain level of ability to handle different sorts of complexity, and if you believe that you could go 100,000 years and not be turned into a repeating tape loop, well, then let’s talk about longer periods of time. How about a billion years, or a hundred billion years? At a hundred billion years, you’re out there re-engineering the universe. The age of the universe becomes your chief longevity problem. But there’s still the issue of, what would it be like to be you after that? This raises the point, which actually I’m sure is also on Ray’s mind, that if you’re going to last that long you have to become something greater, and the Singularity is ideally set up to supply that. So for the people who are into the intelligence amplification mode of looking at these things, this all fits. And I’m not saying that in a critical and negative way, it does all fit, and it puts you in a situation where you are talking realistically about living very long periods of time, perhaps so long that you have to re-engineer the universe because the universe is not long-lived enough. At the same time, you have to be growing and growing and growing. I mean, intellectually growing.

Now, if you look at that situation, it ultimately gets you, I think, to a very interesting philosophical point, which really I don’t think was within the horizon of what people normally thought about two or three or four hundred years ago. And that is, if you did grow intellectually, would you be the same person? Well, most of us would argue that we are pretty much the same person as far back as we can remember. You know, we have changes in viewpoint, but what you were when you were five and what you are now, there is certainly a community of self-interest there, and it probably doesn’t bother most people too much. They feel good about what they know now, and they feel sympathetic to what they were then.

Now, compare yourself to the zygote that became you. There’s a little bit more of an empathetic stretch necessary there. I’m sure that I understand my zygote as well as it ever understood itself, but I bet you that it doesn’t understand me very well. In fact, the amount of it that’s still in me is at a very low level, even in terms of the genes. There’s also what’s happened in terms of epigenetic things since that zygote began to grow. Push that further, and the little part of this story that actually is you becomes more and more diluted. So if you really are serious about talking about living forever, not just living for a thousand years or a hundred thousand years, if you’re really serious about that, you come face to face with the same general issues that the Singularity raises, and that is issues of identity and mind.

I don’t mean this as pessimistic, and I certainly don’t mean it to put down the idea of living for a very long time, but it just raises the issue that, in a very cool way, we have come to a point where we can talk with some realism about getting the things that humans have always wanted so much, and actually facing that up close and seeing that we can do it, it pushes optimism to the point where it is, not unreasonably, something that makes people nervous.

Wired: I listened to a talk where you mentioned that one of the drawbacks of the space program is that it would give a lot more people what amounts to WMD capability. Could you talk about that?

Vinge: If you Google my name and the phrase “What if the Singularity does not happen?” that was in that talk. And I’m very proud of that talk, partly because I think that for scenario planners and science fiction writers in general, it’s always good that if you have some idea about what the future’s going to be like, that you also work out a scenario where it doesn’t happen, and try to explain plausibly why it might not happen. And actually, one doesn’t have to scratch that talk very deeply to see that it’s the background for my novel A Deepness in the Sky. Most of the latter part of the talk is how important space travel is for human survival, but there is also the fact that, in the short term at least, when all our eggs are still in one basket, namely on the surface of the Earth, that to be able to get something up to orbital speeds gives it a lot of kinetic energy, and those levels of kinetic energy are — depending on the mass involved — comparable to some pretty serious weapons that could do us grief at least at a city level.

So I think actually, as with all technologies, there are dangers and downsides. I would say these are relatively mild. Probably if we do get space flight there’s going to be rules of the road for anything inside cislunar space. And there’s going to be people watching pretty carefully, especially objects that are very massive. Anybody who sends an asteroid into cislunar space I think is going to be watched very, very carefully, because there you’re getting up to a level of kinetic energy weapon that would do serious damage to everybody on Earth. I have a small theory that this is one reason why space travel development has gone slowly, in that it gives military advantage in an unclear way, and the top players were not interested in poking that particular gorilla, so they just settled for very much slower progress. I think we are entering an era now where we will see a renaissance in space flight. I hope it’s not a military renaissance, which would do the job but would probably raise the risks of the sort of threat that you are talking about. And ultimately, of course, having self-sufficient settlements off Earth is one of the most important insurance policies that the human race can have. Since we don’t know about any life anywhere else in the universe, one could also regard it as a life insurance policy for life itself in the universe.
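
For a rough sense of the energies Vinge is describing, here is a back-of-the-envelope sketch in Python (my own numbers, not figures from the interview): the kinetic energy of a mass moving at a typical low-Earth-orbit speed, expressed in TNT equivalent.

```python
TNT_JOULES_PER_TON = 4.184e9  # standard TNT equivalence, joules per ton
ORBITAL_SPEED_M_S = 7_800     # typical low-Earth-orbit speed, m/s

def kinetic_energy_tnt_tons(mass_kg: float, speed_m_s: float = ORBITAL_SPEED_M_S) -> float:
    """Kinetic energy 0.5 * m * v**2, converted to tons of TNT."""
    return 0.5 * mass_kg * speed_m_s**2 / TNT_JOULES_PER_TON

# A 10-tonne object at orbital speed: on the order of 70 tons of TNT.
print(f"10 t at orbital speed: {kinetic_energy_tnt_tons(10_000):,.0f} t TNT")

# A ~100 m rocky asteroid (~1.4e9 kg at 2700 kg/m^3): roughly 10 megatons.
print(f"100 m asteroid: {kinetic_energy_tnt_tons(1.4e9) / 1e6:,.1f} Mt TNT")
```

The scaling is the interesting part: energy grows linearly with mass, so an ordinary spacecraft sits at the weapons-grade end of the scale, while even a small asteroid reaches into the megaton range that makes cislunar traffic worth watching.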

Wired: Another thing in that lecture that really struck me is that you seemed fairly optimistic about the potential for human civilization to rebuild itself following a complete collapse. I’d always imagined that since we’ve already extracted all the easily obtainable oil and coal and so on that would be very difficult. Could you talk about the issues with that?

Vinge: That’s a really important point: how difficult is it to come back from a civilizational collapse? I’m going to say some optimistic things here, and I don’t mean them to trivialize what happens if you had a civilizational collapse. I mean, if we had a civilizational collapse, even a fairly mild one, you and I would almost certainly be dead. And a serious collapse that involved most of the people dying would obviously do that to most of the human race. It’s just absolutely ghastly.

On the other hand, I think that coming back would actually be a very big surprise. The difference between us and us, say, 10,000 years ago … there are obvious differences, like the level of our technology. But there’s another, more important difference, and that is, we know it can be done. I think the human race wandered around for tens of thousands of years sort of bouncing from one stupid, mean-spirited solution to another, because we had no idea what could be done.

Now, one aspect that you brought up was how we’ve mined all the easily accessible stuff. I disagree with that, with one exception — fossil fuels. I agree when it comes to fossil fuels. But almost every other resource — well, actually, I should also say that if we had a really bad collapse and managed to destroy the ecosphere, that’s another resource that would be hard to get back. But the stuff that we mine otherwise, we have concentrated that. I imagine that ruins of cities are richer ore fields than most of the natural ore fields that we have used historically. And not only at the level of ore, but at the level of all sorts of technological things. Just pre-built steel beams in large cities are all over the place, and they’re quite hard to make if you really got knocked back a long way.

With higher sorts of technology, it becomes more and more debatable whether it would still be working, but it’s obvious that a lot of bulk technology is just there for the picking up. And this would make things go very, very fast when combined with the notion that we’d know what’s going on. Depending on how far we got knocked back, we’d have lots of detailed knowledge, even humans that remembered what things were like. Although technology built from scratch by people who not only had no idea about technology but no idea that it could even be done, in a world where there were no ruined cities…. Yeah, that would be something that would be very problematical to happen in any near-term sort of way.

I had a very interesting chat with [science fiction author] David Weber a few years ago. We were wandering around the American Library Association dealer’s floor, chatting about this exact issue, and I found that actually David Weber had a point of view that I have come to subscribe to, which is even more optimistic. His assertion was that human population could be a long time coming back, just because of human biology, but he felt that if we did not get wiped out, if there were humans left afterward, that there would be areas on Earth at 1800 to 1900 levels of technology within one human lifetime of the crash, and I’ve thought about that a lot, and I can see how it fits with the rest of the argument that I was peddling but that I didn’t have quite that much optimism for. Now, having said all that, I am afraid that it might lead some people to the conclusion that I’m saying, “Oh, there will be a bad day or two, but don’t worry about those disasters, you know, we’ll muddle through and be back as good as new before you can say ‘Jack Robinson,’” and I am not saying that. First of all, there are disasters that could kill everybody, and there’s also just the level of destruction that we are talking about, and the level of human tragedy, and the tragedy for the earth. Looking at the universe as a whole, furthermore, it is entirely plausible that there are disasters that nobody ever climbs out of. And so I would say that I am just as concerned about disasters as anyone. I have this region of the problem that I am more optimistic about than some people, but overall, avoiding existential threats is at the top of my to-do list.

Wired: Within the science fiction field, two of the concepts you’re best known for are the idea of the “zones of thought” and the idea of the “gestalt-sentient species.” How did you come up with those ideas?

Vinge: Both the zones of thought and the Tines group mind critters started out in the same milieu. The zones of thought were my attempt to get around the limitations that it seems to me that the Technological Singularity imposes on us science fiction writers. And the magical assumption — and it is a magical assumption — about the zones of thought is that superhuman intelligence is simply impossible in certain parts of the galaxy. And then as sort of a fillip I added two other zones. One was an intermediate zone in which superhuman intelligence was not possible but faster-than-light drives were. So in one universe I was able to have three or four different subgenres of hard science fiction. One is about the Technological Singularity, one is about faster-than-light travel, and one is where faster-than-light travel is not possible. Then there was a fourth zone which is essentially intractable, and that is where even human-level intelligence is not possible, and that’s the Unthinking Depths. So that gave me a nice single universe that I could have accomplished otherwise only by doing it as a progression in time, as technology improved, different things becoming possible.

The Tines were not really to solve a problem like the zones were. The Tines grew out of my idea box — as ideas occur to me I write them down, and one observation that I made a long time ago, when I read science fiction stories, was that there were all sorts of science fiction stories about group minds. The Borg was not the first such. They go back to probably the beginning of the 20th century, and they were very big in Star Maker by Olaf Stapledon. But one thing I noticed is that these group minds usually involved very large numbers of members. The individual members might be of human intelligence or they might only be of animal intelligence, but the ensemble was actually a very large group, and I noticed there were hardly ever any group minds where there were three or four or five members. It definitely had been done — for instance, Poul Anderson had a novel I think in the Flandry series that involved a race where each individual is actually from a different species. There was an avian type, and an herbivore type, and I think there was an ape type, and it took the three of them to make a single person.

That may be the only such story that I remember, at least at the time I made the observation, so that had been lying around in my idea box for a long time, and I decided to use it, and I think the great piece of good luck — from a purely writer standpoint in using the idea — was I decided to make the group members from a species that was at least vaguely doglike. So that meant that I had a lot of leverage with what we humans are already familiar with. We’re familiar with dealing with dogs as individuals, we’re familiar — less familiar, but somewhat familiar — with dealing with dogs as part of pack-like groups. So an awful lot of stuff sort of came along with that idea, and I did not have to further explain those sorts of things. They were sort of already rooted in the consciousness of most readers. So adding the notion that the pack itself was intelligent meant that a whole lot of things were very, very easy to do, and lots of language was easy to use in terms of packs and in terms of group behavior.

Wired: Your latest novel is called The Children of the Sky. What’s it about?

Vinge: The Children of the Sky is a sequel to A Fire Upon the Deep, and when I say “sequel,” I mean sequel as that term is understood by most people nowadays. I’ve sort of made a career of writing strange sequels, like sequels that take place 50 million years later, or sequels that take place 10,000 years earlier, and things like that. This is really a canonical sequel — it takes place two to ten years after the end of A Fire Upon the Deep. It has many of the surviving characters from A Fire Upon the Deep, and it follows along with their problems. It’s not giving anything away, but one disappointing thing about it is that it really doesn’t get into space. It’s all on Tines World, and it’s about the travails of the refugee children, who have now all been revived — almost all the survivors have been revived from the refugee ship in A Fire Upon the Deep, and so it follows their adventures along with these pack-minded creatures called the Tines.

Wired: Are there any other new or upcoming projects you’d like to mention?

Vinge: I’m trying to decide what is the right next thing to write. I’ve gotten quite a bit of feedback from people who want the sequel to the sequel, that is, the sequel to The Children of the Sky, and I do have ideas for that. I also have ideas for near future things on Earth, which tie in more to the sort of things that we’ve been talking about earlier in this interview. Every time I turn around now, you know, it’s 2012! We are going into the middle of things, and maybe it’s my imagination, but I think there are all sorts of things that are visible now that were not so visible before, and I think that there’s all sorts of really cool science fiction that folks could write, and I hope to be one of those folks.
