What a Safer Intelligence Explosion Might Look Like

An intelligence explosion is an extremely dangerous possibility for reasons that have been well articulated. There is no way to guarantee that a runaway self-improving AI will have humanity’s best interests in mind.

But what if, rather than originating from some computer, an intelligence explosion arises from the augmentation of existing human beings? This seems a potentially safer route, since some continuity would be maintained. These new superbeings would once have been human, and thus might have a better chance of sharing our values.

Of course, if only one superhuman, or singleton, gets to decide the fate of the rest of us, that is also frightening. In such a situation, we are stuck betting on the benevolence of one lone individual. Thus it would seem preferable to simultaneously upgrade as many people as possible. A democratized or distributed intelligence explosion would produce multiple actors who could check each other’s power and increase the chance that a more universally beneficial world is created.

Putting these two concepts together, we get the idea of a continuous, distributed intelligence explosion. What would such an explosion look like?

Imagine we all have nanobots in our brains. These nanobots are cheaper even than today's cellphones; people in the poorest developing countries can afford them. The nanobots improve people's cognitive abilities, but not to the point of triggering a full-on intelligence explosion. When the critical intelligence upgrade finally arrives, it is downloaded by everyone on the planet simultaneously.

Is this scenario implausible? Perhaps. But it might be a helpful ideal to steer towards. As an outcome it seems far safer than one where self-improving AI is unilaterally created on some corporate or government supercomputer.

Advanced Technology Could Make Hell A Reality

I don’t believe in hell. But that doesn’t mean that with advanced technology we couldn’t create a functional version of it in the real world.

Some pundits think we have a chance of defeating death in the future. One of the nice things about death is that it is the ultimate eject switch. No matter how long you are tortured, whether by a disease or a sadistic individual, eventually you will die. But when you introduce the possibility of immortality, you simultaneously introduce the possibility of unending suffering.

What would be the motive for creating a real-life hell? Perhaps it would be used as a form of punishment for certain kinds of criminals. Unending suffering would be a far more effective deterrent than jail or the death penalty. Alternatively, the humans or AIs in charge of some future dystopia might simply be sadists. While such outcomes seem improbable, their sheer awfulness means they deserve attention. Arguably, avoiding unending suffering via technology should rank even higher on the priority list than avoiding extinction.