What a Safer Intelligence Explosion Might Look Like

An intelligence explosion is an extremely dangerous possibility for reasons that have been well articulated. There is no way to guarantee that a runaway self-improving AI will have humanity’s best interests in mind.

But what if, rather than originating in some computer, an intelligence explosion arises from the augmentation of existing human beings? This seems a potentially safer route, since some continuity would be maintained. These new super beings would once have been human, and thus might stand a higher chance of sharing our values.

Of course, if only one superhuman, a singleton, gets to decide the fate of the rest of us, that is also frightening. In such a situation, we are stuck betting on the benevolence of a single individual. It would therefore seem preferable to upgrade as many people as possible simultaneously. A democratized, distributed intelligence explosion would produce multiple actors who could check one another's power, increasing the chance that a more universally beneficial world emerges.

Putting these two concepts together, we get the idea of a continuous, distributed intelligence explosion. What would such an explosion look like?

Imagine we all have nanobots in our brains. These nanobots are even cheaper than cellphones are today, so that people in the poorest developing countries can afford them. They improve people's cognitive abilities, but not to the point of triggering a full-on intelligence explosion. When the critical intelligence upgrade finally arrives, it is downloaded by everyone on the planet simultaneously.

Is this scenario implausible? Perhaps. But it might be a helpful ideal to steer towards. As an outcome, it seems far safer than one in which self-improving AI is unilaterally created on some corporate or government supercomputer.
