The Singularity: A problem to solve, to solve more problems

There are a lot of questions facing our world today, and many more that we’ll be faced with in the future. What are we going to do about famine, poverty, disease, overpopulation, and our growing energy needs? Will we be able to manage the changing climate of our planet? How does human consciousness work? What is the fundamental nature of the universe? Is there a way we could extend our lifespan? Some of these problems have solutions that are within our grasp. Others seem like they might never be solved. So what happens if we can’t? Do we give up and conclude that it’s just too difficult – even impossible? Or is there another option?

Suppose we could develop some kind of intelligence that was slightly better than the human intellect, whether through an enhancement to our own minds, or the creation of a fully synthetic system. Could this be done? It seems improbable that humanity represents the pinnacle of all possible minds that could exist. After all, we’re descended from much less intelligent ancestors, and other, superior lifeforms may one day descend from us. And there’s no reason why our own cognitive architecture must be the only kind that’s capable of intelligence. This is just how we happened to evolve in the environments of Earth.

So, suppose we were to focus on creating a better-than-human intelligence. Not vastly greater, but greater nonetheless. What could this new and improved intelligence be directed towards? Might it be able to figure out some of the problems we have to deal with? Could it come up with better answers than we would have? Perhaps, or maybe not. Maybe it still wouldn’t be smart enough. So what if we put it to work on developing another intelligent system that’s slightly smarter than itself? Being superior to humans, it would be even more proficient at this than we are, making it better and faster at designing its own successor. And the resulting system would be better still at designing a greater intelligence.

This series of progressively superior intellects could continue in this way, each becoming more skilled at developing the next generation of intelligent systems, until some kind of limit is reached. This accelerating production of ever greater intelligence by greater intelligence is known as the technological singularity: incremental steps that bridge a very large gap in intelligence in a far shorter time than human effort alone could manage.
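The compounding effect is easier to see with numbers. Here is a minimal toy sketch of that feedback loop; the starting value, the growth rule, the time-per-step rule, and the cap are all invented for illustration and are not claims from this essay. It simply assumes each generation improves on its predecessor in proportion to its own ability, and completes each design faster the smarter it is.

```python
# Toy model of recursively self-improving intelligence.
# Purely illustrative: every number below is an invented assumption.

def singularity_toy_model(start=1.0, cap=1000.0):
    """Simulate generations of designers, each building a slightly smarter successor."""
    intelligence = start       # 1.0 = baseline human level (arbitrary units)
    elapsed_years = 0.0
    generation = 0
    while intelligence < cap:
        step_gain = 0.1 * intelligence    # smarter designers make bigger improvements...
        step_time = 10.0 / intelligence   # ...and finish each design faster
        intelligence += step_gain
        elapsed_years += step_time
        generation += 1
        if generation % 10 == 0:
            print(f"generation {generation:3d}: intelligence {intelligence:7.1f} "
                  f"at year {elapsed_years:6.1f}")
    print(f"reached {intelligence:.0f}x baseline after {generation} generations "
          f"and about {elapsed_years:.0f} years")

if __name__ == "__main__":
    singularity_toy_model()
```

Under these made-up parameters, the first doubling of intelligence takes decades, while the final leap from a few hundred times baseline to a thousand times arrives in under a year. That compression of timescales is exactly what the term "singularity" is meant to capture.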

With a vastly greater amount of intelligence available to us, we would no longer have to settle for the human pace of technological development or the limits of human problem-solving ability. That is what the singularity offers us: solutions smarter than the ones we could come up with, reached faster than we could reach them. Even with all of the problem-solving ability that could conceivably exist, there may still be some problems to which there are no good answers. But we’ll be in a much better position to determine that once we have the best tools available to tackle them rather than settling for the status quo.

There is a tradeoff here: When we create a system to do the things we don’t know how to do, we also have no way of knowing just what it is going to do. To know that, we would have to be that smart already – but we’re not. Some people have pointed out that this is dangerous: a smarter system could outsmart us, and a very smart system might be able to figure out how to do almost anything it wanted. So the development of human-superior intelligence is often considered too unpredictable and too uncontrollable to attempt. But there may not be another option here. If we choose to refrain from doing this, someone else might decide to do it first – and there’s no telling what the results would be.

Intelligent systems like AIs have commonly been portrayed as a bloodthirsty threat to humanity. But the reality is that nothing of the sort is inherent to AI as such. An artificial intelligence is fully specified by how it’s initially written, and there are as many different kinds of AI minds as could possibly be programmed. But when AIs are the ones writing the AIs, this is something we have to be especially careful with. When an intelligence can modify itself, or decide what the systems it produces will want to do, it’s very important that it have a stable and benevolent system of goals. This is absolutely crucial to ensuring a good outcome, and designing a safe singularity may turn out to be one of the most pressing problems we’ll have to face in the coming century.

The future is an endless network of dim and potentially lethal corridors which we must navigate with the utmost caution, and so far, the most we can do is fumble in the dark. Here we have a chance to turn on the light, chart a path ahead, and find the best way through. Is success guaranteed? No. But if we can solve this problem, it might unlock the rest. So don’t give up just yet. There are better answers out there. We just have to look a little harder.
