Why Machine Learning is not a path to the Singularity

(This piece was originally posted on Medium)

All of the advances in Artificial Intelligence that you ever hear about — whether it’s machines beating humans at Go, great advances in Machine Translation, self-driving cars, or anything else — are examples of weak AI. They are each focused on a single, narrowly defined task. Some would have you believe, even fear, that these advances will inevitably lead to strong AI (human-level intelligence), which in turn will lead to a Superintelligence we’ll no longer control, one that will go so far as to subjugate the human race and take over the world! Perhaps there are paths to this sort of Superintelligence, but weak AI — whether using Machine Learning or other techniques — isn’t one of them.

Among those convinced of the simple progression from weak AI to strong AI are Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, who recently told Business Insider that machines will reach human levels of intelligence within the next few decades, and the neuroscientist and philosopher Sam Harris. Harris proclaimed in a recent TED talk,

as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually build general intelligence into our machines.

He even added that,

Any progress is enough, we just need to keep going.

The idea of an artificial Superintelligence arising as an inevitable consequence of continued technological progress was popularized (albeit more optimistically) by Ray Kurzweil. In his book, The Singularity Is Near, Kurzweil offers page after page of exponential growth curves as evidence for this inevitability.

One thing that both the pessimistic and optimistic takes on the Singularity have in common is a complete lack of rigor in defining what they’re even talking about. Bostrom, disappointingly but perhaps also wisely, never attempts a definition of intelligence. Harris defines it as information processing and warns us that continued improvements in information processing will inevitably result in entities far more intelligent than we are.

Let’s imagine these entities of the future that have arisen as a result of progress in weak AI. Imagine them as creatures who can do things in the world (otherwise, how could they threaten us?). What are these creatures interested in? Whatever their interests, they must be interests that are better served by increased information processing capabilities. Now let’s flip the progress curve around and go back in time…

Traveling back in time, the information processing ability of these creatures, which is what enables them to serve their own interests better than we can serve ours, decreases. So at each point we should find creatures with interests, i.e., with desires, motivations, etc., that they are less and less able to serve, since they’ve got less “intelligence”. But we already know that’s not what we’ll find. Imagine we go all the way back to the present! All we’ll find is machines built by humans, with algorithms written by humans, all designed to serve the interests of humans. If our imagined superintelligent entities of the future existed (and remember, they’re supposed to be the descendants of current weak AI systems), then their past would exist right now. But it doesn’t.

Intelligence isn’t an end in itself; it’s a means to an end. We humans make use of our superior information processing abilities to survive and thrive in the world. This includes creating weak AI systems. If in the future there are other entities that can make similar use of such resources, then it’s the creation of those creatures, not the weak AI systems they’ll make use of, that marks the beginning of progress towards Superintelligence. After all, weak AI systems are essentially tools. They’re great for doing specific things, but they’re never going to take over the world by themselves. For that sort of danger, you need a creature or an entity who looks at these tools and says, “I want to take over the world, and maybe I can use these tools to help me do that.”
