Artificial Intelligence – Our Final Invention?
Robots and artificial intelligence have always played a prominent role in science fiction. Memorable characters include HAL 9000 from 2001: A Space Odyssey and Marvin the Paranoid Android from The Hitchhiker's Guide to the Galaxy. But is it just science fiction? Could we actually succeed in creating artificial general intelligence?
What do we mean when we talk about artificial general intelligence (AGI), also known as “strong AI”? The simple definition is “a machine that could successfully perform any intellectual task that a human being can”. In practice, it should be able to communicate with us in natural language without us noticing that it’s a machine (see the Turing test). Alternatively, it should be able to enrol in a university, take and pass classes, and obtain a degree.
As Ben Goertzel puts it, we’re talking about systems that can "sense their environment and then act on it, and can gather, manipulate and modify knowledge about their own actions and perceptions”. This definition rules out machines that are good at just one specific task, like finding web pages or beating humans at chess. We have plenty of those already, and interestingly enough, it has already been nearly 20 years since IBM’s supercomputer Deep Blue beat chess champion Garry Kasparov.
How likely is it that we’ll be able to develop AGI? And how soon could it happen? Most predictions have so far failed notoriously, which has given rise to a lot of skepticism. But in 2015, prominent figures like Bill Gates, Elon Musk, and Stephen Hawking voiced concern about artificial intelligence, warning that it could potentially be more dangerous than nuclear weapons. If people like these take artificial intelligence seriously, why shouldn’t the rest of us?
Nick Bostrom at the University of Oxford, known for his book Superintelligence, has recently surveyed some of the world’s leading AI experts. They were asked by which year they think there is a 50% probability that we will have achieved human-level machine intelligence. The median answer was 2040 or 2050, depending on precisely which group of experts was asked. See Bostrom's interesting TED talk for more fascinating details.
Now, let’s assume that we are able to build – within our lifetime – a machine that mimics the functions of the human brain. Likewise, let’s assume that the human brain is nothing more than a biological computer inside the skull. A machine would obviously not have the size limitation of a skull, but could in theory be scaled up enormously. These assumptions raise many significant follow-up questions:
- Will the machine (or superintelligence) have a consciousness or feelings?
- How quickly will the superintelligence learn and become a lot more intelligent than humans?
- Will we ever need to invent anything else if the superintelligence does all the future inventing for us?
- Could the superintelligence become dangerous for us in the same way that humans are dangerous to ants?
There’s a lot of interesting debate going on around these questions. I’ll probably come back to each of them in future blog posts, but for now I’ll just write a few thoughts about the danger issue.
The idea of machines becoming much more intelligent than humans is undoubtedly a bit scary. It’s difficult to avoid seeing images of The Terminator in your mind. In theory, a superintelligence would be capable of recursive self-improvement: designing and building computers or robots more capable than itself, without human help. The rise of such a self-improving machine is called the technological singularity – an idea that traces back to mathematician Stanislaw Ulam and has been popularized by futurists like Vernor Vinge and Ray Kurzweil.
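To make the takeoff intuition a bit more concrete, here is a deliberately crude toy model in Python (my own sketch, not anything from the AI literature): assume that each redesign cycle improves capability by a factor that itself grows with the current capability. The parameter values are invented purely for illustration.

```python
# Toy model of recursive self-improvement (purely illustrative).
# Assumption: each redesign cycle multiplies capability by a factor that
# itself grows with the current capability; all numbers are invented.

def simulate_takeoff(initial_capability=1.0, improvement_rate=0.1, cycles=10):
    """Return the capability level after each self-improvement cycle."""
    capability = initial_capability
    history = [capability]
    for _ in range(cycles):
        # The smarter the system already is, the bigger the next jump.
        capability *= 1 + improvement_rate * capability
        history.append(capability)
    return history

if __name__ == "__main__":
    for cycle, level in enumerate(simulate_takeoff()):
        print(f"cycle {cycle:2d}: {level:8.2f}x starting capability")
```

Even with a modest improvement rate, the growth accelerates faster and faster once capability passes a threshold – which is essentially the intuition behind an “intelligence explosion”.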
The problem is that without proper initial rules about morality, about what is desirable and what is not, we could end up in a paperclip scenario. As described by Nick Bostrom, a paperclip maximizer is an AGI whose goal is to maximize the number of paperclips in its collection, and which is also capable of self-improvement. At some point things could get seriously out of hand: in its maximization efforts, the machine might convert most of the matter in the solar system into paperclips, and human beings would be long gone before that point.
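As a hypothetical illustration of the underlying problem – a misspecified objective, nothing more – here is a minimal Python sketch of an agent whose scoring function counts only paperclips. The actions and numbers are made up; the point is that harm never enters the objective, so the most destructive action wins.

```python
# Hypothetical paperclip maximizer: the objective counts paperclips and
# nothing else, so side effects never influence the choice of action.

ACTIONS = {
    # action:                     (paperclips_gained, harm_caused)
    "run the paperclip factory":  (1_000,             0),
    "strip-mine the biosphere":   (1_000_000,         9),
    "convert the solar system":   (10**20,            10),
}

def choose_action(actions):
    """Greedily pick whichever action yields the most paperclips."""
    return max(actions, key=lambda name: actions[name][0])

if __name__ == "__main__":
    best = choose_action(ACTIONS)
    clips, harm = ACTIONS[best]
    print(f"Chosen action: {best!r} ({clips} paperclips, harm level {harm})")
```

A real system would of course be vastly more complex, but the failure mode is the same: whatever the objective leaves out, the optimizer is free to sacrifice.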
I believe we should seriously start thinking about a moral framework for AGI as soon as possible. This should be done regardless of when (or if) we’ll be able to develop AGI. The tricky part is agreeing on which moral framework to use, since there are quite a few of them on offer. For an interesting discussion of this topic, listen to "Surviving the Cosmos", a conversation between philosopher Sam Harris and physicist David Deutsch.
Finally, I want to say that I’m one of the optimists regarding AGI. First of all, I think it's more likely than not that we will achieve human-level machine intelligence within our lifetime. The progress we're making in machine learning and computation, including quantum computation, supports this assumption. Furthermore, I don’t believe in a “Terminator future”, because I’m fairly convinced we’ll be able to lay a good foundation for AGI. The fact that more and more people are talking about this is encouraging.
I believe that machines will help us come up with a lot of new inventions in the future. AGI won’t be our final invention – at least not in the apocalyptic or utopian sense. Instead, it will be one of our greatest achievements ever.