From smart to supersmart

Machines could achieve human levels of intelligence by 2029 — but then what happens?

In 1997, a defining moment for computer intelligence occurred when IBM's Deep Blue supercomputer beat world chess champion Garry Kasparov. Nearly two decades later, a new defining moment arose when Google's AlphaGo defeated Lee Sedol, the world champion of Go, a game considered far more complex than chess.

Artificial intelligence (AI) research is speeding up, and futurologist Raymond Kurzweil expects machines to achieve human levels of intelligence by 2029, reports Business Insider.

Then, by 2040, we could see the advent of superintelligence, which Oxford philosopher Nick Bostrom defines as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” But it might not even take that long: famed entrepreneur Elon Musk predicts that once AI attains human level, it may be only a matter of days before a machine becomes “smarter than the sum of humanity.”

Things could move very fast, then, and perhaps get a little out of hand. Demis Hassabis, CEO of Google DeepMind, has even predicted that in 50 to 100 years, a kid in a garage could create a seed AI. That's why, at the Future of Life Institute's recent Beneficial AI conference, Hassabis called on technology companies and leaders in AI research to be transparent about their advances, to coordinate with one another and, if need be, to “slow down ... at the end.” This would give society a chance to adapt to superintelligence gradually, while giving scientists the opportunity to carry out further research that could mitigate the risks of developing harmful AI.

The conference concluded with Hassabis, Musk and Cambridge theoretical physicist Stephen Hawking signing a set of 23 principles for the development of AI that is safe, ethical and beneficial to humanity. Principle 1, for example, reads: “The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.” And No. 21: “Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.”