Source: MIT Technology Review, Sep 2016
In early March 2016, AAAI (the Association for the Advancement of Artificial Intelligence) sent out an anonymous survey … posing the following question to 193 fellows:
“In his book, Nick Bostrom has defined Superintelligence as ‘an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.’ When do you think we will achieve Superintelligence?”
Over the next week or so, 80 fellows responded (a 41 percent response rate). In essence, 92.5 percent of the respondents indicated that superintelligence is beyond the foreseeable horizon, a reading also supported by the written comments the fellows shared.
Even though the survey was anonymous, 44 fellows chose to identify themselves, including Geoffrey Hinton (deep-learning luminary), Ed Feigenbaum (Stanford, Turing Award winner), Rodney Brooks (leading roboticist), and Peter Norvig (Google).
The respondents also shared several comments, including the following:
- “Way, way, way more than 25 years. Centuries most likely. But not never.”
- “We’re competing with millions of years’ evolution of the human brain. We can write single-purpose programs that can compete with humans, and sometimes excel, but the world is not neatly compartmentalized into single-problem questions.”
- “Nick Bostrom is a professional scare monger. His Institute’s role is to find existential threats to humanity. He sees them everywhere. I am tempted to refer to him as the ‘Donald Trump’ of AI.”
It’s possible that AI systems could collaborate with people to create a symbiotic superintelligence. That would be very different from the pernicious, autonomous kind envisioned by Professor Bostrom.