Category Archives: AI

Hassabis on AI

Source: The Verge, Jul 2017

In a review published in the journal Neuron today, Hassabis and three co-authors argue that the field of AI needs to reconnect to the world of neuroscience, and that it’s only by finding out more about natural intelligence that we can truly understand (and create) the artificial kind.

You’ve talked in the past, Demis, about how one of the biggest aims of DeepMind is to create AI that can help further scientific discovery, and act as a tool for increasing human ingenuity. 

One of the things the paper says AI needs is to understand the physical world as we do — to be placed in a room and be able to “interpret and reason about scenes in a humanlike way.” Researchers often talk about this sort of “embodied cognition,” and say we won’t be able to create general AI without it. Is that something you agree with?

Yeah, so, one of our big founding principles was that embodied cognition is key. It’s the idea that a system needs to be able to build its own knowledge from first principles — from its sensory and motor streams — and then create abstract knowledge from there.

If you want to do things like make connections between different domains, or if you want new knowledge to be discovered (the sort of thing we like to do in science), then these pre-programmed, specialized systems are not going to be enough. They’re going to be limited to the knowledge that can be put into them, so it’s hard for them to really discover new things or innovate or create. So for any task that requires innovation or invention or some flexibility — I think a general system will be the only one able to do that.

One bit of brain functionality that you mention as key to improving AI is imagination, and the ability to plan what will happen in the future.

If you look at things like imagination, it’s the idea that humans and some other animals rely on generative models of the world they’ve built up. They use these models to generate new trajectories and scenarios — counterfactual scenarios — in order to plan and assess [what will happen] before they carry out actions in the real world, which may have consequences or be costly in some way.

Imagination is a hugely powerful planning tool [for this]. You need to build a model of the world; you need to be able to use that model for planning; and you need to be able to project forward in time. So when you start breaking down what’s involved in imagination, you start getting clues as to what kind of capabilities and functions you’re going to need in order to have that overall capability.
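To make that breakdown (build a model, plan with it, project forward in time) concrete, here is a minimal sketch of rollout-based planning against an imagined world model. The `WorldModel` interface, the random continuation policy, and the reward signal are all illustrative assumptions, not a description of DeepMind’s systems.

```python
import random

class WorldModel:
    """Hypothetical learned generative model of the environment.

    predict(state, action) returns a sampled (next_state, reward); in a real
    system this would be a learned neural network, not a hand-written rule.
    """
    def predict(self, state, action):
        raise NotImplementedError

def plan_by_imagination(model, state, actions, horizon=5, rollouts=20):
    """Score each candidate action by imagining future trajectories.

    For every action we simulate `rollouts` trajectories of length `horizon`
    entirely inside the model (never touching the real environment) and keep
    the action with the highest average imagined return.
    """
    def rollout_return(first_action):
        total = 0.0
        for _ in range(rollouts):
            s, ret, a = state, 0.0, first_action
            for _ in range(horizon):
                s, r = model.predict(s, a)       # imagined transition
                ret += r
                a = random.choice(actions)       # simple random continuation
            total += ret
        return total / rollouts

    return max(actions, key=rollout_return)
```

The point of the sketch is only the shape of the computation: candidate actions are compared by their imagined consequences before anything is done in the real world.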


Erik on AI

Source: HBR, Jul 2017

An interview with Erik Brynjolfsson. He’s the director of the MIT Initiative on the Digital Economy and co-author, with Andrew McAfee, of the new HBR article, “The Business of Artificial Intelligence.”

Machines are taking over more and more tasks, and humans and machines are combining and teaming up on more and more tasks, but in particular, machines are not very good at very broad-scale creativity, you know. Being an entrepreneur, or writing a novel, or developing a new scientific theory or approach: those kinds of creativity are beyond what machines can do today, by and large.

AI Displaces Jobs

Source: NYTimes, Jun 2017

The Keynesian approach I have sketched out may be feasible in the United States and China, which will have enough successful A.I. businesses to fund welfare initiatives via taxes. But what about other countries?

They face two insurmountable problems. First, most of the money being made from artificial intelligence will go to the United States and China. A.I. is an industry in which strength begets strength: The more data you have, the better your product; the better your product, the more data you can collect; the more data you can collect, the more talent you can attract; the more talent you can attract, the better your product. It’s a virtuous circle, and the United States and China have already amassed the talent, market share and data to set it in motion.

For example, the Chinese speech-recognition company iFlytek and several Chinese face-recognition companies such as Megvii and SenseTime have become industry leaders, as measured by market capitalization. The United States is spearheading the development of autonomous vehicles, led by companies like Google, Tesla and Uber. As for the consumer internet market, seven American or Chinese companies — Google, Facebook, Microsoft, Amazon, Baidu, Alibaba and Tencent — are making extensive use of A.I. and expanding operations to other countries, essentially owning those A.I. markets. It seems American businesses will dominate in developed markets and some developing markets, while Chinese companies will win in most developing markets.

The other challenge for many countries that are not China or the United States is that their populations are increasing, especially in the developing world. While a large, growing population can be an economic asset (as in China and India in recent decades), in the age of A.I. it will be an economic liability because it will comprise mostly displaced workers, not productive ones.

So if most countries will not be able to tax ultra-profitable A.I. companies to subsidize their workers, what options will they have? I foresee only one: Unless they wish to plunge their people into poverty, they will be forced to negotiate with whichever country supplies most of their A.I. software — China or the United States — to essentially become that country’s economic dependent, taking in welfare subsidies in exchange for letting the “parent” nation’s A.I. companies continue to profit from the dependent country’s users. Such economic arrangements would reshape today’s geopolitical alliances.

One way or another, we are going to have to start thinking about how to minimize the looming A.I.-fueled gap between the haves and the have-nots, both within and between nations. Or to put the matter more optimistically: A.I. is presenting us with an opportunity to rethink economic inequality on a global scale. These challenges are too far-ranging in their effects for any nation to isolate itself from the rest of the world.

AI and the Future of Work

Source: Future of Life, Jun 2017

Vardi primarily studies current job automation, but he also worries that AI could eventually leave most humans unemployed. He explains, “The hope is that we’ll continue to create jobs for the vast majority of people. But if the situation arises that this is less and less the case, then we need to rethink: how do we make sure that everybody can make a living?”

The most common job in US states

 

Creativity Surpasses Automation

Source: MIT, Sep 2017

NPR: What do you see as the sector of the workforce that is least likely to change or least likely to disappear?

BRYNJOLFSSON: Well, there are three big categories that machines are really bad at. They’ve made tremendous advances, but, first off, they’re bad at doing creative work. Whether you’re an entrepreneur, or a scientist, or a novelist, I think you’re in pretty good shape doing that long-range creativity.

There’s probably no better time in history to be somebody with some real creative insights. And then the technology helps you leverage that to millions, or billions of people. People who can combine some creativity with an understanding of the digital world are especially well-positioned.

When Will AI Exceed Human Performance?

Source: arXiv, May 2017 (paper: https://arxiv.org/pdf/1705.08807.pdf)

When will a machine do your job better than you?

Today, we have an answer of sorts thanks to the work of Katja Grace at the Future of Humanity Institute at the University of Oxford and a few pals. To find out, these guys asked the experts. They surveyed the world’s leading researchers in artificial intelligence by asking them when they think intelligent machines will better humans in a wide range of tasks. And many of the answers are something of a surprise.

The experts predict that AI will outperform humans in the next 10 years in tasks such as translating languages (by 2024), writing high school essays (by 2026), and driving trucks (by 2027).

But many other tasks will take much longer for machines to master. AI won’t be better than humans at working in retail until 2031, able to write a bestselling book until 2049, or capable of working as a surgeon until 2053.

Age and expertise make no difference to the prediction, but origin does. While North American researchers expect AI to outperform humans at everything in 74 years, researchers from Asia expect it in just 30 years.

Related Readings:

  1. SSC, Jun 2017
  2. AI Impacts, Jun 2017

AlphaGo – Learning on its Own

Source: The Verge, May 2017

The version of AlphaGo that played Ke has been completely rearchitected — DeepMind calls it AlphaGo Master. “The main innovation in AlphaGo Master is that it’s become its own teacher,” says Dave Silver, DeepMind’s lead researcher on AlphaGo. “So [now] AlphaGo actually learns from its own searches to improve its neural networks, both the policy network and the value network, and this makes it learn in a much more general way. One of the things we’re most excited about is not just that it can play Go better but we hope that this’ll actually lead to technologies that are more generally applicable to other challenging domains.”

AlphaGo is composed of two networks: a policy network that selects the next move to play, and a value network that estimates the probability of winning. The policy network was initially based on millions of historical moves from actual games played by Go professionals. But AlphaGo Master goes much further by searching through the possible moves that could occur if a particular move is played, increasing its understanding of the potential fallout.
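As a rough picture of what “a policy network and a value network” means in code, here is a minimal PyTorch-style sketch. The flat board encoding and layer sizes are illustrative assumptions; AlphaGo’s real networks are much deeper convolutional models with far richer input features.

```python
import torch
import torch.nn as nn

BOARD = 19  # standard Go board; the single-plane encoding below is an assumption

class PolicyNet(nn.Module):
    """Maps a board encoding to a probability distribution over the next move."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(BOARD * BOARD, 256), nn.ReLU(),
            nn.Linear(256, BOARD * BOARD + 1),  # one logit per point, +1 for "pass"
        )
    def forward(self, board):
        return torch.softmax(self.net(board), dim=-1)

class ValueNet(nn.Module):
    """Maps a board encoding to an estimated probability of winning."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(BOARD * BOARD, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )
    def forward(self, board):
        return torch.sigmoid(self.net(board))
```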

“The original system played against itself millions of times, but it didn’t have this component of using the search,” Hassabis tells The Verge. “[AlphaGo Master is] using its own strength to improve its own predictions. So whereas in the previous version it was mostly about generating data, in this version it’s actually using the power of its own search function and its own abilities to improve one part of itself, the policy net.” Essentially, AlphaGo is now better at assessing why a particular move would be the strongest possible option.
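A hedged sketch of the “learn from its own searches” idea described above: the move probabilities the search settles on become training targets for the policy network, and the eventual game result becomes the target for the value network. The data format below is an assumption for illustration; it shows the general pattern, not DeepMind’s actual training code.

```python
import torch
import torch.nn.functional as F

def train_from_self_play(policy_net, value_net, games, optimizer):
    """One pass over self-play data in the 'learn from search' style.

    Each game record is assumed to be a list of (board, search_probs, outcome)
    tuples, where search_probs are the search-improved move probabilities and
    outcome is the final result (1.0 win / 0.0 loss) from the mover's view.
    """
    for game in games:
        for board, search_probs, outcome in game:
            pred_probs = policy_net(board)
            pred_value = value_net(board)
            # Pull the raw policy toward the stronger, search-improved policy...
            policy_loss = F.kl_div(pred_probs.clamp_min(1e-8).log(),
                                   search_probs, reduction="batchmean")
            # ...and pull the value estimate toward the actual game result.
            target = torch.as_tensor(float(outcome))
            value_loss = F.mse_loss(pred_value.squeeze(), target)
            optimizer.zero_grad()
            (policy_loss + value_loss).backward()
            optimizer.step()
```

In practice these updates would be batched over many positions at once; the per-position loop is only there to keep the pattern easy to read.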

“You have to kind of coax it to learn new knowledge or explore that new area of the domain, and there are various strategies to do that. You can use adversarial opponents that push you into exploring those spaces, and you can keep different varieties of the AlphaGo versions to play each other so there’s more variety in the player pool.”

“Another thing we did is when we assessed what kinds of positions we thought AlphaGo had a problem with, we looked at the self-play games and we identified games algorithmically — we wrote another algorithm to look at all those games and identify places where AlphaGo seemed to have this kind of problem. So we have a library of those sorts of positions, and we can test our new systems not only against each other in the self-play but against this database of known problematic positions, so then we could quantify the improvement against that.”
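The testing setup Hassabis describes, a library of known problem positions used to quantify improvement, could look roughly like the sketch below. The (board, acceptable_moves) format and the top-move criterion are assumptions made for illustration.

```python
def score_against_problem_positions(policy_net, positions):
    """Fraction of known problematic positions where the candidate model's
    top-ranked move is one of the moves judged acceptable for that position.

    `positions` is assumed to be a list of (board, acceptable_moves) pairs,
    with acceptable_moves a set of move indices.
    """
    solved = 0
    for board, acceptable_moves in positions:
        top_move = int(policy_net(board).argmax(dim=-1).item())
        if top_move in acceptable_moves:
            solved += 1
    return solved / len(positions) if positions else 0.0
```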