Category Archives: AI

Humans are Still Better @ Creativity

Source: Business Insider, Jan 2017

McKinsey’s new report on the future of automation notes that humans are better than robots at: spotting new patterns, logical reasoning, creativity, coordination between multiple agents, natural language understanding, identifying social and emotional states, responding to social and emotional states, displaying social and emotional states, and moving around diverse environments.


Garry Kasparov on AI

Source: Chessbase, Dec 2016

As Sam Harris puts it to Kasparov: “You will go down in history as the first person to be beaten by a machine in an intellectual pursuit where you were the most advanced member of our species. You will have a special place in history, even if that history is written by robot overlords.”

Harris initiates the discussion with an important point: “Chess is this quintessential intellectual activity, but it is actually a fairly simple one, similar to the way that music and mathematics can be simple. This is one of the reasons why you have child prodigies in these areas, and you don’t have child prodigies in novel writing or political debate or other areas that are different in an intellectual sense. This is one of the reasons why chess was one of the first things to fall to Artificial Intelligence.”

Kasparov says [at 1:17:40] that after he beat the computer in 1996 and then lost to it in 1997 he was quite upset that IBM didn’t want to play a rubber match [the decider]. “It’s a painful story, since I will be entering history as the chess champion who represented humanity in an intellectual pursuit and was beaten by the machine. But the reason I wrote the book is not to settle old scores or give my version of the match, but to say that we should not be paralyzed by a dystopian vision of the future – worrying about killer AI and super-intelligent robots, which is like worrying about overcrowding on Mars.”

Even more remarkable is that Kasparov [1:19:10] has had a change of heart: “While writing the book I did a lot of research – analysing the games with modern computers, also soul-searching – and I changed my conclusions. I am not writing any love letters to IBM, but my respect for the Deep Blue team went up, and my opinion of my own play, and Deep Blue’s play, went down. [1:21:55] Today you can buy a chess engine for your laptop that will beat Deep Blue quite easily.”

Kasparov concedes that he would not stand a chance against today’s computers. He says [1:22:25]: “The problem humans face is that we are not consistent, and we cannot play under great pressure. Our games are marked by good and bad moves – not blunders, just inaccuracies. They remain unnoticed in human chess, but are very damaging when you are facing a machine.” He has a very interesting analogy: 90% accuracy is good enough for translating a news article, but 90% accuracy for driving a car, or even 99%, is a bad day on the road.

So competing with computers in chess is “about our ability to play high-quality moves for many hours. Human psychology works against us. If I have a computer, even a very weak one, at my side, the tables could be turned, and I or some strong GM would be able to beat a very powerful computer, because I can guide the machine and definitely eliminate blunders, the very root of human weakness when facing the computer. That is why I am promoting the idea of combining our forces.”

Kasparov is referring to Advanced and Freestyle Chess, where humans are allowed to use computers during their games, a form of play he invented and promoted. “The future belongs to human and computer cooperation,” he believes, “man plus machine decision making. We are entering a new era, and there is nothing definite about it – the outcome is not already decided. In the last few decades we have moved from utopian sci-fi to dystopian sci-fi, with machines like those in The Matrix and Terminator. It could be, but it very much depends on us, on our attitude and our ability to come up with new ideas. It’s up to us to prove that we are not redundant.”

Using AI to Identify Candidate Drugs

Source: MIT Technology Review, Nov 2016
(original arXiv research report)

The AI program could help the search for new drug compounds. Pharmaceutical research tends to rely on software that exhaustively crawls through giant pools of candidate molecules using rules written by chemists, and simulations that try to identify or predict useful structures. The former relies on humans thinking of everything, while the latter is limited by the accuracy of simulations and the computing power required.

Aspuru-Guzik’s system can dream up structures more independently of humans and without lengthy simulations. It leverages its own experience, built up by training machine-learning algorithms with data on hundreds of thousands of drug-like molecules.

Generative models are more typically used to create images, speech, or text, for example in the case of Google’s Smart Reply feature that suggests responses to e-mails. But last month Aspuru-Guzik and colleagues at Harvard, the University of Toronto, and the University of Cambridge published results from creating a generative model trained on 250,000 drug-like molecules.

The system could generate plausible new structures by combining properties of existing drug compounds, and could be asked to suggest molecules that strongly display certain properties, such as solubility and ease of synthesis.
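As a rough illustration of how such a generative model can be queried, here is a minimal sketch. The `encode`, `decode` and `score` functions are hypothetical stand-ins for a trained model and a property predictor (none of these names come from the paper); the idea is simply to search the continuous space around a known molecule for higher-scoring neighbours.

```python
import numpy as np

def propose_candidates(encode, decode, score, seed_smiles,
                       n_samples=200, noise=0.1, keep=10, seed=0):
    """Sample latent-space neighbours of a known molecule and keep the
    best-scoring decoded structures. encode/decode/score stand in for a
    trained generative model and a property predictor (hypothetical)."""
    rng = np.random.default_rng(seed)
    z = np.asarray(encode(seed_smiles))            # continuous latent vector
    candidates = []
    for _ in range(n_samples):
        z_new = z + noise * rng.standard_normal(z.shape)
        candidates.append((score(z_new), decode(z_new)))  # decoded string may be chemically invalid
    candidates.sort(key=lambda pair: pair[0], reverse=True)
    return candidates[:keep]
```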

The researchers have already experimented with training their system on a database of organic LED molecules, which are important for displays. But making the technique into a practical tool will require improving its chemistry skills, because the structures it suggests are sometimes nonsensical.

Pande says one challenge for asking software to learn chemistry may be that researchers have not yet identified the best data format to use to feed chemical structures into deep-learning software. Images, speech, and text have proven to be a good fit—as evidenced by software that rivals humans at image and speech recognition and translation—but existing ways of encoding chemical structures may not be quite right.
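One common text format for chemical structures is the SMILES string, which can be fed to a network much the way characters of ordinary text are. Below is a minimal, self-contained sketch of one way to do that; the character vocabulary and length cap are arbitrary choices for illustration, not anything prescribed by the researchers.

```python
import numpy as np

# Treat a molecule's SMILES string as text and one-hot encode its characters.
VOCAB = sorted(set("CNOPSFclBrI()[]=#@+-1234567890"))
CHAR_TO_INDEX = {ch: i for i, ch in enumerate(VOCAB)}

def smiles_to_one_hot(smiles, max_len=60):
    """Encode a SMILES string as a (max_len, vocab_size) one-hot matrix."""
    matrix = np.zeros((max_len, len(VOCAB)), dtype=np.float32)
    for position, char in enumerate(smiles[:max_len]):
        matrix[position, CHAR_TO_INDEX[char]] = 1.0
    return matrix

aspirin = "CC(=O)OC1=CC=CC=C1C(=O)O"        # acetylsalicylic acid
print(smiles_to_one_hot(aspirin).shape)     # (60, vocab_size)
```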

… giving his system more data, to broaden its chemistry knowledge, will improve its power, in the same way that databases of millions of photos have helped image recognition become useful. The American Chemical Society’s database records around 100 million published chemical structures. Before long, Aspuru-Guzik hopes to feed all of them to a version of his AI program.

Training an AI

Source: Singularity Hub, Dec 2016

The non-profit institute OpenAI unveiled a virtual world for AI to explore and play in. Dubbed Universe, the project has a goal as vast as its name: to train a single AI to be proficient at any task a human can do with a computer.

By teaching individual AI “agents” to excel at a variety of real-world tasks, OpenAI hopes to lead us one step closer to truly intelligent bots — those equipped with the flexible reasoning skills we humans possess.

AIs are still only good at what they’re trained to do. Ask AlphaGo to play chess, and the program likely returns the machine equivalent of utter bewilderment, even after you explain the rules in great detail.

As of now, our AI systems are ultra-efficient one-trick ponies. The way they’re trained is partly at fault: researchers generally initialize a blank slate AI, put it through millions of trials until it masters one task and call it quits. The AI never experiences anything else, so how would it know how to solve any other problem?

To get to general intelligence — a human-like ability to use previous experiences to tackle new problems — AIs need to carry their experiences into a variety of new tasks. This is where Universe comes in. By experiencing a world full of different scenarios, OpenAI researchers reason, AIs may finally be able to develop world knowledge and flexible problem solving skills that allow them to “think,” rather than forever getting stuck in a singular loop.

A whole new world

In a nutshell, Universe is a powerful platform that encompasses thousands of environments and provides a common way for researchers to train their AI agents.

As a software platform, Universe provides a stage to run other software, and each of these programs contributes a different environment — Atari and Flash games, apps and websites, for example, are already supported.
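To make that concrete, here is a minimal sketch along the lines of the example interface OpenAI published for Universe: environments are exposed through the Gym API, run as remote (typically Docker-based) processes, and are driven with keyboard and mouse events. Treat the environment id and the vectorized action format as illustrative of the published example rather than a current, supported API.

```python
import gym
import universe  # importing universe registers its environments with gym

# Connect to one remote environment running a Flash racing game and
# hold the "up arrow" key down on every frame.
env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1)          # one remote, Docker-based environment
observation_n = env.reset()

while True:
    # Universe environments are vectorized: one action list per remote.
    action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
```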

Games form only a sliver of our interactions with the digital world, and Universe is already expanding beyond their limitations with a project dubbed the Mini World of Bits. Mini World of Bits is a collection of the different web-browser interactions we encounter while browsing the Internet: typing things into text boxes or selecting an option from a dropdown menu and clicking “submit.”

These tasks, although simple, form the foundation of how we tap into the treasure trove that is the web. Ultimately OpenAI envisions AIs that can fluidly navigate the web towards a goal — for example, booking a flight. In one of Universe’s environments, researchers can already give an AI a desired booking schedule and train it to browse for the flight on multiple airlines.

Universe is only set to grow larger. Microsoft’s Malmo, an AI platform that uses Minecraft as its testing ground, is about to be integrated into Universe, as are the popular protein-folding game fold.it, Android apps, HTML5 games and “really anything else people think of.”

AI (in summary)

Source: A VC website, Dec 2016

  1. Better algorithms. Research is constantly coming up with better ways to train models and machines.
  2. Better GPUs. The same chips that make graphics come alive on your screen are used to train models, and these chips are improving rapidly.
  3. More data. The Internet and humanity’s use of it has produced a massive data set to train machines with.
  4. Cloud services. Companies, such as our portfolio company Clarifai, are now offering cloud based services to developers which allow them to access artificial intelligence “as a service” instead of having to “roll your own”.

AI @ Google

Source: NYTimes, Dec 2016

The Google of the future, Pichai had said on several occasions, was going to be “A.I. first.” What that meant in theory was complicated and had invited much speculation. What it meant in practice, with any luck, was that soon the company’s products would no longer represent the fruits of traditional computer programming, exactly, but “machine learning.”

A rarefied department within the company, Google Brain, was founded five years ago on this very principle: that artificial “neural networks” that acquaint themselves with the world via trial and error, as toddlers do, might in turn develop something like human flexibility.

Since 2011, though, Google Brain has demonstrated that this approach to artificial intelligence could solve many problems that confounded decades of conventional efforts. Speech recognition didn’t work very well until Brain undertook an effort to revamp it; the application of machine learning made its performance on Google’s mobile platform, Android, almost as good as human transcription. The same was true of image recognition. Less than a year ago, Brain for the first time commenced with the gut renovation of an entire consumer product, and its momentous results were being celebrated tonight.

Google’s decision to reorganize itself around A.I. was the first major manifestation of what has become an industrywide machine-learning delirium.

What is at stake is not just one more piecemeal innovation but control over what very well could represent an entirely new computational platform: pervasive, ambient artificial intelligence.

When he has an opportunity to make careful distinctions, Pichai differentiates between the current applications of A.I. and the ultimate goal of “artificial general intelligence.” Artificial general intelligence will not involve dutiful adherence to explicit instructions, but instead will demonstrate a facility with the implicit, the interpretive.

There has always been another vision for A.I. — a dissenting view — in which the computers would learn from the ground up (from data) rather than from the top down (from rules). This notion dates to the early 1940s, when it occurred to researchers that the best model for flexible automated intelligence was the brain itself. A brain, after all, is just a bunch of widgets, called neurons, that either pass along an electrical charge to their neighbors or don’t. What’s important are less the individual neurons themselves than the manifold connections among them. This structure, in its simplicity, has afforded the brain a wealth of adaptive advantages. The brain can operate in circumstances in which information is poor or missing; it can withstand significant damage without total loss of control; it can store a huge amount of knowledge in a very efficient way; it can isolate distinct patterns but retain the messiness necessary to handle ambiguity.

Arrangements of simple artificial neurons could carry out basic logical functions. They could also, at least in theory, learn the way we do. With life experience, depending on a particular person’s trials and errors, the synaptic connections among pairs of neurons get stronger or weaker. An artificial neural network could do something similar, by gradually altering, on a guided trial-and-error basis, the numerical relationships among artificial neurons. It wouldn’t need to be preprogrammed with fixed rules. It would, instead, rewire itself to reflect patterns in the data it absorbed.

Minsky’s criticism of the Perceptron extended only to networks of one “layer,” i.e., one layer of artificial neurons between what’s fed to the machine and what you expect from it — and later in life, he expounded ideas very similar to contemporary deep learning. But Hinton already knew at the time that complex tasks could be carried out if you had recourse to multiple layers. The simplest description of a neural network is that it’s a machine that makes classifications or predictions based on its ability to discover patterns in data. With one layer, you could find only simple patterns; with more than one, you could look for patterns of patterns.
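A toy sketch of that claim (plain NumPy, not anything Google uses): the XOR function is a small “pattern of patterns” that a single layer cannot represent, but a two-layer network can learn by guided trial and error, gradually adjusting the numerical strength of every connection.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a pattern a single layer cannot capture, but two layers can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer: simple patterns
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer: patterns of patterns

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                            # guided trial and error
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    error = output - y
    # Backpropagate: nudge every connection in the direction that reduces the error.
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 1.0 * hidden.T @ grad_out; b2 -= 1.0 * grad_out.sum(axis=0)
    W1 -= 1.0 * X.T @ grad_hid;      b1 -= 1.0 * grad_hid.sum(axis=0)

print(np.round(output).ravel())   # typically [0. 1. 1. 0.] after training
```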

Each successive layer of the network looks for a pattern in the previous layer.

This parallels the way information is put together in increasingly abstract ways as it travels from the photoreceptors in the retina back and up through the visual cortex. At each conceptual step, detail that isn’t immediately relevant is thrown away. If several edges and circles come together to make a face, you don’t care exactly where the face is found in the visual field; you just care that it’s a face.
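One concrete mechanism by which image networks throw away exactly this kind of detail is pooling. A minimal NumPy sketch, offered only as an illustration of the idea:

```python
import numpy as np

def max_pool(feature_map, size=2):
    """Keep only the strongest response in each size x size block; the exact
    position of that response inside the block is deliberately thrown away."""
    h, w = feature_map.shape
    trimmed = feature_map[:h - h % size, :w - w % size]
    blocks = trimmed.reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))

# A "face detector" response map: that it fired matters more than exactly where.
responses = np.zeros((4, 4))
responses[1, 2] = 0.9
print(max_pool(responses))
# [[0.  0.9]
#  [0.  0. ]]
```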

These ideas remained popular, however, among philosophers and psychologists, who called the approach “connectionism” or “parallel distributed processing.” “This idea,” Hinton told me, “of a few people keeping a torch burning, it’s a nice myth. It was true within artificial intelligence. But within psychology lots of people believed in the approach but just couldn’t do it.”

An average brain has something on the order of 100 billion neurons. Each neuron is connected to up to 10,000 other neurons, which means that the number of synapses is between 100 trillion and 1,000 trillion. For a simple artificial neural network of the sort proposed in the 1940s, even attempting to replicate this was unimaginable. We’re still far from constructing a network of that size, but Google Brain’s investment allowed for the creation of artificial neural networks comparable to the brains of mice.

All they’re doing is shuffling information around in search of commonalities — basic patterns, at first, and then more complex ones — and for the moment, at least, the greatest danger is that the information we’re feeding them is biased in the first place.

Part of the reason there was so much resistance to these ideas in computer-science departments is that, because the output is just a prediction based on patterns of patterns, it’s not going to be perfect, and the machine will never be able to define for you what, exactly, a cat is. It just knows them when it sees them.

What the cat paper demonstrated was that a neural network with more than a billion “synaptic” connections — a hundred times larger than any publicized neural network to that point, yet still many orders of magnitude smaller than our brains — could observe raw, unlabeled data and pick out for itself a high-order human concept. The Brain researchers had shown the network millions of still frames from YouTube videos, and out of the welter of the pure sensorium the network had isolated a stable pattern any toddler or chipmunk would recognize without a moment’s hesitation as the face of a cat. The machine had not been programmed with the foreknowledge of a cat; it reached directly into the world and seized the idea for itself. (The researchers discovered this with the neural-network equivalent of something like an M.R.I., which showed them that a ghostly cat face caused the artificial neurons to “vote” with the greatest collective enthusiasm.)

Le and two colleagues finally demonstrated that neural networks might be configured to handle the structure of language. He drew upon an idea, called “word embeddings,” that had been around for more than 10 years. When you summarize images, you can divine a picture of what each stage of the summary looks like — an edge, a circle, etc. When you summarize language in a similar way, you essentially produce multidimensional maps of the distances, based on common usage, between one word and every single other word in the language. The machine is not “analyzing” the data the way that we might, with linguistic rules that identify some of them as nouns and others as verbs. Instead, it is shifting and twisting and warping the words around in the map. In two dimensions, you cannot make this map useful. You want, for example, “cat” to be in the rough vicinity of “dog,” but you also want “cat” to be near “tail” and near “supercilious” and near “meme,” because you want to try to capture all of the different relationships — both strong and weak — that the word “cat” has to other words. It can be related to all these other words simultaneously only if it is related to each of them in a different dimension. You can’t easily make a 160,000-dimensional map, but it turns out you can represent a language pretty well in a mere thousand or so dimensions — in other words, a universe in which each word is designated by a list of a thousand numbers. Le gave me a good-natured hard time for my continual requests for a mental picture of these maps. “Gideon,” he would say, with the blunt regular demurral of Bartleby, “I do not generally like trying to visualize thousand-dimensional vectors in three-dimensional space.”

Still, certain dimensions in the space, it turned out, did seem to represent legible human categories, like gender or relative size. If you took the thousand numbers that meant “king” and literally just subtracted the thousand numbers that meant “queen,” you got the same numerical result as if you subtracted the numbers for “woman” from the numbers for “man.” And if you took the entire space of the English language and the entire space of French, you could, at least in theory, train a network to learn how to take a sentence in one space and propose an equivalent in the other. You just had to give it millions and millions of English sentences as inputs on one side and their desired French outputs on the other, and over time it would recognize the relevant patterns in words the way that an image classifier recognized the relevant patterns in pixels. You could then give it a sentence in English and ask it to predict the best French analogue.
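A toy sketch of that arithmetic, with hand-written three-dimensional vectors chosen so the “gender” offset is shared (real embeddings are learned from text and have on the order of a thousand dimensions):

```python
import numpy as np

# Toy word vectors; in a real system these come out of training, not by hand.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.2, 0.1]),
    "man":   np.array([0.2, 0.8, 0.7]),
    "woman": np.array([0.2, 0.2, 0.7]),
}

def closest(target, exclude):
    """Return the vocabulary word whose vector lies nearest to `target`."""
    return min((w for w in vectors if w not in exclude),
               key=lambda w: np.linalg.norm(vectors[w] - target))

# king - man + woman lands nearest to queen, i.e. king - queen = man - woman.
analogy = vectors["king"] - vectors["man"] + vectors["woman"]
print(closest(analogy, exclude={"king", "man", "woman"}))   # queen
```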

The major difference between words and pixels, however, is that all of the pixels in an image are there at once, whereas words appear in a progression over time. You needed a way for the network to “hold in mind” the progression of a chronological sequence — the complete pathway from the first word to the last. In a period of about a week, in September 2014, three papers came out — one by Le and two others by academics in Canada and Germany — that at last provided all the theoretical tools necessary to do this sort of thing. That research allowed for open-ended projects like Brain’s Magenta, an investigation into how machines might generate art and music. It also cleared the way toward an instrumental task like machine translation. Hinton told me he thought at the time that this follow-up work would take at least five more years.
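A minimal sketch of what “holding the progression in mind” means mechanically: a recurrent network folds a sentence into a single state vector, one word at a time, so the final state depends on every word from first to last. The weights here are random and untrained; the 2014 papers used trained encoder-decoder networks of this general family, with gating and attention mechanisms this sketch omits.

```python
import numpy as np

def encode_sequence(word_vectors, W_state, W_input, bias):
    """Fold a sequence of word vectors into one state vector, word by word."""
    state = np.zeros(W_state.shape[0])
    for x in word_vectors:                              # words arrive in order
        state = np.tanh(W_state @ state + W_input @ x + bias)   # state reflects all words so far
    return state

rng = np.random.default_rng(0)
embedding_dim, state_dim, sentence_length = 8, 16, 5
sentence = rng.normal(size=(sentence_length, embedding_dim))    # stand-in word vectors
W_state = 0.1 * rng.normal(size=(state_dim, state_dim))
W_input = 0.1 * rng.normal(size=(state_dim, embedding_dim))
print(encode_sequence(sentence, W_state, W_input, np.zeros(state_dim)).shape)  # (16,)
```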

And in the more distant, speculative future, machine translation was perhaps the first step toward a general computational facility with human language. This would represent a major inflection point — perhaps the major inflection point — in the development of something that felt like true artificial intelligence.

The most important thing happening in Silicon Valley right now is not disruption. Rather, it’s institution-building — and the consolidation of power — on a scale and at a pace that are both probably unprecedented in human history. Brain has interns; it has residents; it has “ninja” classes to train people in other departments.

Celebrate Machines Taking Over our Jobs (?)

Source: Heleo, Dec 2016

Traditionally the debate has been framed as a choice between “The machines are coming to take our jobs and this is terrible” and “The threat is overrated. The machines aren’t going to take them. People are going to invent new jobs.” You introduced this third possibility: the machines are going to take our jobs… and we might want to celebrate that.

Derek: My concern about universal basic income is precisely that it takes what work represents now—which is community, income, and meaning—and replaces only a single strand of it, the income, leaving the other two strands alone. Where does meaning come from? Where does community come from?

Maybe it comes from video games. I don’t like video games—that suggests to me that there are probably other people that don’t really like video games. Where are they going to get their meaning? Are they going to make art? Are we going to have a flourishing of artisanal shops like we had in colonial America? Maybe, but that’s the challenge. Not “How do you replace the money?” We’re three times richer than we were in the 1980s, and we were plenty rich under Reagan. The question is: how do you replace meaning? Maybe art plays a role, but universal basic income doesn’t cover it.