Category Archives: AI

China’s AI Race

Source: MIT Tech Review, Feb 2018

China will have at least a 50/50 chance of winning the race, and there are several reasons for that.

First, China has a huge army of young people coming into AI. Over the past decade, the number of AI publications by Chinese authors has doubled. Young AI engineers from Face++, a Chinese face-recognition startup, recently won first place in three computer-vision challenges—ahead of teams from Google, Microsoft, Facebook, and Carnegie Mellon University.

Second, China has more data than the US—way more. Data is what makes AI go. A very good scientist with a ton of data will beat a super scientist with a modest amount of data. China has the most mobile phones and internet users in the world—triple the number in the United States. But the gap is even bigger than that because of the way people in China use their devices. People there carry no cash. They pay all their utility bills with their phones. They can do all their shopping on their phones. You get off work and open an app to order food. By the time you reach home, the food is right there, hot off the electric motorbike. In China, shared bicycles generate 30 terabytes of sensor data in their 50 million paid rides per day—that’s roughly 300 times the data being generated in the US.

Third, Chinese AI companies have passed the copycat phase. Fifteen years ago almost every decent startup in China was simply copying the functionality, look, and feel of products offered in the US. But all that copying taught eager Chinese entrepreneurs how to become good product managers, and now they’re on to the next stage: exceeding their overseas counterparts. Even today, Weibo is better than Twitter. WeChat delivers a way better experience than Facebook Messenger.

 

And fourth, government policies are accelerating AI in China. The Chinese government’s stated plan is to catch up with the US on AI technology and applications by 2020 and to become a global AI innovation hub by 2030. In a speech in October, President Xi Jinping encouraged further integration of the internet, big data, and artificial intelligence with the real-world economy. 

there are the symbiotic optimists, who think that AI combined with humans should be better than either one alone. This will be true for certain professions—doctors, lawyers—but most jobs won’t fall in that category. Instead they are routine, single-domain jobs where AI excels over the human by a large margin.

Using AI to Augment Intelligence

Source: Distill.pub, Dec 2017

This essay describes a new field, emerging today out of a synthesis of AI and IA. For this field, we suggest the name artificial intelligence augmentation (AIA): the use of AI systems to help develop new methods for intelligence augmentation. This new field introduces important new fundamental questions, questions not associated with either parent field.

Our essay begins with a survey of recent technical work hinting at artificial intelligence augmentation, including work on generative interfaces – that is, interfaces which can be used to explore and visualize generative machine learning models. Such interfaces develop a kind of cartography of generative models, ways for humans to explore and make meaning from those models, and to incorporate what those models “know” into their creative work.

We believe now is a good time to identify some of the broad, fundamental questions at the foundation of this emerging field.

  1. To what extent are these new tools enabling creativity?
  2. Can they be used to generate ideas which are truly surprising and new, or are the ideas cliches, based on trivial recombinations of existing ideas?
  3. Can such systems be used to develop fundamental new interface primitives?
  4. How will those new primitives change and expand the way humans think?

Scientific theories often greatly simplify the description of what appear to be complex phenomena, reducing large numbers of variables to just a few variables from which many aspects of system behavior can be deduced. Furthermore, good scientific theories sometimes enable us to generalize to discover new phenomena.

consider ordinary material objects. Such objects have what physicists call a phase – they may be a liquid, a solid, a gas, or perhaps something more exotic, like a superconductor or Bose-Einstein condensate. A priori, such systems seem immensely complex, involving perhaps 10^23 or so molecules. But the laws of thermodynamics and statistical mechanics enable us to find a simpler description, reducing that complexity to just a few variables (temperature, pressure, and so on), which encompass much of the behavior of the system.
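
As a minimal illustration of that reduction (a textbook example, not one drawn from the essay itself), the ideal gas law summarizes the behavior of roughly 10^23 molecules with just a few macroscopic quantities:

$$ PV = N k_B T $$

where $P$ is pressure, $V$ is volume, $T$ is temperature, $N$ is the number of molecules, and $k_B$ is Boltzmann’s constant.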

Furthermore, sometimes it’s possible to generalize, predicting unexpected new phases of matter. For example, in 1924, physicists used thermodynamics and statistical mechanics to predict a remarkable new phase of matter, Bose-Einstein condensation, in which a collection of atoms may all occupy identical quantum states, leading to surprising large-scale quantum interference effects. We’ll come back to this predictive ability in our later discussion of creativity and generative models.

The font tool is an example of a kind of cognitive technology. In particular, the primitive operations it contains can be internalized as part of how a user thinks. In this it resembles programs such as Photoshop, spreadsheets, or 3D graphics packages. Each provides a novel set of interface primitives, primitives which can be internalized by the user as fundamental new elements in their thinking. This act of internalization of new primitives is fundamental to much work on intelligence augmentation.

Using the same interface, we can use a generative model to manipulate images of human faces using qualities such as expression, gender, or hair color. Or to manipulate sentences using length, sarcasm, or tone. Or to manipulate molecules using chemical properties.

Such generative interfaces provide a kind of cartography of generative models, ways for humans to explore and make meaning using those models.
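
As a rough sketch of the mechanism behind such interfaces (assuming the common attribute-vector approach; the latents, labels, and attribute names below are hypothetical stand-ins, not the essay’s actual models), a quality such as “bold” or “smiling” can be treated as a direction in the model’s latent space and dialed up or down:

```python
# A minimal sketch of the attribute-vector idea behind generative interfaces.
# The latents and labels here are random stand-ins; a real system would get
# them from an encoder, and a decoder (not shown) would turn the adjusted
# latent back into an image, font, sentence, or molecule.
import numpy as np

def attribute_direction(latents, has_attribute):
    """Difference of mean latent vectors with and without an attribute
    (e.g. 'bold' for fonts, 'smiling' for faces)."""
    return latents[has_attribute].mean(axis=0) - latents[~has_attribute].mean(axis=0)

def adjust(z, direction, strength):
    """Move a latent vector along the attribute direction."""
    return z + strength * direction

# Toy usage with random stand-in latents:
rng = np.random.default_rng(0)
latents = rng.normal(size=(100, 64))          # 100 items, 64-dim latent space
labels = rng.random(100) > 0.5                # which items have the attribute
d = attribute_direction(latents, labels)
z_new = adjust(latents[0], d, strength=1.5)   # exaggerate the attribute
```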

Two models of computation

a model of a computer as a way of outsourcing cognition. In speculative depictions of possible future AI, this cognitive outsourcing model often shows up in the view of an AI as an oracle, able to solve some large class of problems with better-than-human performance.

a very different conception of what computers are for is possible, a conception much more congruent with work on intelligence augmentation.

To understand this alternate view, consider our subjective experience of thought. For many people, that experience is verbal: they think using language, forming chains of words in their heads, similar to sentences in speech or written on a page. For other people, thinking is a more visual experience, incorporating representations such as graphs and maps. Still other people mix mathematics into their thinking, using algebraic expressions or diagrammatic techniques, such as Feynman diagrams and Penrose diagrams.

In each case, we’re thinking using representations invented by other people: words, graphs, maps, algebra, mathematical diagrams, and so on. We internalize these cognitive technologies as we grow up, and come to use them as a kind of substrate for our thinking.

For most of history, the range of available cognitive technologies has changed slowly and incrementally. A new word will be introduced, or a new mathematical symbol. More rarely, a radical new cognitive technology will be developed. For example, in 1637 Descartes published his “Discourse on Method”, explaining how to represent geometric ideas using algebra, and vice versa.
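
To give a standard illustration of that correspondence (not a figure from the essay), a circle of radius $r$ centered at the origin becomes the algebraic equation

$$ x^2 + y^2 = r^2, $$

so geometric questions about the circle can be answered by manipulating the equation, and algebraic facts can be read back as geometry.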

This enabled a radical change and expansion in how we think about both geometry and algebra.

Historically, lasting cognitive technologies have been invented only rarely.

It’s this kind of cognitive transformation model which underlies much of the deepest work on intelligence augmentation. Rather than outsourcing cognition, it’s about changing the operations and representations we use to think; it’s about changing the substrate of thought itself. And so while cognitive outsourcing is important, this cognitive transformation view offers a much more profound model of intelligence augmentation. It’s a view in which computers are a means to change and expand human thought itself.

AI systems can enable the creation of new cognitive technologies. … can be used to explore and discover, to provide new representations and operations, which can be internalized as part of the user’s own thinking. 

 the systems use machine learning to enable new primitives which can be integrated into the user’s thinking. 

Finding powerful new primitives of thought

We’ve argued that machine learning systems can help create representations and operations which serve as new primitives in human thought. What properties should we look for in such new primitives? 

In the 1940s, different formulations of the theory of quantum electrodynamics were developed independently by the physicists Julian Schwinger, Shin’ichirō Tomonaga, and Richard Feynman. In their work, Schwinger and Tomonaga used a conventional algebraic approach, along lines similar to the rest of physics. Feynman used a more radical approach, based on what are now known as Feynman diagrams, for depicting the interaction of light and matter.

breakthroughs in representation often appear strange at first. Is there any underlying reason why that is true?

Part of the reason is that if a representation is truly new, it will look different from anything you have seen before.

Good representations sharpen up such insights, eliding the familiar to show that which is new as vividly as possible. But because of that emphasis on unfamiliarity, the representation will seem strange: it shows relationships you’ve never seen before. In some sense, the task of the designer is to identify that core strangeness, and to amplify it as much as possible.

Strange representations are often difficult to understand. At first, physicists preferred Schwinger–Tomonaga to Feynman. But as Feynman’s approach slowly came to be understood, physicists realized that although the two formulations were mathematically equivalent, Feynman’s was more powerful.

Ideally, an interface will surface the deepest principles underlying a subject, revealing a new world to the user. When you learn such an interface, you internalize those principles, giving you more powerful ways of reasoning about that world. Those principles are the diffs in your understanding. 

our machine learning models will help us build interfaces which reify deep principles in ways meaningful to the user. For that to happen, the models have to discover deep principles about the world, recognize those principles, and then surface them as vividly as possible in an interface, in a way comprehensible by the user.

Do these interfaces inhibit creativity?

 helpful to identify two different modes of creativity. This two-mode model is over-simplified: creativity doesn’t fit so neatly into two distinct categories. Yet the model nonetheless clarifies the role of new interfaces in creative work.

The first mode of creativity is the everyday creativity of a craftsperson engaged in their craft. Much of the work of a font designer, for example, consists of competent recombination of the best existing practices. Such work typically involves many creative choices to meet the intended design goals, but not developing key new underlying principles.

For such work, the generative interfaces we’ve been discussing are promising. While they currently have many limitations, future research will identify and fix many deficiencies.

The second mode of creativity aims toward developing new principles that fundamentally change the range of creative expression. One sees this in the work of artists such as Picasso or Monet, who violated existing principles of painting, developing new principles which enabled people to see in new ways.

Is it possible to do such creative work, while using a generative interface? Don’t such interfaces constrain us to the space of natural images, or natural fonts, and thus actively prevent us from exploring the most interesting new directions in creative work?

 In a sufficiently powerful generative model, the generalizations discovered by the model may contain ideas going beyond what humans have discovered. In that case, exploration of the latent space may enable us to discover new fundamental principles. The model would have discovered stronger abstractions than human experts. 

Conclusion

At its deepest, interface design means developing the fundamental primitives human beings think and create with. 

The interface-oriented work we’ve discussed is outside the narrative used to judge most existing work in artificial intelligence. 

 a much more subjective and difficult-to-measure criterion: is it helping humans think and create in new ways?

This creates difficulties for doing this kind of work, particularly in a research setting. Where should one publish? What community does one belong to? What standards should be applied to judge such work? What distinguishes good work from bad?

The long-term test of success will be the development of tools which are widely used by creators.  … Are scientists in other fields using them to develop understanding in ways not otherwise possible? These are great aspirations, and require an approach that builds on conventional AI work, but also incorporates very different norms.

Eric Schmidt (Former Google CEO): MIT Innovation Fellow

Source: MIT, Feb 2018

Today, MIT President L. Rafael Reif announced that Eric Schmidt, who until January was the executive chairman of Google’s parent company, Alphabet, will join MIT as a visiting innovation fellow for one year, starting in Spring.

Schmidt will figure prominently in MIT’s plans to bring human and machine intelligence to the next level, serving as an advisor to the newly launched MIT Intelligence Quest, an Institute-wide initiative to pursue hard problems on the horizon of intelligence research.

“I am thrilled that Dr. Schmidt will be joining us,” says MIT President L. Rafael Reif. “As MIT IQ seeks to shape transformative new technologies to serve society, Eric’s brilliant strategic and tactical insight, organizational creativity, and exceptional technical judgment will be a tremendous asset. And for our students, his experience in driving some of the most important innovations of our time will serve as an example and an inspiration.”

In his role as a visiting innovation fellow, Schmidt will work directly with MIT scholars to explore the complex processes involved in taking innovation beyond invention to address urgent global problems. In addition, Schmidt will engage with the MIT community through events, lectures, and individual sessions with student entrepreneurs.

MIT’s Intelligence Quest

Source: TechCrunch, Feb 2018

This week, the school announced the launch of the MIT Intelligence Quest, an initiative aimed at leveraging its AI research into something it believes could be game-changing for the category. The school has divided its plan into two distinct categories: “The Core” and “The Bridge.”

“The Core is basically reverse-engineering human intelligence,” dean of the MIT School of Engineering Anantha Chandrakasan tells TechCrunch, “which will give us new insights into developing tools and algorithms, which we can apply to different disciplines. And at the same time, these new computer science techniques can help us with the understanding of the human brain. It’s very tightly linked between cognitive science, neuroscience, and computer science.”

The Bridge, meanwhile, is designed to provide access to AI and ML tools across its various disciplines. That includes research from both MIT and other schools, made available to students and staff.

“Many of the products are moonshots,” explains James DiCarlo, head of the Department of Brain and Cognitive Sciences. “They involve teams of scientists and engineers working together. It’s essentially a new model and we need folks and resources behind that.”

Funding for the initiative will be provided by a combination of philanthropic donations and partnerships with corporations. But while the school has had blanket partnerships in the past, including, notably, the MIT-IBM Watson AI Lab, the goal here is not to become beholden to any single company. Ideally the school will be able to work alongside a broad range of companies to achieve its large-scale goals.

“Imagine if we can build machine intelligence that grows the way a human does,” adds Josh Tenenbaum, professor of Cognitive Science and Computation. “That starts like a baby and learns like a child. That’s the oldest idea in AI and it’s probably the best idea… But this is a thing we can only take on seriously now and only by combining the science and engineering of intelligence.”

Tim O’Reilly on AI

Source: Medium, Dec 2017

The superpower of technology is to augment humans so that they can do things that were previously impossible. If we use it to put people out of work, we are using it wrong.

The various ICOs and the blockchain fever are probably the worst, though. While blockchain is an incredibly powerful and important idea, and may well end up being world-changing, the fact that its principal application so far is currency speculation is truly disappointing.

too many companies want to use the new superpowers of technology simply to do the same old things more cheaply, putting people out of work rather than putting them TO work. One of the big lessons I draw from tech platforms that have application to the broader economy is that the best platforms are inclusive, and create more value for their participants than they capture for the platform owner. Once platform growth slows, they tend to become extractive, and that’s when they begin to fail and the creativity goes out of them. This same pattern can be seen in the wider economy. In my book …

An ICO for AI: SingularityNet

Source: Wired, Oct 2017

… fostering the emergence of human-level artificial intelligence on a decentralised, open-source platform

“SingularityNET’s idea is to create a distributed AI platform on the [Ethereum] blockchain, with each blockchain node backing up an AI algorithm,” Goertzel explains. AI researchers or developers would be able to make their AI products available to SingularityNET users, which would pay for services with network-specific crypto-tokens.

Initially, the plan is to have a system that provides visibility — and payment — to independent developers of AI programmes. 
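
As a purely illustrative sketch of that marketplace idea (none of the names below correspond to SingularityNET’s actual API or token mechanics), a registry where developers list AI services and callers pay per call in tokens might look like this:

```python
# Purely illustrative: a toy registry where developers offer AI services
# and callers pay for each call with network tokens. A sketch of the idea
# described above, not SingularityNET's actual design.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Service:
    provider: str
    price_tokens: float
    run: Callable                      # the AI routine being offered

@dataclass
class Marketplace:
    services: dict = field(default_factory=dict)
    balances: dict = field(default_factory=dict)

    def register(self, name: str, service: Service):
        self.services[name] = service

    def call(self, caller: str, name: str, payload):
        svc = self.services[name]
        # move tokens from the caller to the service provider, then run it
        self.balances[caller] = self.balances.get(caller, 0) - svc.price_tokens
        self.balances[svc.provider] = self.balances.get(svc.provider, 0) + svc.price_tokens
        return svc.run(payload)

market = Marketplace()
market.register("sentiment", Service("dev-42", 0.5, run=lambda text: "positive"))
print(market.call("alice", "sentiment", "I love this"))
```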

“We want to create a system that learns on its own how to cobble together modules to carry out different functions. You’ll see a sort of federation of AIs emerge from the spontaneous interaction among the nodes, without human guidance,” he explains. “It’ll have AI inside each and every node, and between them, and they’ll learn how to combine their skills.”

The expected endgame is that these swarms of smart nodes would get as intertwined as clusters of neurons, eventually evolving into human-level AI. Goertzel admits that it might take decades for that to happen, but he is positive that the primary purpose of the SingularityNET project is bringing about “beneficial Artificial General Intelligence” (that is: human-level AI).

SingularityNET will sell 50 percent of its whole token trove, distributing the other half to its staff and to a foundation that will reinvest them in charitable AI projects. Goertzel is optimistic about the sale, which he thinks could be appealing even to technology heavyweights.

“I have been working with Cisco, Huawei, and Intel, and I think we can pull in a lot of major customers who want to buy a lot of tokens to do AI analysis for their own purposes,” he says. “In general, though, this ICO will allow us to start with a bang. We’ll be competing with Google and Facebook…so having a war chest would allow us to take them on more easily.”

Is Deep Learning Sufficient?

Source: MIT Technology Review, Sep 2017

the peculiar thing about deep learning is just how old its key ideas are. Hinton’s breakthrough paper, with colleagues David Rumelhart and Ronald Williams, was published in 1986.

The paper elaborated on a technique called backpropagation, or backprop for short. Backprop, in the words of Jon Cohen, a computational psychologist at Princeton, is “what all of deep learning is based on—literally everything.”

Hinton’s breakthrough, in 1986, was to show that backpropagation could train a deep neural net, meaning one with more than two or three layers. But it took another 26 years before increasing computational power made good on the discovery. A 2012 paper by Hinton and two of his Toronto students showed that deep neural nets, trained using backpropagation, beat state-of-the-art systems in image recognition. “Deep learning” took off. To the outside world, AI seemed to wake up overnight. For Hinton, it was a payoff long overdue.

Backprop is remarkably simple, though it works best with huge amounts of data. 

The goal of backprop is to change those weights so that they make the network work: so that when you pass in an image of a hot dog to the lowest layer, the topmost layer’s “hot dog” neuron ends up getting excited.

Backprop is a procedure for rejiggering the strength of every connection in the network so as to fix the error for a given training example.  The technique is called “backpropagation” because you are “propagating” errors back (or down) through the network, starting from the output.
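
Here is a minimal sketch of that procedure, assuming a toy two-layer network with sigmoid units and made-up data (an illustration of the idea, not code from the article):

```python
# Toy backpropagation: forward pass, compute the error at the top,
# propagate it back down, and nudge every weight to reduce it.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))            # 8 training examples, 4 input features
y = rng.integers(0, 2, size=(8, 1))    # binary labels ("hot dog" / "not hot dog")

W1 = rng.normal(scale=0.1, size=(4, 5))    # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(5, 1))    # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(1000):
    # forward pass
    h = sigmoid(X @ W1)                # hidden activations
    out = sigmoid(h @ W2)              # the "hot dog" neuron
    err = out - y                      # how wrong the top layer is

    # backward pass: propagate the error down, layer by layer
    grad_out = err * out * (1 - out)
    grad_W2 = h.T @ grad_out
    grad_h = grad_out @ W2.T * h * (1 - h)
    grad_W1 = X.T @ grad_h

    # adjust every connection strength to shrink the error
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
```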

Neural nets can be thought of as trying to take things—images, words, recordings of someone talking, medical data—and put them into what mathematicians call a high-dimensional vector space, where the closeness or distance of the things reflects some important feature of the actual world. 

“Inside his head there’s some big pattern of neural activity.” Big patterns of neural activity, if you’re a mathematician, can be captured in a vector space, with each neuron’s activity corresponding to a number, and each number to a coordinate of a really big vector. In Hinton’s view, that’s what thought is: a dance of vectors.
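
A toy illustration of that vector-space picture (the items and vectors below are made up, purely to show how closeness can encode similarity):

```python
# Items become vectors; distance between vectors reflects similarity.
import numpy as np

embedding = {
    "hot dog":   np.array([0.9, 0.1, 0.0]),
    "hamburger": np.array([0.8, 0.2, 0.1]),
    "umbrella":  np.array([0.0, 0.1, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embedding["hot dog"], embedding["hamburger"]))  # close, ~0.98
print(cosine(embedding["hot dog"], embedding["umbrella"]))   # far apart, ~0.01
```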

Neural nets are just thoughtless fuzzy pattern recognizers, and as useful as fuzzy pattern recognizers can be—hence the rush to integrate them into just about every kind of software—they represent, at best, a limited brand of intelligence, one that is easily fooled. A deep neural net that recognizes images can be totally stymied when you change a single pixel, or add visual noise that’s imperceptible to a human. 
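
A hedged sketch of the kind of imperceptible perturbation the article alludes to, in the spirit of fast-gradient-sign attacks (no particular model is assumed; the gradient is passed in as a hypothetical stand-in):

```python
# Nudge every pixel a tiny amount in the direction that most increases the
# classifier's error; the change is invisible to a human but can flip the
# network's prediction.
import numpy as np

def adversarial_example(image, grad_wrt_input, epsilon=0.01):
    return np.clip(image + epsilon * np.sign(grad_wrt_input), 0.0, 1.0)
```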

Deep learning in some ways mimics what goes on in the human brain, but only in a shallow way—which perhaps explains why its intelligence can sometimes seem so shallow. Indeed, backprop wasn’t discovered by probing deep into the brain, decoding thought itself; it grew out of models of how animals learn by trial and error in old classical-conditioning experiments. And most of the big leaps that came about as it developed didn’t involve some new insight about neuroscience; they were technical improvements, reached by years of mathematics and engineering. What we know about intelligence is nothing against the vastness of what we still don’t know.

Hinton himself says, “Most conferences consist of making minor variations … as opposed to thinking hard and saying, ‘What is it about what we’re doing now that’s really deficient? What does it have difficulty with? Let’s focus on that.’”

It’s worth asking whether we’ve wrung nearly all we can out of backprop. If so, that might mean a plateau for progress in artificial intelligence.

 

Patience

If you want to see the next big thing, something that could form the basis of machines with a much more flexible intelligence, you should probably check out research that resembles what you would’ve found had you encountered backprop in the ’80s: smart people plugging away on ideas that don’t really work yet.

We make sense of new phenomena in terms of things we already understand.

A real intelligence doesn’t break when you slightly change the requirements of the problem it’s trying to solve. And the key part of Eyal’s thesis was his demonstration, in principle, of how you might get a computer to work that way: to fluidly apply what it already knows to new tasks, to quickly bootstrap its way from knowing almost nothing about a new domain to being an expert.

Essentially, it is a procedure he calls the “exploration–compression” algorithm. It gets a computer to function somewhat like a programmer who builds up a library of reusable, modular components on the way to building more and more complex programs. Without being told anything about a new domain, the computer tries to structure knowledge about it just by playing around, consolidating what it’s found, and playing around some more, the way a human child does.
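
A self-contained toy sketch of that explore-then-compress loop (the string primitives, tasks, and compression heuristic below are invented for illustration; this is not Dechter’s actual system):

```python
# Exploration: search for short programs over a library of primitives that
# solve each task. Compression: fold recurring fragments of those solutions
# into new library entries, so later search gets easier.
from itertools import product
from collections import Counter

PRIMITIVES = {
    "rev": lambda s: s[::-1],
    "up":  lambda s: s.upper(),
    "dup": lambda s: s + s,
}
TASKS = [("ab", "BABA"), ("cd", "DCDC"), ("xy", "YX")]   # (input, expected output)

def run(program, s, library):
    for op in program:
        s = library[op](s)
    return s

def search(task, library, max_len=3):
    """Exploration: brute-force over short compositions of library ops."""
    inp, out = task
    for length in range(1, max_len + 1):
        for program in product(library, repeat=length):
            if run(program, inp, library) == out:
                return program
    return None

def compress(solutions, library):
    """Compression: any pair of ops shared by several solutions becomes
    a single reusable library entry."""
    pairs = Counter(pair for sol in solutions for pair in zip(sol, sol[1:]))
    for (a, b), count in pairs.items():
        if count >= 2:
            name = f"{a}+{b}"
            if name not in library:
                library[name] = (lambda f, g: (lambda s: g(f(s))))(library[a], library[b])

library = dict(PRIMITIVES)
for _ in range(2):                      # explore, compress, explore again
    solutions = [p for t in TASKS if (p := search(t, library)) is not None]
    compress(solutions, library)
print(sorted(library))                  # composite ops now sit alongside the primitives
```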

As for Hinton, he is convinced that overcoming AI’s limitations involves building “a bridge between computer science and biology.” Backprop was, in this view, a triumph of biologically inspired computation; the idea initially came not from engineering but from psychology. So now Hinton is trying to pull off a similar trick.

Neural networks today are made of big flat layers, but in the human neocortex real neurons are arranged not just horizontally into layers but vertically into columns. Hinton thinks he knows what the columns are for—in vision, for instance, they’re crucial for our ability to recognize objects even as our viewpoint changes. So he’s building an artificial version—he calls them “capsules”—to test the theory. So far, it hasn’t panned out; the capsules haven’t dramatically improved his nets’ performance. But this was the same situation he’d been in with backprop for nearly 30 years.

“This thing just has to be right,” he says about the capsule theory, laughing at his own boldness. “And the fact that it doesn’t work is just a temporary annoyance.”