Category Archives: AI

Eric Schmidt (Former Google CEO): MIT Innovation Fellow

Source: MIT, Feb 2018

Today, MIT President L. Rafael Reif announced that Eric Schmidt, who until January was the executive chairman of Google’s parent company, Alphabet, will join MIT as a visiting innovation fellow for one year, starting in the spring.

Schmidt will figure prominently in MIT’s plans to bring human and machine intelligence to the next level, serving as an advisor to the newly launched MIT Intelligence Quest, an Institute-wide initiative to pursue hard problems on the horizon of intelligence research.

“I am thrilled that Dr. Schmidt will be joining us,” says MIT President L. Rafael Reif. “As MIT IQ seeks to shape transformative new technologies to serve society, Eric’s brilliant strategic and tactical insight, organizational creativity, and exceptional technical judgment will be a tremendous asset. And for our students, his experience in driving some of the most important innovations of our time will serve as an example and an inspiration.”

In his role as a visiting innovation fellow, Schmidt will work directly with MIT scholars to explore the complex processes involved in taking innovation beyond invention to address urgent global problems. In addition, Schmidt will engage with the MIT community through events, lectures, and individual sessions with student entrepreneurs.

MIT’s Intelligence Quest

Source: TechCrunch, Feb 2018

This week, the school announced the launch of the MIT Intelligence Quest, an initiative aimed at leveraging its AI research into something it believes could be game-changing for the category. The school has divided its plan into two distinct categories: “The Core” and “The Bridge.”

“The Core is basically reverse-engineering human intelligence,” dean of the MIT School of Engineering Anantha Chandrakasan tells TechCrunch, “which will give us new insights into developing tools and algorithms, which we can apply to different disciplines. And at the same time, these new computer science techniques can help us with the understanding of the human brain. It’s very tightly linked between cognitive science, neuroscience and computer science.”

The Bridge, meanwhile, is designed to provide access to AI and ML tools across its various disciplines. That includes research from both MIT and other schools, made available to students and staff.

“Many of the products are moonshots,” explains James DiCarlo, head of the Department of Brain and Cognitive Sciences. “They involve teams of scientists and engineers working together. It’s essentially a new model and we need folks and resources behind that.”

Funding for the initiative will be provided by a combination of philanthropic donations and partnerships with corporations. But while the school has had blanket partnerships in the past, including, notably, the MIT-IBM Watson AI Lab, the goal here is not to become beholden to any single company. Ideally the school will be able to work alongside a broad range of companies to achieve its large-scale goals.

“Imagine if we can build machine intelligence that grows the way a human does,” adds professor of Cognitive Science and Computation, Josh Tenenbaum. “That starts like a baby and learns like a child. That’s the oldest idea in AI and it’s probably the best idea… But this is a thing we can only take on seriously now and only by combining the science and engineering of intelligence.”

Tim O’Reilly on AI

Source: Medium, Dec 2017

The superpower of technology is to augment humans so that they can do things that were previously impossible. If we use it to put people out of work, we are using it wrong.

The various ICOs and the blockchain fever are probably the worst, though. While blockchain is an incredibly powerful and important idea, and may well end up being world-changing, the fact that its principal application so far is currency speculation is truly disappointing.

Too many companies want to use the new superpowers of technology simply to do the same old things more cheaply, putting people out of work rather than putting them TO work. One of the big lessons I draw from tech platforms that have application to the broader economy is that the best platforms are inclusive, and create more value for their participants than they capture for the platform owner. Once platform growth slows, they tend to become extractive, and that’s when they begin to fail and the creativity goes out of them. This same pattern can be seen in the wider economy. In my book, …

An ICO for AI: SingularityNET

Source: Wired, Oct 2017

… fostering the emergence of human-level artificial intelligence on a decentralised, open-source platform

“SingularityNET’s idea is to create a distributed AI platform on the [Ethereum] blockchain, with each blockchain node backing up an AI algorithm,” Goertzel explains. AI researchers or developers would be able to make their AI products available to SingularityNET users, which would pay for services with network-specific crypto-tokens.

Initially, the plan is to have a system that provides visibility — and payment — to independent developers of AI programmes. 

“We want to create a system that learns on its own how to cobble together modules to carry out different functions. You’ll see a sort of federation of AIs emerge from the spontaneous interaction among the nodes, without human guidance,” he explains. “It’ll have AI inside each and every node, and between them, and they’ll learn how to combine their skills.”
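
Purely as a thought experiment, and not a description of SingularityNET’s actual stack, the composition idea might look like the toy sketch below: nodes register their skills in a shared registry, a job is served by chaining those skills, and each node is paid in tokens. Every name and price here is invented.

```python
# Toy sketch of a node federation: each node offers one AI "skill" and
# charges tokens for it; a job is served by chaining matching skills.
PRICE = 2  # hypothetical flat fee, in network tokens

registry = {}  # skill name -> function offered by some node

def node(skill):
    def register(fn):
        registry[skill] = fn
        return fn
    return register

@node("translate")
def translate(text):   # stand-in for a real model running behind a node
    return text.replace("hola", "hello")

@node("summarize")
def summarize(text):
    return text.split(".")[0] + "."

def run_pipeline(skills, text, wallet):
    # Pay each node as its module is invoked, and chain the outputs.
    for skill in skills:
        wallet -= PRICE
        text = registry[skill](text)
    return text, wallet

result, balance = run_pipeline(["translate", "summarize"],
                               "hola world. more text.", wallet=10)
print(result, balance)  # "hello world." 6
```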

The expected endgame is that these swarms of smart nodes would get as intertwined as clusters of neurons, eventually evolving into human-level AI. Goertzel admits that it might take decades for that to happen, but he is positive that the primary purpose of the SingularityNET project is bringing about “beneficial Artificial General Intelligence” (that is: human-level AI).

SingularityNET will sell 50 percent of its whole token trove, distributing the other half to its staff and to a foundation that will reinvest its tokens in charitable AI projects. Goertzel is optimistic about the sale, which he thinks could be appealing even to technology heavyweights.

“I have been working with Cisco, Huawei, and Intel, and I think we can pull in a lot of major customers who want to buy a lot of tokens to do AI analysis for their own purposes,” he says. “In general, though, this ICO will allow us to start with a bang. We’ll be competing with Google and Facebook…so having a war chest would allow us to take them on more easily.”

Is Deep Learning Sufficient?

Source: MIT Technology Review, Sep 2017

The peculiar thing about deep learning is just how old its key ideas are. Hinton’s breakthrough paper, with colleagues David Rumelhart and Ronald Williams, was published in 1986.

The paper elaborated on a technique called backpropagation, or backprop for short. Backprop, in the words of Jon Cohen, a computational psychologist at Princeton, is “what all of deep learning is based on—literally everything.”

Hinton’s breakthrough, in 1986, was to show that backpropagation could train a deep neural net, meaning one with more than two or three layers. But it took another 26 years before increasing computational power made good on the discovery. A 2012 paper by Hinton and two of his Toronto students showed that deep neural nets, trained using backpropagation, beat state-of-the-art systems in image recognition. “Deep learning” took off. To the outside world, AI seemed to wake up overnight. For Hinton, it was a payoff long overdue.

Backprop is remarkably simple, though it works best with huge amounts of data. 

The goal of backprop is to change the network’s weights (the strengths of its connections) so that they make the network work: so that when you pass in an image of a hot dog to the lowest layer, the topmost layer’s “hot dog” neuron ends up getting excited.

Backprop is a procedure for rejiggering the strength of every connection in the network so as to fix the error for a given training example. The technique is called “backpropagation” because you are “propagating” errors back (or down) through the network, starting from the output.
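
To make that concrete, here is a minimal sketch (my example, not the article’s): a tiny two-layer network, in plain NumPy, that learns XOR by propagating its output error back through its connections and nudging every weight against its gradient.

```python
import numpy as np

# Toy task: learn XOR with a tiny two-layer net trained by backpropagation.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: activity flows from the lowest layer to the top.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the output error is propagated back (down) through
    # the network, assigning each connection its share of the blame.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Rejigger every connection strength a little, against its gradient.
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```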

Neural nets can be thought of as trying to take things—images, words, recordings of someone talking, medical data—and put them into what mathematicians call a high-dimensional vector space, where the closeness or distance of the things reflects some important feature of the actual world. 

“Inside his head there’s some big pattern of neural activity.” Big patterns of neural activity, if you’re a mathematician, can be captured in a vector space, with each neuron’s activity corresponding to a number, and each number to a coordinate of a really big vector. In Hinton’s view, that’s what thought is: a dance of vectors.
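
As a toy illustration of that picture (the vectors below are made up, standing in for learned activations), closeness in the space can be measured with cosine similarity:

```python
import numpy as np

# Hypothetical 4-d "thought vectors" -- in a real net these would be
# learned activations with hundreds or thousands of coordinates.
vec = {
    "hot dog":   np.array([0.9, 0.1, 0.8, 0.0]),
    "hamburger": np.array([0.8, 0.2, 0.7, 0.1]),
    "symphony":  np.array([0.0, 0.9, 0.1, 0.8]),
}

def cosine(a, b):
    # Closeness in the vector space: 1.0 means pointing the same way.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(vec["hot dog"], vec["hamburger"]))  # high: similar things
print(cosine(vec["hot dog"], vec["symphony"]))   # low: dissimilar things
```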

Neural nets are just thoughtless fuzzy pattern recognizers, and as useful as fuzzy pattern recognizers can be—hence the rush to integrate them into just about every kind of software—they represent, at best, a limited brand of intelligence, one that is easily fooled. A deep neural net that recognizes images can be totally stymied when you change a single pixel, or add visual noise that’s imperceptible to a human. 
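
For a sketch of how such fooling can work (my illustration; the article names no specific attack), consider a fast-gradient-sign-style perturbation, which nudges every pixel by a tiny, humanly imperceptible amount in whichever direction most damages the classifier’s score:

```python
import numpy as np

# A toy linear "classifier": score = w @ x, and the sign of the score
# gives the predicted class.
rng = np.random.default_rng(1)
w = rng.normal(size=784)        # weights for a flattened 28x28 image
x = rng.uniform(size=784)       # the clean input image
print(np.sign(w @ x))           # the model's prediction on the clean image

# The gradient of the score with respect to the input is simply w, so
# step every pixel a tiny epsilon against it (fast-gradient-sign trick).
epsilon = 0.05                  # far below what a human would notice
x_adv = x - epsilon * np.sign(w)

print(np.sign(w @ x_adv))       # score drops by epsilon * sum|w|: usually flips
print(np.abs(x_adv - x).max())  # yet each pixel moved by at most 0.05
```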

Deep learning in some ways mimics what goes on in the human brain, but only in a shallow way—which perhaps explains why its intelligence can sometimes seem so shallow. Indeed, backprop wasn’t discovered by probing deep into the brain, decoding thought itself; it grew out of models of how animals learn by trial and error in old classical-conditioning experiments. And most of the big leaps that came about as it developed didn’t involve some new insight about neuroscience; they were technical improvements, reached by years of mathematics and engineering. What we know about intelligence is nothing against the vastness of what we still don’t know.

Hinton himself says, “Most conferences consist of making minor variations … as opposed to thinking hard and saying, ‘What is it about what we’re doing now that’s really deficient? What does it have difficulty with? Let’s focus on that.’”

It’s worth asking whether we’ve wrung nearly all we can out of backprop. If so, that might mean a plateau for progress in artificial intelligence.

Patience

If you want to see the next big thing, something that could form the basis of machines with a much more flexible intelligence, you should probably check out research that resembles what you would’ve found had you encountered backprop in the ’80s: smart people plugging away on ideas that don’t really work yet.

We make sense of new phenomena in terms of things we already understand.

A real intelligence doesn’t break when you slightly change the requirements of the problem it’s trying to solve. And the key part of Eyal Dechter’s thesis was his demonstration, in principle, of how you might get a computer to work that way: to fluidly apply what it already knows to new tasks, to quickly bootstrap its way from knowing almost nothing about a new domain to being an expert.

Essentially, it is a procedure he calls the “exploration–compression” algorithm. It gets a computer to function somewhat like a programmer who builds up a library of reusable, modular components on the way to building more and more complex programs. Without being told anything about a new domain, the computer tries to structure knowledge about it just by playing around, consolidating what it’s found, and playing around some more, the way a human child does.
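
Here is a toy rendition of that loop (my invention; the algorithm in the thesis is far more sophisticated): explore by sampling random compositions of known routines, keep the ones that solve the task, then compress by promoting the most common recurring fragment to a named, reusable library entry.

```python
import random
from collections import Counter

# Toy exploration-compression loop over string-editing primitives.
library = {
    "up": str.upper,
    "rev": lambda s: s[::-1],
    "dup": lambda s: s + s,
}

def explore(target="ABAB", n_programs=300, length=2):
    """Play around: run random compositions of library routines on "ab"
    and keep the programs that happen to solve the task."""
    solutions = []
    for _ in range(n_programs):
        prog = tuple(random.choice(list(library)) for _ in range(length))
        out = "ab"
        for name in prog:
            out = library[name](out)
        if out == target:
            solutions.append(prog)
    return solutions

def compress(solutions):
    """Consolidate: promote the most common adjacent pair of routines
    to a single named, reusable module."""
    pairs = Counter((p[i], p[i + 1])
                    for p in solutions for i in range(len(p) - 1))
    if pairs:
        (a, b), _ = pairs.most_common(1)[0]
        f, g = library[a], library[b]
        library[f"{a}+{b}"] = lambda s, f=f, g=g: g(f(s))

for _ in range(3):        # alternate playing and consolidating
    compress(explore())
print(sorted(library))    # the library has grown reusable modules
```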

As for Hinton, he is convinced that overcoming AI’s limitations involves building “a bridge between computer science and biology.” Backprop was, in this view, a triumph of biologically inspired computation; the idea initially came not from engineering but from psychology. So now Hinton is trying to pull off a similar trick.

Neural networks today are made of big flat layers, but in the human neocortex real neurons are arranged not just horizontally into layers but vertically into columns. Hinton thinks he knows what the columns are for—in vision, for instance, they’re crucial for our ability to recognize objects even as our viewpoint changes. So he’s building an artificial version—he calls them “capsules”—to test the theory. So far, it hasn’t panned out; the capsules haven’t dramatically improved his nets’ performance. But this was the same situation he’d been in with backprop for nearly 30 years.

“This thing just has to be right,” he says about the capsule theory, laughing at his own boldness. “And the fact that it doesn’t work is just a temporary annoyance.”
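
For a flavour of the idea, here is a loose, minimal sketch (sizes and data invented; it uses the publicly described “squash” nonlinearity and a simplified routing-by-agreement step, not Hinton’s exact architecture). A capsule outputs a vector whose direction encodes an entity’s pose and whose length encodes the probability that the entity is present:

```python
import numpy as np

def squash(v, eps=1e-9):
    # Capsule nonlinearity: keep the vector's direction (the entity's
    # pose), shrink its length into (0, 1) so it can act as a probability.
    norm2 = (v ** 2).sum()
    return (norm2 / (1.0 + norm2)) * v / np.sqrt(norm2 + eps)

rng = np.random.default_rng(0)
u = rng.normal(size=(3, 4))   # three lower-level capsule outputs, 4-d each

# Simplified routing-by-agreement: a higher capsule listens most to the
# lower capsules that agree with its current consensus prediction.
s = u.mean(axis=0)            # initial consensus
for _ in range(3):
    agreement = u @ s                                 # per-input agreement
    c = np.exp(agreement) / np.exp(agreement).sum()   # routing weights
    s = squash((c[:, None] * u).sum(axis=0))

print(np.linalg.norm(s))      # length ~ probability the entity is present
```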

Mindfire Foundation: Mission-1 in Davos (May 12–20, 2018)

Source: Mindfire website, 2017
(all expenses covered)

From May 12th through May 20th, 2018

We are starting our quest for true AI with a new approach, “Artificial Organisms”, which will define our inaugural mission and all our future missions. The 100 selected talents will form teams according to their skill sets and the given challenges.

Each team will be assigned a professional coach or a subject matter expert. No reporting and no hierarchies, just a shared sense of pursuit. Every day, the talents will have the opportunity to meet top AI researchers who will be there to support them, as mentors.

Talents

The imperative to advance true AI is now!

For that reason Mindfire is looking for the best talent out there. Your travel, accommodation and all planned recreational activities are fully funded by us. Mindfire Mission-1 will allow you to:

  • Work alongside 99 other bright minds and 15 eminent researchers.
  • Build a prototype to showcase your progress toward solving true AI.
  • Secure further funding and sponsorship for the best projects to continue their research.
  • Become a member of the exclusive Mindfire community with access to expert know-how.
  • Be rewarded with Mindfire tokens and profit from the Intellectual Property proceeds.

Eligibility criteria

You can only apply as a private individual, i.e., not affiliated with any organization or enterprise. You must currently be an undergraduate, master’s, or PhD student in science or engineering. You can also apply if you are an entrepreneur using AI within your business.

Be part of the movement and join us from May 12th through 20th, 2018 in Davos.

Generative Adversarial Networks

Source: O’Reilly, Sep 2017

Through a handful of generative techniques, it’s possible to feed a lot of images into a neural network and then ask for a brand-new image that resembles the ones it’s been shown. Generative AI has turned out to be remarkably good at imitating human creativity at superficial levels.

A generative adversarial network consists of two neural networks: a generator that learns to produce some kind of data (such as images) and a discriminator that learns to distinguish “fake” data created by the generator from “real” data samples (such as photos taken in the real world). The generator and the discriminator have opposing training objectives: the discriminator’s goal is to accurately classify real and fake data; the generator’s goal is to produce fake data the discriminator can’t distinguish from real data.
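
Those two opposing objectives translate directly into two loss terms. Here is a minimal sketch in PyTorch, with invented layer sizes and stand-in “real” data:

```python
import torch
import torch.nn as nn

# Minimal GAN: G maps noise to 2-d points, D scores real vs. fake.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.rand(64, 2)          # stand-in "real" samples
    fake = G(torch.randn(64, 8))      # generator's attempts

    # Discriminator's goal: classify real as 1, fake as 0.
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator's goal: make D call its fakes real (label 1).
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```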

Generative neural networks are convincing at reconstructing information thanks to their ability to understand information at multiple levels. It’s hard to overstate how remarkable these GAN-generated images of bedrooms are; not only do the sheets, carpets, and windows look convincing, but the high-level structures of the rooms are correct: the sheets are on the beds, the beds are on the floors, and the windows are on the walls.

Instead of detecting patterns and matching them to features in an image, the generator uses transpose convolution to identify fundamental image building-blocks and learns to assemble and blend these building-blocks into convincing images. For instance, our GAN generated a remarkably convincing image of the numeral 9. [Figure omitted.]
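
To see what that looks like in code, here is a sketch of a DCGAN-style generator (layer sizes are illustrative, not the article’s exact model): transposed convolutions repeatedly upsample a small random code into a full image.

```python
import torch
import torch.nn as nn

# Transposed convolutions upsample: each layer paints coarse feature
# maps onto a finer canvas, assembling building-blocks into an image.
generator = nn.Sequential(
    nn.ConvTranspose2d(100, 128, kernel_size=7),          # 1x1  -> 7x7
    nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 7x7  -> 14x14
    nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),    # 14x14 -> 28x28
    nn.Tanh(),                    # pixel values in [-1, 1]
)

z = torch.randn(1, 100, 1, 1)     # a random code
img = generator(z)
print(img.shape)                  # torch.Size([1, 1, 28, 28])
```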