Category Archives: AI

Artificial Intelligence, Machine Learning, and Deep Learning

Source:  Open Data Science, Mar 2017

AI has exploded over the past few years, especially since 2015. Much of that has to do with the wide availability of GPUs, which make parallel processing ever faster, cheaper, and more powerful. It also has to do with the simultaneous one-two punch of practically infinite storage and a flood of data of every stripe (the whole Big Data movement): images, text, transactions, mapping data, you name it.

Robots Head to Wall Street

Source: NYTimes, Feb 2016

Within a decade, Nadler said, between a third and a half of the current employees in finance will lose their jobs to Kensho and other automation software. It began with the lower-paid clerks, many of whom became unnecessary when stock tickers and trading tickets went electronic. It has moved on to research and analysis, as software like Kensho has become capable of parsing enormous data sets far more quickly and reliably than humans ever could.

The next ‘‘tranche,’’ as Nadler puts it, will come from the employees who deal with clients: Soon, sophisticated interfaces will mean that clients no longer feel they need or even want to work through a human being.

If jobs can be displaced at Goldman, they can probably be displaced even more quickly at other, less sophisticated companies, within the financial industry as well as without.

Antony Jenkins, who was dismissed a few months earlier as chief executive of Barclays, the giant British bank, gave a speech in which he said a coming series of ‘‘Uber moments’’ would hit the financial industry.

‘‘I predict that the number of branches and people employed in the financial-services sector may decline by as much as 50 percent,’’ Jenkins told the audience. ‘‘Even in a less-harsh scenario, I expect a decline of at least 20 percent.’’

AI and Human Curiosity

Source:  HBR, Apr 2017

Curiosity has been hailed as one of the most critical competencies for the modern workplace. It’s been shown to boost people’s employability. Countries with higher curiosity enjoy more economic and political freedom, as well as higher GDPs. It is therefore not surprising that, as future jobs become less predictable, a growing number of organizations will hire individuals based on what they could learn, rather than on what they already know.

Since no skill can be learned without a minimum level of interest, curiosity may be considered one of the critical foundations of talent. As Albert Einstein famously noted, “I have no special talent. I am only passionately curious.”

Curiosity is only made more important for people’s careers by the growing automation of jobs. At this year’s World Economic Forum, ManpowerGroup predicted that learnability, the desire to adapt one’s skill set to remain employable throughout one’s working life, is a key antidote to automation. Those who are more willing and able to upskill and develop new expertise are less likely to see their jobs automated.

AI is constrained in what it can learn. Its focus and scope are very narrow compared to that of a human, and its insatiable learning appetite applies only to extrinsic directives — learn X, Y, or Z. This is in stark contrast to AI’s inability to self-direct or be intrinsically curious. In that sense, artificial curiosity is the exact opposite of human curiosity; people are rarely curious about something because they are told to be. Yet this is arguably the biggest downside to human curiosity: It is free-flowing and capricious, so we cannot boost it at will, either in ourselves or in others.

Computers can constantly learn and test ideas faster than we can, so long as they have a clear set of instructions and a clearly defined goal. However, computers still lack the ability to venture into new problem domains and connect analogous problems, perhaps because of their inability to relate unrelated experiences. For instance, hiring algorithms can’t play checkers, and car-design algorithms can’t play computer games. In short, when it comes to performance, AI will have an edge over humans in a growing number of tasks, but the capacity to remain capriciously curious about anything, including random things, and to pursue one’s interests with passion may remain exclusively human.

Beyond Deep Learning

Source: VentureBeat, Apr 2017

At the recent AI By The Bay conference, Francois Chollet emphasized that deep learning is simply more powerful pattern recognition than previous statistical and machine learning methods. “The most important problem for AI today is abstraction and reasoning,” explains Chollet, an AI researcher at Google and creator of the widely used deep learning library Keras. “Current supervised perception and reinforcement learning algorithms require lots of data, are terrible at planning, and are only doing straightforward pattern recognition.”

What’s beyond deep learning? 

How can we overcome the limitations of deep learning and proceed toward general artificial intelligence? Chollet’s initial plan of attack involves using “super-human pattern recognition, like deep learning, to augment explicit search and formal systems,” starting with the field of mathematical proofs. Automated Theorem Provers (ATPs) typically use brute force search and quickly hit combinatorial explosions in practical use. In the DeepMath project, Chollet and his colleagues used deep learning to assist the proof search process, simulating a mathematician’s intuitions about what lemmas (a subsidiary or intermediate theorem in an argument or proof) might be relevant.
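The DeepMath idea can be sketched in a few lines: rank candidate lemmas by a learned relevance score and hand only the most promising ones to the prover, pruning the combinatorial search. This is a toy illustration with made-up lemma strings; a simple bag-of-words cosine similarity stands in for the trained neural ranking model, and `select_premises` for the interface to a real theorem prover.

```python
from collections import Counter
from math import sqrt

def features(statement):
    # Bag-of-words token counts; DeepMath used learned representations instead.
    return Counter(statement.split())

def relevance(goal, lemma):
    # Cosine similarity stands in for the trained relevance network.
    g, l = features(goal), features(lemma)
    dot = sum(g[t] * l[t] for t in g)
    norm = sqrt(sum(v * v for v in g.values())) * sqrt(sum(v * v for v in l.values()))
    return dot / norm if norm else 0.0

def select_premises(goal, lemmas, k=2):
    # Rank all known lemmas by predicted relevance and keep the top-k,
    # shrinking the prover's search space before brute-force search begins.
    return sorted(lemmas, key=lambda l: relevance(goal, l), reverse=True)[:k]

goal = "even n implies even ( n * n )"
lemmas = [
    "even a implies even ( a * b )",
    "prime p and p > 2 implies odd p",
    "even a and even b implies even ( a + b )",
]
print(select_premises(goal, lemmas, k=2))
```

The intuition-like behavior comes entirely from the scoring function: a better-trained scorer surfaces more useful lemmas, and the downstream search stays unchanged.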

Another approach is to develop more explainable models. In handwriting recognition, neural nets currently need to be trained on tens to hundreds of thousands of examples to perform decent classification. Instead of looking only at pixels, however, John Launchbury of DARPA explains that generative models can be taught the strokes behind any given character and can use this physical construction information to disambiguate between similar digits, such as a 9 and a 4.

Yann LeCun, inventor of convolutional neural networks (CNNs) and director of AI research at Facebook, proposes “energy-based models” as a method of overcoming limits in deep learning. Typically, a neural network is trained to produce a single output, such as an image label or sentence translation. LeCun’s energy-based models instead give an entire set of possible outputs, such as the many ways a sentence could be translated, along with scores for each configuration.
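The contrast with single-output training can be sketched directly: instead of emitting one translation, score every (source, candidate) configuration with an energy function, lower meaning more compatible. The energy here is a hand-written toy (a tiny made-up lexicon plus a length penalty); in a real energy-based model this scoring function is learned.

```python
# Toy French-to-English lexicon, purely illustrative.
LEXICON = {"le": "the", "chat": "cat", "est": "is", "noir": "black"}

def energy(source, candidate):
    # Lower energy = more compatible (source, candidate) pair.
    # A real energy-based model learns this function from data.
    expected = {LEXICON[w] for w in source.split() if w in LEXICON}
    cand = set(candidate.split())
    missing = len(expected - cand)                              # content words not covered
    length_gap = abs(len(source.split()) - len(candidate.split()))
    return missing + 0.5 * length_gap

def score_all(source, candidates):
    # Score the whole set of candidate outputs rather than emitting one.
    return sorted((energy(source, c), c) for c in candidates)

source = "le chat est noir"
candidates = [
    "the cat is black",
    "the cat is very black indeed",
    "black cat",
]
for e, c in score_all(source, candidates):
    print(e, c)
```

Keeping scores for every configuration is what distinguishes the approach: downstream systems can rerank, combine, or reject candidates instead of trusting a single forced output.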

Geoffrey Hinton, widely called the “father of deep learning,” wants to replace neurons in neural networks with “capsules” that he believes more accurately reflect the cortical structure of the human brain. “Evolution must have found an efficient way to adapt features that are early in a sensory pathway so that they are more helpful to features that are several stages later in the pathway,” Hinton explains. He hopes that capsule-based neural network architectures will be more resistant to the adversarial attacks described by Ian Goodfellow.

Perhaps all of these approaches to overcoming the limits of deep learning hold some truth. Perhaps none of them do. Only time and continued investment in AI research will tell.



Source: The New Yorker, Apr 2017

The most powerful element in these clinical encounters, I realized, was not knowing that or knowing how—not mastering the facts of the case, or perceiving the patterns they formed. It lay in yet a third realm of knowledge: knowing why.

Knowing why—asking why—is our conduit to every kind of explanation, and explanation, increasingly, is what powers medical advances.

“A deep-learning system doesn’t have any explanatory power,” as Hinton put it flatly. A black box cannot investigate cause. Indeed, he said, “the more powerful the deep-learning system becomes, the more opaque it can become. As more features are extracted, the diagnosis becomes increasingly accurate. Why these features were extracted out of millions of other features, however, remains an unanswerable question.” The algorithm can solve a case. It cannot build a case.

If more and more clinical practice were relegated to increasingly opaque learning machines, if the daily, spontaneous intimacy between implicit and explicit forms of knowledge—knowing how, knowing that, knowing why—began to fade, is it possible that we’d get better at doing what we do but less able to reconceive what we ought to be doing, to think outside the algorithmic black box?

The word “diagnosis,” he reminded me, comes from the Greek for “knowing apart.” Machine-learning algorithms will only become better at such knowing apart—at partitioning, at distinguishing moles from melanomas. But knowing, in all its dimensions, transcends those task-focussed algorithms. In the realm of medicine, perhaps the ultimate rewards come from knowing together.

AI Dances! Dance Dance Convolution

Source: The Verge, Mar 2017

Computer engineers have created a quicker way to generate step charts for any song — using the power of neural networks.

In a paper published this week (with the quite brilliant title Dance Dance Convolution), a trio of researchers from the University of California describe training a neural network to generate new step charts. Neural networks study data to analyze patterns and then create similar-looking outputs, and in this case, there was an abundant source of data in the form of fan-written step charts.
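The Dance Dance Convolution pipeline splits the problem in two: first decide at which moments in the audio a step should fall, then decide which arrows go on each step. Both stages used trained neural networks in the paper; in this toy sketch a threshold on a made-up per-frame onset-strength signal stands in for the placement model, and a random choice that avoids repeating the previous arrow stands in for the selection model.

```python
import random

random.seed(0)

ARROWS = ["left", "down", "up", "right"]

def place_steps(onset_strength, threshold=0.5):
    # Stage 1, "step placement": pick the audio frames that get a step.
    # The paper trained a neural net on audio features; a simple
    # threshold on a toy onset-strength signal stands in here.
    return [i for i, s in enumerate(onset_strength) if s >= threshold]

def select_steps(frames):
    # Stage 2, "step selection": choose an arrow for each placed step.
    # The paper conditioned a neural model on previous steps; here we
    # just avoid repeating the immediately preceding arrow.
    chart, prev = [], None
    for f in frames:
        arrow = random.choice([a for a in ARROWS if a != prev])
        chart.append((f, arrow))
        prev = arrow
    return chart

onsets = [0.1, 0.9, 0.2, 0.8, 0.7, 0.1]  # made-up per-frame onset strengths
chart = select_steps(place_steps(onsets))
print(chart)
```

The fan-written charts mentioned above supply the training signal for both stages in the real system: placement learns where charters put steps, selection learns which patterns they favor.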

The results are perfectly human-playable, but, as with many creative forays by artificial intelligence, professionals can still tell the difference. Speaking to The Register, step chart creator Fraxtil, who made many of the charts used to train the neural network, said, “It’s pretty easy to tell that its output is synthetic.”

“There’s a lot of creativity involved in step charting, mainly selective use of repetition and contrast, that the AI either can’t learn or can’t apply effectively,” said Fraxtil. But, they added, of all the attempts they’ve seen to auto-generate step charts, this one was by far “the most successful iteration.”


Source: FT, Mar 2017

With its cadre of researchers, from Bayesian mathematicians to cognitive neuroscientists, statisticians and computer scientists, DeepMind has amassed arguably the most formidable community of world-leading academics specialising in machine intelligence anywhere in the world.

“What we are trying to do is a unique cultural hybrid — the focus and energy you get from start-ups with the kind of blue-sky thinking you get from academia,” says Demis Hassabis, co-founder and chief executive. “We’ve hired 250 of the world’s best scientists, so obviously they’re here to let their creativity run riot, and we try and create an environment that’s perfect for that.”

DeepMind’s researchers have in common a clearly defined if lofty mission: to crack human intelligence and recreate it artificially.

Today, the goal is not just to create a powerful AI to play games better than a human professional, but to use that knowledge “for large-scale social impact”, says DeepMind’s other co-founder, Mustafa Suleyman, a former conflict-resolution negotiator at the UN.

To solve seemingly intractable problems in healthcare, scientific research or energy, it is not enough just to assemble scores of scientists in a building; they have to be untethered from the mundanities of a regular job — funding, administration, short-term deadlines — and left to experiment freely and without fear.

“If you look at how Google worked five or six years ago, [its research] was very product-related and relatively short-term, and it was considered to be a strength,” Hassabis says. “[But] if you’re interested in advancing the research as fast as possible, then you need to give [scientists] the space to make the decisions based on what they think is right for research, not for whatever kind of product demand has just come in.”

DeepMind’s three appearances in quick succession in Nature, along with more than 120 papers published and presented at cutting-edge scientific conferences, are a mark of its prodigious scientific productivity.

“Our research team today is insulated from any short-term pushes or pulls, whether it be internally at Google or externally. We want to have a big impact on the world, but our research has to be protected,” Hassabis says. “We showed that you can make a lot of advances using this kind of culture. I think Google took notice of that and they’re shifting more towards this kind of longer-term research.”

DeepMind has six more early manuscripts that it hopes will be published by Nature, or by that other most highly regarded scientific journal, Science, within the next year. “We may publish better than most academic labs, but our aim is not to produce a Nature paper,” Hassabis says. “We concentrate on cracking very specific problems. What I tell people here is that it should be a natural side-effect of doing great science.”

Structurally, DeepMind’s researchers are organised into four main groups with titles such as “Neuroscience” or “Frontiers” (a group comprising mostly physicists and mathematicians who test the most futuristic theories in AI).

Every eight weeks, scientists present what they have achieved to team leaders, including Hassabis and Shane Legg, head of research, who decide how to allocate resources to the dozens of projects. “It’s sort of a bubbling cauldron of ideas, and exploration, and testing things out, and finding out what seems to be working and why — or why not,” Legg says.

Projects that are progressing rapidly are allocated more manpower and time, while others may be closed down, all in a matter of weeks. “In academia you’d have to wait for a couple of years for a new grant cycle, but we can be very quick about switching resources,” Hassabis says.

This organisational culture has been a magnet for some of the world’s brightest minds. Jane Wang, a cognitive neuroscientist at DeepMind, used to be a postdoctoral researcher at Northwestern University in Chicago, and says that she was attracted to DeepMind’s clear, social mission. “I have interviewed at other industry labs, but DeepMind is different in that there isn’t pressure to patent or come up with products — there is no issue with the bottom line. The mission here is about being curious,” she says.

For Matt Botvinick, neuroscience team lead, joining DeepMind was not just a career choice but a lifestyle change too. The former professor who led Princeton University’s Neuroscience Institute continues to live in the US, where his wife is a practising physician, and commutes to DeepMind’s labs in London every other week. “At Princeton, I was surrounded by people I considered utterly brilliant and had no interest in working in an environment any less focused on primary scientific questions,” he says. “But I couldn’t resist the opportunity to come here because there is something qualitatively new going on, both with the scale and the spirit of ideas.”

What sets DeepMind apart from academic labs, he says, is its culture of cross-disciplinary collaboration, reflected in the company’s hiring of experts, who can cut across different domains from psychology to deep learning, physics or computer programming.

“In a lot of research institutions, things can become siloed. Two neighbouring labs could be working on similar topics but never exchange and pool information,” Botvinick says. “Unlike any place I’ve ever experienced before, all conversations are enhanced rather than undermined by differences in background.”