Category Archives: AI

Tech Wars

Source: ZeroHedge, Dec 2019

Why has the trade war transformed into a tech war?

The simple reason is that China could overtake the US as the dominant economic power by 2030. The US has been unwittingly fueling China’s ascension as a rising superpower by supplying high-tech semiconductor chips to Chinese companies. But recent actions by the Trump administration have limited the flow of chips to China, to slow its progress in artificial intelligence (AI) and its push for global dominance.

A new report, The Global AI Index, first covered by the South China Morning Post, indicates that China could overtake the US in AI sometime between 2025 and 2030.

The index finds that, based on talent, infrastructure, operating environment, research, development, government strategy, and commercial ventures, China is likely to dominate the US in the AI space within the next decade.

More specifically, the tech war between the two countries has also blossomed into a global AI race, the report said.

By 2030, Washington is forecast to have earmarked $35 billion for AI development, while the Chinese government is expected to allocate at least $22 billion over the same period.

China’s leadership, including President Xi Jinping, has made clear that AI will be essential to its global military strength and to its economic competition with the US.

China’s State Council issued the New Generation Artificial Intelligence Development Plan (AIDP) back in 2017, stating that China’s AI strategy would make it a global superpower.

In a recent speech, Xi said that China must “ensure that our country marches in the front ranks when it comes to theoretical research in this important area of AI, and occupies the high ground in critical and AI core technologies.”

Related Resource: SCMP, Dec 2019

The US is the undisputed leader in artificial intelligence (AI) development while China is the fastest-growing country set to overtake the US in five to 10 years on its current trajectory, according to The Global AI Index published this week by Tortoise Intelligence.

The index, which ranks 54 countries based on their AI capabilities, measured seven key indicators over 12 months: talent, infrastructure, operating environment, research, development, government strategy and commercial ventures.

The US was ahead on the majority of key metrics by a significant margin. It received a score of 100, almost twice as high as second-placed China with 58.3, due to the quality of its research, talent and private funding. The UK, Canada and Germany ranked 3rd, 4th and 5th respectively.


Evolution via multi-agent systems

Source: Quanta, Nov 2019

The hide-and-seek experiment showed how AI agents could learn to use things around them as tools, according to the OpenAI team. That’s important not because AI needs to be better at hiding and seeking, but because it suggests a way to build AI that can solve open-ended, real-world problems.

“These systems figured out so quickly how to use tools. Imagine when they can use many tools, or create tools. Would they invent a ladder?”

The hide-and-seek experiment was different from systems that reward tool use directly: rewards were associated only with hiding and finding, and tool use just happened, and evolved, along the way.
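
For reference, the entire incentive driving this behavior fits in a few lines. Here is a sketch of a hide-and-seek-style team reward (the shaping details of OpenAI’s actual environment are omitted): each team shares one signal, and nothing ever rewards touching a box.

```python
def hide_and_seek_reward(any_hider_seen):
    """Zero-sum team reward: tool use itself is never rewarded."""
    hider_team_reward = -1.0 if any_hider_seen else 1.0
    seeker_team_reward = -hider_team_reward
    return hider_team_reward, seeker_team_reward
```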

Because the game was open-ended, the AI agents even began using tools in ways the programmers hadn’t anticipated. They’d predicted that the agents would hide or chase, and that they’d create forts. But after enough games, the seekers learned, for example, that they could move boxes even after climbing on top of them. This allowed them to skate around the arena in a move the OpenAI team called “box surfing.”

The researchers never saw it coming, even though the algorithms didn’t explicitly prohibit climbing on boxes. The tactic conferred a double advantage, combining movement with the ability to peer nimbly over walls, and it showed a more innovative use of tools than the human programmers had imagined.

Neuro-Evolution with Novelty Search

Source: Quanta, Nov 2019

At the heart of this work is the steppingstone principle and, with it, a way of designing algorithms that more fully embraces the endlessly creative potential of biological evolution.

The steppingstone principle goes beyond traditional evolutionary approaches: instead of optimizing for a specific goal, it embraces the creative exploration of all possible solutions. That bet has already paid off with groundbreaking results.

Earlier this year, one system based on the steppingstone principle mastered two video games that had stumped popular machine learning methods. And in a paper published last week in Nature, DeepMind — the artificial intelligence company that pioneered the use of deep learning for problems such as the game of Go — reported success in combining deep learning with the evolution of a diverse population of solutions.

Biological evolution is also the only system to produce human intelligence, which is the ultimate dream of many AI researchers.

Because of biology’s track record, Stanley and others have come to believe that if we want algorithms that can navigate the physical and social world as easily as we can — or better! — we need to imitate nature’s tactics. Instead of hard-coding the rules of reasoning, or having computers learn to score highly on specific performance metrics, they argue, we must let a population of solutions blossom. Make them prioritize novelty or interestingness instead of the ability to walk or talk. They may discover an indirect path, a set of steppingstones, and wind up walking and talking better than if they’d sought those skills directly.

After Picbreeder, Stanley set out to demonstrate that neuroevolution could overcome the most obvious argument against it: “If I run an algorithm that’s creative to such an extent that I’m not sure what it will produce,” he said, “it’s very interesting from a research perspective, but it’s a harder sell commercially.”

He hoped to show that by simply following ideas in interesting directions, algorithms could not only produce a diversity of results, but solve problems. More audaciously, he aimed to show that completely ignoring an objective can get you there faster than pursuing it. He did this through an approach called novelty search.

To test the steppingstone principle, Stanley and his student Joel Lehman tweaked the selection process. Instead of selecting the networks that performed best on a task, novelty search selected them for how different they were from the ones with behaviors most similar to theirs. (In Picbreeder, people rewarded interestingness. Here, as a proxy for interestingness, novelty search rewarded novelty.)
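
Concretely, that selection rule can be written as a small sketch. The behavior descriptors, the distance metric, and the k-nearest-neighbor scoring below are standard choices in the novelty search literature, simplified here for illustration:

```python
import numpy as np

def novelty(behavior, others, k=5):
    """Novelty score: mean distance to the k most similar behaviors."""
    dists = np.linalg.norm(np.asarray(others) - np.asarray(behavior), axis=1)
    return float(np.sort(dists)[:k].mean())

def select_parents(population, behaviors, archive, n_parents=10):
    """Select purely for novelty; the task objective is never consulted."""
    scores = []
    for i, b in enumerate(behaviors):
        others = archive + behaviors[:i] + behaviors[i + 1:]
        scores.append(novelty(b, others))
    most_novel_first = np.argsort(scores)[::-1]
    return [population[i] for i in most_novel_first[:n_parents]]
```

Behaviors that score above a novelty threshold are typically added to the archive, so the search keeps pushing toward territory it has not visited before.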

“Novelty search is important because it turned everything on its head,” said Julian Togelius, a computer scientist at New York University, “and basically asked what happens when we don’t have an objective.”

A key element of these algorithms is that they foster steppingstones. Instead of constantly prioritizing one overall best solution, they maintain a diverse set of vibrant niches, any one of which could contribute a winner. And the best solution might descend from a lineage that has hopped between niches.
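
The article does not pin down one algorithm, but a common way to maintain such niches is an archive keyed by behavior, in the spirit of quality-diversity methods such as MAP-Elites. The binning scheme in this sketch is an illustrative assumption:

```python
import numpy as np

def niche_of(behavior, bins_per_dim=10):
    """Discretize a behavior descriptor in [0, 1]^d into a niche id."""
    idx = (np.asarray(behavior) * bins_per_dim).astype(int)
    return tuple(np.clip(idx, 0, bins_per_dim - 1))

def update_archive(archive, solution, behavior, fitness):
    """Each niche keeps its own champion; there is no single global winner."""
    niche = niche_of(behavior)
    incumbent = archive.get(niche)
    if incumbent is None or fitness > incumbent[1]:
        archive[niche] = (solution, fitness)  # a steppingstone survives here
    return archive
```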

Now even DeepMind, that powerhouse of reinforcement learning, has revealed its growing interest in neuroevolution. In January, the team showed off AlphaStar, software that can beat top professionals at the complex video game StarCraft II, in which two opponents control armies and build colonies to dominate a digital landscape. AlphaStar evolved a population of players that competed against and learned from each other.

In last week’s Nature paper, DeepMind researchers announced that an updated version of AlphaStar has been ranked among the top 0.2% of active StarCraft II players on a popular gaming platform, becoming the first AI to reach the top tier of a popular esport without any restrictions.

As in the children’s game rock-paper-scissors, there is no single best game strategy in StarCraft II. So DeepMind encouraged its population of agents to evolve a diversity of strategies — not as steppingstones but as an end in itself. When AlphaStar beat two pros each five games to none, it combined the strategies from five different agents in its population. The five agents had been chosen so that not all of them would be vulnerable to any one opponent strategy. Their strength was in their diversity.
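
The paper’s league training is considerably more involved, but the selection criterion described here can be illustrated as a covering problem: given estimated win rates of candidate agents against known opponent strategies, assemble a team with no shared weakness. The matrix, threshold, and greedy rule below are illustrative assumptions, not DeepMind’s method:

```python
import numpy as np

def pick_diverse_team(win_rates, team_size=5, threshold=0.5):
    """Greedily pick agents so no single opponent strategy beats them all.

    win_rates[i, j] = win rate of candidate agent i vs. opponent strategy j.
    """
    n_agents, n_strats = win_rates.shape
    team, uncovered = [], set(range(n_strats))
    while len(team) < team_size:
        gains = [-1 if i in team else
                 sum(win_rates[i, j] > threshold for j in uncovered)
                 for i in range(n_agents)]
        best = int(np.argmax(gains))
        team.append(best)
        uncovered -= {j for j in uncovered if win_rates[best, j] > threshold}
    return team
```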

AlphaStar demonstrates one of the main uses of evolutionary algorithms: maintaining a population of different solutions.

To mirror this open-ended conversation between problems and solutions, earlier this year Stanley, Clune, Lehman and another Uber colleague, Rui Wang, released an algorithm called POET, for Paired Open-Ended Trailblazer.

For example, one bot learned to cross flat terrain while dragging its knee. It was then randomly switched to a landscape with short stumps, where it had to learn to walk upright. When it was switched back to its first obstacle course, it completed it much faster. An indirect path allowed it to improve by taking skills learned from one puzzle and applying them to another.
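
In outline, POET maintains a list of (environment, agent) pairs, improves each agent locally, and periodically attempts transfers between pairs; it also spawns new environments, which this sketch omits. The helper functions are placeholders, not the released code:

```python
def poet_step(pairs, optimize, evaluate):
    """One simplified POET iteration over (environment, agent) pairs.

    optimize(env, agent) -> improved agent (e.g., one evolution step)
    evaluate(env, agent) -> score of the agent on that environment
    """
    # 1. Local improvement: each agent trains on its own paired environment.
    pairs = [(env, optimize(env, agent)) for env, agent in pairs]

    # 2. Transfer: if some other pair's agent beats the incumbent on this
    #    environment, it takes over; this is the indirect path described above.
    for i, (env, incumbent) in enumerate(pairs):
        for j, (_, visitor) in enumerate(pairs):
            if i != j and evaluate(env, visitor) > evaluate(env, incumbent):
                incumbent = visitor
        pairs[i] = (env, incumbent)
    return pairs
```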

POET could potentially design new forms of art or make scientific discoveries by inventing new challenges for itself and then solving them. It could even go much further, depending on its world-building ability. Stanley said he hopes to build algorithms that could still be doing something interesting after a billion years.

In a recent paper, Clune argues that open-ended discovery is likely the fastest path toward artificial general intelligence — machines with nearly all the capabilities of humans.

Clune thinks more attention should be paid to AI that designs AI. Algorithms will design or evolve both the neural networks and the environments in which they learn, using an approach like POET’s.

Such open-ended exploration might lead to human-level intelligence via paths we never would have anticipated — or a variety of alien intelligences that could teach us a lot about intelligence in general. “Decades of research have taught us that these algorithms constantly surprise us and outwit us,” he said. “So it’s completely hubristic to think that we will know the outcome of these processes, especially as they become more powerful and open-ended.”

Robotics!

Source: The Verge, Sep 2019

Boston Dynamics has released a new video of Atlas, its spectacular bipedal robot that’s previously been seen doing everything from parkour to backflips. In this latest video, Atlas does a small gymnastics routine, consisting of a number of somersaults, a short handstand, a 360-degree spinning jump, and even a balletic split leap.

What’s most impressive is seeing Atlas tie all these moves together into one pretty cohesive routine.

In the video’s description, Boston Dynamics says that it’s using a “model predictive controller” to blend from one maneuver to the next. Presumably each somersault gives the robot a fair amount of forward momentum, but at no point in the video does it seem to lose its balance as a result.
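
Boston Dynamics has not published its controller, but the general shape of model predictive control is: at every step, simulate a short horizon of candidate action sequences through a dynamics model, apply only the first action of the best sequence, then re-plan. A generic random-shooting sketch, with the dynamics and cost functions as placeholders:

```python
import numpy as np

def mpc_step(state, dynamics, cost, horizon=20, n_candidates=256, action_dim=4):
    """One receding-horizon step: evaluate candidate plans, commit to one action.

    dynamics(state, action) -> next state; cost(state, action) -> scalar.
    """
    best_action, best_total = None, np.inf
    for _ in range(n_candidates):
        plan = np.random.uniform(-1.0, 1.0, size=(horizon, action_dim))
        s, total = state, 0.0
        for a in plan:                  # roll the model forward in imagination
            s = dynamics(s, a)
            total += cost(s, a)
        if total < best_total:
            best_total, best_action = total, plan[0]
    return best_action                  # apply this, then re-plan next step
```

Planning over a horizon while committing to only one action at a time is what lets a controller blend the tail of one maneuver into the start of the next.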

Amazingly, Atlas is able to roll gracefully along its back without any of its machinery getting squashed or tangled.

Vision: Human & Computer

Source: Quanta, Sep 2019

The neural networks underlying computer vision are fairly straightforward. They receive an image as input and process it through a series of steps: first they register raw pixels, then detect edges and contours, then whole objects, before eventually producing a final guess about what they’re looking at. These are known as “feed-forward” systems because of their assembly-line setup.
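
That assembly line maps directly onto code. Here is a minimal, purely illustrative feed-forward classifier in PyTorch; the layer sizes and class count are arbitrary assumptions:

```python
import torch.nn as nn

# Information flows strictly forward: pixels -> edges -> parts -> final guess.
# Assumes 3x32x32 input images and 10 classes; sizes are illustrative.
feed_forward_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level features (edges)
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid-level contours, parts
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                    # the final guess
)
# No stage ever feeds its output back to an earlier stage: "feed-forward."
```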

… a new mathematical model that tries to explain the central mystery of human vision: how the visual cortex in the brain creates vivid, accurate representations of the world based on the scant information it receives from the retina.

The model suggests that the visual cortex achieves this feat through a series of neural feedback loops that refine small changes in data from the outside world into the diverse range of images that appear before our mind’s eye. This feedback process is very different from the feed-forward methods that enable computer vision.
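
The contrast with a feed-forward pipeline can be made concrete with a toy, predictive-coding-flavored loop. This is an illustration of the feedback idea only, not the paper’s actual mathematics, and the encode/decode helpers are hypothetical stand-ins:

```python
import numpy as np

def refine(sparse_input, decode, encode, steps=10, lr=0.1):
    """Iteratively refine an internal representation via feedback.

    decode(rep) -> predicted input; encode(err) -> representation update.
    Both are hypothetical placeholders for the model's actual operators.
    """
    rep = np.zeros_like(encode(sparse_input))
    for _ in range(steps):
        prediction = decode(rep)             # top-down: what we expect to see
        error = sparse_input - prediction    # bottom-up: what the eye reports
        rep = rep + lr * encode(error)       # the feedback loop closes here
    return rep
```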

“This work really shows how sophisticated and in some sense different the visual cortex is” from computer vision, said Jonathan Victor, a neuroscientist at Cornell University.

But computer vision is superior to human vision at some tasks. This raises the question: Does computer vision need inspiration from human vision at all?

In some ways, the answer is obviously no. The information that reaches the visual cortex is constrained by anatomy: Relatively few nerves connect the visual cortex with the outside world, which limits the amount of visual data the cortex has to work with. Computers don’t have the same bandwidth concerns, so there’s no reason they need to work with sparse information.

“If I had infinite computing power and infinite memory, do I need to sparsify anything? The answer is likely no,” Tsotsos said.

But Tsotsos thinks it’s folly to disregard human vision.

The classification tasks computers are good at today are the “low-hanging fruit” of computer vision, he said. To master these tasks, computers merely need to find correlations in massive data sets. For higher-order tasks, like scanning an object from multiple angles in order to determine what it is (think about the way you familiarize yourself with a statue by walking around it), such correlations may not be enough to go on. Computers may need to take a cue from humans to get it right.

For example, a key feature of human vision is the ability to do a double take. We process visual information and reach a conclusion about what we’ve seen. When that conclusion is jarring, we look again, and often the second glance tells us what’s really going on. Computer vision systems working in a feed-forward manner typically lack this ability, which leads them to fail spectacularly at even some simple vision tasks.
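
One crude way to bolt a double take onto a feed-forward classifier is a confidence-gated second pass. The policy below is an assumption, one of many possible, and both helper functions are hypothetical:

```python
import numpy as np

def classify_with_double_take(image, classify, attend, threshold=0.8):
    """Look once; if the conclusion is shaky, look again more closely.

    classify(image) -> class probabilities; attend(image) -> refocused view.
    """
    probs = classify(image)
    if probs.max() >= threshold:        # confident: accept the first glance
        return int(np.argmax(probs))
    probs2 = classify(attend(image))    # e.g., zoom in on the salient region
    return int(np.argmax((probs + probs2) / 2.0))
```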

There’s another, subtler and more important aspect of human vision that computer vision lacks.

It takes years for the human visual system to mature. A 2019 paper by Tsotsos and his collaborators found that people don’t fully acquire the ability to suppress clutter in a crowded scene and focus on what they’re looking for until around age 17. Other research has found that the ability to perceive faces keeps improving until around age 20.

Computer vision systems work by digesting massive amounts of data. Their underlying architecture is fixed and doesn’t mature over time, the way the developing brain does. If the underlying learning mechanisms are so different, will the results be, too? Tsotsos thinks computer vision systems are in for a reckoning.

“Learning in these deep learning methods is as unrelated to human learning as can be,” he said. “That tells me the wall is coming. You’ll reach a point where these systems can no longer move forward in terms of their development.”

2018 Turing Award Winners on AI

Source: The Verge, Mar 2019

The 2018 Turing Award, known as the “Nobel Prize of computing,” has been given to a trio of researchers who laid the foundations for the current boom in artificial intelligence.

Yoshua Bengio, Geoffrey Hinton, and Yann LeCun — sometimes called the ‘godfathers of AI’ — have been recognized with the $1 million annual prize for their work developing the AI subfield of deep learning. The techniques the trio developed in the 1990s and 2000s enabled huge breakthroughs in tasks like computer vision and speech recognition. Their work underpins the current proliferation of AI technologies, from self-driving cars to automated medical diagnoses.

Related Resource: Geoff Hinton’s webpage, 2015

AI Startups & AI Skills Impact on Pay

Source: Gofman.info, Aug 2019