Category Archives: AI

Vision: Human & Computer

Source: Quanta, Sep 2019

The neural networks underlying computer vision are fairly straightforward. They receive an image as input and process it through a series of steps. They first detect pixels, then edges and contours, then whole objects, before eventually producing a final guess about what they’re looking at. These are known as “feed forward” systems because of their assembly-line setup.
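As a rough illustration of that assembly-line setup, here is a minimal sketch of a feed-forward image classifier. The framework (PyTorch) and all names are assumptions chosen for illustration, not code from the article: data enters at the pixels, passes through convolutional stages, and exits as a single guess, with nothing flowing backwards.

```python
# Minimal feed-forward classifier sketch: pixels -> edge/contour filters ->
# higher-level features -> final class guess. Data flows one way only.
import torch
import torch.nn as nn

class FeedForwardVision(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level edges
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # contours / object parts
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # final guess

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = FeedForwardVision()(torch.randn(1, 3, 32, 32))  # e.g. one 32x32 RGB image
```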

… a new mathematical model that tries to explain the central mystery of human vision: how the visual cortex in the brain creates vivid, accurate representations of the world based on the scant information it receives from the retina.

The model suggests that the visual cortex achieves this feat through a series of neural feedback loops that refine small changes in data from the outside world into the diverse range of images that appear before our mind’s eye. This feedback process is very different from the feed-forward methods that enable computer vision.
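The paper's actual model is not reproduced here, but the general flavour of feedback-based processing can be sketched schematically: an internal estimate is repeatedly decoded into a guess about the scene, compared against the sparse incoming signal, and corrected, rather than being computed in a single forward pass. Everything below (function names, the update rule) is a hedged illustration, not the authors' model.

```python
# Schematic feedback refinement: correct an internal representation until its
# predicted input matches the sparse signal actually received.
import numpy as np

def refine_with_feedback(sparse_signal, decode, encode, steps=10, lr=0.1):
    """decode: latent -> scene estimate; encode: scene estimate -> predicted signal."""
    latent = np.zeros_like(sparse_signal)            # internal representation
    for _ in range(steps):
        estimate = decode(latent)                    # top-down guess at the scene
        error = sparse_signal - encode(estimate)     # mismatch with bottom-up input
        latent = latent + lr * error                 # feed the error back and adjust
    return decode(latent)
```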

“This work really shows how sophisticated and in some sense different the visual cortex is” from computer vision, said Jonathan Victor, a neuroscientist at Cornell University.

But computer vision is superior to human vision at some tasks. This raises the question: Does computer vision need inspiration from human vision at all?

In some ways, the answer is obviously no. The information that reaches the visual cortex is constrained by anatomy: Relatively few nerves connect the visual cortex with the outside world, which limits the amount of visual data the cortex has to work with. Computers don’t have the same bandwidth concerns, so there’s no reason they need to work with sparse information.

“If I had infinite computing power and infinite memory, do I need to sparsify anything? The answer is likely no,” said John Tsotsos, a computer vision researcher at York University.

But Tsotsos thinks it’s folly to disregard human vision.

The classification tasks computers are good at today are the “low-hanging fruit” of computer vision, he said. To master these tasks, computers merely need to find correlations in massive data sets. For higher-order tasks, like scanning an object from multiple angles in order to determine what it is (think about the way you familiarize yourself with a statue by walking around it), such correlations may not be enough to go on. Computers may need to take a nod from humans to get it right.

For example, a key feature of human vision is the ability to do a double take. We process visual information and reach a conclusion about what we’ve seen. When that conclusion is jarring, we look again, and often the second glance tells us what’s really going on. Computer vision systems working in a feed-forward manner typically lack this ability, which leads them to fail spectacularly at even simple vision tasks.
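A hedged sketch of what such a double take might look like in code; the helper names and the confidence threshold are hypothetical, not drawn from any existing system:

```python
# If the first classification is low-confidence, re-examine the scene
# (e.g. zoom in or re-crop) before committing to an answer.
def classify_with_double_take(image, model, look_again, threshold=0.8):
    label, confidence = model(image)            # first glance
    if confidence < threshold:                  # jarring / uncertain result
        second_view = look_again(image)         # e.g. a new crop or fixation
        label, confidence = model(second_view)  # second glance
    return label, confidence
```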

There’s another, subtler and more important aspect of human vision that computer vision lacks.

It takes years for the human visual system to mature. A 2019 paper by Tsotsos and his collaborators found that people don’t fully acquire the ability to suppress clutter in a crowded scene and focus on what they’re looking for until around age 17. Other research has found that the ability to perceive faces keeps improving until around age 20.

Computer vision systems work by digesting massive amounts of data. Their underlying architecture is fixed and doesn’t mature over time, the way the developing brain does. If the underlying learning mechanisms are so different, will the results be, too? Tsotsos thinks computer vision systems are in for a reckoning.

“Learning in these deep learning methods is as unrelated to human learning as can be,” he said. “That tells me the wall is coming. You’ll reach a point where these systems can no longer move forward in terms of their development.”


2018 Turing Award Winners on AI

Source: The Verge, Mar 2019

The 2018 Turing Award, known as the “Nobel Prize of computing,” has been given to a trio of researchers who laid the foundations for the current boom in artificial intelligence.

Yoshua Bengio, Geoffrey Hinton, and Yann LeCun — sometimes called the ‘godfathers of AI’ — have been recognized with the $1 million annual prize for their work developing the AI subfield of deep learning. The techniques the trio developed in the 1990s and 2000s enabled huge breakthroughs in tasks like computer vision and speech recognition. Their work underpins the current proliferation of AI technologies, from self-driving cars to automated medical diagnoses.

Related Resource: Geoff Hinton’s webpage, 2015

AI Startups & AI Skills Impact on Pay

Source: Gofman.info, Aug 2019

Deriving Structure from Textual Descriptions

Source: Nature.com, Jul 2019

The overwhelming majority of scientific knowledge is published as text, which is difficult to analyse by either traditional statistical analysis or modern machine learning methods.

By contrast, the main source of machine-interpretable data for the materials research community has come from structured property databases [1,2], which encompass only a small fraction of the knowledge present in the research literature.

Beyond property values, publications contain valuable knowledge regarding the connections and relationships between data items as interpreted by the authors.

To improve the identification and use of this knowledge, several studies have focused on the retrieval of information from scientific literature using supervised natural language processing [3,4,5,6,7,8,9,10], which requires large hand-labelled datasets for training.

Here we show that materials science knowledge present in the published literature can be efficiently encoded as information-dense word embeddings [11,12,13] (vector representations of words) without human labelling or supervision.

Without any explicit insertion of chemical knowledge, these embeddings capture complex materials science concepts such as the underlying structure of the periodic table and structure–property relationships in materials. Furthermore, we demonstrate that an unsupervised method can recommend materials for functional applications several years before their discovery.

This suggests that latent knowledge regarding future discoveries is to a large extent embedded in past publications. Our findings highlight the possibility of extracting knowledge and relationships from the massive body of scientific literature in a collective manner, and point towards a generalized approach to the mining of scientific literature.

Related Resource: ZeroHedge, Jul 2019

The algorithm produced predictions for potential thermoelectric materials, which convert heat into electricity and are used in various heating and cooling applications.

“It can read any paper on material science, so can make connections that no scientists could,” said researcher Anubhav Jain. “Sometimes it does what a researcher would do; other times it makes these cross-discipline associations.”

The algorithm was designed to assess the language in 3.3 million materials science abstracts, from which it built a vocabulary of around half a million words. Word2vec used machine learning to analyze the relationships between words.

“The way that this Word2vec algorithm works is that you train a neural network model to remove each word and predict what the words next to it will be,” said Jain, adding that “by training a neural network on a word, you get representations of words that can actually confer knowledge.”

Using just the words found in scientific abstracts, the algorithm was able to understand concepts such as the periodic table and the chemical structure of molecules. The algorithm linked words that were found close together, creating vectors of related words that helped define concepts. In some cases, words were linked to thermoelectric concepts but had never been written about as thermoelectric in any abstract they surveyed. This gap in knowledge is hard to catch with a human eye, but easy for an algorithm to spot.
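As a rough illustration of the approach described above, here is a hedged sketch using gensim's Word2Vec implementation (not the authors' released code); the toy corpus and training parameters are assumptions. The idea is to train word vectors on tokenized abstracts and then rank candidate materials by how close their vectors sit to an application word such as "thermoelectric".

```python
# Requires gensim >= 4.0. Train word embeddings on tokenized abstracts, then
# query the words nearest to "thermoelectric" as candidate material predictions.
from gensim.models import Word2Vec

# Stand-in for a real corpus: each abstract is a list of tokens.
abstracts = [
    ["the", "compound", "Bi2Te3", "shows", "promising", "thermoelectric", "behaviour"],
    ["CuGaTe2", "exhibits", "a", "large", "Seebeck", "coefficient"],
]

model = Word2Vec(sentences=abstracts, vector_size=200, window=8, min_count=1, sg=1)

# Materials whose embeddings are closest to "thermoelectric" are candidate
# predictions, even if no abstract ever labelled them as thermoelectric.
print(model.wv.most_similar("thermoelectric", topn=5))
```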

Using AI to deter Cat Predatory Behaviour

Source: The Verge, Jun 2019

Machine learning can be an incredible addition to any tinkerer’s toolbox, helping to fix that little problem in life that no commercial gadget can handle. For Amazon engineer Ben Hamm, that problem was stopping his “sweet, murderous cat” Metric from bringing home dead and half-dead prey in the middle of the night and waking him up.

Hamm gave an entertaining presentation on this subject at Ignite Seattle, and you can watch a video of his talk above. In short, in order to stop Metric from following his instincts, Hamm hooked up the cat flap in his door to an AI-enabled camera (Amazon’s own DeepLens) and an Arduino-powered locking system.
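A purely illustrative sketch of the decision logic such a setup might use; every name below is an assumption made for illustration, not Hamm's actual code. The camera classifies what is approaching the flap, and a confident "prey" detection locks the flap for a cooldown period.

```python
# Hypothetical control loop: lock the cat flap when the camera sees prey.
import time

PREY_THRESHOLD = 0.9          # only act on confident detections
LOCKOUT_SECONDS = 15 * 60     # how long the flap stays locked after a detection

def monitor(camera, flap, classify):
    """camera.capture() returns a frame; classify(frame) returns (label, confidence);
    flap.lock() / flap.unlock() drive an Arduino-controlled latch."""
    while True:
        label, confidence = classify(camera.capture())
        if label == "cat_with_prey" and confidence > PREY_THRESHOLD:
            flap.lock()
            time.sleep(LOCKOUT_SECONDS)
            flap.unlock()
        time.sleep(1)
```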

AI & Machine Learning & Deep Learning

Source: Medium, Sep 2018

What is artificial intelligence?

Artificial intelligence can be loosely interpreted to mean incorporating human intelligence into machines.

What is machine learning?

As the name suggests, machine learning can be loosely interpreted to mean empowering computer systems with the ability to “learn”.

The intention of ML is to enable machines to learn by themselves using the provided data and make accurate predictions.

Training in machine learning entails feeding the algorithm large amounts of data and allowing it to learn from the information it processes.
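As a minimal, generic illustration of that idea (using scikit-learn; the dataset and model choice are arbitrary), the algorithm is shown labelled examples during training and then makes predictions on data it has not seen:

```python
# Fit a simple classifier on labelled examples, then score it on held-out data.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)   # "training"
print("accuracy on unseen data:", model.score(X_test, y_test))    # "prediction"
```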

What is deep learning?

Deep learning is a subset of ML; in fact, it’s simply a technique for realizing machine learning. In other words, DL is the next evolution of machine learning.

DL algorithms are roughly inspired by the information processing patterns found in the human brain.

While DL can automatically discover the features to be used for classification, ML requires these features to be provided manually.

Furthermore, in contrast to ML, DL needs high-end machines and considerably larger amounts of training data to deliver accurate results.
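The contrast between hand-made features and learned features can be sketched schematically; the data, libraries (scikit-learn and PyTorch) and layer sizes below are arbitrary illustrations, not a benchmark.

```python
# Classical ML: the practitioner chooses the features. Deep learning: the
# convolutional layers learn their own features directly from raw pixels.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

raw_images = np.random.rand(100, 28, 28).astype("float32")   # stand-in data
labels = np.random.randint(0, 2, size=100)

# Classical ML pipeline: hand-made features (here, simple intensity statistics).
hand_features = np.stack([raw_images.mean(axis=(1, 2)),
                          raw_images.std(axis=(1, 2))], axis=1)
LogisticRegression().fit(hand_features, labels)

# Deep learning pipeline: features are learned from the raw pixels themselves.
net = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(8 * 14 * 14, 2),
)
logits = net(torch.from_numpy(raw_images).unsqueeze(1))       # shape: (100, 2)
```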

Making 3-pointers (basketball)