Pretty in Pink – Final Scene


Analytics helps to form hypotheses; statistics helps to test them

Source: Medium, Sep 2019

Analytics helps you form hypotheses.

It improves the quality of your questions.

Statistics helps you test hypotheses.

It improves the quality of your answers.
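A minimal sketch of that division of labor in Python (my own illustration, not from the article; the data set, column names, and effect size are all invented): exploratory analytics surfaces a gap and suggests a hypothesis, then a statistical test says how seriously to take it.

```python
# Invented example: analytics forms the hypothesis, statistics tests it.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "variant": rng.choice(["A", "B"], size=500),
    "minutes_on_site": rng.normal(10, 3, size=500),
})
df.loc[df.variant == "B", "minutes_on_site"] += 0.8   # simulated effect

# Analytics: look at the data, notice a gap, form a hypothesis.
print(df.groupby("variant")["minutes_on_site"].mean())

# Statistics: test the hypothesis that B really differs from A.
a = df.loc[df.variant == "A", "minutes_on_site"]
b = df.loc[df.variant == "B", "minutes_on_site"]
t, p = stats.ttest_ind(a, b, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```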

Vision: Human & Computer

Source: Quanta, Sep 2019

The neural networks underlying computer vision are fairly straightforward. They receive an image as input and process it through a series of steps. They first detect pixels, then edges and contours, then whole objects, before eventually producing a final guess about what they’re looking at. These are known as “feed forward” systems because of their assembly-line setup.

… a new mathematical model that tries to explain the central mystery of human vision: how the visual cortex in the brain creates vivid, accurate representations of the world based on the scant information it receives from the retina.

The model suggests that the visual cortex achieves this feat through a series of neural feedback loops that refine small changes in data from the outside world into the diverse range of images that appear before our mind’s eye. This feedback process is very different from the feed-forward methods that enable computer vision.
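As a rough, hedged sketch of the contrast (mine, not from the article, and with placeholder functions rather than any real vision model): a feed-forward pass pushes the input through fixed stages to a single guess, while a feedback loop keeps comparing what its current interpretation predicts against the input it actually received and corrects itself.

```python
# Toy contrast between the two regimes. The stage/predict/refine functions
# are placeholders, not a real vision system.

def feed_forward(image, stages):
    """Assembly-line processing: each stage sees only the previous stage's output."""
    x = image
    for stage in stages:
        x = stage(x)
    return x                      # final guess, no second look

def feedback(image, predict, refine, init, steps=10):
    """Iterative refinement: compare what the current interpretation predicts
    about the input with the input actually received, and correct it."""
    interpretation = init
    for _ in range(steps):
        error = image - predict(interpretation)       # mismatch with the (sparse) input
        interpretation = refine(interpretation, error)
    return interpretation

# Tiny numeric demo: recover a scalar "scene" value from a crude observation.
estimate = feedback(image=3.1,
                    predict=lambda s: s,              # identity forward model
                    refine=lambda s, e: s + 0.5 * e,  # move toward the observation
                    init=0.0)
print(round(estimate, 3))   # converges toward 3.1
```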

“This work really shows how sophisticated and in some sense different the visual cortex is” from computer vision, said Jonathan Victor, a neuroscientist at Cornell University.

But computer vision is superior to human vision at some tasks. This raises the question: Does computer vision need inspiration from human vision at all?

In some ways, the answer is obviously no. The information that reaches the visual cortex is constrained by anatomy: Relatively few nerves connect the visual cortex with the outside world, which limits the amount of visual data the cortex has to work with. Computers don’t have the same bandwidth concerns, so there’s no reason they need to work with sparse information.

“If I had infinite computing power and infinite memory, do I need to sparsify anything? The answer is likely no,” said John Tsotsos, a computer scientist at York University.

But Tsotsos thinks it’s folly to disregard human vision.

The classification tasks computers are good at today are the “low-hanging fruit” of computer vision, he said. To master these tasks, computers merely need to find correlations in massive data sets. For higher-order tasks, like scanning an object from multiple angles in order to determine what it is (think about the way you familiarize yourself with a statue by walking around it), such correlations may not be enough to go on. Computers may need to take a cue from humans to get it right.

For example, a key feature of human vision is the ability to do a double take. We process visual information and reach a conclusion about what we’ve seen. When that conclusion is jarring, we look again, and often the second glance tells us what’s really going on. Computer vision systems working in a feed-forward manner typically lack this ability, which leads them to fail spectacularly at even simple vision tasks.
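In code, the double take might look like the hypothetical sketch below; classify and look_closer are invented stand-ins for a real model and a more expensive second pass, not anything from the article.

```python
# Hypothetical "double take": if the first conclusion isn't convincing,
# look again with more effort (a crop, higher resolution, another viewpoint).

def classify_with_double_take(image, classify, look_closer, threshold=0.8):
    label, confidence = classify(image)
    if confidence < threshold:                          # first conclusion is shaky
        label, confidence = classify(look_closer(image))  # look again
    return label, confidence
```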

There’s another, subtler and more important aspect of human vision that computer vision lacks.

It takes years for the human visual system to mature. A 2019 paper by Tsotsos and his collaborators found that people don’t fully acquire the ability to suppress clutter in a crowded scene and focus on what they’re looking for until around age 17. Other research has found that the ability to perceive faces keeps improving until around age 20.

Computer vision systems work by digesting massive amounts of data. Their underlying architecture is fixed and doesn’t mature over time, the way the developing brain does. If the underlying learning mechanisms are so different, will the results be, too? Tsotsos thinks computer vision systems are in for a reckoning.

“Learning in these deep learning methods is as unrelated to human learning as can be,” he said. “That tells me the wall is coming. You’ll reach a point where these systems can no longer move forward in terms of their development.”

36 Questions to Greater Intimacy

Source: Ideas & Discoveries, Nov 2019

[Picture of the source article]

A Totally Black Diamond

Source: MIT News, Sep 2019

… a 16.78-carat natural yellow diamond from LJ West Diamonds, estimated to be worth $2 million, which the team coated with the new, ultrablack CNT material. The effect is arresting: The gem, normally brilliantly faceted, appears as a flat, black void.

In Memory of Ric Ocasek – The Cars

“The Book of Why” – Judea Pearl

Source: Boston Review, Sep 2019

“We live in an era that presumes Big Data to be the solution to all our problems,” he says, “but I hope with this book to convince you that data are profoundly dumb.” Data may help us predict what will happen—so well, in fact, that computers can drive cars and beat humans at very sophisticated games of strategy, from chess and Go to Jeopardy!—but even today’s most sophisticated techniques of statistical machine learning can’t make the data tell us why.

For Pearl, the missing ingredient is a “model of reality,” which crucially depends on causes. Modern machines, he contends against a chorus of enthusiasts, are nothing like our minds.

Causation really cannot be reduced to correlation, even in large data sets, Pearl came to see. Throwing more computational resources at the problem, as Pearl did in his early work (on “Bayes nets,” which apply Thomas Bayes’s basic rule for updating probabilities in light of new evidence to large sets of interconnected data), will never yield a solution. In short, you will never get causal information out without beginning by putting causal hypotheses in.
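For reference, the Bayes rule mentioned in passing is just this update, shown here with invented numbers:

```python
# Bayes's rule with made-up numbers (an illustration, not from the book):
# the prior P(H) is updated by evidence E into the posterior P(H | E).
p_h = 0.01                   # prior probability of the hypothesis
p_e_given_h = 0.90           # how likely the evidence is if H is true
p_e_given_not_h = 0.05       # how likely the evidence is if H is false

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e
print(round(p_h_given_e, 3))   # 0.154: still small, but roughly 15x the prior
```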

… he developed simple but powerful techniques using what he calls “causal graphs” to answer questions about causation, or to determine when such questions cannot be answered from the data at all.

… the main innovation that Pearl is advertising—the use of causal hypotheses—gets couched not so much in algebra-laden statistics as in visually intuitive pictures: “directed graphs” that illustrate possible causal structures, with arrows pointing from postulated causes to effects. A good deal of the book’s argument can be grasped simply by attending only to these diagrams and the various paths through them.

Consider two basic building blocks of such graphs. If two arrows emerge from a single node, then we have a “common-causal fork,” which can produce statistical correlations between properties that are not, themselves, causally related (such as car color and accident rate on the reckless-drivers-tend-to-like-the-color-red hypothesis). In this scenario, A may cause both B and C, but B and C are not causally related.

On the other hand, if two different arrows go into the same node then we have a “collider,” and that raises an entirely different set of methodological issues. In this case, A and B may jointly cause C, but A and B are not causally related. The distinction between these two structures has important consequences for causal reasoning. While controlling for a common cause can eliminate misleading correlations, for example, controlling for a collider can create them. As Pearl shows, the general analytic approach, given a certain causal model, is to identify both “back door” (common cause) and “front door” (collider) paths that connect nodes and take appropriate precautions in each case.
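The practical difference is easy to see in a simulation (my own sketch, with made-up variables): in the fork, conditioning on the common cause A makes the spurious B–C correlation vanish; in the collider, conditioning on C manufactures an A–B correlation that is not there unconditionally.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Fork: A causes both B and C; B and C are not causally related.
A = rng.normal(size=n)
B = A + rng.normal(size=n)
C = A + rng.normal(size=n)
print(np.corrcoef(B, C)[0, 1])                  # ~0.5: spurious B-C correlation
near_zero_A = np.abs(A) < 0.1                   # crude way to "control for" A
print(np.corrcoef(B[near_zero_A], C[near_zero_A])[0, 1])   # ~0: correlation gone

# Collider: A and B jointly cause C; A and B are not causally related.
A = rng.normal(size=n)
B = rng.normal(size=n)
C = A + B + rng.normal(size=n)
print(np.corrcoef(A, B)[0, 1])                  # ~0: no A-B correlation
high_C = C > 1.0                                # conditioning on the collider
print(np.corrcoef(A[high_C], B[high_C])[0, 1])  # negative: a correlation created
```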

The method of causal graphs allows us to test the hypotheses, both by themselves and against each other, by appeal to the data; it does not tell us which hypotheses to test.

(“We collect data only after we posit the causal model,” Pearl insists, “after we state the scientific query we wish to answer. . . . This contrasts with the traditional statistical approach . . . which does not even have a causal model.”)

Sometimes the data may refute a theory. Sometimes we find that none of the data we have at hand can decide between a pair of competing causal hypotheses, but new data we could acquire would allow us to do so. And sometimes we find that no data at all can serve to distinguish the hypotheses.
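The last case is worth a small illustration (mine, not from the book): the hypotheses X → Y and Y → X can be tuned to produce exactly the same joint distribution, so no amount of observational data on X and Y alone can separate them; only an intervention or additional variables can.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothesis 1: X causes Y.
X1 = rng.normal(0.0, 1.0, n)
Y1 = 0.5 * X1 + rng.normal(0.0, np.sqrt(0.75), n)    # variance 1, correlation 0.5

# Hypothesis 2: Y causes X, with coefficients chosen to match the same joint law.
Y2 = rng.normal(0.0, 1.0, n)
X2 = 0.5 * Y2 + rng.normal(0.0, np.sqrt(0.75), n)    # variance 1, correlation 0.5

for name, X, Y in [("X -> Y", X1, Y1), ("Y -> X", X2, Y2)]:
    print(name, round(float(np.var(X)), 2), round(float(np.var(Y)), 2),
          round(float(np.corrcoef(X, Y)[0, 1]), 2))
# Same variances and correlation either way: observations alone cannot decide;
# an intervention (setting X and watching Y, or vice versa) could.
```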

… why care about causes? One reason is pure scientific curiosity: we want to understand the world, and part of that requires figuring out its hidden causal structure. But just as important, we are not mere passive observers of the world: we are also agents. We want to know how to effectively intervene in the world to prevent disaster and promote well-being. Good intentions alone are not enough.

We also need insight into how the springs and forces of nature are interconnected. So ultimately, the why of the world must be deciphered if we are to understand the how of successful action.