Category Archives: Neuroscience

Working Memory: 3-5 chunks

Source: NIH website, Feb 2010

Working memory storage capacity is important because cognitive tasks can be completed only with sufficient ability to hold information as it is processed. The ability to repeat information depends on task demands but can be distinguished from a more constant, underlying mechanism: a central memory store limited to 3 to 5 meaningful items in young adults.

Many studies indicate that working memory capacity varies among people, predicts individual differences in intellectual ability, and changes across the life span (Cowan, 2005).

As Cowan (2001) noted, many theorists with mathematical models of particular aspects of problem-solving and thought have allowed the number of items in working memory to vary as a free parameter, and the models seem to settle on a value of about 4, where the best fit is typically achieved.

The capacity-limit-as-strength camp includes diverse hypotheses. Mathematical simulations suggest that, under certain simple assumptions, searches through information are most efficient when the groups to be searched include about 3.5 items on average. A list of three items is well-structured with a beginning, middle, and end serving as distinct item-marking characteristics; a list of five items is not far worse, with two added in-between positions. More items than that might lose distinctiveness within the list. A relatively small central working memory may allow all concurrently-active concepts to become associated with one another (chunked) without causing confusion or distraction.
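The search-efficiency claim can be illustrated with a toy model (a sketch under simple assumptions, not the actual simulations Cowan cites): suppose items are organized hierarchically with g items per group, so locating one of n items means scanning about (g + 1)/2 candidates at each of log_g(n) levels. Minimizing the total number of comparisons over g lands near 3.5 regardless of n.

```python
import math

def search_cost(g, n=10_000):
    """Expected comparisons to find 1 of n items in a hierarchy with
    g items per group: (g + 1)/2 scans per level, log_g(n) levels."""
    return (g + 1) / 2 * math.log(n) / math.log(g)

# Scan candidate group sizes from 2.0 to 10.0 and pick the cheapest.
best_g = min((g / 10 for g in range(20, 101)), key=search_cost)
print(round(best_g, 1))  # → 3.6, within the 3-to-5 range
```

The optimum is independent of n, since n only scales the cost by a constant factor; the trade-off is purely between wider groups (more scanning per level) and deeper hierarchies (more levels).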

Parietal Lobe for Visual/Spatial Processing

Source: Harvard, date indeterminate

Source: MD-Health, date indeterminate

The parietal lobe processes sensory information for cognitive purposes and helps coordinate spatial relations so we can make sense of the world around us. The parietal lobe resides in the middle section of the brain behind the central sulcus, above the occipital lobe.

Related Resource: American Scientist, Nov 2006

Coxeter simply had a superior brain. It’s being studied by the neuroscientist Sandra Witelson at McMaster University in Hamilton, Ontario, and she’s also studying Einstein’s brain. Her results on Coxeter’s brain so far indicate that he, like Einstein, had an abnormally large parietal lobe—and, fittingly enough, this is the region of the brain responsible for visual thinking and spatial reasoning.


Embodied Cognition: Ideas

Source: Stanford, Dec 2015

A common assumption in traditional accounts is that concepts are context-independent amodal symbols.

There are several problems with this view, and research strongly suggests that conceptual capacities incorporate, and are structured in terms of, patterns of bodily activity. Talking or thinking about objects has been suggested to involve the reactivation of previous experiences; recruiting the same neural circuits engaged during perception of and action on those objects would allow the re-enactment of multimodal information (color, size, width, etc.).

In principle, the view that concepts are represented through abstract symbols rather than modality-specific features, and that cognition requires stable forms of representation, should be either dropped or strongly revisited.

Cognitive Skills

Source: MIT, Mar 2015

… different components of fluid intelligence peak at different ages, some as late as age 40.

… each cognitive skill they were testing peaked at a different age. For example, raw speed in processing information appears to peak around age 18 or 19, then immediately starts to decline. Meanwhile, short-term memory continues to improve until around age 25, when it levels off and then begins to drop around age 35.

For the ability to evaluate other people’s emotional states, the peak occurred much later, in the 40s or 50s.

Crystallized intelligence peaks later in life, as previously believed, but the researchers also found something unexpected: while data from the Wechsler IQ tests suggested that vocabulary peaks in the late 40s, the new data showed a later peak, in the late 60s or early 70s.


Mapping the Human Brain

Source: NYTimes, Jan 2015

… a device that imaged brain tissue with enough resolution to make out the connections between individual neurons. But drawing even a tiny wiring diagram required herculean efforts, as people traced the course of neurons through thousands of blurry black-and-white images. What the field needed, Tank said, was a computer program that could trace them automatically — a way to map the brain’s connections by the millions, opening a new area of scientific discovery. For Seung to tackle the problem, though, it would mean abandoning the work that had propelled him to the top of his discipline in favor of a highly speculative engineering project.

Seung published a paper in the prestigious journal Nature, demonstrating how the brain’s neural connections can be mapped — and discoveries made — using an ingenious mix of artificial intelligence and a competitive online game. Seung has also become the leading proponent of a plan, which he described in a 2012 book, to create a wiring diagram of all 100 trillion connections between the neurons of the human brain, an unimaginably vast and complex network known as the connectome.

With connectome mapping, Seung explained last month, it is possible to start answering questions that theorists have puzzled over for decades, including the ones that prompted him to put aside his own work in frustration. He is planning, among other things, to prove that he can find a specific memory in the brain of a mouse and show how neural connections sustain it. “I am going back to settle old scores,” he said.

The question is not whether a map can be made, but what insights it will bring. Will future generations cherish a cartographer’s work or shake their heads and deliver it up to the inclemencies?

The ur-map of this big science is the one produced by the Human Genome Project, a stem-to-stern accounting of the DNA that provides every cell’s genetic instructions. The genome project was completed faster than anyone expected, thanks to Moore’s Law, and has become an essential scientific tool. In its wake have come a proliferation of projects in the same vein — the proteome (proteins), the foldome (folding of proteins) — each promising a complete description of something or other. (One online listing includes the antiome: “The totality of people who object to the propagation of omes.”)

The Brain Initiative, the United States government’s 12-year, $4.5 billion brain-mapping effort, is a conscious echo of the genome project, but neuroscientists find themselves in a far more tenuous position at the outset. The brain might be mapped in a host of ways, and the initiative is pursuing many at once. In fact, Seung and his colleagues, who are receiving some of the funding, are working at the margins of contemporary neuroscience. Much of the field’s most exciting new technology has sought to track the brain’s activity — like functional M.R.I., with its images of parts of the brain “lighting up” — while the connectome would map the brain’s physical structure.

What makes the connectome’s relationship to our identity so difficult to understand, Seung told me, is that we associate our “self” with motion.

… a cross-disciplinary group of researchers, including Seung, hit on a new way of thinking that is described as connectionism.

The basic idea (which borrows from computer science) is that simple units, connected in the right way, can give rise to surprising abilities (memory, recognition, reasoning). In computer chips, transistors and other basic electronic components are wired together to make powerful processors. In the brain, neurons are wired together — and rewired.

Every time a girl sees her dog (wagging tail, chocolate brown fur), a certain set of neurons fire; this churn of activity is like Seung’s Colorado River. When these neurons fire together, the connections between them grow stronger, forming a memory — a part of Seung’s riverbed, the connectome that shapes thought.
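The "fire together, connections grow stronger" mechanism in this passage can be sketched as a minimal Hebbian associative memory (an illustrative toy in the Hopfield style, not anything from Seung's work): co-active units strengthen the weight between them, and the resulting weight matrix can later restore the full memory from a degraded cue.

```python
import numpy as np

rng = np.random.default_rng(0)

# One memory: a pattern of +1/-1 activity over 20 "neurons"
# (e.g. the set that fires when the girl sees her dog).
pattern = rng.choice([-1, 1], size=20)

# Hebbian rule: units that fire together strengthen their connection.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

# Recall from a degraded cue: corrupt 4 of the 20 units.
cue = pattern.copy()
cue[:4] *= -1
recalled = np.sign(W @ cue)

print(np.array_equal(recalled, pattern))  # → True: the memory is restored
```

The riverbed metaphor maps directly: activity (the water) flows through the weights (the bed) that past activity carved, so a partial cue settles back into the stored pattern.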

A typical human neuron has thousands of connections; a neuron can be as narrow as one ten-thousandth of a millimeter and yet stretch from one side of the head to the other. Only once have scientists ever managed to map the complete wiring diagram of an animal — a transparent worm called C. elegans, one millimeter long with just 302 neurons — and the work required a stunning display of resolve. Beginning in 1970 and led by the South African Nobel laureate Sydney Brenner, it involved painstakingly slicing the worm into thousands of sections, each one-thousandth the width of a human hair, to be photographed under an electron microscope.

That was the easy part. To pull a wiring diagram from the stack of images required identifying each neuron and then following it through the sections, a task akin to tracing the full length of every strand of pasta in a bowl of spaghetti and meatballs, using pens and thousands of blurry black-and-white photos. For C. elegans, this process alone consumed more than a dozen years. When Seung started, he estimated that it would take a single tracer roughly a million years to finish a cubic millimeter of human cortex — meaning that tracing an entire human brain would consume roughly one trillion years of labor. He would need a little help.

In 2012, Seung started EyeWire, an online game that challenges the public to trace neuronal wiring — now using computers, not pens — in the retina of a mouse’s eye. Seung’s artificial-intelligence algorithms process the raw images, then players earn points as they mark, paint-by-numbers style, the branches of a neuron through a three-dimensional cube. The game has attracted 165,000 players in 164 countries. In effect, Seung is employing artificial intelligence as a force multiplier for a global, all-volunteer army that has included Lorinda, a Missouri grandmother who also paints watercolors, and Iliyan (a.k.a. @crazyman4865), a high-school student in Bulgaria who once played for nearly 24 hours straight. Computers do what they can and then leave the rest to what remains the most potent pattern-recognition technology ever discovered: the human brain.

“Blink” Creativity

Source: Fast Company, Nov 2014

In the 1990s, cognitive scientists John Kounios and Mark Beeman started studying the insightful moment when you’re suddenly able to see things differently, also known as the “aha!” or eureka moment.

… right before the problem is presented, activity in the visual part of an analytical person’s brain would amp up to take in as much information as possible. On the other hand, the visual cortex would shut down for those who don’t solve problems in a methodical way, which allows them to block out the environment, look inward, and “find and retrieve subconscious ideas,” says Kounios.

The Visual Paradox

While more creative people shut down visual information before their “aha!” moment, they tend to take in more visual information than others on a daily basis. Kounios says that when these people walk down the street, they tend to study others, take in information, and may seem very scattered about their own agenda. However, the information they take in and synthesize may be a product of unconscious processing for years before ideas emerge.

Those who are more analytical are more focused with their attention. When they walk down the street, they are focused on where they’re going and how they’re going to get there. They tend not to stray into different areas of thought.

What you can do is be receptive and expose yourself to a lot of insight triggers. Positive moods also tend to promote eureka moments; anxiety, by contrast, promotes analytical thought.

Lastly, Kounios advises people who want epiphanies to get more sleep.

TED AI xPrize

Source: xPrize website, 2014

On March 20, from the TED2014 stage, Chris Anderson and Peter Diamandis join forces to announce the A.I. XPRIZE presented by TED, a modern-day Turing test to be awarded to the first A.I. to walk or roll out on stage and present a TED Talk so compelling that it commands a standing ovation from you, the audience. The detailed rules are yet to be created, because we want your help in deciding what they should be.