Category Archives: Math

In Memory of V. Voevodsky


The Mathematical Geometry of Viruses

Source: Quanta, Oct 2017

… a triumph in applying mathematical principles to the understanding of biological entities. It may also eventually help to revolutionize the prevention and treatment of viral diseases in general by opening up a new, potentially safer way to develop vaccines and antivirals.

A Geodesic Insight

In 1962, the biologist-chemist duo Donald Caspar and Aaron Klug published a seminal paper on the structural organization of viruses. Among a series of sketches, models and X-ray diffraction patterns that the paper featured was a photograph of a building designed by Richard Buckminster Fuller, the inventor and architect: It was a geodesic dome, the design for which Fuller would become famous. 
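The structural principle at the heart of that 1962 paper is quasi-equivalence: icosahedral capsids are indexed by a triangulation number T = h² + hk + k² for non-negative integers h and k (not both zero), and a T-capsid is built from 60T protein subunits. The formula is standard Caspar-Klug theory rather than something quoted in the article; the sketch below simply enumerates the allowed T values:

```python
# Caspar-Klug triangulation numbers: T = h^2 + h*k + k^2 for
# non-negative integers h, k (not both zero). A capsid of class T
# is assembled from 60*T protein subunits.
def t_numbers(limit):
    ts = {h * h + h * k + k * k
          for h in range(limit) for k in range(limit)} - {0}
    return sorted(t for t in ts if t <= limit * limit)

print(t_numbers(4))  # -> [1, 3, 4, 7, 9, 12, 13]
```

Note that not every integer occurs: T = 2 and T = 5, for instance, cannot be written in this form, which is why capsids with 120 or 300 subunits are absent from the classical classification.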

… necessary once more for an outside approach — made possible by theories in pure mathematics — to provide insights into the biology of viruses.

… with knowledge of structures, “we could make an impact on understanding how viruses function, how they assemble, how they infect, how they evolve.” She didn’t look back: She has spent her time since then working as a mathematical biologist, using tools from group theory and discrete math to continue where Caspar and Klug left off. “We really developed this integrative, interdisciplinary approach,” she said, “where the math drives the biology and the biology drives the math.”

“If you talk to biologists,” Holmes-Cerfon said, “the language they use is so different from the language they use in physics and math. The questions are different, too.” The challenge for mathematicians is tied to their willingness to seek out questions whose answers inform the biology. One of Twarock’s real talents, she said, “is doing that interdisciplinary work.”

A Spinning Calabi-Yau Shape

Source: Wolfram.com, date indeterminate

String theory predicts the existence of more dimensions than the three space dimensions and one time dimension we are all familiar with.

According to string theory, there are additional dimensions that we are unfamiliar with because they are curled up into complicated shapes that can only be seen at very small scales. If we could shrink to this tiny, Planck-sized scale, we would see that at every point in 3D space there are six additional dimensions to explore. This animation shows a Calabi-Yau surface, a projection of these higher dimensions into the more familiar dimensions we are aware of.

The Monty Hall Problem

Source: Zero Hedge, Oct 2017

In September 1990, Marilyn vos Savant devoted one of her columns to a reader’s question, which presented a variation of the Monty Hall Problem: 

“Suppose you’re on a game show, and you’re given the choice of three doors. Behind one door is a car, behind the others, goats. You pick a door, say #1, and the host, who knows what’s behind the doors, opens another door, say #3, which has a goat. He says to you, ‘Do you want to pick door #2?’ Is it to your advantage to switch your choice of doors?”

“Yes; you should switch,” she replied. “The first door has a 1/3 chance of winning, but the second door has a 2/3 chance.”

Though her answer was correct, a vast swath of academics responded with outrage. In the following months, vos Savant received more than 10,000 letters — including a pair from the Deputy Director of the Center for Defense Information and a Research Mathematical Statistician from the National Institutes of Health — contending that she was entirely incompetent.

Debunking the Monty Hall Problem

Since two doors (one concealing a car, and the other a goat) remain after the host opens door #3, most would assume that the probability of selecting the car is 1/2. This is not the case.

“The winning odds of 1/3 on the first choice can’t go up to 1/2 just because the host opens a losing door,” writes vos Savant. Indeed, if you map out six games exploring all possible outcomes, it becomes clear that switching doors results in winning two-thirds (66.6%) of the time, and keeping your original door results in winning only one-third (33.3%) of the time:

Another way to look at this is to break down every door-switching possibility. As we’ve delineated below, 6 out of the 9 possible scenarios (two-thirds) result in winning the car: 

These results seem to go against our intuitive statistical impulses — so why does switching doors increase our odds of winning?

The short answer is that your initial odds of winning with door #1 (1/3) don’t change simply because the host reveals a goat behind door #3; instead, Hall’s action increases the odds to 2/3 that you’ll win by switching.

Here’s another way to visualize this. Imagine that instead of three doors, Monty Hall presents you with 100 doors; behind 99 of them are goats, and behind one of them is the car. You select door #1, and your initial odds of winning the car are now 1/100:

Then, let’s suppose that Monty Hall opens 98 of the other doors, revealing a goat behind each one. Now you’re left with two choices: keep door #1, or switch to door #100:

When you select door #1, there is a 99/100 chance that the car is behind one of the other doors. The fact that Monty Hall reveals 98 goats does not change these initial odds — it merely “shifts” that 99/100 chance to door #100. You can either stick with your original 1/100 pick, or switch to door #100, which carries a much higher probability of winning the car.
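Both the three-door and the hundred-door arguments can be checked empirically with a quick Monte Carlo simulation. The sketch below is illustrative (the function name, trial count and seed are arbitrary choices, not from the article):

```python
import random

def monty(doors=3, switch=True, trials=200_000, seed=1):
    """Simulate the Monty Hall game with an arbitrary number of doors.

    The host opens doors - 2 losing doors, never the player's pick and
    never the car, leaving exactly one alternative door closed.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(doors)
        pick = rng.randrange(doors)
        if pick == car:
            # Host may leave any one of the other (goat) doors closed.
            alternative = rng.choice([d for d in range(doors) if d != pick])
        else:
            # Host is forced to leave the car's door closed.
            alternative = car
        final = alternative if switch else pick
        wins += (final == car)
    return wins / trials

print(monty(doors=3, switch=True))    # ~0.667
print(monty(doors=3, switch=False))   # ~0.333
print(monty(doors=100, switch=True))  # ~0.99
```

The simulated frequencies land on the 1/3 and 2/3 vos Savant gave for the three-door game, and on roughly 99/100 for the hundred-door variant.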

Monstrous Moonshine’s Pariahs

Source: Quanta, Sep 2017

… moonshine forges deep connections between groups of symmetries, models of string theory and objects from number theory called modular forms.

In 1978, John McKay of Concordia University in Montreal noticed that the same number — 196,884 — occurs in two widely different mathematical contexts. One is as a combination of two numbers from the monster group, and the other is as a coefficient of the “j-function,” one of the simplest examples of a modular form — a type of function with repeating patterns like those in Escher’s circular angels-and-devils tilings.
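McKay’s coefficient is easy to reproduce. The j-function’s q-expansion, j = q⁻¹ + 744 + 196884q + …, follows from the classical identity j = E₄³/Δ, where E₄ is the weight-4 Eisenstein series and Δ is the modular discriminant; the identity is textbook material rather than something stated in the article. A short power-series sketch (truncation order is arbitrary):

```python
def mul(a, b, n):
    """Multiply two power series (coefficient lists) modulo q^n."""
    c = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and i + j < n:
                c[i + j] += ai * bj
    return c

def j_coefficients(n):
    """Coefficients of q^-1, q^0, q^1, ... in j = E4^3 / Delta."""
    sigma3 = lambda m: sum(d ** 3 for d in range(1, m + 1) if m % d == 0)
    e4 = [1] + [240 * sigma3(m) for m in range(1, n)]   # Eisenstein series E4
    num = mul(mul(e4, e4, n), e4, n)                    # E4^3
    delta = [1] + [0] * (n - 1)                         # Delta / q
    for m in range(1, n):
        term = [0] * n
        term[0], term[m] = 1, -1
        for _ in range(24):                             # multiply by (1 - q^m)^24
            delta = mul(delta, term, n)
    # Series division: solve num = delta * c term by term (delta[0] == 1).
    c = [0] * n
    for m in range(n):
        c[m] = num[m] - sum(delta[k] * c[m - k] for k in range(1, m + 1))
    return c

print(j_coefficients(4))  # [1, 744, 196884, 21493760]
```

The third entry is McKay’s 196884, which equals 1 + 196883, the sum of the dimensions of the monster group’s two smallest irreducible representations.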

The monster group and the j-function are connected via string theory. In a particular 24-dimensional string theory world, the j-function’s coefficients capture how strings can oscillate, while the monster controls the underlying symmetry.

… the monster group isn’t just some anomalous object forced into existence by abstract considerations. It is the symmetry group of a natural space, and it is closely connected to modular forms, which number theorists have been studying for centuries. The development gave rise to entirely new areas of mathematics and physics, and it earned Richard Borcherds, of the University of California, Berkeley, a Fields Medal in 1998.

Duncan happened to describe the new moonshine to Ono one evening, over dinner with their families. Ono had never heard of the O’Nan group, but he immediately recognized the modular forms involved. “These forms are like old friends to me,” he wrote by email.

Relating Physics and AI via Mathematics

Source: Quanta, Dec 2014

The new work, completed by Pankaj Mehta of Boston University and David Schwab of Northwestern University, demonstrates that a statistical technique called “renormalization,” which allows physicists to accurately describe systems without knowing the exact state of all their component parts, also enables artificial neural networks to categorize data as, say, “a cat” regardless of its color, size or posture in a given video.
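The coarse-graining step at the heart of renormalization can be illustrated with Kadanoff-style block spins: microscopic detail is averaged away while the large-scale signal survives. This is a textbook toy model, not the construction Mehta and Schwab actually used:

```python
import random

def block_spin(spins, block=3):
    """One coarse-graining step: replace each block of spins with a
    single spin carrying the block's majority sign."""
    return [1 if sum(spins[i:i + block]) > 0 else -1
            for i in range(0, len(spins) - block + 1, block)]

rng = random.Random(0)
# A noisy 1D chain whose large-scale tendency is "spin up".
fine = [1 if rng.random() < 0.7 else -1 for _ in range(27)]
coarse = block_spin(fine)     # 9 coarse spins
coarser = block_spin(coarse)  # 3 coarser spins
print(len(fine), len(coarse), len(coarser))  # 27 9 3
```

Each step throws away most of the microscopic configuration, yet the coarse chains still reflect the underlying “mostly up” bias, which is the sense in which renormalization extracts relevant features and discards the rest.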

“They actually wrote down on paper, with exact proofs, something that people only dreamed existed,” said Ilya Nemenman, a biophysicist at Emory University. “Extracting relevant features in the context of statistical physics and extracting relevant features in the context of deep learning are not just similar words, they are one and the same.”

… how to map the mathematics of one procedure onto the other, proving that the two mechanisms for summarizing features of the world work essentially the same way.

“But we still know that there is a coarse-grained description because our own brain can operate in the real world. It wouldn’t be able to if the real world were not summarizable.”

Tishby sees it as a hint that renormalization, deep learning and biological learning fall under the umbrella of a single idea in information theory. All the techniques aim to reduce redundancy in data. Step by step, they compress information to its essence, a final representation in which no bit is correlated with any other. Cats convey their presence in many ways, for example, but deep neural networks pool the different correlations and compress them into the form of a single neuron. “What the network is doing is squeezing information,” Tishby said. “It’s a bottleneck.”

By laying bare the mathematical steps by which information is stripped down to its minimal form, he said, “this paper really opens up a door to something very exciting.”

The Stacks Project

Source: Peter Woit website, Aug 2013

… a self-contained exposition of all the material there, which makes it different from a research textbook or the experience you’d have reading a bunch of papers.

We were quite neurotic setting it up – everything has a proof, other results are referenced explicitly, and it’s strictly linear, which is to say there’s a strict ordering of the text so that all references are always to earlier results.

Of course the field itself has different directions, some of which are represented in the Stacks project, but we had to choose a way of presenting it which allowed for this idea of linearity (of course, any mathematician thinks we can do that for all of mathematics).

… dynamic generation of dependency graphs.

The graphs show the logical dependencies between these tags, represented by arrows between nodes. You can see this structure in the above picture already.

So for example, if tag ABCD refers to Zariski’s Main Theorem, and tag ADFG refers to Nakayama’s Lemma, then since Zariski depends on Nakayama, there’s a logical dependency, which means the node labeled ABCD points to the node labeled ADFG in the entire graph.

Of course, we don’t really look at the entire graph; we look at the subgraph of results which a given result depends on. And we don’t draw all the arrows either — we only draw the arrows corresponding to direct references in the proofs. Which is to say, in the subgraph for Zariski, there will be a path from node ABCD to node ADFG, but not necessarily a direct link.
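The subgraph being described is a plain reachability computation over the direct-reference edges. In the sketch below the intermediate tag 0XY1 is invented for illustration; only ABCD and ADFG appear in the text above:

```python
def dependency_subgraph(tag, deps):
    """All tags the given tag logically depends on: the transitive
    closure of the direct-reference edges."""
    seen, stack = set(), [tag]
    while stack:
        for d in deps.get(stack.pop(), []):
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return seen

# Direct references only: ABCD ("Zariski") cites an intermediate result,
# which in turn cites ADFG ("Nakayama") -- no direct ABCD -> ADFG edge.
deps = {"ABCD": ["0XY1"], "0XY1": ["ADFG"], "ADFG": []}

print(dependency_subgraph("ABCD", deps))
```

ADFG shows up in ABCD’s subgraph through the path via 0XY1 even though no direct arrow is drawn, matching the distinction between a logical dependency and a direct reference.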