Category Archives: Math

Computer-Aided Proof for Tiling Problem

Source: Quanta, Jul 2017

A new proof by Michaël Rao, a 37-year-old mathematician at the University of Lyon in France, finally completes the classification of convex polygons that tile the plane by conquering the last holdouts: pentagons, which have resisted sorting for 99 years.

In his new computer-assisted proof, Rao identified 371 possible scenarios for how corners of pentagons might come together in a tiling, and then he checked them all. In the end, his algorithm determined that only the 15 known pentagon families can do it. His proof closes the field of convex polygons that tile the plane at 15 pentagons, three types of hexagons — all identified by Reinhardt in his 1918 thesis — and all quadrilaterals and triangles. (No tilings by convex polygons with seven or more sides exist.)
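The flavor of that corner check is easy to convey in miniature. The toy sketch below is not Rao's algorithm; it only enumerates, for one concrete pentagon (I use the Cairo-tiling pentagon, taking its angles to be 120°, 120°, 90°, 120°, 90°), which combinations of its corner angles can meet around a point, that is, sum to exactly 360 degrees.

```python
# Toy corner check (not Rao's algorithm): which multisets of this
# pentagon's corner angles can meet around a point, i.e. sum to 360°?
from itertools import combinations_with_replacement

angles = [120, 120, 90, 120, 90]      # Cairo-tiling pentagon; angles sum to 540
corners = sorted(set(angles))         # distinct corner angles: 90 and 120

for k in range(3, 7):                 # try vertices where 3 to 6 corners meet
    for combo in combinations_with_replacement(corners, k):
        if sum(combo) == 360:
            print(f"{k} corners meet: {combo}")
# -> 3 corners meet: (120, 120, 120)
#    4 corners meet: (90, 90, 90, 90)
```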


Translating Mathematical Ideas Across Domains

Source: Quanta Magazine, Jun 2017

My latest story is about the mathematician June Huh, who came into the field late and by an unorthodox path. Huh’s approach to mathematics has been similarly surprising: He and his two collaborators, Eric Katz and Karim Adiprasito, solved an important problem called the Rota conjecture by figuring out how to translate ideas from one area of math to a realm where those ideas wouldn’t seem to belong.

Breakthroughs in mathematics often come by surprising means — problems that seemed hopeless become solvable when mathematicians find a new interpretation for established ideas. This means more than finding a new use for an old tool. It’s really about extending a pattern of argument that arose in one setting into another.

The correct analogue of an established mathematical idea is often not obvious, and finding it can take awhile. But once established, it becomes quite powerful. 

Complexity of Mathematical Theories

Source: Quanta Magazine, Sep 2017

The article describes an unexpected link between the sizes of infinite sets and a parallel effort to map the complexity of mathematical theories.

In the 1960s, the mathematician Paul Cohen explained why the continuum hypothesis had resisted resolution. Cohen developed a method called “forcing” that demonstrated that the continuum hypothesis is independent of the axioms of mathematics — that is, it couldn’t be proved within the framework of set theory. (Cohen’s work complemented work by Kurt Gödel in 1940 that showed that the continuum hypothesis couldn’t be disproved within the usual axioms of mathematics.)

For a model theorist, a “theory” is the set of axioms, or rules, that define an area of mathematics. You can think of model theory as a way to classify mathematical theories — an exploration of the source code of mathematics. “I think the reason people are interested in classifying theories is they want to understand what is really causing certain things to happen in very different areas of mathematics,” said H. Jerome Keisler, emeritus professor of mathematics at the University of Wisconsin, Madison.

In 1967, Keisler introduced what’s now called Keisler’s order, which seeks to classify mathematical theories on the basis of their complexity. He proposed a technique for measuring complexity and managed to prove that mathematical theories can be sorted into at least two classes: those that are minimally complex and those that are maximally complex. “It was a small starting point, but my feeling at that point was there would be infinitely many classes,” Keisler said.

It isn’t always obvious what it means for a theory to be complex. Much work in the field is motivated in part by a desire to understand that question. Keisler describes complexity as the range of things that can happen in a theory — and theories where more things can happen are more complex than theories where fewer things can happen.

A little more than a decade after Keisler introduced his order, the mathematician Saharon Shelah published an influential book, which included an important chapter showing that there are naturally occurring jumps in complexity — dividing lines that distinguish more complex theories from less complex ones. After that, little progress was made on Keisler’s order for 30 years.

Malliaris and Shelah proved that p and t are equal by cutting a path between model theory and set theory that is already opening new frontiers of research in both fields.

A Non-Math Major (Undergraduate) Might Win a Fields Medal

Source: Quanta, Jun 2017

In 2009, at Hironaka’s urging, Huh applied to a dozen or so graduate schools in the U.S. His qualifications were slight: He hadn’t majored in math, he’d taken few graduate-level classes, and his performance in those classes had been unspectacular. His case for admission rested largely on a recommendation from Hironaka. Most admissions committees were unimpressed. Huh got rejected at every school but one, the University of Illinois, Urbana-Champaign, where he enrolled in the fall of 2009.

Huh’s inadvertent proof of Read’s conjecture, and the way he combined singularity theory with graphs, could be seen as a product of his naïve approach to mathematics. He learned the subject mainly on his own and through informal study with Hironaka. People who have observed his rise over the last few years imagine that this experience left him less beholden to conventional wisdom about what kinds of mathematical approaches are worth trying. “If you look at mathematics as a kind of continent divided into countries, I think in June’s case nobody really told him there were all these borders. He’s definitely not constrained by any demarcations,” said Robbert Dijkgraaf, the director of IAS.

Crossing Boundaries

Some of the biggest leaps in understanding occur when someone extends a well-established theory in one area to seemingly unrelated phenomena in another. Think, for example, about gravitation. People have always understood that objects fall to the ground when released from a height; the heavens became far more intelligible when Newton realized the same dynamic explained the motion of the planets.

In math the same kind of transplantation occurs all the time. In his widely cited 1994 essay “On Proof and Progress in Mathematics,” the influential mathematician William Thurston explained that there are dozens of different ways to think of the concept of the “derivative.” One is the way you learn it in calculus — the derivative as a measure of infinitesimal change in a function. But the derivative appears in other guises: as the slope of a line tangent to the graph of a function, or as the instantaneous speed given by a function at a specific time. “This is a list of different ways of thinking about or conceiving of the derivative, rather than a list of different logical definitions,” Thurston wrote.
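A tiny numerical example makes the "different guises" concrete. This is only an illustrative sketch; the function s(t) = t² and the step size are arbitrary choices of mine, not anything from Thurston's essay.

```python
# One computation, several readings of the derivative (illustrative only).
def s(t):
    return t * t              # an arbitrary example function, s(t) = t^2

def derivative(f, x, h=1e-6):
    # "infinitesimal change in a function": a symmetric difference quotient
    return (f(x + h) - f(x - h)) / (2 * h)

t0 = 3.0
print(derivative(s, t0))      # ~6.0: read as instantaneous speed at time t0,
print(2 * t0)                 # or as the slope of the tangent line, 2*t0 = 6.0
```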


Does the Brain Use Structures to Think?

Source: Newsweek, Jun 2017

Scientists studying the brain have discovered that the organ builds multiverse-like structures with up to 11 dimensions, “a world we had never imagined.”

By using an advanced mathematical system, researchers were able to uncover architectural structures that appear when the brain has to process information, before they disintegrate into nothing.

Their findings, published in the journal Frontiers in Computational Neuroscience, reveal the hugely complicated processes involved in the creation of neural structures, potentially helping explain why the brain is so difficult to understand and tying together its structure with its function.

In the latest study, researchers homed in on the neural network structures within the brain using algebraic topology—a branch of mathematics used to describe networks with constantly changing spaces and structures. According to the team, this is the first time this branch of math has been applied to neuroscience.

“Algebraic topology is like a telescope and microscope at the same time. It can zoom into networks to find hidden structures—the trees in the forest—and see the empty spaces—the clearings—all at the same time,” study author Kathryn Hess said in a statement.

In the study, researchers carried out multiple tests on virtual brain tissue to find brain structures that would never appear just by chance. They then carried out the same experiments on real brain tissue to confirm their virtual findings.

They discovered that when they presented the virtual tissue with a stimulus, groups of neurons formed cliques: each neuron connected to every other neuron in the group in a very specific way, producing a precise geometric object. The more neurons in a clique, the higher its dimension.

In some cases, researchers discovered cliques with up to 11 different dimensions.
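The clique-to-dimension bookkeeping is easy to mimic on a toy graph. The sketch below is an illustration under my own assumptions (an undirected random graph and the networkx library); the study itself worked with directed cliques in reconstructed cortical microcircuits. The convention is that a clique of k neurons counts as a (k − 1)-dimensional simplex, so a 12-neuron clique is 11-dimensional.

```python
# Toy version of the clique/dimension bookkeeping on a random graph.
# Assumes networkx; the actual study used directed cliques in simulated
# cortical microcircuits, not an Erdos-Renyi graph.
import networkx as nx
from collections import Counter

G = nx.erdos_renyi_graph(n=40, p=0.3, seed=1)   # stand-in connectivity graph

dims = Counter()
for clique in nx.enumerate_all_cliques(G):      # all cliques, small to large
    dims[len(clique) - 1] += 1                  # k neurons -> (k-1)-simplex

for dim, count in sorted(dims.items()):
    print(f"dimension {dim}: {count} simplices")
```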

The assembled structures formed enclosures for high-dimensional holes that the team has dubbed cavities. Once the brain has processed the information, the cliques and cavities disappear.

“The appearance of high-dimensional cavities when the brain is processing information means that the neurons in the network react to stimuli in an extremely organized manner,” said one of the researchers, Ran Levi.

Henry Markram, director of Blue Brain Project, said the findings could help explain why the brain is so hard to understand. “The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly,” he said.

“We found a world that we had never imagined. There are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.”

The findings indicate the brain processes stimuli by creating these complex cliques and cavities, so the next step will be to find out whether or not our ability to perform complicated tasks requires the creation of these multi-dimensional structures.

In an email interview with Newsweek, Hess says the discovery brings us closer to understanding “one of the fundamental mysteries of neuroscience: the link between the structure of the brain and how it processes information.”

By using algebraic topology, she says, the team was able to discover “the highly organized structure hidden in the seemingly chaotic firing patterns of neurons, a structure which was invisible until we looked through this particular mathematical filter.”

Hess says the findings suggest that when we examine brain activity with low-dimensional representations, we only get a shadow of the real activity taking place. This means we can see some information, but not the full picture. “So, in a sense our discoveries may explain why it has been so hard to understand the relation between brain structure and function,” she explains.

“The stereotypical response pattern that we discovered indicates that the circuit always responds to stimuli by constructing a sequence of geometrical representations starting in low dimensions and adding progressively higher dimensions, until the build-up suddenly stops and then collapses: a mathematical signature for reactions to stimuli.

“In future work we intend to study the role of plasticity—the strengthening and weakening of connections in response to stimuli—with the tools of algebraic topology. Plasticity is fundamental to the mysterious process of learning, and we hope that we will be able to provide new insight into this phenomenon,” she added.

Near Impossible Mathematics (A Near Miss)

Source: Quanta, Jun 2017

There’s no precise definition of a near miss. There can’t be. A hard and fast rule doesn’t make sense in the wobbly real world. For now, Kaplan relies on a rule of thumb when looking for new near-miss Johnson solids: “the real, mathematical error inherent in the solid is comparable to the practical error that comes from working with real-world materials and your imperfect hands.” In other words, if you succeed in building an impossible polyhedron—if it’s so close to being possible that you can fudge it—then that polyhedron is a near miss.

Then there’s the missing-square puzzle. In this one, a right triangle is cut up into four pieces. When the pieces are rearranged, a gap appears. Where’d it come from? It’s a near miss. Neither “triangle” is really a triangle. The hypotenuse is not a straight line, but has a little bend where the slope changes from 0.4 in the blue triangle to 0.375 in the red triangle. The defect is almost imperceptible, which is why the illusion is so striking.
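The arithmetic behind the illusion is easy to verify. The sketch below assumes the classic 13 × 5 version of the puzzle, which is what the quoted slopes of 0.4 (a 2 × 5 triangle) and 0.375 (a 3 × 8 triangle) correspond to.

```python
# Checking the "bent hypotenuse", assuming the classic 13x5 puzzle.
from fractions import Fraction

blue_slope = Fraction(2, 5)    # 0.4   (2-by-5 triangle)
red_slope = Fraction(3, 8)     # 0.375 (3-by-8 triangle)
print(blue_slope == red_slope)           # False: the "hypotenuse" bends

# A truly straight hypotenuse of a 13x5 triangle passes through
# y = 25/13 ~ 1.923 at x = 5, not through the bend point at y = 2.
print(float(Fraction(25, 13)))           # 1.923...

# The sliver between the two bent "hypotenuses" is a thin quadrilateral
# with corners (0,0), (5,2), (13,5), (8,3); the shoelace formula gives:
pts = [(0, 0), (5, 2), (13, 5), (8, 3)]
area = abs(sum(x1 * y2 - x2 * y1
               for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))) / 2
print(area)                              # 1.0, exactly the "missing" square
```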

Sometimes near misses arise within the realm of mathematics, almost as if mathematics is playing a trick on itself. In the episode “Treehouse of Horror VI” of The Simpsons, mathematically inclined viewers may have noticed something surprising: the equation 1782¹² + 1841¹² = 1922¹².

It seemed for a moment that the screenwriters had disproved Fermat’s Last Theorem, which states that an equation of the form xⁿ + yⁿ = zⁿ has no integer solution when n is larger than 2. If you punch those numbers into a pocket calculator, the equation seems valid. But if you do the calculation with more precision than most hand calculators can manage, you will find that the twelfth root of the left side of the equation is 1921.999999955867 …, not 1922, and Fermat can rest in peace. It is a striking near miss — off by less than a 10-millionth.
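Nothing more than arbitrary-precision arithmetic is needed to check this; here is a minimal sketch that redoes the calculation in Python.

```python
# Redo the Simpsons "identity" with exact integers and high precision.
from decimal import Decimal, getcontext

lhs = 1782**12 + 1841**12
rhs = 1922**12
print(lhs == rhs)        # False
print(rhs - lhs)         # a large positive integer, so the sides differ

getcontext().prec = 50
root = (Decimal(lhs).ln() / 12).exp()   # twelfth root of the left side
print(root)              # 1921.999999955867..., not 1922
```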

Gromov Quotes

Source: Simons Foundation, Dec 2014

“What is mathematics? The question is absurd,” he said.

Gromov talks about his notion of “a bug on a leaf” and what he calls the “ergologic” of how the brain does math. These are the sorts of things he thinks a lot about, at least lately.

Gromov’s first bombshell was the homotopy principle, or “h-principle,” a general way of solving partial differential equations. “The geometric intuition behind the h-principle is something like this,” explained Larry Guth of the Massachusetts Institute of Technology. “If you had a sweater and you wanted to put it into a box, then because the sweater is soft, it is easy to put it into the box, and there are lots of ways to do it. But if you had to write a list of totally precise instructions about exactly how to put the sweater into the box, it would actually be hard and kind of complicated.”

In mathematics, the question was whether some high-dimensional object could be embedded into a given space. “And the only way to deal with high-dimensional objects, at least traditionally,” said Guth, “was to write down equations that say precisely where everything goes, and it’s hard to do that. Like the situation with the sweater, the only way that we could describe how to put the sweater into the box was to write a completely precise list of instructions about exactly how to do it, and it makes it look as if it is complicated. But Gromov found a good way of capturing the idea that the sweater is very soft, hence you can do almost anything and it will fit into the box.”

Dusa McDuff of Barnard College was similarly impressed when she got to know Gromov at Stony Brook. “Gromov has a reputation as being very wild, having very wild ideas. He builds these amazing, powerful arguments out of almost nothing.”

Another of Gromov’s revolutionary contributions, achieved at Stony Brook, involved what he called the “symplectic circus act,” passing a big ball through a small hole (with a family of motions preserving the symplectic structure). The crucial question was: Could this feat be done or not?

As Blaine Lawson of Stony Brook recalls, “Gromov said the answer to this question was the key to symplectic geometry. And he finally figured it out some seven years later. And that paper spawned a field” — a field that has since attracted many of the brightest young geometers and topologists, a field spanning geometry and topology, now called symplectic topology. Historically, noted Lawson, symplectic geometry had been around for a very long time. “But people thought about the subject from the point of view of classical mechanics, dating back to the nineteenth century. The main change in the field came with Gromov, when he came up with his amazing theorems. He started a revolution.”

Gromov also revolutionized a certain type of group theory, the area of hyperbolic groups, importing analysis and differential geometry, and developing (albeit to a “superficial” extent, according to Gromov) William Thurston’s “fantastic” topological vision of geometric group theory. “The way I see the picture,” said Gromov, “is that … we took two different, but sometimes overlapping, routes: Thurston concentrated on the most beautiful and difficult aspects of the field (hyperbolic 3-manifolds) and myself on the most general ones (hyperbolic groups).”

Guth read Gromov’s paper “Filling Riemannian Manifolds” many, many times, and wrote a translation (for fellow mathematicians) as well as two overview expository essays. “When I was studying Misha’s work, I thought I had a good idea about how he thought about things and his contributions,” said Guth. “And then later, whenever I talked with him, I was always completely surprised and kind of shocked by whatever he would say. It turned out I had no understanding about what he thought about things.”

“Misha has a very loose and free-ranging imagination,” said Lawson. “He loves to play with things and see what’s going on, and when he does finally get to the real point of things he just runs away with it. It’s a real spirit of originality. And he’s very much like that in person. As an individual he’s full of ideas that just flow and flow and flow.”