Category Archives: Math

Hilbert’s 24th Problem

Source: Royal Society Publishing, Jan 2019

In 2000, Rüdiger Thiele [1] found in a notebook of David Hilbert, kept in Hilbert’s Nachlass at the University of Göttingen, a small note concerning a 24th problem. As Hilbert wrote, he had considered including this problem in his famous problem list for the International Congress of Mathematicians in Paris in 1900.

An English translation was given by Thiele as follows:

The 24th problem in my Paris lecture was to be: Criteria of simplicity, or proof of the greatest simplicity of certain proofs. Develop a theory of the method of proof in mathematics in general. Under a given set of conditions there can be but one simplest proof. Quite generally, if there are two proofs for a theorem, you must keep going until you have derived each from the other, or until it becomes quite evident what variant conditions (and aids) have been used in the two proofs. Given two routes, it is not right to take either of these two or to look for a third; it is necessary to investigate the area lying between the two routes. Attempts at judging the simplicity of a proof are in my examination of syzygies and syzygies between syzygies. The use or the knowledge of a syzygy simplifies in an essential way a proof that a certain identity is true. Because any process of addition [is] an application of the commutative law of addition etc. [and because] this always corresponds to geometric theorems or logical conclusions, one can count these [processes], and, for instance, in proving certain theorems of elementary geometry (the Pythagoras theorem, [theorems] on remarkable points of triangles), one can very well decide which of the proofs is the simplest.

(a) What is a proof?

Even the notion of mathematical proof is still a controversial issue (see §3a(i)). Its traditional understanding is challenged, on the one hand, by recent developments in computer-assisted proofs, notably in the context of ‘big proofs’; see, e.g., the collection of papers in [14] or [15].

Besides formal proofs, one should not neglect alternative concepts like ‘Proofs without Words’ and visual representations in general (see §3a(iii)).

(c) A theory of the method of proof

Hilbert and his school developed in the 1920s proof theory as a new discipline in the emerging field of mathematical logic. This proof theory was conceived, first of all, to serve Hilbert’s Programme, the attempt to provide finitist consistency proofs of formalized mathematical theories. It succeeded, at least, in the aim of transforming mathematical proofs themselves into mathematical objects which can be studied by mathematical tools.

It is rather questionable whether length, relative to the specific proof system under consideration, would even be a good measure of simplicity. For the very example of Pythagoras’ theorem, mentioned by Hilbert, Loomis collected more than 350 different proofs [32], and it is more than doubtful that one would like to pick the shortest of all these proofs as the simplest.

Thiele [1, §9] draws our attention to the relation of simplicity to complexity. In the specific area of computational complexity, we are confronted with questions related to Hilbert’s 24th problem, for instance, by asking for a notion of the simplest algorithm.

Alan Cain, in Visual thinking and simplicity of proof [48], draws attention to ‘how spatial thinking interacts with simplicity in [informal] proof’ by analysing some concrete examples of spatial proofs in mathematics in comparison with non-spatial ones. He argues that diagrams and spatial thinking can contribute to simplicity in a number of ways.

Michael Kinyon, in Proof simplification and automated theorem proving [52], gives a detailed analysis of a proof in algebra which can be simplified by an automated theorem prover using length of proof as one particular measure.

AI Helps Mathematics

Source: Quanta, May 2020

Guillaume Lample and François Charton developed a successful first approach to solving symbolic math problems with neural networks. Their method didn’t involve number crunching or numerical approximations. Instead, they played to the strengths of neural nets, reframing the math problems in terms of a problem that’s practically solved: language translation.

… produce precise solutions to complicated integrals and differential equations — including some that stumped popular math software packages with explicit problem-solving rules built in.

The new program exploits one of the major advantages of neural networks: They develop their own implicit rules. As a result, “there’s no separation between the rules and the exceptions,” said Jay McClelland, a psychologist at Stanford University who uses neural nets to model how people learn math. In practice, this means that the program didn’t stumble over the hardest integrals. In theory, this kind of approach could derive unconventional “rules” that could make headway on problems that are currently unsolvable, by a person or a machine — mathematical problems like discovering new proofs, or understanding the nature of neural networks themselves.

To allow a neural net to process the symbols like a mathematician, Charton and Lample began by translating mathematical expressions into more useful forms. They ended up reinterpreting them as trees — a format similar in spirit to a diagrammed sentence. Mathematical operators such as addition, subtraction, multiplication and division became junctions on the tree. So did operations like raising to a power, or trigonometric functions. The arguments (variables and numbers) became leaves. The tree structure, with very few exceptions, captured the way operations can be nested inside longer expressions.
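
As a rough sketch of this encoding (the names and structure here are illustrative, not the authors’ actual code), an expression such as 3x + sin(x) becomes a small tree whose internal nodes are operators and whose leaves are arguments, and the tree can then be flattened into the kind of token sequence a translation model consumes:

```python
from dataclasses import dataclass

@dataclass
class Node:
    label: str            # an operator name, a variable, or a number
    children: tuple = ()  # empty tuple for leaves (variables and numbers)

def to_prefix(node: Node) -> list:
    """Flatten an expression tree into prefix notation, one natural
    way to turn nested operations into a flat token sequence."""
    tokens = [node.label]
    for child in node.children:
        tokens += to_prefix(child)
    return tokens

# 3*x + sin(x): operators become internal nodes, arguments become leaves.
expr = Node("+", (
    Node("*", (Node("3"), Node("x"))),
    Node("sin", (Node("x"),)),
))

print(to_prefix(expr))  # ['+', '*', '3', 'x', 'sin', 'x']
```

Prefix order needs no parentheses, so the flattened sequence stays faithful to the nesting of the original expression.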

Another possible direction for the neural net to explore is the development of automated theorem generators. Mathematicians are increasingly investigating ways to use AI to generate new theorems and proofs, though “the state of the art has not made a lot of progress,” Lample said. “It’s something we’re looking at.”

John Conway (1937-2020)

Source: Quanta, Apr 2020

In modern mathematics, many of the biggest advances are great elaborations of theory. Mathematicians move mountains, but their strength comes from tools, highly sophisticated abstractions that can act like a robotic glove, enhancing the wearer’s strength.

John Conway was a throwback, a natural problem-solver whose unassisted feats often left his colleagues stunned.

“Every top mathematician was in awe of his strength. People said he was the only mathematician who could do things with his own bare hands,” said Stephen Miller, a mathematician at Rutgers University. “Mathematically, he was the strongest there was.”

Conway had a tendency — perhaps unparalleled among his peers — to jump into an area of mathematics and completely change it.

In a 1979 paper called “Monstrous Moonshine,” Conway and Simon Norton conjectured a deep and surprising relationship between properties of the monster group and properties of a distant object in number theory called the j-function.

They predicted that the dimensions in which the monster group operates match, almost exactly, the coefficients of the j-function.
A decade later, Richard Borcherds proved Conway and Norton’s “moonshine” conjecture, work that helped win him a Fields Medal in 1998.
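
The numerical coincidence behind the conjecture, first noticed by John McKay, is easy to state: the coefficients of the j-function decompose into dimensions of the smallest irreducible representations of the monster group (1, 196883, 21296876, …):

```latex
j(\tau) = q^{-1} + 744 + 196884\,q + 21493760\,q^{2} + \cdots,
\qquad q = e^{2\pi i \tau},
\\[4pt]
196884 = 1 + 196883, \qquad 21493760 = 1 + 196883 + 21296876.
```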

Nine years before moonshine, Conway’s style of hands-on mathematics led him to a breakthrough in an entirely different area. In the field of topology, mathematicians study the properties of knots, which are like closed loops of string.

Mathematicians are interested in classifying all types of knots. For example, if you attach the ends of an unknotted shoelace you get one type of knot. If you tie an overhand knot in the shoelace and then connect the ends, you get another.

In the 19th century, a trio of British and American scientists — Thomas Kirkman, Charles Little and Peter Tait — labored to create a kind of periodic table of knots. Over the course of six years they classified the first 54 knots.

Conway, in a 1970 paper, came up with a more efficient way of doing the same job. His description — known as Conway notation — made it much easier to diagram the tangles and overlaps in a knot.

“What Little did in six years, it took him an afternoon,” said Marc Lackenby, a mathematician at the University of Oxford who studies knot theory.

And that wasn’t all. In the same paper, Conway made another major contribution to knot theory. Mathematicians studying knots have different types of tests they apply, which typically act as invariants, meaning that if the results come out as different for two knots, then the knots are different.

One of the most venerable tests in knot theory is the Alexander polynomial — a polynomial expression that’s based on the way a given knot crosses over itself. It’s a highly effective test, but it’s also slightly ambiguous. The same knot could yield multiple different (but very closely related) Alexander polynomials.

Conway managed to refine the Alexander polynomial, ironing out the ambiguity. The result was the invention of the Conway polynomial, which is now a basic tool learned by every knot theorist.
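
In standard modern notation (a textbook formulation, not quoted from the article), the Conway polynomial is pinned down by its value on the unknot and a “skein relation” linking three diagrams that differ at a single crossing, and the Alexander polynomial is recovered by a substitution:

```latex
\nabla_{\mathrm{unknot}}(z) = 1, \qquad
\nabla_{L_{+}}(z) - \nabla_{L_{-}}(z) = z\,\nabla_{L_{0}}(z),
\qquad
\Delta_{L}(t) = \nabla_{L}\!\bigl(t^{1/2} - t^{-1/2}\bigr),
```

where L₊, L₋ and L₀ are identical diagrams except at one crossing (crossed positively, crossed negatively, or smoothed). The substitution shows how the Conway polynomial determines the Alexander polynomial while avoiding its ambiguity up to signs and powers of t.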

“He’s famous for coming in and doing things his own way. He definitely did that with knots, and it had a lasting influence,” Lackenby said.

He’s perhaps best known for creating the “Game of Life,” a mesmerizing computer program in which collections of cells evolve into new configurations based on a few simple rules.
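
Those rules fit in a few lines. Here is a minimal sketch in Python (the implementation style is mine; the birth/survival rules are Conway’s): a live cell survives with two or three live neighbours, a dead cell is born with exactly three, and everything else dies or stays empty.

```python
from itertools import product

def step(live: set) -> set:
    """Advance one generation: count each cell's live neighbours,
    then apply Conway's birth (3) and survival (2 or 3) rules."""
    counts = {}
    for x, y in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                neighbour = (x + dx, y + dy)
                counts[neighbour] = counts.get(neighbour, 0) + 1
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": after four generations the same shape reappears,
# shifted one cell diagonally, and it keeps travelling forever.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))
```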

Great memories of Conway available here:  https://www.scottaaronson.com/blog/?p=4732

Hanging out with Conway @ Princeton

Metaphors when Doing Math

Source: Quanta, Mar 2020

Metaphors abound when people talk about doing math. You’re exploring a continent, or building a treehouse, or baking a cake. Or wandering in the fog. That’s partly because the primary experience of high-level mathematics is just about impossible for non-mathematicians to perceive directly. But even among experts, effective communication often requires allusion. Mathematical ideas are subtle and complicated. Expressing those ideas is like trying to put a powerful emotion into words or, to draw yet another analogy, like narrating a dream you’re rapidly forgetting.

“They are these vague, inchoate thoughts that are not even formulated at the level of language,” the mathematician David Futer told me.

Related Resource: Argumatronic, Sep 2018

Conceptual metaphor is how we structure our understanding of nearly everything we can’t directly experience. For example, we structure our understanding of time in terms of spatial relationships. We think of times as being “ahead” of or “behind” us, as if we were standing on a physical timeline. One interesting thing here is that, while structuring time in terms of space is universal, the orientation is not: in some cultures the future is “ahead” and in some it is “behind”. The important point is that the inferences attached to something being “ahead” of you versus “behind” you carry over to each culture’s interpretation of time, mapped directly from the spatial inferences.

The essence of metaphor is understanding and experiencing one kind of thing in terms of another.
– George Lakoff and Mark Johnson, Metaphors We Live By

Mathematical Rainbows

Source: Quanta, Mar 2020

Ringel’s conjecture is a problem in combinatorics where you connect dots (vertices) with lines (edges) to form graphs. It predicts that there’s a special relationship between a type of large graph with 2n + 1 vertices and a proportionally smaller type of graph with n + 1 vertices.

Ringel’s conjecture says that copies of any tree you make can be used to perfectly cover, or tile, the edges of the corresponding complete graph, the same way you could use identical tiles to cover a kitchen floor.
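
The edge counts show why a perfect tiling is even conceivable (this arithmetic is implicit in the conjecture, not quoted from the article): the complete graph on 2n + 1 vertices has n(2n + 1) edges, any tree with n + 1 vertices has n edges, so a perfect tiling uses exactly 2n + 1 copies of the tree:

```latex
\binom{2n+1}{2} = \frac{(2n+1)\,2n}{2} = n(2n+1),
\qquad
\frac{n(2n+1)}{n} = 2n+1 .
```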

Statistical Analysis of Medical Screening

Source: Tofspot.blogspot.com, Mar 2020

Imagine 1,000,000 people screened for a disease that 5% of them actually have, using a test that is 95% accurate in both directions. Of the 950,000 folks who are not infected, nearly all (95%) get a clean bill of health, and of the 50,000 infected souls, nearly all (95%) get a red card. But along the way 2,500 infected people go undetected, while 47,500 uninfected people get red-carded nonetheless.

Perhaps they ate a poppy seed bagel that morning, or the test was run late at night by a dog-tired technician. In any case, you will notice that half of all those getting flagged are not in fact infected.

Consequently, the media reports that the infection rate is 9.5% [(47,500+47,500)/1,000,000] rather than 5%, although what they really mean is that the test-positive rate is 9.5%.

This also affects estimates of the mortality rates. It’s not called the Dreaded Red Squamish for nothing.

But the denominator has been inflated by the false positives, so deaths will be divided by 95,000 rather than the [unknown] 50,000. There will be 47,500 “recoveries” of people who never actually had the disease. OTOH, some of the 2,500 undetected may also shuffle off the coil of mortality, but these will be assigned to collateral conditions (heart disease, asthma, etc.)
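
The whole example reduces to a few lines of arithmetic. Here is a sketch in Python, with the population, infection rate, and test accuracy taken from the example above:

```python
population  = 1_000_000
prevalence  = 0.05   # 5% are actually infected
sensitivity = 0.95   # fraction of infected who get a red card
specificity = 0.95   # fraction of healthy who get a clean bill

infected = population * prevalence        # 50,000
healthy  = population - infected          # 950,000

true_pos  = infected * sensitivity        # 47,500 correctly red-carded
false_neg = infected - true_pos           #  2,500 undetected
false_pos = healthy * (1 - specificity)   # 47,500 wrongly red-carded
flagged   = true_pos + false_pos          # 95,000 total red cards

print(f"test-positive rate: {flagged / population:.1%}")   # 9.5%
print(f"flagged but healthy: {false_pos / flagged:.0%}")   # 50%
print(f"infections missed: {false_neg:,.0f}")              # 2,500
```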

Computing, Quantum Mechanics and Mathematics are Entangled

Source: Quanta, Mar 2020

The new proof establishes that quantum computers that calculate with entangled quantum bits or qubits, rather than classical 1s and 0s, can theoretically be used to verify answers to an incredibly vast set of problems. The correspondence between entanglement and computing came as a jolt to many researchers.

The proof’s co-authors set out to determine the limits of an approach to verifying answers to computational problems. That approach involves entanglement. By finding that limit the researchers ended up settling two other questions almost as a byproduct: Tsirelson’s problem in physics, about how to mathematically model entanglement, and a related problem in pure mathematics called the Connes embedding conjecture.

Undecidable Problems

Turing defined a basic framework for thinking about computation before computers really existed. In nearly the same breath, he showed that there was a certain problem computers were provably incapable of addressing. It has to do with whether a program ever stops.

Typically, computer programs receive inputs and produce outputs. But sometimes they get stuck in infinite loops and spin their wheels forever. When that happens at home, there’s only one thing left to do.

“You have to manually kill the program. Just cut it off,” Henry Yuen said.

Turing proved that there’s no all-purpose algorithm that can determine whether a computer program will halt or run forever. You have to run the program to find out.

In technical terms, Turing proved that this halting problem is undecidable — even the most powerful computer imaginable couldn’t solve it.
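
The proof is a short self-reference trick, easy to sketch in Python. Assume, for contradiction, that an all-purpose oracle halts(program, argument) could be written (the names here are illustrative):

```python
def halts(program, argument) -> bool:
    """Hypothetical all-purpose oracle: would return True iff
    program(argument) eventually halts. Turing proved it cannot exist."""
    raise NotImplementedError

def paradox(program):
    """Do the opposite of whatever the oracle predicts the program
    does when fed its own source."""
    if halts(program, program):
        while True:   # oracle says "halts" -> loop forever
            pass
    return            # oracle says "loops forever" -> halt at once

# Now ask about paradox(paradox). If halts(paradox, paradox) is True,
# then paradox(paradox) loops forever; if it is False, it halts at once.
# Either way the oracle is wrong, so no such oracle can exist.
```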

After Turing, computer scientists began to classify other problems by their difficulty. Harder problems require more computational resources to solve — more running time, more memory. This is the study of computational complexity.

Ultimately, every problem presents two big questions: “How hard is it to solve?” and “How hard is it to verify that an answer is correct?”

Interrogate to Verify

When problems are relatively simple, you can check the answer yourself. But when they get more complicated, even checking an answer can be an overwhelming task. However, in 1985 computer scientists realized it’s possible to develop confidence that an answer is correct even when you can’t confirm it yourself.

The method follows the logic of a police interrogation.

In computer science terms, the two parties in an interrogation are a powerful computer that proposes a solution to a problem — known as the prover — and a less powerful computer that wants to ask the prover questions to determine whether the answer is correct. This second computer is called the verifier.

To take a simple example, imagine you’re colorblind and someone else — the prover — claims two marbles are different colors. You can’t check this claim by yourself, but through clever interrogation you can still determine whether it’s true.

Put the two marbles behind your back and mix them up. Then ask the prover to tell you which is which. If they really are different colors, the prover should answer the question correctly every time. If the marbles are actually the same color — meaning they look identical — the prover will guess wrong half the time.

“If I see you succeed a lot more than half the time, I’m pretty sure they’re not” the same color, Thomas Vidick said.
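
The protocol is easy to simulate. A minimal sketch in Python (the round count and structure are illustrative): an honest prover who can really see two colours never slips up, while a bluffing prover survives k rounds with probability 2⁻ᵏ.

```python
import random

def verifier_accepts(marbles_differ: bool, rounds: int = 20) -> bool:
    """Colourblind verifier shuffles; the prover must say which hand
    holds which marble. One wrong answer exposes the bluff."""
    for _ in range(rounds):
        secret = random.choice(("left", "right"))      # verifier's shuffle
        if marbles_differ:
            answer = secret                            # prover can tell
        else:
            answer = random.choice(("left", "right"))  # forced to guess
        if answer != secret:
            return False
    return True

print(verifier_accepts(marbles_differ=True))   # always True
print(verifier_accepts(marbles_differ=False))  # True with prob. 2**-20
```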

By asking a prover questions, you can verify solutions to a wider class of problems than you can on your own.

Researchers showed that by interrogating two provers separately about their answers, you can quickly verify solutions to an even larger class of problems than you can when you only have one prover to interrogate.

Prior to the new work, mathematicians had wondered whether they could get away with approximating infinite-dimensional matrices by using large finite-dimensional ones instead. Now, because the Connes embedding conjecture is false, they know they can’t.

“Their result implies that’s impossible,” said William Slofstra, a mathematician at the University of Waterloo.

SS Chern – A Mathematician

Shiing-Shen Chern (/ɜːrn/; Chinese: 陳省身; pinyin: Chén Xǐngshēn; Mandarin: [tʂʰən.ɕiŋ.ʂən]; October 28, 1911 – December 3, 2004) was a Chinese-American mathematician and poet.

He made fundamental contributions to differential geometry and topology. He has been called the “father of modern differential geometry” and is widely regarded as a leader in geometry and one of the greatest mathematicians of the twentieth century, winning numerous awards and recognition including the Wolf Prize and the inaugural Shaw Prize.[1][2][3][4][5][6][7]

In memory of Shiing-Shen Chern, the International Mathematical Union established the Chern Medal in 2010 to recognize “an individual whose accomplishments warrant the highest level of recognition for outstanding achievements in the field of mathematics”.[8]

Quanta Highlights Math Achievements

Sources: Quanta, 2019

https://www.quantamagazine.org/with-category-theory-mathematics-escapes-from-equality-20191010/

Lurie’s books are the single, authoritative text on infinity categories. They are completely rigorous, but hard to completely grasp. They’re especially poorly suited to serving as reference manuals — it’s difficult to look up specific theorems, or to check that a specific application of infinity categories that one might encounter in someone else’s paper really works out.

https://www.quantamagazine.org/foundations-built-for-a-general-theory-of-neural-networks-20190131/

Neural networks aim to mimic the human brain — and one way to think about the brain is that it works by accreting smaller abstractions into larger ones. Complexity of thought, in this view, is then measured by the range of smaller abstractions you can draw on, and the number of times you can combine lower-level abstractions into higher-level abstractions — like the way we learn to distinguish dogs from birds.

There is power in taking small pieces and combining them at greater levels of abstraction, instead of attempting to capture all levels of abstraction at once.

https://www.quantamagazine.org/random-surfaces-hide-an-intricate-order-20190702/

Mathematicians have proved that a random process applied to a random surface will yield consistent patterns.

https://www.quantamagazine.org/how-artificial-intelligence-is-changing-science-20190311/

Generative modeling asks how likely it is, given condition X, that you’ll observe outcome Y.

Cognitive Technologies … Sentences –> Symbols

Source: Dominic Cummings blog, Jun 2019

Language and writing were cognitive technologies created thousands of years ago which enabled us to think previously unthinkable thoughts. Mathematical notation did the same over the past 1,000 years. For example, take a mathematics problem described by the 9th-century mathematician al-Khwarizmi (who gave us the word algorithm): what must be the square which, when increased by ten of its own roots, amounts to thirty-nine?

Once modern notation was invented, this could be written instead as:

x² + 10x = 39
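
And the notation pays off at once: al-Khwarizmi’s verbal recipe is exactly completing the square, three steps in modern symbols (taking the positive root, as he did):

```latex
x^{2} + 10x = 39
\;\Longrightarrow\;
x^{2} + 10x + 25 = 64
\;\Longrightarrow\;
(x+5)^{2} = 8^{2}
\;\Longrightarrow\;
x = 3 .
```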

Michael Nielsen uses a similar analogy. Descartes and Fermat demonstrated that equations can be represented on a diagram and a diagram can be represented as an equation. This was a new cognitive technology, a new way of seeing and thinking: algebraic geometry. Changes to the ‘user interface’ of mathematics were critical to its evolution and allowed us to think unthinkable thoughts (Using Artificial Intelligence to Augment Human Intelligence, see below).

Similarly, the 18th century saw the creation of data graphics to demonstrate trade figures. Before this, people could only read huge tables. This is the first data graphic:

[Figure: the first data graphic, an 18th-century chart of trade figures]

The Jedi of data visualisation, Edward Tufte, describes this extraordinary graphic of Napoleon’s invasion of Russia as ‘probably the best statistical graphic ever drawn’. It shows the losses of Napoleon’s army: starting from the Polish-Russian border, the thick band shows the size of the army at each position, while the dark lower band shows the path of Napoleon’s winter retreat from Moscow, tied to temperature and time scales (you can see some of the disastrous icy river crossings famously described by Tolstoy). NB: The Cabinet makes life-and-death decisions now with far inferior technology to this from the 19th century (see below).

[Figure: Minard’s graphic of Napoleon’s invasion of Russia]

If we look at contemporary scientific papers, they represent extremely compressed information conveyed through a very old-fashioned medium, the scientific journal. Printed journals are centuries old, but the ‘modern’ internet versions are usually similarly static. They do not show the behaviour of systems in a visual, interactive way that would let us see the connections between changing values in the models and changes in the behaviour of the system. There is no immediate connection; everything is pretty much the same as a pencil-and-paper version of the paper. In Media for Thinking the Unthinkable, Bret Victor shows how dynamic tools can transform normal static representations so systems can be explored with immediate feedback. This shows dramatically how much more richly and deeply ideas can be explored. With Victor’s tools we can interact with the systems described and immediately grasp important ideas that are hidden in normal media.