Category Archives: Math

Mathematics Has a Biological Origin

Source: The Conversation, Aug 2023

mathematics is a realisation in symbols of the fundamental nature and creativity of the mind.

Why is arithmetic universally true?

Humans have been making symbols for numbers for more than 5,500 years. More than 100 distinct notation systems are known to have been used by different civilisations, including Babylonian, Egyptian, Etruscan, Mayan and Khmer.

What is mathematics?

Taken together, these four principles structure our perception of the world so that our experience is ordered and cognitively manageable. They are like coloured spectacles that shape and constrain our experience in particular ways.

When we peer through these spectacles at the abstract universe of possibilities, we “see” numbers and arithmetic.

our results show that arithmetic is biologically-based and a natural consequence of how our perception is structured.

Although this structure is shared with other animals, only humans have invented mathematics. It is humanity’s most intimate creation, a realisation in symbols of the fundamental nature and creativity of the mind.

In this sense, mathematics is both invented (uniquely human) and discovered (biologically-based). The seemingly miraculous success of mathematics in the physical sciences hints that our mind and the world are not separate, but part of a common unity.

Knowledge Graph for Mathematics

Source: Semantic Arts, Jun 2023

This blog post is for anyone interested in mathematics and knowledge representation as they relate to career progression in today’s changing information ecosystem. Mathematics and knowledge representation share a strong common thread: both require finding good abstractions and simple, elegant solutions, and both have a foundation in set theory. The post could be used as the starting point for an accessible academic research project that deals with the foundations of mathematics while also developing commercially marketable knowledge-representation skills.

Hypothesis: Could the vast body of mathematical knowledge be put into a knowledge graph? Let’s explore, because doing so could provide a searchable database of mathematical concepts and help identify previously unrecognized connections between concepts.

Every piece of data in a knowledge graph is a semantic triple of the form:

subject – predicate – object.

A brief look through mathematical documentation reveals the frequent appearance of semantic triples of the form:

A implies B, where A and B are statements.

“A implies B” is itself a statement, equivalent to “If A then B”. Definitions, axioms, and theorems can be stated using these if/then statements. The if/then statements build on each other, starting with a foundation of definitions and axioms (statements so fundamental they are made without proof). Furthermore, the predicate “implies” is transitive, meaning an “implies” relationship can be inferred from a chain of other “implies” relationships.

… hence the possibility of programmatically discovering relationships between statements.
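As a minimal sketch of that idea (the statement names below are hypothetical placeholders, not drawn from any real corpus), each “A implies B” triple can be stored as an edge and the transitive closure computed to surface indirect relationships:

```python
# Minimal sketch: statements as nodes, "implies" edges as triples,
# and a transitive closure to surface indirect relationships.
# The statement names below are hypothetical placeholders.
from collections import defaultdict

# subject - predicate - object triples, all with the predicate "implies"
triples = [
    ("A", "implies", "B"),
    ("B", "implies", "C"),
    ("C", "implies", "D"),
]

# Build an adjacency list keyed by the antecedent statement.
implies = defaultdict(set)
for subj, _pred, obj in triples:
    implies[subj].add(obj)

def consequences(statement):
    """Return every statement reachable via a chain of 'implies' edges."""
    seen, stack = set(), [statement]
    while stack:
        current = stack.pop()
        for nxt in implies[current]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(consequences("A"))  # a set containing 'B', 'C' and 'D'; the last two are inferred
```

In a real knowledge graph the same inference would more typically be expressed declaratively, for example by declaring “implies” an owl:TransitiveProperty in RDF/OWL rather than hand-coding the traversal.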

Numbers

Source: Quanta, Jun 2023

Why is ChatGPT Bad @ Math?

Source: ReTable, Mar 2023

this artificial intelligence (AI) chatbot called ChatGPT is a language model whose abilities are constrained by the quality and quantity of the data it has been trained on. To understand the reason for this limitation and how ChatGPT works, it helps to take a closer look at ChatGPT’s (Chat Generative Pre-trained Transformer) underlying technology. ChatGPT is a text-based language model that has been developed from a limited body of data.

ChatGPT is Better at Generating Human-Like Responses Than Doing Perfect Math Calculations

Another point is that, since ChatGPT is a text-based program, it has been trained to understand and generate human language. ChatGPT’s AI language model has been shaped and refined using human feedback, and its core mechanism is known as “next-word prediction”, the defining task of a language model.

As an AI language model, ChatGPT is designed to process and generate natural language responses that sound like they were written by a human. This is achieved through the use of large amounts of training data, which allows the model to learn the patterns and structures of human language.

On the other hand, perfect math calculations require a high degree of accuracy and precision, and the ability to perform complex mathematical operations quickly and efficiently. While AI models like ChatGPT can certainly perform math calculations, they may not always be as accurate or efficient as dedicated math software or hardware.

Furthermore, the primary goal of ChatGPT is to simulate human-like conversations, which often involve more than just providing factual information. Conversations can involve humor, sarcasm, emotions, and other human-like qualities that cannot be captured through math calculations alone.

What is a Language Model?

Essentially, a language model is a computational model that is trained on a large corpus of natural language data, such as text or speech.

The goal of a language model is to be able to predict the next word or sequence of words in a sentence or phrase, based on the context of the previous words. This is accomplished through the use of statistical and machine learning algorithms, which allow the model to learn the patterns and structures of human language.

The language model used by ChatGPT can be described as determining the probability of which word comes next, based on text data and statistics. In this way the AI language model can generate a relevant and satisfying response to your question.
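To make “next-word prediction” concrete, here is a deliberately tiny sketch: a bigram model that counts word pairs in a made-up corpus and predicts the most frequent follower. (ChatGPT’s transformer is vastly more sophisticated; this only illustrates the prediction task itself.)

```python
# Toy bigram "language model": count word pairs in a tiny corpus and
# predict the most probable next word. A deliberately crude stand-in
# for the far larger transformer models described above.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent word observed after `word`, if any."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # e.g. 'cat' ('cat' and 'mat' each follow 'the' twice here)
print(predict_next("sat"))  # 'on'
```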

With the language model, the chatbot forms its answer based on your words using transformer technology, which means it is sensitive to what you write and how you express yourself. In other words, ChatGPT is a text-based language model, not a calculator or a math genius.  Just like us, its knowledge and ability are limited to the scope of the data it has.

Can We Trust ChatGPT?

Having mentioned all of these limitations of ChatGPT, a very natural question arises: “Can we trust ChatGPT with math?”

The power of ChatGPT and similar AI language models comes from their ability to generate human-like responses, not from their accuracy. From that point of view, inaccuracy in math while using ChatGPT shouldn’t be a surprise; inadequate expression or grammatical errors, however, would be a shock.

ChatGPT – Bad @ Word Problems

Source: NewsWise, Feb 2023

Shakarian, an associate professor at Arizona State University who runs Lab V-2 in the Ira A. Fulton Schools of Engineering — the lab examines challenges in the field of artificial intelligence — is not as sold on ChatGPT’s capability of higher-level reasoning. In a paper that was accepted to the Association for the Advancement of Artificial Intelligence for its spring symposium, Shakarian detailed results of a study in which he tested ChatGPT on 1,000 mathematical word problems.

“Our initial tests on ChatGPT, done in early January, indicate that performance is significantly below the 60% accuracy of state-of-the-art math word problem solvers,” Shakarian said.

Q: So, what can someone do with ChatGPT?

A: I think the practical applications in my view are probably going to be more in the creative and artistic space, as well as entertainment, where accuracy is not something that is going to be the most important thing. 

Q: What were you trying to find out with your paper and what did the results tell you?

A: When ChatGPT first came out, there were all kinds of comments about how it was bad at math.

There is a line of research in the field of natural language processing where people have studied how to create algorithms to solve mathematical word problems. Take a word problem that a junior high student would see, one that would maybe lead to a system of equations, nothing too bad, like two trains going at different speeds (to the same place). You can use algebra to solve those simultaneous equations.

One key aspect about these math word problems is that they require multiple steps of inference. This simply means that once you take a look at the problem, there’s kind of a translation step, which is taking the words and turning it into the equations. These are all multiple steps we’ve done in high school, and we wanted to see if ChatGPT could correctly do these steps. What we can conclude is one of the limitations with ChatGPT is it’s just not capable of doing good multistep logical inference. And this makes sense because the underlying technology really wasn’t designed for that.
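The translation step described above can be made concrete with a small example (the numbers are made up): two trains start 300 miles apart and head toward each other at 60 mph and 40 mph. Turning the words into equations and then solving them mechanically looks like this:

```python
# Translating a (made-up) word problem into equations, then solving them.
# Problem: two trains start 300 miles apart and travel toward each other
# at 60 mph and 40 mph. How far does the faster train travel (d), and
# after how many hours do they meet (t)?
#
#   d = 60 * t          ->  1*d - 60*t = 0
#   300 - d = 40 * t    ->  1*d + 40*t = 300
import numpy as np

A = np.array([[1.0, -60.0],
              [1.0,  40.0]])
b = np.array([0.0, 300.0])

d, t = np.linalg.solve(A, b)
print(f"They meet after {t} hours, {d} miles from the faster train's start.")
# -> They meet after 3.0 hours, 180.0 miles from the faster train's start.
```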


MIT Swept the Putnam Math Competition in 2019, 2021, & 2022

Source: MIT news (links below by date)

2019

For the first time in Putnam’s history, all five of the top spots in the contest, known as Putnam Fellows, came from a single school — MIT.

MIT students also dominated the rest of the scoreboard: nine of the next 11, eight of the next 12, and 33 of the following 80 honorable mention rankings. Among the top 192 test-takers overall, 76 were MIT students.

Among the three top scorers — Sah, Zhang, and Zhu — two earned a nearly perfect score, and one (who prefers not to be named) earned a perfect score of 120 points. This is only the fifth time in Putnam’s history that a test-taker received a perfect score.

Administered by the Mathematical Association of America, the competition included 150 MIT students among 4,229 test-takers from 570 U.S. and Canadian institutions. The six-hour exam, taken over two sessions on the first Saturday of December each year, consists of 12 problems worth 10 points each. Fewer than a fourth of all participants of this competition scored more than 10 points total, and the median score was 2.

2021

For the second time in the history of the annual William Lowell Putnam Mathematical Competition, all five of the top spots in the contest, known as Putnam Fellows, came from a single school — MIT.

The grueling six-hour exam, which features 12 proof-based math problems, was taken by 2,975 undergraduates from 427 institutions on Dec. 4, 2021.

Among the top 105 test-takers overall, 63 were MIT students.

In the history of the Putnam, only eight students have been Putnam Fellows in all four of their years, including three from MIT and a Harvard student who is now a math professor at MIT, Bjorn Poonen.

about 150 MIT students took the six-hour exam, which consists of 12 problems worth 10 points each. The top score this year was 119 out of 120 points. In comparison, the median score on the exam was four out of 120 points.

2022

Emmy Noether

Source: PLUS, Dec 2018

At the end of 1915 Albert Einstein published his general theory of relativity. It describes the force of gravity which keeps us tied to the Earth and planets in their orbits around the Sun. In fact, since gravity is the only force that reaches over long distances, general relativity describes the Universe as a whole at the scale of planets, stars and galaxies.

One thing that was much on Einstein’s mind when he was formulating general relativity was the behaviour of energy.  … energy cannot be created out of nothing, and neither can it be destroyed.

… conservation of energy drops out of Newton’s mechanics … from Newton’s second law of motion you can quite easily derive an equation which says that the energy within a physical system always remains the same.
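The derivation is indeed short. For a single particle moving in one dimension under a force derived from a potential V(x), it runs as follows (a standard textbook calculation, sketched here for completeness):

```latex
% Newton's second law for a force derived from a potential, F = -dV/dx:
m\,\ddot{x} \;=\; -\frac{dV}{dx}
% Multiply by \dot{x} and recognise a total time derivative:
\quad\Longrightarrow\quad
\frac{d}{dt}\!\left(\tfrac{1}{2}\,m\,\dot{x}^{2} + V(x)\right)
  \;=\; \dot{x}\!\left(m\,\ddot{x} + \frac{dV}{dx}\right) \;=\; 0 ,
% so the total energy E = kinetic + potential stays constant in time.
```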

General relativity didn’t prove quite so amenable, however. Einstein’s final formulation did contain an equation which, so Einstein thought, expressed the conservation of energy in just the same way as similar equations expressed it within other theories.

From conservation to symmetry

While investigating Hilbert’s claim about general relativity Noether came up with her impactful result. She showed that if in a physical theory energy is conserved, then the theory also remains the same over time: the laws of nature it describes are the same today as they were 100 years ago and will be tomorrow. It’s a stunning result. The fact that energy cannot be created or destroyed is equivalent to the laws of nature being immutable over time.

Energy is not the only thing Noether’s theorem applies to. Two other quantities we know are conserved are momentum (an object’s speed times its mass) and angular momentum, something similar to momentum for an object moving in a circle. Conservation of momentum is also illustrated by Newton’s cradle.

Conservation of angular momentum is what you see when an ice skater who is spinning on the spot pulls their limbs in close to their body or curls up in a ball. Pulling mass closer to the axis of rotation reduces the skater’s moment of inertia, and since angular momentum is conserved, the skater spins faster.

Noether’s result shows that conservation of momentum comes from a theory being the same in some place A as it is in any other place B.

Conservation of angular momentum, so Noether showed, comes from the theory being the same whether you are facing East, West, North, South or anywhere in between. This invariance — under shifts in time and space or rotation — is what we would naively expect from nature.

The associated conservation laws, however, aren’t quite so obvious. They weren’t properly discovered until the time of Newton and after.

It’s important to note that Noether’s result is entirely mathematical in nature.

She considered mathematical set-ups that could, but don’t need to, describe physical theories. These set-ups contain expressions that could represent energy, momentum and angular momentum, as well as transformations that could represent shifts in time and space, or rotations.

Noether’s proof shows how these are related mathematically without referring to physical interpretations.

In maths, when something remains the same under a transformation, we say that it is symmetric under the transformation. This chimes with our usual notion of symmetry in things we can see, such as a butterfly. We say it’s symmetric because it remains the same when you reflect it in the vertical axis running down its centre.

Noether’s result establishes a deep connection between symmetries and conservation laws and is completely general:

for every symmetry (not just the ones mentioned above)
there is some quantity that is conserved.
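In the Lagrangian language physicists usually use, the simplest textbook form of this statement (a special case of Noether’s far more general variational result) reads:

```latex
% Textbook special case of Noether's first theorem: if the Lagrangian
% L(q, \dot q) is unchanged (to first order) by q -> q + \varepsilon K(q),
% then along any solution of the equations of motion the charge
Q \;=\; \frac{\partial L}{\partial \dot q}\, K(q)
\qquad\text{satisfies}\qquad
\frac{dQ}{dt} \;=\; 0 .
% Spatial translations give conservation of momentum and rotations give
% conservation of angular momentum; time-translation invariance likewise
% yields conservation of energy.
```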

To prove it she used an area of maths that was developed especially to understand symmetries, called group theory. In Noether’s own words, her mathematical treatment can be seen as “the greatest possible group theoretic generalisation of general relativity”.

Why do we care?

Noether’s result had such a big impact because it showed how important symmetries are in physics. When conservation laws first appeared they gave physicists another angle from which to investigate physical systems. Noether’s result goes a step further. It puts mathematicians’ understanding of symmetries, which was well advanced even a hundred years ago, at physicists’ disposal.

Modern physicists have taken this idea to an extreme. Rather than formulating a theory first and then looking for its symmetries later, they first decide what symmetries their theory should possess and then see how reality fits in with that.

The approach has had startling success. Several fundamental particles, including the famous Higgs boson, were predicted to exist based on the assumption that certain (rather abstract) symmetries exist, and only later discovered in experiments. The hope is that symmetry will eventually guide us to the hotly sought-after theory of everything.

And what about Einstein?

What we have just described is just one of two results proved in Noether’s 1918 paper Invariante Variationsprobleme (“invariant variational problems”) — and it doesn’t apply to general relativity.

The symmetries we alluded to above are global transformations in the sense that they do the same thing to every point in the space they act on. If you shift every point along by a fixed distance in a fixed direction, or rotate it through a fixed angle about a fixed axis, then every point experiences exactly the same thing.

Noether’s first theorem only applies to theories whose symmetries are all global. If that’s the case, then each of the symmetries corresponds to a conservation law.

The symmetries of general relativity, however, aren’t global. The theory also remains invariant under local transformations that do different things to different points. In this case Noether’s first theorem doesn’t apply: for every symmetry there isn’t a straightforward conservation law.
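To make the distinction concrete, here is a deliberately simple illustrative example (not the specific transformations used in general relativity):

```latex
% Global: every point is shifted by the same fixed amount a
x \;\longmapsto\; x + a , \qquad a \ \text{constant},
% Local: the shift is allowed to vary from point to point
x \;\longmapsto\; x + a(x) .
```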

To deal with general relativity (and other so-called generally covariant theories) Noether proved a second theorem — and together with the first, this theorem proves Hilbert right: energy conservation does have a different status in general relativity.

“Theorem 2 gives, in terms of group theory, the proof of a related Hilbertian assertion about the failure of proper energy laws in general relativity,” Noether wrote. The exact nature of energy conservation in general relativity is tricky, so we will leave it for another time.

Einstein, for his part, was impressed by Noether’s insight. In a letter to Hilbert he wrote, “Yesterday I received from Miss Noether a very interesting paper on invariant forms. I am impressed that one can comprehend these matters from so general a viewpoint. It would not have done the old guard at Göttingen any harm had they picked up a thing or two from her.”

Freeman Dyson

Source: Plus, Jul 2013

these three bright young men, Richard Feynman and Julian Schwinger and Sin-Itiro Tomonaga, and each of them solved the problem [of the infinities] in his own way. All of them got the same answer, which obviously was right. I had the tools of quantum field theory, which they didn’t have. So I was able to put them all together and demonstrate it was all quite simple. I polished up the mathematics so that it did give finite answers.

Feynman had his completely original way of doing things which he didn’t think of as quantum field theory, but it really is. Usually the way physics is done, since Newton, is you write down equations, the laws of physics, then you calculate the results. Feynman skipped all that, he just wrote down the pictures [known as Feynman diagrams] and then wrote down the answers. There were no equations. These Feynman diagrams were his invention and turned out to be in fact a pictorial description of quantum fields.

Plus: Where did Feynman’s concept come from? Where do revolutionary ideas come from?

Dyson: It’s true of almost every great idea that you really don’t know afterwards where it came from. Our brains are random, that’s of course nature’s trick for being creative. I have identical twin grandsons, they have all the same genes but they don’t have the same brains: they develop independently. So these two identical young men have totally different brains, all the internal structure is essentially random. And that’s how our minds turn out to be so powerful: they don’t have to be programmed, they can invent things just by random chance. I think that’s where [great ideas] come from. All really good ideas are accidental. There’s some random arrangement of things buzzing around in somebody’s head, and it suddenly clicks.

Related Resource: PLUS, Sep 2002

Calculations in QED are carried out via “Feynman diagrams” which are successive approximations to QED. These are such excellent approximations that they give the most accurate numbers in the whole of science.

The simplest approximation to what happens when a (negatively charged) electron and a (positively charged) positron collide is represented by a single Feynman diagram: the two particles are annihilated, resulting in the formation of a photon, and the subsequent formation of a new electron-positron pair.

The first correction to this approximation is represented by a further diagram in which two photons are created and annihilated.

 Although the successive calculations very quickly become very difficult, the approximations are so accurate that physicists hardly ever have to do more than the first one or two.

PLUS, Sep 2003

One of the first memories I have was when I was still being put down for a nap in the afternoons. I was in the crib and not able to climb out, and I was calculating the infinite series, 1+1/2+1/4+1/8+1/16… and discovered that it came out to 2. I remember that very vividly. It was a big moment when I found that out. I just loved calculating. It’s something you’re born with – it certainly didn’t come from outside as far as I know.
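The series the young Dyson was summing in his crib is the geometric series with ratio 1/2:

```latex
1 + \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \tfrac{1}{16} + \cdots
\;=\; \sum_{n=0}^{\infty} \left(\tfrac{1}{2}\right)^{\!n}
\;=\; \frac{1}{1 - \tfrac{1}{2}}
\;=\; 2 .
```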

Dick Feynman, who was my mentor as a physicist, had very little math. He never really thought in terms of mathematics; he had a very concrete imagination. He drew pictures instead of making calculations, and somehow got the right answers.

But of course he was really a unique character, not only in his work but also in his personality. He had this remarkable vision of things, he called it the space-time approach. It meant he could reconstruct the whole of physics from his own point of view, without much in the way of equations. Instead of writing down an equation and solving it he would just write down the answer, which other people can’t do. It was combined with some sort of geometrical pictures he had in his head. There have been others like that – he’s not the only one – but it is unusual.

I work completely from a mathematical point of view, I was just the opposite. I have to have an equation that I can solve. So I do it the old-fashioned way, and then of course the fun was to understand what he was doing from my point of view, which turned out to be very interesting – I was able to translate his style into old-fashioned mathematics so other people could use it.

There is a theorem proved by Kurt Gödel in 1931, which is the Incompleteness Theorem for mathematics. The theorem says that if you have any finite formulation of mathematics, that is, a finite set of equations and a finite set of rules of logical inference, then you can write down statements within the language that is defined that way, and you can prove that they cannot be proved and that they cannot be disproved.

So given any finite system of mathematics, there are statements within the system that you can’t decide whether they are true or untrue by using the rules, which is a very powerful and profound result. It means that any formulation of mathematics is incomplete; there are always questions that you can’t answer within the system.
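For reference, a standard modern statement of the first incompleteness theorem Dyson is describing:

```latex
% First incompleteness theorem (a standard modern statement): if T is a
% consistent, recursively axiomatizable theory that interprets basic
% arithmetic, then there is a sentence G_T in the language of T with
T \nvdash G_T
\qquad\text{and}\qquad
T \nvdash \lnot G_T .
```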

In fact it means that mathematics is inexhaustible – given any particular set of rules there are questions that you can’t answer. You always have to invent new rules in order to decide new questions. So it’s guaranteed that mathematics will never come to an end, which I think is a delightful state of affairs! It came as a big shock to the mathematicians in 1931, because they had had dreams of solving all the problems. The fact that you can’t solve all the problems I think is much better.

The question is: does that also apply to physics? I think it does, because in some sense physics includes mathematics, and anything you can say about the physical world you can say in terms of mathematics. Therefore if mathematics is inexhaustible then physics is also inexhaustible. I think that’s also consoling to physicists, or it should be, that they won’t ever come to an end of problems either.

Bill Thurston – A Visual Mathematician

Source: Quora, Jul 201

My brother-in-law, the great mathematician Bill Thurston, said he figured out how to do it when he was a graduate student. Nobody believed him until he started discovering a huge number of theorems. How did he find them? He said he just visualized the geometry in 4 dimensions, and he could notice the regularities that gave him his insights.

Related Resources:

thirty years ago, the mathematician William Thurston articulated a grand vision: a taxonomy of all possible finite three-dimensional shapes.

Thurston, a Fields medalist … had an uncanny ability to imagine the unimaginable: not just the shapes that live inside our ordinary three-dimensional space, but also the far vaster menagerie of shapes that involve such complicated twists and turns that they can only fit into higher-dimensional spaces. Where other mathematicians saw inchoate masses, Thurston saw structure: symmetries, surfaces, relationships between different shapes.

At the core of Thurston’s vision was a marriage between two seemingly disparate ways of studying three-dimensional shapes: geometry, the familiar realm of angles, lengths, areas and volumes, and topology, which studies all the properties of a shape that don’t depend on precise geometric measurements — the properties that remain unchanged if the shape gets stretched and distorted like Silly Putty.

Thurston’s key insight was that it is in the union of geometry and topology that three-dimensional shapes, or “three-manifolds,” can be understood.

Thurston’s mathematical “children” manifest his style, wrote Richard Brown of Johns Hopkins University. “They seem to see mathematics the way a child views a carnival: full of wonder and joy, fascinated with each new discovery, and simply happy to be a part of the whole scene.”

John Carlos Baez, Jun 2013

Structure (Order) from Chaos

Source: Quanta, Apr 2022

The Kahn-Kalai conjecture is very broad — it’s written in the abstract language of sets and their elements — but it can be understood by considering a simple case. First, imagine a graph: a set of points, or vertices, connected by lines, or edges. To make a random graph, take a biased coin — one that lands on heads with a probability of 1%, or 30%, or any other percentage between zero and 100 — and flip it once for a given pair of vertices.

If the coin lands on heads, connect those vertices with an edge; if the coin lands on tails, don’t. Repeat this process for every possible pair of vertices.

Mathematicians want to know when such a graph is likely to have some sort of interesting structure. Perhaps it will contain a triangle. Or maybe it will have a Hamiltonian cycle, a chain of edges that passes through every vertex exactly once. It’s possible to think about any property, so long as it is “increasing” — that is, if adding more edges to a graph that already contains the property will not destroy the property.
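A minimal sketch of that coin-flipping construction in code (with arbitrary values of n and p), checking for the simplest increasing property mentioned above, a triangle:

```python
# Build a random graph G(n, p): flip a biased coin once per pair of
# vertices, and add the edge on heads. Then check for a triangle,
# the simplest "increasing" property mentioned above.
import itertools
import random

def random_graph(n, p, seed=None):
    rng = random.Random(seed)
    return {frozenset(pair)
            for pair in itertools.combinations(range(n), 2)
            if rng.random() < p}          # "heads" with probability p

def has_triangle(edges, n):
    return any(frozenset((a, b)) in edges and
               frozenset((b, c)) in edges and
               frozenset((a, c)) in edges
               for a, b, c in itertools.combinations(range(n), 3))

edges = random_graph(n=50, p=0.05, seed=0)
print(len(edges), "edges; triangle present:", has_triangle(edges, 50))
```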

If the probability of the coin turning up heads is low, edges will be rare, and properties like Hamiltonian cycles are not likely to arise.

But if you dial up the probability, something strange happens. Each property has what’s called a threshold: a probability at which the structure emerges, often very abruptly.

Just as ice crystals form when the temperature dips below zero degrees Celsius, the emergence of a particular property suddenly becomes extremely likely as more edges get added to the graph. When edges are added to a random graph of N vertices with a probability of less than log(N)/N, for instance, the graph is unlikely to contain a Hamiltonian cycle. But when that probability is adjusted to be just a hair greater than log(N)/N, a Hamiltonian cycle becomes extremely likely.
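Because log(N)/N is an asymptotic statement, small graphs only hint at the effect, but even a brute-force experiment (toy parameters, not a proof) gives a feel for how the frequency of Hamiltonian cycles changes as p moves through that region:

```python
# Rough numerical illustration of the Hamiltonian-cycle threshold near
# log(N)/N, using brute force on small random graphs (toy parameters).
import itertools
import math
import random

def random_graph(n, p, rng):
    return {frozenset(e) for e in itertools.combinations(range(n), 2)
            if rng.random() < p}

def has_hamiltonian_cycle(edges, n):
    # Fix vertex 0 and try every ordering of the others (fine for tiny n).
    for perm in itertools.permutations(range(1, n)):
        cycle = (0,) + perm + (0,)
        if all(frozenset((a, b)) in edges for a, b in zip(cycle, cycle[1:])):
            return True
    return False

n, trials = 8, 200
rng = random.Random(1)
threshold = math.log(n) / n
for p in (0.5 * threshold, threshold, 2.0 * threshold):
    hits = sum(has_hamiltonian_cycle(random_graph(n, p, rng), n)
               for _ in range(trials))
    print(f"p = {p:.3f}: Hamiltonian cycle found in {hits}/{trials} graphs")
```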

Mathematicians want to determine such thresholds for various properties of interest. “Thresholds are maybe the most basic thing you’d try to understand,” Fox said. “I look at a random object; does it have the property that I’m interested in?” Yet while the threshold has been calculated for Hamiltonian cycles and some other specific structures, in most cases it remains very difficult to determine a precise threshold, or even a good estimate of one.

So mathematicians often rely on an easier computation, one that provides a minimum possible value, or lower bound, for the threshold. This “expectation threshold” is calculated by essentially taking a weighted average. “The nice thing about this expectation threshold is it’s very easy to calculate,” said David Conlon, a mathematician at the California Institute of Technology. “Generally speaking, you can calculate this expectation threshold in like two lines for almost anything.”

In 2006, Kahn and Kalai posited that this was actually the worst-case scenario. Their eponymous conjecture states that the gap between the expectation threshold and the true threshold will never be greater than a logarithmic factor. The conjecture, according to Conlon, “essentially takes what is the central question in random graphs and gives a general answer for it.”
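Stated formally (this is the usual formulation from the literature, not a quote from the article), the conjecture asserts a universal constant K such that, for every finite ground set and every increasing family of its subsets,

```latex
% Usual formulation from the literature: there is a universal constant K
% such that for every finite ground set X and every increasing family
% \mathcal{F} of subsets of X,
p_c(\mathcal{F}) \;\le\; K \, q(\mathcal{F}) \, \log \ell(\mathcal{F}) ,
% where p_c is the threshold, q the expectation threshold, and
% \ell(\mathcal{F}) the size of a largest minimal element of \mathcal{F}.
```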

But that’s just a simple case. The conjecture pertains to far more than random graphs. If true, it holds for random sequences of numbers, for generalizations of graphs called hypergraphs, and for even broader types of systems. That’s because Kahn and Kalai wrote their statement in terms of abstract sets.

Random graphs constitute one specific case — a random graph can be thought of as a random subset of the set of all possible edges — but there are many other objects that fall within the conjecture’s purview. “Weirdly, when you’re dealing with graphs, proving it in that context would be very hard,” Conlon said. “But somehow, jumping to this abstract setting reveals the navel of the thing.”

It was this generality that made the statement seem so unbelievable. “It was a very brave conjecture,” said Shachar Lovett, a theoretical computer scientist at the University of California, San Diego.

For one thing, it would instantaneously streamline a huge effort in combinatorics — trying to calculate thresholds for different properties. “Questions where seemingly the proofs needed to be very long and complicated suddenly just disappear,” said Alan Frieze, a mathematician at Carnegie Mellon University. “The proofs became just trivial applications of this [conjecture].”

The Sunflower Path

The methods that would eventually lead to the new proof of the Kahn-Kalai conjecture began with a breakthrough on a seemingly unrelated problem. In many ways, the story starts with the sunflower conjecture, a question posed by the mathematicians Paul Erdős and Richard Rado in 1960. The sunflower conjecture considers whether collections of sets can be constructed in ways that resemble the petals of a sunflower.

In 2019, Lovett was part of a team that came very close to a full solution of the sunflower problem. At the time, the work seemed completely separate from the Kahn-Kalai conjecture, which involves considerations of probability. “I didn’t see any connection with our conjecture,” said Kalai. Neither did Lovett, who said that “we weren’t aware of these [other] questions. We cared about sunflowers.”

Park and Pham rewrote the Kahn-Kalai conjecture in a way that let them make use of covers.

The original conjecture puts constraints on what the probability of a weighted coin landing on heads should be in order to guarantee that a random graph or set contains some property.

In particular, it says that the probability has to be at least the expectation threshold for the property multiplied by a logarithmic factor. Park and Pham turned this around: If such a property is not likely to emerge, then the probability assigned to the weighted coin is lower than the expectation threshold multiplied by a logarithmic factor.

That’s where covers come in: When a small cover can be constructed for a subset of structures (like a collection of Hamiltonian cycles), it means that the subset’s contribution to the expectation threshold is small. (Remember that the expectation threshold is calculated by taking a kind of weighted average over all possible structures of a given type.)

So what Park and Pham now needed to show was that if a random set is unlikely to contain one target structure, there must exist a small cover for all such target structures. The bulk of their proof was dedicated to constructing that small cover.

They did this by using a similar piece-by-piece sampling process to the one used in the previous results, while also introducing what Fox called a “very clever counting argument.” One week after their sleepless night in March, they posted their elegant six-page paper online.

“Their proof is super simple. They take the basic idea we developed and [the ideas from] these other papers and add a twist to it,” Lovett said. “And with this new twist, everything somehow becomes much, much easier.”

Frieze agreed. “I cannot explain it, but amazingly it’s true,” he said.

Just like the fractional result, the Kahn-Kalai conjecture, now proved true, automatically implies a cornucopia of related conjectures. But more than that, “this is a powerful proof technique [that] will probably lead to new things,” said Noga Alon, a mathematician at Princeton University. “They had to do it in the right way.”

Park and Pham have now started to apply their method to other problems. They’re particularly interested in getting a more precise understanding of the gap between the expectation threshold and the real threshold.

By proving the Kahn-Kalai conjecture, they’ve shown that this gap is at most a logarithmic factor — but sometimes the gap is smaller, or even nonexistent. At the moment, there’s no broader mechanism for classifying when each of these scenarios might be true; mathematicians have to work it out case by case.

Now, “we think that with this efficient technique we have, we can hopefully be much more precise in pinning down these thresholds,” Pham said.