Category Archives: Math

Straight Paths on the Dodecahedron

Source: Quanta, Aug 2020

Suppose you stand at one of the corners of a Platonic solid. Is there some straight path you could take that would eventually return you to your starting point without passing through any of the other corners?

For the four Platonic solids built out of squares or equilateral triangles — the cube, tetrahedron, octahedron and icosahedron — mathematicians recently figured out that the answer is no. Any straight path starting from a corner will either hit another corner or wind around forever without returning home.

But with the dodecahedron, which is formed from 12 pentagons, mathematicians didn’t know what to expect.

Now Jayadev Athreya, David Aulicino and Patrick Hooper have shown that an infinite number of such paths do in fact exist on the dodecahedron. Their paper, published in May in Experimental Mathematics, shows that these paths can be divided into 31 natural families.

The solution required modern techniques and computer algorithms. “Twenty years ago, [this question] was absolutely out of reach; 10 years ago it would require an enormous effort of writing all necessary software, so only now all the factors came together,” wrote Anton Zorich, of the Institute of Mathematics of Jussieu in Paris, in an email.

Hidden Symmetries
Although mathematicians have speculated about straight paths on the dodecahedron for more than a century, there’s been a resurgence of interest in the subject in recent years following gains in understanding “translation surfaces.” These are surfaces formed by gluing together parallel sides of a polygon, and they’ve proved useful for studying a wide range of topics involving straight paths on shapes with corners, from billiard table trajectories to the question of when a single light can illuminate an entire mirrored room.

In all these problems, the basic idea is to unroll your shape in a way that makes the paths you are studying simpler. So to understand straight paths on a Platonic solid, you could start by cutting open enough edges to make the solid lie flat, forming what mathematicians call a net. One net for the cube, for example, is a T shape made of six squares.

For the dodecahedron, the corresponding translation surface is a highly redundant representation of the solid, with 10 copies of each pentagon. And it’s massively more complicated: It glues up into a shape like a doughnut with 81 holes.

To tackle this giant surface, the mathematicians rolled up their sleeves — figuratively and literally. After working on the problem for a few months, they realized that the 81-holed doughnut surface forms a redundant representation not just of the dodecahedron but also of one of the most studied translation surfaces.

Called the double pentagon, it is made by attaching two pentagons along a single edge and then gluing together parallel sides to create a two-holed doughnut with a rich collection of symmetries.

Because the double pentagon and the dodecahedron are geometric cousins, the former’s high degree of symmetry can elucidate the structure of the latter.

The relationship between these surfaces meant that the researchers could tap into an algorithm for analyzing highly symmetric translation surfaces developed by Myriam Finster of the Karlsruhe Institute of Technology in Germany.

By adapting Finster’s algorithm, the researchers were able to identify all the straight paths on the dodecahedron from a corner to itself, and to classify these paths via the dodecahedron’s hidden symmetries.

How Close Are Computers to Automating Mathematical Reasoning?

Source: Quanta, Aug 2020

A proof is a step-by-step logical argument that verifies the truth of a conjecture, or a mathematical proposition. (Once it’s proved, a conjecture becomes a theorem.) It both establishes the validity of a statement and explains why it’s true.

A proof is strange, though. It’s abstract and untethered to material experience. “They’re this crazy contact between an imaginary, nonphysical world and biologically evolved creatures,” said the cognitive scientist Simon DeDeo of Carnegie Mellon University, who studies mathematical certainty by analyzing the structure of proofs. “We did not evolve to do this.”

Conjectures arise from inductive reasoning — a kind of intuition about an interesting problem — and proofs generally follow deductive, step-by-step logic. They often require complicated creative thinking as well as the more laborious work of filling in the gaps, and machines can’t achieve this combination.

A formidable open challenge in the field asks how much proof-making can actually be automated: Can a system generate an interesting conjecture and prove it in a way that people understand?

A system that can predict a useful conjecture and prove a new theorem will achieve something new — some machine version of understanding, said Christian Szegedy, a computer scientist at Google Research. And that suggests the possibility of automating reason itself.

Mathematicians, logicians and philosophers have long argued over what part of creating proofs is fundamentally human, and debates about mechanized mathematics continue today, especially in the deep valleys connecting computer science and pure mathematics.

For computer scientists, theorem provers are not controversial. They offer a rigorous way to verify that a program works, and arguments about intuition and creativity are less important than finding an efficient way to solve a problem.

Many just don’t see a need for theorem provers in their work. “They have a system, and it’s pencil and paper, and it works,” said Kevin Buzzard, a mathematician at Imperial College London who three years ago pivoted his work from pure math to focus on theorem provers and formal proofs.

“Computer proofs may not be as alien as we think,” DeDeo said. Recently, together with Scott Viteri, a computer scientist now at Stanford University, he reverse-engineered a handful of famous canonical proofs (including one from Euclid’s Elements) and dozens of machine-generated proofs, written using a theorem prover called Coq, to look for commonalities. They found that the networked structure of machine proofs was remarkably similar to the structure of proofs made by people. That shared trait, he said, may help researchers find a way to get proof assistants to, in some sense, explain themselves.

Others say theorem provers can be useful teaching tools, in both computer science and mathematics. At Johns Hopkins University, the mathematician Emily Riehl has developed courses in which students write proofs using a theorem prover. “It forces you to be very organized and think clearly,” she said. “Students who write proofs for the first time can have trouble knowing what they need and understanding the logical structure.”

Theorem provers also offer a way to keep the field honest. In 1999, the Russian American mathematician Vladimir Voevodsky discovered an error in one of his proofs. From then until his death in 2017, he was a vocal proponent of using computers to check proofs.

Timothy Gowers, a mathematician and Fields medalist at the University of Cambridge, wants to go even further: He envisions a future in which theorem provers replace human referees at major journals. “I can see it becoming standard practice that if you want your paper to be accepted, you have to get it past an automatic checker,” he said.

Talking to Computers

But before computers can universally check or even devise proofs, researchers first have to clear a significant hurdle: the communication barrier between the language of humans and the language of computers.

Interactive theorem provers, or ITPs (the second of the two main categories of theorem provers, the first being automated theorem provers, or ATPs), have vast data sets containing up to tens of thousands of theorems and proofs, which they can scan to verify that a proof is accurate. Unlike ATPs, which operate in a kind of black box and just spit out an answer, ITPs require human interaction and even guidance along the way, so they’re not as inaccessible. “A human could sit down and understand what the proof-level techniques are,” said Huang. (These are the kinds of machine proofs DeDeo and Viteri studied.)
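
To make that human interaction concrete, here is a minimal example of the kind of statement and machine-checked proof such systems accept. It is written in Lean, a proof assistant in the same family as the Coq system mentioned above, purely as an illustration:

    -- A tiny machine-checked proof: from "P and Q" one may conclude "Q and P".
    -- The proof assistant verifies each step; nothing is taken on faith.
    theorem and_swap (P Q : Prop) (h : P ∧ Q) : Q ∧ P :=
      ⟨h.right, h.left⟩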

ITPs have become increasingly popular in recent years. In 2017, the trio behind the Boolean Pythagorean triples problem used Coq, an ITP, to create and verify a formal version of their proof; in 2005 Georges Gonthier at Microsoft Research Cambridge used Coq to formalize the four-color theorem. Thomas Hales also used ITPs called HOL Light and Isabelle on the formal proof of the Kepler conjecture. (“HOL” stands for “higher-order logic.”)

Efforts at the forefront of the field today aim to blend learning with reasoning. They often combine ATPs with ITPs and also integrate machine learning tools to improve the efficiency of both. They envision ATP/ITP programs that can use deductive reasoning — and even communicate mathematical ideas — the same way people do, or at least in similar ways.

The Limits of Reason

Josef Urban thinks that the marriage of deductive and inductive reasoning required for proofs can be achieved through this kind of combined approach. His group has built theorem provers guided by machine learning tools, which allow computers to learn on their own through experience. Over the last few years, they’ve explored the use of neural networks — layers of computations that help machines process information through a rough approximation of our brain’s neuronal activity. In July, his group reported on new conjectures generated by a neural network trained on theorem-proving data.

Urban was partially inspired by Andrej Karpathy, who a few years ago trained a neural network to generate mathematical-looking nonsense that looked legitimate to nonexperts. Urban didn’t want nonsense, though — he and his group instead designed their own tool to find new proofs after training on millions of theorems. Then they used the network to generate new conjectures and checked the validity of those conjectures using an ATP called E.

The network proposed more than 50,000 new formulas, though tens of thousands were duplicates. “It seems that we are not yet capable of proving the more interesting conjectures,” Urban said.

Human Imagination with Machine Intelligence

Source: Quanta, Aug 2020

A team of mathematicians has finally finished off Keller’s conjecture, but not by working it out themselves. Instead, they taught a fleet of computers to do it for them.

Keller’s conjecture, posed 90 years ago by Ott-Heinrich Keller, is a problem about covering spaces with identical tiles. It asserts that if you cover a two-dimensional space with two-dimensional square tiles, at least two of the tiles must share an edge. It makes the same prediction for spaces of every dimension — that in covering, say, 12-dimensional space using 12-dimensional “square” tiles, you will end up with at least two tiles that abut each other exactly.
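
Stated formally (this is the standard formulation, spelled out here for concreteness rather than quoted from the article):

\forall\, n \ge 1:\quad \text{every tiling of } \mathbb{R}^{n} \text{ by translated copies of the unit cube } [0,1)^{n} \text{ contains two cubes that share a complete } (n-1)\text{-dimensional face.}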

Over the years, mathematicians have chipped away at the conjecture, proving it true for some dimensions and false for others. As of this past fall the question remained unresolved only for seven-dimensional space.

But a new computer-generated proof has finally resolved the problem. The proof, posted online last October, is the latest example of how human ingenuity, combined with raw computing power, can answer some of the most vexing problems in mathematics.

The authors of the new work — Joshua Brakensiek of Stanford University, Marijn Heule and John Mackey of Carnegie Mellon University, and David Narváez of the Rochester Institute of Technology — solved the problem using 40 computers. After a mere 30 minutes, the machines produced a one-word answer: Yes, the conjecture is true in seven dimensions. And we don’t have to take their conclusion on faith.

The answer comes packaged with a long proof explaining why it’s right. The argument is too sprawling to be understood by human beings, but it can be verified by a separate computer program as correct.

The Mysterious Seventh Dimension

It’s easy to see that Keller’s conjecture is true in two-dimensional space. Take a piece of paper and try to cover it with equal-sized squares, with no gaps between the squares and no overlapping. You won’t get far before you realize that at least two of the squares need to share an edge. If you have blocks lying around it’s similarly easy to see that the conjecture is true in three-dimensional space. In 1930, Keller conjectured that this relationship holds for corresponding spaces and tiles of any dimension.

Early results supported Keller’s prediction. In 1940, Oskar Perron proved that the conjecture is true for spaces in dimensions one through six. But more than 50 years later, a new generation of mathematicians found the first counterexample to the conjecture: Jeffrey Lagarias and Peter Shor proved that the conjecture is false in dimension 10 in 1992.

Connect the Dots

As mathematicians chipped away at the problem over the decades, their methods changed. Perron worked out the first six dimensions with pencil and paper, but by the 1990s, researchers had learned how to translate Keller’s conjecture into a completely different form — one that allowed them to apply computers to the problem.

The original formulation of Keller’s conjecture is about smooth, continuous space. Within that space, there are infinitely many ways of placing infinitely many tiles. But computers aren’t good at solving problems involving infinite options — to work their magic they need some kind of discrete, finite object to think about.

In 1990, Keresztély Corrádi and Sándor Szabó came up with just such a discrete object. They proved that you can ask questions about this object that are equivalent to Keller’s conjecture — so that if you prove something about these objects, you necessarily prove Keller’s conjecture as well. This effectively reduced a question about infinity to an easier problem about the arithmetic of a few numbers.

As a general rule, to settle Keller’s conjecture in dimension n, you use dice with n dots and try to find a clique of size 2^n. You can think of this clique as representing a kind of “super tile” (made up of 2^n smaller tiles) that could cover the entire n-dimensional space.

So if you can find this super tile (that itself contains no face-sharing tiles), you can use translated, or shifted, copies of it to cover the entire space with tiles that don’t share a face, thus disproving Keller’s conjecture.

“If you succeed, you can cover the whole space by translation. The block with no common face will extend to the whole tiling,” said Lagarias, who is now at the University of Michigan.

Mackey disproved Keller’s conjecture in dimension eight by finding a clique of 256 dice (2^8), so answering Keller’s conjecture for dimension seven required looking for a clique of 128 dice (2^7). Find that clique, and you’ve proved Keller’s conjecture false in dimension seven. Prove that such a clique can’t exist, on the other hand, and you’ve proved the conjecture true.

To answer Keller’s conjecture for dimension seven, a computer would have to check every possible combination of 128 dice — either ruling them all out (meaning no clique of size 128 exists, and Keller is true in dimension seven) or finding just one that works (meaning Keller is false).
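
Purely as an illustration (the real computation relied on SAT solvers and far cleverer pruning; this toy sketch is mine, assuming the standard Corrádi–Szabó “Keller graph” construction), here is how the dice-and-cliques reformulation can be set up in a few lines of Python. Dice are n-tuples of colors; two dice are joined when they differ in at least two positions and, in at least one position, differ by exactly half the number of colors.

    # Illustrative sketch of the clique reformulation of Keller's conjecture.
    # Assumes the standard Keller graph G(n, s): vertices are n-tuples over
    # {0, ..., 2s-1} ("dice" with n dots, each dot one of 2s colors); two dice
    # are adjacent when they differ in at least two positions and, in at least
    # one position, differ by exactly s modulo 2s.  Keller's conjecture fails
    # in dimension n exactly when such a graph contains a clique of size 2**n.
    from itertools import product

    def keller_graph(n, s):
        """Return (vertices, adjacency test) for the Keller graph G(n, s)."""
        vertices = list(product(range(2 * s), repeat=n))

        def adjacent(u, v):
            diffs = [i for i in range(n) if u[i] != v[i]]
            if len(diffs) < 2:
                return False
            return any((u[i] - v[i]) % (2 * s) == s for i in diffs)

        return vertices, adjacent

    def max_clique_size(vertices, adjacent):
        """Naive exponential-time max clique; fine only for tiny graphs."""
        best = 0

        def extend(clique, candidates):
            nonlocal best
            best = max(best, len(clique))
            for i, v in enumerate(candidates):
                extend(clique + [v],
                       [w for w in candidates[i + 1:] if adjacent(v, w)])

        extend([], vertices)
        return best

    if __name__ == "__main__":
        # Tiny sanity check in "dimension" 2 with four colors (s = 2); the
        # target clique size there would be 2**2 = 4.
        verts, adj = keller_graph(2, 2)
        print("vertices:", len(verts), "largest clique found:", max_clique_size(verts, adj))

Running this for the tiny dimension-2 case finishes instantly; the dimension-7 search over cliques of 128 dice is astronomically larger, which is why 40 computers and carefully engineered solvers were needed.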

Hidden Efficiencies

Mackey recalls the day when, in his eyes, the project really came together. He was standing in front of a blackboard in his office at Carnegie Mellon University discussing the problem with two of his co-authors, Heule and Brakensiek, when Heule suggested a way of structuring the search so that it could be completed in a reasonable amount of time.

“There was real intellectual genius at work there in my office that day,” Mackey said. “It was like watching Wayne Gretzky, like watching LeBron James in the NBA Finals. I have goose bumps right now [just thinking about it].”

The computers actually delivered a lot more than a one-word answer. They supported it with a long proof — 200 gigabytes in size — justifying their conclusion.

The proof is much more than a readout of all the configurations of variables the computers checked. It’s a logical argument which establishes that the desired clique couldn’t possibly exist. The four researchers fed the Keller proof into a formal proof checker — a computer program that traced the logic of the argument — and confirmed it works. 

“You don’t just go through all the cases and not find anything, you go through all the cases and you’re able to write a proof that this thing doesn’t exist,” Mackey said. “You’re able to write a proof of unsatisfiability.”

The Mathematical Structure of Particle Collisions

Source: Quanta, Aug 2020

Three recent papers from a group of physicists led by Pierpaolo Mastrolia of the University of Padua in Italy and Sebastian Mizera of the Institute for Advanced Study in Princeton, New Jersey, have revealed an underlying mathematical structure in the equations that describe particle collisions. The structure provides a new way of collapsing interminable terms into just dozens of essential components. Their method may help bring about new levels of predictive accuracy, which theorists desperately need if they are to move beyond the leading but incomplete model of particle physics.

The new method skirts the traditional mathematical slog by directly computing “intersection numbers,” which some hope could eventually lead to a more elegant description of the subatomic world.

An Infinite Loop

When physicists model particle collisions they use a tool called a Feynman diagram, a simple schematic invented by Richard Feynman in the 1940s.

To get a feel for these diagrams, consider a simple particle event: Two quarks streak in, exchange a single gluon as they “collide,” then bounce away on their separate trajectories.

In a Feynman diagram the quarks’ paths are represented by “legs,” which join to form “vertices” when particles interact. Feynman developed rules for turning this cartoon into an equation which calculates the probability that the event actually takes place: You write a specific function for each leg and vertex — generally a fraction involving the particle’s mass and momentum — and multiply everything together. For straightforward scenarios like this one, the calculation might fit on a cocktail napkin. 
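
Schematically (this is the standard textbook structure of the rules, written here for illustration rather than taken from the papers under discussion, with color factors and factors of i suppressed), the quark-quark amplitude above is just a product of one factor per piece of the diagram: a spinor for each external quark leg, a coupling g_s for each vertex, and a propagator for the exchanged gluon,

\mathcal{M} \;\sim\; g_s^{2}\,\big[\bar{u}(p_3)\,\gamma^{\mu}\,u(p_1)\big]\,\frac{1}{q^{2}}\,\big[\bar{u}(p_4)\,\gamma_{\mu}\,u(p_2)\big],

where q is the momentum carried by the gluon. For a massive exchanged particle the propagator would instead carry a factor 1/(q^2 - m^2), which is the “fraction involving the particle’s mass and momentum” mentioned above.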

Physicists don’t need to calculate every last integral in a complicated Feynman diagram because the vast majority can be lumped together.

With intersection numbers, physicists may have found a way of elegantly plucking out the essential information from a sprawling calculation of Feynman integrals.

A Geometric Fingerprint 

Mastrolia and Mizera’s work is rooted in a branch of pure math called algebraic topology, which classifies shapes and spaces. Mathematicians pursue this classification with “cohomology” theories, which allow them to extract algebraic fingerprints from complicated geometric spaces.

“It’s kind of a summary, an algebraic gadget that incorporates the essence of the space you want to study,” said Clément Dupont, a mathematician at the University of Montpellier in France.

Thousands of integrals can be reduced to just dozens of “master integrals,” which are weighted and added together. But exactly which integrals can be subsumed under which master integrals is itself a hard computational question. Researchers use computers to essentially guess at millions of relationships and laboriously extract the combinations of integrals that matter.

Feynman diagrams can be translated into geometric spaces that are amenable to analysis by cohomology. Each point within these spaces might represent one of a multitude of scenarios that could play out when two particles collide.

In 2017, Mizera was struggling to analyze how objects in string theory collide when he stumbled upon tools pioneered by Israel Gelfand and Kazuhiko Aomoto in the 1970s and 1980s as they worked with a type of cohomology called “twisted cohomology.” Later that year Mizera met Mastrolia, who realized that these techniques could work for Feynman diagrams too. Last year they published three papers that used this cohomology theory to streamline calculations involving simple particle collisions.

Their method takes a family of related physical scenarios, represents it as a geometric space, and calculates the twisted cohomology of that space. “This twisted cohomology has everything to say about the integrals we are interested in,” Mizera said.

In particular, the twisted cohomology tells them how many master integrals to expect and what their weights should be. The weights emerge as values they call “intersection numbers.” In the end, thousands of integrals shrink to a weighted sum of dozens of master integrals.
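
In symbols (a schematic of the “master decomposition formula” that appears in this line of work; the notation below is mine, and normalizations are suppressed), the reduction is an ordinary change of basis, with the coefficients built entirely out of intersection pairings:

I = \sum_{i=1}^{\nu} c_i\, J_i, \qquad c_i = \sum_{j=1}^{\nu} \langle \varphi \,|\, h_j \rangle \,\big(\mathbf{C}^{-1}\big)_{ji}, \qquad \mathbf{C}_{ij} = \langle e_i \,|\, h_j \rangle,

where I is the integral being reduced, J_1, …, J_ν are the master integrals, φ is the differential form representing I, the e_i and h_j are bases of forms associated with the masters, and ⟨·|·⟩ is the intersection pairing supplied by the twisted cohomology.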

The cohomology theories that produce these intersection numbers may do more than just ease a computational burden — they could also point to the physical significance of the most important quantities in the calculation.

For example, when a virtual gluon splits into two virtual quarks, the quarks’ possible lifetimes can vary. In the associated geometric space, each point can stand for a different quark lifetime. When researchers compute the weights, they see that scenarios with the longest-lasting virtual particles — that is, cases in which the particles become essentially real — shape the outcome the most.

“That’s the amazing thing about this method,” said Caron-Huot. “It reconstructs everything starting from just these rare, special events.”

Last week Mizera, Mastrolia and colleagues published another preprint showing that the technique has matured enough to handle real-world two-loop diagrams. A forthcoming paper by Caron-Huot will push the method further, perhaps bringing three-loop diagrams to heel.

If successful, the technique could help usher in the next generation of theoretical predictions. And, a few researchers suspect, it may even foreshadow a new perspective on reality.

Logical Complexity of Proofs

Source: RJ Lipton blog, Aug 2020

Robert Reckhow, with his advisor Stephen Cook, famously started the formal study of the complexity of proofs with their 1979 paper. They were interested in the length of the shortest proofs of propositional statements.

Cook and Reckhow were motivated by issues like: How hard is it to prove that a graph has no clique of a certain size? Or how hard is it to prove that some program halts on all inputs of length {n}? All of these questions ask about the length of proofs in a precise sense. Proofs have been around forever, back to Euclid at least, but Cook and Reckhow were the first to formally study the lengths of proofs.

Proof Complexity

There are several measures of complexity for proofs. One is the length. Long proofs are difficult to find, difficult to write up, difficult to read, and difficult to check. Another less obvious measure is the logical structure of a proof. What does this mean?

Our idea is that a proof can be modeled by a formula from propositional logic. The {P} is what we are trying to prove and the letters {A} and so on are for statements we already know.

  • {(A \rightarrow P)} This is a direct proof.
  • {(\neg P \rightarrow \neg A)} This is a proof by contraposition: assuming {\neg P} yields {\neg A}, which contradicts the known {A}. (It is often loosely called a proof by contradiction.)
  • {( A \vee \neg A \rightarrow P)} This is a proof that uses a statement {A} that may be true or false, i.e., a case analysis on {A}.

Proofs in Trouble

A sign of a proof in danger is, in my opinion, not just its length. A better measure, I think, is the logical flow of the proof. I know of no actual proof that uses this structure:

\displaystyle (A \rightarrow B) \rightarrow ((A \vee C) \rightarrow (B \vee C))

Do you? Even if your proof is only a few lines long, or even many pages, I would be worried if the high-level flow were the above tautology.
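
For what it is worth, the formula above really is a tautology; here is a tiny brute-force truth-table check (a throwaway script of mine, just to make the claim concrete):

    # Brute-force truth-table check that (A -> B) -> ((A or C) -> (B or C))
    # is a tautology: it evaluates to True under all 8 assignments to A, B, C.
    from itertools import product

    def implies(p, q):
        return (not p) or q

    def formula(a, b, c):
        return implies(implies(a, b), implies(a or c, b or c))

    assert all(formula(a, b, c) for a, b, c in product([False, True], repeat=3))
    print("tautology: holds under all", 2 ** 3, "assignments")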

Another example is {P \rightarrow P}.

This of course is a circular proof. It seems hard to believe we would actually do this, but it has happened. The key is that no one says: I will assume the theorem to prove it. The flaw is disguised better than that.

I cannot formally define this measure. Perhaps it is already known, but I do think it would be a useful additional measure. For actual proofs, the ones we write every day, perhaps it would be valuable. I know I have looked at an attempted proof of X and noticed that the logical flow in this sense was too complex. So complex that it was wrong. The author of the potential proof was me.

Symplectic Geometry

Source: Quanta, Jul 2020

Geometric spaces can be floppy like a tarp or rigid like a tent. “The tarp is very malleable but then you get, whatever, a bunch of sticks or scaffolding to shape it,” said Emmy Murphy of Northwestern University. “It makes it a more concrete thing.”

The least structured spaces are just collections of connected points (like the tarp). A line is a one-dimensional space of this sort. The surface of a ball is a two-dimensional version. The lack of structure in these spaces means it’s easy to deform them without fundamentally changing them: Wiggle the line and inflate, indent or twist the ball, and they’re both still the same in the eyes of topologists, who study these unstructured spaces.

You can also add more structure to a space. This structure enhances the information the space contains, but also limits the ways you can deform it.

A symplectic structure is another structure you could add. It provides a way of measuring area in the space and allows you to change the space’s shape only if area measurements stay constant.

… the name symplectic, which derives from the Greek word sumplektikós, the equivalent of the Latin-based “complex,” both of which mean “braided together” — evoking the way the symplectic structure and the complex numbers are intertwined.

Symplectic geometry studies transformations of spaces that preserve the symplectic structure, keeping area measurements constant. This allows for some freedom, but not too much, in the types of transformations you can employ. As a result, symplectic geometry occupies a kind of middle ground between the floppy topology of a tarp and the rigid geometry of a tent.
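
In coordinates (standard definitions, included here only to make “measuring area” concrete): on 2n-dimensional phase space with position and momentum coordinates (q_1, p_1, …, q_n, p_n), the standard symplectic structure is the 2-form

\omega = \sum_{i=1}^{n} dq_i \wedge dp_i,

and the allowed transformations are exactly the maps \phi with \phi^{*}\omega = \omega. When n = 1, preserving \omega is precisely preserving area in the (q, p)-plane, the “area measurements stay constant” requirement described above.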

For many mathematicians, the appeal of symplectic geometry has little to do with the ways it relates to physics or other areas of math. It lies in the marvel that it exists at all.

“We start finding beauty in the structure itself, regardless of how it connects to anything else,” Murphy said.

Multiple Proofs for Math Theorems

Source: Quanta, Jul 2020

The prime number theorem provides a way to approximate the number of primes less than or equal to a given number n. This value is called π(n), where π is the “prime counting function.”

For example, π(10) = 4 since there are four primes less than or equal to 10 (2, 3, 5 and 7). Similarly, π(100) = 25, since 25 of the first 100 integers are prime. Among the first 1,000 integers, there are 168 primes, so π(1,000) = 168, and so on.

Note that as we considered the first 10, 100 and 1,000 integers, the percentage of primes went from 40% to 25% to 16.8%. These examples suggest, and the prime number theorem confirms, that the density of prime numbers at or below a given number decreases as the number gets larger.
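
The prime number theorem makes this precise: π(n) is well approximated by n/ln(n), and the ratio of the two tends to 1 as n grows. A few lines of Python (my own throwaway check, not from the article) reproduce the counts quoted above and show the approximation:

    # Compare the prime counting function pi(n) with the prime number
    # theorem's approximation n / ln(n) for the values discussed above.
    from math import log

    def prime_pi(n):
        """Count primes <= n with a simple sieve of Eratosthenes."""
        if n < 2:
            return 0
        is_prime = [True] * (n + 1)
        is_prime[0] = is_prime[1] = False
        for p in range(2, int(n ** 0.5) + 1):
            if is_prime[p]:
                is_prime[p * p :: p] = [False] * len(is_prime[p * p :: p])
        return sum(is_prime)

    for n in (10, 100, 1_000, 1_000_000):
        print(f"pi({n}) = {prime_pi(n)},  n/ln(n) ≈ {n / log(n):.1f}")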

Over time, number theorists helped establish a culture in which mathematicians worked on proving and re-proving theorems not just to verify statements, but also to improve their skills at proving theorems and their understanding of the math involved.

This goes beyond the prime number theorem. Paulo Ribenboim cataloged at least 7 proofs of the infinitude of primes.

Steven Kifowit and Terra Stamps identified 20 proofs demonstrating that the harmonic series, 1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + 1/7 + …, does not equal a finite number, and Kifowit later followed up with 28 more. Bruce Ratner cites more than 371 different proofs of the Pythagorean theorem, including some gems provided by Euclid, Leonardo da Vinci and U.S. President James Garfield, who was a member of Congress from Ohio at the time.
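
The best known of the divergence arguments, usually credited to the 14th-century scholar Nicole Oresme, takes a single line: group the terms into blocks that each add up to at least 1/2,

1 + \tfrac{1}{2} + \underbrace{\tfrac{1}{3} + \tfrac{1}{4}}_{\ge 1/2} + \underbrace{\tfrac{1}{5} + \tfrac{1}{6} + \tfrac{1}{7} + \tfrac{1}{8}}_{\ge 1/2} + \cdots \;\ge\; 1 + \tfrac{1}{2} + \tfrac{1}{2} + \tfrac{1}{2} + \cdots,

so the partial sums grow without bound.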

This habit of re-proving things is now so ingrained, mathematicians can literally count on it. Tom Edgar and Yajun An noted that there have been 246 proofs of a statement known as the quadratic reciprocity law following Gauss’ original proof in 1796. Plotting the number of proofs over time, they extrapolated that we could expect the 300th proof of this theorem around the year 2050.
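
For reference, the statement being re-proved (classical, not quoted from the article) is: for distinct odd primes p and q,

\left(\frac{p}{q}\right)\left(\frac{q}{p}\right) = (-1)^{\frac{p-1}{2}\cdot\frac{q-1}{2}},

where \left(\frac{p}{q}\right) is the Legendre symbol, equal to +1 if p is a nonzero square modulo q and to -1 otherwise. The law ties the solvability of x^2 \equiv p \pmod{q} to that of x^2 \equiv q \pmod{p}.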

In Math, Maps Matter

Source: Quanta, Jun 2020

Often in math it’s not clear what’s possible and what’s not.

Sometimes a problem can seem hopeless, only for a mathematician to realize that the ingredients of a solution have been hiding in plain sight. This is what happened with Vesselin Dimitrov’s recent proof of a problem called the Schinzel-Zassenhaus conjecture, which Quanta covered in our article “Mathematician Measures the Repulsive Force Within Polynomials.”

Mathematicians had long failed to prove the conjecture, and many believed that it would take a new mathematical invention to get there. But Dimitrov cracked the problem by finding a novel way of combining techniques that have been around for more than 40 years.

So how do mathematicians know if a problem is currently impossible or just really hard? Obviously, there’s no clear way to tell, so they have to rely on clues. And the biggest hint that a problem is out of reach is simply that lots of people have failed to solve it.

Another way to tell is to see whether a problem resembles another. If mathematicians have solved one problem, it boosts their confidence that they can solve another that looks kind of like it.

But some problems look entirely unlike any solved problems. For example, two of the biggest open problems in the field of number theory are the twin primes conjecture (there are infinitely many pairs of primes that differ by 2) and the Goldbach conjecture (every even number greater than 2 is the sum of two primes). They look a lot like each other, but they’re also distinct from anything else mathematicians have managed to prove.

But the resemblance between Goldbach and twin primes suggests they might both yield to the same idea. “It’s my belief that we might solve both at the same time, even if they seem to be quite far from any island I know how to reach with my math techniques,” Maynard said.

When problems are so far off the map that mathematicians can’t even imagine how to reach them, the challenge is more than coming up with a better boat — it’s coming up with a better map. If you don’t know where an island is located, no amount of ingenuity will get you there. But once you’ve located it, you might find a surprising route that will bring you to its shores.

This was the case with the most celebrated mathematical result of the 21st century — Grigori Perelman’s 2003 proof of the Poincaré conjecture, a problem about determining when a three-dimensional shape is equivalent to the three-dimensional sphere. The problem had stymied mathematicians for a century. Then in the early 1980s, William Thurston placed the Poincaré conjecture in a broader theoretical landscape — and from there, mathematicians began to discover new ways to approach it.

“I think one of the reasons we were stonewalled was not because we didn’t have the right techniques, but because the problem wasn’t put in the right conceptual framework,” McMullen said. “The changed question suggested the changed techniques.”

In other words, if a new map reveals a surprising sea route to your destination, it might occur to you to build a ship.

“L-functions and Modular Forms Database” – a detailed atlas of connections between math objects

Source: MIT, May 2016

An international collaboration of researchers has launched a new online resource that provides detailed maps of previously uncharted mathematical terrain.

The “L-functions and Modular Forms Database,” or LMFDB, is a detailed atlas of mathematical objects that maps out the connections between them. The LMFDB exposes deep relationships and provides a guide to previously uncharted territory that underlies current research in several branches of physics, computer science, and mathematics. This coordinated effort is part of a massive collaboration of researchers around the globe.

The LMFDB provides a sophisticated Web interface that allows both experts and amateurs to easily navigate its contents. Each object has a “homepage” and links to related objects, or “friends.” The LMFDB also includes an integrated knowledge database that explains its contents and the mathematics behind it. “We are mapping the mathematics of the 21st century,” says project member Brian Conrey, director of the American Institute of Mathematics. “The LMFDB is both an educational resource and a research tool, one that will become indispensable for future exploration.”

One of the great triumphs in mathematics of the late 20th century was Andrew Wiles’ proof of Fermat’s Last Theorem, a proposition by Pierre de Fermat that went unproven for more than 300 years despite the efforts of generations of mathematicians.

The key to Wiles’ proof was establishing a long-conjectured relationship between two mathematical worlds: elliptic curves and modular forms. Elliptic curves arise naturally in many parts of mathematics and can be described by a simple cubic equation; they form the basis of cryptographic protocols used by most of the major Internet companies, including Google, Facebook, and Amazon.

Modular forms are more mysterious objects: complex functions with an almost unbelievable degree of symmetry. Elliptic curves and modular forms are connected via their L-functions.

The remarkable relationship between elliptic curves and modular forms is made fully explicit in the LMFDB, where users can travel from one world to another with the click of a mouse and view the L-functions that connect the two worlds.

Over 1,000 online Science and Math Seminars from 115 institutions worldwide

Source: MIT, May 2020

researchseminars.org, a website the MIT team formally launched this week, serves as a sort of crowdsourced Ticketmaster for science talks. Instead of featuring upcoming shows and concerts, the new site lists more than 1,000 free, upcoming seminars hosted online by more than 115 institutions around the world.