Category Archives: Math

Network Effects – Metcalfe & Reed

Source: A16Z Slideshare, Mar 2016

 

[Slide capture from the A16Z deck; the image itself was not preserved.]
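The models the title names are standard and easy to state: Metcalfe’s law values a network by its possible pairwise connections, which grow as n², while Reed’s law counts the possible sub-groups among n users, which grow as 2ⁿ. A minimal sketch in Python (my own illustration, not from the deck) comparing the growth rates:

    # Toy comparison of classic network-value models (illustrative only).
    def sarnoff(n):
        return n                 # broadcast value: proportional to audience size

    def metcalfe(n):
        return n * (n - 1) // 2  # number of distinct pairwise links

    def reed(n):
        return 2 ** n - n - 1    # number of sub-groups of size >= 2

    for n in (2, 10, 50):
        print(f"n={n:>2}  Sarnoff={sarnoff(n)}  "
              f"Metcalfe={metcalfe(n)}  Reed={reed(n):.2e}")

Even at n = 50 the gap is dramatic, which is why group-forming networks are argued to be the most valuable kind.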


Using Computers to Explore Mathematics

Source: Quanta, Jan 2018

Schwartz uses computer experiments in much of his work — he’s on the vanguard in that respect. As he explains, computers complement human mathematical thought in several ways: they draw out patterns and provide hints that can lead to proofs which might not have been apparent to the mind alone.

I like doing computer experiments, and so I feel that sometimes I have a chance of making progress. The modern computer is a new tool, and I think of these simple things as excuses for data gathering. Like, I’m just going to program the computer and run some experiments and see if I can turn up some hidden patterns that nobody else had seen just because they hadn’t yet done these experiments.

[Computers are] unbelievably good scratch paper. Mathematicians, even the old greats like Gauss and Euler, were trying to gather experimental evidence. They would work out special cases on paper by hand to get some idea of what might be happening. In a sense, the computer lets you do a lot more of that. It lets you gather much more experimental evidence about what might be true.

It’s also a visualization tool. It reveals things you’d have no idea would be true without it. A good example where you really need the computer is something like the Mandelbrot set. If you don’t have a computer, you could draw a couple points by hand. But then when people started doing these computer experiments, it just revealed this wealth of information about what’s going on. The Mandelbrot set, Julia sets, and all this stuff which would have been impossible to see without a lot of computation and plotting.
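As a small illustration of the kind of experiment he describes (my own sketch, not Schwartz’s code), a dozen lines suffice to render a coarse Mandelbrot set by iterating z → z² + c and marking the points whose orbits never escape:

    # Coarse ASCII rendering of the Mandelbrot set (illustrative sketch).
    def escapes(c, max_iter=50):
        """True if the orbit of 0 under z -> z*z + c escapes to infinity."""
        z = 0
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > 2:       # once |z| > 2 the orbit must diverge
                return True
        return False

    for im in range(12, -13, -1):            # imaginary axis, 1.2 down to -1.2
        print("".join(" " if escapes(complex(re / 10, im / 10)) else "#"
                      for re in range(-20, 9)))  # real axis, -2.0 to 0.8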

Mathematics is extremely good for highly symmetric objects. In a sense, mathematics is about miracles. A recent big example of this is the solution to the Kepler conjecture in eight dimensions by Maryna Viazovska. [The conjecture deals with the densest possible way to pack a collection of spheres together.] It’s curious that the Kepler conjecture in eight dimensions is much easier to solve than the Kepler conjecture in three dimensions. The reason is that there’s a miraculous sphere packing in eight dimensions which is extremely, extremely symmetrical. These special configurations in eight dimensions are miraculous things. Whereas without the presence of extraordinary symmetries, in a way, math doesn’t know what to do. So then the computer is very useful, because it lets you search through the possibilities.

I played around with different ways of representing the data. Eventually I got the idea to draw a kind of higher dimensional representation, and suddenly this beautiful pattern emerged. I never would have guessed it.

The computer can gather a large amount of information, it can organize things graphically, it can serve as your external memory, so it can help you recognize these underlying patterns that might be too remote to see unaided. The computer is a shotgun approach, you just try a bunch of stuff. In a way, you’re not using your brain at all, at least initially. But then you get some feedback — there’s something that you see — then you tune the experiment to the feedback you see. And if it’s successful, you really end up knowing something you never would have found on your own.

Game of Life (cellular automata) is Turing-Complete

Source: Stanford University, date indeterminate

In the October 1970 issue of Scientific American, Martin Gardner popularized Conway’s Game of Life by describing the game’s capacity to “open up a whole new field of mathematical research, the field of cellular automata.” Many soon discovered the great power of cellular automata as models of computation. It was determined that cellular automata, as they appear in the Game of Life, have the same computational capacity as Turing machines. The Church–Turing thesis states:

“No method of computation carried out by a mechanical process can be more powerful than a Turing machine.”

Therefore, as the Game of Life is Turing complete, it is one of the most powerful models of computation. In other words, no mechanical form of computation can solve a problem that a Turing machine or cellular automaton cannot solve, given sufficient time and space. The following reasoning led researchers to determine that the Game of Life has all the computational capability of Turing machines, meaning it is Turing complete:

The cellular formation of a “glider” can interact with other cellular formations to construct logic gates like AND, OR, and NOT. It is therefore possible to build a pattern of cellular automata that acts like a finite state machine coupled to two counters. This formation has the same computational power as a universal Turing machine, which is what it means to be Turing complete. The Game of Life is thus as computationally powerful as any computer with unlimited memory and no time constraints.
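As a rough illustration of the raw material involved (a sketch in my own conventions, not the Stanford page’s code), here is one Game of Life update applied to the five-cell glider; after four generations the pattern reappears shifted one cell diagonally:

    # One Game of Life step over a sparse set of live cells (illustrative).
    from collections import Counter

    def step(live):
        """Apply Conway's rules once; `live` is a set of (x, y) cells."""
        counts = Counter((x + dx, y + dy)
                         for x, y in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    state = glider
    for _ in range(4):
        state = step(state)
    assert state == {(x + 1, y + 1) for x, y in glider}  # moved one cell diagonally

It is gliders like this one, routed into collisions that behave like AND, OR, and NOT gates, that make the machine constructions above possible.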

Practical Applications

Cellular automata yield a surprisingly diverse range of practical applications. We focus here on one particular example of a real-life application of cellular automata – the design of a public-key cryptosystem.

A public-key cryptosystem is a cryptographic protocol that relies on two keys – an enciphering key E, which is made public, and a deciphering key D, which is kept private. Such a system should satisfy two important properties:

1) Security. For attackers who have no access to the private key, the protocol should be designed so that it would take a prohibitively large amount of time to recover the plain text from the cipher text.

2) Signature. The receiver should be able to tell who the sender is. In other words, if person A, say Alice, wants to send a message, she should be able to sign her message in a way that nobody but she could.

Public-Key Cryptography

In the system, each piece of plain text consists of binary digit blocks of a certain length, say N. In turn, each set of, for example, k bits in the block can be viewed as an element of some ground set S. For instance, we can have a cryptosystem where each block contains 5 bits and each bit takes a value in the field of 2 elements.

Suppose now each block contains N bits and represents m elements of S. To satisfy the security requirement, we require an invertible function mapping Sᵐ to Sᵐ that satisfies the following conditions:

a) Easy to compute (for enciphering)

b) Hard to obtain its inverse without certain key pieces of information which only the receiver has access to (deterring would-be attackers)

The complexity of cellular automata lends itself nicely to applications in cryptography. In particular, cellular automaton rules that are invertible are prime candidates for the invertible function we require to construct a public-key cryptosystem. To succinctly represent the rules whilst preserving a large neighborhood-size, we can associate the ground set S with a mathematical structure such as a field or a ring. This way, addition and multiplication are well defined on S, and we can thus represent cellular automaton rules as polynomial functions.
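As a toy instance of that polynomial representation (my own example, not the cryptosystem’s actual rules), the elementary rule 90 is the linear polynomial f(l, c, r) = l + r over the field of two elements, i.e. each cell becomes the XOR of its two neighbours:

    # Elementary cellular automaton rule 90 as a polynomial over GF(2).
    def rule90(cells):
        """One update of a circular row of 0/1 cells: new[i] = l + r mod 2."""
        n = len(cells)
        return [(cells[i - 1] + cells[(i + 1) % n]) % 2 for i in range(n)]

    row = [0] * 15
    row[7] = 1                    # a single live cell in the middle
    for _ in range(8):
        print("".join(".#"[b] for b in row))
        row = rule90(row)

Run for a few steps, this traces out the familiar Sierpinski-triangle pattern; the cryptosystem’s rules are more elaborate polynomials of the same kind, chosen to be invertible.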

Here is a sketch of how a cellular automaton cryptosystem might work. Let the ground set be a field. The enciphering key, E, is a composition of inhomogeneous and time-varying linear invertible rules which is made public (the observant reader might note that the fact that the rules are inhomogeneous and time-varying distinguishes the resulting cellular automaton from elementary cellular automata). The deciphering key, D, is the set of individual rules in the composite enciphering function.

The crucial factor that enables such a cellular automaton cryptosystem to work is the fact that it is extremely computationally expensive to unravel the original state from the cipher state without prior knowledge of the individual rules used in the composite enciphering function. This guarantees the security of the system.

Signing a message in our cryptosystem works in exactly the same way as signing a message in the RSA cryptosystem. All the sender has to do is to apply the secret decryption key D to her name and then include that coded signature at the end of her message. This way, when the receiver receives the message, he can simply use the public key E to decipher the coded signature and see if the name matches the presumed sender.
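A minimal sketch of that sign-then-verify flow, with a deliberately insecure toy cipher (a plain alphabet shift, purely hypothetical) standing in for the cellular-automaton keys:

    # Sign-then-verify flow with toy inverse functions (no real security).
    ALPHABET = "abcdefghijklmnopqrstuvwxyz "
    SHIFT = 7                     # arbitrary toy parameter

    def encipher(text):           # stands in for the public key E
        return "".join(ALPHABET[(ALPHABET.index(ch) + SHIFT) % 27]
                       for ch in text)

    def decipher(text):           # stands in for the private key D
        return "".join(ALPHABET[(ALPHABET.index(ch) - SHIFT) % 27]
                       for ch in text)

    signature = decipher("alice")           # Alice signs with her private key
    assert encipher(signature) == "alice"   # anyone can verify with public E

The security of the real scheme rests on E being hard to invert without D; in this toy, of course, anyone could recover D from E.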

200 Terabyte Math Proof

Source: Nature.com, May 2016

Three computer scientists have announced the largest-ever mathematics proof: a file that comes in at a whopping 200 terabytes [1], roughly equivalent to all the digitized text held by the US Library of Congress. The researchers have created a 68-gigabyte compressed version of their solution — which would allow anyone with about 30,000 hours of spare processor time to download, reconstruct and verify it — but a human could never hope to read through it.

Computer-assisted proofs too large to be directly verifiable by humans have become commonplace, and mathematicians are familiar with computers that solve problems in combinatorics — the study of finite discrete structures — by checking through umpteen individual cases. Still, “200 terabytes is unbelievable”, says Ronald Graham, a mathematician at the University of California, San Diego. The previous record-holder is thought to be a 13-gigabyte proof [2], published in 2014.

The puzzle that required the 200-terabyte proof, called the Boolean Pythagorean triples problem, has eluded mathematicians for decades. … The problem asks whether it is possible to colour each positive integer either red or blue, so that no trio of integers a, b and c that satisfy Pythagoras’ famous equation a² + b² = c² are all the same colour. For example, for the Pythagorean triple 3, 4 and 5, if 3 and 5 were coloured blue, 4 would have to be red.
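The constraint is easy to state in code even though the search space is enormous. A minimal sketch (mine, not the authors’ SAT encoding) that lists the triples up to n and tests a candidate colouring:

    # Check a red/blue colouring for monochromatic Pythagorean triples.
    def triples(n):
        """All a < b < c <= n with a*a + b*b == c*c."""
        return [(a, b, c)
                for a in range(1, n + 1)
                for b in range(a + 1, n + 1)
                for c in range(b + 1, n + 1)
                if a * a + b * b == c * c]

    def valid(colouring, n):
        """True if no triple up to n is all one colour."""
        return all(len({colouring[a], colouring[b], colouring[c]}) > 1
                   for a, b, c in triples(n))

    # Toy rule: colour multiples of 5 red, everything else blue. It handles
    # (3, 4, 5) and (6, 8, 10) but fails on (15, 20, 25), which is all red.
    colouring = {i: ("r" if i % 5 == 0 else "b") for i in range(1, 31)}
    print(valid(colouring, 30))   # -> False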

In a paper posted on the arXiv server on 3 May, Marijn Heule of the University of Texas at Austin, Oliver Kullmann of Swansea University, UK, and Victor Marek of the University of Kentucky in Lexington have now shown that there are many allowable ways to colour the integers up to 7,824 — but when you reach 7,825, it is impossible for every Pythagorean triple to be multicoloured [1]. There are more than 10^2,300 ways to colour the integers up to 7,825, but the researchers took advantage of symmetries and several techniques from number theory to reduce the total number of possibilities that the computer had to check to just under 1 trillion. It took the team about 2 days running 800 processors in parallel on the University of Texas’s Stampede supercomputer to zip through all the possibilities. The researchers then verified the proof using another computer program.

Although the computer solution has cracked the Boolean Pythagorean triples problem, it hasn’t provided an underlying reason why the colouring is impossible, or explored whether the number 7,825 is meaningful, says Kullmann.

That echoes a common philosophical objection to the value of computer-assisted proofs: they may be correct, but are they really mathematics? If mathematicians’ work is understood to be a quest to increase human understanding of mathematics, rather than to accumulate an ever-larger collection of facts, a solution that rests on theory seems superior to a computer ticking off possibilities.

That did ultimately occur in the case of the 13-gigabyte proof from 2014, which solved a special case of a question called the Erdős discrepancy problem. A year later, mathematician Terence Tao of the University of California, Los Angeles, solved the general problem the old-fashioned way [3] — a much more satisfying resolution.

James Simons – Flatiron Institute

Source: The New Yorker, Dec 2017

The Flatiron Institute, which is in an eleven-story fin-de-siècle building on the corner of Twenty-first Street and Fifth Avenue, is devoted exclusively to computational science—the development and application of algorithms to analyze enormous caches of scientific data. 

The institute’s aim is to help provide top researchers across the scientific spectrum with bespoke algorithms that can detect even the faintest tune in the digital cacophony. … At the Flatiron, a nonprofit enterprise, the goal is to apply Renaissance’s analytical strategies to projects dedicated to expanding knowledge and helping humanity. 

The Flatiron doesn’t conduct any new experiments. Most of its fellows collaborate with universities that generate fresh data from “wet” labs—the ones with autoclaves and genetically engineered mice—and other facilities. The institute’s algorithms and computer models are meant to help its fellows uncover information hidden in data sets that have already been collected: inferring the location of new planets from the distorted space-time that surrounds them; identifying links to mutations among apparently functionless parts of chromosomes. 

Simons has placed a big bet on his hunch that basic science will yield to the same approach that made him rich. He has hired ninety-one fellows in the past two years, and expects to employ more than two hundred, making the Flatiron almost as big as the Institute for Advanced Study, in Princeton, New Jersey. He is not worried about the cost. “I originally thought seventy-five million a year, but now I’m thinking it’s probably going to be about eighty.”

“We’re spending four hundred and fifty million a year,” Simons said. “Gradually, the Simons Foundation International will take over much of the spending.”

He encouraged interaction and debate among the researchers. “Everything was collaboration at Renaissance, or a great deal of it,” he said. “It was a very open atmosphere.” Former colleagues agree that Simons was an exceptional manager. He understood what scientists enjoyed, and often arranged quirky bonding exercises: at one point, Renaissance employees competed to see who could ride a bicycle along a particular path at the slowest speed without falling over.

“Taste in science is very important,” Simons told me. “To distinguish what’s a good problem and what’s a problem that no one’s going to care about the answer to anyway—that’s taste. And I think I have good taste.”

A new research center could “prospect for interesting data sets where people intuit that there’s more structure than can be gotten out now, but that aren’t so complicated that it’s hopeless.”

Sharing had been an important part of Renaissance’s culture. “Everyone knew what everyone else was doing,” Simons said. “They could pitch in and say, ‘Try that!’ ” He wants information to flow among groups at the Flatiron Institute, too, so there are plenty of chalkboards in the hallways, and communal areas—coffee nooks, tuffets arranged in rows—where fellows can “sit around and schmooze.” He observed, “An algorithm that’s good for spike sorting—some version of it might conceivably be good for star sorting, or for looking at other things in another field.” One day in June, I passed a chalkboard that was covered with an equation written by David Spergel, the head of the astronomy division. It demonstrated that the way a supernova explosion drives galactic winds also captures the behavior of the movement of waves in oceans and, by implication, the movement of fluids in cells.

Another researcher pointed out that, as powerful as computational science has become, it still relies on the kind of experimental science that the institute does not fund. In an e-mail, the researcher noted, “The predictions from the computation can only ever be as good as the data that has been generated. (I think!)”

Maxwell’s Equations: From 20 to 4

Source: ETHW website, date indeterminate

Maxwell’s Equations refer to a set of four relations that describe the properties and interrelations of electric and magnetic fields. The equations are shown in modern notation in Figure 2. The electric force fields are described by the quantities E (the electric field) and D = εE (the electric displacement), the latter including how the electrical charges in a material become polarized in an electric field. The magnetic force fields are described by H (the magnetic field) and B = µH (the magnetic flux density), the latter accounting for the magnetization of a material.

The equations can be considered in two pairs. The first pair consists of Equation 1 and Equation 2. Equation 1 describes the electric force field surrounding a distribution of electric charge ρ. It shows that the electric field lines diverge from areas of positive charge and converge onto areas of negative charge (Figure 3). Equation 2 shows that magnetic field lines curl to form closed loops (Figure 4), with the implication that every north pole of a magnet is accompanied by a south pole. The second pair, Equation 3 and Equation 4, describes how electric and magnetic fields are related. Equation 3 describes how a time-varying magnetic field will cause an electric field to curl around it. Equation 4 describes how a magnetic field curls around a time-varying electric field or an electric current flowing in a conductor.
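For reference, here are the four equations in the modern notation being described, in differential form and using the quantities E, D, B, and H defined above (J is the electric current density):

    ∇ · D = ρ            (Equation 1: Gauss’s law)
    ∇ · B = 0            (Equation 2: no magnetic monopoles)
    ∇ × E = −∂B/∂t       (Equation 3: Faraday’s law)
    ∇ × H = J + ∂D/∂t    (Equation 4: Ampère’s law with Maxwell’s displacement current)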

Original 20 equations

Modern-Day 4 Equations

Heaviside championed the Faraday-Maxwell approach to electromagnetism and simplified Maxwell’s original set of 20 equations to the four used today. Importantly, Heaviside rewrote Maxwell’s Equations in a form that involved only electric and magnetic fields. Maxwell’s original equations had included both fields and potentials.

In an analogy to gravity, the field corresponds to the gravitational force pulling an object onto the Earth, while the potential corresponds to the shape of the landscape on which it stands. By configuring the equations only in terms of fields, Heaviside simplified them to his so-called Duplex notation, with the symmetry evident in the equations of Figure 2. He also developed the mathematical discipline of vector calculus with which to apply the equations. Heaviside analysed the interaction of electromagnetic waves with conductors and derived the telegrapher’s equations of Kirchhoff from Maxwell’s theory to describe the propagation of electrical signals along a transmission line.
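The telegrapher’s equations themselves, in their usual modern form (not reproduced in the source), relate the voltage V and current I along the line through the per-unit-length resistance R, inductance L, conductance G, and capacitance C:

    ∂V/∂x = −R I − L ∂I/∂t
    ∂I/∂x = −G V − C ∂V/∂t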

Related Resources:

1.  “Maxwell’s Original Equations”, 2011

2.  Maxwell’s Scientific Papers, 8 Equations, date indeterminate

3. IEEE, Dec 2014

Maxwell’s own description of his theory was stunningly complicated. College students may greet the four Maxwell’s equations with terror, but Maxwell’s formulation was far messier. To write the equations economically, we need mathematics that wasn’t fully mature when Maxwell was conducting his work. Specifically, we need vector calculus, a way of compactly codifying the differential equations of vectors in three dimensions.

Maxwell’s theory today can be summed up by four equations. But his formulation took the form of 20 simultaneous equations, with 20 variables. The dimensional components of his equations (the x, y, and z directions) had to be spelled out separately. And he employed some counterintuitive variables. 

The net result of all of this complexity is that when Maxwell’s theory made its debut, almost nobody was paying attention.

It was Heaviside, working largely in seclusion, who put Maxwell’s equations in their present form. 

The key was eliminating Maxwell’s strange magnetic vector potential. “I never made any progress until I threw all the potentials overboard,” Heaviside later said. The new formulation instead placed the electric and magnetic fields front and center.

One of the consequences of the work was that it exposed the beautiful symmetry in Maxwell’s equations. One of the four equations describes how a changing magnetic field creates an electric field (Faraday’s discovery), and another describes how a changing electric field creates a magnetic field (the famous displacement current, added by Maxwell).

4. Mathematical Representations in Science

NP-Complete = Circuit Problem

Source: MIT website, date indeterminate

Any NP-complete problem can be represented as a logic circuit — a combination of the elementary computing elements that underlie all real-world computing. Solving the problem is equivalent to finding a set of inputs to the circuit that will yield a given output.
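To make that concrete, here is the generic brute-force baseline (a toy three-input circuit of my own devising, not Williams’s construction); his insight concerns what follows from beating this 2ⁿ enumeration even slightly:

    # Brute-force search for circuit inputs producing a target output.
    from itertools import product

    def circuit(x, y, z):
        """A toy combination of AND, OR, and NOT gates."""
        return (x and not y) or (y and z)

    for bits in product([False, True], repeat=3):
        if circuit(*bits):                 # target output: True
            print("satisfying input:", bits)
            break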

Suppose that, for a particular class of circuits, a clever programmer can devise a method for finding inputs that’s slightly more efficient than solving a generic NP-complete problem. Then, Williams showed, it’s possible to construct a mathematical function that those circuits cannot implement efficiently.

It’s a bit of computational jiu-jitsu: By finding a better algorithm, the computer scientist proves that a circuit isn’t powerful enough to perform another computational task. And that establishes a lower bound on that circuit’s performance.

First, Williams proved the theoretical connection between algorithms and lower bounds, which was dramatic enough, but then he proved that it applied to a very particular class of circuits.

++++++++++++++++++++++++++++++++++++++++++++++++++++++

Although he had been writing computer programs since age 7 — often without the benefit of a computer to run them on — Williams had never taken a computer science class. Now that he was finally enrolled in one, he found it boring, and he was not shy about saying so in class.

Eventually, his frustrated teacher pulled a heavy white book off of a shelf, dumped it dramatically on Williams’s desk, and told him to look up the problem described in the final chapter. “If you can solve that,” he said, “then you can complain.”

The book was “Introduction to Algorithms,” co-written by MIT computer scientists Charles Leiserson and Ron Rivest and one of their students, and the problem at the back was the question of P vs. NP, which is frequently described as the most important outstanding problem in theoretical computer science.