Category Archives: Math

Tennis Ball Arrangement

Source: The Conversation, Feb 2016

Say you have 128 tennis balls. How many different ways can you arrange them so that each ball touches at least one other? You can stack them, lay them out in various grids, stack the layers, and so on. There are probably a lot of configurations, right?

This question was answered recently by a team of researchers at Cambridge University. The number of possible arrangements is on the order of 10²⁵⁰; that’s a 1 with 250 zeroes after it. To give a sense of how large this number is, note that there are only about 10⁸⁰ atoms in the universe; even if we packed the known universe with protons, there would be only about 10¹²⁶ of them. So if we could somehow encode one configuration of the tennis balls on each atom, we would get through only about the cube root of the total number of possibilities, and even one configuration per proton would cover only about the square root.
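Spelled out with those figures, the orders of magnitude check out:

```latex
\[
\sqrt[3]{10^{250}} \approx 10^{83} \quad (\text{vs. } \sim 10^{80} \text{ atoms}),
\qquad
\sqrt{10^{250}} = 10^{125} \quad (\text{vs. } \sim 10^{126} \text{ protons}).
\]
```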

Since it’s impossible to actually count all the arrangements of the balls, the team used an indirect approach. They took a sample of all the possible configurations and computed the probability of each of them occurring. Extrapolating from there, the team was able to deduce the number of ways the entire system could be arranged, and how one ordering was related to the next. The latter is the so-called configurational entropy of the system, a measure of how disordered the particles in a system are.
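The Cambridge team’s actual technique is far more sophisticated, but the core idea — sample configurations at random, estimate how likely a random configuration is to satisfy the constraint, scale up to a total count, and take its logarithm to get a configurational entropy — can be sketched on a toy system. The snippet below is a minimal illustration only, assuming NumPy and an invented toy rule (no two adjacent occupied sites on a small 1-D lattice); it is not the researchers’ method.

```python
import numpy as np

def estimate_arrangements(n_sites=30, n_samples=200_000, seed=None):
    """Toy analogue of the sampling approach: estimate how many of the
    2**n_sites occupancy patterns satisfy a simple validity rule
    (here: no two adjacent occupied sites), without enumerating them all."""
    rng = np.random.default_rng(seed)
    # Draw random occupancy patterns uniformly from all 2**n_sites possibilities.
    samples = rng.integers(0, 2, size=(n_samples, n_sites))
    # A pattern is "valid" if no two neighbouring sites are both occupied.
    valid = ~np.any((samples[:, :-1] == 1) & (samples[:, 1:] == 1), axis=1)
    p_valid = valid.mean()                # estimated probability a pattern is valid
    total = p_valid * 2.0 ** n_sites      # extrapolate to a total count of valid states
    entropy = np.log(total)               # configurational entropy ~ ln(number of states)
    return total, entropy

count, S = estimate_arrangements(seed=0)
print(f"estimated valid arrangements: {count:.3e}, configurational entropy: {S:.2f}")
```

For 30 sites the exact count is known (a Fibonacci number, about 2.2 million), so the sampling estimate can be checked directly; for 128 tennis balls in three dimensions, sampling-style estimates are essentially the only option.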

When a Parent Is a Cryptographer and Wants to Include a Child in a Research Paper

Source: Weizmann, Mar 1999

Yael Naor: Supported by her parents

Sharing Ideas Across Fields

Source: Quanta, Nov 2015

Word spread quickly through the mathematics community that one of the paramount problems in C*-algebras and a host of other fields had been solved by three outsiders — computer scientists who had barely a nodding acquaintance with the disciplines at the heart of the problem.

Mathematicians in these disciplines greeted the news with a combination of delight and hand-wringing. The solution, which Casazza and Tremain called “a major achievement of our time,” defied expectations about how the problem would be solved and seemed bafflingly foreign. Over the past two years, the experts in the Kadison-Singer problem have had to work hard to assimilate the ideas of the proof. Spielman, Marcus and Srivastava “brought a bunch of tools into this problem that none of us had ever heard of,” Casazza said. “A lot of us loved this problem and were dying to see it solved, and we had a lot of trouble understanding how they solved it.”

“The people who have the deep intuition about why these methods work are not the people who have been working on these problems for a long time,” said Terence Tao, of the University of California, Los Angeles, who has been following these developments. Mathematicians have held several workshops to unite these disparate camps, but the proof may take several more years to digest, Tao said. “We don’t have the manual for this magic tool yet.”

No one realized just how ubiquitous the Kadison-Singer problem had become until Casazza found that it was equivalent to the most important problem in his own area of signal processing. The problem concerned whether the processing of a signal can be broken down into smaller, simpler parts. Casazza dived into the Kadison-Singer problem, and in 2005, he, Tremain and two co-authors wrote a paper demonstrating that it was equivalent to the biggest unsolved problems in a dozen areas of math and engineering. A solution to any one of these problems, the authors showed, would solve them all.

Spielman, Marcus and Srivastava suspected that the answer was yes, and their intuition did not just stem from their previous work on network sparsification. They also ran millions of simulations without finding any counterexamples. “A lot of our stuff was led by experimentation,” Marcus said. “Twenty years ago, the three of us sitting in the same room would not have solved this problem.”

The proof, which has since been thoroughly vetted, is highly original, Naor said. “What I love about it is just this feeling of freshness,” he said. “That’s why we want to solve open problems — for the rare events when somebody comes up with a solution that’s so different from what was before that it just completely changes our perspective.”

Using ideas from the proof of the Kadison-Singer problem, Nima Anari, of the University of California, Berkeley, and Shayan Oveis Gharan, of the University of Washington in Seattle, have shown that this algorithm performs exponentially better than people had realized. The new result is “major, major progress,” Naor said.

The proof of the Kadison-Singer problem implies that all the constructions in its dozen incarnations can, in principle, be carried out — quantum knowledge can be extended to full quantum systems, networks can be decomposed into electrically similar ones, matrices can be broken into simpler chunks. The proof won’t change what quantum physicists do, but it could have applications in signal processing, since it implies that collections of vectors used to digitize signals can be broken down into smaller frames that can be processed faster. The theorem “has potential to affect some important engineering problems,” Casazza said.
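A rough numerical illustration of that last point (not the proof technique itself): the sketch below, assuming NumPy, builds a random tight frame — a set of vectors whose outer products sum to the identity, the kind of collection used to digitize signals — splits it in two at random, and checks that each half behaves like roughly half of the whole. The theorem is what guarantees such controlled decompositions always exist; the random construction and random split here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 400                       # dimension and number of frame vectors

# Build a tight frame: m random vectors rescaled so their outer products sum to I_d.
V = rng.normal(size=(m, d))
S = V.T @ V
V = V @ np.linalg.inv(np.linalg.cholesky(S)).T   # now V.T @ V == I_d (up to rounding)

# Split the frame into two halves at random.
mask = rng.random(m) < 0.5
S1 = V[mask].T @ V[mask]
S2 = V[~mask].T @ V[~mask]

# Each partial frame operator should have eigenvalues clustered near 1/2,
# i.e. each half is itself a (scaled) frame that can be processed separately.
print("half 1 eigenvalue range:", np.linalg.eigvalsh(S1)[[0, -1]])
print("half 2 eigenvalue range:", np.linalg.eigvalsh(S2)[[0, -1]])
```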

When mathematicians import ideas across fields, “that’s when I think these really interesting jumps in knowledge happen.”

 

Can Creativity Be Automated?

Source: Boston Globe, Feb 2009

A little-known discipline of science called computational intractability studies the boundaries of our understanding … but of the everyday computational realm.

We know answers exist, but it turns out that calculating the solutions to such kinds of problems could take too long, even if all the world’s most powerful computers were to work together on them. Individual instances can be solved, but there is no general way to attack such problems efficiently, which means the “universe could have degenerated into black holes while your computer is still running,” said Scott Aaronson, a theoretical computer scientist at MIT.

By knowing what problems we can’t solve – and scientists are busy figuring out whether problems that seem intractable actually are – researchers can pursue new ways to approach them, or come up with approximate solutions.

“There are some things we just take for granted cannot be done – and intractability research tries to make it precise,” said Sampath Kannan, director for the Division of Computing and Communication Foundations at the National Science Foundation.

“As we increasingly become an information society, it’s increasingly important to understand” the limits of computation, he said.

Recognizing that value, the National Science Foundation granted $10 million last year to Sanjeev Arora’s center at Princeton.

Sometimes what is intractable looks deceptively straightforward. Take the challenge of making housing assignments. Imagine that there are 100 spots in dorms, and a pool of 400 students to choose from. The dean has a list of pairs of students who are incompatible and cannot both appear on the final list.

“The total number of ways of choosing 100 students from the 400 applicants is greater than the number of atoms in the known universe!” the Clay Mathematics Institute wrote in its $1 million challenge. “Thus no future civilization could ever hope to build a supercomputer capable of solving the problem by brute force.”
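That claim is easy to check with a couple of lines (a quick sketch using only Python’s standard library; the 10⁸⁰ atom count is the rough estimate quoted earlier in this archive):

```python
import math

# Number of ways to choose 100 students from a pool of 400.
choices = math.comb(400, 100)

print(f"C(400, 100) ≈ 10^{len(str(choices)) - 1}")   # on the order of 10^96
print("atoms in the known universe ≈ 10^80")
```

Brute force over roughly 10⁹⁶ candidate rosters is hopeless; whether some fundamentally cleverer general method must exist is precisely the P vs NP question.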

Even with complete data about complex systems, it can be hard to find the rules and patterns that are hidden in the data, simply because of the limits of computation.

“I think it does go a long way to explaining why prediction problems are hard,” Aaronson said.

A huge reason that we’re interested in these sorts of questions . . . is the question whether creativity can be automated,” Aaronson said. “The kind of creativity that proves theorems or solves puzzles, or finds patterns in data.”

With repercussions ranging from cryptography to artificial intelligence, humanity has a lot riding on the answer to the question – which remains unsolved.

P vs NP problem (Beyond Computation)

Using Creativity to Find New Mathematical Theorems

Source: Quanta Magazine, Mar 2017

Royen said he hopes the “surprisingly simple proof … might encourage young students to use their own creativity to find new mathematical theorems,” since “a very high theoretical level is not always required.”

The “feeling of deep joy and gratitude” that comes from finding an important proof has been reward enough. “It is like a kind of grace,” he said. “We can work for a long time on a problem and suddenly an angel — [which] stands here poetically for the mysteries of our neurons — brings a good idea.”

Mathematics and Analogies

Source: Wikipedia, date indeterminate

  • Theorems = things
  • Proofs = processes

A mathematician is a person who can find analogies between theorems; a better mathematician is one who can see analogies between proofs, and the best mathematician can notice analogies between theories. One can imagine that the ultimate mathematician is one who can see analogies between analogies.