Turing’s Cathedral

Source: IAS, 2012

The history of digital computing can be divided into an Old Testament whose prophets, led by Gottfried Wilhelm Leibniz, supplied the logic, and a New Testament whose prophets, led by John von Neumann, built the machines.

Alan Turing, whose “On Computable Numbers, with an Application to the Entscheidungsproblem” was published shortly after his arrival in Princeton as a twenty-four-year-old graduate student in October 1936, formed the bridge between the two.

In this talk, George Dyson, a Director’s Visitor in 2002–03 and the author of Turing’s Cathedral: The Origins of the Digital Universe (Pantheon, 2012), discusses the role of the Institute’s Electronic Computer Project in the development of modern stored-program computers after WWII. Turing’s one-dimensional model of universal computation led directly to von Neumann’s two-dimensional implementation, and the world has never been the same since.

Tennis Ball Arrangement

Source: The Conversation, Feb 2016

Say you have 128 tennis balls. How many different ways can you arrange them so that each ball touches at least one other? You can stack them, lay them out in various grids, stack the layers and so on. There are probably a lot of configurations, right?

This question was answered recently by a team of researchers at Cambridge University. The number of possible arrangements is on the order of 10²⁵⁰; that’s a 1 with 250 zeroes after it. To give a sense of how large this number is, note that there are only about 10⁸⁰ atoms in the universe. In fact, if we packed the known universe with protons, there would be only about 10¹²⁶ of them. So if we could somehow encode each configuration of the tennis balls on an atom (or even a subatomic particle), we would be able to get through only about the square root of the total number of possibilities.
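The comparison of magnitudes is easiest to see in base-10 exponents; a quick arithmetic sketch (using the 10²⁵⁰, 10¹²⁶, and 10⁸⁰ figures from the article):

```python
# Orders of magnitude from the article, as base-10 exponents.
arrangements_exp = 250  # ~10^250 arrangements of 128 tennis balls
protons_exp = 126       # ~10^126 protons if the universe were packed solid
atoms_exp = 80          # ~10^80 atoms in the observable universe

# Even labeling one configuration per proton leaves 10^124 configurations
# unlabeled for every one we manage to record.
shortfall_exp = arrangements_exp - protons_exp
print(shortfall_exp)  # 124

# 10^126 is roughly the square root of 10^250, since 126 is close to 250 / 2.
print(abs(protons_exp - arrangements_exp / 2))  # 1.0
```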

Since it’s impossible to actually count all the arrangements of the balls, the team used an indirect approach. They took a sample of all the possible configurations and computed the probability of each of them occurring. Extrapolating from there, the team was able to deduce the number of ways the entire system could be arranged, and how one ordering was related to the next. The latter is the so-called configurational entropy of the system, a measure of how disordered the particles in a system are.
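The sample-and-extrapolate idea can be illustrated on a toy model. The sketch below is an assumption about the general shape of such an estimator, not the Cambridge team’s actual code: if each sampled configuration lands in a "basin" with some measurable probability p, then averaging 1/p over basin-weighted samples estimates the total number of basins, and the log of that count gives a configurational entropy (in units of Boltzmann’s constant):

```python
import math
import random

# Toy system: 1000 hidden basins with unequal occurrence probabilities.
# We pretend we can (a) draw a random configuration and (b) measure the
# probability p of the basin it landed in.
random.seed(0)
true_basins = 1000
weights = [random.uniform(0.5, 1.5) for _ in range(true_basins)]
total = sum(weights)
probs = [w / total for w in weights]

# Draw configurations with their natural occurrence probabilities.
samples = random.choices(range(true_basins), weights=probs, k=20000)

# Key identity: E[1/p] over basin-weighted samples equals the number of
# basins, since sum_i p_i * (1/p_i) = N. No enumeration required.
estimate = sum(1.0 / probs[i] for i in samples) / len(samples)

# Configurational entropy of the toy system, in units of k_B.
entropy = math.log(estimate)
print(round(estimate), round(entropy, 2))
```

The same trick scales to counts like 10²⁵⁰ because the estimator never visits more than a sample of basins; only the measured probabilities have to be accurate.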

When a Parent Is a Cryptographer and Wants to Include a Child in a Research Paper

Source: Weizmann, Mar 1999

Yael Naor: Supported by her parents

Sharing Ideas Across Fields

Source: Quanta, Nov 2015

Word spread quickly through the mathematics community that one of the paramount problems in C*-algebras and a host of other fields had been solved by three outsiders — computer scientists who had barely a nodding acquaintance with the disciplines at the heart of the problem.

Mathematicians in these disciplines greeted the news with a combination of delight and hand-wringing. The solution, which Casazza and Tremain called “a major achievement of our time,” defied expectations about how the problem would be solved and seemed bafflingly foreign. Over the past two years, the experts in the Kadison-Singer problem have had to work hard to assimilate the ideas of the proof. Spielman, Marcus and Srivastava “brought a bunch of tools into this problem that none of us had ever heard of,” Casazza said. “A lot of us loved this problem and were dying to see it solved, and we had a lot of trouble understanding how they solved it.”

“The people who have the deep intuition about why these methods work are not the people who have been working on these problems for a long time,” said Terence Tao, of the University of California, Los Angeles, who has been following these developments. Mathematicians have held several workshops to unite these disparate camps, but the proof may take several more years to digest, Tao said. “We don’t have the manual for this magic tool yet.”

No one realized just how ubiquitous the Kadison-Singer problem had become until Casazza found that it was equivalent to the most important problem in his own area of signal processing. The problem concerned whether the processing of a signal can be broken down into smaller, simpler parts. Casazza dived into the Kadison-Singer problem, and in 2005, he, Tremain and two co-authors wrote a paper demonstrating that it was equivalent to the biggest unsolved problems in a dozen areas of math and engineering. A solution to any one of these problems, the authors showed, would solve them all.

Spielman, Marcus and Srivastava suspected that the answer was yes, and their intuition did not just stem from their previous work on network sparsification. They also ran millions of simulations without finding any counterexamples. “A lot of our stuff was led by experimentation,” Marcus said. “Twenty years ago, the three of us sitting in the same room would not have solved this problem.”

The proof, which has since been thoroughly vetted, is highly original, Naor said. “What I love about it is just this feeling of freshness,” he said. “That’s why we want to solve open problems — for the rare events when somebody comes up with a solution that’s so different from what was before that it just completely changes our perspective.”

Using ideas from the proof of the Kadison-Singer problem, Nima Anari, of the University of California, Berkeley, and Shayan Oveis Gharan, of the University of Washington in Seattle, have shown that this algorithm performs exponentially better than people had realized. The new result is “major, major progress,” Naor said.

The proof of the Kadison-Singer problem implies that all the constructions in its dozen incarnations can, in principle, be carried out — quantum knowledge can be extended to full quantum systems, networks can be decomposed into electrically similar ones, matrices can be broken into simpler chunks. The proof won’t change what quantum physicists do, but it could have applications in signal processing, since it implies that collections of vectors used to digitize signals can be broken down into smaller frames that can be processed faster. The theorem “has potential to affect some important engineering problems,” Casazza said.

When mathematicians import ideas across fields, “that’s when I think these really interesting jumps in knowledge happen.”


AI and Human Curiosity

Source: HBR, Apr 2017

Curiosity has been hailed as one of the most critical competencies for the modern workplace. It’s been shown to boost people’s employability. Countries with higher curiosity enjoy more economic and political freedom, as well as higher GDPs. It is therefore not surprising that, as future jobs become less predictable, a growing number of organizations will hire individuals based on what they could learn, rather than on what they already know.

Since no skill can be learned without a minimum level of interest, curiosity may be considered one of the critical foundations of talent. As Albert Einstein famously noted, “I have no special talent. I am only passionately curious.”

Curiosity is only made more important for people’s careers by the growing automation of jobs. At this year’s World Economic Forum, ManpowerGroup predicted that learnability, the desire to adapt one’s skill set to remain employable throughout one’s working life, is a key antidote to automation. Those who are more willing and able to upskill and develop new expertise are less likely to see their work automated.

AI is constrained in what it can learn. Its focus and scope are very narrow compared to that of a human, and its insatiable learning appetite applies only to extrinsic directives — learn X, Y, or Z. This is in stark contrast to AI’s inability to self-direct or be intrinsically curious. In that sense, artificial curiosity is the exact opposite of human curiosity; people are rarely curious about something because they are told to be. Yet this is arguably the biggest downside to human curiosity: It is free-flowing and capricious, so we cannot boost it at will, either in ourselves or in others.

Computers can constantly learn and test ideas faster than we can, so long as they have a clear set of instructions and a clearly defined goal. However, computers still lack the ability to venture into new problem domains and connect analogous problems, perhaps because of their inability to relate unrelated experiences. For instance, hiring algorithms can’t play checkers, and car design algorithms can’t play computer games. In short, when it comes to performance, AI will have an edge over humans in a growing number of tasks, but the capacity to remain capriciously curious about anything, including random things, and pursue one’s interest with passion may remain exclusively human.

1-hour BBC documentary on a Qantas London-Sydney A380 flight

IQ Is Highly Heritable

Source: Association for Psychological Science, 2016

The heritability of intelligence has been shown consistently to increase linearly throughout the life course in more than three decades of research in longitudinal as well as cross-sectional analyses and in adoption as well as twin studies (McGue, Bouchard, Iacono, & Lykken, 1993; Plomin, 1986; Plomin & Deary, 2015). For example, as summarized in Figure 3, an analysis of cross-sectional data for 11,000 pairs of twins—larger than all previous twin studies combined—showed that the heritability of intelligence increases significantly from 41% in childhood (age 9) to 55% in adolescence (age 12) and to 66% in young adulthood (age 17; Haworth et al., 2010).

Some evidence suggests that heritability might increase to as much as 80% in later adulthood independent of dementia (Panizzon et al., 2014); other results suggest a decline to about 60% after age 80 (Lee, Henry, Trollor, & Sachdev, 2010), but another study suggests no change in later life (McGue & Christensen, 2013).