Category Archives: Nobel

Could Einstein Have Won Many More Nobel Prizes, beyond his 1921 “photo-electric” prize?

Source: HuffPost, Mar 2014

When he finally won the 1921 Prize (awarded in 1922), he did not win for his most famous achievement, Relativity Theory, which was still deemed too speculative and uncertain to endorse with the Prize.

Instead, he won for his 1905 proposal of the law of the photoelectric effect and for general “services to theoretical physics.” It was a political decision by the Nobel committee; Einstein was so renowned that their failure to select him had become an embarrassment. However, the only part of his brilliant portfolio that they either understood or trusted sufficiently to name for the award was this relatively minor implication of his 1905 paper on particles of light.

The final irony in this selection was that, among the many controversial theories that Einstein had proposed in the previous 17 years, the only one not accepted by almost all of the leading theoretical physicists of the time was precisely his theory of light quanta (or photons), which he had used to find the law of the photoelectric effect!
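For reference, the law the committee did feel safe citing fits on one line. In its standard textbook form (the notation is mine, not the article’s), the maximum kinetic energy of an electron ejected from a metal depends only on the frequency of the light, not on its intensity:

\[ E_{\mathrm{max}} = h\nu - \phi, \]

where \(h\) is Planck’s constant, \(\nu\) the frequency of the incident light and \(\phi\) the work function of the metal.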

Then there are two other “no-brainers,” the two theories of relativity. The Special Theory, proposed in 1905, introduced the Principle of Relativity, which states that the laws of physics must all be the same for bodies in uniform relative motion.

This implied the radical notion that time itself does not pass uniformly for all observers. However, it must be noted that the equations of Special Relativity were first written down by Hendrik Lorentz, the great Dutch physicist whom Einstein admired most among all his contemporaries.
Lorentz just failed to give them the radical interpretation with which Einstein endowed them; he also failed to notice that they implied that energy and mass were interchangeable (E = mc²). Einstein would have been happy to share Special Relativity with Lorentz, so let’s split this one 50-50 between the two.
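For the record, the equations in question are the Lorentz transformations. In standard modern notation (the labelling is mine, not the article’s), for relative velocity v along the x-axis:

\[ x' = \gamma\,(x - vt), \qquad t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \]

and the interpretation Einstein drew from them includes the relativistic energy \(E = \gamma m c^2\), which for a body at rest reduces to the famous E = mc².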

On the other hand, General Relativity, germinated in 1907 and completed in 1915, is all Albert.

Like the photon, no one on the planet even had an inkling of this idea before Einstein. Einstein realized that the question of the relativity of motion was tied up with the theory of gravity. From this simple seed of an idea arose arguably the most beautiful and mathematically profound theory in all of physics, Einstein’s Field Equations.

These predict that matter curves space and that the geometry of our universe is non-Euclidean in general. This one is probably worth two Nobel prizes, but let’s just mark it down for one.
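Those field equations also fit on one line. In the usual notation (cosmological constant omitted; this is the standard form, not a quotation from the article):

\[ G_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}, \]

where \(G_{\mu\nu}\) encodes the curvature of spacetime and \(T_{\mu\nu}\) the energy and momentum of matter, so “matter curves space” is precisely the statement that the right-hand side determines the left.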

Here we exhaust what most working physicists would immediately recognize as Einstein’s works of genius, and we’re only at 2.5 Nobels. But it is a remarkable fact that Einstein’s work on early atomic theory — what we now call quantum theory — is vastly underrated.

This is partially because Einstein himself downplayed it. He famously rejected the final version of the theory, dismissing it with the phrase, “God does not play dice.” But if one looks at what he actually did, the Nobels keep piling up:

After Einstein proposed his particulate theory of light in 1905, he came up with a mathematical proof that particle and wave properties were present in one formula that described the fluctuations of the intensity of light. In 1923, the French physicist Louis de Broglie hypothesized that electrons actually had wavelike properties similar to light. He freely admitted his debt to Einstein for this idea, but when he got the Nobel Prize for “wave-particle” duality in 1929, it was not shared. It should have been. Another half for Albert, at 3.5 and counting.
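The “one formula” is usually identified with Einstein’s 1909 fluctuation result for black-body radiation, which in its standard textbook form (a sketch; the notation is mine) contains one particle-like and one wave-like term:

\[ \langle \Delta E^2 \rangle = \left( h\nu\,\rho + \frac{c^3}{8\pi\nu^2}\,\rho^2 \right) V\, d\nu, \]

where \(\rho\) is the spectral energy density in a volume \(V\) and frequency window \(d\nu\): the first term is what a gas of independent light quanta would give, the second what classical waves would give. De Broglie’s step was to attach the same duality to matter through \(\lambda = h/p\).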

In 1916, three years after Niels Bohr introduced his “solar system” model of the atom, where the electrons could only travel in certain “allowed orbits” with quantized energy, Einstein returned to thinking about how atoms would absorb light, with the benefit of Bohr’s picture. He realized that once an atom had absorbed some light, it would eventually give that light energy back by a process called spontaneous emission. Without any particular event to cause it, the electron would jump down to a lower energy orbit, emitting a photon. This was the first time that it was proposed that the theory of atoms had such random, uncaused events, a notion which became a second pillar of quantum theory. In the same work he introduced the principle of stimulated emission, the basic idea behind the laser. One full Prize, please.
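In modern notation, the content of that 1916 work is the pair of rate coefficients now named after Einstein. For a two-level atom sitting in radiation of spectral density \(\rho(\nu)\), the standard textbook summary (not a quotation from the article) reads:

\[ \frac{dN_2}{dt} = -A_{21} N_2 - B_{21}\,\rho(\nu)\, N_2 + B_{12}\,\rho(\nu)\, N_1, \qquad \frac{A_{21}}{B_{21}} = \frac{8\pi h \nu^3}{c^3}, \]

with \(A_{21}\) governing spontaneous emission (the random, uncaused jumps described above) and \(B_{21}\) governing stimulated emission, the process the laser exploits.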

In 1924, Einstein received a paper about particles of light out of the blue from the unknown Indian physicist, Satyendra Nath Bose. Although Bose did not clearly state his revolutionary idea, reading between the lines, Einstein detected a completely new principle of quantum theory — the idea that all fundamental particles are indistinguishable. Einstein applied the principle to atoms and discovered that a simple gas of atoms, if sufficiently cooled, would cease to obey all the laws that physicists and chemists had discovered for gases over the centuries. It turns out that this discovery underlies some of the most dramatic quantum effects, such as superconductivity. No knowledgeable physicist would dispute that Einstein deserved a full Nobel Prize for this discovery, but I am sure that Einstein would have wanted to share it with Bose (who never did receive the Prize).
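The quantitative core of Bose–Einstein statistics is equally compact. For identical, indistinguishable bosons, the mean occupation of a single-particle state of energy \(\epsilon\) is (standard form, symbols mine):

\[ \bar{n}(\epsilon) = \frac{1}{e^{(\epsilon - \mu)/k_B T} - 1}, \]

and below a critical temperature a macroscopic fraction of the atoms piles into the lowest-energy state, the Bose–Einstein condensate that Einstein predicted for a sufficiently cold gas.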

LIGO doubts??

Source: New Scientist, Oct 2018

“We believe that LIGO has failed to make a convincing case for the detection of any gravitational wave event,” says Andrew Jackson, the group’s spokesperson. According to them, the breakthrough was nothing of the sort: it was all an illusion.

The big news of that first sighting broke on 11 February 2016. In a press conference, senior members of the collaboration announced that their detectors had picked up the signature of gravitational waves emitted as a pair of distant black holes spun into one another.

The misgivings of Jackson’s group, based at the Niels Bohr Institute in Copenhagen, Denmark, began with this press conference. The researchers were surprised at the confident language with which the discovery was proclaimed and decided to inspect things more closely.

Their claims are not vexatious, nor do they come from ill-informed troublemakers. Although the researchers don’t work on gravitational waves, they have expertise in signal analysis, and experience of working with large data sets such as the cosmic microwave background radiation, the afterglow of the big bang that is spread in a fine pattern across the sky. “These guys are credible scientists,” says Duncan Brown at Syracuse University in New York, a gravitational wave expert who recently left the LIGO collaboration.

When Jackson and his team looked at the data from the first detection, their doubts grew. At first, Jackson printed out graphs of the two raw signals and held them to a window, one on top of the other. He thought there was some correlation between the two. He and his team later got hold of the underlying data the LIGO researchers had published and did a calculation. They checked and checked again. But still they found that the residual noise in the Hanford and Livingston detectors had characteristics in common. “We came to a conclusion that was very disturbing,” says Jackson. “They didn’t separate signal from noise.”
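The check the Danish group describes amounts to cross-correlating the two detectors’ residuals after the best-fit waveform has been subtracted; for genuinely independent instrument noise, the correlation should be consistent with zero at every time lag. Below is a minimal sketch of that kind of test, not LIGO’s or Jackson’s actual pipeline: the file names, the single shared template, and the roughly 7 ms inter-site shift are illustrative assumptions.

# Sketch: correlate Hanford and Livingston residuals after removing a template.
# File names and arrays are hypothetical; both series are assumed whitened,
# equal length and equally sampled.
import numpy as np

hanford = np.loadtxt("hanford_strain.txt")
livingston = np.loadtxt("livingston_strain.txt")
template = np.loadtxt("best_fit_template.txt")

fs = 4096                       # sample rate in Hz (illustrative)
shift = int(round(0.007 * fs))  # ~7 ms light-travel time between the sites

# Residual = data minus claimed signal; Livingston is time-shifted and
# sign-flipped to roughly account for the detectors' relative orientations.
res_h = hanford - template
res_l = -np.roll(livingston, -shift) - template

def xcorr(a, b, max_lag):
    """Normalised cross-correlation of a and b for lags in [-max_lag, max_lag]."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    out = []
    for lag in range(-max_lag, max_lag + 1):
        x = a[max(0, lag):len(a) + min(0, lag)]
        y = b[max(0, -lag):len(b) + min(0, -lag)]
        out.append((lag, float(np.mean(x * y))))
    return out

for lag, c in xcorr(res_h, res_l, max_lag=40):
    print(f"lag {lag:+4d} samples   correlation {c:+.3f}")

If the residuals really are independent noise, the printed correlations should scatter around zero; a persistent peak near the expected inter-site delay is the sort of feature Jackson’s group says it found.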

Jackson is suspicious of LIGO’s noise analysis. One of the problems is that there is no independent check on the collaboration’s results. That wasn’t so with the other standout physics discovery of recent years, the Higgs boson. The particle’s existence was confirmed by analysing multiple, well-controlled particle collisions in two different detectors at CERN near Geneva, Switzerland. Both detector teams kept their results from each other until the analysis was complete.

New Scientist has learned, for instance, that the collaboration decided to publish data plots that were not derived from actual analysis. The paper on the first detection in Physical Review Letters used a data plot that was more “illustrative” than precise, says Neil Cornish of the LIGO collaboration. Some of the results presented in that paper were not found using analysis algorithms, but were done “by eye”.

Brown, part of the LIGO collaboration at the time, explains this as an attempt to provide a visual aid. “It was hand-tuned for pedagogical purposes.” He says he regrets that the figure wasn’t labelled to point this out.

This presentation of “hand-tuned” data in a peer-reviewed scientific report like this is certainly unusual. New Scientist asked the editor who handled the paper, Robert Garisto, whether he was aware that the published data plots weren’t derived directly from LIGO’s data, but were “pedagogical” and done “by eye”, and whether the journal generally accepts illustrative figures. Garisto declined to comment.

There were also questionable shortcuts in the data LIGO released for public use. The collaboration approximated the subtraction of the Livingston signal from the Hanford one, leaving correlations in the data – the very correlations Jackson noticed. There is now a note on the data release web page stating that the publicly available waveform “was not tuned to precisely remove the signal”.

Whatever the shortcomings of the reporting and data release, Cornish insists that the actual analysis was done with processing tools that took years to develop and significant computing power to implement – and it worked perfectly.

However, anyone outside the collaboration has to take his word for that. “It’s problematic: there’s not enough data to do the analysis independently,” says Jackson. “It looks like they’re being open, without being open at all.”

Brown agrees there is a problem. “LIGO has taken great strides, and are moving towards open data and reproducible science,” he says. “But I don’t think they’re quite there yet.”

The Danish group’s independent checks, published in three peer-reviewed papers, found there was little evidence for the presence of gravitational waves in the September 2015 signal. On a scale from certain at 1 to definitely not there at 0, Jackson says the analysis puts the probability of the first detection being from an event involving black holes with the properties claimed by LIGO at 0.000004. That is roughly the same as the odds that your eventual cause of death will be a comet or asteroid strike – or, as Jackson puts it, “consistent with zero”. The probability of the signal being due to a merger of any sort of black holes is not huge either. Jackson and his colleagues calculate it as 0.008.

Critique of Randomized Controlled Trials (focus of 2019 Economics Nobel prize)

Source: A Fine Theorem blog, Oct 2019

 

Related Resource: The Mandarin, Sep 2017

Getting an A+ on a Harvard PhD Final Exam

Source: Marginal Revolution, Oct 2019

When it came to the first-year Macro final (I don’t mean the comprehensive exams), Andy Abel wrote a problem with dynamic programming, which was Andy’s main research area at the time.

Abhijit showed that the supposed correct answer was in fact wrong, that the equilibrium upon testing was degenerate, and he re-solved the problem correctly, finding (if I recall correctly) multiple equilibria, all more than Abel himself had seen, and Abel wrote the problem. Abhijit got an A+ (Abel, to his credit, was not shy about reporting this).

Related Resource: MIT News, Oct 2019

Esther Duflo and Abhijit Banerjee, MIT economists whose work has helped transform antipoverty research and relief efforts, have been named co-winners of the 2019 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, along with another co-winner, Harvard University economist Michael Kremer.

Salam: the First ****** Nobel Laureate

Source: PhysicsWorld, Aug 2018
<movie link>

Salam remains poorly known and largely unrecognized in Pakistan. The reason why, this film argues, was his religious beliefs. Most Pakistanis are Sunnis but Salam was an Ahmadi, part of a minor Islamic movement founded in the late 19th century by the religious leader Mirzā Ghulām Ahmad (1835–1908).

Such was the opposition in Pakistan to the Ahmadis that in 1974 the country’s parliament declared them non-Muslims – heretics essentially – and the constitution was altered to reflect that fact. Attacks on Ahmadis increased and, a decade later, Pakistan’s then president, General Zia-ul Haq, even barred members of the sect from calling themselves Muslims. They were also forbidden from professing their creed in public or even letting their places of worship be described as mosques.

The central message of this largely chronological account of Salam’s life is that Pakistan’s decision to effectively legislate against its own citizens and to cast out the Ahmadis profoundly affected him. As Salam wrote in his diary: “Declared non Muslim. Cannot cope.” Having previously been largely a “cultural Muslim”, Salam now saw his religious beliefs re-awakened.

The camera comes to rest on Salam’s name on the gravestone, above which is the epitaph: “First ****** Nobel Laureate”. With a streak of white paint covering the offending word, what should be a cause for celebration – Salam’s status as a Muslim – has literally been airbrushed out of history by someone.

Economics Nobel Prizes

Source: SagePub, Jun 2019

In economics, the University of Chicago holds the top spot with 32 laureates, followed by Harvard (30), MIT (28), Stanford (25), Berkeley (23), Yale (21), and Princeton (19). In terms of the graduate department awarding the PhD degree to economics Nobel laureates, about half have come from only five universities: MIT (11), Harvard (10), Chicago (9), Carnegie Mellon (4) and the London School of Economics (4). Six other universities have produced two each.

Robert Solow (Nobel 1987) published (in 1956) a formal model of economic growth in which the output to capital ratio was not fixed, and the growth rate of population (and consequently the labor force) drove economic growth. Technological progress entered the process via a specified steady rate of productivity growth. His first-order differential equation explaining the rate of economic growth appealed to the mathematical formalism of economists (and has lasted as a staple for more than 50 years) because it is founded on notions essential to popular classical economic theories.
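That differential equation, in one common textbook statement (Cobb–Douglas technology assumed here for concreteness; Solow’s own paper is more general), is:

\[ \dot{k} = s\,k^{\alpha} - (n + g + \delta)\,k, \]

where \(k = K/(AL)\) is capital per effective worker, \(s\) the saving rate, \(n\) population growth, \(g\) the exogenous rate of technological progress and \(\delta\) depreciation. On the balanced path \(\dot{k} = 0\), and output per worker grows at the rate \(g\) that is set outside the model.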

Paul Romer, recipient of the 2018 Prize, extended the basic growth model by including technological progress as an endogenous factor, wherein economic policies, such as trademark and patent laws, could affect and accelerate the rate of technological progress (Henderson, 2018).
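A commonly quoted one-line summary of the Romer (1990) mechanism, in the usual notation (again a sketch, not the article’s own formula), is the idea-production equation:

\[ \dot{A} = \bar{\delta}\, H_A\, A, \]

so the growth rate of the stock of ideas \(A\) rises with the amount of human capital \(H_A\) devoted to research, and policies such as patent law and research subsidies matter precisely because they change the return to, and hence the amount of, that research effort.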

Won both the Nobel Prize and the Ig Nobel

Source: Wikipedia, date indeterminate

 

Geim was awarded the 2010 Nobel Prize in Physics jointly with Konstantin Novoselov for his work on graphene.[21][22] He is Regius Professor of Physics and Royal Society Research Professor at the Manchester Centre for Mesoscience and Nanotechnology.

In addition to the 2010 Nobel Prize, he received an Ig Nobel Prize in 2000 for using the diamagnetic properties of water to levitate a small frog with magnets. This makes him the first, and thus far only, person to receive both the prestigious science award and its tongue-in-cheek equivalent.

Geim’s research in 1997 into the possible effects of magnetism on water scaling led to the famous demonstration of direct diamagnetic levitation of water, and ultimately to a frog being levitated.[51] For this experiment, he and Michael Berry received the 2000 Ig Nobel Prize.[5] “We were asked first whether we dared to accept this prize, and I take pride in our sense of humor and self-deprecation that we did”.[30]
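The physics behind the floating frog fits in a single balance condition: the upward diamagnetic force per unit volume must match the frog’s weight. Since a frog is mostly water, the standard estimate (values approximate) uses water’s susceptibility \(\chi \approx -9\times10^{-6}\) and density \(\rho \approx 10^{3}\ \mathrm{kg/m^3}\):

\[ \frac{|\chi|}{\mu_0}\, B\,\frac{dB}{dz} = \rho g \quad\Rightarrow\quad B\,\frac{dB}{dz} = \frac{\mu_0\,\rho\, g}{|\chi|} \approx 1400\ \mathrm{T^2/m}, \]

which in practice calls for a field of order 10–20 T with a steep vertical gradient, hence the powerful electromagnet used in the experiment.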

He said of the range of subjects he has studied: “Many people choose a subject for their PhD and then continue the same subject until they retire. I despise this approach. I have changed my subject five times before I got my first tenured position and that helped me to learn different subjects.”[34]

He named his favourite hamster, H.A.M.S. ter Tisha, co-author in a 2001 research paper.

A colleague of Geim said that his award shows that people can still win a Nobel by “mucking about in a lab”.[76]

On winning both a Nobel and Ig Nobel, he has stated that

“Frankly, I value both my Ig Nobel prize and Nobel prize at the same level and for me Ig Nobel prize was the manifestation that I can take jokes, a little bit of self-deprecation always helps.”[11]

Predicting a Nobel Prize for “Mathematics of Ideas”

Source: SlideShare, Oct 2017
https://www.slideshare.net/secret/vcqrQrMaokNa80

This Oct 2017 SlideShare predicts that Paul Romer’s award of the Nobel Memorial Prize in Economic Sciences could lead to a later Nobel Memorial Prize in Economic Sciences for the “mathematics of ideas”.

Slide 27 (D8).

More on Romer’s Endogenous Growth Theory

Source: Conversations with Economists, Summer 1999

The strength of Solow’s model was that he brought technology explicitly into the analysis in both his empirical paper and his theoretical paper. He had an explicit representation for technology, capital and labour. Those are the three elements that you have to think about if you want to think about growth. That was the good part.

The downside was that because of the constraints imposed on him by the existing toolkit, the only way for him to talk about technology was to make it a public good. That is the real weakness of the Solow model.

What endogenous growth theory is all about is that it took technology and reclassified it, not as a public good, but as a good which is subject to private control. It has at least some degree of appropriability or excludability associated with it, so that incentives matter for its production and use. But endogenous growth theory also retains the notion of nonrivalry that Solow captured.

As he suggested, technology is a very different kind of good from capital and labour because it can be used over and over again, at zero marginal cost. The Solow theory was a very important first step. The natural next step beyond was to break down the public good characterisation of technology into this richer characterisation – a partially excludable nonrival good. To do that you have to move away from perfect competition and that is what the recent round of growth theory has done. We needed all of the tools that were developed between the late 1950s and the 1980s to make that step.

…

We know from Solow, and this observation has withstood the test of time, that even if investment in capital contributes directly to growth, it is technology that causes the investment in the capital and indirectly causes all the growth. Without technological change, growth would come to a stop.
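In terms of the Solow equation quoted above, the point is immediate: with no technological change (g = 0), the steady state \(s f(k^*) = (n+\delta)k^*\) pins down a constant level of output per worker, so any sustained growth in living standards has to come from growth in technology \(A\).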

… 

Too many words and not enough math?

Yes, and words are often ambiguous.

When you come to nonrival goods, we do not know what the right institutions are. It is an area that I think is very exciting because there is a lot of room for institutional innovation. One strategy is to work out a rough trade-off where you allow patent rights but you make them be narrow and have a finite duration. You would allow partial excludability — less than full but stronger than zero excludability. We often talk as if that is the general solution. But in fact, this is not the general solution. You have to break the question down by type of nonrival good.

There are some nonrival goods like the quadratic formula or pure mathematical algorithms that traditionally have been given no property rights whatsoever. There are other forms of nonrival goods like books. You will get a copyright for this book of interviews, which is a very strong form of protection. The text that you write and my words – you can take them and put a copyright on them so that nobody else can re-use them. I cannot even re-use my own words without getting permission from you (laughter). So that is a very strong form of intellectual property protection.

What we need is a much more careful differentiation of different types of nonrival goods and an analysis of why different institutional structures and degrees of property protection are appropriate for different kinds of goods.

Patent rights or legal property rights are only a part of the story. We create other mechanisms, like subsidies for R and D. We create whole institutions like universities which are generally nonprofit and government supported, that are designed to try and encourage the production of ideas. The analysis of institutions for nonrival goods is more subtle than many people realise.

 If you want to encourage the production of ideas, one way is to subsidise the ideas themselves. But another way is to subsidise the inputs that go into the production of ideas.

In a typical form of second-best analysis, you may want to introduce an additional distortion – subsidies for scientists and engineers – to offset another – the fact that the social returns from new ideas are higher than the private returns. You create a much larger pool of scientists and engineers. This lowers the price of scientists and engineers to anybody who wants to hire their services to produce new ideas.

So in general, the optimal design of institutions is an unresolved problem. We have seen a lot of experimentation during the last 100 years. I have made the claim that the economies that will really do well in the next 100 years will be the ones that come up with the best institutions for simultaneously achieving the production of new ideas and their widespread use. I am quite confident that we will see new societal or institutional mechanisms that will get put in place for encouraging new ideas.

The real test is, does the theory give us some guidance in constructing institutions that will encourage growth? Does it help us understand what kinds of things led to the difference between the growth performance of the UK and the US in the last one hundred years? If the theory gives us that kind of guidance then it has been successful and can help us design policies to improve the quality of people’s lives, and that is an extremely important contribution.

Source: Marketplace.org, Oct 2018

Romer’s big breakthrough was this: He took models of economic growth and added a missing, magic ingredient.

“Paul Romer put the production of ideas at the center stage of economic growth,” said MIT’s Scott Stern. Stern said ideas are different from a lot of other goods because they don’t get used up. “Either I eat an apple or you eat the apple, but we can all use calculus,” he said.

Sharing ideas, like the technology behind the internet, creates growth. Ideas need to be protected, at least a little, to reward people for coming up with them. So you have things like patents on new drugs, for example.

Romer put all this together in a concrete economic model with a major implication, said Chad Jones, a former colleague of Romer’s and a professor at Stanford University. Jones summarizes: “Economic growth is the result of these innovative efforts by entrepreneurs and scientists and researchers, and so anything that influences their effort can therefore affect our living standards in the long run.”

Nobel Laureate Fringe Benefits

Source: WSJ, Oct 2018

Being named a Nobel laureate comes with recognition for breakthrough lifetime achievements, money, a gold medal and a trip to Stockholm. For some winners, it also brings free parking. Prize recipients at universities including Brown, Duke and the University of Southern California have won prime spots—many marked, “Reserved for Nobel Laureate.”

Danish scientist Niels Bohr may have gotten one of the most novel awards a few years after winning the 1922 Nobel Prize for physics: free beer for life.

The founder of Carlsberg, the Danish brewer, left his house as a lifetime residence “for a deserving man or woman within the fields of science, literature or art.” Dr. Bohr, known for his model of the hydrogen atom, moved in with his family in 1931, after the previous resident died.

The house came with an inexhaustible supply of the pilsner, company spokesman Kasper Elbjørn said, both in bottle and on tap.

The Lawrence Berkeley National Laboratory in Berkeley, Calif., has for decades named streets for its winners. The hilly 202-acre facility is dotted with signs for a dozen such stars, including Alvarez Road, Lee Road, McMillan Road, Perlmutter Road and Segrè Road. A few streets are still available for Nobel Prize recipients, a spokeswoman said.