Category Archives: MIT

Bob Metcalfe: Visionary vs. Stubborn

Source: MIT, Jun 2016

The only difference between being a visionary and being stubborn is whether you are right or not.

Eric Schmidt (Former Google CEO) : MIT Innovation Fellow

Source: MIT, Feb 2018

Today, MIT President L. Rafael Reif announced that Eric Schmidt, who until January was the executive chairman of Google’s parent company, Alphabet, will join MIT as a visiting innovation fellow for one year, starting in the spring.

Schmidt will figure prominently in MIT’s plans to bring human and machine intelligence to the next level, serving as an advisor to the newly launched MIT Intelligence Quest, an Institute-wide initiative to pursue hard problems on the horizon of intelligence research.

“I am thrilled that Dr. Schmidt will be joining us,” says MIT President L. Rafael Reif. “As MIT IQ seeks to shape transformative new technologies to serve society, Eric’s brilliant strategic and tactical insight, organizational creativity, and exceptional technical judgment will be a tremendous asset. And for our students, his experience in driving some of the most important innovations of our time will serve as an example and an inspiration.”

In his role as a visiting innovation fellow, Schmidt will work directly with MIT scholars to explore the complex processes involved in taking innovation beyond invention to address urgent global problems. In addition, Schmidt will engage with the MIT community through events, lectures, and individual sessions with student entrepreneurs.

MIT’s Intelligence Quest

Source: TechCrunch, Feb 2018

This week, the school announced the launch of the MIT Intelligence Quest, an initiative aimed at leveraging its AI research into something it believes could be game-changing for the category. The school has divided its plan into two distinct categories: “The Core” and “The Bridge.”

“The Core is basically reverse-engineering human intelligence,” dean of the MIT School of Engineering Anantha Chandrakasan tells TechCrunch, “which will give us new insights into developing tools and algorithms, which we can apply to different disciplines. And at the same time, these new computer science techniques can help us with the understanding of the human brain. It’s very tightly linked between cognitive science, neuroscience, and computer science.”

The Bridge, meanwhile, is designed to provide access to AI and ML tools across its various disciplines. That includes research from both MIT and other schools, made available to students and staff.

“Many of the projects are moonshots,” explains James DiCarlo, head of the Department of Brain and Cognitive Sciences. “They involve teams of scientists and engineers working together. It’s essentially a new model, and we need folks and resources behind that.”

Funding for the initiative will be provided by a combination of philanthropic donations and partnerships with corporations. But while the school has had blanket partnerships in the past, including, notably, the MIT-IBM Watson AI Lab, the goal here is not to become beholden to any single company. Ideally the school will be able to work alongside a broad range of companies to achieve its large-scale goals.

“Imagine if we can build machine intelligence that grows the way a human does,” adds Josh Tenenbaum, professor of cognitive science and computation. “That starts like a baby and learns like a child. That’s the oldest idea in AI and it’s probably the best idea… But this is a thing we can only take on seriously now, and only by combining the science and engineering of intelligence.”

Robert Solow – 1987 Economics Nobel Laureate

Source: UPenn library, Apr 1988

In the late 1950s Solow formulated a theory of economic growth that emphasized the importance of technology. He stated that technology, broadly defined as the application of new knowledge to the production process, is chiefly responsible for expanding an economy over the long term, even more so than increases in capital or labor. And since basic and applied research is often the prelude to the birth of new technologies, the work of researchers has increasingly been perceived to have economic, not merely intellectual and cultural, significance.

But most remarkable, and startling even to the discoverer, was the finding, reported in the 1957 article (“Technical Change and the Aggregate Production Function”), that seven-eighths of the doubling in gross output per hour of work in the US economy between 1909 and 1949 was due to “technical change in the broadest sense” (which includes improvements in the education of the labor force). Only one-eighth was due to increased injections of capital.
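In modern textbook notation (ours, not the notation of the 1957 paper), the decomposition behind that seven-eighths figure is the standard growth-accounting identity:

\[
Y = A\,F(K, L)
\qquad\Longrightarrow\qquad
\frac{\dot{Y}}{Y} \;=\; \frac{\dot{A}}{A} \;+\; \alpha\,\frac{\dot{K}}{K} \;+\; (1-\alpha)\,\frac{\dot{L}}{L}
\]

where \(Y\) is output, \(K\) capital, \(L\) labor, \(\alpha\) capital’s share of income, and \(A\) the level of technology. The term \(\dot{A}/A\), now called the Solow residual, is whatever output growth the measured growth in capital and labor cannot explain; it is this residual that accounted for roughly seven-eighths of the growth in US output per hour over 1909–1949.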

Karl-Goran Maler, Stockholm School of Economics, Sweden, a member of the Nobel committee, noted, “Solow showed us that in the long run it is not increase in quantity that is important. It is the increase in quality through better technology and increased efficiency.”

NP-Complete = Circuit Problem

Source: MIT website

Any NP-complete problem can be represented as a logic circuit — a combination of the elementary computing elements that underlie all real-world computing. Solving the problem is equivalent to finding a set of inputs to the circuit that will yield a given output.
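To make that representation concrete, here is a minimal Python sketch (ours, not from the source): a toy circuit written as a function, and a brute-force search for inputs that yield a desired output. With n inputs the search takes up to 2^n tries, which is exactly why beating brute force on circuit satisfiability is significant.

from itertools import product

def circuit(a, b, c):
    # (a AND b) XOR (NOT c) -- an arbitrary example circuit built from
    # elementary gates; real instances encode an NP-complete problem.
    return (a and b) != (not c)

def find_inputs(target):
    """Brute force: try every assignment until the circuit yields target."""
    for bits in product([False, True], repeat=3):
        if circuit(*bits) == target:
            return bits
    return None  # no satisfying assignment exists

print(find_inputs(True))   # first satisfying assignment: (False, False, False)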

Suppose that, for a particular class of circuits, a clever programmer can devise a method for finding inputs that’s slightly more efficient than solving a generic NP-complete problem. Then, Williams showed, it’s possible to construct a mathematical function that those circuits cannot implement efficiently.

It’s a bit of computational jiu-jitsu: By finding a better algorithm, the computer scientist proves that a circuit isn’t powerful enough to perform another computational task. And that establishes a lower bound on that circuit’s performance.

First, Williams proved the theoretical connection between algorithms and lower bounds, which was dramatic enough, but then he proved that it applied to a very particular class of circuits.
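Schematically, the connection has the following shape (our paraphrase of Williams’s result, not wording from the source): if satisfiability of circuits from a class \(\mathcal{C}\) can be decided noticeably faster than brute force,

\[
\textsc{Circuit-SAT}_{\mathcal{C}} \in \mathrm{TIME}\!\left(\frac{2^n}{n^{\omega(1)}}\right),
\]

then the class NEXP contains a function that polynomial-size \(\mathcal{C}\)-circuits cannot compute:

\[
\mathsf{NEXP} \not\subseteq \mathcal{C}.
\]

The “very particular class” for which Williams carried this out is ACC circuits: constant-depth circuits built from AND, OR, NOT, and modular counting gates.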

++++++++++++++++++++++++++++++++++++++++++++++++++++++

Although he had been writing computer programs since age 7 — often without the benefit of a computer to run them on — Williams had never taken a computer science class. Now that he was finally enrolled in one, he found it boring, and he was not shy about saying so in class.

Eventually, his frustrated teacher pulled a heavy white book off of a shelf, dumped it dramatically on Williams’s desk, and told him to look up the problem described in the final chapter. “If you can solve that,” he said, “then you can complain.”

The book was “Introduction to Algorithms,” co-written by MIT computer scientists Charles Leiserson and Ron Rivest and one of their students, and the problem at the back was the question of P vs. NP, which is frequently described as the most important outstanding problem in theoretical computer science.

Biggest Regrets (MIT students)

Source: Quora, date indeterminate

One of the biggest regrets I observed among the people I knew was that we didn’t take advantage of the opportunities available to us during our time at MIT.

At MIT, we had unparalleled opportunity to talk to incredibly intelligent, insightful, and interesting minds around us; not just fellow students, but also faculty and leadership at the Institute. People also had a great can-do attitude; if you had an exciting idea that you wanted to bring to life, chances were that you’d be able to find multiple people to help you do it.

However, for many of us, this opportunity was easy to take for granted, and there was always something urgent vying for our attention—a problem set due the next day, an upcoming week of midterms, rehearsals for this weekend’s a cappella performance. As a result, we entirely neglected this opportunity and didn’t realize that it would be gone until it was too late. This particular regret doesn’t just apply to MIT or even just to college in general—it applies to any time/place where a large number of interesting people are gathered. Take advantage of these gatherings, and talk to as many people as you can.

More personally, I wish that I had obsessed less about my grades. Perhaps I would feel differently about this if I were going into a field where my undergraduate GPA mattered (e.g. medicine, academia, anything that required additional schooling), but because I chose to enter the software industry, my grades were of little consequence as long as they were satisfactory. I felt that the amount of time and effort I spent worrying unnecessarily about my grades could have been better spent exploring more extracurriculars, working on side projects, or simply spending more time with my friends (I was often the one turning down Friday night group dinners to study).

Furthermore, this obsession with my grades made me more hesitant to take challenging classes, and I often dropped classes once it became clear that it would be difficult for me to get an A in them.

MIT Student Notes

Source: MIT, Sep 2017