Source: MIT News, Aug 2018
Source: MIT, Jul 2019
The system takes as input grayscale image examples — such as the flat actuator that displays the Van Gogh portrait but tilts at an exact angle to show “The Scream.” It basically executes a complex form of trial and error that’s somewhat like rearranging a Rubik’s Cube, but in this case around 5.5 million voxels are iteratively reconfigured to match an image and meet a measured angle.
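The article doesn’t spell out the optimization itself, so purely as an illustration of that kind of iterative trial and error, the sketch below is a minimal hill-climbing loop: single voxels are flipped at random and kept only when they bring the pattern closer to a target image. The array sizes, the 0/1 material encoding, and the scoring function are assumptions made for the example, not details of the MIT system (which also has to satisfy the tilted-view constraint).

```python
import numpy as np

# Minimal sketch of iterative "trial and error" voxel reconfiguration.
# (Hypothetical: the real system's materials, resolution, and objective
#  function are not described in detail in the article.)
rng = np.random.default_rng(0)

target = rng.random((64, 64))          # stand-in for the grayscale target image
voxels = rng.integers(0, 2, (64, 64))  # 0/1 material assignment per voxel

def score(v):
    # How far the pattern produced by this voxel layout is from the target;
    # here we pretend each voxel's value maps directly to a gray level.
    return np.abs(v - target).mean()

current = score(voxels)
for step in range(100_000):
    i, j = rng.integers(0, 64, size=2)
    voxels[i, j] ^= 1                  # tentatively flip one voxel
    new = score(voxels)
    if new < current:
        current = new                  # keep the improvement
    else:
        voxels[i, j] ^= 1              # revert the flip
```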
Source: Fast Company, Jun 2019
The computer that flew the astronauts to the Moon—the Apollo guidance computer—was a marvel of the 1960s: small, fast, nimble, and designed for the people who were using it, astronauts flying spaceships.
It also represented a huge leap for NASA. The risk lay in the cutting-edge technology MIT used to squeeze as much power and speed as possible into the computer’s slim, briefcase-sized case. It was one of the boldest and riskiest bets of the whole Moon mission, and one that few knew about or appreciated at the time.
The MIT Instrumentation Lab tried to design the Apollo computer using transistors, which in the early 1960s were well-settled technology—reliable, understandable, relatively inexpensive. But 15 months into the design effort, it became clear that transistors alone couldn’t give the astronauts the computing power they needed to fly to the Moon. In November 1962, MIT’s engineers got NASA’s permission to use a very new technology: integrated circuits.
As part of its early evaluation, MIT bought 64 integrated circuits from Texas Instruments. The price was $1,000 each, or $9,000 apiece in 2019 dollars. Each had six transistors.
But integrated circuits would change what it was possible for the Apollo computer to do. They would increase its speed by 2.5 times while allowing a reduction in space of 40% (the computer didn’t get smaller; it just got packed with more capacity).
The Apollo computers were the most sophisticated general-purpose computers of their moment. They took in data from dozens of sensors, from radar, directly from Mission Control. They kept the spaceships oriented and on course. They were, in fact, capable of flying the whole mission on autopilot, while also telling the astronauts what was going on in real time.
MIT did two things to solve the problems of those first integrated circuits. Working with early chip companies—Fairchild Semiconductor, Texas Instruments, Philco—it drove the manufacturing quality of computer chips up by a factor of 1,000. MIT had a battery of a dozen acceptance tests for the computer chips it bought, and if even one chip in a lot of 1,000 failed one test, MIT packed up the whole lot and sent it back.
And MIT, on behalf of NASA, bought so many of the early chips that it drove the price down dramatically: from $1,000 a chip in that first order to $15 a chip in 1963, when MIT was ordering lots of 3,000. By 1969, those basic chips cost $1.58 each, except they had significantly more capability, and a lot more reliability, than the 1963 version.
MIT and NASA were able to do all that because for year after year, Apollo was the No. 1 customer for computer chips in the world.
Even as NASA’s share of the chip market dropped, its total purchasing soared: the 1965 volume was 20 times what it had been just three years earlier.
Inside the government, the only users of the chips were NASA and the Air Force’s Minuteman missile program, a relatively small project compared with the Apollo computers.
Without knowing it, the world was witnessing the birth of “Moore’s Law,” the driving idea of the computer world that the capability of computer chips would double every two years, even as the cost came down.
In fact, Fairchild Semiconductor’s Gordon Moore wrote the paper outlining Moore’s Law in 1965, when NASA had been the No. 1 buyer of computer chips in the world for four years, and the only user of integrated circuits that Moore cites by name in that paper is “Apollo, for manned Moon flight.” Moore would go on to cofound and lead Intel, and help drive the digital revolution into all elements of society.
What was Moore doing when he conceived of Moore’s Law? He was Fairchild’s director of research and development, and Fairchild’s most significant customer was MIT’s Apollo computer.
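The numbers already quoted give a feel for the curve. As a quick back-of-the-envelope sketch (the two-year doubling period is Moore’s Law as it is commonly stated; the arithmetic below is an illustration, not a figure from the article):

```python
# Rough arithmetic using the chip prices quoted above:
# roughly $1,000 per chip in the first order (circa 1962) versus $1.58 in 1969.
drop = 1000 / 1.58                        # ~630x cheaper
years = 1969 - 1962
annual_decline = 1 - drop ** (-1 / years)
print(f"{drop:.0f}x cheaper over {years} years (~{annual_decline:.0%} per year)")

# Moore's Law as commonly stated: capability doubles roughly every two years,
# i.e. a 2**(t/2) growth factor after t years.
for t in (2, 4, 6, 8, 10):
    print(f"after {t} years: {2 ** (t / 2):.0f}x the capability")
```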
It’s a part of the history of modern computing that Silicon Valley manages to skip over most of the time, but MIT, NASA, and the race to the Moon laid the very foundation of the digital revolution, of the world we all live in.
Source: MIT, May 2019
This very human tendency was the inspiration for FAIL!, an event series committed to destigmatizing failure.
At each conference, prominent scholars from MIT and Harvard University share 10-minute stories of personal, academic, and professional failures, followed by a Q&A session with the audience. By learning of the challenges and missteps of highly successful people, the organizers hope to reduce the discouragement and isolation attendees may feel when confronted with their own failures.
MIT professor of computer science Daniel Jackson, who recently published a book on resilience at MIT, opened this April’s FAIL! Conference by reflecting on the different types of failure. “There’s what I call ‘little-f failure’ and ‘big-F failure,’” he said. “Little-f failure is when you do something and you screw it up … Big-F failure is when your whole life comes to nothing.”
Big-F failures, he noted, are relatively rare, although fear of them can lead people to avoid taking worthwhile risks and limit their ability to lead full, meaningful lives. “Talking about fear and failure is the key to changing ourselves and the culture in which we live,” said Jackson, emphasizing the importance of events like FAIL! that create spaces to explore these topics.
Professor of humanities, sociology, and anthropology Susan Silbey, who was recently awarded MIT’s highest faculty honor, the Killian Award, spoke after Jackson. Although Silbey has had a celebrated career with seemingly few little-f failures, she struggled to find direction and mentorship as a graduate student.
“I started my PhD two months after I graduated college,” said Silbey. “In 1962 there weren’t very many women who joined PhD programs at the University of Chicago. That was quite extraordinary in that year. What was more extraordinary is that I did not graduate until 1978. Sixteen years. That is not the career of a star: That is a failure.” Silbey credited her eventual success to her love of learning and research, regardless of the topic she was studying.
Harvard Medical School professor of genetics George Church spoke at the FAIL! Conference held in November 2018. Those who know him as a founding father of synthetic biology would be surprised to learn that he spent six months homeless and failed out of graduate school at Duke University prior to being accepted to a PhD program at Harvard University, where he later graduated.
Church encouraged the audience to not only embrace their own failures, but to learn from the failures of others. “I’ve learned as much from my negative role models as I did from my positive ones,” he said. “They had trouble, and you’re trying to learn from their trouble without personally experiencing it.”
“FAIL! is about being human,” adds Benedetti. “We all need inspiring and realistic role models. By sharing the challenges and vulnerabilities that many people try to hide, our brave speakers are helping to create an environment where students feel comfortable being themselves and expressing their creativity. We believe that FAIL! is providing a model of thoughtfulness and humility, which will inspire attendees to be better leaders.”
Source: MIT, May 2019
On April 27 the MIT Great Dome was transformed once again, this time decked out as Captain America’s shield in honor of the release of Avengers: Endgame.
The hack even got the attention of the actor who plays Captain America, Sudbury native Chris Evans, who tweeted the Boston Globe’s article about it.
Why are these four US universities the joint #1 computer science departments?
(in alphabetical order: Carnegie Mellon, MIT, Stanford, and UC Berkeley)
Source: Moserware, May 2008
In addition to funding research organizations like SDC, RAND, and SRI, Lick is remembered for helping fund the new computer science departments at Berkeley, Stanford, Carnegie Mellon, and perhaps most famously, MIT.
The initial jumpstart provided by Licklider might explain these four schools’ top ranking (as of 2019).
Lick’s funding at MIT went towards a system that would put together several foundational ideas needed for his man-computer symbiosis.
The goal was “Machine-Aided Cognition,” but since computer time was hard to come by and the team had to invent many of the concepts of time-sharing to let multiple people use a computer at once, the project also went by the name “Multiple Access Computer.” Whichever name you chose, the acronym was the same: MAC.
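Just to illustrate the core idea of time-sharing (and not the actual MIT systems, whose scheduling was far more sophisticated), a toy round-robin loop might look like this:

```python
from collections import deque

# Toy illustration of time-sharing: several users take turns getting short
# slices of a single computer. (Purely illustrative; names and time units
# are invented, and this is not how CTSS or Multics actually worked.)
jobs = deque([("alice", 3), ("bob", 2), ("carol", 4)])  # (user, slices left)

tick = 0
while jobs:
    user, remaining = jobs.popleft()
    tick += 1
    print(f"t={tick}: running {user}'s job for one time slice")
    if remaining > 1:
        jobs.append((user, remaining - 1))  # not done yet: back of the queue
```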
Related Resource: Forbes, Nov 2000
Lick would have $10 million a year to give away pretty much as he saw fit, with no peer review and no second-guessing from the higher-ups. So long as he was advancing command and control, broadly defined, he could set the program’s agenda, and he could choose which projects to fund. In effect, Lick was being offered $10 million a year to pursue his vision of human-computer symbiosis.