Category Archives: Uncategorized

Math Model of Innovation

Source: MIT Technology Review, Jan 2017

the first mathematical model that accurately reproduces the patterns that innovations follow. The work opens the way to a new approach to the study of innovation, of what is possible and how this follows from what already exists.

The adjacent possible is all those things—ideas, words, songs, molecules, genomes, technologies and so on—that are one step away from what actually exists. It connects the actual realization of a particular phenomenon and the space of unexplored possibilities.

But this idea is hard to model for an important reason. The space of unexplored possibilities includes all kinds of things that are easily imagined and expected but it also includes things that are entirely unexpected and hard to imagine. And while the former is tricky to model, the latter has appeared close to impossible.

each innovation changes the landscape of future possibilities. So at every instant, the space of unexplored possibilities—the adjacent possible—is changing.

“Though the creative power of the adjacent possible is widely appreciated at an anecdotal level, its importance in the scientific literature is, in our opinion, underestimated,” say Loreto and co.

even with all this complexity, innovation seems to follow predictable and easily measured patterns that have become known as “laws” because of their ubiquity. One of these is Heaps’ law, which states that the number of new things increases at a rate that is sublinear. In other words, it is governed by a power law of the form V(n) = k·n^β, where β is between 0 and 1.

Words are often thought of as a kind of innovation, and language is constantly evolving as new words appear and old words die out.

Given a corpus of words of size n, the number of distinct words V(n) is proportional to n raised to the β power. In collections of real words, β turns out to be between 0.4 and 0.6.
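Heaps’ law is easy to see empirically: V(n) is just the running count of distinct words as a corpus is read. The following is a minimal sketch in Python; the corpus here is synthetic (words drawn with probability proportional to 1/rank over an assumed 20,000-word vocabulary), so it only illustrates the sublinear shape rather than reproducing the 0.4–0.6 exponents measured on real text.

```python
import random

def heaps_curve(tokens):
    """Number of distinct words V(n) after reading the first n tokens."""
    seen, curve = set(), []
    for tok in tokens:
        seen.add(tok)
        curve.append(len(seen))
    return curve

# Synthetic corpus: word of rank r is drawn with probability ~ 1/r.
random.seed(0)
vocab = list(range(1, 20001))
weights = [1.0 / r for r in vocab]
corpus = random.choices(vocab, weights=weights, k=50000)

curve = heaps_curve(corpus)
# Sublinear growth: doubling the corpus far less than doubles the vocabulary.
print(curve[9999], curve[19999], curve[49999])
```

Fitting a line to log V(n) versus log n on such a curve gives the exponent β that the text quotes as falling between 0.4 and 0.6 for real word collections.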

Another well-known statistical pattern in innovation is Zipf’s law, which describes how the frequency of an innovation is related to its popularity. For example, in a corpus of words, the most frequent word occurs about twice as often as the second most frequent word, three times as frequently as the third most frequent word, and so on. In English, the most frequent word is “the” which accounts for about 7 percent of all words, followed by “of” which accounts for about 3.5 percent of all words, followed by “and,” and so on.

This frequency distribution is Zipf’s law and it crops up in a wide range of circumstances, such as the way edits appear on Wikipedia, how we listen to new songs online, and so on.
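The rank-frequency relationship above can be checked in a few lines. Here is a minimal sketch using an idealized corpus built so that the word of rank r appears proportionally to 1/r; the word list borrows the common English words mentioned above, but the counts are illustrative rather than measured.

```python
from collections import Counter

def rank_frequencies(tokens):
    """Word frequencies sorted from most to least common (Zipf ranking)."""
    counts = Counter(tokens)
    return [c for _, c in counts.most_common()]

# Ideal Zipfian corpus: word of rank r appears 600 // r times.
corpus = []
words = ["the", "of", "and", "to", "a"]
for rank, word in enumerate(words, start=1):
    corpus += [word] * (600 // rank)   # 600, 300, 200, 150, 120 occurrences

freqs = rank_frequencies(corpus)
# Under Zipf's law, rank * frequency is roughly constant.
print([rank * f for rank, f in enumerate(freqs, start=1)])  # → [600, 600, 600, 600, 600]
```

In real corpora the product of rank and frequency is only approximately constant, but the 2:1 and 3:1 ratios described above follow directly from this 1/rank form.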

They begin with a well-known mathematical sand box called Polya’s Urn. It starts with an urn filled with balls of different colors. A ball is withdrawn at random, inspected and placed back in the urn with a number of other balls of the same color, thereby increasing the likelihood that this color will be selected in future.

This is a model that mathematicians use to explore rich-get-richer effects and the emergence of power laws. So it is a good starting point for a model of innovation. However, it does not naturally produce the sublinear growth that Heaps’ law predicts.
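A short simulation of the classic urn makes the rich-get-richer dynamic, and its limitation, visible. This is a minimal sketch; the function name, parameters, and two-color setup are illustrative choices, not taken from the paper.

```python
import random

def polya_urn(initial_colors, reinforcement, steps, rng):
    """Classic Polya urn: draw a ball at random, return it, and add
    `reinforcement` extra copies of the same color, so colors that get
    picked become more likely to be picked again."""
    urn = list(initial_colors)
    for _ in range(steps):
        ball = rng.choice(urn)
        urn += [ball] * reinforcement
    return urn

rng = random.Random(42)
urn = polya_urn(["red", "blue"], reinforcement=1, steps=1000, rng=rng)
# Rich-get-richer: the final split is usually far from 50/50, but no ball
# of a brand-new color can ever appear.
print(sorted(set(urn)))  # → ['blue', 'red']
```

The last point is the limitation the text describes: the classic urn only reinforces colors it started with, so the count of distinct colors is fixed and the sublinear growth of novelties required by Heaps’ law never emerges.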

That’s because the Polya urn model allows for all the expected consequences of innovation (of discovering a certain color) but does not account for all the unexpected consequences of how an innovation influences the adjacent possible.

So Loreto, Strogatz, and co have modified Polya’s urn model to account for the possibility that discovering a new color in the urn can trigger entirely unexpected consequences. They call this model “Polya’s urn with innovation triggering.”

The exercise starts with an urn filled with colored balls. A ball is withdrawn at random, examined, and replaced in the urn.

If this color has been seen before, a number of other balls of the same color are also placed in the urn. But if the color is new—it has never been seen before in this exercise—then a number of balls of entirely new colors are added to the urn.

Loreto and co then calculate how the number of new colors picked from the urn, and their frequency distribution, changes over time. The result is that the model reproduces Heaps’ and Zipf’s laws as they appear in the real world—a mathematical first. “The model of Polya’s urn with innovation triggering presents for the first time a satisfactory first-principle based way of reproducing empirical observations,” say Loreto and co.
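The triggering mechanism described above can be sketched as follows. The parameter names rho (copies added on every draw) and nu (one less than the number of brand-new colors added when a never-seen color is drawn) and the exact bookkeeping are my reading of the model, not code from the paper.

```python
import random

def urn_with_triggering(rho, nu, steps, rng):
    """Polya's urn with innovation triggering: drawing any ball adds `rho`
    copies of its color; drawing a color never seen before in the sequence
    also adds `nu + 1` balls of entirely new colors, expanding the
    adjacent possible."""
    urn = [0]                  # colors are integers; color 0 starts alone
    next_color = 1
    seen = set()
    sequence = []
    for _ in range(steps):
        ball = rng.choice(urn)
        sequence.append(ball)
        urn += [ball] * rho    # reinforcement, as in the classic urn
        if ball not in seen:   # a genuinely new color: trigger novelties
            seen.add(ball)
            urn += list(range(next_color, next_color + nu + 1))
            next_color += nu + 1
    return sequence

rng = random.Random(1)
seq = urn_with_triggering(rho=2, nu=1, steps=20000, rng=rng)
# Distinct colors grow sublinearly in the number of draws (Heaps' law).
print(len(set(seq)), len(seq))
```

With this setup the number of distinct colors drawn grows sublinearly in the number of draws, which is the Heaps-like behavior the team reports, while the reinforcement step produces the skewed Zipf-like frequency distribution.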

The team has also shown that its model predicts how innovations appear in the real world. The model accurately predicts how edit events occur on Wikipedia pages, the emergence of tags in social annotation systems, the sequence of words in texts, and how humans discover new songs in online music catalogues.

Interestingly, these systems involve two different forms of discovery. On the one hand, there are things that already exist but are new to the individual who finds them, such as online songs; and on the other are things that never existed before and are entirely new to the world, such as edits on Wikipedia.

Loreto and co call the former novelties—they are new to an individual—and the latter innovations—they are new to the world.

Curiously, the same model accounts for both phenomena. It seems that the pattern behind the way we discover novelties—new songs, books, etc.—is the same as the pattern behind the way innovations emerge from the adjacent possible.

That raises some interesting questions, not least of which is why this should be. But it also opens an entirely new way to think about innovation and the triggering events that lead to new things. “These results provide a starting point for a deeper understanding of the adjacent possible and the different nature of triggering events that are likely to be important in the investigation of biological, linguistic, cultural, and technological evolution,” say Loreto and co.

The MIT mascot (Beaver) Visits LA

… Universal Studios


Collaboration Leads to Incremental Innovation

Source: New England Complex Systems Institute, date indeterminate

As a result, current collaborative design approaches are typically characterized by a heavy reliance on expensive and time-consuming processes, poor incorporation of some important design concerns (typically later life-cycle issues such as environmental impact), and reduced creativity owing to the tendency to incrementally modify known successful designs rather than explore radically different and potentially superior ones.


Deliberate Learning – Framing and Reflection

Source: MindShift, Sep 2016

Open AI

Source: The BackChannel, Dec 2015

A non-profit venture called OpenAI, announced today, vows to make its results public and its patents royalty-free, all to ensure that the scary prospect of computers surpassing human intelligence may not be the dystopia that some people fear.

Essentially, OpenAI is a research lab meant to counteract large corporations that may gain too much power by owning super-intelligence systems devoted to profits, as well as governments that may use AI to gain power and even oppress their citizenry.

The organization is trying to develop a human-positive AI. And because it’s a non-profit, it will be freely owned by the world.

Musk: As in an AI extension of yourself, such that each person is essentially symbiotic with AI as opposed to the AI being a large central intelligence that’s kind of an other.

Altman: We think the best way AI can develop is if it’s about individual empowerment and making humans better, and made freely available to everyone, not a single entity that is a million times more powerful than any human. Because we are not a for-profit company, like a Google, we can focus not on trying to enrich our shareholders, but what we believe is the actual best thing for the future of humanity.


Google: An AI-First Company

Source: BackChannel, Jun 2016

recent results indicate that machine learning, powered by “neural nets” that emulate the way a biological brain operates, is the true path towards imbuing computers with the powers of humans, and in some cases, super humans.

As Pedro Domingos, author of the popular ML manifesto The Master Algorithm, writes, “Machine learning is something new under the sun: a technology that builds itself.” Writing such systems involves identifying the right data, choosing the right algorithmic approach, and making sure you build the right conditions for success. And then (this is hard for coders) trusting the systems to do the work.

While machine learning won’t replace humans, it will change humanity.

Machine learning requires a different mindset. People who are master coders often become that way because they thrive on the total control that one can have by programming a system. Machine learning also requires a grasp of certain kinds of math and statistics, which many coders, even gonzo hackers who can zip off tight programs of brobdingnagian length, never bothered to learn.

It also requires a degree of patience. “The machine learning model is not a static piece of code — you’re constantly feeding it data,” says Robson. “We are constantly updating the models and learning, adding more data and tweaking how we’re going to make predictions. It feels like a living, breathing thing. It’s a different kind of engineering.”

“It’s a discipline really of doing experimentation with the different algorithms, or about which sets of training data work really well for your use case,” says Giannandrea, who despite his new role as search czar still considers evangelizing machine learning internally as part of his job. “The computer science part doesn’t go away. But there is more of a focus on mathematics and statistics and less of a focus on writing half a million lines of code.”


Computers with Imagination

Source: MIT Technology Review, May 2016

… maybe the biggest problem for computers is that they don’t have any (imagination).

Vicarious is developing a new way of processing data, inspired by the way information seems to flow through the brain. The company’s leaders say this gives computers something akin to imagination, which they hope will help make the machines a lot smarter.


Vicarious has introduced a new kind of neural-network algorithm designed to take into account more of the features that appear in biology. An important one is the ability to picture what the information it’s learned should look like in different scenarios—a kind of artificial imagination.

The company’s founders believe a fundamentally different design will be essential if machines are to demonstrate more humanlike intelligence. Computers will have to be able to learn from less data, and to recognize stimuli or concepts more easily.

Its mathematical innovations, Phoenix says, will more faithfully mimic the information processing found in the human brain. … George explained that imagination could help computers process language by tying words, or symbols, to low-level physical representations of real-world things.