Source: Matt Clancy/Substack, Nov 2021
Strange Dynamics of Combinatorial Innovation
In Weitzman (1998), innovation is a process where two pre-existing ideas or technologies are combined and, if you pour in sufficient R&D resources and get lucky, a new idea or technology is the result. Weitzman’s own example is Edison’s hunt for a suitable material to serve as the filament in the light bulb. Edison combined thousands of different materials with the rest of his lightbulb apparatus before hitting upon a combination that worked. But the lightbulb isn’t special: essentially any idea or technology can also be understood as a novel configuration of pre-existing parts.
An important point is that once you successfully combine two components, the resulting new idea becomes a component you can combine with others. To stretch Weitzman’s lightbulb example, once the lightbulb had been invented, new inventions that use lightbulbs as a technological component could be invented: things like desk lamps, spotlights, headlights, and so on.
That turns out to have a startling implication:
combinatorial processes grow slowly until they explode.
For Weitzman, innovation is a purposeful pairing of two components, but for Koppl, Devereaux, Herriot, and Kauffman, this is modeled as a random evolutionary process, where there is some probability any pair of components results in a new component, a lower probability that triple-combinations result in a new component, a still lower probability that quadruple-combinations result in a new component, and so on.
They show this simple process generates the same slow-then-fast growth of technology.
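The slow-then-fast dynamic is easy to see in a minimal sketch (my own illustration, not the authors' exact model): suppose that each period, every pair of existing components yields one new component with some small probability, and track the expected component count.

```python
# Illustrative sketch (not Koppl et al.'s exact model): each period,
# every pair of existing components yields one new component with
# probability p, counted in expectation.
def expected_components(n0=50, p=0.001, periods=38):
    n = float(n0)
    history = [n]
    for _ in range(periods):
        n += p * n * (n - 1) / 2   # expected new pairwise combinations
        history.append(n)
    return history

history = expected_components()
# Growth is glacial at first (the first increment is about 1.2 new
# components), then explosive: late increments are more than 10x larger,
# because every success enlarges the set of pairs to draw from.
```

The parameter values here are arbitrary; the qualitative shape (a long flat stretch followed by takeoff) is what survives under any small p.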
If random tinkering is allowed to happen, with or without a profit motive, then you can get a phase-change in the technological trajectory of a society once the set of combinatorial possibilities grows sufficiently large.
The reason technological progress does not accelerate in all times and places is that, in addition to ideas, Weitzman assumes innovation requires R&D effort. In the beginning, we will usually have enough resources to fully fund the investigation of all possible new ideas.
So long as that’s true, the number of ideas is the main constraint on the rate of technological progress and we’ll see accelerating technological progress. But in the long run, the number of possible ideas explodes and growth becomes constrained by the resources we have available to devote to R&D, not by the supply of possible ideas.
Why might technological progress be exponential?
Jones (2021) maps out an alternative plausible scenario. So far we have assumed some ideas are “useful” and others are not, and progress is basically about increasing the number of useful ideas. But this is a bit dissatisfying. Ideas vary in how useful they are, not just in whether they’re useful at all. For example, as a source of light, the candle was certainly a useful invention. So was the light bulb. But it seems weird to say that the main value of a light bulb was that we now had two useful sources of light. Instead, the main value is that light bulbs are a better source of light than candles.
Instead, let’s think of an economy that is composed of lots and lots of distinct activities. Technological progress is about improving the productivity in each of these activities: getting better at supplying light, making food, providing childcare, making movies, etc. As before, we’re going to assume technological progress is combinatorial. But we’re now going to make a different assumption about the utility of different combinations. Instead of just assuming some proportion of ideas are useful and some are not, we’re going to assume all ideas vary in their productivity. Specifically, as an illustrative example, let’s assume the productivity of combinations is distributed according to a normal distribution centered at zero.
This assumption has a few attractive properties. First off, the normal distribution is a pretty common distribution. It’s what you get, for example, if you have a process where we take the average of lots of different random things, each of which might follow some other distribution. If technology is about combining lots of things and harnessing their interactions, then some kind of average over lots of random variables seems like not a bad assumption.
Second, this model naturally builds in the assumption that innovation gets progressively harder: there are lots of new combinations with productivity a bit better than zero (“low hanging fruit”), but as these get discovered, combinations with better productivity become progressively rarer. That seems sensible too.
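The “low hanging fruit” property is easy to check empirically. Here's a quick simulation (my own illustration) of how rare draws above a given productivity bar become under the standard-normal assumption above:

```python
import random

random.seed(42)

# Sample 100,000 combination productivities from a standard normal
# (the illustrative assumption in the text).
draws = [random.gauss(0.0, 1.0) for _ in range(100_000)]

def frac_above(bar):
    """Share of sampled combinations more productive than the current bar."""
    return sum(x > bar for x in draws) / len(draws)

# Roughly half the draws beat 0, but the share collapses as the bar
# rises: ~16% beat 1, ~2% beat 2, ~0.1% beat 3.
print(frac_above(0), frac_above(1), frac_above(2), frac_above(3))
```

Every improvement in the current best raises the bar, so each further improvement has to come from an ever-thinner slice of the distribution.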
The point of Jones’ paper is to show that these processes balance each other out.
Under a range of common probability distributions (such as the standard normal, but also including others), finding a new technology that’s more productive than the current best gets explosively harder over time.
However, the range of options we have also grows explosively, and the two offset each other such that we end up with constant exponential technological progress. Which is a pretty close approximation to what we’ve observed over the last 100 years!
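Both forces can be made concrete. Below is a sketch under simplifying assumptions of my own choosing (not Jones's actual derivation): the expected number of standard-normal draws needed to beat the current best explodes, while each newly discovered component doubles the number of combinations available to try.

```python
import math

def draws_to_beat(best):
    """Expected number of standard-normal draws needed to find one
    exceeding `best`: 1 / P(Z > best)."""
    tail = 0.5 * math.erfc(best / math.sqrt(2))
    return 1.0 / tail

# Beating the record gets explosively harder: a handful of draws
# suffice to beat 1, but beating 6 takes on the order of a billion.
for best in range(1, 7):
    print(best, round(draws_to_beat(best)))

# Meanwhile, each new component doubles the space to search:
# n components admit 2**n - 1 nonempty combinations.
```

Jones's result is that, for thin-tailed distributions like this one, the two explosions offset and the frontier advances at a roughly constant exponential rate.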
Jones has a different notion of R&D in mind.
In Jones’ model, we still need to spend real R&D resources to build new technologies. But it’s sort of a two-stage process, where we costlessly sort through the vast space of possibilities and then proceed to actually conduct R&D only on promising ideas.
As a mathematician, Poincaré was aware that the space of possible combinations is astronomical. Mathematical creation, he argued, is about choosing the right combination of mathematical ideas from the set of possible combinations.
Long-run Growth and AI
But what if there are limits to this process? Human minds may have some unknown process of organizing combinations, to efficiently sort through them. But there are quite a lot of possible combinations. What if, eventually, it becomes impossible for human minds to efficiently sort through these possibilities? In that case, it would seem that technological progress must slow, possibly a lot.
This is essentially the kind of model developed in Agrawal, McHale, and Oettl (2018). In their model, an individual researcher (or team of researchers) has access to a fraction of all human knowledge, whether because it’s in their head or they can quickly locate the knowledge (for example with a search engine). As a general principle, they assume the more knowledge you have access to, the better it is for innovation.
They assume research teams combine the ideas they have access to in order to develop new technologies. Initially, the more possible combinations there are, the more valuable the discoveries a research team can make.
But unlike in Jones (2021), Agrawal and coauthors build a model where the ability to comb through the set of possible ideas weakens as the set gets progressively larger.
Eventually, we end up in a position like Weitzman’s original model, where the set of possibilities is larger than can ever be explored, and so adding more potential combinations no longer matters. Except, in this case, this occurs due to a shortage of cognitive resources, rather than a shortage of economic resources that are necessary for conducting R&D.
As we suspected, they show that as we lose the ability to sort through the space of possible ideas, technological progress slows (though never stops in their particular model).
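One way to see the force at work in their model is a back-of-the-envelope calculation (my own, with an assumed access fraction): if a team can only ever access half of all ideas, the share of combinations it can reach collapses as the idea stock grows.

```python
f = 0.5   # assumed fraction of all ideas a research team can access

for n in (10, 40, 160):
    accessible = 2 ** int(f * n)   # combinations among accessible ideas
    total = 2 ** n                 # combinations among all ideas
    # The reachable share shrinks like 2 ** (-(1 - f) * n):
    # already below one in a trillion once n reaches 160.
    print(n, accessible / total)
```

The absolute number of accessible combinations still grows, which is why progress slows rather than stops in their model; it is the reachable *share* of the frontier that vanishes.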
But if the problem here is we eventually run out of cognitive resources, then what if we can augment our resources with artificial intelligence?
Agrawal and coauthors are skeptical this problem can be overcome with artificial intelligence, at least in the long run.
They argue convincingly that no matter how good an AI might be, there is always a number of components beyond which it becomes implausible for even a super intelligence to search through all possible combinations efficiently.
If that’s true, then in the long run any acceleration in technological progress driven by the combinatorial explosion must eventually stop when our cognitive resources lose the ability to keep up with it.
To illustrate the probable difficulty of searching through all possible combinations of ideas, let’s think about a big number: 10^80. That’s about how many atoms there are in the observable universe. That would seem like a difficult number of atoms for an artificial intelligence to efficiently search over. Yet if we have just 266 ideas, then the number of possible combinations (all the subsets we could form, 2^266) is about equal to 10^80, i.e., the number of atoms in the universe!
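The arithmetic checks out:

```python
import math

atoms_in_universe = 10 ** 80
combinations = 2 ** 266 - 1   # nonempty subsets of 266 ideas

print(math.log10(combinations))   # ≈ 80.07
print(combinations > atoms_in_universe)   # True: 266 ideas suffice
```

With 265 ideas the combination count still falls just short of 10^80; adding the 266th idea doubles it past the atom count, which is the doubling-per-idea dynamic at work.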
Dynamics of Technological Progress
Two kinds of process may work to stymie continued explosive growth:
Innovation might get harder. If the productivities of future inventions are like draws from some thin-tailed distribution (possibly a normal distribution), then finding better ways of doing things gets so hard so fast that this difficulty offsets the explosive force of combinatorial growth.
Exploring possible combinations might take resources. These resources might be cognitive or actual economic resources. But either way, while the space of ideas can grow combinatorially, the set of resources available for exploration probably can’t (at least, it hasn’t for a long while).
To start, it seems to me that it must take resources to explore the space of possible ideas, whether those resources are cognitive or economic.
It may be that we are still in an era where human minds can organize and tag ideas in the combinatorial space so that we can search it efficiently (or maybe science provides a map of the combinatorial terrain).
The ultimate rate of technological progress then depends on how rapidly we can increase our resources for exploring the space of ideas. (If we need more cognitive resources, that means resources in the form of artificial intelligence and better computers.) And our ability to increase our resources with new ideas is a question that falls squarely in the domain of Jones (2021): how productive are new ideas?
Suppose the productivity of new ideas follows a fat-tailed distribution.
That’s a world where extremely productive technologies – the kind that would be weird outliers in a thin-tailed world – are not that uncommon to discover.
Well, in that world, Jones (2021) shows that the growth rate of the economy will be faster than exponential, at least so long as the economy can efficiently search all possible combinations of ideas.
And faster than exponential growth in resources is precisely what we would need to keep exploring the growing combinatorial space.
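A rough quantile-based comparison makes the thin-vs-fat-tail contrast vivid (my own illustration under simplifying assumptions, not Jones's derivation — the full faster-than-exponential result needs his model). Suppose the search space doubles each period, and the best available technology is approximately the top quantile of N draws:

```python
import math

def best_normal(N):
    """Approximate max of N standard-normal draws: sqrt(2 ln N)."""
    return math.sqrt(2 * math.log(N))

def best_pareto(N, alpha=1.5):
    """Approximate max of N Pareto(alpha) draws: N ** (1 / alpha)."""
    return N ** (1 / alpha)

for t in (10, 20, 40, 80):
    N = 2 ** t   # search space doubles each period
    print(t, round(best_normal(N), 1), round(best_pareto(N), 1))

# Thin tail: the frontier grows only like sqrt(t) even as N explodes.
# Fat tail: the frontier grows like 2 ** (t / alpha), compounding
# with the growth of the search space itself.
```

Under the thin tail, the frontier barely triples while the search space multiplies by 2^70; under the fat tail, the frontier compounds along with the space, which is the kind of dynamic that could keep resource growth ahead of the combinatorial explosion.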