Source: Rational Optimist, Nov 2015
Suppose Thomas Edison had died of an electric shock before thinking up the light bulb. Would history have been radically different? Of course not. No fewer than 23 people deserve the credit for inventing some version of the incandescent bulb before Edison, according to a history of the invention written by Robert Friedel, Paul Israel and Bernard Finn.
The same is true of other inventions. Elisha Gray and Alexander Graham Bell filed for a patent on the telephone on the very same day. By the time Google came along in 1996, there were already scores of search engines.
As Kevin Kelly documents in his book “What Technology Wants,” we know of six different inventors of the thermometer, three of the hypodermic needle, four of vaccination, five of the electric telegraph, four of photography, five of the steamboat, six of the electric railroad. The history of inventions, writes the historian Alfred Kroeber, is “one endless chain of parallel instances.”
Simultaneous discovery and invention mean that both patents and Nobel Prizes are fundamentally unfair. And indeed, it is rare for a Nobel Prize not to leave in its wake a train of bitterly disappointed individuals with very good cause to be bitterly disappointed.
The economist Edwin Mansfield of the University of Pennsylvania studied the development of 48 chemical, pharmaceutical, electronic and machine goods in New England in the 1970s. He found that, on average, it cost 65% as much money and 70% as much time to copy products as to invent them. And this was among specialists with technical expertise. So even with full freedom to copy, firms would still want to break new ground. Commercial companies do basic research because they know it enables them to acquire the tacit knowledge that assists further innovation.
Technological advances are driven by practical men who tinker until they have better machines; abstract scientific rumination is the last thing they do. As Adam Smith, looking around the factories of 18th-century Scotland, reported in “The Wealth of Nations”: “A great part of the machines made use of in manufactures…were originally the inventions of common workmen,” and many improvements had been made “by the ingenuity of the makers of the machines.”
It was the economist Robert Solow who demonstrated in 1957 that innovation in technology was the source of most economic growth—at least in societies that were not expanding their territory or growing their populations. It was his colleagues Richard Nelson and Kenneth Arrow who explained in 1959 and 1962, respectively, that government funding of science was necessary, because it is cheaper to copy others than to do original research.
In 2003, the Organization for Economic Cooperation and Development published a paper on the “sources of economic growth in OECD countries” between 1971 and 1998 and found, to its surprise, that whereas privately funded research and development stimulated economic growth, publicly funded research had no economic impact whatsoever. None. This earthshaking result has never been challenged or debunked. It is so inconvenient to the argument that science needs public funding that it is ignored.
In 2007, the economist Leo Sveikauskas of the U.S. Bureau of Labor Statistics concluded that returns from many forms of publicly financed R&D are near zero and that “many elements of university and government research have very low returns, overwhelmingly contribute to economic growth only indirectly, if at all.”
The perpetual-innovation machine that feeds economic growth and generates prosperity is not the result of deliberate policy at all, except in a negative sense. Governments cannot dictate either discovery or invention; they can only make sure that they don’t hinder it.
Innovation emerges unbidden from the way that human beings freely interact if allowed. Deep scientific insights are the fruits that fall from the tree of technological change.