…
The scientist Stuart Kauffman has a suggestive name for the set of all those first-order combinations: “the adjacent possible.” The phrase captures both the limits and the creative potential of change and innovation. In the case of prebiotic chemistry, the adjacent possible defines all those molecular reactions that were directly achievable in the primordial soup. Sunflowers and mosquitoes and brains exist outside that circle of possibility. The adjacent possible is a kind of shadow future, hovering on the edges of the present state of things, a map of all the ways in which the present can reinvent itself. Yet it is not an infinite space, or a totally open playing field. The number of potential first-order reactions is vast, but it is a finite number, and it excludes most of the forms that now populate the biosphere. What the adjacent possible tells us is that at any moment the world is capable of extraordinary change, but only certain changes can happen.
The strange and beautiful truth about the adjacent possible is that its boundaries grow as you explore those boundaries. Each new combination ushers new combinations into the adjacent possible. Think of it as a house that magically expands with each door you open. You begin in a room with four doors, each leading to a new room that you haven’t visited yet. Those four rooms are the adjacent possible. But once you open one of those doors and stroll into that room, three new doors appear, each leading to a brand-new room that you couldn’t have reached from your original starting point. Keep opening new doors and eventually you’ll have built a palace.
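Read literally, the palace metaphor has a simple combinatorial shape, and it is easy to simulate. The sketch below is not from Johnson’s book; it simply takes the chapter’s numbers at face value (four doors at the start, three brand-new doors behind each one you open, names invented for illustration) and tracks how the frontier grows:

```python
def explore_palace(steps, initial_doors=4, new_doors_per_room=3):
    """Track the size of the 'adjacent possible' as rooms are explored."""
    explored = 1                  # the room you start in
    frontier = initial_doors      # rooms you could enter right now
    history = [(explored, frontier)]
    for _ in range(steps):
        explored += 1             # open one door and step through it
        frontier -= 1             # that room is no longer merely possible
        frontier += new_doors_per_room  # it reveals doors you couldn't see before
        history.append((explored, frontier))
    return history

for step, (explored, frontier) in enumerate(explore_palace(10)):
    print(f"step {step:2d}: explored {explored:2d} rooms, "
          f"adjacent possible = {frontier:2d}")
```

Run it and the adjacent possible never shrinks: every door you open consumes one possibility but reveals three more, so the frontier keeps widening even though each individual step is strictly local.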
Basic fatty acids will naturally self-organize into spheres lined with a dual layer of molecules, very similar to the membranes that define the boundaries of modern cells. Once the fatty acids combine to form those bounded spheres, a new wing of the adjacent possible opens up, because those molecules implicitly create a fundamental
division between the inside and outside of the sphere. This division is the very essence of a cell. Once you have an “inside,” you can put things there: food, organelles, genetic code. Small molecules can pass through the membrane and then combine with other molecules to form larger entities too big to escape back through the boundaries of the proto-cell. When the first fatty acids spontaneously formed those dual-layered membranes, they opened a door into the adjacent possible that would ultimately lead to nucleotide-based genetic code, and the power plants of the chloroplasts and mitochondria—the primary “inhabitants” of all modern cells.
The same pattern appears again and again throughout the evolution of life. Indeed, one way to think about the path of evolution is as a continual exploration of the adjacent possible. When dinosaurs such as the velociraptor evolved a new bone called the semilunate carpal (the name comes from its half-moon shape), it enabled them to swivel their wrists with far more flexibility. In the short term, this gave them more dexterity as predators, but it also opened a door in the adjacent possible that would eventually lead, many millions of years later, to the evolution of wings and flight. When our ancestors evolved opposable thumbs, they opened up a whole new cultural branch of the adjacent possible: the creation and use of finely crafted tools and weapons.
One of the things that I find so inspiring in Kauffman’s notion of the adjacent possible is the continuum it suggests between natural and man-made systems. He introduced the concept in part to illustrate a fascinating secular trend shared by both natural and human history: this relentless pushing back against the barricades of the adjacent possible. “Something has obviously happened in the past 4.8 billion years,” he writes. “The biosphere has expanded, indeed, more or less persistently exploded, into the ever-expanding adjacent possible…. It is more than slightly interesting that this fact is clearly true, that it is rarely remarked upon, and that we have no particular theory for this expansion.” Four billion years ago, if you were a carbon atom, there were a few hundred molecular configurations you could stumble into. Today that same carbon atom, whose atomic properties haven’t changed one single nanogram, can help build a sperm whale or a giant redwood or an H1N1 virus, along with a near-infinite list of other carbon-based life forms that were not part of the adjacent possible of prebiotic earth. Add to that an equally formidable list of human concoctions that rely on carbon—every single object on the planet made of plastic, for instance—and you can see how far the kingdom of the adjacent possible has expanded since those fatty acids self-assembled into the first membrane.
The history of life and human culture, then, can be told as the story of a gradual but relentless probing of the adjacent possible, each new innovation opening up new paths to explore. But some systems are more adept than others at exploring those possibility spaces. The mystery of Darwin’s paradox that we began with ultimately revolves around the question of why a coral reef ecosystem should be so adventurous in its exploration of the adjacent possible—so many different life forms sharing such a small space— while the surrounding waters of the ocean lack that same marvelous diversity. Similarly, the environments of big cities allow far more commercial exploration of the adjacent possible than towns or villages, allowing tradesmen and entrepreneurs to specialize in fields that would be unsustainable in smaller population centers.
The Web has explored the adjacent possible of its medium far faster than any other communications technology in history. In early 1994, the Web was a text-only medium, pages of words connected by hyperlinks. But within a few years, the possibility space began to expand. It became a medium that let you do financial transactions, which turned it into a shopping mall and an auction house and a casino. Shortly afterward, it became a true two-way medium where it was as easy to publish your own writing as it was to read other people’s, which engendered forms that the world had never seen before: user-authored encyclopedias, the blogosphere, social network sites. YouTube made the Web one of the most influential video delivery mechanisms on the planet. And now digital maps are unleashing their own cartographic revolutions.
You can see the fingerprints of the adjacent possible in one of the most remarkable patterns in all of intellectual history, what scholars now call “the multiple”: A brilliant idea occurs to a scientist or inventor somewhere in the world, and he goes public with his remarkable finding, only to discover that three other minds had independently come up with the same idea in the past year. Sunspots were simultaneously discovered in 1611 by four scientists living in four different countries. The first electrical battery was invented separately by Dean Von Kleist and Cuneus of Leyden in 1745 and 1746. Joseph Priestley and Carl Wilhelm Scheele independently isolated oxygen between 1772 and 1774. The law of the conservation of energy was formulated separately four times in the late 1840s. The evolutionary importance of genetic mutation was proposed by S. Korschinsky in 1899 and then by Hugo de Vries in 1901, while the impact of X-rays on mutation rates was independently uncovered by two scholars in 1927. The telephone, telegraph, steam engine, photograph, vacuum tube, radio—just about every essential technological advance of modern life has a multiple lurking somewhere in its origin story.
In the early 1920s, two Columbia University scholars named William Ogburn and Dorothy Thomas decided to track down as many multiples as they could find, eventually publishing their survey in an influential essay with the delightful title “Are Inventions Inevitable?” Ogburn and Thomas found 148 instances of independent innovation, most of them occurring within the same decade. Reading the list now, one is struck not just by the sheer number of cases, but by how indistinguishable the list is from an unfiltered history of big ideas. Multiples have been invoked to support hazy theories about the “zeitgeist,” but they have a much more grounded explanation. Good ideas are not conjured out of thin air; they are built out of a collection of existing parts, the composition of which expands (and, occasionally, contracts) over time. Some of those parts are conceptual: ways of solving problems, or new definitions of what constitutes a problem in the first place. Some of them are, literally, mechanical parts. To go looking for oxygen, Priestley and Scheele needed the conceptual framework that the air was itself something worth studying and that it was made up of distinct gases; neither of these ideas became widely accepted until the second half of the eighteenth century. But they also needed the advanced scales that enabled them to measure the minuscule changes in weight triggered by oxidation, technology that was itself only a few decades old in 1774. When those parts became available, the discovery of oxygen entered the realm of the adjacent possible. Isolating oxygen was, as the saying goes, “in the air,” but only because a specific set of prior discoveries and inventions had made that experiment thinkable.
The adjacent possible is as much about limits as it is about openings. At every moment in the timeline of an expanding biosphere, there are doors that cannot be unlocked yet. In human culture, we like to think of breakthrough ideas as sudden accelerations on the timeline, where a genius jumps ahead fifty years and invents something that normal minds, trapped in the present moment, couldn’t possibly have come up with. But the truth is that technological (and scientific) advances rarely break out of the adjacent possible; the history of cultural progress is, almost without exception, a story of one door leading to another door, exploring the palace one room at a time. But of course, human minds are not bound by the finite laws of molecule formation, and so every now and then an idea does occur to someone that teleports us forward a few rooms, skipping some exploratory steps in the adjacent possible. But those ideas almost always end up being short-term failures, precisely because they have skipped ahead. We have a phrase for those ideas: we call them “ahead of their time.”
Consider the legendary Analytical Engine designed by nineteenth-century British inventor Charles Babbage, who is considered by most technology historians to be the father of modern computing, though he should probably be called the great-grandfather of modern computing, because it took several generations for the world to catch up to his idea. Babbage is actually in the pantheon for two inventions, neither of which he managed to build during his lifetime. The first was his Difference Engine, a fantastically complex fifteen-ton contraption, with over 25,000 mechanical parts, designed to calculate polynomial functions that were essential to creating the trigonometric tables crucial to navigation. Had Babbage actually completed his project, the Difference Engine would have been the world’s most advanced mechanical calculator. When the London Science Museum constructed one from Babbage’s plans to commemorate the bicentennial of his birth, the machine returned accurate results to thirty-one places in a matter of seconds. Both the speed and precision of the device would have exceeded anything else possible in Babbage’s time by several orders of magnitude.
For all its complexity, however, the Difference Engine was well within the adjacent possible of Victorian technology. The second half of the nineteenth century saw a steady stream of improvements to mechanical calculation, many of them building on Babbage’s architecture. The Swedish inventor Per Georg Scheutz constructed a working Difference Engine that debuted at the Exposition Universelle of 1855; within two decades the piano-sized Scheutz design had been reduced to the size of a sewing machine. In 1886, an American inventor named William S. Burroughs founded the American Arithmometer Company to sell mass-produced calculators to businesses around the country. (The fortune generated by those machines would help fund his namesake grandson’s writing career, not to mention his drug habit, almost a century later.) Babbage’s design for the Difference Engine was a work of genius, no doubt, but it did not transcend the adjacent possible of its day.
The same cannot be said of Babbage’s other brilliant idea: the Analytical Engine, the great unfulfilled project of Babbage’s career, which he toiled on for the last thirty years of his life. The machine was so complicated that it never got past the blueprint stage, save a small portion that Babbage built shortly before his death in 1871. The Analytical Engine was—on paper, at least—the world’s first programmable computer. Being programmable meant that the machine was fundamentally open-ended; it wasn’t designed for a specific set of tasks, the way the Difference Engine had been optimized for polynomial equations. The Analytical Engine was, like all modern computers, a shape-shifter, capable of reinventing itself based on the instructions conjured by its programmers. (The brilliant mathematician Ada Lovelace, the only legitimate daughter of Lord Byron, wrote several sets of instructions for Babbage’s still-vaporware Analytical Engine, earning her the title of the world’s first programmer.) Babbage’s design for the engine anticipated the basic structure of all contemporary computers: “programs” were to be inputted via punch cards, which had been invented decades before to control textile looms; instructions and data were captured in a “store,” the equivalent of what we now call random access memory, or RAM; and calculations were executed via a system that Babbage called “the mill,” using industrial-era language to describe what we now call the central processing unit, or CPU.
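Babbage’s punch-card/store/mill division maps neatly onto the fetch-and-execute loop of a modern stored-program machine. As a loose illustration only (the instruction names and card format below are invented for this sketch, not Babbage’s actual card codes), a toy version might look like this:

```python
def run(cards, store_size=8):
    """A toy machine: 'cards' carry the program, the 'store' holds
    numbers (RAM), and the loop below plays the role of the 'mill' (CPU)."""
    store = [0] * store_size            # the "store": addressable memory
    for op, *args in cards:             # the "mill": fetch and execute
        if op == "SET":                 # SET addr value
            store[args[0]] = args[1]
        elif op == "ADD":               # ADD dst src1 src2
            store[args[0]] = store[args[1]] + store[args[2]]
        elif op == "MUL":               # MUL dst src1 src2
            store[args[0]] = store[args[1]] * store[args[2]]
        elif op == "PRINT":             # PRINT addr
            print(store[args[0]])
    return store

# The program is data, the way Lovelace's instructions were:
cards = [
    ("SET", 0, 20), ("SET", 1, 22),
    ("MUL", 2, 0, 1),                   # 20 * 22 -> store[2]
    ("PRINT", 2),                       # prints 440
]
run(cards)
```

The point of the sketch is the open-endedness Johnson describes: the hardware stays fixed while the cards, the program, can be swapped for anything.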
Babbage had most of this system sketched out by 1837, but the first true computer to use this programmable architecture didn’t appear for more than a hundred years. While the Difference Engine engendered an immediate series of refinements and practical applications, the Analytical Engine effectively disappeared from the map. Many of the pioneering insights that Babbage had hit upon in the 1830s had to be independently rediscovered by the visionaries of World War II-era computer science.
Why did the Analytical Engine prove to be such a short-term dead end, given the brilliance of Babbage’s ideas? The fancy way to say it is that his ideas had escaped the bounds of the adjacent possible. But it is perhaps better put in more prosaic terms: Babbage simply didn’t have the right spare parts. Even if Babbage had built a machine to his specs, it is unclear whether it would have worked, because Babbage was effectively sketching out a machine for the electronic age during the middle of the steam-powered mechanical revolution. Unlike all modern computers, Babbage’s machine was to be composed entirely of mechanical gears and switches, staggering in their number and in the intricacy of their design.
Information flowed through the system as a constant ballet of metal objects shifting positions in carefully choreographed movements. It was a maintenance nightmare, but more than that, it was bound to be hopelessly slow. Babbage bragged to Ada Lovelace that he believed the machine would be able to multiply two twenty-digit numbers in three minutes. Even if he was right—Babbage wouldn’t have been the first tech entrepreneur to exaggerate his product’s performance—that kind of processing time would have made executing more complicated programs torturously slow. The first computers of the digital age could perform the same calculation in a matter of seconds. An iPhone completes millions of such calculations in the same amount of time. Programmable computers needed vacuum tubes, or, even better, integrated circuits, where information flows as tiny pulses of electrical activity, instead of clanking, rusting, steam-powered metal gears.
You can see a comparable pattern—on a vastly accelerated timetable—in the story of YouTube. Had Hurley, Chen, and Karim tried to execute the exact same idea for YouTube ten years earlier, in 1995, it would have been a spectacular flop, because a site for sharing video was not within the adjacent possible of the early Web. For starters, the vast majority of Web users were on painfully slow dial-up connections that could sometimes take minutes to download a small image. (The average two-minute-long YouTube clip would have taken as much as an hour to download on the dial-up modems that were standard at the time.) Another key to YouTube’s early success is that its developers were able to base the video serving on Adobe’s Flash platform, which meant that they could focus on the ease of sharing and discussing clips, and not spend millions of dollars developing a whole new video standard from scratch. But Flash itself wasn’t released until late 1996, and didn’t even support video until 2002.
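A back-of-the-envelope calculation shows why the dial-up bottleneck was fatal. The clip bitrate below is an assumption (early Flash-era video ran on the order of a few hundred kilobits per second); the modem speeds are the nominal mid-1990s rates, which real-world throughput rarely reached:

```python
CLIP_SECONDS = 120   # a two-minute clip
CLIP_KBPS = 300      # assumed video bitrate, kilobits per second

clip_kilobits = CLIP_SECONDS * CLIP_KBPS
for modem_kbps in (14.4, 28.8):
    minutes = clip_kilobits / modem_kbps / 60
    print(f"{modem_kbps:>4} kbit/s modem: ~{minutes:.0f} min download")
```

At a nominal 14.4 kbit/s the two-minute clip already takes about forty minutes; with real-world throughput below nominal, an hour is entirely plausible.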
To use our microbiology analogy, having the idea for a Difference Engine in the 1830s was like a bunch of fatty acids trying to form a cell membrane. Babbage’s calculating machine was a leap forward, to be sure, but as advanced as it was, the Difference Engine was still within the bounds of the adjacent possible, which is precisely why so many practical iterations of Babbage’s design emerged in the subsequent decades. But trying to create an Analytical Engine in 1850—or YouTube in 1995—was the equivalent of those fatty acids trying to self-organize into a sea urchin. The idea was right, but the environment wasn’t ready for it yet.
All of us live inside our own private versions of the adjacent possible. In our work lives, in our creative pursuits, in the organizations that employ us, in the communities we inhabit—in all these different environments, we are surrounded by potential new configurations, new ways of breaking out of our standard routines. We are, each of us, surrounded by the conceptual equivalent of those Toyota spare parts, all waiting to be recombined into something magical, something new. The result need not be an epic advance like biological diversity or the invention of programmable computing. Unlocking a new door can lead to a world-changing scientific breakthrough, but it can also lead to a humbler, more local fix.

Think of the famous scene in the film Apollo 13: an explosion has crippled the spacecraft, and carbon dioxide is rising to dangerous levels in the lunar module the astronauts are using as a lifeboat to return home. Mission Control quickly assembles what it calls a “tiger team” of engineers to hack their way through the problem, and creates a rapid-fire inventory of all the available equipment currently on the lunar module. In the movie, Deke Slayton, head of Flight Crew Operations, tosses a jumbled pile of gear on a conference table: suit hoses, canisters, stowage bags, duct tape, and other assorted gadgets. He holds up the carbon scrubbers. “We gotta find a way to make this fit into a hole for this,” he says, and then points to the spare parts on the table, “using nothing but that.” The space gear on the table defines the adjacent possible for the problem of building a working carbon scrubber on a lunar module. The device they eventually concoct, dubbed the “mailbox,” performs beautifully.

The canisters and nozzles are like the ammonia and methane molecules of the early earth, or Babbage’s mechanical gears, or those Toyota parts heating an incubator: they are the building blocks that create—and limit—the space of possibility for a specific problem. In a way, the engineers at Mission Control had it easier than most. Challenging problems don’t usually define their adjacent possible in such a clear, tangible way. Part of coming up with a good idea is discovering what those spare parts are, and ensuring that you’re not just recycling the same old ingredients.

This, then, is where the next six patterns of innovation will take us, because they all involve, in one way or another, tactics for assembling a more eclectic collection of building-block ideas, spare parts that can be reassembled into useful new configurations. The trick to having good ideas is not to sit around in glorious isolation and try to think big thoughts. The trick is to get more parts on the table.
Reference:
Johnson, S. (2010). Where good ideas come from: The natural history of innovation. Penguin UK.
BOTTOM LINE
Adjacent Possible:
Coined by the biologist Stuart Kauffman, it refers to the fact that at any given time – in science and technology, but perhaps also in culture and politics – only certain kinds of next steps are feasible. “The history of cultural progress,” Johnson writes, “is, almost without exception, a story of one door leading to another door, exploring the palace one room at a time.”
If this seems completely obvious, consider, Johnson says, how it explains the otherwise spooky phenomenon of the “multiple” – the way certain inventions or discoveries occur in several places simultaneously, apparently by chance. Sunspots were discovered in 1611 by four different scientists in four different countries; electrical batteries were invented twice, separately, one year apart. (Similar things happened in the earliest days of the steam engine and telephone.) People have tried to explain this using vague terms such as the “zeitgeist”, or of certain ideas just being “in the air”. But there’s a simpler possibility, which is that the innovation in question had simply become part of the adjacent possible. Good ideas, as Johnson puts it, “are built out of a collection of existing parts”, both literally and metaphorically speaking. Take the isolation of oxygen as a component of air, which was another multiple. It couldn’t have happened before the invention of ultra-sensitive weighing scales. But it also couldn’t have happened before the birth of the idea that air is something, rather than nothing, and that it might be made up of gases.
What all this means, in practical terms, is that the best way to encourage (or to have) new ideas isn’t to fetishise the “spark of genius”, to retreat to a mountain cabin in order to “be creative”, or to blabber interminably about “blue-sky”, “out-of-the-box” thinking. Rather, it’s to expand the range of your possible next moves – the perimeter of your potential – by exposing yourself to as much serendipity, as much argument and conversation, as many rival and related ideas as possible; to borrow, to repurpose, to recombine. This is one way of explaining the creativity generated by cities, by Europe’s 17th-century coffee-houses, and by the internet. Good ideas happen in networks; in one rather brain-bending sense, you could even say that “good ideas are networks”. Or as Johnson also puts it: “Chance favours the connected mind.”
Another surprising truth about big ideas: even when they seem to be individual flashes of genius, they don’t happen in a flash – though the people who have them often subsequently claim that they did. Charles Darwin always said that the theory of natural selection occurred to him on 28 September 1838 while he was reading Thomas Malthus’s essay on population; suddenly, the mechanism of evolution seemed blindingly straightforward. (“How incredibly stupid not to think of that,” Darwin’s great supporter Thomas Huxley was supposed to have said on first hearing the news.) Yet Darwin’s own notebooks reveal that the theory was forming clearly in his mind more than a year beforehand: it wasn’t a flash of insight, but what Johnson calls a “slow hunch”. And on the morning after his alleged eureka moment, was Darwin feverishly contemplating the implications of his breakthrough? Nope: he busied himself with some largely unconnected ruminations on the sexual curiosity of primates.
Reference:
https://www.theguardian.com/science/2010/oct/19/steven-johnson-good-ideas