A complexity theorist explores how science and culture co-evolve.
By David Krakauer
Photo by John MacDougal / AFP / Getty Images
April 30, 2015
In the following five short chapters, David Krakauer, an evolutionary theorist and president-elect of the Santa Fe Institute, a haven of complex-systems research, examines five facets of chain reactions, each typifying how ideas spread through science and culture. Together they tell a story of how the ideas that define humanity arise, when and why they die or are abandoned, the surprising possibilities for their continued evolution, and our responsibility to nurture the thought that might enlighten our future.
Part I: Chain Reactions
Because the actor always moves among and in relation to other acting beings, he is never merely “a doer” but always and at the same time a sufferer. To do and to suffer are like opposite sides of the same coin, and the story that an act starts is composed of its consequent deeds and sufferings. These consequences are boundless, because action, though it may proceed from nowhere, so to speak, acts into a medium where every reaction becomes a chain reaction and where every process is the cause of new processes.
—Hannah Arendt, The Human Condition
On Dec. 2, 1942, just over three years into World War II, President Roosevelt was sent the following enigmatic cable: “The Italian navigator has landed in the new world.” The accomplishments of Christopher Columbus had long since ceased to be newsworthy. The progress of the Italian physicist Enrico Fermi, navigator across the territories of Lilliputian matter—the abode of the microcosm of the atom—was another thing entirely. Fermi’s New World, discovered beneath a Midwestern football field in Chicago, was the province of newly synthesized radioactive elements. And Fermi’s landing marked the first sustained and controlled nuclear chain reaction, a prerequisite for the construction of an atomic bomb.
This physical chain reaction was itself one link in a longer series of scientific and cultural chain reactions initiated by the Hungarian physicist Leó Szilárd. The first came in 1933, when Szilárd proposed the idea of a neutron chain reaction. Another came in 1939, when Szilárd and Einstein sent the now famous “Szilárd-Einstein” letter to Franklin D. Roosevelt informing him of the destructive potential of atomic chain reactions: “This new phenomenon would also lead to the construction of bombs, and it is conceivable—though much less certain—that extremely powerful bombs of a new type may thus be constructed.”
This scientific information in turn generated political and policy chain reactions: Roosevelt created the Advisory Committee on Uranium, which led in yearly increments to the National Defense Research Committee, the Office of Scientific Research and Development, and finally, the Manhattan Project.
Life itself is a chain reaction. Consider a cell that divides into two cells and then four and then eight great-granddaughter cells. Infectious diseases are chain reactions. Consider a contagious virus that infects one host that infects two or more susceptible hosts, in turn infecting further hosts. News is a chain reaction. Consider a report spread from one individual to another, who in turn spreads the message to their friends and then on to the friends of friends.
These numerous connections that fasten together events are like expertly arranged dominoes of matter, life, and culture. As the modernist designer Charles Eames would have it, “Eventually everything connects—people, ideas, objects. The quality of the connections is the key to quality per se.”
Dominoes, atoms, life, infection, and news—all yield domino effects that require a sensitive combination of distances between pieces, physics of contact, and timing. When any one of these ingredients is off-kilter, the propagating cascade is likely to come to a halt. Premature termination is exactly what we might want for a deadly infection, but it is the last thing we want for a promising idea.
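The sensitivity described above has a standard mathematical idealization, the branching process: each falling piece topples a random number of successors, and whether a cascade dies out or runs away depends on whether the mean number of successors sits below or above one. A minimal sketch in Python (the function names, seed, and parameter values here are illustrative assumptions, not from the essay):

```python
import math
import random

def poisson(mean, rng):
    """Draw a Poisson-distributed variate via Knuth's multiplication method."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def cascade_size(r, rng, cap=2000):
    """Total pieces toppled by one cascade with reproduction number r.

    Each active piece topples a Poisson(r) number of successors; the
    cascade is followed until it dies out or exceeds the cap."""
    active = total = 1
    while active and total < cap:
        offspring = sum(poisson(r, rng) for _ in range(active))
        active = offspring
        total += offspring
    return total

rng = random.Random(1)
subcritical = [cascade_size(0.8, rng) for _ in range(500)]
supercritical = [cascade_size(1.2, rng) for _ in range(500)]

# Below the critical point (r < 1) every cascade fizzles out quickly;
# above it (r > 1) a sizeable fraction runs away to the cap.
print(sum(s >= 2000 for s in subcritical))
print(sum(s >= 2000 for s in supercritical))
```

The same threshold separates a contained outbreak from an epidemic, and a neglected idea from a viral one: a small change in the mean number of successors, not in any single link, decides the fate of the whole chain.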
Part II: The domino-like patterns of civilization
The true protagonist of a science-fiction story or novel is an idea and not a person. If it is good science fiction the idea is new, it is stimulating, and, probably most important of all, it sets off a chain reaction of ramification-ideas in the mind of the reader; it so-to-speak unlocks the reader’s mind so that the mind, like the author’s, begins to create.
—Philip K. Dick, Paycheck and Other Classic Stories
In the 1860s, the Augustinian friar and physics teacher Gregor Mendel published the results of a series of experiments on pea plants, “Experiments on Plant Hybridization.” These results appeared in the journal Proceedings of the Natural History Society of Brünn. Brünn is the German name for the Czech city Brno, the capital of Moravia.
Mendel’s paper is customarily treated as the founding article in the field of genetics and introduces Mendel’s laws of segregation and independent assortment. Through these laws, Mendel was able to account for the appearance and disappearance of traits across generations in terms of recessive and dominant interactions among “factors” (genes had yet to be named).
Mendel was evidently aware that few influential scientists of his time—or any subsequent time for that matter—were likely to read the Proceedings of the Natural History Society of Brünn. In other words, the Proceedings of the Natural History Society of Brünn was not a well-placed cultural domino when it came to initiating an epistemic chain reaction. Mendel strategically mailed reprints of his article to the most famous and influential scholars he could think of. Silence ensued.
However, one botanist, Hermann Hoffmann, did quote Mendel’s results in a monograph on plant hybrids. Charles Darwin owned and read Hoffmann’s book. We know this because in Darwin’s copy—now housed in the library at the University of Cambridge—there are numerous annotations and notes in Darwin’s own hand. Unfortunately, the only influential idea in all of Hoffmann’s book, the inheritance laws of Mendel, was passed over—a domino falls without consequence like George Berkeley’s tree in an uninhabited park. The sagacious Darwin, whose own ideas on inheritance were both implausible and inconsistent, failed to observe the Mendelian mechanisms that could have silenced so many of his later critics.
Mendel is in the distinguished company of scholars whose ideas the biologist and philosopher Gunther Stent describes as “premature.” Other premature ideas that ultimately became hugely influential include: Ignaz Semmelweis’s ideas on the value of sterilization in preventing infection, John Snow’s theory of water-borne cholera, Humphry Davy’s observations on the analgesic value of nitrous oxide, and Alfred Wegener’s theory of continental drift.
Stent suggests that culture actively seeks to dampen cultural chain reactions the way that control rods in nuclear reactors absorb errant neutrons. If prematurity is like increasing the spacing between dominoes, reducing the odds of a cascade, there are other times when the dominoes are completely removed. These cases do not result in prematurity but in total eclipse and near-extinction.
Part III: Ancient calculators and the domino monolith
How dare you and the rest of your barbarians set fire to my library? Play conqueror all you want, Mighty Caesar! Rape, murder, pillage thousands, even millions of human beings! But neither you nor any other barbarian has the right to destroy one human thought!
—Cleopatra, 1963. Screenplay by Joseph Mankiewicz. Somewhat loosely based on Antony and Cleopatra by William Shakespeare, 1623.
Arthur C. Clarke, in speculative fiction, and later Stanley Kubrick, in a more fluid medium, placed a post-singularity monolith (or was it a giant domino?) in front of a troop of blood-thirsty proto-hominids in 2001: A Space Odyssey. This was arranged so that the distant future might collide with the remote past and thereby move civilization forward toward space travel.
The Antikythera mechanism encodes the astronomical insights of Aristotle and the Babylonians. It would be the most implausible “monolith” of an anachronistic novelist or filmmaker if it were not for the fact that it is real. In her book on the mechanism, Decoding the Heavens, Jo Marchant writes, “Rescued from an ancient shipwreck in 1901, it is one of the most stunning artifacts we have from antiquity and, according to everything we know about the technology of the time, it shouldn’t exist. Nothing close to its sophistication appears again for well over a millennium.”
The Antikythera mechanism is estimated to have been constructed around 200 B.C. To put this in chronological perspective: according to Greek and Roman writings, for several centuries following this invention, Druids in Britain were burning, drowning, and hanging innocents in order to derive predictions of the future from the systematic convulsions of limbs and the gushing of drawn blood.
The Antikythera mechanism reflects the work of such contemporary thinkers as Hipparchus of Nicaea. Ptolemy would not be born for another two centuries. Thereafter, the dominoes of astronomical knowledge and mechanical inference stopped falling and did not resume for another 1,000 years.
Part IV: The paradox of the Paleolithic hacker, or why are cultural dominoes possible in the first place?
Whereas the beautiful is limited, the sublime is limitless, so that the mind in the presence of the sublime, attempting to imagine what it cannot, has pain in the failure but pleasure in contemplating the immensity of the attempt.
—Immanuel Kant, Critique of Pure Reason
Anatomically modern humans emerge in the fossil record around 200,000 years ago. The assumption is that any observable differences between humans today and humans in the Upper Paleolithic have little to do with biology and almost everything to do with culture. What makes this observation startling are the following facts about our cultural history.
Spoken language is estimated to be around 50,000 to 100,000 years old. This is based on the analysis of global etymologies of widely shared words such as “finger,” “water,” “two,” and “who.”
The very oldest cave paintings that have been discovered are dated to around 40,000 years old, including the Maros site in Sulawesi, Arnhem Land in Australia, and the Cave of El Castillo in Spain. The Chauvet caves in France are around 30,000 years old and Lascaux, around 17,000 years old.
The oldest written languages have been found in Mesopotamia, Egypt, and in the Indus script of the Bronze Age Indus Valley civilization in ancient India. All of these are dated to around 3200 B.C. And the oldest written numbers were invented around 3100 B.C. in ancient Sumer.
The cultural traits that are reckoned to set the human species apart from our closest primate relatives are only a few thousand years old. By contrast, the great apes (humans, chimps, gorillas, and orangutans) are millions of years old, the primates are tens of millions of years old, and life itself, billions of years old.
This raises a very curious question. How can hardware that is by and large millions of years old (great ape brains) and at best a couple of hundred thousand years old (anatomically modern humans) support software (modern human culture) that is only a few thousand years old? I call this the “Paradox of the Paleolithic Hacker.”
Every child knows that the computational hardware of 1977 is completely incapable of supporting the software of 2015. Come to think of it, the hardware of 2010 cannot support the software of 2015. Hardware and software separated by only a few years become incompatible.
Compare this with the hardware of human cognition that is over 100,000 years older than the software of modern culture. Our late Stone Age brain seems completely at home with Antikythera mechanisms, general relativity, quantum mechanics, information theory, structuralist anthropology, flying jets, shooting hoops, and playing the theremin. How is it that the dominoes of technology—hardware and software—need to coevolve, whereas the dominoes of cognition—brains and culture—have remained compatible despite the brain being a relic from the Upper Paleolithic?
The way in which modern culture “runs” on ancient brains derives from the “plasticity” of neural hardware combined with the observation that the brain has by and large become embedded in culture itself. Our brains have ceased to be exclusively “embodied” and have become largely “exbodied.” The evolutionary dominoes have been falling away from the lineage of organism and into the collective environment. The argument is that the brain alone is incapable of supporting the sophisticated culture that we are now dependent upon.
Machine intelligence is the latest and largest point on a trend line that started with written numbers and words, progressed through the abacus, libraries, and complex numbers, and emerged as Turing machines. The day the first mathematician solved the problem of long division on a piece of paper with a pencil was a day the brain gave up “attempting to imagine what it cannot.”
Part V: Exbodiment and the symbiotic nature of intelligence
… a domino line of laughter, but with an edge to it, a longing, an awe, and many of the watchers realized with a shiver that no matter what they said, they really wanted to witness a great fall, see someone arc downward all that distance, to disappear from the sight line, flail, smash to the ground, and give the Wednesday an electricity, a meaning, that all they needed to become a family was one millisecond of slippage.
—Colum McCann, Let the Great World Spin
Embodied cognition has emerged recently as a hitherto neglected focus of brain science. The field emphasizes the physicality of thought, including the way in which many real-world problems are “off-loaded” to our bodies in order to be creatively solved via the constraints of limb motion (kinematics). Tennis players do not need to “program” their arms with information about how to bend limbs to serve—the articulation of the joints does this for them.
In parallel, philosophers, anthropologists, and archeologists have investigated the computational affordances of material culture—the way that we use tools and instruments to help solve hard problems. The work of cognitive scientists William Chase and Herbert Simon on perception in chess illustrates how a simple tool for storing the positions of chess pieces during a game—the board—overcomes the severe limitations of working memory (as is exemplified by the difficulty of playing chess with a blindfold).
In pursuing questions about the evolution of intelligence, what is becoming increasingly clear is that evolution is driven largely by advantages gained from overcoming the constraints of restrictive computational architectures: brains, bodies, and crowds. Evolution is never “satisfied” by its hardware or its environmental “software” because there is always surplus information to be processed in the world. And this surplus information can be used to improve the power of prediction and control over the physical world—for good or for bad. If an abacus extends the range of our arithmetic we manipulate it. If a telescope extends the range of our vision we look through it. And if a computer should extend the compass of our logic we shall reason by it. And in time the contributions of human and artifact become nigh on indistinguishable.
The great Argentinian writer Jorge Luis Borges captured this notion when he wrote in his short story collection Dreamtigers that, “The machinery of the world is far too complex for the simplicity of men.”
The fear of an Artificial Intelligence (AI) is the fear that we might abdicate thought, and potentially far worse, forego our free will for expedient information-processing environments—and that a distributed system, a cloud consciousness, will emerge to make all of our decisions for us, including the proper time to die.
My own view is that the dominoes of our cognitive culture started falling many thousands of years ago. Language and libraries were helping to make “decisions” for us long before PCs and smartphones. Scholasticism, with its reliance on past authority and anti-empiricism, was in part a transitional AI based on the reputational technology of printed books. This AI eventually yielded to an observational AI founded on independently verifiable measurements acquired by microscopes and telescopes.
One element of long-term terrestrial prosperity consists in coming to a deeper understanding of the ongoing symbiotic relationship of brain to exbodied culture. As long as we think about human minds as separable from the world—both natural and technological—we conform to a partial comprehension of the mutualistic network within which we have evolved. And we are more likely to be guided by the pressures of pure consumerism—pressures that tend to extend intelligence into exbodied “products” rather than prostheses—a culture of “Apps.”
I would like to advocate for a new complexity metaphysics that derives its greatest “pleasure in contemplating the immensity of the attempt” to grasp the sublime through an ever-growing community of collective intelligence. A community that needs to be nurtured and protected in order to keep the cognitive dominoes falling and ideas advancing into the future. William James captured this sentiment when he wrote “The great use of life is to spend it for something that will outlast it.”
David Krakauer is the director of the Wisconsin Institute for Discovery, Professor of Genetics, and co-director of the Center for Complexity and Collective Computation at the University of Wisconsin–Madison, and the president-elect of the Santa Fe Institute.