Thinking Destruction: Creativity, Rational Choice, Emergence, and Destruction Theory


Liu, Alan. "Thinking Destruction: Creativity, Rational Choice, Emergence, and Destruction Theory." Occasion: Interdisciplinary Studies in the Humanities v. 1 (October 15, 2009).


1. Toward a Theory of Destruction

What might be a theory of destruction adequate to our times? Can we build such a theory (as antithetical as it may sound) out of better studied theories of creativity? And if so, do we borrow from rationalistic (including rational choice) theories or instead adopt the subrational view of one of the most vigorous competitors of rationalistic approaches today: emergence theory? These are the questions I pose in this essay, which considers in a preliminary way how a contemporary theory of destruction might be built or "created" from the bottom up.

We live in the age of "innovation" and "terror"—the blindly coupled idées fixes that may well be how our time will be signed in history. In the case of the former, the theory of innovation (and, more generally, of creativity) has become a boom industry. This is especially the case because our thriving philosophical, psychological, sociological, historical, and aesthetic literature on creativity has recently been abetted by industry. To cite just some of the recent business titles on the topic: we are the people of "Creativity Under the Gun," Continuous Innovation, Radical Innovation, The Innovative Enterprise, Organizational Innovations, Group Creativity, The Rise of the Creative Class, and so on.[1] If the slogan of the New Economy is "creative destruction"—that phrase for entrepreneur-driven innovation originally proposed by Joseph Schumpeter—clearly the theoretical emphasis is on the creative side of the furor.[2]

Business thinking, of course, is not always good thinking. But in concert with other approaches, it has strongly advanced our now dominant theory of creativity. This theory, or rather class of theories, is oriented toward what may be called applied creativity in a way that may be analogized to eighteenth- and nineteenth-century natural theology: the theory of an absent, watchmaker Creator knowable only through the intricate workings of his applied creation.[3] Like natural theology, today's applied theories of creativity still nominally worship a prime mover—in this case the genius of the individual human creator. But their heart—or, rather, mind—isn't really in it. In such theories, genius is a logically empty subject position because its mix of chance, intuition, and will is not rigorously knowable. Exactly how the "neurons of inspiration" fire, Jon Elster (one important recent creativity theorist) has said, "I don't know."[4] Applied theories of creativity instead concentrate on everything that follows after genius: that is, on the expression or development of creativity through causal, determinate, intentional, or conscious acts of choice. Like natural theology turning from the Creator to scrutinize the minute workings of his worldly watch, in other words, applied theories of creativity are fundamentally rationalistic. In the business phrasing, innovation is to be managed.

The usual logic of managed innovation is threefold, though often only the first two steps are expressed. For a rigorous rationalist formulation, we can look to the philosophical, social-science, and economic theory of rational choice, where Elster's work on creativity (like his other work) has been particularly influential. Elster's "I don't know" about "inspiration" is matched symmetrically by the careful knowing with which he thinks about creativity (for example, in his Ulysses Unbound).[5] Improvising upon Elster's definition of creativity as "maximizing aesthetic value under constraints," we can outline the first two steps in the logic of applied or managed innovation as follows.

A. To innovate, begin by choosing some particular constraint in which to create. Examples of creative constraints in literature, for example, include the traditional sonnet form and the improvised literary forms invented by the Oulipo group. Examples in business include the brainstorming rooms in which the IDEO creativity consultancy firm sequesters business people or even the innovation-training scavenger hunts it organizes.[6] The important thought here is that constrained domains—whether structures or processes—enable, rather than block, creativity. "Thinking outside the box," as the cliché goes, turns out to be exactly wrong. Thinking creatively inside the box is the goal.

B. Once the relevant constraint is identified, then redefine the creative subject who has to work in that structure as essentially a manager—the very persona of "rational choice." It helps if one has a genius on the payroll, of course. But the beauty of innovation management is that in a pinch the proverbial roomful of monkeys typing Shakespeare will do. This is because the mandate of creativity-in-constraint is to maximize the available combinatorial possibilities, which can be accomplished either through brilliance or teamwork. Thus, for example, a manager might say to a design team of engineers, artists, and marketeers: here is a highly constrained technological specification together with a palette of colors, forms, and functions originally descended from the era of Bauhaus industrial design; make an iPod. Or again, to bring the argument to art, consider the formula for a recent, experimental form of poetry called “flarf”: write a poem by processing unlikely combinations of terms through the automatic algorithms of Google (e.g., Googling "NASA Viagra").[7]

C. The third step in the logic of managed innovation is then a powerful value-argument that is rarely explicit and (cutting the tether to Elster here) hardly deserves the name of rational. It's an ideology. Specifically: transfer the value traditionally vested in the subject of creativity (the genius or inventor) to the managed disciplinary or corporate structures that express or develop invention—at which point it is the whole enterprise that can be said ideologically to be creative. Creativity becomes an institutionalized, and capitalized, value. Such is the formula for what I have elsewhere called today's "aesthetic ideology" of innovation, meaning the wholesale co-optation of the value of artistic creativity by knowledge-work businesses. Or, at least, as Robert Paul Weiner writes, such is "The Ideology of Creativity in the United States."[8]

These, we may conclude, are the ABC’s of the dominant theory of creativity today. Ezra Pound’s “make it new” has become an ABC not of writing and reading but of producing and consuming.

By contrast, today's dark side of the force—the destructive side of Schumpeterian creative destruction—is theoretically incoherent. Schumpeter's own World War II moment, the Cold War nuclear moment, the Vietnam War or May 1968 protest moment, the 9/11 terrorist moment, and so on: one might fantasize how new ideas about destruction arising from each of these traumas might have combined (as well as critiqued earlier ideas, e.g., revolution) to build a capacious, internally articulated structure of thought about the nature of destruction and how to manage it.[9] But the modern history of destruction—to borrow Claude Lévi-Strauss's phrase—has not been "good to think," and the pain of each successive trauma has repressed the gain in ideas from its predecessor except in the form of vicious returns of the repressed.[10] An example of the way earlier ideas about destruction survive only as a return of the repressed was the topos of the Vietnam War during the 2004 U.S. presidential campaign, whose profoundly under-thought contest over who did what during the earlier war and why that was relevant to the "war on terror" was a case of unburied ancestors if ever there was one.

The consequence is that destruction theory now aligns with creativity theory only at its conceptual outset: a dark, inverted notion of creative origins. Just as creativity theory only nominally worships the creator's genius, so destruction theory only in name damns the terrorist's genius. Both theories, in other words, treat genius (creative or evil) as a logically empty subject position that we do not understand with analytical rigor (aka "intelligence"). But beyond this initial convergence with the agnosticism of creativity theory, destruction theory today has nothing to compare with the applied logic that allows creativity theory to compensate for its originary misunderstanding of genius by shifting the value of that genius into practical, manageable processes bonded to ideology. Instead, when we think about significant acts of destruction today, we race down separate tracks of practical and ideological thought that are not just oblivious but sometimes repugnant to each other. Something of this disconnect appears even in purely philosophical arguments (e.g., David Novitz's discussion of creativity, which sets up an analytic and then adds the extrinsic distinction that destructive acts, no matter how inventive, are a priori not creative because creativity is "intended to be" and is "actually or potentially" of "real value to some people").[11] But it is in the political arena, of course, that the disconnect between the discourses of practice and ideology is most jarring. If the constrained domain of the Abu Ghraib prison is too painful a reminder of the creative license of destructive practice, perhaps we can at least keep in mind the clash between ideological and practical thought witnessed in the widespread moral aversion to the U.S. DARPA Total Information Awareness program under John Poindexter, which in 2003 proposed its game-theoretic FutureMAP system allowing "investors" to predict terror-strikes through a terrorism futures market.[12] However morally monstrous one may find the gaming of terrorism, none of the reasons expressed for scuttling the program that I am aware of (e.g., that it would reward real terrorists, whose bets the FBI would surely have scrutinized, for specific attacks, which FutureMAP was designed to track only on an aggregate, quarterly basis) bears up to analysis.[13] Practical and ideological thought in this case seem to be at cross-purposes.

My argument is that we need a capable theory of destruction. We need to think differentially but also coherently about the practices of contemporary destruction—first about its logics, structures, and processes; and then, on that basis, about its philosophy, society, economics, politics, religion, and art.

Moreover, given that creativity and "destructivity" (as I called it in my Laws of Cool book) are two sides of the coin of our age, but that the theory of creativity is more advanced, we would do well to investigate what contemporary creativity theory already reveals, sotto voce, about destruction.[14]


2. Emergency Creativity

Some of our most interesting, recent theories of creativity, indeed, already nurture within themselves a nascent theory of destruction. Consider, for example, the "animat"—we can nickname it "rat thing"—that lives in Steve Potter's neuroengineering lab at the Georgia Institute of Technology. Rat thing consists of "a few thousand living neurons from rat cortex . . . placed on a special glass petri dish instrumented with an array of 60 micro-electrodes" (figures 1, 2).[15]

Figure 1: Rat-neuron “animat.” (Photo courtesy of SymbioticA Research Group and Steve Potter, Georgia Tech Laboratory for Neuroengineering)

Figure 2: Rat neurons with micro-electrode array. (Photo courtesy of Steve Potter, Georgia Tech Laboratory for Neuroengineering)

Figure 3: Hybrot. (Photo courtesy of Steve Potter, Georgia Tech Laboratory for Neuroengineering)

Figure 4: MEART’s robotic arm. (Photo courtesy of SymbioticA Research Group)


The neurons send electrical signals to each other at different points on the array, where they are picked up and amplified, routed to a computer, and then wirelessly transmitted to a small, cup-sized robot (figure 3). Rat thing thinks; robot moves. But communication is two-way, too. Sensors in the robot detect physical location by reference to infrared emitters in the room and feed a telemetry of electrical pulses back to rat thing's micro-electrodes, which in turn fire the neurons. Rat thing thinks; robot moves; rat thing thinks about robot moving. And so it goes. "Basically, we've taken these cells in a dish and given them back a body," Potter says.[16] In this incarnation, rat thing's proper name is Hybrot (for hybrid robot).[17]

But Hybrot is not rat thing's only body. Rat thing has extruded an arm on the other side of the world (figure 4). Working with Potter, Guy Ben-Ary and Phil Gamblen of the SymbioticA Research Group at the University of Western Australia have built for rat thing a robotic drawing arm. Connected to this limb through the same kind of feedback loop (now conveyed remotely through the Internet), rat thing produces a kind of nervous Etch-a-Sketch doodling. In this incarnation, rat thing has a more artistic name: MEART—The Semi Living Artist.[18] In July 2003, MEART held its first exhibition at the Manhattan ArtBots robot talent show, where it drew "portraits" of visitors by responding to digital photos (figure 5).[19] Some people thought MEART was gross. "'Eeeewww,' said Shelley Fienstein, a graphic artist who attended the show. 'A rat is drawing this stuff? A dead rat? Lots of dead rats? Oh, gross.'"[20] But Ben-Ary thinks of beauty. "They are scribbles," he says about the drawings, "but aesthetically they're beautiful."[21] Potter is even more ambitious. He is after creativity. "I hope this merging of art and science will get the artists thinking about our science," he says, "and the scientists thinking about what is art and what is the minimum needed to make a creative entity."[22]

Figure 5: “Portrait” by rat-neuron animat (in its form as MEART). (Photo courtesy of SymbioticA Research Group)


Is rat thing creative, then? And can it express that creativity in art?

There is no answer in the research literature so far. Nor, perhaps, would it be fair to press Potter for an answer. Although Potter is interested in cognitive "mechanisms of creativity" (as he says on his Web page), he's busy investigating underlying phenomena that stand a better chance of being measured.[23] As rat thing chatters neuroelectronically to itself and moves or draws, Potter's team makes "detailed observations of the neural signaling patterns" and "changes in the morphology and connectivity of the cells and networks." Their purpose is to explore the neuroengineering of learning and self-modification, with possible application to self-repairing computer systems, "cars that drive themselves," computers that "assist in situations where humans have lost motor control, memory or information processing abilities," and so on.[24] "I'm banking my whole career on the fact that there is a world of emergent properties in these neural networks that we don't know anything about," Potter says.[25] Yet in this very statement, we recognize, all Potter's metrics only return him at last to the black box of creativity, now called "emergence."

Rat thing is a member of a whole family of similar creature-creators in recent theoretical and experimental approaches to creativity. Other well-known instances include cellular automata (e.g., John Conway's much discussed "Game of Life"); Arthur Samuel's Checkersplayer program (the key instance in John Holland's Emergence: From Chaos to Order); the Seek-Whence, Jumbo, Copycat, Letter Spirit and other much-discussed programs from Douglas Hofstadter's Fluid Analogies Research Group; and Harold Cohen's cybernetic artist entity, Aaron—all artificial modelings of a paradigm that can be called (after Hofstadter or Steven Johnson) the queenless "colony of ants."[26] Less metaphorically, the paradigm is emergence, which has evolved in the last decade or so as our strongest alternative notion of creativity. Not just Potter but also Holland, Hofstadter, and Cohen among the above instances, for example, explicitly address creativity in complexity-theory terms as emergence.[27] So, too, we can note, postindustrial business theory is increasingly "complex"—as detailed, for example, in Ralph D. Stacey and José Fonseca's books about business and complexity theory.[28] It is not such a large mental leap, it turns out, from rat thing to a contemporary corporation.

Of course, we certainly do not want to subscribe uncritically to the emergence or complexity thesis (nor to too uniform a notion of such theory, which since the British emergentist philosophers of the early twentieth century has developed in several ontological, epistemological, and other flavors).[29] But at least initially, the thesis should strike us as having a perverse brilliance in respect to the rationalist theory of applied creativity I earlier outlined and compared to natural theology. In effect, emergence theory argues that if there is no rationally understandable subject responsible for creativity—especially the genius of the individual—then perhaps such a subject does not exist. If we really want to be rationalist about it, in other words, then we should think of creativity as all a mechanical, extrinsic process of developmental innovation akin to that antithesis of natural theology in the nineteenth century: Darwinian evolution. Specifically, perhaps the secret to a truly rational approach to creativity is to go subrational. We remember the "neurons of inspiration" that Elster—thinking about creativity from the viewpoint of rational choice theory, which some wryly call "rat choice theory"—finds unknowable. Similarly, Paisley Livingston—a rational choice philosopher of literature and aesthetics who argues for the intentionality of literary creation in his Literature and Rationality—excludes "neurophysiological processes, neural nets" and "sub-symbolic, massively distributed, connexionist computer program[s]" as "a-rational."[30] To such philosophy, as it were, emergence theory might respond in its best Alfred E. Neuman voice, "what, me worry?" That is, what's wrong with going subrational when—especially given the so-called "limits of rationality" (to cite the title of a substantial 1990 volume on standard problems in rational choice theory)—the subrational, like some Mad Magazine version of rational choice theory, might be more rational than rationality itself?[31]

Let me score the emergence theory of creativity point by point against the "applied theory" of managed innovation I outlined earlier to show why subrationality—at least up to the concluding point of ideology—can thus seem hyper-rational:

A. Constraint. In the emergence thesis, first of all, "constraint" structures can be defined super-rigorously at a micro-level that does not have to be kept in the view of subjective consciousness. The constraint thesis of managed innovation is thus nothing compared to the precision of constraint imposed by emergence theory. The operative term in Hofstadter's research, for example, is not just "domain" but "microdomain"; and when Holland speaks of "constrained generating procedures," he focuses on a micro-scale far below that of the usual discussions of artistic and other constraints.[32]

The payoff of thus going not just sub but micro is that one of the main "limits of rationality" that has plagued the whole rational choice approach to contemporary life, including the creative life, becomes irrelevant. I refer to the problem of "framing," which in rational choice theory designates the way that perceptual, psychological, social, and other contextualizations of choice—the way a choice is presented, in other words—distorts rational decision.[33] From the point of view of emergence theory, the problem of framing is metaphysical. In the microdomains where creativity starts, everything is perspectivally or locally enframed. After all, the cardinal rule of emergence theory is that there is no knowledge or action at a distance, whether spatial or temporal. For the neuron, cellular automaton, ant, and so on, the only available choices are framed, for example, as follows: if you meet another ant with the same pheromone, then do X; if your pixel is turned "off" and exactly three adjacent pixels are turned "on," then turn yourself "on"; and so on. There is no transcendental manager of choices named consciousness that sits above local choices to adjudicate globally maximal or perspective-free choices.
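To make the locality of such framed choices concrete, here is a minimal sketch (my own illustration, not drawn from the emergence literature) of the cellular-automaton rule just cited from Conway's "Game of Life." Each cell consults only its eight immediate neighbors; no global manager of choices exists anywhere in the system. (The full game adds a survival clause to the birth rule quoted above: an "on" cell stays "on" only with two or three "on" neighbors.)

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is the set of (x, y) cells that are 'on'."""
    # Count each cell's live neighbors -- purely local information.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in counts.items()
        # Birth: an "off" cell with exactly three "on" neighbors turns "on".
        # Survival: an "on" cell with two or three "on" neighbors stays "on".
        if n == 3 or (n == 2 and cell in live)
    }

# A "blinker": three cells in a row oscillate with period two --
# a small instance of pattern arising from strictly local rules.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))           # [(1, 0), (1, 1), (1, 2)]
print(step(step(blinker)) == blinker)  # True
```

No cell "knows" it belongs to a blinker; the oscillation is a description available only at a higher level than any of the local choices that produce it.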

B. Application. The real magic of the emergence thesis is then its heretical understanding of application, according to which the applied processes that manage innovation can be at once rigorously rational and wholly unpredictable, unmanageable.

On the one hand, therefore, emergence theory posits that when all those ants, cellular automata, neurons, and so on fork down their little, local rules, there is rational choice and nothing but rational choice. We might call this subrational choice, the strictest form of rational choice. It is not accidental that Holland begins his book Emergence: From Chaos to Order with literal instances of game boards as opposed to the metaphorical game theory that rational choice theory draws upon; or that the algorithms of the cognitive or computational modelings I earlier instanced so strictly implement one of the favorite teaching examples of rational choice theory: the "menu" of choices with calculable "preferences" or "utilities."[34] The decision trees of such algorithms—off vs. on, if vs. then—are unambiguous instantiations of rational choice that clarify the logic of preference interactions via hard-coded rules.

But on the other hand, of course, all these absolutely ruled local behaviors also prompt a kinky, emergence-theory kind of free choice. Emergence occurs when all the ants and electrons course down their individually strict decision paths in ways that exceed the sum of their parts and collectively express or develop surprise.[35] One standard interpretation of emergence theory, therefore, is that complex patterns arise causally, but not predictably, from lower-level building blocks, each of whose singleton traits, action rules, and starting states may be locally known, but that collectively interact to generate what Holland, in a nice phrase, calls "perpetual novelty."[36]

Thus does another of the famous "limits of rationality" in contemporary rationalism appear to fall: the dilemma of whether rational choice theory is normative or descriptive—that is, whether it prescribes what rational agents should universally choose or, instead, just empirically describes what decision agents alleged to be rational do choose.[37] From the viewpoint of the emergence thesis, this dilemma too is metaphysical. The general power of complexity theory, after all, stems from its transmutation of norm into system, where the latter—especially as theorized by Ilya Prigogine—is situationally dynamic (e.g., in dissipative exchange with environmental forces). Within complex systems of this sort, norms are in a state of disequilibrium rather than of universal or determinate maximization states, and the consequence is that there is no certain distinction between norms and unpredictable, chaotic adaptive behaviors. In other words, things happen unpredictably from the bottom up (through multiple, incalculable decision paths and local maxima); and then, equally bottom-up, the system deals with those events one way or another—some of which adaptations a later act of interpretation deems creative. Put most generally, the logical elegance of emergence theory is that it equates empirical induction (after-the-fact interpretation of bottom-up events) with creative discovery: description is creativity, in the sense that emergent systems themselves constantly describe at a higher level what is happening unpredictably at a lower level.[38]

Of course, a fuller critical examination of emergence theory would pause at this point to note that the "limits of rationality" disappear only at the cost of generating known epistemological and ontological problems amounting, in effect, to the limits of emergency.[39] But critique of emergence theory is not my main purpose here. In any case, there is something more obvious to criticize.

C. Ideology. In his expansive book of 1995 popularizing emergence (At Home in the Universe: The Search for the Laws of Organization and Complexity), Stuart Kauffman concludes by applying complexity to "An Emerging Global Civilization" (the title of his last chapter). In a revealing section of this chapter, Kauffman compares Mikhail Gorbachev's glasnost to unpredictable emergence, and then makes the same point about Tiananmen Square. He then generalizes: "We lack a theory of how the elements of our public lives link into webs of elements that act on one another and transform one another. . . . We had best attempt to understand such processes, for the global civilization is fast upon us." What is striking about this exposition is that the examples concern Eastern Europe and the Far East, but the generalization speaks for a "we" that turns out to be specifically Western democratic. In the immediately subsequent paragraph, Kauffman observes:

Modern democracy as we encapsulate it, as we tell ourselves the story of ourselves, is so much a product of the Enlightenment. Newton and Locke. The United States Constitution, which has served so well for more than 200 years, is a document built on an image of balancing political forces holding an equilibrium. Newton and Locke. Our political system is built to be flexible enough to balance political forces and allow the polity to evolve.[40]


Like other commentators, in short, Kauffman at last brings emergence theory to bear on "our" ideology. That ideology is the Western (and increasingly American) belief in emergence. Almost irrespective of its truth value, after all, emergence today is an enthusiasm that to an astounding degree lacks self-awareness or self-criticism as a Western enthusiasm. Apparently, anything emergent here is good—whether rat thing, a postindustrial corporation, online social networking systems, and so on. And also, apparently, it is just coincidental that the emergence creed conforms to the "end of history" thesis—the argument, in other words (pace Francis Fukuyama), that democracy has outswarmed the alternative communist creed of "people’s" emergence by central committee.[41] In ways too numerous to cite, in sum, Western cognitive, artificial-intelligence, new-media, business, and other theories of emergence celebrate their cause as a way to swear allegiance to democracy without seeming also to swear by any old-fashioned individualism, nationalism, or industrial capitalism making rationalist decisions (or otherwise pulling puppet strings) in the background. Kauffman, to his credit, ultimately pushes the question further to ask whether an originally Enlightenment notion of democracy must not itself develop in emergent ways to adapt its philosophy of "balanced" powers to the disequilibrium of the "unfolding, evolving nature of cultures, economies, and societies" that comprise globalism.[42]

3. Smithereens

But as I have said, critique of emergence theory—about whose logic I am more or less agnostic—is not my present goal. In conclusion, therefore, let me harvest from emergence theory just one instrumental feature that marks a partial gain in thought about destruction. Going sub and micro allows us to link destructivity integrally with creativity. Emergence, it turns out, is not just bottom-up evolutionary. It is also back-to-the-bottom revolutionary and devolutionary.

Here, then, are two exempla to close upon (merely as a teaser for future thought). One is a design function in the remarkable computer programs I earlier mentioned by Hofstadter and his group. These programs define microdomains—in this case, of numbers or letters—that generate surprising, creative results through the following process. First, an entropic, primal soup of random algorithmic interactions (the soup that Hofstadter calls his "cytoplasm") sparks transient "bondings." The alphabet soup in which the Jumbo program solves anagrams, for example, might yield a transient bond between the letters T and H sufficiently interesting—as interpreted by equally transient, micro-level "codelets" annotating events in the system—to be "glommed" together at a higher level of association (as if a cellular membrane enveloped the cytoplasmic accident). The goal of the system is then to concatenate lower-level bonds and gloms—in a process Hofstadter analogizes to perception rather than cognition (concept is percept, he argues)—until they form higher-level, "happy" combinations ready to be presented to human interpreters as semantically identifiable words or number patterns.[43]

Crucial to Hofstadter's programs, however, is the fact that not every bond or glom of combinatorial possibilities is "happy" when played out in unpredictable interaction with others. For instance, a program that insisted on retaining the letter sequence TH in the face of the fact that the resulting word must afterwards also contain a Q is probably not a good program unless it is very creative indeed, to the point of inanity. Therefore, the programs must also have a way to code (via "codelets") for "unhappiness," or the condition under which bonds and gloms divorce and throw their elements back into the mixing pool. Such unhappy processes, as Hofstadter describes them, are tantamount to a logic of destruction integral with happy creation. In notably violent language, he writes: "A far more radical remedy for an unhappy cytoplasm is the entropy-increasing transformation of disbanding: pulling a glom's top-level bonds apart and perhaps even doing so to some of the thus-revealed lower-level structures. The ultimate in radical remedies would be to smash the unhappy glom to smithereens, and to start again from scratch."[44]
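The bond/glom/smash cycle just described can be caricatured in a few lines of code. This is emphatically a toy sketch, not Hofstadter's actual Jumbo architecture: the target word list, the greedy bonding loop, and the all-or-nothing "smash" are my own stand-ins for his far subtler codelets. What the sketch preserves is the essay's point: destruction is built into the program's logic of creation.

```python
import random

def solve(letters, words, seed=0):
    """Bond letters one at a time into a 'glom'; smash unhappy gloms."""
    rng = random.Random(seed)
    pool, glom = list(letters), []
    for _ in range(10_000):  # safety bound on the stochastic search
        word = "".join(glom)
        if word in words and not pool:
            return word  # a happy, complete combination
        # Happiness test: which available letters keep the glom
        # extendable toward some target word?
        happy = [c for c in pool if any(w.startswith(word + c) for w in words)]
        if happy:
            pick = rng.choice(happy)  # bond one more letter on
            glom.append(pick)
            pool.remove(pick)
        else:
            # "The ultimate in radical remedies": smash the unhappy glom
            # to smithereens and throw its letters back into the pool.
            pool.extend(glom)
            glom = []
    raise RuntimeError("search did not converge")

# The glom "tin" is a dead end here (g and h left over), so the program
# must destroy its own partial creation before it can reach "night".
print(solve("night", words={"tin", "night"}))
```

The divorce of T-I-N is neither creative nor destructive from the loop's local standpoint; it is simply the next rule to fire, which is precisely the amorality at issue below.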

We might caption this exemplum with a question: is smashing to smithereens creative, or destructive? We, of course, don't know. The only agents who might know—local agents focused without worry on their particular task of putting together or taking apart—are not just amoral but, in their equal-handed mixing of creation and destruction, oblivious. They are a prelude to a problem that only higher levels of awareness with a stake in identity can discern.



Figure 6: "Portrait" by rat-neuron animat (in its form as MEART). (Photo courtesy of SymbioticA Research Group)


The other exemplum is one of the most interesting "portraits" that rat thing drew with its robot arm (figure 6). We can easily imagine what happened. The tip of the pen made one too many passes over an area of the paper where the micro-weave had thinned over a micro-pit in the underlying wood (itself the emergent effect of untold millions of cells and interstitial fiber structures). At that point, the paper suddenly collapsed into the cavity, forming a transient, yet all-decisive, micro-adhesion of pen/paper/wood. In that literally applied moment, the signature event of the portrait was born: the fantastic, unpredictable tear (a Prigogine "bifurcation" event if ever there was one) that developed, migrated, and proliferated outwards in crazy zig-zags of improvisation.

We might caption this exemplum with a similar question: is this happy or unhappy? Is this a picture of creativity, in other words, or of destructivity? Again, we don't know. Only the hand that drew—and ripped—might know; and it isn't talking. Its knowledge—a knowledge of pure tact or contact—is not knowledge at all.

In the moment of emergency, Wordsworth says at the start of the boat-stealing episode in The Prelude, "There is a dark / Invisible workmanship that reconciles / Discordant elements, and makes them move / In one society." Or, as Yeats later said, a terrible beauty is born. We confront not just "slouching toward Bethlehem" (emergence) but the "circus animals' desertion" (de-emergence) that was there in the beginning—the all too mortal, creative beginning. We face acts of creation that are also acts of destruction; and we do not yet possess a moral compass to navigate from the radical amorality of the underlying local agents of those actions (what, me worry?) to variations of agency or subjectivity vested in the we that does worry. We—such as we are—only know that an adequate ethics of creation and destruction can no longer be based just on the tactical standpoint of locally efficient hunter-prey identities in the contemporary world and the interpreters (political, journalistic, or otherwise) “embedded” with them (as was said of reporters traveling with military units in the U.S. Iraq war). This is the frontier, it seems to me, of thought on creative destruction today: how to relinquish ontological, epistemological, psychological, religious, political, and other illusions of continuity between underlying agency and fragile, higher subjects while sustaining, or creating, an ethical bridge between the two.

Or in any case, such is one way to begin thinking about creative destruction today—though many more ways are needed to make that thought adequate to the innovation and terror of our times.


Alan Liu is Professor and Chair in the English Department at the University of California, Santa Barbara, where he teaches in the fields of digital humanities, British Romantic literature and art, and literary theory. He has published three books: Wordsworth: The Sense of History (Stanford University Press, 1989), The Laws of Cool: Knowledge Work and the Culture of Information (University of Chicago Press, 2004), and Local Transcendence: Essays on Postmodern Historicism and the Database (University of Chicago Press, 2008). Liu is principal investigator of the University of California's multi-campus research group on "Transliteracies: Research in the Technological, Social, and Cultural Practices of Online Reading." He is the editor of The Voice of the Shuttle: Web Site for Humanities Research.

[1] Teresa M. Amabile et al., "Creativity Under the Gun," in Harvard Business Review on the Innovative Enterprise (Boston: Harvard Business School Publishing, 2003), 1-25; Harvard Business Review, Continuous Innovation: No Genius Required, Harvard Business Review OnPoint Collection (Boston: Harvard Business School Publishing, 2001); Richard Leifer et al., Radical Innovation: How Mature Companies Can Outsmart Upstarts (Boston: Harvard Business School Press, 2000); Harvard Business Review on the Innovative Enterprise; Peter Clark, Organizational Innovations (London: Sage, 2003); Paul B. Paulus and Bernard A. Nijstad, eds., Group Creativity: Innovation Through Collaboration (Oxford: Oxford Univ. Press, 2003); Richard Florida, The Rise of the Creative Class, and How It's Transforming Work, Leisure, Community and Everyday Life (New York: Basic, 2002).

[2] Joseph A. Schumpeter, Capitalism, Socialism and Democracy (New York: Harper and Row, 1975). This book, which first appeared in 1942, is now Schumpeter's best-known work. In particular, its description of disruptive, innovation-driven change as "creative destruction" (especially pp. 83-84) is now cited so often in journalistic and scholarly discussion of the "innovation economy" that it has become the de facto motto of postindustrialism.

[3] The best-known instance of the watchmaker God thesis in this context is William Paley's Natural Theology (1802), which begins, "suppose I had found a watch upon the ground. . . ." (Natural Theology; or, Evidences of the Existence and Attributes of the Deity, 12th ed. [London: J. Faulder, 1809], University of Michigan Humanities Text Initiative, 1998). I am grateful to Colin Jager for correspondence on the relation, and distinction, between natural theology and deism.

[4] Jon Elster, Ulysses Unbound: Studies in Rationality, Precommitment, and Constraints (Cambridge: Cambridge Univ. Press, 2000), 212-13: "The 'neurons for inspiration,' if there are such things, fire more intensely when the demands set by the conscious mind are stringent but not too stringent. Exactly how this might happen, I don't know." Elster's full statement occurs in the context of his discussion of maximization within constraints, which I refer to below.

[5] See esp. Elster's chapter titled "Less is More: Creativity and Constraints in the Arts" in his Ulysses Unbound.

[6] On the IDEO firm, see Bruce Nussbaum's cover story on "The Power of Design," Business Week Online, May 17, 2004.

[7] On Flarf poetry, see Charles Bernstein, "The Flarf Files," SUNY Buffalo Electronic Poetry Center (accessed July 1, 2009). "NASA Viagra" is my invention. Thanks to Annie McClanahan, a member of my graduate seminar on new media at University of California, Berkeley (during my stay as visiting Beckman Professor in 2003), for first alerting me to "flarf" in her fine presentation and essay.

[8] Alan Liu, The Laws of Cool: Knowledge Work and the Culture of Information (Chicago: Univ. of Chicago Press, 2004), esp. 2-3, 322-26, which discuss the fate of the "aesthetic ideology" of the arts when so much of that ideology has been taken over by corporate culture in the name of "creative destruction." Robert Paul Weiner, "The Ideology of Creativity in the United States," chap. 9 in his Creativity and Beyond: Cultures, Values, and Change (Albany: State Univ. of New York Press, 2000).

[9] Schumpeter's thought about destruction is far less extensive or analytical than his argument about creativity (long economic cycles of innovation). However, we can discern in his metaphors of violence not only the residue of nineteenth-century debates (as in his unsettled alternation between the words "revolution" and "evolution") but the history of his own moment as an Austrian émigré during the Blitzkrieg epoch of the 1930s and 1940s—e.g., "This kind of competition is as much more effective than the other [normal competition of prices, etc.] as a bombardment is in comparison with forcing a door" (Capitalism, Socialism and Democracy, 84).

[10] Claude Lévi-Strauss, Totemism (Boston: Beacon, 1963), 89: "We can understand, too, that natural species are chosen [as totems] not because they are 'good to eat' but because they are 'good to think.'" In general, I am influenced in my present argument by Lévi-Strauss's structural formulation of myth and other prehistorical knowledge systems. By "structure of thought" about destruction, I mean to suggest the need for a modern theory of destruction comparable to the prehistorical myths that Lévi-Strauss shows accommodated such primal versions of the contradiction between creativity and destruction as Life versus Death, Nature versus Culture, and Cooked versus Raw. A modern theory of destruction, however, would read our contemporary myths about creation versus destruction under the condition of historical change rather than of prehistory.

[11] David Novitz, "Creativity and Constraint," Australasian Journal of Philosophy 77 (1999): 67-82. This passage is cited and further discussed in Novitz, "Explanations of Creativity," in The Creation of Art: New Essays in Philosophical Aesthetics, ed. Berys Gaut and Paisley Livingston (Cambridge: Cambridge Univ. Press, 2003), 184-88.

[12] For an extensive bibliography of press coverage of the FutureMAP controversy, including many links to online versions of print articles (and mirrored versions of no-longer freely accessible online articles), see the page kept by Robin D. Hanson, who, as a professor of economics, helped design the system: "Press Coverage of Robin Hanson and Policy Analysis Market" (accessed July 1, 2009).

[13] Robin D. Hanson defends the system he helped design against the charge that it would reward actual terrorists as follows: "The market wouldn't have involved predictions about specific attacks—the question involved aggregate casualties on a quarterly basis. It's the difference between predicting the crime rate and predicting which bank will be robbed next" (David Glenn, "Defending the 'Terrorism Futures' Market," Chronicle of Higher Education, August 15, 2003).

[14] On "destructivity," see my Laws of Cool, esp. 327-71.

[15] Larry Bowie, "Georgia Tech Researchers Use Lab Cultures to Control Robotic Device," Georgia Institute of Technology Research News, April 25, 2003. I borrow the name "rat thing" from Neal Stephenson's Snow Crash (New York: Bantam, 1992), 83-91, where it names a cyborg guard dog (dog brain in robot body). Not surprisingly, Potter's rat thing caught the interest of the cyberpunk and related circles. Cyberpunk novelist Bruce Sterling passed on news about Potter's work to the Nettime list in July 2003 ("The Semi-Living Artist," posting to Nettime mailing list, July 11, 2003).

[16] The quotation and the technical information above about the computer-mediated feedback loop between rat thing and its robot are from David Cameron, "Rat‑Brained Robot," MIT Enterprise Technology Review, December 18, 2002.

[17] Bowie, "Georgia Tech Researchers."

[18] Guy Ben-Ary, Phil Gamblen, and collaborators developed MEART beginning in 2000 at the SymbioticA Research Group (part of SymbioticA: The Centre of Excellence in Biological Arts in the School of Anatomy and Human Biology at the University of Western Australia, Perth). Originating in an incarnation known as Fish and Chips, which appeared at Ars Electronica in 2001, MEART proposes to embody "the fusion of biology and the machine-creativity emerging from a semi-living entity" (MEART home page [accessed July 1, 2009]). Thanks to Jane Coakley, SymbioticA Manager, for correspondence on the history of MEART. See also "Researchers Use Lab Cultures to Create Robotic 'Semi‑Living Artist,'" Science Daily, July 9, 2003, adapted from Georgia Institute of Technology press release.

[19] On the ArtBots show, see the show's Web page, "Artbots: The Robot Talent Show," 2003. See also "Researchers Use Lab Cultures"; and Helen Pearson, "Artbots Show Talent," Nature, July 15, 2003. According to the Nature story, MEART drew "portraits" of people at the Manhattan show as follows: a digital image of the robotic arm's "initial doodles is subtracted from a digital photo of them [humans]"; the difference "is converted into a grid of 64 pixels" with high-value pixels representing "a spot that is dark on the original but remains blank on the paper"; these pixel values are converted into electrical signals and communicated remotely through the Internet to the rat-neuron array at Georgia Tech; and then return signals from the responsive neurons move the drawing arm to areas of the canvas corresponding to areas of high electrical activity on the neurons' micro-electrode grid. For more information about MEART and its art, see the MEART home page kept by the SymbioticA Research Group at University of Western Australia, Perth (accessed July 1, 2009).
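For readers who want the subtract-and-downsample step of that loop in concrete form, it can be sketched as follows. This is a minimal illustration under stated assumptions only: the function name, the pure-Python image format, and the 8x8 grid size are my own inventions, not MEART's actual software.

```python
# Hypothetical sketch of the image-processing step the Nature story
# describes: subtract the arm's doodles from a photograph of the
# sitter, then reduce the difference to a coarse grid whose
# high-value cells mark areas dark in the photo but still blank on
# the paper.

def difference_grid(photo, doodle, grid=8):
    """photo, doodle: equal-sized 2D lists of brightness values in
    [0.0, 1.0], where 0.0 is dark; dimensions are assumed to be a
    multiple of grid. Returns a grid x grid matrix of mean
    'still to be drawn' values."""
    h, w = len(photo), len(photo[0])
    cell_h, cell_w = h // grid, w // grid
    out = [[0.0] * grid for _ in range(grid)]
    for gy in range(grid):
        for gx in range(grid):
            total = 0.0
            for y in range(gy * cell_h, (gy + 1) * cell_h):
                for x in range(gx * cell_w, (gx + 1) * cell_w):
                    # dark in the photo (low value) but blank on the
                    # paper (high value) yields a large difference
                    total += doodle[y][x] - photo[y][x]
            out[gy][gx] = max(0.0, total / (cell_h * cell_w))
    return out
```

In the actual installation, high-value cells were then encoded as electrical stimulation for the neuron array, and the neurons' return signals steered the arm toward the corresponding regions of the canvas.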

[20] Michelle Delio, "The Robot Won't Bite You, Dear," Wired News, July 15, 2003.

[21] Pearson, "Artbots Show Talent."

[22] "Researchers Use Lab Cultures."

[23] Steve Potter home page, Potter Group, 2003. By comparison with the work of Douglas Hofstadter's Fluid Analogies Research Group (FARG) at Indiana University (Center for Research on Concepts and Cognition), which Potter's Web page links to through the phrase "mechanisms of cognition," the Potter Group's work on neural connectivity focuses on a level of the creativity problem lower than that of FARG. The latter investigates not subperceptual neural activity but a logically higher stratum of interaction between the perceptual and conceptual (or "recognition" and "cognition"). For this distinction in levels, see Douglas Hofstadter and the Fluid Analogies Research Group (FARG), Fluid Concepts and Creative Analogies (New York: Basic, 1995)—e.g., Hofstadter and Gary McGraw, "Letter Spirit: Esthetic Perception and Creative Play in the Rich Microcosm of the Roman Alphabet," in Fluid Concepts and Creative Analogies, 466.

[24] Pearson, "Artbots Show Talent"; Cameron, "Rat‑Brained Robot."

[25] Cameron, "Rat‑Brained Robot."

[26] On cellular automata and Arthur Samuel's Checkersplayer program, see, for example, John H. Holland, Emergence: From Chaos to Order (Cambridge, MA: Perseus, 1998). On the Seek-Whence, Jumbo, Copycat, and Letter Spirit programs, see Hofstadter and FARG, Fluid Concepts and Creative Analogies. On Harold Cohen's Aaron, see Cohen, "How to Make a Drawing," Science Colloquium, National Bureau of Standards, December 17, 1982; "How to Draw Three People in a Botanical Garden," paper presented at the Seventh National Conference on Artificial Intelligence, St. Paul, MN, August 21-26; and "The Further Exploits of Aaron," Stanford Humanities Review 4, no. 2 (1997). On Cohen's Aaron, see also Margaret A. Boden, The Creative Mind: Myths and Mechanisms, 2nd ed. (London: Routledge, 2004), 150-66 and passim. On the "colony of ants" model, see Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid (New York: Vintage, 1980), 311-36 and passim; and Steven Johnson, Emergence: The Connected Lives of Ants, Brains, Cities, and Software (New York: Touchstone Simon and Schuster, 2001), esp. 29-33. I have discussed "rat thing," Aaron, and cellular automata at greater length in other papers, including "The Rout of Creativity: Destructive Art, New Media Art, and the Aesthetics of the New" (Beckman Lecture, University of California, Berkeley, October 28, 2003), and "'A Forming Hand': Creativity and Destruction from Romanticism to Emergence Theory" (North American Society for the Study of Romanticism conference, Montreal, August 16, 2005).

[27] Unlike the others mentioned here, Cohen does not directly address or model cognition, agent-based behavior, or cellular automata in his development of Aaron. However, his essays on the way Aaron produces art may be placed in the same camp because they hypothesize bottom-up complexity, sometimes explicitly described as emergent—for example: "We should expect to find creativeness exercised, not as another kind of function entirely, but in highly particularized modes for the reconstruction of mental models from low level experiential material" ("How to Make a Drawing," 10); and "I regard the aesthetics of AARON's performance as an emergent property arising from the interaction of so many interdependent processes, the result of so many decisions in the design of the program, that it becomes meaningless to ask how much any one of them is responsible for the outcome" ("How to Draw Three People," 10).

[28] Ralph D. Stacey et al., Complexity and Management: Fad or Radical Challenge to Systems Thinking? (London: Routledge, 2000); Ralph D. Stacey, Complex Responsive Processes in Organizations: Learning and Knowledge Creation (London: Routledge, 2001); José Fonseca, Complexity and Innovation in Organizations (London: Routledge, 2002). See also Stacey's Complexity and Creativity in Organizations (San Francisco: Berrett‑Koehler, 1996) and Strategic Management and Organisational Dynamics: The Challenge of Complexity, 3rd ed. (Harlow, Eng.: Financial Times, 2000).

[29] For an extensive review of the history and varieties of emergence theory, together with discussion of its philosophical problems, see Timothy O'Connor and Hong Yu Wong, "Emergent Properties," Stanford Encyclopedia of Philosophy, October 23, 2006. (While most of this article is generally accessible, certain parts assume technical expertise in the philosophy discipline.)

[30] Paisley Livingston, Literature and Rationality: Ideas of Agency in Theory and Fiction (Cambridge: Cambridge Univ. Press, 1991), 3, 19, 17.

[31] Karen Schweers Cook and Margaret Levi, eds., The Limits of Rationality (Chicago: Univ. of Chicago Press, 1990).

[32] Hofstadter and FARG, Fluid Concepts and Creative Analogies—e.g., "It is my firm belief that pattern perception, extrapolation, and generalization are the true crux of creativity, and that one can come to an understanding of these fundamental cognitive processes only by modeling them in the most carefully designed and restricted of microdomains" (86). On constrained generation procedures, see Holland, Emergence, esp. 125-42.

[33] On the framing problem, see for example Amos Tversky and Daniel Kahneman, "Rational Choice and the Framing of Decisions," in Limits of Rationality.

[34] See, for example, the restaurant "menus" in Michael Allingham's Choice Theory: A Very Short Introduction (Oxford: Oxford Univ. Press, 2002), 3-6 and passim.

[35] In using a quantitative trope ("exceeding the sum of the parts") to describe the qualitative surprise of emergent systems, I follow Holland, for whom this exact trope, often repeated, amounts to an informal definition of emergence (e.g., Emergence, 122, 225).

[36] Holland, Emergence, 45. In the classic cellular automata instance, for example, a simple set of if-then rules tells the pixels on a computer screen to react to their local environment to produce such self-sustaining higher organisms as "gliders," "guns," "rakes," and so on. For Prigogine's development of complexity theory, see for example Grégoire Nicolis and Ilya Prigogine, Exploring Complexity: An Introduction (New York: W. H. Freeman, 1989); and Ilya Prigogine with Isabelle Stengers, The End of Certainty: Time, Chaos, and the New Laws of Nature (New York: Free Press, 1997).
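The "simple set of if-then rules" in this note can be made concrete. The sketch below is my own minimal illustration (not drawn from Holland) of the best-known cellular automaton, Conway's Game of Life: purely local birth/survival rules, applied four times, carry the "glider" one cell diagonally, a higher-level "organism" that no individual rule mentions.

```python
from collections import Counter

def step(live):
    """Apply Life's local if-then rules to a set of live (x, y) cells."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A dead cell with exactly 3 live neighbors is born; a live cell
    # with 2 or 3 live neighbors survives; everything else dies.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The canonical glider: four applications of the local rules
# reproduce the pattern shifted one cell down and to the right.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

Nothing in `step` refers to a glider; the moving pattern is entirely an artifact of local interaction, which is what makes the example a stock illustration of emergence.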

[37] On normative versus descriptive understandings of rational choice theory, see for example, Karen Schweers Cook and Margaret Levi, introduction to Limits of Rationality, 3-4; and Tversky and Kahneman, "Rational Choice and the Framing of Decisions," 60.

[38] This last formulation of "descriptions" internal to complex systems is merely a guess at one possible way to reconcile the sometimes divergent epistemological and ontological understandings of emergence—i.e., the relatively weak view of emergence as an artifact of limitations in the observer's knowledge or predictive powers versus the stronger view of it as a fact of genuinely new, higher-level phenomena that do not follow from fundamental, lower-order regimes of existence no matter how well these might be known. My guess is inspired in part by the "coderack" and "codelet" mechanisms in Hofstadter's programs, which in effect describe lower-level phenomena in a manner that is not just interpretive of, but functional in, the process of emergence (see my summary of Hofstadter's programs below). See also Holland's discussion of the way "constrained generating procedures with variable geometry" utilize "descriptions" that "include the state of the mechanism" (Emergence, 162-70). Guessing aside, however, it must be said that I cannot pretend to adjudicate with rigor the quite involved, overall problem of epistemological versus ontological emergence in philosophical treatments of complexity theory (for a review of which, see O'Connor and Wong, "Emergent Properties").

[39] Again, O'Connor and Wong in "Emergent Properties" provide a useful introduction and review of such problems.

[40] Stuart Kauffman, At Home in the Universe: The Search for the Laws of Self-Organization and Complexity (New York: Oxford Univ. Press, 1995), 299.

[41] Francis Fukuyama, The End of History and the Last Man (New York: Free Press, 1992). Manuel Castells suggests that the correlation between postindustrial capitalism as a "mode of production" and networked information technology as a "mode of development" only came to seem a necessity with the decline of competing alternatives for synchronizing modes of production and development—e.g., the decline of the Soviet or Asian Pacific systems (The Information Age: Economy, Society and Culture [Malden, MA: Blackwell, 1996-97], 1:18-22). Something of the same logic of necessitarian coincidence applies here, making it seem that Western modes of emergence—as channeled through historically specific social, economic, political, and cultural forms (e.g., the "individual")—are the only ones possible.

[42] Immediately after the last passage quoted above, Kauffman continues: "But our theory of democracy takes little account of the unfolding, evolving nature of cultures, economies, and societies" (At Home in the Universe, 299).

[43] See, for example, the chapters on the Jumbo and Copycat programs in Hofstadter and FARG, Fluid Concepts and Creative Analogies. (My simplified examples here—T, H, and so on—are inventions designed for my quick-and-dirty summary.) The embeddedness of cognition in perception—where the latter concept encompasses a whole domain of complex processing—is one of Hofstadter's major themes. See, for example, ibid., 92-93, 210-11.

[44] Ibid., 116.