Do revolutions follow predictable patterns, like migratory birds, or are they “black swans” that appear unexpectedly and surprise us every time? Can historians help us understand the present, or do they merely obscure the specificity of current events? These are some of the questions raised by the recent uprisings in North Africa and the Middle East. To answer them satisfactorily, however, we may need to distinguish between different kinds of historical fowl. For sometimes a black swan is really a turkey, a dodo, or a blackbird.
Scenes of revolution in Tunis and Cairo certainly bear a resemblance to uprisings in Paris and other European capitals some 220 or 160 years ago. As a number of distinguished historians have pointed out, there are reasons why history repeats itself: later generations of revolutionaries often model their behavior on that of their predecessors. The Bolsheviks saw themselves as Jacobins, battling their Girondin/Menshevik foes. 2011, similarly, recalls 1789, which in turn raises the question of whether another 1793 lies ahead. Other historical parallels also come to mind: 2011 may well foreshadow another 1979, should the Muslim Brotherhood go the way of Khomeini.
But the present wave of revolutionary movements also overlaps with a historiographical current that questions the value of historical parallels and, more ambitiously, the very possibility of determining historical causation. This current is, for the time being, little more than a trickle, but it flows from an influential source: Nassim Nicholas Taleb’s 2007 best-seller, The Black Swan. His argument underpins two recent articles in the pages of Foreign Affairs, published fourteen months apart, both of which present a case against “linear” interpretations of “complex” historical events, using revolutions as a case study. In complex systems, “there is an absence of visible causal links between the elements, masking a high degree of interdependence and extremely low predictability,” comment the authors of the second article (“The Black Swan of Cairo”), Taleb himself and Mark Blyth. Writing before the Arab revolutions broke out, Harvard historian Niall Ferguson (in “Complexity and Collapse: Empires on the Edge of Chaos”) had advanced a similar line of thought, but with respect to revolutions past, including the French. Also drawing on The Black Swan, Ferguson criticized the tendency of historians to seek “linear” explanations of complex events, arguing instead that “the proximate triggers of a crisis are often sufficient to explain the sudden shift from a good equilibrium to a bad mess.”
The turn to complex systems by these authors is presented as a frontal challenge to the way historians do business: Ferguson chides his colleagues for privileging the longue durée over the crisis, structural imbalances over gross miscalculations. Taleb also has harsh words for historians, who in his view “reverse engineer too much” (Black Swan, 199). Historians should stop pretending they can discover the causes of events, and be content with delivering “the thrill of knowing the past” (198).
There is no doubt a degree of willful provocation in this indictment of the historical profession. And many historians will simply shrug this criticism off as uninformed, although similar arguments have also been made by well-respected historians, such as John L. Gaddis in The Landscape of History (who drew on the same intellectual genealogy as Taleb: Henri Poincaré, Edward Lorenz, Benoit Mandelbrot). The specific attacks on the historical explanation of revolutions, however, merit reflection. This in turn entails a broader assessment of Taleb’s overall skepticism toward the historical craft, and his assumption that history is driven by the unexpected and unpredictable appearances of black swans. In fact, one must distinguish among three types of bird:
1. The turkey. This is one of Taleb’s favorites, as it appears in both his 2007 book and the Foreign Affairs article. If you’re a turkey, life is good for your first 1,000 days. You might induce from this regular pattern of feedings that life will continue to be good for as long as you live. But then Thanksgiving comes around, and… well, end of story. Taleb’s point here is to criticize our tendency to assume that improbable events will never happen. Historically, an analogy with the turkey story might be any devastating event that arrives from without: the Black Death of 1348-50; from an Amerindian perspective, the conquest of America; or a natural disaster, such as the Lisbon earthquake of 1755.
While surprising when it occurs, this kind of Black Swan does not really pose a challenge to the way we think about the past. No sane person explains the Lisbon earthquake in terms of a political imbalance at the Portuguese court, or the Black Death in terms of a longue durée breakdown of personal hygiene in Western Europe. These catastrophic events happen, as it were, on their own – and when they do, we’re no more prepared than the unfortunate turkey.
2. The dodo. One of the reasons the dodo became extinct was that it had no fear of humans. It also couldn’t fly. Unlike the turkey, in other words, its demise was partly due to a problem within: while it took the arrival of humans to wipe the dodo off the face of the earth, the dodo might have avoided this fate had it possessed a sense of fear and a pair of wings. I offer the dodo as an avian analogy for another kind of Black Swan – the kind epitomized by the housing crash of 2008. The crash was a perfect example of the kind of hubris Taleb had warned about only a year earlier in his book: economists had been unwilling to consider the outlying chance that the mortgage bubble might burst. In retrospect, however, the cause of the housing crash is not hard to identify: economists and investment bankers had simply been dodos, giving out mortgages to people who couldn’t afford them.
In this case as well, the lesson of the dodo poses no particular challenge to the historian. The reason the dodo went extinct is not a mystery; similarly, there is a large set of historical catastrophes that occurred for fairly straightforward reasons, such as the American Civil War (slavery). Of course, our ability as historians to identify causes retrospectively does not mean that participants had a clear idea of what was happening: no one in 1861 would have predicted four years of slaughter. But again, the shortsightedness of contemporaries does not mean that historical reconstruction is impossible or unfounded.
3. The red-winged blackbird. Early in 2011, 5,000 blackbirds fell out of the sky, dead, in Arkansas. Why they died remains something of a mystery: some have suspected fireworks, while it also appears that the USDA had a hand in other mass bird deaths. The death of the blackbirds thus presents a real challenge to retrospective causality: in addition to being unpredicted, their fate is also unexplained.
Just because an event is a mystery, of course, doesn’t mean it didn’t have a single cause. Detective stories would be rather disappointing if, at the end of an investigation, Hercule Poirot concluded that due to the complex nature of the social relations on Nile cruises, everyone on the boat contributed to the death of Linnet Ridgeway. So the blackbird analogy may be imperfect for describing complex historical systems: presumably the Arkansas blackbirds died for some particular reason, even if we don’t know for certain what it was.
Historical catastrophes, by contrast, may not always happen for a particular reason. This at least is the argument that Taleb and Blyth put forward in their Foreign Affairs article. Our instinct, they point out, is to assign blame to catalysts – e.g., the rising cost of living, or lack of employment options – but it is a mistake to “confus[e] catalysts for causes and assum[e] that one can know which catalyst will produce which effect.” The real cause of the revolution, they suggest, was systemic. We tricked ourselves into thinking that Egypt under Mubarak was stable, whereas in reality the Egyptian government had just swept its social and political problems under the rug. If we were caught off guard by the demonstrations in Tahrir Square, it’s only because we let ourselves be fooled by the appearance of calm. In hindsight – in the historian’s rearview mirror – we can recognize that Egypt was a fragile, complex system. Any account of its sudden collapse must consider “the properties of the terrain […] rather than those of a single element of the terrain.”
Interestingly, in his account of political collapse, Ferguson reaches the exact opposite conclusion. Against historical explanations that turn to the longue durée and structural conflicts, he suggests, as we saw, that “proximate triggers” can offer a sufficient explanation. Their different views may stem from their different notions of complexity. Taleb and Blyth distinguish between two kinds of political systems, those that are actually in equilibrium and those that merely appear to be. Stable countries, they argue, can experience political turmoil without being thrown into social disarray. But countries that produce the illusion of stability by suppressing signs of turmoil are much more prone to total debacles.
Ferguson does not make such a distinction: in his view, all countries are complex systems, and thus can fall prey to black swans. The metaphor governing this view of history, in fact, is less of the avian than the insect variety: to illustrate his perspective, Ferguson revives the old chaos theory tale of the “butterfly [who] flaps its wings in the Amazon and brings about a hurricane in southeastern England.” (Taleb also uses this story to illustrate black swans in his book.)
What are we to make, then, of black swans, butterflies, and other fateful birds in the realm of history? Causality is always a tricky business: a common joke about history dissertations on early-modern France was that, in their last chapter, they all showed how their topic caused the French Revolution. Tellingly, this particular example of political collapse features in both Foreign Affairs articles as well: for Ferguson, staggering state debt was both the proximate and the real cause of the Revolution, whereas for Taleb and Blyth, the inherent volatility of Old Regime France is what enabled the Black Swan of Paris to have such a devastating impact.
Given that historians have been debating the origins of the French Revolution ever since it occurred, this event offers a good case study for examining the value of causality in a complex system. The first point to make is that pre-1789 European states seem to justify Taleb and Blyth’s distinction between robust and volatile (if superficially stable) regimes. As Gail Bossenga notes in a recent essay on the “Financial Origins of the French Revolution,” the cost of servicing the French debt was nearly double what the British crown paid in interest, due to the complicated and contradictory structure of French finances. Unsustainable debt may have been the proximate cause of the Revolution, but this debt was only unsustainable because of the systemic problems with French revenue collection. Taleb and Blyth 1, Ferguson 0.
But were these structural problems a sufficient cause of the Revolution? This theory, held by Marxist historians and Marxian political scientists alike, prevailed until the 1980s, when revisionist historians started poking holes in it. History itself came to their aid: as nineteenth-century revolutionaries had discovered, the essential contradictions of capitalism were insufficient to trigger the revolution Marx had prophesied. Just because Black Swans can occur doesn’t mean they will. This is where Taleb and Blyth’s distinction between stable and volatile political systems begins to appear a little too neat. Does one ever encounter such ideal-types in reality? Governments that are really good at repression – say, the Soviet Union – are ultimately quite stable. Conversely, all governments undoubtedly have their weak spots and thus exhibit traits of complex systems. Taleb and Blyth 1, Ferguson 1.
The real challenge for historians, it seems, is figuring out how and when systemic weaknesses are exploited (or exploded) by surface crises. The French debt problem is again exemplary: the French state did not fall apart because it couldn’t collect enough revenue; it fell apart because the debt crisis was intrinsically connected to the broader, systemic problem of political representation and consent to taxation. The financial challenge facing the state only spiraled out of control when it became politicized (as Dale Van Kley and Thomas Kaiser argue in their introduction to From Deficit to Deluge: The Origins of the French Revolution, in which Bossenga’s essay also features).
The French story can thus teach us a number of things:
1. It underscores how unhappy states may be like Tolstoy’s unhappy families: every one is unhappy in its own way. Revolutions are unlikely to happen for the same exact reasons, since in each case the crisis must occur on a political fault-line; otherwise, it will just remain a crisis. This means we must completely revise and rewrite the social scientific literature that believed (in Jack Goldstone’s words) “the periodic state breakdowns in Europe, China, and the Middle East from 1500 to 1850 were the result of a single basic process” (see Revolution and Rebellion in the Early Modern World, 459).
2. For a revolution to occur, both a proximate and a systemic cause are necessary. Perhaps over the extended course of centuries every political system will experience an event that “randomly” triggers its collapse; but this sort of disruption occurs on a very different scale. Every human being will eventually die, after all, but what interests doctors and scholars are cases of premature death.
3. Where Taleb and Blyth warn against overly simplistic models of causation (“catalysts as causes”), the more common tendency among historians is to pile on the causes (“and this, too, contributed to bringing about the French Revolution” [to be fair, Taleb and Blyth also criticize this tendency as the “tipping point” fallacy]). To some extent, the social, cultural, economic, and political dimensions of any given society are interconnected, so it is not entirely wrong to say that x played a role in causing z, since x influenced y, which in turn triggered z. If causality is often described in terms of billiard balls, one must imagine a table that is nearly covered in them, with a handful of players hitting different balls at once: the ball that finally sinks the eight-ball will have been set in motion by a chain reaction of other moving balls. It would probably be impossible to predict exactly how (or even if) the eight-ball would be sunk; but that does not entail that we cannot, after the fact, reconstruct the various movements and shocks that led to its sinking. In so doing, we might learn which balls actually contributed and which balls did not: just because another ball is moving in the direction of the eight-ball doesn’t mean it necessarily contributed to sinking it. If one is going to make arguments about causality, one has to walk through all the different steps leading from the cue to the pocket.
Do historians need to rethink their methodology in light of complex-systems theory? There is surely value in rethinking the nature of events – a perennial subject of methodological inquiry, after all – with an eye to different modes of causality. Historians still make plenty of claims that are ridiculously unfounded (such as: Spinoza caused the French Revolution), and do like to multiply causes excessively. It is also worth thinking more carefully about how surface crises relate to structural weakness. For starters, how do we identify a structural weakness? Certain political or economic arrangements may seem doomed to collapse, and yet stubbornly survive crisis after crisis. Historians like to speak about “internal contradictions,” but how do we know when these contradictions were genuinely untenable, and when they were just, well, contradictions? Doesn’t every society exhibit contradictions? It is here that complex-systems theory may ultimately be of the greatest benefit: less for helping us understand how things fall apart, and more for helping us understand how and when they don’t. After all, there are an awful lot of white swans out there, too.
At institutions of higher education, most attention is paid to the beginning and end of undergraduate studies. Curriculum committees debate the nature and number of requirements that students must fulfill, mostly in their freshman year; and departments spend a great deal of time evaluating the content and structure of majors, which tend to occupy students in their junior and senior years. No one gives much thought to what students do in the middle, when they’re generally encouraged to explore whatever topics they wish.
The principal philosophy that governs this middle period of a student’s education is of course the elective system. The right of all students to take a class on the subject of their choosing is a hallmark and admirable feature of the American university. It is often through such chance encounters with less common subjects that scholarly passions are born and majors are chosen. No one studies linguistics or anthropology in high school.
But because the elective system is so fundamental to higher education, and because the major is under departmental control, we rarely step back and ask whether this combination of general education requirements, electives, and specialization actually meets the objectives of a liberal education. Of course, the answer to this question depends largely on how one defines liberal education. For the sake of argument, let’s take the definition offered in the 2009 MLA Report to the Teagle Foundation on the Undergraduate Major in Language and Literature. This report identified the acquisition of broad, cross-disciplinary and transhistorical “literacy” as a central component of liberal education (scientific literacy would be another component, but that’s a different story). In other words, students should be sufficiently well versed in an array of humanistic fields, canons, methodologies, and periods to engage with sources (and pursue further research, if they wish) in a large number of areas. To be sure, we expect a lot more from liberal education than this single aim; this is simply a minimalist definition.
Given this definition, it seems fair to say that we place blind faith in the academic virtues of our current system. We simply assume that somewhere along the way, between fulfilling their general education and major requirements, students will pick up enough knowledge about other fields to meet the demands of a liberal education.
It is easy to understand why we place such faith in this system, since there is no obvious, acceptable alternative. Institutions such as St. John’s College, whose curricula are set in stone, will only ever cater to a tiny minority of students; even Columbia University’s two-year core curriculum is highly exceptional. As Louis Menand recently noted in The Marketplace of Ideas, it is virtually impossible to imagine introducing a curriculum such as Columbia’s core today; such highly regimented courses could only evolve under particular historical circumstances. The vast majority of students today desire a greater say about the content of their education. And we must honor this desire, if only because students who do not buy into their educational program are unlikely to be good learners.
There are other ways, however, to think about the middle part of undergraduate education, particularly in the humanities. Let us focus momentarily on students who major in the humanities. Whether students choose to major in English, religious studies, anthropology, or history, there are in fact no structures in place to encourage or enable them to acquire a solid foundation in other disciplines, cultures, literatures, and historical periods. The student writing her honors thesis on Alexander Pope often does not know who Pope Alexander VI was.
Moving now to all undergraduates, I would push this argument even further. Why is it that the vast majority of humanities courses are taught as if we were training students to professionalize in a given field (say, French), when only a tiny fraction of these students – non-majors and majors alike – are actually going to pursue a graduate degree in the field? Whether a student is majoring in engineering and taking a French class out of a love for French literature, or whether she’s a French major and is required to take a French class, chances are that she is not going to become a professor of French. And yet our humanities majors, and our undergraduate curricula more broadly, are designed to produce budding experts in fairly narrow fields. This design is understandable in fields such as economics or engineering, where students often do go on to take jobs in which they need specific skills and knowledge. But why should it be so in the humanities?
To be sure, specialization, even at the undergraduate level, has its virtues: engaging with material at a higher level of expertise allows students to hone their research skills and to produce more consequential bodies of work (such as an honors thesis). Still, I would ask whether our primary objective, as humanities professors, should be training students as though they will all go on to become scholars, or whether it shouldn’t be something else – such as offering all undergraduate students a broader and less discipline-focused foundation for their future lives.
This issue seems particularly pressing today, as the humanities have gone from facing an existential crisis to literally fighting for their existence. If smaller departments (such as those that were just axed at SUNY Albany) continue to justify their academic purpose chiefly in terms of the number of majors, then they will perennially fear (and often face) the chopping block. Admittedly, such a change would also require a shift in perspective on the part of the administrative powers-that-be. But if humanists made a stronger case that the chief purpose of a liberal education is not disciplinary specialization, but broad historical and cultural literacy, then universities simply could not make do without Greek epics, French classical theater, German philosophy, or Russian novels (to name but a few).
What would a curriculum reconfigured along these lines look like? One option would be for humanities departments to join forces to offer genuinely interdisciplinary core courses on major topics of interest. An art historian could team up with a literature professor and religious studies scholar to teach a course on the Renaissance; a historian, political theorist, and Spanish professor could offer a course on the discovery of the New World; or a philosopher, psychologist, and musicologist could lead a course on Modernism. These courses, which would need to be vetted by appropriate faculty committees, would stem from faculty interest, and could vary over time.
This curricular structure presents a number of advantages over the existing one. First, because these courses would be team-taught rather than placed under the auspices of a single department, they would not have a narrow disciplinary focus, but would open up key events or questions to a variety of approaches. (This is currently the structure adopted at Stanford for the Fall Introduction to the Humanities courses.) At the same time, professors could underscore the methodological differences between their disciplines, thereby providing students with a roadmap of how knowledge is divided among the various academic departments (and where to look for classes in the future).
Second, because these courses would cover broad topics, they would collectively constitute an overarching panorama of the humanities. This would be a disjointed panorama, to be sure, yet that might be a virtue, since it would avoid the problems associated with establishing a grand récit. If this panorama resembles an exploded version of an ideal, inaccessible core curriculum (“These fragments I have shored against my ruins”?), the resemblance is ultimately misleading. Since the various pieces of this series would constantly be changing, it is not a substitute for a “Great Books” curriculum, in an age that has turned against such courses, but rather the product of a different pedagogical philosophy. Rather than valuing certain specific texts more than others, this philosophy places value on breadth of knowledge, and on the ability to synthesize very different forms and genres of information, from plays and paintings to maps and graphs.
The truly thorny issue that every curricular reform faces is that of requirements. If we build a new program, will anyone come if they’re not obliged to? One option would be to require students to take, say, two or three such courses at some point during their studies. This arrangement grants students a degree of choice and a good deal of scheduling flexibility. Other incentives could be found to encourage students to take more than the bare minimum of courses: completion of additional courses could lead to some sort of certification, or could form part of an honors program.
Since a central objective of a liberal education is to ensure breadth of knowledge, it follows, to my mind at least, that a significant humanities requirement is needed. In cases where this is impossible for pragmatic or philosophical reasons, I would argue that it is still important to provide students with a curricular structure that would allow them to achieve the goals of a liberal education on their own. This is particularly true for non-humanities majors, who often do not venture into humanities classrooms, not necessarily due to a lack of interest, but because of the highly specialized focus of most courses. They also may simply not know where to look: our courses are not listed in a central place, but buried behind individual department nomenclatures. Our academic divisions may make sense for research purposes, but are often at odds with our pedagogical goals. The MLA Report to the Teagle Foundation identified four “constitutional elements” that it considered key to liberal education – “a coherent program of study, collaborative teamwork among faculty members, interdepartmental cooperative teaching, and the adoption of outcome measurements” – yet the first three of these four elements cannot be achieved at the departmental level alone. To fulfill the promise of liberal education, we must ensure that students can build “coherent programs of study” that cut across disciplines.
Finally, perhaps we should have more confidence in the wares we’re vending. Wide-ranging courses that combine powerful texts, vivid iconic material, controversial ideas, and dramatic historical episodes with insightful analysis should not fail to exhilarate students. Of course, good professors, catchy titles, and intriguing perspectives are also needed to invigorate the study of our disciplines; a dry “introduction to X” approach will never be sufficient to meet the goals of a liberal education. But there is also a real thirst for this kind of knowledge, and not only among students in the humanities. Who knows? Maybe if we build it, they will come.

[Cross-posted at Inside Higher Ed]
In a recent New York Review article on Byron, Harold Bloom makes the following passing remark: “In the two centuries since Byron died in Greece [...] only Shakespeare has been translated and read more, first on the Continent and then worldwide.” Bloom does not cite any statistics, and one cannot help but wonder: Really? More than Homer and Dante, or, among the moderns, more than Sartre and Thomas Mann? Of course, what Bloom really means is that Byron was translated and read more than any other English writer, and he may well be correct on that count. Yet this omission is telling, as it highlights an unfortunate tendency (recently diagnosed by David Damrosch) among certain English professors to equate literature in general with literature written in English. This disciplinary bias, less prejudice than habit, can distort their scholarship – the authors that they admire tend to be far more catholic in their reading. But this pattern also raises a larger academic question: Why do we still partition the literary canon according to nationalist traditions? Is this really the most intellectually satisfying and authentic approach to literary studies?
For an example of how disciplinary blinders can affect scholars as well-read as Bloom, we need only turn back to his article, where we find Byron described as “the eternal archetype of the celebrity, the Napoleon of the realms of rhyme... the still unique celebrity of the modern world.” What such hyperbole masks is the fact that the model for such literary celebrity is in reality to be located in another author, who unfortunately did not have the good sense to be born in England. Indeed, anyone familiar with the inordinate fame of Jean-Jacques Rousseau knows that he was the first genuine literary celebrity, lionized and sought out across Europe, much to his growing despair and paranoia (as a brilliant study by the historian Antoine Lilti details). Byron himself was smitten by Rousseau, touring Lac Léman with his friend Shelley to visit the sites from Julie, ou la nouvelle Héloïse. Rousseau may not have provided his public with the same devilish scandals as the naughty Lord, but his Confessions, with their admission of a fondness for spankings and exhibitionism, were sultry enough.
Bloom is certainly no provincial, and his own published version of The Western Canon includes German, Spanish, French, and Italian works – although this canon, too, is heavily tilted toward English authors. But can this be avoided? No doubt French scholars would produce a version of the canon equally tilted toward the French, just as scholars from other nations would privilege their own authors. To an extent, this literary patriotism is normal and understandable: every culture values its heritage, and will expend more energy and resources promoting it.
From the viewpoint of literary history, however, such patriotism is also intellectually wrongheaded. To be sure, writers are often marked most strongly by their compatriots: one must read Dante to understand Boccaccio, Corneille to understand Racine, or, as Bloom would have us believe, Whitman to understand T. S. Eliot. But such a vertical reading of literature (which Bloom himself mapped out in The Anxiety of Influence) overlooks the equally – sometimes far more – important horizontal ties that connect authors across national borders. T. S. Eliot may have been “hopelessly evasive about Whitman while endlessly revising him in [his] own major poems,” yet by Eliot’s own admission, the French school of symbolist poetry had a far greater impact on his work. Some of Eliot’s first published poems, in fact, were written in French. Conversely, the French novelist Claude Simon may have endlessly revised Proust, but his own major novels – such as La route des Flandres and L’herbe – owe far more to William Faulkner. Such examples could be multiplied ad infinitum: they are, in fact, the stuff that literary history is made of.
To this criticism, English professors have a ready-made answer: Go study comparative literature! But they have only half a point. Comp lit programs are designed to give students a great deal of flexibility: their degrees may impose quotas for the number of courses taken in foreign language departments, but rarely, if ever, do comp lit programs build curricular requirements around literary history. Yet that is precisely the point: students wishing to study English Romanticism ought to have more than Wikipedia-level knowledge of German Idealist philosophy and Romantic poetry; students interested in the 18th-century English novel should be familiar with the Spanish picaresque tradition; and so on and so forth. Comp lit alone cannot break down the walls of literary protectionism.
The fact that we even have comp lit departments reveals our ingrained belief that “comparing” literary works or traditions is merely optional. Despite Bloom’s own defense of a “Western canon,” such a thing no longer exists for most academics. This is not because the feminists, post-colonialists, or post-modernists managed to deconstruct it, but rather because our institutions for literary studies have gerrymandered the canon, department by department. Is it not shocking that students can major in English at many colleges without ever having read a single book written in a foreign language? Even in translation? (Consider, by contrast, that history majors, even those who wish to study only the American Revolution, are routinely required to take courses on Asian, African, and/or European history – in many different time periods, to boot.) Given that English is the natural home for literary-minded students who are not proficient in another language, it is depressing that they can graduate from college with the implicit assumption that literature is the prerogative of the English-speaking peoples, a habeas corpus of the arts.
But wait a minute: how dare I criticize English curricula for not including foreign works, when the major granted by my own department, French, is not exactly brimming with German, Russian, or Arabic texts, either? To the extent that French (or any other foreign language) is a literature major, this point is well taken. But there are differences, too. First, it is far more likely that our students will have read and studied English literature at some point in high school and college. They will thus already have had some exposure, at least, to another national canon. Second, and more importantly, a French, Spanish, or Chinese major is more than a literature major: it is to no small degree a foreign language major, meaning that students must master an entire other set of linguistic skills. Finally, language departments are increasingly headed toward area studies. German departments routinely offer classes on Marx, Nietzsche, and Freud, none of whom are technically literary authors. Foreign language departments are sometimes the only places in a university where once-important scholarly traditions can still be studied: Lévi-Strauss’s Tristes tropiques probably features on reading exam lists more often in French than in anthropology departments. A model for such an interdisciplinary department already exists in Classics.
I do not wish to suggest that English professors are to blame for the Anglicization of literature in American universities: they reside, after all, in English departments, and can hardly be expected to teach courses on Russian writers. The larger problem is institutional, as well as methodological. But it bears emphasizing that this problem does not only affect undergraduates, and can lead to serious provincialism in the realm of research, as well. An English doctoral student who works on the Enlightenment once openly confessed to me that she had not read a single French text from that period. No Montesquieu, no Voltaire, no Rousseau, no Diderot, rien. Sadly, this tendency does not seem restricted to graduate students, either.
Literary scholars are not blind to this problem: a decade ago, Franco Moretti challenged his colleagues to study “world literature” rather than local, national, or comparative literatures. He also outlined the obvious difficulty: “I work on West European narrative between 1790 and 1930, and already feel like a charlatan outside of Britain or France. World literature?” While the study of world literature presents an opportunity for innovative methodologies (some of which were surveyed in a recent issue of New Literary History), students already struggling to master a single national literary history will no doubt find such global ambitions overwhelming.
What, then, is to be done? Rearranging the academic order of knowledge can be a revolutionary undertaking, in which ideals get trampled in administrative terror. And prescribing a dose of world literature may ultimately be too strong a medicine for the malady that ails literary studies, particularly at the undergraduate level. In fact, a number of smaller measures might improve matters considerably. To begin with, literature professors could make a greater effort to incorporate works from other national literatures in their courses. Where the funds are available, professors from neighboring literature departments could team-teach such hybrid reading lists. Second, language and literature majors could also require that a number of courses be taken in two or three other literature departments. A model for this arrangement already exists at Stanford, where the English department recently launched an “English Literature and Foreign Language Literature” major, which includes “a coherent program of four courses in the foreign literature, read in the original.” To fulfill this last condition, of course, colleges would have to become more serious about their foreign language requirements. Finally, literature students would be better served if colleges and universities offered a literature major, as is notably the case at Yale, UC San Diego, and UC Santa Cruz. Within this field of study, students could specialize in a particular period, genre, author, or even language, all the while taking into account the larger international or even global context.
Will such measures suffice to pull down the iron curtain dividing the literary past? Unless they manage to infiltrate the scholarly mindset of national-literature professors, probably not. Then again, as many of us know firsthand, teaching often does transform (or at least inform) our research interests. A case could of course be made for more radical measures, such as the fusion of English and foreign language departments into a single “Literature Department,” as exists at UC San Diego. But enacting this sort of bureaucratic coup carries a steep intellectual (not to mention political) price. It would be unfortunate, for instance, to inhibit foreign literature departments from developing their area-studies breadth, and from building bridges with philosophy, history, anthropology, sociology, religious studies, political science, and international relations. English departments, moreover, are developing in similar, centrifugal directions: in addition to teaching their own majors, English departments contribute more widely to the instruction of writing (including creative writing), and have their own ties with Linguistics and Communications departments. This existing segmentation of the university may appear messy, but has the benefit of preventing new walls from being erected, this time between neighboring disciplines.
[Cross-posted at Inside Higher Ed.]
Stalin’s famous question concerning the number of divisions fielded by the Vatican has often been chided for its short-sightedness. Granted, the Swiss Guards in their technicolor costumes may not amount to a clear and present danger for any totalitarian despot. But the legions of Catholics scattered across the world could constitute a formidable force indeed, as Pope John Paul II showed Stalin’s successors during the Solidarity movement in Poland.
While Stalin is mocked for his philistine and (unsurprisingly) Marxist rejection of the “superstructure” – i.e., the ideas, culture, and beliefs that, according to Marx, are determined by the industrial “base” – the assumption that something as intangible as ideas can influence or determine something as concrete as political struggle and war is still met with a strong dose of skepticism in many academic quarters. Tony Judt recently slammed John Lewis Gaddis’s history of the Cold War for ignoring the other war of ideas waged between Western intellectuals (partially funded, as we now know, by the CIA) and the fellow travelers on the Left. Further afield, in political science and the corridors of some history departments, arguments not supported by statistics are still given very little consideration – despite brilliant accounts by political scientists of how, for instance, political culture influences government efficiency across long spans of time.
Intellectual historians are not entirely blameless in creating this climate of suspicion. One need not be a hardened positivist to raise an eyebrow at the way certain intellectual historians wield causality and determinism. Norman Cohn’s classic (1957) history of millennialism, brilliant in its analysis of medieval messianism, succumbs in its conclusion to the common tendency to confuse resemblances with identities: if twentieth-century revolutionary movements look like, smell like, and feel like millennialism, they must be the same thing (good thing we didn’t step in it, as the old punchline goes). Precisely how, by whom, and in what form ideas are transmitted over centuries; how they are transformed along the way; and how they are enmeshed in the messy practice of politics, polemics, or war – these are questions too often missing from intellectual histories. It will not do to argue, as one prominent historian has of late, that “the advent of republican and democratic political ideologies” in the eighteenth century “followed directly” from the “revolutionary philosophical, scientific, and political thought systems” of the seventeenth century, without identifying every link in the chain purportedly connecting such ideas to these developments.
But intellectual history has evolved considerably since the days of Arthur Lovejoy. As the foremost practitioner of the genre in the United States, Anthony Grafton, describes in “The History of Ideas: Precept and Practice, 1950-2000 and Beyond,” a chapter of his most recent book, intellectual historians have long abandoned the Platonic world of ideas and stepped back down into the earthly cave, examining how printing techniques, political debates, legal traditions, university curricula, and philosophical controversies shape the ways in which ideas are received and disseminated. No longer do historians view ideas as astrologers viewed the stars, as exercising a powerful influence from afar; the new intellectual history studies what happens when ideas and individuals, groups, or nations collide in the linear accelerator of history. As William Sewell demonstrates repeatedly in his brilliant 2005 collection of essays, Logics of History, the study of culture as a web of meanings is not at all incompatible with the study of culture as a set of practices – the two are in fact necessary complements.
Still, skepticism toward intellectual history endures. A good friend and prize-winning historian wrote to me recently to challenge the assumption that “words and concepts have a deep power that does and should constrain the historical user,” whereas in reality, he argued, “it may well be – usually is – the case that far simpler factors than definitions or intellectual or ideological coherence are what are at stake.” His was not an anti-intellectualist stance, but rather the criticism of a political historian who has grown skeptical, from experience, of definitions that are simply too neat: “As we well know,” he continued, “those neat definitions can and do change, not only from one era and country and individual to the next, but in one individual, from one setting or moment to the next.”
Excessive schematization is no doubt another Charybdis around which the intellectual historian must navigate. And it is imperative to recognize that, just as no two vanilla ice creams taste identical, the same idea comes in different flavors: the fact that historical actors speak the same language does not always imply that they mean the same thing.
At the same time, once we abandon the Marxist assumption that material conditions determine all events, and carve out a space for actions based on ideas or beliefs, we must take an additional step and acknowledge that ideas and beliefs do not live a solitary life, but exist in relation to one another. Even in the absence of ideology, there is a logic to ideas. This logic can be manipulated and twisted in many directions; yet often one finds that players in a historical drama share a basic understanding (if not an identical definition) of the important terms in use, and more particularly, of how they relate to one another. When Robespierre informed the National Convention that terror was the necessary supplement to virtue, its members knew what he meant, even if they did not agree with this equation.
It is this shifting yet fairly stable edifice of relations between concepts that forms the primary object of inquiry for intellectual historians. How this structure can affect the course of events, but also how it can itself be transformed by circumstances, are some of their central questions. Ideally, this kind of intellectual history will someday be assimilated by all historians, just as the history of ideas has learned to incorporate the material, emotional, and practical matters of history.
In today’s academic environment, however, where faculty positions in intellectual history are shrinking faster than polar ice caps, such wishes seem fantastical. History departments face understandable pressures to “globalize” and expand the geographic coverage offered by their faculty. Given the clout of American history on U.S. campuses, this usually means replacing a retiring professor of European or intellectual history with a junior faculty member specializing in Southeast Asian, Latin American, or Middle Eastern history. The irony here is that the modern histories of Southeast Asia, Latin America, and the Middle East are intrinsically tied up in the grand ideological narratives of communism, fascism, and nationalism, narratives that were initially developed, for better and usually for worse, in the West. Intellectual history, in other words, provides a crucial point of entry into, say, the death camps of the Khmer Rouge, the emergence of military juntas across South America, or the pan-Arab movement of Nasser. Yet there is a non-negligible risk that in the near future some history departments will no longer offer courses in modern European intellectual history.
Literature departments, meanwhile, are emerging from their Theory hangovers to embrace intellectual history along national lines (the history of structuralism in French departments, of the Frankfurt School in German departments, etc.). Now is the time for history departments to reflect on whether they truly want to outsource this pillar of the historiographical tradition to a field that has different (and sometimes opposed) methodological principles. If History is any guide, this could be an unfortunate turn of events.
There is a sense in many humanities departments across the country that Theory is dead. The news was not announced by a Nietzschean madman, nor did it spread from one university to the next in a panic wave. The death of Theory, if confirmed, was rather an inglorious one, a back-alley affair involving tenure letters, firings at university presses, and a certain fatigue. More than anything else, the floodgates holding back the frustration of many scholars finally broke. The director of an interdisciplinary program at Stanford recently stood up and said, “I think we should teach the canon: no one else does.” Stunned, everyone in attendance agreed. A silent revolution had occurred.
But this description is only half accurate. In some schools, they’re still partying like it’s 1999. Graduate students flock to Theory courses, where the syllabus consists solely of works by Frenchmen from the ‘60s and ‘70s. Professors speak of a shifting world in which meanings are unstable and the center cannot hold. Where other scholars seek to resurrect values, they’re busy deconstructing them. Like forgotten soldiers, they’re still fighting the war after the armistice has been signed.
Given that Theory did not suffer a Waterloo or even a Thermidor, however, it is fair to ask whether an armistice really was signed, and whether it is not somewhat Whiggish to assume that Theory has gone the way of past intellectual fads. The fact that these two incompatible perspectives on the state of Theory – it’s totally passé, good riddance, vs. have you read the late Derrida? – can nonetheless coexist does not bode well for the future of the field. Indeed, it is beginning to look increasingly as though Theory will not go gentle into that good night, but rather that the humanities will become engaged in the kind of methodological trench warfare that already defines anthropology and philosophy. It is not hard to imagine a future in which there would be “continental”-style, Theory-friendly departments at some schools, and “analytic”-style, anti-Theory departments at others. (In some respects, this is already the case.) Since professors who rode the Theory wave in the ‘80s are now at the peak of their careers, everything suggests that they will in turn hire junior faculty who share their theoretical outlook.
Regardless of whether one wishes to praise or bury Theory, this prospect can only be deplored. As sociologists and spectators of American politics know too well, when positions become polarized, they tend to grow increasingly so. Already both sides of the aisle hurl epithets like “positivist” (from the Theory benches) or “relativist” (from the sick-of-Theory opposition) that either aggravate or harden opposing views. With the rise of the digital humanities, and the inevitable use of quantitative analysis that comes with large data corpora, there is a distinct possibility that, faced with a “relativist” challenge, the technologists will indeed embrace a kind of neo-positivism.
It seems appropriate, therefore, to reflect on what can be done to avoid a drawn-out civil war in the humanities. It would be Pollyannaish to seek a simple compromise (Theory is allowed on Tuesdays and Thursdays?). There are deep problems with the way Theory has been taught and caught up in academic politics. There is no going back to the ‘80s. At the same time, those who have tired of Theory may still remember how exhilarating it was to first discover those Éditions de Minuit books with abstruse titles. To be sure, therein lay not just cultural but social capital as well: much could be gained by peppering one’s sentences with “as Deleuze says in Mille plateaux...” Still, at a time when enrollment in humanities departments is dropping, and students increasingly treat college as professional school, it is worth recognizing the genuine intellectual thrill that studying Theory can trigger. This means taking a frank look at how and why Theory failed to live up to its promise, and how it could be recast.
Only a handful of born-again Theorists will refuse to acknowledge that the fundamental problem with Theory lay not so much with the original works themselves (which is not to say they do not have their own problems) as with the ways in which they were used. Derrida can be annoying and mystifying, yet he is usually a far more stimulating read than most Derrideans. In other words, the fundamental problem with Theory is that it stopped being theory. Derrida, or Lacan, or Deleuze, were not invoked to question, but to answer. The result was that the research always ended up validating the Theory, in an eternal, feedback-loop return. Theory always won.
One of the reasons that Theory failed to function as theory provides a hint at how it could be reformed. The great theorists of the ‘60s and ‘70s produced works of dizzying interdisciplinarity. They could play fast and loose with their references, but at least they were looking over the fence. Once these other disciplines (chiefly linguistics, semiotics, psychoanalysis, and Marxist political theory) entered into the Theory discourse, however, the old disciplinary walls went back up. Theorists in the ‘90s were still reading Saussure, much to the surprise of their colleagues in linguistics. If there are still Marxists in the university, they are not to be found in economics or political science departments. While there is no reason that humanities scholars must march in lockstep with the vanguard of other disciplines, intellectual integrity demands that we consider the challenges and debates in those fields. Healthy interdisciplinarity requires regular checkups. Otherwise, humanities departments truly run the risk of becoming, in John Searle’s phrase, the place where bad ideas go to die.
If Theory is to survive, it must fall off its pedestal and lose its capital T. Foucault, Deleuze, and others will always remain a source of intellectual thrills, and should not be packed off to some new Enfer. But they, like every other theorist, should be read against the grain; only in this manner can they sharpen, rather than blunt, the mind. At the same time, the doors of theory must be opened wider: it is a curious parallel that at the very moment humanities professors were exploding the literary canon, they were cementing a most exclusive canon of Theory. Must Althusser or Agamben have the last word on political thought? There is an entire discipline of political theory waiting to be tapped and queried (as Josiah Ober points out in his piece in our inaugural issue). Symbolic thought is a fascinating topic, but wouldn’t it be worth considering, say, Charles Sanders Peirce’s semiotics instead of just Saussure’s?
If we are to have a true theoretical reformation, however, it must cut deeper than methodology. Over the last few decades, Theory has left a deep ideological imprint in the minds, perhaps even subconsciouses, of many scholars. Why else do we end up telling the same story over and over again? A story of resistance to power, in which the oppressed are once again endowed with agency, and struggle to overthrow the selfish political and economic structures of class and race. In this story, the State is always the villain, and power something to be avoided at all costs. This is a good story, a noble story; it was a story that needed to be told, if only to challenge the Whiggish narratives that came before. But those narratives have largely been dispelled, at least in academia (and frankly, we are our only audience now). There is no point preaching to the choir: we are all reconstructed. Moreover, this story has reached the point where it has become more of a grand récit, a myth, one might say, of residual Marxism. As a myth, it carries great authority and even transfers onto our own scholarly endeavors a certain revolutionary grandeur (although as Terry Eagleton once pointed out, there is a big difference between writing Marxist literary criticism and engaging in Marxist politics). And as good and noble as this story is, it is not always the true story. Marx and Engels may have wanted to claim Balzac for their cause, but that does not change the fact that Balzac was a Catholic royalist. It is time we told some new stories.