On the spectrum, p.p.

Graphics by Michelle Jia; image via Flickr.

The words that the non-disabled use to talk about the disabled, or just the non-neurotypical,1 have not typically been known for nuance or tact. Even as physicians and psychologists have coined new clinical terms, ones that don’t carry the historical baggage of a word like “retarded,” children’s cruelty has kept pace: I remember a form of teasing in elementary school that involved tricking one’s victim into saying the letters “I. M. E. D.” —E.D. standing for some disability, we didn’t then know which, that would’ve caused a student to be placed in special classes or pulled out for therapy sessions. (I looked it up just now, for the first time in my life, and discovered that it’s currently used to mean “emotional disturbance,” but can’t be sure that the abbreviation had the same sense twenty years ago; if it did, a quick glance at the diagnosis reveals that this taunt was a particularly insensitive one, playing upon the social anxiety and interpersonal difficulties that children with emotional disturbances already experience.) Clinicians and advocates for the developmentally disabled must often attempt to recuperate or replace hurtful (or simply misleading) terms, searching for a vocabulary that reflects the rich and unique cognitive worlds of these individuals.

One strategy for adding complexity to traditional diagnostic categories is the “spectrum.” Clinically valuable for its ability to capture the many ways in which a particular disorder may “present,” the spectrum concept also feels a bit more humane: whereas labeling a particular individual “autistic” suggests that he belongs to an entirely different category of person, placing him on the “autism spectrum” implies a neurodevelopmental space shared by both neurotypical and autistic people, one where an autistic person may in some respects resemble an NT person more than he does other people classified as autistic. Referring to the “autism spectrum” also helps dispel the myth of autism as singular and predictable, instead preparing NT people to meet a range of different individuals who, for different reasons and in different ways, can be identified as autistic.

This essay is not about the way we talk about autism in neurological, psychiatric, or activist contexts; it’s about the way we talk about autism colloquially and casually. But I begin with this preamble, partly because terms like “neurotypical” may be new to some readers, and partly because, when I’m teasing out the connotations of “on the spectrum,” I don’t want to give the impression that what we mean by this demotic phrase is what autism is. When the phrase “on the spectrum” comes up in casual conversation, it doesn’t work the same way it does when autistic people or psychologists use it—but neither is it merely mocking or straightforwardly hateful along the lines of many other terms for mental illness or disability. This affective distinction strikes me as a clue, a hint that autism is serving some function other than clinical in the culture at large.

In some ways, the popularization of the phrase “on the spectrum” simply reflects the genuinely increasing integration of non-NT people into everyday life in America. It’s something you say about your brother-in-law, a coworker, your neighbor’s daughter—people whose behavioral habits you know casually but not intimately; and it’s in most contexts a way of making sense of and assimilating their difference rather than rejecting it outright. Sometimes the tenor of this assimilation is lightly dismissive, naming behavior that’s harmless though annoying—what previous generations might have labeled “touched in the head.” At other times, though, it involves a certain wary respect, providing an explanation for the quasi-magical capacities that popular culture still associates with autism’s deficits: a gift for calculation, an ability to focus, a precise and retentive memory. All of this, though a little sloppy and shallow, is in some sense exactly what the “spectrum” designation was meant to do: take the traits associated with autism and Asperger’s and bring them into the range of explicable and familiar, if not entirely ordinary, experience.

Sometimes very familiar indeed: “on the spectrum” is a beloved term of self-diagnosis, as a recent New York magazine article noted with hip disdain (“Is Everyone on the Autism Spectrum?”—we’re so over it!). For those who have never received a formal diagnosis, and who quite possibly wouldn’t, “on the spectrum” typically serves to index a discomfort in social situations and a need for routine and regularity: to hate talking on the phone or regularly find oneself at a loss for words or eat the same meal every day can constitute reason enough to locate oneself on the spectrum. A clinician, of course, might diagnose these behaviors differently (for instance, as symptoms of social anxiety disorder) or not at all, but the colloquial “on the spectrum” serves a purpose that is not strictly psychiatric but social: it’s a gesture of camaraderie, when applied to oneself, or of welcome, when applied to others. The result is just short of a paradox: a syndrome that is popularly understood to entail a lack of interest in social life and an inability to perceive the needs and interests of others becomes, in the right context, a gesture of community and belonging.2

The question then becomes: why has “the spectrum” come to assume this role? Why are autism and Asperger’s acceptable self-identifications among neurotypical folks who would be much less willing to declare themselves bipolar or dyslexic or, indeed, “emotionally disturbed”? In many respects, the concept of “the spectrum” behaves less like these disorders than like the less scientifically grounded categories of personality psychology—“introvert” or “extrovert,” “left-brained” or “right-brained,” and the entire combinatorial catalog of the Myers-Briggs scale. Relocated to this company, the success of “the spectrum” becomes much less surprising: is there anything white middle-class Americans love more than labeling their own cognitive and emotional styles?

For this—let’s not be coy—is the “right context” I mentioned above: those who diagnose themselves as autistic are overwhelmingly white, relatively affluent, and male. (As are, for that matter, the famous intellectuals and artists who’ve been retroactively placed on the spectrum: the aforementioned New York article lists “Thomas Jefferson, Orson Welles, Charles Darwin, Albert Einstein, Isaac Newton, Andy Warhol, and Wolfgang Amadeus Mozart”—a diverse group in all respects but two.) In part, this reflects a disparity on the level of actual clinical practice: autistic children of color are underdiagnosed, diagnosed later, and have less access to treatment, as numerous studies have shown. There’s probably a self-reinforcing schema at work here:3 because Leo Kanner and other early autism researchers tended for various reasons (outlined in depth by Silberman in Neurotribes) to associate the disorder with middle- and upper-class white male children, clinicians diagnose autism less often in children of color and in girls, which in turn helps to reinforce a cultural image of the autistic child as a white boy—probably the child of a Silicon Valley programmer or a successful financial analyst.

This last element of the autism schema offers one angle on why the self-diagnosis seems so inviting: the popular understanding of “the spectrum” bundles together several traits associated with privileged social positions. Most obviously, “the spectrum” trades on surprisingly trite, empirically unsubstantiated stereotypes about male and female behavior: men are supposed to be less interested than women in socializing and conversing, more interested in tools and objects, better at spatial and mathematical reasoning—all features that come to the fore in the simplified version of autism that circulates in popular culture. Insofar as these behaviors and dispositions are believed to be characteristically male, they’re also culturally valued—and so a subject position that seems to grant special access to them, like a self-diagnosed autism spectrum disorder, offers a measure of social cachet.

Less immediately clear, though, may be the whiteness of “the spectrum”; whereas many laypeople and a few psychologists have no compunction about asserting the supposedly male features of autism—Simon Baron-Cohen has infamously referred to autism as a case of “extreme male brain”—any racial association is likely to be less explicit, more socially taboo, and for good reason. (To be clear, there’s no evidence that autism actually varies in prevalence among different racial or ethnic groups.) But autism in the popular imagination does, I think, overlap substantially with a particular feature of European-American whiteness: the bias toward “independent selves” that Hazel Markus and Shinobu Kitayama identified in their classic article, “Culture and the Self.” Markus and Kitayama argue that cultural models of selfhood fall into two major categories: the interdependent self, which relies on the social and emotional support of others to survive, and the independent self, which is imagined as autonomous, unique, and atomistic. Most aspects of European-American culture encourage an independent self-image: parenting books that recommend giving children choices, memoirs that chronicle an individual’s success against all odds, classrooms and workplaces that emphasize inherent talent over teamwork, political structures that reinforce the privacy of personal beliefs and values. But, of course, the message of independence is inflected by the intersectional categories of race, gender, and socioeconomic status: “The prototypical American view of the self,” Markus and Kitayama acknowledge, “… may prove to be most characteristic of White, middle-class men with a Western European ethnic background.” Members of this demographic have the most license to be independent, to behave as though they don’t need or even, necessarily, acknowledge others; and insofar as the popular understanding of autism entails just such obliviousness, it reliably evokes middle-class whiteness.

If all this is true, it puts “on the spectrum” in a curious light: a pseudo-clinical diagnosis that acknowledges the strangeness and strain of the independent model of selfhood—the distortion behind the disregard for interpersonal complexity that is supposedly a white middle-class man’s prerogative—even as it naturalizes that model as an inborn pathology rather than a learned set of behaviors. This means that, as a self-diagnosis, “on the spectrum” isn’t merely gloating or strategic; there’s a hint of melancholy to it as well. Something is missing from the default worldview of the white male American, something to do with other minds and social awareness—but that something is imagined to have always been gone, to be a fixed condition that one must simply live with. The absence is even, most ironically of all, an identity: being socially “unmarked,” when described as a set of character traits and dispositions, turns out to look anomalous, non-normative, worthy of clinical analysis.

I bring this up not exactly to arraign the independent model of the self, which appeals to me (a white middle-class American) on many levels; nor to accuse all those who place themselves “on the spectrum” of harboring white supremacist tendencies; nor, conversely, to suggest that whiteness is some kind of pitiable pathology. I bring it up, first, to suggest that we tread more carefully with our recuperative claims, since in celebrating “the spectrum” we may end up celebrating only those aspects of the spectrum that we as a society already value—those aspects that overlap with privileged identities. Second, and most urgently, I bring it up to remind us that diagnoses can be a kind of capital that, like other forms of capital, will concentrate in the hands of white men unless we’re vigilant about redistributing them. When the behavior that reads as autistic in a white boy would constitute rudeness, insubordination, antisociality in an African-American girl—then it’s time to turn a critical eye on the spectrum.

  • 1. I’ll be using the term “neurotypical” (or its abbreviation, NT) throughout this essay; the term originated within the autistic community as a way of acknowledging the differences between those with and without developmental disabilities in a non-pejorative way, and it has since become relatively common among both activists for individuals with neurological conditions and academics who study those conditions. When I am referring to a specific developmental or cognitive disability, I will mention it by name; if I am referring to the general situation of individuals who have such disabilities, I may use the term non-neurotypical or non-NT, with the understanding that this umbrella term includes a truly vast variety of experiences. For more on terminology, especially with respect to autism and “the spectrum,” see these helpful guidelines from the National Autistic Society of the U.K.; for a history of the spectrum concept in autism research, see Steve Silberman’s recent Neurotribes.
  • 2. Again, autism spectrum disorders don’t actually entail any such thing; autistic people can have friendships, romantic relationships, and familial attachments with NT and non-NT individuals alike; but a robotic blindness to others’ emotions and intentions is still a common feature of pop-cultural representations of “the spectrum.”
  • 3. The “schema” concept has a long lineage in social and cognitive psychology, but I’m borrowing it most proximately from Paula Moya’s new book The Social Imperative.
Hannah Walser is a Ph.D. candidate in the Department of English at Stanford. Her dissertation, titled "Mind-Reading in the Dark: Social Cognition in Nineteenth-Century American Literature," argues for the significance of non-intentionalistic representations of the mind in the selection and solidification of the American canon. Her research and teaching reflect a general interest in literary models of social cognition, from Proust to pragmatist philosophy to the contemporary neuronovel.