NaNoGenMo: Dada 2.0

there is no answer to this order of reasoning, except to advise a little wider perception, and extension of the too narrow horizon of habitual ideas. (or there is an answer to this order of reasoning.)—An algorithm

Earlier this year, the Wall Street Journal (WSJ) published an article with an interactive sidebar featuring excerpts from financial investment research reports. Readers were prompted to identify whether the excerpts were written by robots or humans. Admittedly, Wall Street’s preference for terse prose over poetic flourish makes such a challenge plausible. One sampled sentence, “Q2 cash balance expectation of $830m implies ~$80m of cash burn in Q2 after a $140m reduction in cash balance in Q1,” is effectively just three data points fused together by syntax. And that’s no coincidence. White-collar workers like journalists need not fear for their job security (at least not yet…) because new natural language generation (NLG) algorithms are very good at representing structured data sets in prose, but not yet very good at much else. That capability is itself powerful, as our ability to draw insights from data often depends on how they are presented (e.g., a chart reveals insights one would have missed in rows and columns). But it is a far cry from the creative courage required to build a world on a blank page.
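The WSJ piece doesn’t disclose how these systems work, but data-to-text generation of this kind is often template-driven. A minimal sketch, with hypothetical cash figures chosen to be consistent with the sampled sentence (the real pipelines are surely more elaborate):

```python
# Hypothetical quarterly cash figures, in $m; a real system would
# pull these from structured filings or analyst models.
figures = {"q1_start": 1050, "q1_end": 910, "q2_expected": 830}

def cash_commentary(f):
    """Fuse three data points into one sentence with a fixed template."""
    q1_reduction = f["q1_start"] - f["q1_end"]    # 140
    q2_burn = f["q1_end"] - f["q2_expected"]      # 80
    return (f"Q2 cash balance expectation of ${f['q2_expected']}m "
            f"implies ~${q2_burn}m of cash burn in Q2 after a "
            f"${q1_reduction}m reduction in cash balance in Q1")

print(cash_commentary(figures))
```

The syntax is fixed; only the numbers move. That is why such prose is easy to automate and hard to mistake for a blank-page invention.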

NLG algorithms are generally considered a form of “artificial” or “machine intelligence” because they do things—like write news articles about sports or the weather, or write real estate ads, as in the prototype my Fast Forward Labs colleagues built—that we believe humans alone can do. (I hope to explore the implications of the historical, relativist concept of artificial intelligence, espoused by people like Nancy Fulda, in a separate post.) As illustrated in the WSJ article, most people then evaluate NLG performance the way André Bazin evaluated style in realism: as the art of realism lies in the seeming absence of artifice, so too does the art of algorithms lie in the seeming absence of automation. Commercialization only strengthens this push towards verisimilitude, as investment banks and news agencies like Forbes won’t pay top dollar for software that generates strange prose. In turn, we come to judge machine intelligence by its humanness, orienting development efforts towards writing prose we would have written ourselves.

But what if machines generated text with different stylistic goals? Or rather, what if we evaluated machine intelligence not by its humanness but by its alienness, by its ability to generate something beyond what we could have created—or would have thought to create—without the assistance of an algorithm? What if automated prose could rupture our automatized perceptions, as Shklovsky described poetry in Art as Device, and offer a new vehicle for our own creativity?

It is this search to use automation as a vehicle for defamiliarization that makes National Novel Generation Month (NaNoGenMo) so exciting. Darius Kazemi, an internet artist who runs an annual Bot Summit, created NaNoGenMo “on a whim” in November 2013. Thoughtful about literary form, Kazemi was amused by the fact that National Novel Writing Month (NaNoWriMo) set only two criteria for participants: submissions must be written in 30 days (the month of November) and must comprise at least 50,000 words. The absence of formal constraints invited experimentation: why write a novel when you can write an algorithm that writes a novel? He tweeted his idea, and a new community formed on GitHub (a web-based software development collaboration tool).

While open to anyone and, as in NaNoWriMo, governed by the single constraint that submissions contain at least 50,000 words, NaNoGenMo is gradually defining itself as a cohesive artistic movement that uses algorithms to experiment with literary form. The group’s identity is partly forged by ressentiment towards critics who dismiss their “disjointed, robotic scripts” as “unlikely to trouble Booker judges.” Last year, one participant mocked how “futile it is to try to explain what we’re actually doing here, to the normals.” More positively, they are shaping identity through shared formal and critical resources. John Ohno (alias enkiv2) posted code to generate sestinas, haikus, and synonyms. Allison Parrish (alias aparrish) shared an interface to the Carnegie Mellon Pronouncing Dictionary that enables users to do things like scrape the dictionary for rhymes for a given word. Finally, Isaac Karth (alias ikarth) explained to members how the group’s tendency to assemble new poetry from prior texts has intellectual roots in Dadaism, Burroughs’s cut-up techniques, and the constraint-oriented works of Oulipo. When I spoke with Kazemi about the project, he said that Ken Goldsmith’s Uncreative Writing had inspired his thinking on how NaNoGenMo can challenge customary notions of authorship and creativity.
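Parrish later released a Python interface to the CMU dictionary as the pronouncing package; assuming that reflects the interface she shared with the group, the rhyme-scraping she describes looks like this:

```python
# pip install pronouncing  (Allison Parrish's CMU dictionary interface)
import pronouncing

# Every dictionary word that rhymes with a given word
print(pronouncing.rhymes("device")[:10])

# Phonemes and syllable counts, useful for haiku- or sestina-style
# metrical constraints
phones = pronouncing.phones_for_word("defamiliarization")
if phones:
    print(phones[0], pronouncing.syllable_count(phones[0]))
```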

Technical constraints explain why NaNoGenMo has come to align itself with poetics of recontextualization and reassembly. Indeed, genuine NLG algorithms, that is, those that can build words and syntax from the building blocks of letters and improve over time, are still nascent. Most of the 2014 submissions instead use rules to transform existing texts in creative ways, which also leads to topical similarities.

At least two 2014 submissions use dreams as a locus to explore the odd beauty of machine intelligence. Thricedotted’s The Seeker relates the autobiography of a machine trying to “learn about human behavior by reading WikiHow.” The work is visually beautiful, with each iteration of the algorithm’s operations punctuated by pages that raindrop abstractions and house aphorisms like “imagine not one thing could be undirected.” Like the hopscotch overtones in Cortázar’s Rayuela, the aphorisms encourage the reader to perceive meaningful patterns in what might otherwise be random data (Thricedotted’s internet identity often mentions apophenia). Time and again, the algorithm repeats a “work, scan, imagine” loop, scraping WikiHow, searching plain text memories for a concept encountered during “work,” and building a dream sequence—or “univision”—from concepts it doesn’t recognize. These univisions contain the most surprising poetry in the work, where beauty arises from the reader’s ineluctable tendency to feel meaning in fragments:

univision: change

(required evolutions suddenly concentrating
favourable structures. a chemical behind
conclusions. determining the opinion in the event.
looking while happening. reciting the literature
on the water. the position, existing. the amount
around the resource. the task in the example. the
selection near attempts. undergoing the layer and
observing the object. the timeliness around the
availability. Beginning memories…)
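The “work, scan, imagine” loop is easier to see in outline. A minimal sketch, assuming nothing about Thricedotted’s actual implementation (the function names and the “unrecognized concept” test below are invented for illustration):

```python
import random

memories = []   # plain-text fragments retained during "work"
lexicon = set() # concepts the machine already recognizes

def work(article_text):
    """Scrape one WikiHow article into memory (hypothetical stand-in)."""
    memories.extend(article_text.split(". "))

def scan():
    """Search the memories for concepts encountered during work."""
    return [w for m in memories for w in m.lower().split()]

def imagine(concepts):
    """Build a 'univision' from concepts the machine doesn't recognize."""
    unknown = [c for c in concepts if c not in lexicon]
    random.shuffle(unknown)
    return "univision: " + ". ".join(unknown[:12])

# One iteration of the work/scan/imagine loop
work("Imagine not one thing could be undirected. Recite the literature on the water")
lexicon.update({"not", "one", "could", "be", "the", "on"})
print(imagine(scan()))
```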

Allison Parrish’s I Waded in Clear Water uses sentiment analysis algorithms, which rank sentences based upon features that indicate emotional texture, to transform Gustavus Hindman Miller’s Ten Thousand Dreams Interpreted. Parrish mobilizes the formulaic “action” and “denotation” structure of Miller’s text (action = “To see an oak full of acorns”; denotation = “denotes increase and promotion”). She first transforms the actions into first-person, simple-past sentences (“I saw an oak full of acorns”) and then reorders them from the worst to the best thing that can happen in dreams, according to a score given by a sentiment analysis algorithm run on each denotation. The sentiment scores yield short chapters (“I drove into muddy water. I saw others weeding.”) and longer chapters with paratactic strings of disjointed actions: “…I descended a ladder. I saw any one lame. I saw my lover taking laudanum through disappointment. I heard mocking laughter. I kept a ledge. I had lice on my body. I saw. I lost it. I felt melancholy over any event. I saw others melancholy. I sent a message….” According to the sentiment algorithm, wading in clear water is our best dream.
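The transform-then-sort core of the piece can be sketched. The tiny irregular-verb table and NLTK’s VADER sentiment analyzer below are stand-ins for illustration, not Parrish’s actual tooling:

```python
# pip install nltk; then nltk.download('vader_lexicon') once
from nltk.sentiment.vader import SentimentIntensityAnalyzer

# Sample action/denotation pairs in Miller's formulaic structure
entries = [
    ("To see an oak full of acorns", "denotes increase and promotion."),
    ("To wade in clear water", "denotes that you will partake of evanescent joys."),
    ("To drive into muddy water", "denotes unhappy changes."),
]

PAST = {"see": "saw", "wade": "waded", "drive": "drove"}  # toy verb table

def first_person_past(action):
    """'To see an oak...' -> 'I saw an oak...' (naive morphology)."""
    verb, _, rest = action.removeprefix("To ").partition(" ")
    return f"I {PAST.get(verb, verb + 'd')} {rest}."

sia = SentimentIntensityAnalyzer()
# Sort by the denotation's compound sentiment score, worst dream first
ranked = sorted(entries, key=lambda e: sia.polarity_scores(e[1])["compound"])
for action, _ in ranked:
    print(first_person_past(action))
```

Real English morphology would of course need more than a three-verb table; the point is that the sentiment score, not the author, dictates the book’s order.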

Other submissions recontextualize tweets. Moniker, a design studio based in Amsterdam, wrote a simple query that scans Twitter for sentences in the form “it’s + hour + : + minute + am/pm + and +” to compose a realtime global diary of daily activities. The “it’s [time] and I” template tends to elicit predictable confessions or complaints, showing how stock expressions automate our thoughts: “It’s 12:20 and I need a drink;” “It’s 1:00 pm and I have not moved from my bed;” “It’s 11:00 pm and I’ve finally got a decent cup of coffee.” Twide and Twejudice replaces most of the dialogue in Austen’s Pride and Prejudice with words used in similar contexts on Twitter, resulting in frivolous exchanges: (Mr Bennet asking Mrs Bennet about Mr Bingley:) “Is he/she overrun 0r single?” (Mrs Bennet exclaiming about Mr Bingley’s arrival:) “What _a fineee thingi 4my rageaholics girls!” While it lacks the sophistication of The Seeker, by polluting Austen with Twitter diction, Twide and Twejudice illustrates how contemporary media have modified communication norms.
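Moniker’s actual query isn’t reproduced here; a minimal sketch of the described template as a regular expression (the pattern itself is an assumption):

```python
import re

# "it's + hour + : + minute + am/pm + and + ..." as described above;
# the am/pm part is optional since some of the sampled tweets omit it.
DIARY = re.compile(
    r"\bit[\u2019']s\s+(1[0-2]|0?[1-9]):([0-5]\d)\s*([ap]\.?m\.?)?\s+and\s+(.+)",
    re.IGNORECASE,
)

for tweet in [
    "It's 12:20 and I need a drink",
    "it's 1:00 pm and I have not moved from my bed",
    "lunch at noon and nothing else",  # no match: wrong template
]:
    m = DIARY.search(tweet)
    if m:
        print(m.group(0))
```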

Which brings us back to the assumptions that ground our judgments of generated texts. Evaluating these works by their capacity to read like human prose is a stale exercise because what qualifies as “natural” language is relative, not absolute. Our own linguistic habits are developed through interaction with others, be they members of a given social class, colleagues at work or school, or spambots littering our Twitter feeds. In a recent Medium post, Katie Rose Pipkin eloquently described how machines have already modified what we think of as natural language, whether we're cognizant of it or not. We speak differently to search tools and virtual assistants because we have come to develop a tacit understanding of how they work and can modify our requests to communicate effectively.

The latest developments in machine learning are enabling machines to develop models of us in turn, continually updating what information they present, and how they present it, to match the input we provide. Kazemi is addressing this new give-and-take between man and machine head-on in his 2015 NaNoGenMo submission, “co-authoring” a novel with an algorithm: for every ten sentences the algorithm drafts, he commits only the one he, as human, likes best. “Who wrote the book?” he asks. “[The algorithm] wrote literally every word, but [I] dictated nearly the entire form of the novel.” This is the same dynamic that new research tools built on IBM Watson present to lawyers and doctors: ROSS, a legal tool built on the Watson API, proposes answers to research questions, and the lawyer commits the answer she likes best. If NaNoGenMo helps us think more deeply about that dynamic, it can offer important insights into the future of AI.
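Whatever model drafts Kazemi’s candidate sentences, the co-authoring protocol itself reduces to a small selection loop. A sketch with a deliberately dumb placeholder generator (hypothetical, not his code):

```python
import random

def draft_sentence(novel_so_far):
    """Placeholder for whatever model drafts candidate sentences."""
    stock = ["The machine dreamed.", "Nothing was undirected.",
             "She waded in clear water.", "The ledger stayed open."]
    return random.choice(stock)

novel = []
while len(" ".join(novel).split()) < 50:  # 50,000 in the real constraint
    candidates = [draft_sentence(novel) for _ in range(10)]
    for i, c in enumerate(candidates):
        print(i, c)
    pick = int(input("Commit which sentence? "))  # the human's only move
    novel.append(candidates[pick])

print(" ".join(novel))
```

The algorithm supplies every word; the human supplies nothing but judgment, which is exactly the division of labor Kazemi describes.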
