Dystopia is a Jacuzzi You Never Want to Leave

sacco header

 
Here’s my pitch for a dystopian novel. It takes place in Wealthy Powerful Nation (WPN), a country that is secretly spying on its citizens. In fact, those citizens live in a state of near-total surveillance and don’t realize it. Or, at least, won’t admit it to themselves.

You see, that’s the weird thing about Wealthy Powerful Nation: it’s a dystopia that doesn’t look like one, because it has a number of mechanisms in place that help hide how dystopic it really is. The citizens know on some level that the world is a terrible place, but they’re also living through a time of abundant good-to-great art and entertainment available at little-to-no cost. The people live in perpetual debt to make it seem like they have a stable, middle-class life. The country supposedly has freedom of speech, but corporations own most of the venues for that speech. Freedom of assembly is guaranteed, but the government can track its citizens’ locations at all times, can turn on the cameras on their electronic devices and record them without their knowing it, and can use very powerful computers to sift through the patterns of their actions to determine what they’ll do next. There’s very little oversight of the Government of WPN, and this system of surveillance has completely co-opted industry and banking. Meanwhile, WPN is able to kill pretty much anyone in the world any time it wants, using an army of flying robots.

One man, let’s call him Ed, works for the surveillance state, but he has doubts. He believes that total surveillance impacts freedom, and so he steals a vast archive of information about the system of domestic espionage that WPN employs. He flies halfway around the world to reveal this information to a team of journalists and then skips town.

When the information is finally revealed, the world responds by mocking Ed on Twitter for weeks for some stupid things he says about Vladimir Putin. Gradually, opinion polls come to agree with Ed, but nothing of any consequence changes.

Not a great story, is it? Not likely to be turned into a four-part movie franchise starring Chris Hemsworth. There are a couple of reasons why it’s a lousy story. The first is that, well, there’s not a lot of hope in it, and if there’s one thing that sets apart modern-day dystopian narratives from their spiritual grandfather 1984, it’s the presence of hope. Hope that the State can be defeated, hope in the future, hope in progress, and, perhaps most important of all, hope in your fellow humans.

Looking at the United States today, it’s hard to see a lot of reasons for hope, largely because there’s been so little change, despite our current President campaigning on both of those words, hope and change. For you see, unlike the characters in most dystopias, we are not exactly victims. We have chosen our leadership, whose prosecution of a global war on terror remains largely popular, except for when it can be demonstrated to harm us directly.

Lucky for us, we outsource our harm as much as possible. The people we kill live half a world away, destroyed by flying robots piloted by children in a dark room nowhere near their quarry. Our all-volunteer army pulls so heavily from specific demographic groups that many of us can go about our lives without seeing any consequence of our war if we don’t want to.

And we don’t want to, do we? Looking the demon jackal that we have summoned with our war on terror dead in the eyes would be unbearable, paralyzing. It certainly was for me when the torture report breached my own personal walls of denial, even though I already knew what was in it. So we ignore the demon jackal even while feeding it ever more of our humanity, willfully joining the only conspiracy that really matters, the one of ignorance and complicity.

These are desperate, hopeless times. Desperate, hopeless times call for desperate, hopeless art forms. Perhaps this is why Joe Sacco, who has made a name for himself as a comic book journalist specializing in war reportage, has turned to satire, that most desperate and hopeless of art forms, in Bumf #1, his response to America in the age of perpetual war.

Satire has lost a lot of its luster now that it’s regularly used by racists to excuse impolitic things they’ve said on Twitter, but satire has performed a unique and important function since the ancient Greeks. No other genre can get as close to unspeakable truths, because satire rides there on the wings of excessive bad taste—seriously, you have no idea how cleaned up most translations of Lysistrata are—exaggeration, humor and irony.

Enough preamble. Joe Sacco’s Bumf #1, his first fictional work in what feels like forever, is the most necessary comic of 2014. A nightmare that pulls from his roots in underground comix and the work of contemporaries like Michael DeForge and Jim Woodring, Bumf #1 is grappling with American hegemony in a way that serves as a stark reminder of the freedom and possibility that comics allows.

It’s also, to put it mildly, unsubtle. The first two-page spread in the book features Bumf’s narrator telling us that after the Garden of Eden, “There’s been a serious fuck-up,” while surrounded by prostitutes, a woman smacking her child, a man having his brains blown out, a homeless man sitting in front of a garbage can with a human leg sticking out of it, a man hanging himself and the twin towers being hit by planes. Then we’re off to World War II to firebomb some Jerries and WWI to stroll naked through the trenches while millions of young men die, before seguing into a present day White House where President Barack Obama (drawn as Richard Nixon) attends a meeting in a situation room like something out of Hieronymus Bosch:
 

sacco situation room

 
Fiction, then, isn’t Bumf’s only departure from Sacco’s previous major works (the brilliant Footnotes in Gaza, Palestine and Safe Area Gorazde). In leaving the world of comics journalism, he’s also left behind realism entirely. Bumf #1 is a nightmare peopled by a set of symbolic characters pulled from the collective unconscious. First, there’s our narrator, a scummy, chain-smoking, foul-mouthed, bestubbled human face on the body of Tweety Bird. Then there’s Colonel Singo-Jingo, fat, British, and monocled, standing up for the old-fashioned values that the 37 million deaths in World War I couldn’t shake. There’s our eventual protagonist, Nixon/Obama. General Custer makes a cameo appearance. Finally, there’s Joe Sacco himself, hired to be the official propagandist of American Empire, composing a story that’s “boy meets girl meets the State.”

These various threads cohere as the United States opens a new “black site” in the form of a portal to a planet in the Andromeda Galaxy, where neither the rules of physics, the Ten Commandments, nor the Geneva Conventions apply:
 

sacco andromeda dimension

 
Once in Andromeda, everyone dons hoods, water-boards a few detainees, gets their kicks through drone warfare, and falls in some form of love. The nightmare becomes inescapable, the “black site” an infinite hellscape filled with demon jackals, unsuitable for human life. It is what the novelist and critic Charles Baxter has called a “wonderland,” a place where the characters’ fugitive subjectivity has been made manifest in the world surrounding them. Or, as Baxter describes the wonderlands of H.P. Lovecraft, the environments become “inhospitable interiors, either simple or elaborate, [that] feel like private prisons disturbed by lunatic geometry. Their spaces present vistas of grief-stricken vastness, combined with a steadfast inanimate hostility to any human endeavor. They cannot be a home to anybody. Any effort at domesticity within them would be laughable. No one would want to be there.”

Yet, Sacco points out again and again, we do want to be there. He’s first recruited to join the war effort on the edges of a giant Jacuzzi. Gazing upon it, he remarks, “Wow. The press room sure has changed since I was last here. … this Jacuzzi of yours is serious business.”

“It’s not my Jacuzzi,” the chain-smoking Tweety Bird replies. “Think of it as the people’s Jacuzzi. Getting in?”

Complicity, in other words, is part of what Sacco’s after here. Bumf #1 essays our collective loss of humanity through the prosecution of an endless war against a series of ever-changing Kaisers stretching back to WWI. In one panel, Sacco recreates the infamous “Saigon Execution” photo, adding a WWI-era German helmet to Nguyen Van Lem’s head. “Killing the enemy is never enough,” Colonel Singo-Jingo intones, “We’d been killing them for years.” (His solution to this problem, by the way, is rape, which, in Bumf’s one act of tasteful restraint, is never quite shown.)
 

sacco kaiser photo

 
This complicity is vast and all-encompassing in Bumf. Religion, art, the legal system, love, all are powerless to resist the temptation of power and obedience. What sets Bumf #1 apart from other dystopian nightmares is that the characters all want to be there. Whereas other dystopian narratives often revolve around either an epiphanic moment (Brazil) or an already existing discontent that finally finds a venue (The Hunger Games), in Bumf #1, the various characters discover an acceptance of their dehumanized, alien world. A torture victim falls in love with her torturer after he enrolls her in a sensible mobile plan. Sacco comes to enjoy the power and prestige of being the official State Graphic Novelist. Nixon/Obama realizes he’s the Messiah. Gradually, torturer and tortured alike don hoods and lose their clothes. By the final few panels, they are an anonymous collective mass of victor and victim, Sacco’s glasses the only distinguishing feature amidst the hairy bellies and sagging breasts.

There’s no hope for us at any point in Bumf #1, which is part of why its humor is so savage, and, while it often adopts the structure of the short gag comic, the jokes are likely to stick in your throat. There’s no escape from the bed we’ve made. All that remains is to lie in it. Getting in?

Here Come The Planes

(NOTE: This was first published a few years ago in the now-defunct web journal “The Fiddleback.” Noah was kind enough to let me repost it here.)
 

 
At first, it’s just Laurie Anderson’s voice, looped on an Eventide sampler. A pulmonic egressive ha repeats, calling out from 1981, exhaling middle-C. The ha continues through the duration of the song. Seven hundred one times at a pace of eighty-four beats per minute. For those of you keeping score at home, that’s a little bit over eight minutes. And yet this song, with its curious title, “O Superman,” and cold, borderline-cheesy lyrics and seemingly endless repetitions was, briefly, a monster hit. Up to #2 in the UK. One very bad recording of it on YouTube boasts that “this is what we all used to dance to back in the day.”

There are few other sonic elements introduced over the song’s eight-minute-plus length. There’s a handful of synthesized string lines and some organ. The faint sound of birds once or twice. And there’s Anderson’s vocals, a robot chorus announcing a new age.

Anderson’s work as a performance artist and musician relies heavily on distortions of her alto voice. She pitch-shifts it down two octaves, becoming a male “Voice of Authority,” or adds reverb and delay effects to punch home emotional beats. In both United States I-IV—the four-hour stage show where “O Superman” debuted—and Big Science—the commercial album it appears on—Anderson sings through the vocoder, a kind of synthesizer that attaches your voice to notes you play through an instrument. You’ve heard it used by Peter Frampton on his biggest hits, or by Afrika Bambaataa. The British also used it during World War II to send coded messages, breaking the sound up into multiple channels for spies to assemble later.

 

The opening lyrics of “O Superman” (“O Superman, O Judge, O Mom and Dad”) are a play on the aria “Ô Souverain, ô juge, ô père” from Massenet’s Le Cid. In the opera, these words are uttered as a prayer of resignation, the hero putting his fate in God’s hands. In the Laurie Anderson song, the three O’s change meaning. First, she prays to Superman (Truth! Justice! The American Way!) but by the end she longs for Mom and Dad, and gives this longing voice in a series of vocoded ah’s: “Ah Ha Ha-Ah Ah-Ah-Ha.”

Despite Anderson dedicating the song to Le Cid’s composer, the two biggest influences it draws from are The Normal’s “Warm Leatherette” and the Philip Glass/Robert Wilson opera Einstein on the Beach. Indeed, “O Superman” can in some ways be read as a marriage of the two, of the uniting of brows both high and low, the repetition of minimalist opera lensed through the repetition of the dance floor.

Anderson is open about the debt she owes Einstein on the Beach. In interviews from the time, she cites the show as opening up possibilities for what a stage show could be, a process that led to United States I-IV. The repetitive “ha” of “O Superman” references the sung counting during Einstein’s opening, and the keyboard lines not only sound Glassian, but the actual specific organ tone is one fans of the Philip Glass Ensemble will recognize.

The relationship between “Warm Leatherette” and “O Superman” is more tenuous. Certainly “O Superman” would not have garnered Anderson chart success and a seven-album deal with Warner Brothers without The Normal’s game-changing single from three years prior. Several sawtooth waves—chords, a siren gliss and a thwap-thwap rhythm—make up the entirety of the music of “Warm Leatherette,” while Daniel Miller, The Normal’s sole member, delivers ominous couplets about sex and car crashes. The song is essentially a musical setting of the J.G. Ballard novel Crash, in which a car crash awakens the novel’s narrator to the sexual possibilities inherent in the automobile and its destruction.

It’s through “Warm Leatherette” that “O Superman” accesses J.G. Ballard’s apocalyptic vision of techno emptiness and Cold War nuclear anxiety. “Warm Leatherette” echoes Crash’s alienated space in which everything becomes simultaneously mechanized and eroticized. “O Superman,” meanwhile, creates a space of mechanization and alienation that also contains our human responses to this alienation: paranoia, loneliness, and a kind of heartbroken yearning. No character in a Ballard novel would ever beg to be held by Mommy, as Anderson does by the end of the song.

 

Coming as they do out of a theatrical tradition, Anderson’s songs, even at their most abstract, tell stories. “O Superman” is no different. Here, more or less, is its story:

You sit in your apartment in New York City at night. You are alone. Perhaps this apartment is on Canal Street, nearby the Holland Tunnel. It is 1981.

You sit in your apartment in a chair rescued off the street. The day you found it, you felt grateful that no one needed this chair anymore. This is the economy of New York furniture. People lug their unused belongings to the curb: The televisions and air conditioners with yellow paper taped to them, the word WORKS written in sharpie; the chairs that look fine, but might contain bedbugs; the couches that get waterlogged while you try to round up friends to lug them up the four flights of stairs to your apartment.

Concrete Island lies open on your lap; off to your right, on a stack of milk crates, rests a glass of cheap wine. Your violin leans against a nearby bookshelf, desiring your fingers and the bow.

Your phone rings. You decide that you will let the answering machine get it. People own analog answering machines, with real tapes that run and run and run out in the middle of their friends’ loquacious messages.

You hear your own voice first. “Hi. I’m not home right now. But if you want to leave a message, just start talking at the sound of the tone.”

A beep. And then. “Hello? This is your mother. Are you there? Are you coming home?” You hear need in her voice, along with a drop of reproach. Perhaps she didn’t approve of your moving to a hellhole like TriBeCa to be an artist. You do not pick up the phone. You do not tell her when you are coming home.

Another beep and then a voice you do not recognize. A man’s voice. “Hello? Is anyone home?” You do not answer it; you are not in the habit of speaking to strange men on the phone in the middle of the night. Instead of hanging up, however, he speaks more. “Well you don’t know me. But I know you. And I have a message to give to you.” Uh oh. Is this a crank caller? A stalker?

He speaks again. “Here come the planes. So you better get ready. Ready to go. You can come as you are. But pay as you go.”

You’ve had it with this man’s warnings and rhymes. You pick up the phone and say into it, “Okay, but who is this really?”

When the voice replies, what he says is terrifying. “This is the hand, the hand that takes.” He repeats it. He won’t stop saying it. You imagine just a mouth, the rest of the face shrouded in shadow, rendered in grayscale, like in an old movie.

And then he says: “Here come the planes. They’re American planes. Made in America. Smoking, or non-smoking?” He babbles on about the post office, about love, justice and force. And mom.

You hang up the phone. Confronted with this warning, with this mysterious stranger, the hand that takes, perhaps America itself, what can you do? You think about the first message. Your mother. She called you. She wants you to come home.

Sitting in your apartment, stranded in the night in New York, which, despite the population density, can feel like an island bereft of human company, you want your mother.

So hold me mom, you think to yourself, in your long arms.

You are so shaken from the phone call that the vision of your mother holding you gradually changes, becoming perverse and terrifying, but as it does so, you find yourself even more comforted.

 

*          *          *

Here come the planes. They’re American planes. Made in America.

In 1981—the year “O Superman” was commercially released—Ronald Reagan broke the air-traffic controllers’ strike and expanded the US military by the equivalent of $419,397,226.33 (adjusted for inflation).

In 2010, we’ve lost great amounts of our manufacturing sector, but one area remains triumphantly intact. We still make machines of war here in America. Boeing and Lockheed Martin are still based in the United States, the former in Chicago, the latter right outside Washington, D.C. Their plants also remain in this country, in places like Wichita, Kansas; Troy, Alabama; and Columbine, Colorado. The Lockheed Martin F-35 Lightning II—of which the United States intends to buy 2,443 for a price tag of over three hundred billion dollars—performed its first test run in Fort Worth, Texas.

In 2001, Mohammed Atta flew a Boeing 767—manufactured in Everett, Washington—into the North Tower of the World Trade Center.

 

*          *          *

If you haven’t guessed by now, I might as well come clean: I was obsessed with Laurie Anderson in college. I tracked down out-of-print monographs of her work. I attempted to sneak her into just about every paper I wrote. Laurie Anderson thus joins a long line of serial obsessions on my part. She sits right between Eddie Izzard and Charles Mee if you’re ordering it chronologically.

I only knew one other person who loved Laurie Anderson. He discovered her via a twenty-six CD series titled New Wave Hits Of The Eighties that he bought off of late night television when he was in high school.

Despite all of this, four months after graduating from college, on the actual day when the American Planes Made In America finally showed up, I did not think about “O Superman.” On the actual day, U2’s borderline easy-listening track “Beautiful Day” took up unshakeable residence in my skull. It’s been said often enough to become a cliché, but the eleventh of September, 2001 really was gorgeous. The sky blue and cloudless, the temperature perfect for a walk from my then-girlfriend’s office on 56th and the West Side Highway to deep into the East Teens.

The blue sky loomed ominous, the way nights dark and stormy foreshadow murder in a potboiler. If we couldn’t trust the weather to tell us how to feel, or what would happen next, what could we trust? As we walked, desperate to put our backs to Times Square or any famous piece of Manhattan real estate, occasional planes flew overhead. When this happened, our faces blanched and our clutch on each other’s hands tightened as we ducked into the shadows of a skyscraper to watch the planes streak the blue dome above us.

And in my head, Bono wailed all the while. “It’s a beautiful day, don’t let it” Go away? Go to waste?

I discovered that I did not actually know the words to the song. As we stopped at a McDonald’s for food, bought water off a street vendor, and entered the East side, I became fixated on figuring them out, worrying the words like a loose tooth. Solving this annoyance seemed more important—or at least more manageable—than the attack itself, the questions about my DC-dwelling parents’ safety or where our nation was headed.

 

Unlike most Americans, I did not see what had happened to my city until many hours after the second tower fell. By then, our epic walk concluded, we sat on our friend Alison’s couch and watched the BBC. Again and again the plane flew into tower two, again and again the orange flower bloomed, again and again the towers collapsed and we jump-cut to a POV shot of someone running from a wall of dust.

One of us said what became a constant refrain. It looks just like a movie. And indeed it did.

During the weeks to follow, we heard this idea everywhere. Just like a Bruckheimer film or I thought they were showing a disaster movie, until I realized it was on all the channels, or Just like Independence Day.

What we did not ask then is why. Why, at the height of our powers, had we imagined our own destruction so often that we had a ready-made database of images to compare this moment to?

Instead, we clicked our tongues in disapproval. This showed, we believed, the shallowness and alienation of our psyches. Now the time had come to end irony once and for all. We chose this interpretation instead of acknowledging how in tune with our deepest fears mass entertainment really was.

Through the nineties, when everything seemed so good that a blow job consumed media attention for years, it turned out that we both knew and feared that the clock would run out on our exceptional good fortune. The multiplex transformed into the only place to explore these premonitions of what was to come. The movies responded by doing what they do best. They thrilled us again and again, so we didn’t have to feel bad or, really, think much, about any of this.

We did not ask these kinds of questions in the aftermath because we did not have the leisure or distance or time to ask them. Instead, we asked other questions. Questions like, Who did this? And, Whose ass do we get to kick now? And—in certain circles of the left—Is it right that we kick their ass?

The first two questions we immediately answered with a nebulous body known as “the Arabs,” later refined to “Al Qaeda.” First thing we should’ve done, someone said to me at Thanksgiving dinner that year, is turn the Middle East into a parking lot. Even on that day, when we had no idea who had done this or why, we knew it must be “the Arabs.”

On her couch—which she invited us to stay on for as long as we wanted—Alison launched into a monologue about the Israeli-Palestinian conflict. She did not know that my girlfriend hailed from a Muslim country and I, although a Jew, am not a Zionist. The story culminated with her running into a Hasid on the street two hours after the second tower fell. “Hey man,” she recalled herself saying, “I’m with you and Israel all the way.”

She wanted to hug him, she said, but knowing the prohibitions against touching women, did not.

In that moment my mind wandered back to my girlfriend copping to a desire to bump up against Hasidic men on the subway and then claim to be menstruating. I did not mention this. Sitting next to me, my girlfriend was silent. After crying until her pale skin turned a shade of red I did not think occurred in nature, she stared at the television, unblinking.

 

*          *          *

 

“O Superman” contains three moments of wordplay. The first comes right after Anderson mentions the planes, when she then asks, “Smoking or non-smoking?” Since we are not riding in these planes but are instead being warned about them from a mysterious voice, the phrase takes on a double meaning, becoming about corpses.

The second is when Anderson recites the Postal Creed: “Neither snow nor rain/Nor dead of night/Will stop these couriers/From the swift completion/Of their intended rounds”

Because if (once again) we are talking of airplanes, and talking of the death they bring, then the couriers become something different, and the package they are delivering is one you certainly don’t want. Nothing will stop them. A motto of American can-do becomes a motto of uncheckable military aggression.

Or, listening to the song today, dread of the unstoppable terrorist other.

Over the 2010 holidays, as the privacy (and genitals) of white people unaccustomed to legally sanctioned harassment were violated, America seemed to wake up to the absurdity of trying to stop terrorists with the pat-down and the extra-large Ziploc bag. If they can’t get on a plane, they’ll blow up a subway car. They are nothing if not tenacious. Neither scanner, nor banning of liquids, nor cavity search will stop them from the swift destruction of their intended targets.

The third moment of wordplay comes at the end as Anderson calls out to her mother. Like the question about smoking, the yearning for mommy’s arms gives way to a pun, of all things. “Arms,” of course, contains two meanings, one of which is enshrined in our Second Amendment.

And so, with a mournful, churchlike basso organ sound, Anderson’s mother turns from human into weapon. Her arms progress from “long” to “automatic” to “electronic” to “petrochemical” to “military” and back to “electronic.”

When Anderson sings “petrochemical,” if you turn the volume up very high and listen very closely, you will hear birds chirping.

In college, the merging of mom and machine struck me as a silly bit of early-80s “we are the robots” kitsch. I realize now how wrong I was. I know now the comfort to be found in waging war. I know that hurting others can feel like a familial embrace.

Did our desire for this comfort—the comfort of anger, the comfort of righteousness, the comfort of inflicting, rather than receiving pain—lead us so swiftly to retribution?

And what of other kinds of comfort?

I am an atheist. On 9/11, the only working phone I could find was in a Christian bookstore. I made two phone calls while the employees praise-Jesused behind me. The first was to my girlfriend to tell her I would walk to wherever she was; the second was to my mother. My father works for Congress, and was in who-knows-what federal office building that morning.

In the week that followed, everyone I knew in New York wanted to be held in some way, to be comforted. A friend called me to tell me she had been to church that morning. I laughed into the phone, finding it—and her—absurd. Like me, she was both a Jew and an atheist. What possible business did she have in a church?

“I was walking down the street and I saw this woman outside a church, and, she just, she just looked at me, and I knew that that was where I needed to be.”

 

*          *          *

From the fall of 2001 until the spring of 2010, I didn’t listen to “O Superman” or really anything by Laurie Anderson. It wasn’t until I left New York to drive across the country with my now-wife that I played it again. We were all gone to look for America. We were all sorts of clichés. We didn’t care.

As the curving asphalt ribbon of the Pacific Coast Highway unspooled before us, I click-wheeled over to the song. The instant I pressed play it started to rain and we sat in silence and listened to it. And it wasn’t until I got lost in the ha that I realized how long it had been since I had heard it. How did that happen? How did a totem that I carried with me, loved so hard, like it was a person, like it belonged to me, like I made it, how did I abandon this thing for so long?

Right before Anderson’s two-minute litany of different kinds of arms, I looked over at the driver’s seat, seeking approval. Now-wife displayed the face of a champion poker player. On the stereo, Anderson paraphrases the Tao Te Ching, singing, “‘Cause when love is gone, there’s always justice, and when justice is gone, there’s always force, and when force is gone, there’s always mom.” And then her voice breaks a little, and in a rare moment of humanity Anderson says, “Hi, mom.”

It felt like letting her read my diary from before we met. I wanted to be known better by this woman I would soon marry and move from New York with. I wanted to let her see the embarrassing parts that resist verbalization and need the true falsehoods of art. Part of me felt, in this moment, like all young men who like imposing their tastes on their loved ones—that somehow my self-worth was caught up in this moment in this purple Honda listening to this song.

Why had I stopped listening to “O Superman”? The answer seemed obvious now. After that sunny September day, her work became unbearable to me. The song contained too much of what I tried not to feel and not to recognize about the world and myself and the country in whose name horrible things were being done.

Instead, the song went into a cardboard box in a dusty attic closed off from my soul. Also in that box: a book of plays that lay, spine cracked, on my windowsill collecting mysterious black and grey and green dust from September 12th through 15th. It sits on my bookshelf now, spine facing the wall, unopened, a guardian against destruction.

 

*          *          *

At the end of the song, Anderson repeats a vocoded melodic line from the beginning: “Ah Ha Ha-Ah Ah-Ah-Ha.” This time, however, she interrupts herself with a synthesized string line that once again feels like it could come out of the Philip Glass playbook.

This string figure references the vocoder melody off of the song “From The Air,” the track right before “O Superman” on Big Science.

“From the Air” is a song about a plane crashing into New York City.

 

Via crossfade, an honest-to-god tenor saxophone replaces the synth strings. The only instrument to appear in the song without some kind of treatment on it, it makes its realness known by being slightly out of key.

And then, at the very end, everything cuts out, giving way once again to the ever-present, omnipotent “Ha,” repeating itself solo for seventeen seconds.

If you turn the volume up very high and listen very closely, you will hear sirens in the background.

 

*          *          *

 

As the song ended, we feared for our lives. The storm transformed the Pacific Coast Highway into something treacherous, slick, unknowable. The next pulloff onto more trafficked streets lay tens of miles ahead of us. Did we have enough gas to make it? Would our stomachs give out amidst the twists and turns? And—most importantly—did my now-wife like the song?

“Huh. Wow.”

“It’s kinda brilliant, right?” I asked.

“Yeah. It’s also kinda unlistenable at the same time.”

Kinda brilliant, kinda unlistenable is about as close to a judgment of the aesthetic quality of “O Superman” as I can offer.

 

An odd component of post-9/11 American life has been the failure of art to address the event itself. Many—including some of our greatest living artists—have tried.

Instead, we’ve had to turn back to before the smoking day to find art that resonates. Some claim that Radiohead’s Kid A is the best album ever made about 9/11, despite coming out years before. Immediately after the event, the pundits on television wanted so badly to believe in our President that they told us to reach back to Shakespeare’s Henry V to understand how a drunken spoiled brat could become a Good Christian King.

Why not, then, appropriate “O Superman”? Laurie Anderson herself remains unclear as to the inspiration of the song. She claimed in one interview that she wrote it in response to the Iran-Contra scandal, which broke over five years after the song’s improbable chart climb. Like JG Ballard claiming to have seen the flash over Hiroshima from Hong Kong, this memory is impossible, invented but right nonetheless.

Our claiming of these artifacts as being "about 9/11" shows that—rather than changing everything—that day recapitulated and unleashed what lurked, buried underneath us like one of Lovecraft's ancient gods. As much as we said this was the day we'd never forget, it revealed how much we'd already forgotten.

 

Free Will and Wanton Lust

 

octavia-butler

Octavia E. Butler’s Fledgling is two books in one. Like a pre-fab house with the world’s most fascinating basement, everything above ground feels thin and standard issue, but lurking beneath is a troubling look at slavery from the point of view of a sympathetic slave master, who never quite realizes what she is.

The primary narrative of Fledgling concerns a character named Shori, who awakens without her memory following an unknown violent tragedy. Shori, it turns out, is an Ina, one of a race of humanoids that developed in parallel to humanity and who are, basically, vampires. They suck blood, the sun hurts them, they live almost forever, and they have what anyone who was into Vampire: The Masquerade can tell you are thralls: humans they have bewitched through repeated biting. Except here, the thralls are called Symbionts. They provide a steady food source and other somewhat vague physiological benefits to their Ina, and in turn they get to live for around two hundred years, are immune to disease, and enjoy a whole host of other benefits.

The novel's plot revolves around Shori trying to relearn who she is and, eventually, find justice for the murder of her parents, her siblings, and her first group of human Symbionts. While excellently plotted, the actual story of Fledgling leaves much to be desired. Often, the story appears to be an excuse to do a lot of world-building about the Ina that never fully pays off, and a kind of Mary Sueism leaks into the book's protagonist. There is nothing wrong with Shori as a character beyond her memory loss. She is completely devoid of flaws, and all her struggles are external in nature. She spends nearly all the book being told by everyone around her how great she is. She is physically and intellectually superior to every other character in the book. Her only seeming fault—her temper, which arrives abruptly right before she is told she needs to learn to control it—is only a challenge because the hidebound rules of Ina decorum frown on it. The villains in the book are essentially Nazis, and there's never any question about whether justice will be done during Fledgling's courtroom-drama second half, because Ina can smell whether or not people (or fellow Ina) are lying. The allegorical aspects—Shori is black and all other Ina are white, Shori is the product of genetic mingling between humans and Ina, etc.—are transparent and heavy-handed. It's a fun page-turner, good for a lazy weekend or long flight, but not exactly up to Butler's well-deserved reputation as a trailblazing science fiction writer.

Again, ignore the house and take a trip down to its basement. Pry up the floorboards and look around a bit for the bodies buried there, and you find much more fascinating material. As Noah discussed recently, Fledgling is a book that works in part by trapping you in the narrator’s head. Shori and the reader have a kind of soul-bond. As she has lost her memory, we begin in the exact same place as she does, learn what she learns, when she learns it. We never escape her subjectivity; her experience is our experience. But as in many books with a clearly defined first person narrator, there are paths into that experience that Shori can’t see, but that we are free to roam around in and explore.

This different understanding largely revolves around the Symbionts, or as we would probably call them, slaves. The bond with Symbionts is formed through a venom the Ina infect them with. After a few bites, the venom is addictive and, if a Symbiont is ever separated from their Ina for too long, fatal. It also destroys their free will. Not only are Symbionts unable to disobey their Ina's commands, once bitten even for the first time, they feel pulled towards the Ina, wanting what the Ina wants, wanting to serve.

Thus, even though the Ina talk about the ethics of their Symbiont system with quite a bit of lofty rhetoric about consent, consent is actually impossible. Once bitten for the first time, a prospective Symbiont is going to want to be a Symbiont, because they are going to want to please the Ina who has bitten them. The only regime governing how Ina treat their Symbionts is social norms. The current norms are egalitarian: Symbionts are supposed to consent to becoming Symbionts, you aren't supposed to boss them around unless absolutely necessary—a necessity that comes up far more often than the well-meaning liberal Ina would like to admit—and talking about them like they are inferior is gauche. The villainous Silk family use their Symbionts as pawns and, we are led to believe, treat them barbarously, and there is nothing the other Ina can (or want to) do about it. The eventual trial revolves largely around the Silks' crimes against Shori's family; short of outright murder, there is nothing Ina are legally forbidden from doing with their Symbionts.

Having lost her memory, Shori is free from the socialization of having grown up the beneficiary of an oppressive social order. Shori adores her Symbionts, and feels closely tied to them, and something about this system troubles her, even if she remains unable to articulate what it is. All of that articulation is left up to her "first" (Ina must have a group of Symbionts so they don't kill any one of them by feeding from them too often, and the feeding process is overtly sexual, so the Ina-Symbiont relationship comes to resemble a shared marriage with a primary partner and several secondaries), a white man named Wright. Shori binds Wright to her before she re-learns what the Symbiont-Ina relationship entails, and he grows increasingly resentful about his role and their relationship as the novel progresses. While some of this is couched as a critique of heteronormativity—he's angriest at having to share her with another male Symbiont—you can feel Fledgling pull sympathetically towards Wright's problems with the world he has been forced into. Late in the novel, Shori casually takes up the Ina habit of replacing a Symbiont's last name with the word "sym" and the name of the Ina they are bound to, erasing the human's individuality. Wright responds:

“Sym Wayne?” Wright said, frowning. “Is that how you say it, then, when someone is a symbiont? That’s what happens to our names? We’re sym Shori?”

“You are,” I said.

“Something you remembered?”

“No. Something I learned from hearing people talk.”

The moment of a forced name-change is an important plot point in many slave narratives, from Roots to 12 Years a Slave, whose action is only resolved when Solomon Northup reclaims his name. It's vital that this moment comes late in the book, after Shori has begun to be welcomed into Ina society. As she becomes more Ina, her patience for the very human needs and dignities of her Symbionts lessens, and her complicity in their oppression becomes less noticeable to her.

Wright never breaks with Shori. In fact, his growing discontent goes nowhere. Other Ina assure Shori that Wright will "come around" one day, but there's no real evidence that this is true. He has no choice but to stay with Shori, and, while he's in love with her, it's unclear whether or not that love is real.

Fledgling is much trickier than it initially seems. While its surface story is a straightforward allegory about race and white supremacy, its b-plot takes the same victim of oppression and turns her into an oppressor. The book further scrambles our ready-made categories by situating the narrative inside the head of a black, female slave master and making a white man the voice of human dignity. It’s a fascinating and troubling look into how systems of oppression justify and perpetuate themselves, told from the perspective of someone who thinks they’re in a YA supernatural coming of age novel.

It could be that part of why Fledgling feels so unsatisfying as a novel yet so thematically rich is because it was conceived as being part of a series. There’s no evidence of this beyond the text itself, other than Butler’s penchant for serialization. But it could be that the plot feels unfinished because its primary purpose was to keep us interested while we learned a hundred pages or so of exposition about Ina customs, history, biology and religion that would be important later. It could be that Wright and Shori’s relationship—the key relationship in the book, and, at first, its apparent subject—does not resolve in this book because it was meant to in a future volume. This would help explain why Shori’s arch-enemies are left alive in the book’s conclusion as she goes to live with a new family that has not been fully developed yet, and why the book hints at growing factionalism within the Ina, pinned to the question of the species’ origin, that may break out into civil war.

Sadly, we’ll never know. Fledgling, Octavia E. Butler’s first book after a lengthy hiatus, would prove to be her last. She died suddenly, as the story goes, on book tour, promoting it. Of all the aspects of Fledgling that are richly, deliciously troubling, this may be the most. That Butler wrote a book in part about people so desperate to cheat death and loneliness that they would agree to be enslaved, right before her own life was cut so tragically short.

When Is A Job Not A Job? When It’s In The Arts, Apparently.

[IMPORTANT UPDATE AT THE END]

Here's a story for you, and it's a good one, an uplifting one in this time of constant headlines about this or that art form dying or being in yet another crisis. It's about a little theater called The Flea, a small off-off-Broadway space[1] towards the bottom of that Triangle Below Canal, a professional theater well known for experimental work. This little experimental theater nearly went out of business in the wake of 9/11, when Tribeca became a ruined, gray-dusted alien landscape. The Flea was only saved through a mixture of innovative fundraising and striking gold with a hit play called The Guys, a two-hander about a reporter and a firehouse hit hard by the WTC attacks that starred a roster of celebrities, ran for years and helped put the theater back on solid footing.

Now, thirteen years after the theater nearly went out of business, The Flea is thriving. Its resident acting company (called The Bats) numbers around 150 people and produces work constantly. A directing apprenticeship program helps mentor the next generation of directors. The theatre does a variety of programming with a kind of ambition—particularly where cast sizes are concerned—that no one else in town can match on a budget so small as to be almost unimaginable.

It's a remarkable turnaround, so remarkable that The Flea has managed to raise $18 million to purchase a nearby building and convert it into a new space. The new space will have three state-of-the-art theater spaces available for local companies to rent cheaply[2] and will allow The Flea to produce more work. And so far, the plan has received rapturous coverage in the press, helping to raise the profile of The Flea even further[3].

There’s just one little wrinkle in this story, and it’s about The Bats. You know, the resident company of 150 or so early career actors? The ones the Times calls the “beating heart” of the theater? The young, hip, diverse troupe whose work helps ensure the theater is constantly full of young, hip, diverse audiences? Well, they’re unpaid.

*

Is it a problem that The Bats aren’t paid to act? It turns out that answering that question involves answering a whole lot of other sub-questions. Questions like: is acting a job? If it is, is exposure a form of payment, a kind of service in lieu of cash, perhaps? Are there mitigating circumstances that affect any of this? Does it matter that the kind of large scale, ambitious works The Bats often do at The Flea would be impossible if they had to pay their actors? Does it matter that there is the money to build an $18 million new space but seemingly no money to pay artists?

It turns out the answers to those questions change depending on who you talk to, depending on what kind of story you want to tell. The story that tends to get told about the arts leaves out labor issues[4]. If labor—and that no-no topic, pay—is brought up at all, it's usually in the context of whether or not Broadway performers, musicians and technicians are getting paid too much, despite the fact that, as Terry Teachout discussed in the Wall Street Journal, ballooning marketing costs are largely to blame for increased ticket prices on the Great White Way. Rarely discussed in the conventional story about theater and money is that salaries are so high on Broadway because those high payments make it possible for artists to remain in a system that, except for their brief tenures in the largest theaters, will ask them to do enormous amounts of work, often for little to no money[5].

The story we tell each other about creative work, meanwhile, is that it isn't really a job, not really, and that you should be grateful for what you can get for it, even if other people are getting paid off of the work that you do. This isn't limited to theater. David Byrne recently talked about this issue and music in Salon, and Molly Crabapple wrote about it in the visual arts for Vice. Many (if not most) literary magazines don't pay. Many major websites won't pay for writing if they can get away with it. Hell, I am currently writing this essay about The Flea not paying its early career actors for a website that doesn't pay its writers. I don't always see a problem with this. Here at Hooded Utilitarian, no one, including Noah Berlatsky, who works much, much harder on it than I do, makes any money off of it. HU is a labor of love (or, for some of you, hate) where we can get together and publish things we're unlikely to place elsewhere. It's a site where professionals do some non-professional—but hopefully professional-quality—work.

There’s a term for this kind of work—professional grade labor that goes unpaid (and is thus amateur)—and that is “pro-am.” We’ve all witnessed how the internet has created an exploding pro-am writing sector. This has been positive in all sorts of ways. There is more great writing being produced every day, easily available at little to no cost for the reader. And as long as the reader’s costs are the only part of the story you’re interested in, it’s incredible.

I started working as a theatre professional as an actor in my teens. In the twenty years since, I’ve witnessed a similar explosion in the pro-am sector in the dramatic arts. Undergrad and graduate theatre programs have grown in number and size, and the number of paying jobs outside of academia hasn’t kept pace. This dynamic has both depressed wages and fueled vibrant pockets of “independent theatre” in many American cities, as artists have come together to create work for little to no money[6].

Given this reality, perhaps the right question then is… what’s the line? When does something stop being a pro-am labor of love and start being something more problematic?

In the case of The Flea, setting the boundaries of the acceptable is thorny. The Flea exists in a specific context and a specific industry. Early career actors tend to have only a few options available to them, all of them bad. They could self-produce work at great personal cost, even if they convince Uncle Shmuel and Aunt Betsy to kick in some money. They could act in others' self-produced work, which is something of a crap-shoot, exposure-wise. They could intern at a theater (likely for free), stuff envelopes all day, and, if they are very, very lucky, get someone to come to a show of theirs from, like, I don't know, marketing. They could go to graduate school (at, again, great personal cost[7]) and, chances are, end up right back where they were, only better trained and in enormous debt. Most perniciously, they could pay to take an "audition workshop" with a casting director (or just as often, a casting director's assistant) which is really just a pay-to-play audition.

It’s a raw deal, in other words, this life of an early career actor. And it will continue being so for the foreseeable future because—and this should read familiar to any writers out there—the supply of actors so overwhelms the demand for them that the dollar value of their labor has been depressed to, essentially, zero. Given this, what The Flea provides—real exposure, free rehearsal space, frequent opportunities to get up on stage and learn one’s craft through getting work up in front of an audience, a chance to produce work, connections, a real community of fellow artists, and the opportunity to learn various ancillary skills of theater without having to pay a dime—is nothing at which to scoff.

All The Flea asks is that, in exchange for getting to be on stage, The Bats work three hours a week doing tasks around the theater—more if they're currently in a show, since they're benefiting more—an exchange that, when you talk to any current Bat, seems to make perfect sense. It's hard to argue that three hours of labor in exchange for the opportunity to be in shows is onerous. Indeed, The Bats love being Bats, and don't feel particularly exploited.

Unless you view acting in plays as labor. And how is it not labor? The Flea is charging money for people to see The Bats perform[8]. The institution is building itself on their work. It's one thing to accept that early career artists must be paid in exposure. It's another thing entirely to accept that they must be paid in exposure and that they must also pay for the opportunity in sweat equity.

That sweat equity is also problematic in ways not often discussed. Three hours is not a lot of work to ask an individual Bat to do per week. But with 150 Bats, each doing at least three hours of work for free, The Flea is picking up at least 450 hours' worth of free labor per week. That's more than ten full-time employees' worth of work. While this is clearly part of what makes The Flea able to do what it does on such a shoestring—and helps explain why, despite moving to a three-performance-space complex, they're only expanding their paid staff by two—it has the unintended side effect of further depressing wages, setting an uncomfortable precedent for how a professional theater should be run[9].

These problems are only heightened by the new $18 million building. Practices that are forgivable amongst the scrappy are less so amongst the well-appointed, as Upright Citizens Brigade and Amanda Palmer have recently learned. Supporters of The Flea I’ve spoken with will tell you that paying actors and buying a new space are separate conversations, different stories. The Flea is currently spending around $17K a month in rent, and the new space will secure their future. Furthermore, it’s nearly impossible to raise money to pay artists properly and much, much easier to get donations for “brick & mortar” projects[10].

While I agree that the new building is necessary and am happy for The Flea's good fortune, and happier still that off-off-Broadway companies will have access to three nice, clean, functional spaces at a low rental cost, this walling off of the payment of labor from conversations about budgets, about donations, about the "public good" part of a nonprofit's mission is almost too clever by half. It may be true that the problems of The Flea are the problems of the industry that The Flea is in. But that doesn't mean The Flea shouldn't show leadership on issues of labor fairness.

After all, The Flea has retooled The Bats before, to the mutual benefit of both the company and the theater. The work hour requirements for The Bats used to be higher, and the jobs more menial. The Bats used to perform in fewer shows, there used to be fewer of the Bats, and, according to current and former members I spoke to, less of a sense of community. The Flea even once charged actors a fee to audition[11], something they’d never imagine doing today. The Flea also hasn’t precluded rejiggering the program again three years from now when the new building is complete.

There are a number of changes The Flea could make that would still allow them to do ambitious large-cast projects with an excited community of performers while showing leadership on labor issues. The Flea could simply begin paying The Bats when they appear in shows. It needn’t be a large amount of money; even a stipend would send the message that the theater values The Bats and takes their art seriously. Being a Bat is often likened to a kind of practical graduate school, a training-by-doing program. Part of that training could—and should—include teaching The Bats that their art is worthwhile enough to be paid for practicing it.

If The Flea does not want to do that, they could drop the work requirement. Or they could work with the actors’ union to turn The Bats into an Equity Membership Candidacy program, a true apprenticeship[12] that ends with the actors well on their way to Union membership[13].

More drastically, The Flea could drop the 1-2 professional shows from their annual calendar and cease calling themselves a professional theater altogether. This wouldn't stop them from working with professional artists from time to time, particularly where playwrights and directors are concerned. The model for how The Bats work—a tight-knit group of artists who do most of the work around the space, everything from running the concession stand to hanging the lights—is already closer to that of a community theater than to anything else. While "community theater" is a term loaded with all sorts of associations, most of them negative, it is where most Americans will go to see (or take part in) large-cast, ambitious shows that don't pay actors.

There will not be any pressure on The Flea (and other, even worse companies) to reform so long as the story we tell about art remains the same. So long as we keep telling each other that exposure is payment, that erecting a new building is the only true sign of success, and that labor issues are irrelevant, so long as we keep writing the same story, glowingly reporting the official line without digging an inch deeper, we’ll be stuck in the same place: Bigger, shinier buildings—or websites sold to AOL—with broke-ass people getting paid less and less to do the creative work that keeps them alive.

UPDATE: Since this article was posted, one of the people I interviewed for it (the one mentioned in the final footnote) e-mailed to say that she neglected to mention during our interview about The Bats and payment that The Bats receive a nominal stipend during tech rehearsals, since those are what are known as "10 out of 12s," which is to say, 12-hour rehearsal days with two one-hour meal breaks. This schedule makes it impossible for Bats to make money elsewhere, like temping or waiting tables, while in tech. The stipend was introduced last year and is variable, but under $50. This means that, when they appear in shows, the Bats are no longer working for free, which is a positive step.

That said, when The Bats are not working in shows, they are still doing 3 hours a week of uncompensated labor around the space. And I would furthermore argue that less than $50, framed entirely as a way to make up for hourly wages lost elsewhere during tech rehearsals, is still inadequate. It is far less, for example, than the daily subway fare a Union actor is paid in a showcase production. And the larger issues of how we value the people who actually create art in our culture remain. But it is a step in the right direction, and it reinforces my hope and belief that The Flea wants to find ways to do right by their ensemble.


[1] Off-off-Broadway refers not to a theater's location but to the kind of Union contract it uses when working with members of Actors' Equity Association (aka Equity or AEA). Off-off-Broadway codes are for New York City theaters under a hundred seats. Off-Broadway is the designation for theaters holding between 100 and 499 souls. Anything larger and you're in Broadway contract territory.

[2] This is no small thing. Theater space—even a 50-seat shithole—can cost thousands of dollars a week to rent, making the amount of cash young companies have to shell out to produce their work often the largest part of their budgets.

[3] This is one of the reasons why theaters embark on building campaigns. Often the first season after a new building opens brings more audience members and donors, although I once heard a fundraising consultant say that those new donors and viewers often vanish after that first year or two.

[4] It was highly controversial, for example, when Jason Zinoman made the argument in the New York Times that the Upright Citizens Brigade should start paying at least some of its performers, given that a large and very successful institution had been built off of their labor.

[5] A union actor acting in an off-off-Broadway show can make as little as daily subway fare in pay. Union actors working Off-Broadway often make under $500 a week. And that's when they're actually working on a show. Things like staged readings don't always pay. And, of course, there are the gaps between gigs when actors aren't getting paid at all.

Perhaps this is too much to get into in this space, but this is one of the many reasons why the current theatre system is set up the way it is, with larger “regional” (non-NYC) theaters hiring NYC-based actors. The theaters pay a premium for what is generally considered a more talented labor pool. Actors then make more money on the road both through higher weekly salaries and through subletting their apartments back in New York. It’s a system that screws just about everyone. Working actors pay an enormous premium to have a NYC mailing address. Local actors often won’t even get to audition for shows in their hometowns. And for audiences, to paraphrase monologist Mike Daisey, it’s something akin to going to see your hometown baseball team and finding out they’ve been replaced by a bunch of people who guested on Law & Order a couple of times.

[6] The vast majority of Portland, Oregon's theatre scene is made up of pro-am companies, for example. It's worth saying that some indie theater companies take pride in compensating their artists to the best of their abilities.

[7] Nearly all graduate schools for theatre cost roughly one vital organ per year to attend.

[8] As this audition notice (http://www.theflea.org/blog_detail.php?page_type=4&blog_id=238) makes clear, performing is a lot of work in and of itself. In case you don't feel like clicking over and reading it, this Bats production asks actors to commit to almost two months of six-days-a-week rehearsals plus two months of five-days-a-week performances, tying up their schedule from January until May. This would, amongst other things, keep them from getting paying acting work for half of the normal theatrical season.

[9] After all, can you really call yourself a professional theater if the majority of the work in your theatre is done on a non-professional basis?

[10] Most of the money for the new building is coming from the City of New York. By comparison, the National Endowment for the Arts is legally barred from giving money directly to artists to support the making of art.

[11] According to an actor who auditioned during this time and joined The Bats a year later, in the wake of 9/11 they charged prospective Bats $25 to audition, saying that they needed to cover the hole in their budget caused by the terrorist attacks.

[12] The Bats are called volunteers, not apprentices or interns. Were the program called an internship, it could be illegal, as by law interns cannot do the work traditionally done by paid employees, and more benefit must accrue to the intern than to the company they work for. These laws are on the books to prevent companies from skirting minimum wage laws, something The Bats' weekly work-hour requirement arguably does.

[13] There are almost no professional actors in the non-profit system who aren't members of Actors' Equity Association. You cannot be a member of AEA and be part of The Bats. One Bat I spoke to loved being a Bat so much (and was getting regular acting work that she cared about) that she declined joining the Union so she could stay in the group.

Playing Narrative Part 2: Survivor’s Guilt

the last of us

(Hey! As the title indicates, this is part 2 of something! Part 1 is here!)

(Warning: Spoilers. Including the end of the game.)

Somewhere around the halfway mark of Naughty Dog's The Last of Us, Joel, the hardened survivor of a plant-parasite-fungus-zombie apocalypse whom you spend most of the game controlling, finally makes it to his brother Tommy, located somewhere in the vast middle of America. Joel's there to try to hand off Ellie, a teenage girl who must be taken to the Fireflies, a subversive group located somewhere out West. It's the second time you've seen Tommy. In the game's prologue, set twenty years before the rest of the action on the day the apocalypse started, Joel, Tommy and Joel's daughter attempted to escape Austin, Texas. Now, relations between the two brothers have cooled. Or, as Joel tells Ellie, "His last words to me were… I don't ever want to see your goddamn face again."

The player never learns exactly what caused Joel and Tommy's falling out, but when Tommy—who now has a wife and helps run a small town based around a hydroelectric plant—refuses to help Joel, you get some idea. Joel tells Tommy that he's owed this, "for all those goddamn years I took care of us." Tommy replies, "Took care? That's what you call it? I got nothing but nightmares from those years."

"You survived because of me," Joel tells his brother.

“It wasn’t worth it,” Tommy says, looking at the camera, stricken and haunted.

 

What could possibly make not-dying not-worth it? Likely, it’s the stabbing, shiving, Molotov-cocktailing, strangling, shooting, archering, punching, bricking, bottling, and IEDing that the player has spent the last seven hours making Joel do to various zombies and humans. The Last of Us is a game that takes its violence and its theme of survival very seriously, and gradually asks the player to do the same. In doing so, we come to realize that Joel, the man we inhabit, may be a survivor, but he sure ain’t a hero.

After the prologue, when we jump twenty years into the future and re-meet Joel as a childless middle-aged man, he is a lowlife. He smuggles drugs, ration cards and weapons, serving up some terrible ownage on people who cross him. He's in a relationship of sexual and financial convenience with a fellow smuggler named Tess, who will go on to summarize their lives by saying "we're shitty people, Joel," and mean it. Later still, after Joel and Ellie take on a group of marauding bandits, Joel reveals to Ellie that he's "been on both sides of this thing." When a different group of bandits invades Tommy's power plant, Tommy asks Joel if he still knows how to kill, but the look on Tommy's face tells you that he's disgusted with himself for asking.

Joel, just to be clear, isn't an anti-hero. Nor is he another in a long line of video game asshole warriors. He's not a Don Draper or Tony Soprano charming psychopath. He's actually kind of a piece of shit. Not that he doesn't have his complexities, particularly in his relationship with Ellie. She sees a goodness in him, the same goodness we glimpse in the prologue, the goodness he appears to have lost. It's a goodness that, when it's just the two of them together, The Last of Us dangles in front of us as a possibility. Joel's a broken man, physically strong and spiritually bereft, a man who has turned off his soul for twenty years, and, over the course of The Last of Us, we begin to care whether he gets it back or not, just as much as we care about whether he and Ellie ever make it out West.

Much of the time, however, Joel’s like a mix between Rooster Cogburn from True Grit and Theo Faron from Children of Men, sans most of the redeeming qualities of both.  What makes The Last of Us so startling is that it knows this. And, gradually, it makes the player know it too.

 

Naughty Dog became famous over the last decade for a series of Indiana Jones-like games called Uncharted that, as cinematic acts of storytelling, are actually better than half of the Jones films and all of Jones’s latter-day imitators, like The Mummy and National Treasure. In those games, the player controls Nathan Drake, a descendant of Sir Francis Drake and an international treasure hunter who gets in over his head during a series of thrilling, funny, genuinely charming adventures having to do with lost artifacts that may hold great power. The Uncharted games hearken back to movies like The Treasure of the Sierra Madre or Romancing the Stone, the kind of big-budget, exotic-locale, rakish-hero adventure films that Hollywood used to be able to do well, while removing the problematic racial politics that often make those films unwatchable today.

There’s just one problem: These are, of course, action games. Which means that the player also spends a great deal of time killing people. Hundreds of people, it turns out. After Uncharted 3: Drake’s Deception came out, more and more people started raising a stink about this issue. It’s pretty clear that the team on The Last of Us—many of whom also worked on Uncharted—wanted to see what would happen if they started taking all the killing seriously and asked their audience to do the same.

While The Last of Us, like The Walking Dead, takes place in a world hit with a zombie apocalypse, the similarities pretty much end there. TWD’s gameplay functions through dialogue and action choices; The Last of Us has very little choice in it at all. TWD’s graphics are stylized and cel-shaded; The Last of Us uses motion capture. TWD is an adventure/puzzle game; The Last of Us is a stealth/action game.

Most importantly, TWD takes place immediately following the zombie apocalypse, as people learn how to survive. The Last of Us takes place twenty years in, and is set amongst the whittled-down population of people who’ve figured it out.

Survival is what The Last of Us is all about, on both a thematic and a gameplay level. If Naughty Dog were in search of an alternate title for the game, Survivor’s Guilt (with “guilt” here meaning both the feeling of remorse and the state of having done something wrong) would’ve been a good stand-in. As with The Walking Dead—where a series of choices serves as an essay on ethics once you realize death is inevitable—it is this interweaving of theme and mechanics that enriches The Last of Us and makes it work.

In the game you have limited weapons, and all of them have limited uses. You have to worry constantly about making too much noise and alerting nearby enemies. Killing people is difficult, noisy, and time-consuming. All of the materials you find are needed to craft multiple items. You can’t carry very much. There are also many points in the game where you can sneak by adversaries and not engage with them, leading—if you are, like me, both ethically minded and neurotic—to calculations that go something like Well, I’m low on supplies and I bet I could take these guys out and loot their corpses. Wait. Am I seriously contemplating killing six people who aren’t a threat to me for the express purpose of looting their corpses? Oh my God. I’m the worst.

In The Walking Dead, violence is very personal. Most of the time, it is being dealt by or to someone Lee Everett knows. The Last of Us, on the other hand, primarily features the kind of depersonalized violence that most video games trade in; it just makes that depersonalization part of the point. Joel—who has survived precisely because he’s selfish—can’t see the people he’s killing as human.

Not that the game is a relentless downer. Much of it is spent wandering overgrown urban landscapes and idyllic vistas, talking with Ellie and deepening the bond between the two of them. Ellie is one of the few great characters to emerge from video games. She’s funny, charming, and human, and feels in many ways like a real fourteen-year-old. Indeed, any affection the player gains for Joel is likely the end result of loving Ellie, and wanting to love what she loves. In each of the game’s acts (there are four of them, one for each season), Ellie and Joel meet and team up with other survivors, who all prove to be interesting, fully realized characters written and performed with that rarest of video game traits: subtext. The Last of Us is a game where facial expressions and tone of voice change meaning, and the few choices the game gives you along the way are entirely about character development. You can stop to explain to Ellie what a coffee shop was, or pet a giraffe. You can find comic books to give her to read. You can give a man a Dear John letter from his boyfriend.

Ultimately, however, The Last of Us’s themes cannot be escaped for long. And yet, because it is a very well designed game, it is fun to play. And because it takes what it is doing seriously, it’s disturbing and wrenching and truly, deeply haunting. The ending of the game is anti-cathartic and unsettling, and in no way resolves the central tension between depicting the urge for survival and problematizing it, suggesting that perhaps, at times, being a survivor means being a monster.

Joel, you see, is presented with the opportunity to save the world, but doing so entails Ellie’s death.  Ellie is immune to the parasite that has destroyed civilization, but creating a vaccine from her body would involve removing her brain. Joel saves her life, killing a hospital full of people, and ends any hope of humankind’s recovery. The Last of Us twice hints that Ellie would’ve accepted her death if given the opportunity to choose. But she never knows she had the choice because Joel lies to her about it. Joel, we come to understand, is as selfish as ever. Needing and loving this new surrogate daughter, after having lost his own twenty years before, he is unable to let her go for the greater good.

For those of you reading this who don’t play video games, I want you to understand that this kind of ending—one that is neither triumphant nor cathartic, but instead haunting and true to its characters—basically does not exist in mainstream video games. In fact, it’s the kind of ending that most mainstream blockbuster movies—and The Last of Us is their equivalent in terms of budget, market presence, hype, and sales—would never dare attempt.

It’s these kinds of elements—story, theme, structure, subtext, writing, performance—that are responsible for the nearly universal critical rapture that has greeted The Last of Us, and they flow directly out of the thematic integration of gameplay and story, and from questioning the purpose of all the violence the video game marketplace demands. In this way it is similar to Watchmen: by taking its subject matter seriously, it is simultaneously a masterpiece of its form (the superhero comic, the action game) and an undermining of that form’s status quo.

And that brings me to the ultimate problem with making the resolution of ludonarrative dissonance the goal and measure of quality in video games. It’s no mere coincidence that The Walking Dead and The Last of Us take place during the apocalypse. There’s a limited number of scenarios that justify the kind of violence the form regularly contains and that audiences demand from it. While we can get moralistic about this, high body counts have graced our literature since The Iliad, our theatre since The Persians, our films since Intolerance, and on and on. As someone interested in video games becoming a richer source of stories, of examining theme, subject, narrative, and character through the unique medium of a player interface, I’m less concerned with the virtues of violent games and more with how thuddingly boring and narrow their possibilities often are. As the current “gritty downer” era of superhero comics and films shows, replacing the medium’s current narrow possibilities with a different set of narrow (but critic-approved) possibilities isn’t really a solution, even if we get more games like The Walking Dead and The Last of Us along the way.

Playing Narrative, Part 1


Back in 2011, I wrote on my own blog about storytelling in video games, and whether or not they are a narrative art form, a post that led me to wonder:

[D]o video games really want to be known as a narrative art form?

I find this question far more interesting than Ebert’s question about whether they’re art or not. (Simple answer: some are, some aren’t!)

Right now, video games are in a sweet spot. Games like Heavy Rain and Mass Effect 2 can come out and gain a certain amount of cachet and sales because of their sophisticated deployment of game mechanics to complexly explore genre. At the same time, when people question the racial politics of Resident Evil 5 or look at the truly execrable pro-torture narrative of Black Ops, gamers (and game critics) can retreat behind “Hey, it’s only a game!”

Sure enough, over the last couple of years, I’ve noticed more talk about the quality of the stories games tell and the phenomenon of ludonarrative dissonance, or the disconnect between the gameplay experience and the narrative experience of a title. Most of these conversations tend to coalesce around fretting about violence. In the Uncharted games, rakish hero Nathan Drake kills something like six to eight hundred people whilst treasure hunting around the globe. The emotional resonance of Bioshock: Infinite’s clever universe-hopping maze of a plot is undermined by the constant need to mow down everyone who gets in your way. In fact, the term ludonarrative dissonance apparently originates with a blog post by Clint Hocking about the first Bioshock game, in which he writes that the selfishness of the gameplay (it’s a first-person shooter) and the anti-selfishness polemics of the plot (it’s a takedown of Objectivism) contrast to such a degree that the clash wrecks the game.

I personally find the concept of ludonarrative dissonance useful for thinking about and discussing video games, but I don’t find it to be quite the magic bullet that game critics seem to think it is. Basically, I believe that, in part due to how games have developed aesthetically, players are quite used to compartmentalizing gameplay and story, tending either to view the former as the task one must accomplish to get the latter, or to view the latter as the increasingly cumbersome speed bumps that interrupt the former.

While the violent gameplay is the least interesting part of Bioshock: Infinite, I’m not sure that most players think of themselves as killing people, in a narrative sense, any more than watchers of Looney Tunes feel Elmer Fudd’s physical pain in any serious way. Aesthetics matter, after all, and Bioshock: Infinite is a candy-colored cartoon wonderland filled with nonrealistic character portraits. Most of the human extras you encounter throughout the world are more like animatronic dolls than people. It’s also worth noting that violence is in many ways woven into the DNA of video games, much as snark and assumptions of bad faith are woven into the DNA of online discourse.

That said, ludonarrative dissonance will prove a worthwhile concept if it leads to better games and better narrative mechanics within them, and over the past year, at least, this appears to be happening. Two recent works, Telltale Games’ The Walking Dead and Naughty Dog’s The Last of Us, have done a remarkable job of integrating gameplay mechanics, story, and theme, pointing the way to a possible new maturity in the field. Yet at the same time, both are built out of sturdy video game genres. The Walking Dead is a classic puzzle-adventure game, while The Last of Us focuses on the kind of stealth-action familiar to players of Metal Gear Solid, Deus Ex, or the Tenchu franchise. They never lose their game-ness[1], yet remain satisfying, emotionally engaging, thought-provoking narrative experiences[2].

The Walking Dead even manages to upstage both its preexisting source material (the comics by Robert Kirkman) and the blockbuster TV adaptation on AMC. In it, you play Lee Everett, a recently convicted murderer (and former college professor) being transported to prison when the cop car carrying you hits a zombie. Shortly thereafter, you take on a young girl named Clementine, whose parents are in another city and whose babysitter has gone all let-me-eat-your-brains on you[3]. As you and Clementine struggle to survive, you eventually come upon other survivors and endure a series of difficult trials that bring you both across the state of Georgia.

On a gameplay level, much of The Walking Dead revolves around the normal puzzle-adventure mishegas, where you have to figure out which actions and items will get you from point A to point B in the plot. Occasionally, you also have to kill zombies or hostile humans. Neither of these functions is particularly remarkable. And at least one puzzle, which involves figuring out the right things to say to get someone to move out of your way so that you can press a button, is seriously infuriating. What makes the game work, however, is the way that character, emotion, and choice function within the narrative. Like many games today, The Walking Dead presents the player with multiple narrative choices, either by forcing you to take one of a series of mutually exclusive actions or by having you choose dialogue options in conversations.

Telltale’s stroke of genius was to insert a timer into these decisions. Normally, when you reach a major choice in a game, it will wait for you. You can think about it for a while, perhaps peruse a walkthrough online that will tell you the outcome of each option, and then make it. You can, in other words, perform a cost-benefit analysis, thinking about the choice purely in game terms. In The Walking Dead, you have a very limited time to make each decision, and as a result, the decisions become a reflection of your personality, or the personality of Lee as you’ve chosen to play him[4]. Perhaps you think Lee should tell people he’s a convicted murderer, because honesty is the best policy. Or perhaps you think you should hide it, because you’re a good guy and you don’t want people pre-judging you. Perhaps you should tell people you’re Clementine’s father. Or her babysitter. Perhaps you raid that abandoned station wagon filled with food. Or perhaps you sit back, willing to go hungry in case the car belongs to fellow survivors.

Many of your choices involve brokering disagreements between two survivors named Kenny and Lily, who are both, to put it politely, assholes. Kenny, a redneck father, will do anything for the survival of his family (including betray you) and will forget any nice thing you do for him (including saving his life) the second you disagree with him. Lily, the de facto leader of the group, is belligerent, domineering, and frequently sticks up for her racist shitbag dad. Being a good middle child, I kept opting for choices that recognized the validity of both their points of view and tried to form consensus. Due to their aforementioned assholitry, they both hated me by halfway through the game. One of them even told me I had to man up and start making decisions, or what was the point of having me along.

The decisions tend to function like this throughout the game. Unlike in most games with choice mechanics, there aren’t morally good and bad choices coded blue and red. And unlike in old-school adventure games, the choices you make in the plot won’t lead to fail states. They simply are things that you’ve done, and they ripple out through the game, shifting (in ways both subtle and not) how the story progresses, how people treat you, and what choices remain to you.

None of this exactly explains what a remarkable achievement The Walking Dead game turned out to be. So let me try some other ways: It’s the only game I’ve played that has reduced every person I know who has played it to tears at least twice. It’s the only game I’ve ever played where the characters are so clearly and humanly written that I finished one chapter of it and flew into a rage over what one of the characters did to me[5].

Part of this is because there are limits to what your choices can achieve. Due to the realities of game making and the limitations of the engine that’s running underneath TWD’s hood, the number of paths you can take in the game is finite. There are truncation points in the branching narrative to keep things under control. As a result, certain things will happen no matter what you do and certain characters will die.  There are things you cannot stop from occurring in the game, fates that, like the protagonist in a play by Euripides, are inexorable and horrible all at once.

I wouldn’t have it any other way. Robert Kirkman’s two great innovations in the zombie apocalypse genre—telling a story with no finite ending and making zombieism inevitable[6]—are what gave early issues of the comic book their thematic sizzle, turning the saga into a story about how we confront our mortality and an ongoing essay on whether death makes life more meaningful or renders it a sick joke. Sadly, after a difficult and necessary foray into the issue of survivor’s guilt, the comics are now largely about how difficult and noble it is to be the White Man in Charge who makes the tough decisions, and they often feature Rick Grimes walking around having other people tell him how awesome he is while he gets ever more self-pitying.

The video game, meanwhile, does a superior job of exploring the themes of its source material, because the choice mechanics literalize those themes. By removing fail states from the game (like most contemporary video games, it is literally impossible to lose The Walking Dead) and eschewing simple morality in designing the choices, TWD constantly forces you to think about why you are making the choices you make. As you decide whether to save the female reporter and firearms expert or the male hacker dweeb, you may find yourself suddenly thinking Oh crap, I have to choose between one of these people. And they both seem so nice. But, well, this is the apocalypse, so electronics aren’t going to be as necessary. And that reporter is a markswoman. And at some point the world is going to need to be repopulated, so I suppose I need to save as many potential sexual partners as possible. So I guess I’m going to save the reporter. [CLICK] Wait. Am I a terrible person?

It’s rare that games provoke that kind of calculus. And it’s very rare that they are constructed in a way that forces you to think about not just the decisions you make but why you make them. By the end of the game, as a mysterious stranger interrogates you about every major decision you’ve made over the last ten or so hours of gameplay, it’s hard not to notice that what you’ve just been playing is a lengthy examination not just of what it means to survive, but of yourself.

(This is part one of a two-part essay on recent advances in video game storytelling. Part two will run soon)

CORRECTION: I’ve been a little remiss in apportioning credit in the above. The idea of infectionless zombies dates back to Romero, and, of course, The Walking Dead was co-created by artist Tony Moore and, after its first few issues, has been drawn by artist Charlie Adlard. Apologies to the relevant parties.


[1] Oddly, both games have been criticized for still being too “game-like.” This strikes me as wrongheaded, akin to arguing that a graphic novel, by including panels and images, wasn’t enough like a prose book. Or that a book, by being made out of words, wasn’t enough like a television show. If we want the medium of games to improve, it shouldn’t be via their becoming very long movies.

[2] Please take the fact that I used as clunky a phrase as “narrative experiences” in this sentence as a sign of the newness of taking narrative in video games seriously and the difficulty in discussing same.

[3] You put a hammer through her head. But at least it’s justified by the world.

[4] This was even more true when the game was initially released in a serialized five-episode format. A choice you make in Episode 2 might not pay off until Episode 5, thus making a walkthrough of your choices totally useless.

[5] Or should I say Lee? This gets me to a side point that I don’t have much time to get into here: The relationship between choice mechanics and attachment to games. There is something about having a say in the way a game progresses that creates in most gamers I know a greater sense of emotional attachment to the events as they unfold. I think on some level we come to care for our characters (and the characters around them) as if they were our charges. We don’t want bad things to happen to them, and have at least some ability to keep them out of trouble. When we fail, it’s heartbreaking. And I feel silly about owning up to the fact that it’s heartbreaking, because, after all, this is a fucking video game we are talking about here people. It’s probably—outside of hardcore pornography—the medium with the most uneven ratio of profit to respectability there is.

[6] In the world of The Walking Dead, all dead people become zombies. Zombie bites spread a poison that helps speed the process of death along. The only way to stop this process is to have whoever is with you—likely a loved one or friend—kill you by shooting you in the head or otherwise destroying your brain.

Upstream Color: Less Than Meets The Eye


Almost a decade ago, Shane Carruth’s film Primer took the Grand Jury Prize at Sundance.  Shot for only $7,000, and looking like at least a million bucks, Primer was a low-key, hyperrealistic take on the time travel film in which two friends invent a machine that allows them to go a handful of hours backwards in time.  They end up playing the stock market and becoming increasingly paranoid and sociopathic, betraying first their business partners and then each other.

Primer is a textbook “has potential” movie. Written, directed, produced by, and starring Carruth, it displayed a great command of atmospherics and visuals while not quite working as a story. All time travel narratives must eventually either cheat or collapse under the weight of their own paradoxes, and when Primer eventually falls, it falls hard, into a swamp of incoherence that borders on incompetence. Yet the movie seems to add up to something at the end, and is shot and edited in a fashion that makes it appear as if the filmmakers understand what is going on, even if you the viewer do not. This led many to mistakenly infer that Primer was smarter than they were, and to anoint it a deep and meaningful film. Still, it was a first film, made for next to nothing, and it showed that Carruth was a filmmaker of promise. It also helped give rise to a new strain of low(ish)-budget, small-scale, personal science fiction films. Films like Brit Marling’s Another Earth or Duncan Jones’s Moon were both better than Primer, but it’s hard to imagine either would’ve gotten the attention it did without it.

Now Carruth is back with Upstream Color, another low budget, contemplative science fiction movie in which he wears even more hats, directing, writing, starring, cinematographing, composing, casting and designing the film. Upstream Color is a rare beast, a true auteurist science fiction work where every detail is the result of one man’s vision. It also demonstrates conclusively that Carruth’s skeptics were right. Upstream Color is ultimately an empty experience that squanders an interesting premise on meaningless beauty and mood.

Upstream Color is about a parasite that goes through three different life cycles. In the first, it’s a small white worm growing in plants. In the second, it grows inside humans, making them, in the immortal words of Khan Noonien Singh, highly susceptible to suggestion. In the third, it grows inside pigs. When the film opens, we see a man (billed simply as “Thief”) cultivating the plant-stage parasites. He drugs a woman named Kris and infects her with one. Thief uses his total control over her to force her to do all sorts of ponderous stuff, like writing down passages of Walden on pieces of paper and then making paper chains of them. He also cleans out her bank account and gets her to pull all the equity out of her home and give it to him. Sometime later, a man billed only as The Sampler uses low-frequency sound waves to summon her to his farm, where he extracts the parasite and puts it in a pig.

Soon, having lost everything and gone to work at a copy shop, Kris meets and finds herself mysteriously drawn to Jeff, a disgraced businessman who works for a hotel chain. Jeff, of course, is a victim of Thief as well, and we soon learn that their parasites were put into pigs who have since mated.

This is not a bad premise for a sci-fi film. Mind control parasites are, of course, an old saw, used in everything from Star Trek to Bodyworld to Fringe, but the added side-effect of the human-pig connection is a nice twist. One could imagine any number of things that could be done with the idea, from a Dickian paranoid parable of loss of control in love to a Michael DeForge freakout about the human body, to a searing indictment of the food industry.

Upstream Color decides that the best thing it can do with this idea is to have Kris and Jeff fall intensely, cinematically in love, which is to say they stare at each other in intensely lit locations, sometimes breaking from this to have emotional breakdowns or say ponderous bushwa into the ether. Then it digresses into a long segment featuring The Sampler wandering around nature, recording various sounds, turning them into musical notes, and spying on people (apparently he can turn invisible). In the end, Kris and Jeff are able to defeat The Sampler through means that make no sense but prominently feature a quinoa salad, retrieving rocks from a swimming pool, and more quotes from Walden. After killing The Sampler, they track down all the other victims of the parasite and start a cooperative farm.

Carruth tries to save the rapidly deflating soufflé of Upstream Color’s plot by shooting the whole thing like The Tree of Life, constantly cutting between images, highlighting subjectivity, and using deep focus, voice-over, and a rapidly circling camera to overwhelm the viewer with beauty. The problem is that, love it or hate it, The Tree of Life is actually about something, and the cinematic techniques on display are part and parcel of the philosophical inquiry it undertakes. Upstream Color, meanwhile, deploys these techniques to paper over a fundamental emptiness, just as Primer deploys the climactic-montage-with-recycled-voice-over trick of The Usual Suspects and The Sixth Sense to make it seem as if it’s headed toward some kind of revelation in its conclusion.

In many ways, Carruth might be better understood as a composer who works with images than as an actual filmmaker. But as the film defaults to mood every time it should head toward meaning, its various gestures begin to feel manipulative, cynical rather than creative. As the friend I saw it with quipped to me over e-mail, “it was like a bad and spooky techno arrangement which seems at least to have the benefit of ambiguity until you realize it’s a cover of Riders On The Storm.” Upstream Color, then, exists at the intersection of The Beauty Problem and The Weird Shit Problem. Like a lot of so-called experimental art, it substitutes compositional beauty and oddballity for substance. It has the perfect alibi, “you just don’t get it, man,” which is, in its own way, true. You don’t get it, man. There’s nothing to get.