Some Notes on Contemporary Furry Affairs

I’ve slept on my duties as HU’s resident perverted furry crank, so I’ll take up a little space here for some housekeeping.

[Image: “husbands” // Rocket drawing by (I think) Mike Mignola and Al Gordon // Bojack designed by Lisa Hanawalt]

A One Night Stand with the Lion Queen

An article by HU contributor Isaac Butler which appeared in Slate has been on my mind since I read it some weeks ago.  In anticipation of The Lion King’s 20th anniversary, Isaac charmingly and convincingly compares Scar, the movie’s antagonist, to Shakespeare’s most lusty and magnetic villains.  He also astutely takes into account how the film’s particular place in the timeline affects the character’s framing in the story.

“In the Renaissance, Scar would have been the lead of a tragedy that bore his name. In 2014, he’d be the star of a prestige-cable drama about a charming, thwarted sociopath who’s smarter than everyone around him.”

There are some key properties that bewitch young people.  These early obsessions can germinate into someone identifying as a furry.  The Lion King is one of the big furry generators.  Though I was never fixated on the Disney property (I’m a Bluth / Robin Hood / Redwall type), I have some affection for Lion King, and I like that someone’s writing enthusiastically about its best character.

One snippet, though, has been hanging onto me like a grapefruit rind between my front teeth.

“While Scar’s clear effeminate coding feels problematic now,”

What does that mean?  There is no unpacking of this coda to a tasty meditation on Scar’s sexiness, slinkiness, and decadent, indulgent amorality, with a positive comparison to David Bowie!  Effeminate men are out there.  I am effeminate (hi).  What’s problematic about a character like Scar being like me?  Andreas Deja, his gay supervising animator, who escaped mention in Isaac’s article, might have had something to do with his effeminacy.  There are straight writers making nelly guys the butt of a joke, and then there’s Scar!  Holding court and being fabulous and generally about to steal your man at all times.

I’m a little sensitive about this. I’m recovering from years of rounding myself down to “default” to appease my punk friends from high school, who were cool with me being gay but had remarks for our mutual friends who spoke and acted “stereotypically.” I had no sassy(tm) retort for them in my vocabularic arsenal at the time. The experience has been a whetstone for my resentment of people being dissected in this way.  A tangential coincidence: just today I read the Comics Curmudgeon’s assessment of Cathy Guisewite’s final Cathy strip.  It’s in line with the general miasma of contempt my (always male) colleagues projected onto the (on my own investigation) funny and well-drawn strip.  Here’s the offending little jab:

“but Sally (Forth) was always a more or less fully functional human being, whereas Cathy is a nightmare bundle of neuroses. The fact that the character always seemed to take every negative stereotype about women and extend them to cringe-inducing extremes made it hard to celebrate it as a feminist achievement.”

I need.  A drink.  Excuse me, I’ll be right back.

I’m back.  Thanks for waiting.  Cathy, a strip starring a woman, written by a woman, about the concerns and experiences of the woman who wrote it, is bad because Cathy is the wrong kind of woman.  How many male self-inserts in comics have put in their hours in the pathetic ennui mines?  But this woman? *ACK*, how irritatingly neurotic!  If only she were more “fully functional” (respectable, proper, correct).  If only she didn’t fall under the lens of negative stereotypes that men invented to pressure and demean women no matter what they do!

ACK, INDEED, SIR.

The concern for Scar’s framing doesn’t steam me up like the line about Cathy so much as puzzle me.  I’m puzzled because it seems to abruptly undermine everything that came before it. Scar’s effeminacy is the whole reason we like him. He’s CAMPY.  Not rigidly and blandly masculine like his gilded-beige brother, Mufasa.  If he were a human he’d wear a cloak and circular sunglasses and those pewter finger-claw things and a pointed goatee. Scar’s queeniness is the engine of our delight in him, and the essence that sets him up in opposition to the hypocrisy of Mufasa’s reign.

Mufasa worships the “circle of life,” which is not a circle but a ladder, with the lions occupying the rung where they are not killed and eaten.  Scar’s minions, the equally slinky hyenas, capable predators all, are set apart to pick over bones.  Hyenas are led by women; this pack is headed by Whoopi Goldberg as the alluring butch Shenzi.  Maybe that’s why the lions set them apart.  Scar does not produce an heir (unless one surfaced in The Lion King 2: The Search for Zazu’s Gold, which I did not see) and by the end of his rule, the savannah is in ruin.  If we accept the film’s framing, it is Scar’s fatal flaw of arrogance in upsetting the cosmic moral order that dooms him.  But really it is drought and migration that doom him.  His rule was in fact a brief and unlucky, though perhaps dictatorial, rebuke of the divine right of straight kings.

Thanks, Isaac.  Great piece.  Let’s watch Kimba the White Lion next.

A One Night Stand with the Raccoon and his Husband the Tree

“We did it.  We finally kissed the raccoon.  He’s a good kisser,” I tweet as my wife and I leave the theater.

Guardians of the Galaxy is a big hit.  A lot of furries went to see it and wanted to kiss Rocket.  A lot of non-furries went to see it and wanted to kiss Bradley Cooper as Rocket.  Watching this happen has been kind of priceless.  Though I’m less interested in how many kids watch this with their parents and become furries than in the kids who watch this with their parents and end up thinking that guns are really cool.  There was a good movie trapped somewhere in the machinery of a bad movie.  An unexpected left turn from a seedy cantina-scene remix to a drunken brawl lets Rocket demonstrate that slurs are bad, and that they wear you down.  But later there are misogynistic slurs as a joke?????  The Raccoon was as interesting a character as you can fit into his macho packaging, and I liked watching his mercenary qualities dissolve in caring about other people.  Like in The Lion King, where Scar’s point of view is subsumed into Simba’s hero’s journey, a zany road movie/5-way buddy comedy was dying to escape the very boring shell of a brutally violent bauble.  I list toward being a “Scenes from a Marriage” type of film enjoyer, and Rocket and Groot’s partnership is clearly a cranky kind of marriage.  I’d rather have watched two hours of that.

Of the material I’ve seen so far, I give most of the furry porn featuring Rocket +/- Groot a D+.
 
A One Night Stand with a Pun about Horses I Didn’t Feel Like Following Through With

Bojack Horseman is a Netflix Original animated series about furries in Los Angeles.  They’re, uh.  It’s about the television industry.  Self-absorbed, washed-up actor.  Still materially OK, so we can see him stress his excess as a measure of his delusion.  Had the idea ever occurred to you that money and fame plus drugs can corrupt people?  Hm.  The best and most distinguishing feature is the designs of Lisa Hanawalt, who is a national treasure.  Her animals are uncannily perfect.  I almost lost it when two pigeons took flight from a tree overlooking Bojack’s bedroom window, flapping their blazer-encased humanoid arms.  She’s so amazing.  I saw Frank Santoro say they should have let her write the jokes.  Yeah.  Bojack Horseman is often sharp but hones its edge on joke after joke that would have been prescient in 2003.  I was in high school in 2003.  Don’t make me go back there.  The show comes around and starts to get more interesting around episode seven.

A Marriage

I left my beloved Minneapolis when I vacated the ex-boyf’s apartment.  Then I met someone.  She’s a furry too, but our growing up furries was different and our being furries together is another thing.  Getting older is asserting its leverage on our way of being. But we’re keeping it weird.

Here Come The Planes

(NOTE: This was first published a few years ago in the now-defunct web journal “The Fiddleback.” Noah was kind enough to let me repost it here.)
 

 
At first, it’s just Laurie Anderson’s voice, looped on an Eventide sampler. A pulmonic egressive ha repeats, calling out from 1981, exhaling middle C. The ha continues through the duration of the song. Seven hundred one times at a pace of eighty-four beats per minute. For those of you keeping score at home, that’s a little bit over eight minutes. And yet this song, with its curious title, “O Superman,” and cold, borderline-cheesy lyrics and seemingly endless repetitions was, briefly, a monster hit. Up to #2 in the UK. One very bad recording of it on YouTube boasts that “this is what we all used to dance to back in the day.”

There are few other sonic elements introduced over the song’s eight-minute-plus length. There’s a handful of synthesized string lines and some organ. The faint sound of birds once or twice. And there’s Anderson’s vocals, a robot chorus announcing a new age.

Anderson’s work as a performance artist and musician relies heavily on distortions of her alto voice. She pitch-shifts it down two octaves, becoming a male “Voice of Authority,” or adds reverb and delay effects to punch home emotional beats. In both United States I-IV—the four-hour stage show where “O Superman” debuted—and Big Science—the commercial album it appears on—Anderson sings through the vocoder, a kind of synthesizer that attaches your voice to notes you play through an instrument. You’ve heard it (or its cousin, the talk box) on Peter Frampton’s biggest hits, and heard it from Afrika Bambaataa. The British also used it during World War II to send coded messages, breaking the sound up into multiple channels for spies to assemble later.
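(For the gearheads: the channel-vocoder idea is simple enough to sketch in a few lines of code. What follows is a toy illustration of the principle, my own sketch rather than Anderson’s actual rig, assuming Python with numpy/scipy and two equal-length mono float signals: a voice as modulator and a synth tone as carrier.)

```python
import numpy as np
from scipy.signal import butter, sosfilt

def channel_vocoder(voice, carrier, sr, n_bands=16, fmin=80.0, fmax=8000.0):
    """Toy channel vocoder: split the voice and the carrier into matching
    frequency bands, then impose the voice's per-band loudness envelope on
    the carrier. Expects two equal-length float arrays sampled at sr Hz
    (sr must be comfortably above 2 * fmax)."""
    edges = np.geomspace(fmin, fmax, n_bands + 1)  # log-spaced band edges
    smooth = butter(2, 50.0, btype="lowpass", fs=sr, output="sos")  # envelope smoother
    out = np.zeros_like(carrier)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        envelope = sosfilt(smooth, np.abs(sosfilt(band, voice)))  # rectify + smooth
        out += sosfilt(band, carrier) * envelope  # carrier band, voice's shape
    return out / (np.max(np.abs(out)) + 1e-9)  # normalize to avoid clipping
```

The wartime application she alludes to relied on the same band-splitting: once speech is reduced to a handful of channel envelopes, those channels can be scrambled in transit and reassembled on the other end.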

 

The opening lyrics of “O Superman” (“O Superman, O Judge, O Mom and Dad”) are a play on Le Cid’s aria “O Souverain, O Juge, O Pere.” In the opera, these words are uttered as a prayer of resignation, the hero putting his fate in God’s hands. In the Laurie Anderson song, the three O’s change meaning. First, she prays to Superman (Truth! Justice! The American Way!) but by the end she longs for Mom and Dad, and gives this longing voice in a series of vocoded ah’s: “Ah Ha Ha-Ah Ah-Ah-Ha.”

Despite Anderson dedicating the song to Le Cid’s composer, the two biggest influences it draws from are The Normal’s “Warm Leatherette” and the Philip Glass/Robert Wilson opera Einstein on the Beach. Indeed, “O Superman” can in some ways be read as a marriage of the two, a uniting of brows both high and low, the repetition of minimalist opera lensed through the repetition of the dance floor.

Anderson is open about the debt she owes Einstein on the Beach. In interviews from the time, she cites the show as opening up possibilities for what a stage show could be, a process that led to United States I-IV. “O Superman”’s repetitive “ha” references the sung counting during Einstein’s opening, and the keyboard lines not only sound Glassian, but the actual specific organ tone is one fans of the Philip Glass Ensemble will recognize.

The relationship between “Warm Leatherette” and “O Superman” is more tenuous. Certainly “O Superman” would not have garnered Anderson chart success and a seven-album deal with Warner Brothers without The Normal’s game-changing single from three years prior. Several sawtooth waves—chords, a siren gliss and a thwap-thwap rhythm—make up the entirety of Warm Leatherette’s music, while Daniel Miller, The Normal’s sole member, delivers ominous couplets about sex and car crashes. The song is essentially a musical setting of the JG Ballard novel, Crash, in which a car crash awakens the novel’s narrator to the sexual possibilities inherent in the automobile and its destruction.

It’s through “Warm Leatherette” that “O Superman” accesses JG Ballard’s apocalyptic vision of techno emptiness and Cold War nuclear anxiety. “Warm Leatherette” echoes Crash’s alienated space in which everything becomes simultaneously mechanized and eroticized. “O Superman,” meanwhile, creates a space of mechanization and alienation that also contains our human responses to this alienation: paranoia, loneliness, and a kind of heartbroken yearning. No character in a Ballard novel would ever beg to be held by Mommy, as Anderson does by the end of the song.

 

Coming as they do out of a theatrical tradition, Anderson’s songs, even at their most abstract, tell stories. “O Superman” is no different. Here, more or less, is its story:

You sit in your apartment in New York City at night. You are alone. Perhaps this apartment is on Canal Street, near the Holland Tunnel. It is 1981.

You sit in your apartment in a chair rescued off the street. The day you found it, you felt grateful that no one needed this chair anymore. This is the economy of New York furniture. People lug their unused belongings to the curb: the televisions and air conditioners with yellow paper taped to them, the word WORKS written in Sharpie; the chairs that look fine, but might contain bedbugs; the couches that get waterlogged while you try to round up friends to lug them up the four flights of stairs to your apartment.

Concrete Island lies open on your lap; off to your right, on a stack of milk crates, rests a glass of cheap wine. Your violin leans against a nearby bookshelf, desiring your fingers and the bow.

Your phone rings. You decide that you will let the answering machine get it. People own analog answering machines, with real tapes that run and run and run out in the middle of their friends’ loquacious messages.

You hear your own voice first. “Hi. I’m not home right now. But if you want to leave a message, just start talking at the sound of the tone.”

A beep. And then. “Hello? This is your mother. Are you there? Are you coming home?” You hear need in her voice, along with a drop of reproach. Perhaps she didn’t approve of your moving to a hellhole like TriBeCa to be an artist. You do not pick up the phone. You do not tell her when you are coming home.

Another beep and then a voice you do not recognize. A man’s voice. “Hello? Is anyone home?” You do not answer it; you are not in the habit of speaking to strange men on the phone in the middle of the night. Instead of hanging up, however, he speaks more. “Well you don’t know me. But I know you. And I have a message to give to you.” Uh oh. Is this a crank caller? A stalker?

He speaks again. “Here come the planes. So you better get ready. Ready to go. You can come as you are. But pay as you go.”

You’ve had it with this man’s warnings and rhymes. You pick up the phone and say into it, “Okay, but who is this really?”

When the voice replies, what he says is terrifying. “This is the hand, the hand that takes.” He repeats it. He won’t stop saying it. You imagine just a mouth, the rest of the face shrouded in shadow, rendered in grayscale, like in an old movie.

And then he says: “Here come the planes. They’re American planes. Made in America. Smoking, or non-smoking?” He babbles on about the post office, about love, justice and force. And mom.

You hang up the phone. Confronted with this warning, with this mysterious stranger, the hand that takes, perhaps America itself, what can you do? You think about the first message. Your mother. She called you. She wants you to come home.

Sitting in your apartment, stranded in the night in New York, which despite the population density can feel like an island bereft of human company, you want your mother.

So hold me mom, you think to yourself, in your long arms.

You are so shaken from the phone call that the vision of your mother holding you gradually changes, becoming perverse and terrifying, but as it does so, you find yourself even more comforted.

 

*          *          *

Here come the planes. They’re American planes. Made in America.

In 1981—the year of O Superman’s commercial release—Ronald Reagan broke the air-traffic controllers’ strike and expanded the US military by the equivalent of $419,397,226.33 (adjusted for inflation).

In 2010, we’ve lost great amounts of our manufacturing sector, but one area remains triumphantly intact. We still make machines of war here in America. Boeing and Lockheed Martin are still based in the United States, the former in Chicago, the latter right outside Washington, D.C. Their plants also remain in this country, in places like Wichita, Kansas, Troy, Alabama and Columbine, Colorado. The Lockheed Martin F-35 Lightning II—of which the United States intends to buy 2,443 for a price tag of over three hundred billion dollars—performed its first test run in Fort Worth, Texas.

In 2001, Mohammed Atta flew a Boeing 767—manufactured in Everett, Washington—into the North Tower of the World Trade Center.

 

*          *          *

If you haven’t guessed by now, I might as well come clean: I was obsessed with Laurie Anderson in college. I tracked down out-of-print monographs of her work. I attempted to sneak her into just about every paper I wrote. Laurie Anderson thus joins a long line of serial obsessions on my part. She sits right between Eddie Izzard and Charles Mee if you’re ordering it chronologically.

I only knew one other person who loved Laurie Anderson. He discovered her via a twenty-six CD series titled New Wave Hits Of The Eighties that he bought off of late night television when he was in high school.

Despite all of this, four months after graduating from college, on the actual day when the American Planes Made In America finally showed up, I did not think about “O Superman.” On the actual day, U2’s borderline easy-listening track “Beautiful Day” took up unshakeable residence in my skull. It’s been said often enough to become a cliché, but the eleventh of September, 2001 really was gorgeous. The sky blue and cloudless, the temperature perfect for a walk from my then-girlfriend’s office on 56th and the West Side Highway to deep into the East Teens.

The blue sky loomed ominous, the way nights dark and stormy foreshadow murder in a potboiler. If we couldn’t trust the weather to tell us how to feel, or what would happen next, what could we trust? As we walked, desperate to put our backs to Times Square or any famous piece of Manhattan real estate, occasional planes flew overhead. When this happened, our faces blanched and our clutch on each other’s hands tightened as we ducked into the shadows of a skyscraper to watch the planes streak the blue dome above us.

And in my head, Bono wailed all the while. “It’s a beautiful day, don’t let it” Go away? Go to waste?

I discovered that I did not actually know the words to the song. As we stopped at a McDonald’s for food, bought water off a street vendor, and entered the East side, I became fixated on figuring them out, worrying the words like a loose tooth. Solving this annoyance seemed more important—or at least more manageable—than the attack itself, the questions about my DC-dwelling parents’ safety or where our nation was headed.

 

Unlike most Americans, I did not see what had happened to my city until many hours after the second tower fell. By then, our epic walk concluded, we sat on our friend Alison’s couch and watched the BBC. Again and again the plane flew into tower two, again and again the orange flower bloomed, again and again the towers collapsed and we jump cut to a POV shot of someone running from a wall of dust.

One of us said what became a constant refrain. It looks just like a movie. And indeed it did.

During the weeks to follow, we heard this idea everywhere. Just like a Bruckheimer film or I thought they were showing a disaster movie, until I realized it was on all the channels, or Just like Independence Day.

What we did not ask then is why. Why, at the height of our powers, had we imagined our own destruction so often that we had a ready-made database of images to compare this moment to?

Instead, we clicked our tongues in disapproval. This showed, we believed, the shallowness and alienation of our psyches. Now the time had come to end irony once and for all. We chose this interpretation instead of acknowledging how in tune with our deepest fears mass entertainment really was.

Through the nineties, when everything seemed so good that a blow job consumed media attention for years, it turned out that we both knew and feared that the clock would run out on our exceptional good fortune. The multiplex transformed into the only place to explore these premonitions of what was to come. The movies responded by doing what they do best. They thrilled us again and again, so we didn’t have to feel bad or, really, think much, about any of this.

We did not ask these kinds of questions in the aftermath because we did not have the leisure or distance or time to ask them. Instead, we asked other questions. Questions like, Who did this? And, Whose ass do we get to kick now? And—in certain circles of the left—Is it right that we kick their ass?

The first two questions we immediately answered with a nebulous body known as “the Arabs,” later refined to “Al Qaeda.” First thing we should’ve done, someone said to me at Thanksgiving dinner that year, is turn the Middle East into a parking lot. Even on that day, when we had no idea who had done this or why, we knew it must be “the Arabs.”

On her couch—which she invited us to stay on for as long as we wanted—Alison launched into a monologue about the Israeli-Palestinian conflict. She did not know that my girlfriend hailed from a Muslim country and that I, although a Jew, am not a Zionist. The story culminated with her running into a Hasid on the street two hours after the second tower fell. “Hey man,” she recalls herself saying, “I’m with you and Israel all the way.”

She wanted to hug him, she said, but knowing the prohibitions against touching women, did not.

In that moment my mind wandered back to my girlfriend copping to a desire to bump up against Hasidic men on the subway and then claim to be menstruating. I did not mention this. Sitting next to me, my girlfriend was silent. After crying until her pale skin turned a shade of red I did not think occurred in nature, she stared at the television, unblinking.

 

*          *          *

 

“O Superman” contains three moments of wordplay. The first comes right after Anderson mentions the planes, when she then asks, “Smoking or non-smoking?” Since we are not riding in these planes but are instead being warned about them from a mysterious voice, the phrase takes on a double meaning, becoming about corpses.

The second is when Anderson recites the Postal Creed: “Neither snow nor rain/Nor dead of night/Will stop these couriers/From the swift completion/Of their intended rounds”

Because if (once again) we are talking of airplanes, and talking of the death they bring, then the couriers become something different, and the package they are delivering is one you certainly don’t want. Nothing will stop them. A motto of American can-do becomes a motto of uncheckable military aggression.

Or, listening to the song today, dread of the unstoppable terrorist other.

Over the 2010 holidays, as the privacy (and genitals) of white people unaccustomed to legally sanctioned harassment were violated, America seemed to wake up to the absurdity of trying to stop terrorists with the pat down and the extra large zip-loc bag. If they can’t get on a plane, they’ll blow up a subway car. They are nothing if not tenacious. Neither scanner, nor banning of liquids, nor cavity search will stop them from the swift destruction of their intended targets.

The third moment of wordplay comes at the end as Anderson calls out to her mother. Like the question about smoking, the yearning for mommy’s arms gives way to a pun, of all things. “Arms,” of course, contains two meanings, one of which is enshrined in our Second Amendment.

And so, with a mournful, churchlike basso organ sound, Anderson’s mother turns from human into weapon. Her arms progress from “long” to “automatic” to “electronic” to “petrochemical” to “military” and back to “electronic.”

When Anderson sings “petrochemical,” if you turn the volume up very high and listen very closely, you will hear birds chirping.

In college, the merging of mom and machine struck me as a silly bit of early-80s “we are the robots” kitsch. I realize now how wrong I was. I know now the comfort found in waging war. I know that hurting others can feel like a familial embrace.

Did our desire for this comfort—the comfort of anger, the comfort of righteousness, the comfort of inflicting, rather than receiving pain—lead us so swiftly to retribution?

And what of other kinds of comfort?

I am an atheist. On 9/11, the only working phone I could find was in a Christian bookstore. I made two phone calls while the employees praise-Jesused behind me. The first was to my girlfriend, to tell her I would walk to wherever she was; the second was to my mother. My father works for Congress, and was in who-knows-what federal office building that morning.

In the week that followed, everyone I knew in New York wanted to be held in some way, to be comforted. A friend called me to tell me she had been to church that morning. I laughed into the phone, finding it—and her—absurd. Like me, she was both a Jew and an Atheist. What possible business did she have in a church?

“I was walking down the street and I saw this woman outside a church, and, she just, she just looked at me, and I knew that that was where I needed to be.”

 

*          *          *

From the fall of 2001 until the spring of 2010, I didn’t listen to “O Superman” or really anything by Laurie Anderson. It wasn’t until I left New York to drive across the country with my now-wife that I played it again. We were all gone to look for America. We were all sorts of clichés. We didn’t care.

As the curving asphalt ribbon of the Pacific Coast Highway unspooled before us, I click-wheeled over to the song. The instant I pressed play it started to rain and we sat in silence and listened to it. And it wasn’t until I got lost in the ha that I realized how long it had been since I had heard it. How did that happen? How did a totem that I carried with me, loved so hard, like it was a person, like it belonged to me, like I made it, how did I abandon this thing for so long?

Right before Anderson’s two-minute litany of different kinds of arms, I looked over at the driver’s seat, seeking approval. Now-wife displayed the face of a champion poker player. On the stereo, Anderson paraphrases the Tao Te Ching, singing, “‘Cause when love is gone, there’s always justice, and when justice is gone, there’s always force, and when force is gone, there’s always mom.” And then her voice breaks a little, and in a rare moment of humanity Anderson says, “Hi, mom.”

It felt like letting her read my diary from before we met. I wanted to be known better by this woman I would soon marry and move from New York with. I wanted to let her see the embarrassing parts that resist verbalization and need the true falsehoods of art. Part of me felt, in this moment, like all young men who like imposing their tastes on their loved ones—that somehow my self-worth was caught up in this moment in this purple Honda listening to this song.

Why had I stopped listening to “O Superman”? The answer seemed obvious now. After that sunny September day, her work became unbearable to me. The song contained too much of what I tried not to feel and not to recognize about the world and myself and the country in whose name horrible things were being done.

Instead, the song went into a cardboard box in a dusty attic closed off from my soul. Also in that box: a book of plays that lay, spine cracked, on my windowsill, collecting mysterious black and grey and green dust from September 12th through 15th. It sits on my bookshelf now, spine facing the wall, unopened, a guardian against destruction.

 

*          *          *

At the end of the song, Anderson repeats a vocoded melodic line from the beginning: “Ah Ha Ha-Ah Ah-Ah-Ha.” This time, however, she interrupts herself with a synthesized string line that once again feels like it could come out of the Philip Glass playbook.

This string figure references the vocoder melody off of the song “From The Air,” the track right before “O Superman” on Big Science.

“From the Air” is a song about a plane crashing into New York City.

 

Via crossfade, an honest-to-god tenor saxophone replaces the synth strings. The only instrument to appear in the song without some kind of treatment on it, it makes its realness known by being slightly out of key.

And then, at the very end, everything cuts out, giving way once again to the ever-present, omnipotent “Ha,” repeating itself solo for seventeen seconds.

If you turn the volume up very high and listen very closely, you will hear sirens in the background.

 

*          *          *

 

As the song ended, we feared for our lives. The storm transformed the Pacific Coast Highway into something treacherous, slick, unknowable. The next pulloff onto more trafficked streets lay tens of miles ahead of us. Did we have enough gas to make it? Would our stomachs give out amidst the twists and turns? And—most importantly—did my now-wife like the song?

“Huh. Wow.”

“It’s kinda brilliant, right?” I asked.

“Yeah. It’s also kinda unlistenable at the same time.”

Kinda brilliant, kinda unlistenable is about as close to a judgment of the aesthetic quality of “O Superman” as I can offer.

 

An odd component of post-9/11 American life has been the failure of art to address the event itself. Many—including some of our greatest living artists—have tried.

Instead, we’ve had to turn back to before the smoking day to find art that resonates. Some claim that Radiohead’s Kid A is the best album ever made about 9/11, despite coming out the year before. Immediately after the event, the pundits on television wanted so badly to believe in our President that they told us to reach back to Shakespeare’s Henry V to understand how a drunken spoiled brat could become a Good Christian King.

Why not, then, appropriate “O Superman”? Laurie Anderson herself remains unclear as to the inspiration of the song. She claimed in one interview that she wrote it in response to the Iran-Contra scandal, which broke over five years after the song’s improbable chart climb. Like JG Ballard claiming to have seen the flash over Hiroshima from Hong Kong, this memory is impossible, invented but right nonetheless.

Our claiming of these artifacts as being “about 9/11” shows that—rather than changing everything—that day recapitulated and unleashed what lurked, buried underneath us like one of Lovecraft’s ancient gods. As much as we said this was the day we’d never forget, it revealed how much we’d already forgotten.

 

When Is A Job Not A Job? When It’s In The Arts, Apparently.

[IMPORTANT UPDATE AT THE END]

Here’s a story for you, and it’s a good one, an uplifting one in this time of constant headlines about this or that art form dying or being in yet another crisis. It’s about a little theater, a small off-off-Broadway space[1] towards the bottom of that Triangle Below Canal: a professional theater, well known for experimental work, called The Flea. This little experimental theater nearly went out of business in the wake of 9/11, when Tribeca became a ruined, gray-dusted alien landscape. The Flea was only saved through a mixture of innovative fundraising and striking gold with a hit play called The Guys, a two-hander about a reporter and a firehouse hit hard by the WTC attacks that starred a roster of celebrities, ran for years, and helped put the theater back on solid footing.

Now, thirteen years after the theater nearly went out of business, The Flea is thriving. Its resident acting company (called The Bats) numbers around 150 people and produces work constantly. A directing apprenticeship program helps mentor the next generation of directors. The theatre does a variety of programming with a kind of ambition—particularly where cast sizes are concerned—that no one else in town can match on a budget so small as to be almost unimaginable.

It’s a remarkable turnaround, so remarkable that The Flea has managed to raise $18 million to purchase a nearby building and convert it into a new space. The new space will have three state-of-the-art theater spaces available to local companies to rent for cheap[2] and will allow The Flea to produce more work. And so far, the plan has received rapturous coverage in the press, helping to raise the profile of The Flea even further[3].

There’s just one little wrinkle in this story, and it’s about The Bats. You know, the resident company of 150 or so early career actors? The ones the Times calls the “beating heart” of the theater? The young, hip, diverse troupe whose work helps ensure the theater is constantly full of young, hip, diverse audiences? Well, they’re unpaid.

*

Is it a problem that The Bats aren’t paid to act? It turns out that answering that question involves answering a whole lot of other sub-questions. Questions like: is acting a job? If it is, is exposure a form of payment, a kind of service in lieu of cash, perhaps? Are there mitigating circumstances that affect any of this? Does it matter that the kind of large scale, ambitious works The Bats often do at The Flea would be impossible if they had to pay their actors? Does it matter that there is the money to build an $18 million new space but seemingly no money to pay artists?

It turns out the answers to those questions change depending on who you talk to, depending on what kind of story you want to tell. The story that tends to get told about the arts leaves out labor issues[4]. If labor—and that no-no topic, pay—are brought up at all, they’re usually in the context of whether or not Broadway performers, musicians and technicians are getting paid too much, despite the fact that, as Terry Teachout discussed in the Wall Street Journal, ballooning marketing costs are largely to blame for increased ticket prices on the Great White Way.  Rarely discussed in the conventional story about theater and money is that salaries are so high on Broadway because those high payments make it possible for artists to remain in a system that, except for their brief tenures in the largest theaters, will ask them to do enormous amounts of work, often for little to no money[5].

The story we tell each other about creative work, meanwhile, is that it isn’t really a job, not really, and that you should be grateful for what you can get for it, even if other people are getting paid off of the work that you do.  This isn’t limited to theater. David Byrne recently talked about this issue and music in Salon, and Molly Crabapple wrote about it in the visual arts for Vice. Many (if not most) literary magazines don’t pay. Many major websites won’t pay for writing if they can get away with it. Hell, I am currently writing this essay about The Flea not paying its early career actors for a website that doesn’t pay its writers. I don’t always see a problem with this. Here at Hooded Utilitarian, no one, including Noah Berlatsky, who works much, much harder on it than I, makes any money off of it.  HU is a labor of love (or, for some of you, hate) where we can get together and publish things we’re unlikely to place elsewhere. It’s a site where professionals do some non-professional—but hopefully professional-quality—work.

There’s a term for this kind of work—professional-grade labor that goes unpaid (and is thus amateur)—and that is “pro-am.” We’ve all witnessed how the internet has created an exploding pro-am writing sector. This has been positive in all sorts of ways. There is more great writing being produced every day, easily available at little to no cost for the reader. And as long as the reader’s costs are the only part of the story you’re interested in, it’s incredible.

I started working as a theatre professional as an actor in my teens. In the twenty years since, I’ve witnessed a similar explosion in the pro-am sector in the dramatic arts. Undergrad and graduate theatre programs have grown in number and size, and the number of paying jobs outside of academia hasn’t kept pace. This dynamic has both depressed wages and fueled vibrant pockets of “independent theatre” in many American cities, as artists have come together to create work for little to no money[6].

Given this reality, perhaps the right question then is… what’s the line? When does something stop being a pro-am labor of love and start being something more problematic?

In the case of The Flea, setting the boundaries of the acceptable is thorny. The Flea exists in a specific context and a specific industry. Early career actors tend to have only a few options available to them, all of them bad. They could self-produce work at great personal cost, even if they convince Uncle Shmuel and Aunt Betsy to kick in some money. They could act in others’ self-produced work, which is something of a crap-shoot, exposure-wise. They could intern at a theater (likely for free), stuff envelopes all day, and, if they are very, very lucky, get someone to come to a show of theirs from, like, I don’t know, marketing. They could go to graduate school (at, again, great personal cost[7]) and, chances are, end up right back where they were, only better trained and in enormous debt. Most perniciously, they could pay to take an “audition workshop” with a casting director (or just as often, a casting director’s assistant), which is really just a pay-to-play audition.

It’s a raw deal, in other words, this life of an early career actor. And it will continue being so for the foreseeable future because—and this should read familiar to any writers out there—the supply of actors so overwhelms the demand for them that the dollar value of their labor has been depressed to, essentially, zero. Given this, what The Flea provides—real exposure, free rehearsal space, frequent opportunities to get up on stage and learn one’s craft through getting work up in front of an audience, a chance to produce work, connections, a real community of fellow artists, and the opportunity to learn various ancillary skills of theater without having to pay a dime—is nothing at which to scoff.

All The Flea asks is that, in exchange for getting to be on stage, The Bats work three hours a week doing tasks around the theatre—more if they’re currently in a show, since they’re benefiting more—an exchange that, when you talk to any current Bat, seems to make perfect sense. It’s hard to argue that three hours of labor in exchange for the opportunity to be in shows is onerous.  Indeed, The Bats love being Bats, and don’t feel particularly exploited.

Unless you view acting in plays as labor. And how is it not labor? The Flea is charging money for people to see The Bats perform[8]. The institution is building itself based on their work. It’s one thing to accept that early career artists must be paid in exposure.  It’s another thing entirely to accept that they must be paid in exposure and that they must also pay for the opportunity in sweat equity.

That sweat equity is also problematic in ways not often discussed. Three hours is not a lot of work to ask an individual Bat to do per week. But with 150 Bats, each doing at least three hours of work for free, The Flea is picking up at least 450 hours worth of free labor per week. That’s more than ten full-time employees’ worth of work. While this is clearly part of what makes The Flea able to do what it does on such a shoestring—and helps explain why, despite moving to a three-performance-space complex, they’re only expanding their paid staff by two—it has the unintended side effect of further depressing wages, setting an uncomfortable precedent for how a professional theater should be run[9].

These problems are only heightened by the new $18 million building. Practices that are forgivable amongst the scrappy are less so amongst the well-appointed, as Upright Citizens Brigade and Amanda Palmer have recently learned. Supporters of The Flea I’ve spoken with will tell you that paying actors and buying a new space are separate conversations, different stories. The Flea is currently spending around $17K a month in rent, and the new space will secure their future. Furthermore, it’s nearly impossible to raise money to pay artists properly and much, much easier to get donations for “brick & mortar” projects[10].

While I agree that the new building is necessary and am happy for The Flea’s good fortune, and happier still that off-off broadway companies will have access to three nice, clean, functional spaces at a low rental cost, this is almost too clever by half, this walling off the payment of labor from conversations about budgets, about donations, about the “public good” part of a nonprofit’s mission. It may be true that the problems of The Flea are the problems of the industry that The Flea is in. But that doesn’t mean The Flea shouldn’t show leadership on issues of labor fairness.

After all, The Flea has retooled The Bats before, to the mutual benefit of both the company and the theater. The work hour requirements for The Bats used to be higher, and the jobs more menial. The Bats used to perform in fewer shows, there used to be fewer of the Bats, and, according to current and former members I spoke to, less of a sense of community. The Flea even once charged actors a fee to audition[11], something they’d never imagine doing today. The Flea also hasn’t precluded rejiggering the program again three years from now when the new building is complete.

There are a number of changes The Flea could make that would still allow them to do ambitious large-cast projects with an excited community of performers while showing leadership on labor issues. The Flea could simply begin paying The Bats when they appear in shows. It needn’t be a large amount of money; even a stipend would send the message that the theater values The Bats and takes their art seriously. Being a Bat is often likened to a kind of practical graduate school, a training-by-doing program. Part of that training could—and should—include teaching The Bats that their art is worthwhile enough to be paid for practicing it.

If The Flea does not want to do that, they could drop the work requirement. Or they could work with the actors’ union to turn The Bats into an Equity Membership Candidacy program, a true apprenticeship[12] that ends with the actors well on their way to Union membership[13].

More drastically, The Flea could drop the 1-2 professional shows from their annual calendar and cease calling themselves a professional theater altogether.  This wouldn’t stop them from working with professional artists from time to time, particularly where playwrights and directors are concerned. The model for how The Bats work—a tight-knit group of artists who do most of the work around the space, everything from running the concession stand to hanging the lights—is already closer to that of a community theater than it is to anything else. While “community theater” is a term loaded with all sorts of associations, most of them negative, it is where most Americans will go to see (or take part in) large-cast, ambitious shows that don’t pay actors.

There will not be any pressure on The Flea (and other, even worse companies) to reform so long as the story we tell about art remains the same. So long as we keep telling each other that exposure is payment, that erecting a new building is the only true sign of success, and that labor issues are irrelevant, so long as we keep writing the same story, glowingly reporting the official line without digging an inch deeper, we’ll be stuck in the same place: Bigger, shinier buildings—or websites sold to AOL—with broke-ass people getting paid less and less to do the creative work that keeps them alive.

UPDATE: Since this article was posted, one of the people I interviewed for it (the one mentioned in the final footnote) e-mailed to say that she neglected to mention during our interview about The Bats and payment that The Bats receive a nominal stipend during tech rehearsals, since those are what are known as “10 out of 12s,” which is to say, 12-hour rehearsals with two one-hour meal breaks. This schedule makes it impossible for Bats to make money elsewhere—temping or waiting tables, etc.—while in tech.  The stipend was introduced last year and is variable, but under $50.  This means that, when they appear in shows, the Bats are no longer working for free, which is a positive step.

That said, when The Bats are not working in shows, they are still doing 3 hours a week of uncompensated labor around the space. And I would furthermore argue that less than $50, framed entirely as a way to make up for hourly wages lost elsewhere during tech rehearsals, is still inadequate. It is far less, for example, than the daily subway fare a Union actor is paid in a showcase production. And the larger issues of how we value the people who actually create art in our culture remain.  But it is a step in the right direction, and it reinforces my hope and belief that The Flea wants to find ways to do right by their ensemble.


[1] Off-off Broadway refers not to theater location but to the kind of Union contract a theater uses when working with members of Actors’ Equity Association (aka Equity or AEA).  Off-off Broadway codes are for New York City theaters under a hundred seats. Off-Broadway is the designation for theaters holding between 100 and 499 souls. Anything larger and you’re in Broadway contract territory.

[2] This is no small thing. Theater space—even a 50-seat shithole—can cost thousands of dollars a week to rent, making the cash young companies have to shell out for space often the largest part of their budgets.

[3] This is one of the reasons why theaters embark on building campaigns. Often the first season after a new building opens brings more audience members and donors, although I once heard a fundraising consultant say that those new donors and viewers often vanish after that first year or two.

[4] It was highly controversial, for example, when Jason Zinoman made the argument in the New York Times that the Upright Citizens Brigade should start paying at least some of its performers, given that a large and very successful institution had been built off of their labor.

[5] A union actor acting in an off-off Broadway show can make as little as daily subway fare in pay. Union actors working Off-Broadway often make under $500 a week. And that’s when they’re actually working on a show. Things like staged readings don’t always pay. And, of course, there are the gaps between gigs, when actors aren’t getting paid at all.

Perhaps this is too much to get into in this space, but this is one of the many reasons why the current theatre system is set up the way it is, with larger “regional” (non-NYC) theaters hiring NYC-based actors. The theaters pay a premium for what is generally considered a more talented labor pool. Actors then make more money on the road both through higher weekly salaries and through subletting their apartments back in New York. It’s a system that screws just about everyone. Working actors pay an enormous premium to have a NYC mailing address. Local actors often won’t even get to audition for shows in their hometowns. And for audiences, to paraphrase monologist Mike Daisey, it’s something akin to going to see your hometown baseball team and finding out they’ve been replaced by a bunch of people who guested on Law & Order a couple of times.

[6] The vast majority of Portland, Oregon’s theatre scene is made up of pro-am companies, for example. It’s worth saying that some indie theater companies take pride in compensating their artists to the best of their abilities.

[7] Nearly all graduate schools for theatre cost roughly one vital organ per year to attend.

[8] As this audition notice (http://www.theflea.org/blog_detail.php?page_type=4&blog_id=238) makes clear, performing is a lot of work in and of itself. In case you don’t feel like clicking over and reading it, this Bats production asks actors to commit to almost two months of six-days-a-week rehearsals plus two months of five-days-a-week performances, tying up their schedules from January until May. This would, amongst other things, keep them from getting paying acting work for half of the normal theatrical season.

[9] After all, can you really call yourself a professional theater if the majority of the work in your theatre is done on a non-professional basis?

[10] Most of the money for the new building is coming from the City of New York. By comparison, the National Endowment for the Arts is legally barred from giving money directly to artists to support the making of art.

[11] According to an actor who auditioned during this time and joined The Bats a year later, in the wake of 9/11 they charged prospective Bats $25 to audition, saying that they needed to cover the hole in their budget caused by the terrorist attacks.

[12] The Bats are called volunteers, not apprentices or interns. Were the program called an internship, it could be illegal, as by law interns cannot do the work traditionally done by paid employees and more benefit must accrue to the intern than to the company they work for. These laws are on the books to prevent companies from skirting minimum wage laws, something it could be argued The Bats’ weekly work hours requirement clearly does.

[13] There are almost no professional actors in the non-profit system who aren’t members of Actors Equity Association. You cannot be a member of AEA and be part of The Bats. One Bat I spoke to loved being a Bat so much (and was getting regular acting work that she cared about) that she declined joining the Union so she could stay in the group.

Isaac Butler on Perceiving Race

Isaac Butler from a recent comments thread drops some science on perceptions of race:

Basically, our brains have evolved to do an enormous amount of automatic processing of and reacting to stimuli and life experience. They do this through a few different processes, but they mainly involve creating categories, associations between these categories, and what are called “schema,” which are essentially stories our brain tells itself without our conscious knowledge.

The associations and stories we have often involve categories of people, which we call stereotyping (it doesn’t have a negative connotation in psych circles). A lot of stereotyping is harmless. How do you know without having to think about it that a large, bald, fat human that’s crying probably doesn’t need a diaper change but a tiny, bald, fat human does? How do you know that a black rectangle that rings is a phone and not a wallet? It’s all these kinds of processes.

Anyway, not all of our associations are harmless or value-neutral; often they involve preferences (when they’re positive) or biases (when they’re negative) about people in certain groups. Simply put, we have a story about them in our heads that we do not realize we have.

This whole phenomenon, where our decision making and POV are affected by prejudices occurring at the unconscious level, is called Implicit Bias. It’s not limited to race and it’s not limited to the United States. It is, in fact, part of the human condition. It also isn’t a moral failing. The majority of white people in this country consciously hold egalitarian values. This is why explicit measures of bias and prejudice basically have no predictive value as to what people will actually do.

Implicit measures, on the other hand, do tend to predict behavior in experimental settings. The most famous of these is the IAT, which you can actually take yourself at projectimplicit.net. The IAT tests categorical associations through reaction times.
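(To make “tests categorical associations through reaction times” concrete, here is a toy sketch in Python. The real IAT scoring algorithm is more involved, with error penalties, trimming, and multiple blocks, so treat the function and the numbers below as a hypothetical illustration of the logic, not Project Implicit’s method.)

```python
import statistics

def toy_iat_d(compatible_ms, incompatible_ms):
    """Toy IAT-style score: if sorting is slower when the on-screen category
    pairing clashes with a test-taker's implicit associations, the gap between
    the two blocks' mean reaction times (scaled by pooled variability) grows."""
    pooled_sd = statistics.stdev(compatible_ms + incompatible_ms)
    return (statistics.mean(incompatible_ms) - statistics.mean(compatible_ms)) / pooled_sd

# Hypothetical reaction times, in milliseconds, for one test-taker:
print(toy_iat_d([620, 650, 700, 610], [780, 810, 760, 830]))  # positive = slower on the incompatible block
```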

Anyway, this is long-winded, but there are decades now of scientific evidence as to the validity of implicit bias, its predictive power, etc. and so forth. There is also considerable evidence that believing oneself to be objective actually causes people to act with more rather than less bias. There is some evidence that being aware of implicit bias, coupled with context-specific interventions, can help safeguard our decision making processes from implicit bias’s effects.

This is why color-blindness is such a pernicious idea. It’s actually the opposite of what we need. It’s the delusion that we’re objective. And what the Right does is talk about color-blindness through one side of its mouth while stoking White racial anxiety with the other. So they take race off the table as a valid topic for discussion (“playing the race card”) while also talking about it in ways guaranteed to panic Whites. For an example of this, look at Fox’s coverage of the Zimmerman verdict.

 


Playing Narrative Part 2: Survivor’s Guilt


(Hey! As the title indicates, this is part 2 of something! Part 1 is here!)

(Warning: Spoilers. Including the end of the game.)

Somewhere around the halfway mark of Naughty Dog’s The Last Of Us, Joel, the hardened survivor of a plant-parasite-fungus-zombie-apocalypse that you spend most of the game controlling, finally makes it to his brother Tommy, located somewhere in the vast middle of America. Joel’s there to try to hand off Ellie, a teenage girl who must be taken to the Fireflies, a subversive group located somewhere out West. It’s the second time you’ve seen Tommy. In the game’s prologue, set twenty years before the rest of the action, on the day the apocalypse started, Joel, Tommy and Joel’s daughter attempted to escape Austin, Texas.  Now, relations between the two brothers have cooled. Or, as Joel tells Ellie, “His last words to me were… I don’t ever want to see your goddamn face again.”

The player never learns exactly what caused Joel and Tommy’s falling out, but when Tommy—who now has a wife and helps run a small town based around a hydroelectric plant—refuses to help Joel, you get some idea. Joel tells Tommy that he’s owed this, “for all those goddamn years I took care of us.”  Tommy replies, “Took care? That’s what you call it? I got nothing but nightmares from those years.”

“You survived because of me,” Joel tells his brother.

“It wasn’t worth it,” Tommy says, looking at the camera, stricken and haunted.

 

What could possibly make not-dying not-worth it? Likely, it’s the stabbing, shiving, Molotov-cocktailing, strangling, shooting, archering, punching, bricking, bottling, and IEDing that the player has spent the last seven hours making Joel do to various zombies and humans. The Last of Us is a game that takes its violence and its theme of survival very seriously, and gradually asks the player to do the same. In doing so, we come to realize that Joel, the man we inhabit, may be a survivor, but he sure ain’t a hero.

After the prologue, when we jump twenty years into the future and re-meet Joel as a childless middle-aged man, he is a lowlife. He smuggles drugs, ration cards and weapons, serving up some terrible ownage on people who cross him. He’s in a relationship of sexual and financial convenience with a fellow smuggler named Tess, who will go on to summarize their lives by saying “we’re shitty people, Joel,” and mean it. Later still, after Joel and Ellie take on a group of marauding bandits, Joel reveals to Ellie that he’s “been on both sides of this thing.” When a different group of bandits invades Tommy’s power plant, Tommy asks Joel if he still knows how to kill, but the look on Tommy’s face tells you that he’s disgusted with himself for asking.

Joel, just to be clear, isn’t an anti-hero. Nor is he another in a long line of video game asshole warriors. He’s not a Don Draper or Tony Soprano charming psychopath. He’s actually kind of a piece of shit. Not that he doesn’t have his complexities, particularly in his relationship with Ellie. She sees a goodness in him, the same goodness we glimpse in the prologue, the goodness he appears to have lost. It’s a goodness that, when it’s just the two of them together, The Last of Us dangles in front of us as a possibility.  Joel’s a broken man, physically strong and spiritually bereft, a man who has turned off his soul for twenty years, and, over the course of The Last Of Us, we begin to care whether he gets it back or not, just as much as we care about whether he and Ellie ever make it out West.

Much of the time, however, Joel’s like a mix between Rooster Cogburn from True Grit and Theo Faron from Children of Men, sans most of the redeeming qualities of both.  What makes The Last of Us so startling is that it knows this. And, gradually, it makes the player know it too.

 

Naughty Dog became famous over the last decade for a series of Indiana Jones-like games called Uncharted that, as cinematic acts of storytelling, are actually better than half of the Jones films and all of Jones’s latter-day imitators like The Mummy and National Treasure. In those games, the player controls Nathan Drake, a descendant of Sir Francis Drake and an international treasure hunter who gets in over his head during a series of thrilling, funny, genuinely charming adventures involving lost artifacts that may hold great power. The Uncharted games harken back to movies like The Treasure of the Sierra Madre or Romancing the Stone, the kind of big-budget, exotic-locale, rakish-hero adventure films that Hollywood used to be able to do well, while removing the problematic racial politics that often make those films unwatchable today.

There’s just one problem: These are, of course, action games. Which means that the player also spends a great deal of time killing people. Hundreds of people, it turns out. After Uncharted 3: Drake’s Deception came out, more and more people started raising a stink about this issue. It’s pretty clear that the team on The Last of Us—many of whom also worked on Uncharted—wanted to see what would happen if they started taking all the killing seriously and asked their audience to do the same.

While The Last of Us, like The Walking Dead, takes place in a world hit with a zombie apocalypse, the similarities pretty much end there. TWD’s gameplay functions through dialogue and action choices. The Last of Us has very little choice in it at all. TWD’s graphics are stylized and cel-shaded. The Last of Us uses motion capture. TWD is an adventure/puzzle game. The Last of Us is a stealth/action game.

Most importantly, TWD takes place immediately following the zombie apocalypse, as people learn how to survive. The Last of Us takes place twenty years in, and is set amongst the whittled-down population of people who’ve figured it out.

Survival is what The Last of Us is all about on both a thematic and gameplay level. If Naughty Dog were in search of an alternate title for the game, Survivor’s Guilt (with “guilt” here meaning both the feeling of remorse and the state of having done something wrong) would’ve been a good stand-in. As with The Walking Dead—where a series of choices serves as an essay on ethics once you realize death is inevitable—it is this interweaving of theme and mechanics that enriches The Last of Us and makes it work.

In the game you have limited weapons, and all of them have limited uses. You have to worry constantly about making too much noise and alerting nearby enemies. Killing people is difficult, noisy, and time consuming. The materials you find are shared across multiple crafting recipes, so building one item forecloses another. You can’t carry very much. There are also many points in the game where you can sneak by adversaries and not engage with them, leading—if you are, like me, both ethically minded and neurotic—to calculations that go something like Well, I’m low on supplies and I bet I could take these guys out and loot their corpses. Wait. Am I seriously contemplating killing six people who aren’t a threat to me for the express purpose of looting their corpses? Oh my God. I’m the worst.
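
If you’ll forgive a digression into nuts and bolts: the crafting economy behaves roughly like the toy model below, a minimal sketch in Python with invented item names and costs (I have no knowledge of Naughty Dog’s actual data or code). The point is structural: recipes draw on the same scarce ingredients, so every craft forecloses another.

```python
# Toy crafting system in which recipes compete for shared, scarce
# ingredients. All item names and quantities are invented for illustration.
RECIPES = {
    "shiv":    {"blade": 1, "binding": 1},
    "molotov": {"rag": 1, "alcohol": 1},
    "medkit":  {"rag": 1, "alcohol": 1},  # same ingredients as the molotov
}

def can_craft(item, inventory):
    """True if the inventory covers every ingredient the recipe needs."""
    return all(inventory.get(ing, 0) >= n for ing, n in RECIPES[item].items())

def craft(item, inventory):
    """Consume the ingredients and build the item, or fail with no side effects."""
    if not can_craft(item, inventory):
        return False
    for ing, n in RECIPES[item].items():
        inventory[ing] -= n
    return True

inventory = {"rag": 1, "alcohol": 1}
craft("molotov", inventory)            # succeeds...
print(can_craft("medkit", inventory))  # ...and prints False: no medkit for you
```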

In The Walking Dead, violence is very personal. Most of the time, it is being dealt by or to someone Lee Everett knows. The Last of Us, on the other hand, primarily features the kind of depersonalized violence that most video games trade in; it just makes that depersonalization part of the point. Joel—who has survived precisely because he’s selfish—can’t see the people he’s killing as human.

Not that the game is a relentless downer. Much of it is spent wandering overgrown urban landscapes and idyllic vistas, talking with Ellie and deepening the bond between the two of them. Ellie is one of the few great characters to emerge from video games. She’s funny, charming and human, and feels in many ways like a real fourteen-year-old. Indeed, any affection the player gains for Joel is likely the end result of loving Ellie, and wanting to love what she loves. In each of the game’s acts (there are four of them, one for each season), Ellie and Joel meet and team up with other survivors, who all prove to be interesting, fully realized characters written and performed with that rarest of video game traits: subtext. The Last of Us is a game where watching facial expressions and listening to tone of voice changes the meaning of a scene, and the few choices the game gives you along the way are entirely about character development. You can stop to explain to Ellie what a coffee shop was, or pet a giraffe. You can find comic books to give her to read. You can give a man a Dear John letter from his boyfriend.

Ultimately, however, The Last of Us’s themes cannot be escaped for long. And yet, because it is a very well designed game, it is fun to play. And yet, because it takes what it is doing seriously, it’s disturbing and wrenching and truly, deeply haunting. The ending of the game is anti-cathartic, and in no way resolves the central tension between depicting the urge for survival and problematizing it, suggesting that perhaps, at times, being a survivor means being a monster.

Joel, you see, is presented with the opportunity to save the world, but doing so entails Ellie’s death.  Ellie is immune to the parasite that has destroyed civilization, but creating a vaccine from her body would involve removing her brain. Joel saves her life, killing a hospital full of people, and ends any hope of humankind’s recovery. The Last of Us twice hints that Ellie would’ve accepted her death if given the opportunity to choose. But she never knows she had the choice because Joel lies to her about it. Joel, we come to understand, is as selfish as ever. Needing and loving this new surrogate daughter, after having lost his own twenty years before, he is unable to let her go for the greater good.

For those of you reading this who don’t play video games, I want you to understand that this kind of ending—one that is neither triumphant nor cathartic, but instead haunting and true to its characters—basically does not exist in mainstream video games. In fact, it’s the kind of ending that most mainstream blockbuster movies—and The Last of Us is the equivalent in terms of budget, market presence, hype and sales—would never dare attempt.

It’s these kinds of elements—story, theme, structure, subtext, writing, performance—that are responsible for the nearly universal critical rapture that has greeted The Last of Us, and they flow directly out of the thematic integration of gameplay and story, and from questioning the purpose of all the violence the video game marketplace demands. In this way it is similar to Watchmen: by taking its subject matter seriously, it is simultaneously a masterpiece of its form (the superhero comic, the action game) and an act of undermining the existing status quo.

And that brings me to the problem with making the resolution of ludonarrative dissonance the ultimate goal and measure of quality in video games. It’s no mere coincidence that The Walking Dead and The Last of Us take place during the apocalypse. There are only a limited number of scenarios that justify the kind of violence that the form regularly contains and that audiences demand from it. While we can get moralistic about this, high body counts have graced our literature since The Iliad, our theatre since The Persians, our films since Intolerance, and on and on. As someone interested in video games becoming a richer source of stories, of examining theme, subject, narrative and character through the unique medium of a player interface, I’m less concerned with the virtues of violent games and more with how thuddingly boring and narrow their possibilities often are. As the current “gritty downer” era of superhero comics and films shows, replacing the medium’s current narrow possibilities with a different set of narrow (but critic-approved) possibilities isn’t really a solution, even if we get more games like The Walking Dead and The Last of Us along the way.

Playing Narrative, Part 1

[image: a choice prompt from The Walking Dead]

Back in 2011, I wrote on my own blog about storytelling in video games, and whether or not they are a narrative art form, a post that led me to wonder:

[D]o video games really want to be known as a narrative art form?

I find this question far more interesting than Ebert’s question about whether they’re art or not. (Simple answer: some are, some aren’t!)

Right now, video games are in a sweet spot. Games like Heavy Rain and Mass Effect 2 can come out and gain a certain amount of cachet and sales because of their sophisticated deployment of game mechanics to complexly explore genre. At the same time, when people question the racial politics of Resident Evil 5 or look at the truly execrable pro-torture narrative of Black Ops, gamers (and game critics) can retreat behind “Hey, it’s only a game!”

Sure enough, over the last couple of years, I’ve noticed more talk about the quality of stories that games tell and the phenomenon of ludonarrative dissonance, or the disconnect between the gameplay experience and the narrative experience of a title. Most of these conversations tend to coalesce around fretting about violence. In the Uncharted games, rakish hero Nathan Drake kills something like six to eight hundred people whilst treasure hunting around the globe. The emotional resonance of Bioshock: Infinite’s clever universe-hopping maze of a plot is undermined by the constant need to mow down everyone who gets in your way. In fact, the term ludonarrative dissonance apparently originates with a blog post from Clint Hocking about the first Bioshock game, in which he writes that the selfishness of the gameplay (it’s a first-person shooter) and the anti-selfishness polemics of the plot (it’s a takedown of Objectivism) contrast to such a degree that the clash wrecks the game.

I personally find the concept of ludonarrative dissonance useful for thinking about and discussing video games, but I do not find it to be quite the magic bullet that game critics seem to think it is. Basically, I believe that, in part due to the history of how games have aesthetically developed, game players are quite used to compartmentalizing gameplay and story, tending either to view the former as the task one must accomplish to get the latter, or to view the latter as the increasingly cumbersome speed bumps that interrupt the former.

While the violent gameplay is the least interesting part of Bioshock: Infinite, I’m not sure that most video game players think, from a narrative perspective, that they’re killing people as they play it any more than watchers of Looney Tunes feel Elmer Fudd’s physical pain in any kind of serious way. Aesthetics matter, after all, and Bioshock: Infinite is a candy-colored cartoon wonderland filled with nonrealistic character portraits. Most of the human extras you encounter throughout the world are more like animatronic dolls than people. It’s also worth noting that violence is in many ways woven into the DNA of video games, much as snark and assumptions of bad faith are woven into the DNA of online discourse.

That said, ludonarrative dissonance will prove a worthwhile concept if it leads to better games and better narrative mechanics within them, and over the past year, at least, this appears to be happening. Two recent works, Telltale Games’ The Walking Dead and Naughty Dog’s The Last of Us, have done a remarkable job of integrating gameplay mechanics, story, and theme, pointing the way to a possible new maturity in the field. Yet at the same time, both are built out of sturdy video game genres. The Walking Dead is a classic puzzle-adventure game, while The Last of Us focuses on the kind of stealth-action familiar to players of Metal Gear Solid, Deus Ex or the Tenchu franchise. They never lose their game-ness[1], yet remain satisfying, emotionally engaging, thought-provoking narrative experiences[2].

The Walking Dead even manages to upstage both the preexisting source material (the comics by Robert Kirkman) and the blockbuster TV adaptation on AMC. In it, you play Lee Everett, a recently convicted murderer (and former college professor) being transported to prison when the cop car carrying you hits a zombie. Shortly thereafter, you take on a young girl named Clementine, whose parents are in another city and whose babysitter has gone all let-me-eat-your-brains on you[3]. As you and Clementine struggle to survive, you eventually come upon other survivors and endure a series of difficult trials that bring you both across the state of Georgia.

On a gameplay level, much of The Walking Dead revolves around the normal puzzle-adventure mishegas, where you have to figure out which actions and items will get you from point A to point B in the plot. Occasionally, you also have to kill zombies or hostile humans. Neither of these functions is particularly remarkable. And at least one puzzle, which involves figuring out the right things to say to get someone to move out of your way so that you can press a button, is seriously infuriating. What makes the game work, however, is the way that character, emotion and choice function within the narrative. Like many games today, The Walking Dead presents the player with multiple narrative choices, either by forcing you to take one of a set of mutually exclusive actions or by having you choose dialogue options in conversations.

Telltale’s stroke of genius was to insert a timer into these decisions. Normally when you reach a major choice in a game, it will wait for you. You can think about it for a while, perhaps peruse a walkthrough online that will tell you the outcome of the choices, and then make it. You can, in other words, perform a cost-benefit analysis, thinking about it purely in game terms. In The Walking Dead, you have a very limited time to make each decision, and as a result, the decisions become a reflection of your personality, or the personality of Lee as you’ve chosen to play him[4]. Perhaps you think Lee should tell people he’s a convicted murderer, because honesty is the best policy. Or perhaps you think you should hide it from people because you’re a good guy and you don’t want people pre-judging you. Perhaps you should tell people you’re Clementine’s father. Or her babysitter. Perhaps you raid that abandoned station wagon filled with food. Or perhaps you sit back, willing to go hungry in case the car belongs to fellow survivors.
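
The mechanic is simple enough to sketch in code. Below is a minimal console version in Python (the function and all its names are my invention, not Telltale’s engine); the crucial design detail is that silence falls through to a default, so not answering is itself recorded as an answer.

```python
import threading

def timed_choice(prompt, options, timeout=5.0, default="[said nothing]"):
    """Offer options for `timeout` seconds; hesitation yields the default."""
    print(prompt)
    for i, option in enumerate(options, 1):
        print(f"  {i}. {option}")
    result = {}

    def read():
        try:
            result["pick"] = int(input("> ")) - 1
        except (ValueError, EOFError):
            result["pick"] = None

    reader = threading.Thread(target=read, daemon=True)
    reader.start()
    reader.join(timeout)  # wait for input, but only this long

    pick = result.get("pick")
    if pick is None or not 0 <= pick < len(options):
        return default  # the game remembers that you froze
    return options[pick]

answer = timed_choice("Kenny wants to know whose side you're on.",
                      ["I'm with you.", "Lily has a point.", "Leave me out of it."])
print("You said:", answer)
```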

Many of your choices involve brokering disagreements between two survivors named Kenny and Lily, who are both, to put it politely, assholes. Kenny, a redneck father, will do anything for the survival of his family (including betray you), and will forget any nice thing you do for him (including saving his life) the second you disagree with him. Lily, the de facto leader of the group, is belligerent, domineering, and frequently sticks up for her racist shitbag dad. Being a good middle child, I kept opting for choices that recognized the validity of their points of view and tried to form consensus. Due to their aforementioned assholitry, they both hated me by halfway through the game. One of them even told me I had to man up and start making decisions or what was the point of having me along.

The decisions tend to function like this throughout the game. Unlike in most games with choice mechanics, there aren’t morally good and bad choices coded blue and red. And unlike in old-school adventure games, the choices you make in the plot won’t lead to fail states. They simply are things that you’ve done, and they ripple out through the rest of the game, shifting (in ways both subtle and non) how the story progresses, how people treat you, and what choices remain to you.

None of this exactly explains what a remarkable achievement The Walking Dead game turned out to be. So let me try some other ways: It’s the only game I’ve played that has reduced every person I know who has played it to tears at least twice. It’s the only game I’ve ever played where the characters are so clearly and humanly written that I finished one chapter of it and flew into a rage over what one of the characters did to me[5].

Part of this is because there are limits to what your choices can achieve. Due to the realities of game making and the limitations of the engine that’s running underneath TWD’s hood, the number of paths you can take in the game is finite. There are truncation points in the branching narrative to keep things under control. As a result, certain things will happen no matter what you do and certain characters will die.  There are things you cannot stop from occurring in the game, fates that, like the protagonist in a play by Euripides, are inexorable and horrible all at once.
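
For the curious, the structure being described is easy to caricature in code: a fixed spine of major beats that every playthrough passes through (the truncation points), with choices persisting as flags that recolor each beat rather than spawning unbounded branches. Here’s a toy sketch in Python, with every name invented (I have no knowledge of Telltale’s actual engine):

```python
# "Spine and flags" branching: the beats are fixed, the flags vary the dressing.
BEATS = ["drugstore", "motel", "train", "finale"]  # invented beat names

def play_beat(beat, flags):
    """Render one fixed story beat, colored by the player's accumulated choices."""
    if beat == "finale" and flags.get("confessed_past"):
        return "The stranger concedes: at least you told them the truth."
    return f"[{beat}] plays out, shaded by your earlier choices: {flags}"

flags = {"confessed_past": True, "saved_reporter": False}
for beat in BEATS:                 # every playthrough visits every beat...
    print(play_beat(beat, flags))  # ...only its coloring changes
```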

I wouldn’t have it any other way. Robert Kirkman’s two great innovations in the zombie apocalypse genre—telling a story with no finite ending and making zombieism inevitable[6]—are what gave early issues of the comic book their thematic sizzle, turning the saga into a story about how we confront our mortality and an ongoing essay on whether death makes life more meaningful or reduces it to a sick joke. Sadly, after a difficult and necessary foray into the issue of survivor’s guilt, the comics are now largely about how difficult and noble it is to be the White Man in Charge who makes the tough decisions, and they often feature Rick Grimes walking around having other people tell him how awesome he is while he gets ever more self-pitying.

The video game, meanwhile, does a superior job of exploring the themes of its source material, because the choice mechanics literalize those themes. By removing fail states from the game (like most contemporary video games, The Walking Dead is literally impossible to lose) and eschewing simple morality in designing the choices, TWD constantly forces you to think about why you are making the choices you make. As you decide whether to save a female reporter and firearms expert or a male hacker dweeb, you may find yourself suddenly thinking Oh crap, I have to choose between one of these people. And they both seem so nice. But, well, this is the apocalypse, so electronics aren’t going to be as necessary. And that reporter is a markswoman. And at some point the world is going to need to be repopulated, so I suppose I need to save as many potential sexual partners as possible. So I guess I’m going to save the reporter. [CLICK] Wait. Am I a terrible person?

It’s rare that games provoke that kind of calculus. And it’s very rare that they are constructed in a way that forces you to think about not just the decisions you make but why you make them. By the end of the game, as a mysterious stranger interrogates you about every major decision you’ve made over the last ten or so hours of gameplay, it’s hard not to notice that what you’ve just been playing is a lengthy examination not just of what it means to survive, but of yourself.

(This is part one of a two-part essay on recent advances in video game storytelling. Part two will run soon)

CORRECTION: I’ve been a little remiss in apportioning credit in the above. The idea of infectionless zombies dates back to Romero and, of course, The Walking Dead was co-created by artist Tony Moore and, after its first few issues, has been co-created by artist Charlie Adlard. Apologies to the relevant parties.


[1] Oddly, both games have been criticized for still being too “game-like.” This strikes me as wrongheaded, akin to arguing that a graphic novel, by including panels and images, wasn’t enough like a prose book. Or that a book, by being made out of words, wasn’t enough like a television show. If we want the medium of games to improve, it shouldn’t be via their becoming very long movies.

[2] Please take the fact that I used as clunky a phrase as “narrative experiences” in this sentence as a sign of the newness of taking narrative in video games seriously and the difficulty in discussing same.

[3] You put a hammer through her head. But at least it’s justified by the world.

[4] This was even more true when the game was initially released in a serialized five-episode format. A choice you make in Episode 2 might not pay off until Episode 5, thus making a walkthrough of your choices totally useless.

[5] Or should I say Lee? This gets me to a side point that I don’t have much time to get into here: the relationship between choice mechanics and attachment to games. There is something about having a say in the way a game progresses that creates, in most gamers I know, a greater sense of emotional attachment to the events as they unfold. I think on some level we come to care for our characters (and the characters around them) as if they were our charges. We don’t want bad things to happen to them, and we have at least some ability to keep them out of trouble. When we fail, it’s heartbreaking. And I feel silly about owning up to the fact that it’s heartbreaking, because, after all, this is a fucking video game we are talking about here, people. It’s probably—outside of hardcore pornography—the medium with the most uneven ratio of profit to respectability there is.

[6] In the world of The Walking Dead, all dead people become zombies. Zombie bites spread a poison that helps speed the process of death along. The only way to stop this process is to have whoever is with you—likely a loved one or friend—kill you by shooting you in the head or otherwise destroying your brain.

Upstream Color: Less Than Meets The Eye

[image: still from Upstream Color]

Almost a decade ago, Shane Carruth’s film Primer took the Grand Jury Prize at Sundance.  Shot for only $7,000, and looking like at least a million bucks, Primer was a low-key, hyperrealistic take on the time travel film in which two friends invent a machine that allows them to go a handful of hours backwards in time.  They end up playing the stock market and becoming increasingly paranoid and sociopathic, betraying first their business partners and then each other.

Primer is a textbook “has potential” movie. Written, directed, produced by and starring Carruth, it displayed a great command of atmospherics and visuals while not quite working as a story. All time travel narratives must eventually either cheat or collapse under the weight of their own paradoxes, and when Primer eventually falls, it falls hard, into a swamp of incoherence that borders on incompetence. Yet the movie seems to add up to something at the end, and is shot and edited in a fashion that makes it appear as if the filmmakers understand what is going on, even if you the viewer do not. This led many to mistakenly infer that Primer was smarter than they were, and to anoint it a deep and meaningful film. Still, it was a first film, made for next to nothing, and it showed that Carruth was a filmmaker of promise. It also helped give rise to a new strain of low(ish)-budget, small-scale, personal science fiction films. Brit Marling’s Another Earth and Duncan Jones’s Moon were both better than Primer, but it’s hard to imagine either getting the attention it did without it.

Now Carruth is back with Upstream Color, another low budget, contemplative science fiction movie in which he wears even more hats, directing, writing, starring, cinematographing, composing, casting and designing the film. Upstream Color is a rare beast, a true auteurist science fiction work where every detail is the result of one man’s vision. It also demonstrates conclusively that Carruth’s skeptics were right. Upstream Color is ultimately an empty experience that squanders an interesting premise on meaningless beauty and mood.

Upstream Color is about a parasite that goes through three different life cycles. In the first, it’s a small white worm growing in plants. In the second, it grows inside humans, making them, in the immortal words of Khan Noonien Singh, highly susceptible to suggestion. In the third, it grows inside pigs. When the film opens, we see a man (he’s billed simply as “Thief”) cultivating the plant-stage parasites. He drugs a woman named Kris and infects her with one. Thief uses his total control over her to force her to do all sorts of ponderous stuff, like writing down passages of Walden on pieces of paper and then making paper chains of them. He also cleans out her bank account and gets her to pull all the equity out of her home and give it to him. Sometime later, a man billed only as The Sampler uses low-frequency sound waves to summon her to his farm, where he extracts the parasite and puts it in a pig.

Soon, having lost everything and gone to work at a copy shop, Kris meets and finds herself mysteriously drawn to Jeff, a disgraced businessman who works for a hotel chain. Jeff, of course, is a victim of Thief as well, and we soon learn that their parasites were put into pigs who have since mated.

This is not a bad premise for a sci-fi film. Mind-control parasites are, of course, an old saw, used in everything from Star Trek to Bodyworld to Fringe, but the added side effect of the human-pig connection is a nice twist. One could imagine any number of things that could be done with the idea, from a Dickian paranoid parable about loss of control in love, to a Michael DeForge freakout about the human body, to a searing indictment of the food industry.

Upstream Color decides that the best thing it can think of to do with this idea is to have Kris and Jeff fall intensely, cinematically in love, which is to say they stare at each other in intensely lit locations, sometimes breaking from this to either have emotional breakdowns or say ponderous bushwa into the ether. Then it digresses into a long segment featuring The Sampler wandering around nature recording various sounds and turning them into musical notes and spying on people (apparently he can turn invisible). In the end, Kris and Jeff are able to defeat The Sampler through means that make no sense but prominently feature a quinoa salad, retrieving rocks from a swimming pool, and more quotes from Walden. After killing the Sampler, they track down all the other victims of the parasite and start a cooperative farm.

Carruth tries to save the rapidly-deflating soufflé of Upstream Color’s plot by shooting the whole thing like The Tree of Life, constantly cutting between images, highlighting subjectivity, using deep focus, voice-over, and a rapidly circling camera to overwhelm the viewer with beauty. The problem is that, love it or hate it, The Tree of Life is actually about something, and the cinematic techniques on display are part and parcel of the philosophical inquiry into which it enters. Upstream Color, meanwhile, deploys these techniques to paper over a fundamental emptiness, just as Primer deploys the climactic-montage-with-recycled-voice-over trick of The Usual Suspects and The Sixth Sense to make it seem as if it’s headed towards some kind of revelation in its conclusion.

In many ways, Carruth might be better understood as a composer who works with images than as a filmmaker. But as the film defaults to mood every time it should head towards meaning, its various gestures begin to feel manipulative, cynical rather than creative. As the friend I saw it with quipped to me over e-mail, “it was like a bad and spooky techno arrangement which seems at least to have the benefit of ambiguity until you realize it’s a cover of Riders On The Storm.” Upstream Color, then, exists at the intersection of The Beauty Problem and The Weird Shit Problem. Like a lot of so-called experimental art, it substitutes compositional beauty and oddballity for substance. It has the perfect alibi, “you just don’t get it, man,” which is, in its own way, true. You don’t get it, man. There’s nothing to get.