When Puberty Lasts a Lifetime

[Image: Ultimate Spider-Man #1 variant cover]

“I grew up in Indiana,” writes Chris Huntington, “and saved a few thousand comic books in white boxes for the son I would have someday. . . . Despite my good intentions, we had to leave the boxes of yellowing comics behind when we moved to China.”

I grew up in Pennsylvania and only moved down to Virginia, so I still have one dented box of my childhood comics to share with my son. He pulled it down from the attic last weekend.

“I forgot how much fun these are,” he said.

Cameron is twelve and has lived all those years in our southern smallville of a town. Chris Huntington’s son, Dagim, is younger and was born in Ethiopia. Huntington laments in “A Superhero Who Looks Like My Son” (a recent post at the New York Times parenting blog, Motherlode) how Dagim stopped wearing his Superman cape after he noticed how much darker his skin looked next to his adoptive parents’.

Cameron can flip to any page in my bin of comics and admire one of those “big-jawed white guys” Huntington and I grew up on. Dagim can’t. That, argues Junot Diaz, is the formula for a supervillain: “If you want to make a human being into a monster, deny them, at the cultural level, any reflection of themselves.” Fortunately, reports Huntington, Marvel swooped to the rescue with a black-Hispanic Spider-Man in 2011, giving Dagim a superhero to dress as two Halloweens running.

Glenn Beck called Ultimate Spider-Man just “a stupid comic book,” blaming the facelift on Michelle Obama and her assault on American traditions. But the Financial Times saw the new interracial character as the continuing embodiment of America: “Spider-Man is the pure dream: the American heart, in the act of growing up and learning its path.” I happily side with the Financial Times, though the odd thing about their opinion (aside from the fact that something called the Financial Times HAS an opinion about a black-Hispanic Spider-Man) is the “growing up” bit.

Peter Parker was a fifteen-year-old high schooler when that radioactive spider sank its fangs into his adolescent body. Instant puberty metaphor. “What’s happening to me? I feel—different! As though my entire body is charged with some sort of fantastic energy!” I remember the feeling.

It was 1962. Stan Lee’s publisher didn’t want a teenage superhero. The recently reborn genre was still learning its path. Teenagers could only be sidekicks. The 1940s swarmed with Robin knock-offs, but none of them ever got to grow up, to become adult heroes, to become adult anythings.

Captain Marvel’s little alter ego Billy Batson never aged. None of the Golden Agers did. Their origin stories moved with them through time. Bruce Wayne always witnessed his parents’ murder “Some fifteen years ago.” He never grew past it. For Billy and Robin, that meant never growing at all. They were marooned in puberty.

Stan Lee tried to change that. Peter Parker graduated high school in 1965, right on time. He started college the same year. The bookworm scholarship boy was on track for a 1969 B.A.

But things don’t always go as planned. Co-creator Steve Ditko left the series a few issues later (#38, on stands the month I was born). Lee scripted plots with artist John Romita until 1972, when Lee took over his uncle’s job as publisher. He was all grown up.

Peter doesn’t make it to his next graduation day till 1978. If I remember correctly (I haven’t read Amazing Spider-Man #185 since I bought it from a 7-Eleven comic book rack for “Still Only Thirty-five” cents when I was twelve), he missed a P.E. credit and had to wait for his diploma. Thirteen years as an undergraduate is a purgatorial span of time. (I’m an English professor now, so trust me, I know.)

Except it isn’t thirteen years. That’s no thirty-two-year-old in the cap and gown on the cover. Bodies age differently inside comic books. Peter’s still a young twentysomething. His first twenty-eight issues spanned less than three years, same for us out here in the real world. But during the next 150, things grind out of sync.

It’s not just that Peter’s clock moves more slowly. His life is marked by the same external events as ours. While he was attending Empire State University, Presidents Johnson, Nixon, Ford and Carter appeared multiple times in the Marvel universe. Their four-year terms came and went, but not Peter’s four-year college program. How can “the American heart” learn its path when it’s in a state of arrested development?

Slowing time wasn’t enough either. Marvel wanted to reverse the aging process. They wanted the original teen superhero to be a teenager again. When their 1998 reboot didn’t take hold (John Byrne had better luck turning back the Man of Steel’s clock), Marvel invented an entire universe. When Ultimate Spider-Man premiered in 2000, the new Peter Parker was fifteen again. And he was going to stay that way for a good long while. Writer Brian Bendis took seven issues to cover the events Lee and Ditko told in eleven pages.

But even with slo-mo pacing, Peter turned sixteen again in 2011. So after a half century of webslinging, Marvel took a more extreme countermeasure to unwanted aging. They killed him. But only because they had the still younger Spider-Man waiting in the wings. Once an adolescent, always an adolescent.

The newest Spider-Man, Miles Morales, started at thirteen. What my son turns next month. He and Miles will start shaving in a couple years. If Miles isn’t in the habit of rubbing deodorant in his armpits regularly, someone will have to suggest it. I’m sure he has cringed through a number of Sex Ed lessons inflicted by well-meaning but clueless P.E. teachers. My Health classes were always divided, mortified boys in one room, mortified girls across the hall. My kids’ schools follow the same regime. Some things don’t change.

Miles doesn’t live in Marvel’s main continuity, so who knows if he’ll make it out of adolescence alive. His predecessor died a virgin. Ultimate Peter and Mary Jane had talked about sex, but decided to wait. Sixteen, even five years of sixteen, is awfully young. Did I mention my daughter turned sixteen last spring?

Peter didn’t die alone though. Mary Jane knew his secret. I grew up with, and continue, a policy of open bedrooms while opposite-sex friends are in the house, but Peter told her while they sat alone on his bed, Aunt May off who knows where. The scene lasted six pages, which is serious superhero stamina. It’s mostly close-ups, then Peter springing into the air and sticking to the wall as Mary Jane’s eyes get real real big. Way better than my first time. It’s also quite sweet, the trust and friendship between them. For a superhero, for a pubescent superhero especially, unmasking is better than sex. It’s almost enough to make me wish I could reboot my own teen purgatory. Almost.

Meanwhile the Marvel universes continue to lurch in and out of time, every character ageless and aging, part of and not part of their readers’ worlds. It’s a fate not even Stan Lee could save them from. Cameron and Dagim will continue reading comic books, and then they’ll outgrow them, and then, who knows, maybe that box will get handed to a prepubescent grandson or granddaughter.

The now fifty-one-year-old Spider-Man, however, will continue not to grow up. But he will continue to change. “Maybe sooner or later,” suggests artist Sara Pichelli, “a black or gay — or both — hero will be considered something absolutely normal.” Spider-Man actor Andrew Garfield would like his character to be bisexual, a notion Stan Lee rejects (“I figure one sex is enough for anybody”). But anything’s possible. That’s what Huntington learned from superheroes, the quintessentially American lesson he wants to pass on to his son growing up in Singapore.

May that stupid American heart never stop finding its path.

Evil Fanservice

[Image: still from the 2013 Evil Dead trailer]

I’m not sure what I expected when I rented the 2013 Evil Dead remake. Maybe I was hoping for a few scares, or at least a few laughs. Or maybe I’m just another aging nerd wallowing in nostalgia.

The original Evil Dead (1981) is often described as a cult classic, and it launched the careers of filmmaker Sam Raimi and B-movie star Bruce Campbell. Like most horror films of that era, Evil Dead was exploitative popcorn fare that was often more funny than scary. In fact, the Evil Dead franchise morphed into a deliberate horror/comedy in the second film and an action/comedy by the third. If the original Evil Dead is remembered fondly, it’s a fondness for its excesses and failures. It was ultra-violent even by the standards of the time, but the filmmakers lacked the skill and resources to make the gore look believable. Instead, viewers were treated to buckets of obviously fake blood and corpses made of Play-Doh. To describe the characters as one-dimensional would be generous, and the acting was sub-par even by the low standards of the slasher genre. And, of course, there was the infamous tree rape scene. It was gratuitous and sleazy (Raimi later stated he regretted including it), but it was hardly out of place in a film that was clearly pandering to the base instincts of its (presumably) teenaged audience.

Put simply, the original Evil Dead was an amateurish horror film produced for bored teenagers looking for a few cheap thrills. It was a surprising success and seems to have entertained its core audience back in ’81. In a sane world, that would have been good enough, and no one would have remembered Evil Dead except for a handful of horror buffs. But we don’t live in a sane world. We live in a world where Hollywood keeps producing expensive movies based on 70-year-old characters from children’s comics. We live in a world where a movie franchise based on a line of children’s toys is one of the biggest hits of the past decade. We live in a world where the nerd is king and every piece of pop culture detritus must be re-packaged and re-sold (often to the exact same people who bought the first copy).

And so we get an Evil Dead remake. By a few superficial measures, it’s superior to the original. The budget is obviously much larger, so the filmmakers didn’t have to cut any corners. It has the slick look of a major Hollywood production. The new cast are marginally better actors than the originals (and better looking, too). And the gore is far, far more realistic. The special make-up effects crew earned their paycheck.

And yet the film still feels like a pale imitation. Perhaps that’s unavoidable with most remakes, but I think it has more to do with the obsessive reverence for the original. Rather than simply make a new movie with some of the same ideas, the filmmakers went through a checklist of every big moment from the Evil Dead franchise and crammed them all into one movie. There’s the signature Evil Dead camera, which chases the characters through the woods. There’s violence with a chainsaw. The heroine loses a hand because the original hero lost a hand in Evil Dead 2. Bruce Campbell appears and says “groovy.” Even the freakin’ car from the first movie, an Oldsmobile Delta 88, has its own cameo. And there’s a tree rape scene. It’s significantly toned down from the original, and yet it feels more gratuitous because its purpose isn’t even to titillate so much as to remind older viewers of the same scene in the original. Or to put it another way, the nostalgia is the titillation.

The sad thing is that nostalgia is about the only thing that the Evil Dead remake does well. On the few occasions when the filmmakers deviate from the source material, they fail badly. The remake spends far more time trying to get its audience to care about the characters, all for nothing because the characters are just as paper-thin as the originals. And the reason you go to a movie called Evil Dead is to see some violence inflicted on annoying people, not learn about their tragic mommy issues. Also, switching the gender of the hero might have been a great idea if executed well, but lead actress Jane Levy just doesn’t have the chops to carry the story. Bruce Campbell is not a great actor, but he had a goofy charisma which was, more often than not, the best thing about the Evil Dead franchise. Perhaps Levy will have that sort of appeal one day, but in 2013 she’s indistinguishable from every other starlet.

To sum up: not scary, not funny, not memorable on its own merits, and altogether a complete waste of time.

Bloody Conventions

I’ve avoided reading We3 for years, in part because I find depictions of violence against animals upsetting, and I was afraid I’d find it painful to read.

As it turns out, though, I needn’t have worried. We3 does have some heart-tugging moments for animal-lovers — but they’re safely buried and distanced by the towering pile of bone-headed standard-issue action movie tropes. There are the hard-assed military assholes, the scientist-with-a-conscience, the bum with a heart of gold and an anti-fascist streak…and of course the cannon fodder. Lots and lots of cannon fodder. We3 clearly wants to be about the cruelty of animal testing and, relatedly, about the evils of violence — themes which Morrison covered, with some subtlety and grace, back in his classic run on Animal Man. In We3, though, he and Frank Quitely get distracted by the pro-forma need to check the body-count boxes.
 

[Image: page from We3 #1]

 
The plot is just the standard one: rogue supersoldiers fight their evil handlers. The only innovation is that the supersoldiers are dogs and cats and bunnies. That does change the dynamic marginally; you get more sentiment and less testosterone. But the basic conventions are still in place, which means that the comic is still mostly about an escalating series of violent confrontations more or less for their own sake. It’s hard to really take much of a coherent stand against violence and cruelty when so much of your genre commitments and emotional energy are going into showing how cool your deadly bio-engineered cyborg killer cat is. To underline the idiocy of the whole thing, Morrison walks us through the entire comic with various military observers acting as a Greek chorus/audience stand-in to tell us how horrifying/awesome it is to be watching all of this violence/pathos. You can see him and Frank Quitely sitting down together and saying, “Wait! What if the plot isn’t quite thoroughly predictable enough?! What if the Superguy fans experience a seizure when they can’t hear the grinding of the narrative gears?! Better throw in some boring dudes explicating; that always works.”

The point here isn’t that convention is always and everywhere bad. Rather, the point is that conventions have their own logic and inertia, and if you want to say something different with them, you need to think about it fairly carefully.

Antonio Prohias’ Spy vs. Spy comics, for example, are every bit as conventional as We3, both in the sense that they use established tropes (the zany animated slapstick violence of Warner Bros. and Tom and Jerry), and in the sense that they’re almost ritualized — to the point where in the collection Missions of Madness, Prohias is careful to alternate between black spy victory and white spy victory in an iron and ludicrous display of even-handedness. Moreover, Spy vs. Spy, like We3, is, at least to some extent, trying to say something about violence with these tropes — in this case, specifically about the Cold War.

Obviously, a lot of the fun of Spy vs. Spy is watching the hyperbolic and inventive methods of sneakiness and destruction…the black spy’s extended (and ultimately tragic) training as a dog to infiltrate white headquarters, or the white spy’s extended efforts to dig into black HQ…only to end up (through improbable mechanisms of earth removal) back in his own vault. But the very elaborateness and silliness of the conventions, and their predictable repetition, functions as a (light-hearted) parody. Prohias’ spies are not cool and sexy and competent and victorious, like James Bond. Rather, they’re ludicrous, each committing huge amounts of ingenuity, cleverness, malice, and resources to a never-ending orgy of spite. Spy vs. Spy is certainly committed to its genre pleasures and slapstick, but those genre pleasures don’t contradict its (lightly held, but visible) thematic content. Reading We3, you feel like someone tried to stuff a nature documentary into Robocop and didn’t bother to work out how to make the joints fit. Spy vs. Spy, on the other hand, is never anything less than immaculately constructed.
 

[Image: Spy vs. Spy panel]

 

We3 doesn’t seem to realize its themes and conventions don’t fit; Spy vs. Spy gets the two to sync. That leaves one other option when dealing with genre and violence, which is to try to deliberately push against your tropes. Which is, I think, what happens in Pascal Laugier’s Martyrs.

Martyrs is an extremely controversial film. Charles Reece expresses something of a critical consensus when he refers to it as “really depressing shit.”

And yet, why is Martyrs so depressing…or, for that matter, why is it shit? Many fewer people die in Martyrs than in We3; there are fewer acts of violence than in Spy vs. Spy. Even the film’s horrific finale — in which the main character is flayed alive — is hardly new (I first saw it in an Alan Moore Swamp Thing comic, myself). So why have so many reviewers reacted as if this is something especially shocking or especially depressing?

I think the reason is that Laugier is very smart about how he deploys violence, and about how he deploys genre tropes. Violence, even in horror, generally functions in very specific ways. Often, for example, violence in horror exists in the context of revenge; it builds and builds and then there is a cathartic reversal by the hero or final girl.

Laugier goes out of his way to frustrate those expectations. The film is in some sense a rape/revenge; it starts with a young girl, Lucie, who is tortured; she escapes and some years later seeks vengeance on her abusers.

But Laugier does not allow us to feel the usual satisfying meaty thump of violence perpetrated and repaid. We don’t see any of the torture inflicted on Lucie; the film starts after she escapes. As a result, we don’t know who did what to her…and when she tracks down the people who she says are the perpetrators, we don’t know whether to believe her. So we don’t get the rush of revenge. Instead, we see our putative protagonist perform a cold-blooded, motiveless murder of a normal middle-class family, including their high-school-age kids.

We do find out later that the mother and father (though not their children) were the abusers…but by that time the emotional moment is lost. We don’t get to feel the revenge. On the contrary, no sooner have we realized that they deserved it than we swing back over to the rape. Lucie has already killed herself, but Anna, her friend, is captured and thrown into a dungeon, taking her former companion’s place. There she is tortured by an ordinary-looking couple who look much like the couple Lucie killed. Thus, instead of rape/revenge, we get the revenge with no rape, and the rape with no revenge. Violence is not regulated by justice or narrative convention; it just exists as trauma with no resolution.
 


 
I wouldn’t say that Martyrs is a perfect film, or a work of genius, or anything like that. Anna’s torture is done in the name of making a martyr of her; the torturers believe suffering will give her secret knowledge. And, as Charles points out, they end up being right — Anna does attain some sort of transcendence, a resolution which seems to justify the cruelty. And then there’s the inevitable final, stupid plot twist, when the only person who hears Anna explain her secret knowledge goes off into the bathroom and shoots herself. So no one will ever know what Anna saw, get it? Presumably this is supposed to be clever, but really it mostly feels like the filmmakers steered themselves into a narrative dead end and didn’t know how to get out.

Still, I think Charles is a bit harsh when he says that the film is meaninglessly monotonous, that it is not transgressive, and that the only thing it has to offer is to make the viewer wonder “can I endure this? can I justify my willingness to endure this?” Or, to put it another way, I think making people ask those questions is interesting and perhaps worthwhile in itself. It’s not easy to make violence onscreen feel unpleasant; it’s not easy to make people react to it like there’s something wrong with it. Even Charles’ demand that the film’s violence provide transgression — doesn’t that structurally put him on the side of the torturers (and arguably ultimately the filmmakers), who want trauma to create meaning?

Charles especially dislikes the handling of the high-school kids who are killed by Lucie. He argues that they are presented as innocent, because their lifestyle is never linked to their parents’ actions. Anna’s torture is mundane enough and monotonous enough to recall real atrocities, and conjure up real political torture — basically, a guy just walks up to her and starts hitting her. But the evocation of third-world regimes, or even of America’s torture regimen, no matter how skillfully referenced, falls flat since it is not brought home to the bourgeois naifs who live atop the abattoir.

Again, though, I think the disconnection, which seems deliberate, is in some ways a strength of the film, rather than a weakness. Violence isn’t rationalized or conventionally justified in Martyrs — except by the bad guys, who are pretty clearly insane. In a more standard slasher like Hostel, everyone is guilty, and everyone is punished. In Martyrs, though, you don’t get the satisfaction of seeing everyone get theirs, because scrambling the genre tropes makes the brutality unintelligible. The conventions that are supposed to allow us to make sense of the trauma don’t function in Martyrs — which makes it clear how much we want violence to speak in a voice we can understand.

The Recursive Mind

“From a scientific viewpoint, the only real contender for the seat of the mind, or even the soul, is the brain,” says Michael C. Corballis in his new book The Recursive Mind: The Origins of Human Language, Thought, and Civilization.

Corballis is an evolutionary biologist, and, as he mentions repeatedly, an atheist. So when he says that the brain is the “only real contender” for the soul, what he actually appears to mean is that there is no real contention at all. You don’t need to assume outside forces to explain human beings. You just need to look at the holy atavistic trinity of evolutionary psychology—primitive cultures, great apes, and autistics. Using the deviations of chimps, rain-forest dwellers, and Rain Man, science can triangulate normality through entirely material means. There is no need to postulate a soul, or God, or transcendence, or miracles.

The refusal of miracles is particularly important for Corballis, and it leads him to some surprising places. Specifically, it causes him to reject the idea that what makes humans into humans is language. Other writers, like Noam Chomsky, have argued that Homo sapiens became the Homo sapiens we know and (more or less) love when they learned to talk.

Chomsky believes the ability to understand language is innate, and that that ability has to precede the use of language itself. This creates a difficulty, though. Joe Hominid, in Chomsky’s view, would have gained no advantage just because deep in his skull he was suddenly able to talk to Jane Hominid. Eventually, of course, the Hominids would learn to converse and this would help them collaborate in the hunting and tracking of mammoths and/or tubers. Until they actually had language, though, the ability to speak would have done nothing for them.

Since, in Chomsky’s view, there was a lag between ability for language and actual language, natural selection is taken out of the picture. Instead, Chomsky suggests that the ability to use language was a bolt from the developmental blue. Or, in Corballis’ paraphrase, it was the result of, “some single and singular event causing a rewiring, perhaps a fortuitous mutation, in the brain.” Corballis notes drily that Chomsky’s “account, although not driven by religious doctrine, does smack of the miraculous.”

Corballis’ goal, then, is to get rid of the miracle. And he decides that the best way to do this is by unseating language as the key to humanity. For Corballis, “In the beginning was the Word” should be replaced by “In the beginning was recursion.”

Recursive thinking, for Corballis, is the ability to think about thinking. He identifies several recursive processes as characteristic of human beings. First, he points to mental time travel—the ability to imagine past events within current consciousness. This is the basis both of memory and of fiction, which for Corballis is a kind of memory of the future. Corballis also singles out theory of mind—the ability to imagine the state of mind of others (and therefore to imagine them imagining your state of mind, and you imagining their state of mind imagining your state of mind imagining their state of mind, and so on). Corballis argues that theory of mind allows for the development of language. In order to talk to somebody, you have to have a sense that there is a somebody, a consciousness, out there to talk to. Recursion allows humans to share each other’s thoughts, and it is the sharing of thoughts which allows for language, rather than language which allows for the sharing of thoughts.

It’s an intriguing thesis, and to defend it, Corballis comes up with—well, with not much, at least as far as I can tell. He shows beyond a shadow of a doubt that songbird patterns can be explained without assuming that songbirds have recursive thinking. He demonstrates that primates other than humans appear to have only a rudimentary theory of mind—though it’s hard to tell exactly how rudimentary, since their language is rudimentary too, so we can’t ask them. He notes that those with certain kinds of autism seem to have trouble with recursive thinking and with language. He puts great emphasis on the so-called mirror neurons in monkeys, which appear to be activated both when a monkey performs an action and when it watches another monkey perform the same action, and which also seem to have something to do with language. So the mirror neurons may link recursion and language—unless, of course, you turn to Corballis’ notes, where he admits that many researchers think the whole mirror-neuron/language connection is a load of monkey pooh.

John Horgan, writing in The Undiscovered Mind, suggested that Corballis’ difficulty in shoring up his theories is not his fault. Rather, it’s endemic to his discipline.

Evolutionary psychology is in many respects a strangely inconsequential exercise, especially given the evangelical fervor with which it is touted by its adherents. Evolutionists can take any set of psychological and social data and show how they can be explained in Darwinian terms. But they cannot perform experiments that will establish that their view is right and the alternative view is wrong—or vice versa.

The specific problem in Corballis’ book is that he cannot experimentally separate recursion and language. How does he know that language didn’t allow us to engage in recursive thinking rather than the other way around? His efforts to nail down this point—by, for example, referring to a remote tribe which some people think may have non-recursive language, or by pointing to autistic people who have difficulty with some kinds of recursive thought but can still learn language—are inconclusive. In fact, after reading this book, I’ve come away impressed not with how much evolutionary psychologists know, but how little. One sheepish note buried in the back of the book even admits that primatologists aren’t sure whether gorillas incessantly vocalize or hardly vocalize at all. If we can’t tell how often gorillas howl, how are we supposed to figure out how human speech is related to human consciousness?

It’s not that Corballis doesn’t have any good ideas. His argument that language developed first as gestures rather than speech, for example, seems both clever and perfectly plausible. And seeing recursion as the essential human trait is entirely reasonable… and even (perhaps despite Corballis’ best efforts) has theological precedent. Reinhold Niebuhr, for example, argued that what made humans human was their capacity for “self-transcendence.” Human beings can look at themselves looking at themselves; they know they’re going to live and that (less cheerily) they’re going to die. “Man’s melancholy over the prospect of death is the proof of his partial transcendence over the natural process which ends in death,” Niebuhr writes in his essay “Humour and Faith.” Recursion, our ability to see ourselves being ourselves, is, for Niebuhr, both our triumph and our tragedy.

Corballis doesn’t see it as a triumph or a tragedy, though. Nor does he phrase recursion in terms of self-transcendence. That sort of theological language is…well, too theological. Instead, Corballis prefers to discuss material things; why humans stood upright, where the Neanderthals went, how different languages indicate tense. All of which is certainly interesting, but misses the main point.

That point being that humans actually are fairly miraculous. I actually find Corballis’ argument for gradual change under evolutionary pressure more convincing than Chomsky’s theory of sudden mutation. And yet, Chomsky’s bolt from the blue is a metaphoric truth, even if it isn’t a factual one. Humans are really, really different than our closest relatives—more different than can be accounted for on the basis of evolution or genes. There’s a rupture there that defies fully material explanation.

Which is where language, followed or preceded by recursion, comes in. Language is both social, existing between individuals, and private, existing within the core of our identities. “I think therefore I am” is a piece of language. If it can’t be said, it doesn’t exist, and then where are we?

Perhaps even more importantly, language is a material thing; it’s a technology. But it’s also inseparable from ourselves. We create it and it creates us, recursively. Language retools us. We were apes—we still are apes—but we’re apes that are constantly remaking ourselves in the image of words such as “human being.” Evolutionary psychologists can natter on (as Corballis does) about how women are biologically programmed to be nurturers and men are biologically programmed for science. But the more they natter, the more they show that the nattering is what matters, not the programming. What our ancestors did is a lot less important than what they said.

And if the saying is the thing, it’s possible that Corballis is looking for the soul in the wrong place. Perhaps it’s not in the brain, after all. Maybe it’s where the Bible says it is—in that non-space between and within us known as the Word.
 


 

Schlock Blues

This first ran at Splice Today.
_________

Bonnie Raitt’s always had a bit of pop in her roots. Her 1971 self-titled debut included appearances by stone blues royalty A.C. Reed and Junior Wells alongside covers of contemporary pop tunes like Stephen Stills’ “Bluebird” and the Marvelettes’ “Danger Heartbreak Dead Ahead.” Two decades later, she was still at it. The title track of Nick of Time put Raitt’s earthy/sexy blues-pop voice over eighties drum machine and keyboards. With Don Was producing, the result was an authentic schlock charge that tore up AM radio. Blues becomes sentimental pap, sentimental pap becomes blues, and both of them are deployed in the interest of lyrics about getting older, watching your parents age, and listening to your friends tell you about their marriage falling apart. It’s middle-of-the-road music for boring middle-aged people — and as a precociously boring and precociously middle-aged twentysomething, I found it irresistible.
 


Raitt’s next album, 1991’s Luck of the Draw, was even more successful with the same formula — some blues licks, that smooth, real voice, smarmy easy-listening tunes delivered like they were roots truth and roots truth delivered like it was smarmy easy listening. “Something To Talk About” opens with a ridiculous giant crappy thudding drum and a background chorus that sounds like it’s had its collective brains scooped out with a melon-baller, all juxtaposed with Raitt’s growling guitar and that drawled “darlin’” which never fails to make me need to sit down and fan myself. It’s both gee-whiz corny and smolderingly sexy — a come-hither anthem for middle America.

In short, Raitt’s appeal lies not in being “real,” and not in being gratuitously ersatz, like Madonna or Bowie or even the Carpenters. Rather, when she’s on, she’s on because she’s a little bit authentic, a little bit pop — solidly middle-of-the-road. At her best, she sounds both tough and clueless; both knowing and approachable, as on the aching schmaltz heartbreak of Luck of the Draw’s “I Can’t Make You Love Me.”

Alas, the roots/pop balancing act is inherently unstable; step wrong one way and you’re recording pallid blues; step wrong the other way and you’re putting out unlistenable pap. Raitt’s done both of those things. Her second album, Give It Up, is much lauded as a triumph of authentic blues rock, but can also be seen as an exercise in irrelevance. If I wanted to listen to a tasteful, competent album, I’d go for Billie Holiday or Van Morrison or someone else with talent, you know?

Slipstream, Raitt’s latest release, tips over in the other direction. There’s still the blooze, of course, but even that seems pro forma at this point in her career — just another hollow gimmick like the keyboards and the drums and the vocal chorus. The reggae-lite of “Right Down the Line” is one of those vapid, peppy tunes that make you leap for the radio off-switch — a could-have-been earthier “Shiny Happy People,” if Raitt had the hitmaking power of yore. “Marriage Made in Hollywood” figures Raitt as a bland scold, mildly deriding our celebrity culture with all the satiric bite of a toothless, shapeless crooning bivalve. Whoever decided that Raitt would make a good social commentator needs a swift kick in the reality programming.

Still, there are moments of adequacy. Raitt’s voice sounds a little older, a little more strained, but her phrasing can still hold your attention. When she sings “I would be crazy if I took you back” on “Standing in the Doorway Crying,” you can feel the desire and the resignation. The opening lines of “God Only Knows” (“Darkness settles on the ground/leaves the day stumbling blind”) deliver a bleak charge…before the tune degenerates into Billy Joel-esque earnest piano confessional.

But so it goes. Raitt was never a great artist, or even, arguably, a very good one. Still, she managed to parlay her aesthetic incoherence, her bad taste, and that marvelous voice into some of my favorite pop ever. I doubt she’ll ever make another bearable album. I’m just grateful, and a little surprised, that she ever made any.

Utilitarian Review 7/27/13


Alyssa Herlocher
Scrotal Mountains
gouache on paper

 
On HU

Featured Archive Post: Matthias Wivel on the sacred in Chester Brown’s Paying For It.

Voices from the Archive: Kurt Busiek on copyright extension and comics.

Me on trying to choose whether to vote for racists or imperialists.

Patrick Carland on Aku No Hana and the politics of decadence.

Jog on cultural tourism and Only God Forgives.

Andrea Tang on the Yellow Peril in recent cinema.

Vom Marlowe on the weirdness of Black Butler.

Me on meaning and no meaning in John Porcellino’s Raindrops.
 
Utilitarians Everywhere

At Wired I argue that we don’t need no stinking Wonder Woman movie.

At the Reader I wrote about a great show at Woman Made gallery focusing on the aesthetics of porn.

At the Atlantic I wrote about censorship and porn on the Kindle.

At Splice I write about:

Obamacare for traditionalists.

Venus Santiago’s lactation stories and porn for women who work.

 
Other Links

Jes on engaging with music made by abusive men.

What Do You Mean, Raindrops?

[Image: panel from John Porcellino’s Raindrops]

I like the cat. It’s barely there; just a single line dividing inside and outside. And then it’s bound by the bottom of the panel, so the something inside and the nothing outside seem equally arbitrary. The tail is a separate thing; it could be a raindrop sliding down the surface of the panel. The cat’s eyes and nose could be raindrops too. The lit lit lit is the sound of the cat tail raindrop hitting the panel, and the sound of the eye and nose raindrop hitting the cat. One lit for each, the sound of rain dripping.

The window in the corner could just as easily be a painting, or a drawing. In fact, it is a drawing. Is that the delusion? Or the bare substance? The raindrops in the window, or the picture, are not raindrops. But they aren’t empty either. Ideas, not clinging, but falling…at least in theory.

I think the comic almost makes more sense if you rearrange the panels, or drop some of them. The monk’s questions and answers don’t really seem to add anything; it’s less a Socratic dialogue than a monologue with more or less distracting interjections. The fact that there’s a pretense of communication almost makes the thing more hermetic. If Ching-Ch’ing doesn’t have an interlocutor, then some of the contradictory statements seem less like things you have to parse, more like he’s vacillating inside his own head. Instead of setting his own conduct up in opposition to that of ordinary people, you could read him, without the monk, as saying that he, too, is an ordinary person, on the brink of falling into delusion about himself. In fact, treating the rain as a metaphor could be seen as a step into delusion. The rain is not people upside down falling into delusion about themselves. The rain is the rain. But the bare substance is hard to express. It turns into deluded people, or into the word “lit” (like “literature”?) or into the picture of a picture of rain. To express the bare substance, all you’ve got is representation.

The title design, with the little raindrops on either side, is pretty clearly twee. Maybe it’s the title Ching Ch’ing is referring to when he says that ordinary people pursue outside objects; the title is outside the comic, labeling it, and providing the one real drawing of rain (if you don’t count the cat’s tail as a raindrop, and count the window as a picture). The unnecessary fillip of design, and of such an unassumingly finicky design. The little “lits”, the cat, the bald-headed monk tilting his head just so, and the world in which equal line weight and lack of shading means that bodies and backgrounds fail to become each other only through the delicacy of reader and creator’s mutual forbearance — all of these seem to try to find profundity through ostentatious smallness. You wonder if the bare truth of Zen is a tea cosy.

In the first panel, “the sound outside” might be seen not as the sound outside the room (wherever that is), but rather as the sound outside the speech bubble itself. But the speech bubble has a sound inside itself too — or at least as much of one as the sound outside. If Ching Ch’ing is seen as a shape, then the sounds — his speech, the lit lit — are all outside him. Pursuing outside objects could be the words running outside the self, chasing those lit lits.

Or perhaps what’s outside is us, looking down, upside down over the page, falling into delusions, or on the brink of doing so, by trying to avoid falling into delusion by reading about avoiding falling into delusion.

In the little additional text at the bottom, Porcellino says you and I discuss how people cling to words and ideas when the Old Monk drops by. That makes the monk the rain, falling from outside to inside. But which monk is this? Is it Ching Ch’ing? Or is it the monk talking to Ching Ch’ing? I think it’s probably supposed to be the first, but I kind of like the idea of the straight-man monk showing up, maybe with the cat, and all of us standing around confused together. No rain.