June 10, 2013 § Leave a comment
As I mentioned the other day, I’ve written an article for The Daily Dot in which I argue against the phrase “don’t feed the trolls.” The post just went live, so for a good time check it out. Here’s a snippet:
Instead of agreeing not to feed the trolls, thereby accepting the terms of the antagonist’s game, the target should be encouraged to respond with his or her own game—a game called Ruining This Asshole’s Day.
The first and most basic way to play Ruin This Asshole’s Day is to shut them down, ideally by unceremoniously deleting their comments. (This presumes that the target has some control over the posted content, and that the target can keep up with incoming comments, which isn’t always the case and immediately raises a nest of questions about best moderation practices—a conversation for another day.) This shouldn’t be done passively, as an act of acquiescence, but actively, as an exertion of power—specifically the one-two punch of a raised eyebrow and extended middle finger.
Now go read the rest please!
May 20, 2013 § 1 Comment
New article on trolling and definitions! The setup: these days, apparently everything on the internet that is lame/upsetting is “trolling.” This framing isn’t doing us any favors! From the article:
[I concede that language shifts over time; I'm not mad, bro] But describing all problematic online behaviors as trolling and all online aggressors as trolls is a bad idea. Not because there is only one “correct” way to troll, as some trolls might insist, but because using the term as a stand-in for everything terrible online is imprecise, unhelpful, and—most importantly—tends to obscure the underlying problem of offline bigotry and aggression.
For the thrilling conclusion, go here.
April 8, 2013 § Leave a comment
Today The New Inquiry ran my article “Dissecting the Frog,” which considers the cultural significance of humor. My primary focus is Gabriella Coleman’s analysis of humor within Free and Open Source Software (F/OSS) circles, but I also discuss my own work with trolls and the mainstream media tragedy-mongers who (are) troll(ed) (by) them. Here’s the overlap between both projects:
What Coleman’s and my respective research projects highlight, then, is the complicated relationship between humor, community formation, and the larger culture. Hacker humor and wit, for example, gesture both to the borders of the F/OSS community and to the much more pervasive logic of neo-liberalism, while specific trolling jokes serve as subcultural scaffolding and draw attention to the connections between trolling humor and mainstream culture, particularly sensationalist media. This culturally holistic approach to humor is particularly helpful when attempting to understand the most upsetting kinds of jokes. When framed as self-contained artifacts, hateful or otherwise corrosive jokes don’t do too much, beyond casting aspersions on the joke teller. But when placed in the context of a specific community, and, even more revealingly, when that community is placed in the context of the wider culture, corrosive jokes often have as much to tell us about the latter as they do about the former.
For a good time, read the full article here!
February 5, 2013 § Leave a comment
Today Ethnography Matters posted the second post in my three-part guest series. Here is the opening!
As promised in my last post, this post will discuss my role as a participant observer in the 2008-2012 troll space. It was weird, I hinted, which really is the only way to describe it. Because space is limited, I’m going to focus on three points of overlapping weirdness, namely troll blindness, real and perceived apologia, and ethnographic vampirism. There are other stories I could tell, and other points of weirdness I could discuss, but these are moments that taught me the most, for better and for worse.
The three points of weirdness include:
- It’s Just a Death Threat, Don’t Worry About It
- inb4 apologist
- You’re a Vampire, Whitney
In other words, it’s a comedy. Click here for the whole article.
December 19, 2012 § Leave a comment
After about 50 rounds of edits (THIS WAS NOT AN EASY ARTICLE TO WRITE), Kate Miltner and I finally finished our latest Awl piece on online shaming/vigilantism. We are much indebted to Carrie Frye at the Awl for her fantastic comments and revision suggestions, and her willingness to publish such a long read. Here is the opening section:
Whitney: Contrary to Nathan Heller’s Onion-worthy New York Magazine article lamenting the loss of the “hostile, predatory, somewhat haunted” feel of the early web, the internet of 2012 is not always a warm and fuzzy place. In fact it can be pretty terrible, particularly for women, a point Katie J.M. Baker raises in her pointed response to Heller’s article. The internet is so far from a utopian paradise, in fact, that lawmakers in the US, UK, and Australia are scrambling to do something, anything, to combat online aggression and abuse.
Not everyone supports legal intervention, of course. Academics like Jonathan Zittrain readily concede that online harassment is a major concern, but they argue that the laws designed to counter these behaviors risk gutting the First Amendment. A better solution, Zittrain maintains, would be to innovate and implement on-site features that allow people to undo damage to someone’s reputation, livelihood, and/or peace of mind. As an example, during an interview with NPR, Zittrain suggested that Twitter users could be given the option to update or retract “bad” information, which would then ping everyone who interacted with the original tweet. Existing damage would thus be mitigated, and draconian censorship measures rendered unnecessary.
Regardless of the impact that either type of intervention might have, the fact is that today, this very second, there is often little recourse against behaviors that might be deeply upsetting, but aren’t quite illegal either. In those cases, what should be done? What can be done?
If recent high-profile controversies surrounding Violentacrez, Comfortably Smug, racist teens on Twitter, Lindsey Stone and Hunter Moore are any indication, it would seem that many people, members of the media very much included, are increasingly willing to take online justice into their own hands. Because these behaviors attempt to route around the existing chain of command (within mainstream media circles, the legal system, even on-site moderation policies), I’ve taken to describing them as a broad kind of online vigilantism. It might not be vigilantism in the Dog the Bounty Hunter sense, but it does—at least, it is meant to—call attention to and push back against some real or perceived offense.
Full article here!
November 15, 2012 § 1 Comment
Following last Tuesday’s election, a number of American teenagers took to Twitter and began spewing racist invective against President Obama. Jezebel blogger Tracie Egan Morrissey found these tweets and, in the name of teaching a lesson about accountability, contacted the teens’ high schools and athletic directors. She then published a long article in which she posted the teens’ personal information alongside their offensive (and I mean offensive) tweets. Chris Menning of Modern Primate has some things to say about the story, most notably the fact that Jezebel unfairly implicated an innocent teen in their exposé, acknowledged their mistake in an @-reply tweet, but did not issue an apology to Zoe Kimball, a long-suffering victim of trolling and online harassment, or address the fact that Jezebel’s editorial staff unceremoniously changed their header graphic after realizing what they had erroneously posted.
Setting aside the implications of sloppy journalistic practices (it’s called reverse Google image search, and it takes 30 seconds; that’s probably a good place to start when you decide to plaster some kid’s face front and center in an article accusing said kid of calling the President the N-word — at the very least, to see if there’s an Encyclopedia Dramatica article written about the target, as was the case with Zoe Kimball), and tabling the fact that the bigots in the Jezebel article are underage (I am not a developmental psychologist, do not know these kids, and can only speculate about whether or not any or all of them are mature enough to fully grasp the concept of “public” expression, or consequences generally), the Jezebel story poses another, and as far as I can tell, mostly unacknowledged problem — namely, the ways in which public shaming risks replacing one form of problematic online expression with another, and arguably worse, form of problematic online expression.
Some background: the implicit argument of the Jezebel article is that racism is alive and insidious as ever and that we need to do something, anything, to show that this sort of behavior will not be tolerated, and furthermore that you should watch what you say, because someone could be watching and go all Adrian Chen on your ass. As a friend of mine convincingly argued, Jezebel’s approach to racists can therefore be likened to a university’s zero-tolerance approach to smoking on campus (which is its own form of public shaming). These kinds of campaigns, whether anti-smoking or anti-racism, convey the message that THESE BEHAVIORS ARE NOT ACCEPTABLE, which ultimately (ideally) translates to behavioral change.
I don’t disagree with the basic premise that practice (“you’re not allowed to smoke here, because it’s a public health hazard”) impacts ideas (“I don’t want to be a smoker anymore”). But in the case of shaming racists on the internet, at least in the context of the Jezebel article, I wonder if the message conveyed to those who have/would post racist messages online isn’t “you shouldn’t be racist” but rather “you shouldn’t be racist…under your real name.” A surprising percentage of Jezebel’s reader comments seem to (inadvertently) argue as much, and provide slight variations on the assertion-cum-justification that “look, the tweets were public, if you post this stuff publicly, be prepared to be publicly shamed!!!!”—as if the kids’ misstep was to post their bile under their real names, and not the bile-posting itself. This isn’t to say that commenters on Jezebel are somehow complicit in the kids’ racist statements, but that their reactions give other racist kids (and adults) a compelling reason to create pseudonymous accounts on Twitter or elsewhere.
My issue with the Jezebel article, then, isn’t that I think people have a “right” to be terrible on the internet. In fact if one more person starts bleating about how they have a right to say whatever they want on Twitter because of free speech, I will throw my computer in the toilet (YOU GUYS, THAT’S NOT HOW FREE SPEECH WORKS). But I am wary of the implicit (again, if inadvertent) incentivizing of anonymous racist expression. Because the thing about anonymity is, once someone becomes anonymous, you lose them. You can’t appeal to their better nature because you don’t know whose nature it is. You can’t remind them of the real world implications of their speech and behavior, and can’t force them to confront the repercussions of their actions, because where would you even start? By pushing the behaviors underground, you risk creating a whole new, and arguably worse (at least trickier to handle), beast. Furthermore, and ironically, the very possibility of online shaming comes under threat. After all, you can’t shame people who can’t be found, and who therefore can’t be held accountable.
This doesn’t mean I reject the idea of exposing bigots on the internet. I actually think that public online shaming, if done carefully, may prove to be a better and more effective alternative to various censorship measures, which would be more problematic than helpful. But public shaming poses its own set of problems—problems Tracie Egan Morrissey tripped over when she didn’t double and triple and quadruple check to make sure the minors she was shaming weren’t themselves victims, as was Zoe Kimball (because even if it turns out that shaming a group of racist 15 year-olds is worth it, and will ultimately reduce the overall frequency and ferocity of online racism, you had better be damn sure you’re shaming the RIGHT 15 year-olds). The difficulty of getting one’s facts straight isn’t the only complicating factor, as even the most well-intentioned attempt to expose existing bigots might just be a catalyst for emboldening groups of even bigger (and more smugly entitled) bigots.
In the end, then, my argument is that the jury is still out, and that we should think a bit more about the ethical trade-offs of vigilante justice before we decide that public shaming is the best way to deal with problematic online behaviors. It may be that shaming is our best option, but it might not be. It would, I think, be best to proceed with caution.