May 20, 2013
New article on trolling and definitions! The setup: These days apparently everything on the internet that is lame/upsetting is “trolling.” This framing isn’t doing us any favors! From the article:
[I concede that language shifts over time; I'm not mad, bro] But describing all problematic online behaviors as trolling and all online aggressors as trolls is a bad idea. Not because there is only one “correct” way to troll, as some trolls might insist, but because using the term as a stand-in for everything terrible online is imprecise, unhelpful, and—most importantly—tends to obscure the underlying problem of offline bigotry and aggression.
For the thrilling conclusion, go here.
April 8, 2013
Today The New Inquiry ran my article “Dissecting the Frog,” which considers the cultural significance of humor. My primary focus is Gabriella Coleman’s analysis of humor within Free and Open Source Software (F/OSS) circles, but I also discuss my own work with trolls and the mainstream media tragedy-mongers who (are) troll(ed) (by) them. Here’s the overlap between both projects:
What Coleman’s and my respective research projects highlight, then, is the complicated relationship between humor, community formation, and the larger culture. Hacker humor and wit, for example, gestures both to the borders of the F/OSS community and to the much more pervasive logic of neo-liberalism, while specific trolling jokes serve as subcultural scaffolding and draw attention to the connections between trolling humor and mainstream culture, particularly sensationalist media. This culturally holistic approach to humor is particularly helpful when attempting to understand the most upsetting kinds of jokes. When framed as self-contained artifacts, hateful or otherwise corrosive jokes don’t do too much, beyond casting aspersions on the joke teller. But when placed in the context of a specific community, and even more revealing, when that community is placed in the context of the wider culture, corrosive jokes often have as much to tell us about the latter as they do about the former.
For a good time, read the full article here!
February 5, 2013
Today Ethnography Matters posted the second installment of my three-part guest post series. Here is the opening!
As promised in my last post, this post will discuss my role as a participant observer in the 2008-2012 troll space. It was weird, I hinted, which really is the only way to describe it. Because space is limited, I’m going to focus on three points of overlapping weirdness, namely troll blindness, real and perceived apologia, and ethnographic vampirism. There are other stories I could tell, and other points of weirdness I could discuss, but these are moments that taught me the most, for better and for worse.
The three points of weirdness include:
- It’s Just a Death Threat, Don’t Worry About It
- inb4 apologist
- You’re a Vampire, Whitney
In other words, it’s a comedy. Click here for the whole article.
December 19, 2012
After about 50 rounds of edits (THIS WAS NOT AN EASY ARTICLE TO WRITE), Kate Miltner and I finally finished our latest Awl piece on online shaming/vigilantism. We are much indebted to Carrie Frye at the Awl for her fantastic comments and revision suggestions, and her willingness to publish such a long read. Here is the opening section:
Whitney: Contrary to Nathan Heller’s Onion-worthy New York Magazine article lamenting the loss of the “hostile, predatory, somewhat haunted” feel of the early web, the internet of 2012 is not always a warm and fuzzy place. In fact, it can be pretty terrible, particularly for women, a point Katie J.M. Baker raises in her pointed response to Heller’s article. The internet is so far from a utopian paradise, in fact, that lawmakers in the US, UK, and Australia are scrambling to do something, anything, to combat online aggression and abuse.
Not everyone supports legal intervention, of course. Academics like Jonathan Zittrain readily concede that online harassment is a major concern, but they argue that the laws designed to counter these behaviors risk gutting the First Amendment. A better solution, Zittrain maintains, would be to innovate and implement on-site features that allow people to undo damage to someone’s reputation, livelihood, and/or peace of mind. As an example, during an interview with NPR, Zittrain suggested that Twitter users could be given the option to update or retract “bad” information, which would then ping everyone who interacted with the original tweet. Existing damage would thus be mitigated, and draconian censorship measures rendered unnecessary.
Regardless of the impact that either type of intervention might have, the fact is that today, this very second, there is often little recourse against behaviors that might be deeply upsetting, but aren’t quite illegal either. In those cases, what should be done? What can be done?
If recent high-profile controversies surrounding Violentacrez, Comfortably Smug, racist teens on Twitter, Lindsey Stone and Hunter Moore are any indication, it would seem that many people, members of the media very much included, are increasingly willing to take online justice into their own hands. Because these behaviors attempt to route around the existing chain of command (within mainstream media circles, the legal system, even on-site moderation policies), I’ve taken to describing them as a broad kind of online vigilantism. It might not be vigilantism in the Dog the Bounty Hunter sense, but it does—at least, it is meant to—call attention to and push back against some real or perceived offense.
Full article here!
November 15, 2012
Following last Tuesday’s election, a number of American teenagers took to Twitter and began spewing racist invective against President Obama. Jezebel blogger Tracie Egan Morrissey found these tweets and, in the name of teaching a lesson about accountability, contacted the teens’ high schools and athletic directors. She then published a long article in which she posted the teens’ personal information alongside their offensive (and I mean offensive) tweets. Chris Menning of Modern Primate has some things to say about the story, most notably the fact that Jezebel unfairly implicated an innocent teen in their exposé, acknowledged their mistake in an @-reply tweet, but did not issue an apology to Zoe Kimball, a long-suffering victim of trolling and online harassment, or address the fact that Jezebel’s editorial staff unceremoniously changed their header graphic after realizing what they had erroneously posted.
Setting aside the implications of sloppy journalistic practices (it’s called reverse Google image search, and it takes 30 seconds; that’s probably a good place to start when you decide to plaster some kid’s face front and center in an article accusing said kid of calling the President the N-word — at the very least, to see if there’s an Encyclopedia Dramatica article written about the target, as was the case with Zoe Kimball), and tabling the fact that the bigots in the Jezebel article are underage (I am not a developmental psychologist, do not know these kids, and can only speculate about whether or not any or all of them are mature enough to fully grasp the concept of “public” expression, or consequences generally), the Jezebel story poses another, and as far as I can tell, mostly unacknowledged problem — namely, the ways in which public shaming risks replacing one form of problematic online expression with another, and arguably worse, form of problematic online expression.
Some background: the implicit argument of the Jezebel article is that racism is alive and insidious as ever and that we need to do something, anything, to show that this sort of behavior will not be tolerated, and furthermore that you should watch what you say, because someone could be watching and go all Adrian Chen on your ass. As a friend of mine convincingly argued, Jezebel’s approach to racists can therefore be likened to a university’s zero-tolerance approach to smoking on campus (which is its own form of public shaming). These kinds of campaigns, whether anti-smoking or anti-racism, convey the message that THESE BEHAVIORS ARE NOT ACCEPTABLE, which ultimately (ideally) translates to behavioral change.
I don’t disagree with the basic premise that practice (“you’re not allowed to smoke here, because it’s a public health hazard”) impacts ideas (“I don’t want to be a smoker anymore”). But in the case of shaming racists on the internet, at least in the context of the Jezebel article, I wonder if the message conveyed to those who have/would post racist messages online isn’t “you shouldn’t be racist” but rather “you shouldn’t be racist…under your real name.” A surprising percentage of Jezebel’s reader comments seem to (inadvertently) argue as much, and provide slight variations on the assertion-cum-justification that “look, the tweets were public, if you post this stuff publicly, be prepared to be publicly shamed!!!!” – as if the kids’ misstep was to post their bile under their real names, and not the bile-posting itself. This isn’t to say that commenters on Jezebel are somehow complicit in the kids’ racist statements, but that their reactions give other racist kids (and adults) a compelling reason to create pseudonymous accounts on Twitter or elsewhere.
My issue with the Jezebel article, then, isn’t that I think people have a “right” to be terrible on the internet. In fact if one more person starts bleating about how they have a right to say whatever they want on Twitter because of free speech, I will throw my computer in the toilet (YOU GUYS, THAT’S NOT HOW FREE SPEECH WORKS). But I am wary of the implicit (again, if inadvertent) incentivizing of anonymous racist expression. Because the thing about anonymity is, once someone becomes anonymous, you lose them. You can’t appeal to their better nature because you don’t know whose nature it is. You can’t remind them of the real world implications of their speech and behavior, and can’t force them to confront the repercussions of their actions, because where would you even start? By pushing the behaviors underground, you risk creating a whole new, and arguably worse (at least trickier to handle), beast. Furthermore, and ironically, the very possibility of online shaming comes under threat. After all, you can’t shame people who can’t be found, and who therefore can’t be held accountable.
This doesn’t mean I reject the idea of exposing bigots on the internet. I actually think that public online shaming, if done carefully, may prove to be a better and more effective alternative to various censorship measures, which would be more problematic than helpful. But public shaming poses its own set of problems — problems Tracie Egan Morrissey tripped over when she didn’t double and triple and quadruple check to make sure the minors she was shaming weren’t themselves victims, as was Zoe Kimball (because even if it turns out that shaming a group of racist 15 year-olds is worth it, and will ultimately reduce the overall frequency and ferocity of online racism, you had better be damn sure you’re shaming the RIGHT 15 year-olds). The difficulty of getting one’s facts straight isn’t the only complicating factor, as even the most well-intentioned attempt to expose existing bigots might just be a catalyst for emboldening groups of even bigger (and more smugly entitled) bigots.
In the end, then, my argument is that the jury is still out, and that we should think a bit more about the ethical trade-offs of vigilante justice before we decide that public shaming is the best way to deal with problematic online behaviors. It may be that shaming is our best option, but it might not be. It would, I think, be best to proceed with caution.
November 8, 2012
Since adopting our 5-month-old rescue puppy Nathan, Chris and I have spent a great deal of time at the local dog parks. Over the months we have met a number of interesting characters, most of whom fall into 10 basic categories.
In no particular order, these categories are:
- The Frazzled Parent
- The Mean Old Man
- The Breedist
- The Helicopter Parent
- The Dog-Hater
- The Bench Warmer
- The Screamer
- The Apologist
- The Sign Ignorer
- The Know-It-All
You should head on over to Modern Primate & read all about it!
Your Tears Are Delicious: Liberals Dance the Waltz at the GOP’s Bawfest (CROSSPOST FROM MODERN PRIMATE)
November 7, 2012
A few notes on political Schadenfreude, one of my favorite emotions!
In the wake of yesterday’s thorough trouncing of the Lemon Republican Party, people—specifically liberals—have been throwing the word “Schadenfreude” around with finger-tenting aplomb. Shortly after midnight on Election Day, for example, Daily Kos published an image of a bottle of “Tears of Impotent Rage” captioned with the phrase “Drink All You Want” and simply titled “Schadenfreude.” Talking Points Memo proclaimed as much in its November 7 headline, which states that “Liberal Schadenfreude Hits Impossible Heights as Results Pour In,” a point a simple Twitter search of the word immediately confirms.
But is Schadenfreude all that’s going on here? My vote is for not exactly!!
MORE ON MODERN PRIMATE, MY FELLOW AMERICANS!
November 2, 2012
Today my friend Kate Miltner and I published a yak on The Awl (which is awesome), about political memes! Here is that framing:
Whitney Phillips: So memes, right!? Memes are so hot right now. MEMES. Whatever that term even means anymore! We are wading through puddles of memes, whether we’re talking about unimpressed gymnasts, television premieres, or NASA scientists. Memes have even broken into the political space—if there is anything more pervasive than political memes, it’s the discussion and collation of political memes. It’s now routine for news organizations like Time and ABC News to file helpful reports like “Binders Full Of Women: Ladies And Gentlemen, Your New Political Meme” and hurl around listicles of the “funniest debate memes” like so many politically-themed frisbees.
And people cannot seem to agree whether this is a good thing or the worst thing! For some, political memes represent an EPIC WIN for CROWDSOURCED DEMOCRACY. For others, they are a sign of the intellectual apocalypse! For many seasoned internet hands, they provide further proof that memes have become the victim of their own success. So which is it? Are political memes ushering in a new era of crowdsourced democracy, or are they proof that the American political discourse has become so superficial that we can’t digest anything longer than 140 characters?
This is a good article you guys, and the full text is here!!!
October 16, 2012
The following conversation – which is cross-posted here – is the first in a monthly (or whenever-we-feel-like-it) series of discussions about INTERNET. Today Kate Miltner (check her out) and I will be talking about the Violentacrez/creepshot controversy, paying particular attention to the weird ethical issues the story unearths. TAKE IT AWAY, US.
Kate: One of the things I find to be most interesting about the whole Violentacrez thing is how free speech and the right to anonymity/doxing have been brought to the fore. It’s something that I addressed in this post — clearly, free speech doesn’t mean free from repercussions (the point I made was that it’s one thing to experience social backlash for shitty comments, and entirely another to be incarcerated for them, which is what happened to two men over here).
Whitney: Yeah, I’m torn about the specifics of the case. As usual I have a bunch of overlapping opinions, some of which are slightly incompatible with each other. For example, whatever the moral/ethical issues raised by Brutsch, by Adrian Chen, by the social justice Tumblr trolls who have declared October “doxtober” and have taken to collecting information on a whole host of creepshots guys, I welcome the controversy — at the very least it throws into immediate question a number of unexamined assumptions about the democratizing nature of the internet (particularly on Reddit), about the social implications of anonymity (for better or worse), about the coherence of claims to “FREE SPEECH,” essentially about who the bad guys are and who the good guys are and what should be done to support one and combat the other. I think these questions are important, particularly the last one, and am glad people are asking them.
Kate: I agree that these are things that we need to talk about, but I don’t know that there are any easy solutions. The problem with things like free speech and anonymity/para-anonymity is that you can’t selectively grant them. That’s my issue with Section 127 here in the UK – it depends on a moral and subjective position on “offensiveness” and “indecency” which I think is just too risky. The UK isn’t the US and doesn’t have a Tea Party, but who’s to say that they won’t at some point? Obviously this is a tough area to navigate, and in the Violentacrez situation, I don’t disagree with Adrian entirely, and I also don’t disagree with the people who are upset about it, either. Not that I give a shit about Michael Brutsch, but deanonymizing (is that a word? Probably not) people in general is something that just makes me a bit squirmy.
Whitney: Well, doxing in Adrian’s case is a bit of a misnomer. He was told Brutsch’s name; he contacted Brutsch and told him he was doing a story about creepshots; Brutsch asked him not to out him; Adrian declined. That’s not doxing, that’s investigative reporting.
Kate: This is a very good point!
Whitney: The line between those two things is obviously fuzzy, but as far as I can tell, Adrian did everything entirely above board — it’s just that the implications of his choice to move forward on the story (a story which by the way served the public interest, i.e. should have been reported) were that a man’s life, a man who also happens to be a sexual predator (whatever bullshit excuse Brutsch might give; sorry buddy, if you’re fapping to young girls, or merely helping other men fap to young girls, you’re at the very least an enabler of other sexual predators), was ruined. I don’t feel sorry about that one bit, although I do understand why some people, for example Emily Bazelon at Slate, might have some major reservations.
Kate: I mean, I kinda sorta see her point here:
But there’s something so unsettlingly selective about the way in which we punish the few people whose bad Internet behaviors become mainstream notoriety. When prosecutors enforce the law to go after criminals, they’re supposed to do it uniformly. That’s often not how it works in practice but it’s always the standard. And that standard isn’t even on the table in the court of public opinion: We smell blood, and we circle. I wish Michael Brutsch could bear his share of responsibility and sprinkle the rest onto all the other trolls out there. That’s not how viral stories go, I know: We need someone to make an example of. But when the crowd turns vengeful, it gets ugly.
Right, so there are a few things here. First, making examples of people. As I discussed in my aforementioned post about Section 127 – clearly, it’s really unfortunate that Matthew Woods made those incredibly tasteless jokes about April Jones, but I think he’s a good example of the “unsettling selectivity” that Bazelon is talking about (and there are others who agree). I have seen much worse than what Woods wrote (as I’m sure you have) and there wasn’t any prosecuting of those incidents, because they were under the radar. However, when it comes to online “speech crimes” (side note: I also don’t think that offensively black humor is a crime), you can’t have the sort of uniformity of prosecution that Bazelon holds up as the standard, because then you’re entering into the territory of The Online Content Police, which NO, thank you, and also, even China doesn’t have that figured out, so good luck with that.
I also think that her point about Brutsch being unfairly singled out is a bit off the mark. Brutsch spearheaded a lot of the activity that is now being reviled. He enjoyed his position of authority within the Reddit community. Heavy is the head that wears the crown– leadership is a double-edged sword. If you are a ringleader, then you are the one who is going to be targeted for backlash if you’re doing something awful (like posting naked pictures of underage girls). It was all well and good when he was beloved by Reddit, but now All The Trolls are supposed to share in his downfall? I don’t buy that. CONSEQUENCES.
Whitney: CONSEQUENCES indeed. And I don’t mind saying that my heart bleeds not a drop for Brutsch. I agree that the internet hivemind can turn real ugly real fast, and hope I never do something so gross, so sustained, over such a long period of time that I end up in the hivemind’s crosshairs (I do hope my sarcasm comes through on that — it’s not like Brutsch made one stupid mistake or simply found himself in the wrong place at the wrong time like SO MANY of the hivemind’s unwitting victims). So yes, sure, vigilantism is a double-edged sword. But who is being targeted by whom MATTERS. This is not a case of an 11 year-old girl being thrown to the mercy of mostly-male, mostly-white Anonymous. This is an adult white male who gleefully exploited women and children. Sorry, but those details impact how I think about the outcome. Furthermore, Brutsch isn’t the only guy on the business end of both blades. The aforementioned Tumblr trolls are already working on an impressive dossier of dox, and let’s not forget that pedophiles have long been a favorite snack of self-identifying trolls. It may seem counterintuitive, but trolls LOVE screwing with pedophiles. Brutsch will not, in other words, be the only pedo to fall. And I won’t feel bad about any of it.
Nor will I feel bad if the resulting unmaskings force a bunch of gross entitled creeps (because in addition to being about sexual exploitation, creepshot-type behavior is ultimately about entitlement) to think twice before exercising this particular brand of “free speech.” Honestly, I think that people on the whole, and particularly gross entitled creeps, could do with a bit less “free speech” (although the notion of “free speech” is often muddled; when people talk about “free speech” in the context of offensive online speech it’s almost always the cartoon version that doesn’t actually mean anything other than “I am a special snowflake and you can’t tell me what to do” aka “waaaaaah”). Which isn’t to say I’m against “free speech” (again, I’m rolling my eyes at that term), but I do think that the “free speech” defense is, more often than not, a cop-out for those who don’t want to deal with the consequences of their own behavior. This is where my position becomes somewhat untenable, because the idea of CONSEQUENCES is actually a pretty slippery fish.
Kate: The slipperiest of fish! My point about the consequences in my post about Section 127 was that “consequences” in the UK now involve incarceration, as opposed to social ostracism (which can be equally harsh, but there is definitely a line there). That is why it’s so bothersome to me – it puts the UK in the same company as Russia (guh). Now, I don’t think that Violentacrez should have been protected for all of the reasons we’ve discussed. However, the thing that makes me uneasy is how, while most people think that the guy deserved what he got (and I don’t disagree), this would be an entirely different story if the person whose identity was revealed was a 16 year old gay kid in rural Alabama who was outed by homophobe scum and then targeted (“deserves what he got” is likely a very different story in that situation). While guaranteed anonymity is not a right, it’s an important tool in many situations, particularly for vulnerable groups. While I agree with you that who is being targeted by whom matters, that gets superbly messy very fast. I don’t think many people would agree that outing a pedophile is problematic, but it becomes very much so when the same processes are used by bigots and crazies to target innocent people. The tools of “justice” are available to everyone, and it’s not the Michael Brutschs of the world that I’m concerned about. I just think that we need to be careful about how celebratory we are about doxing (or revealing identities, whatever), no matter how “good” it is.
Whitney: I agree; the assertion that “he (or whoever) got what he deserved” is a dangerous claim to make, because as you point out, in the “wrong” hands, that same assertion could be used to justify, for example, homophobic violence. It’s a self-righteous position, one that only works so long as the person making the utterance is a “good” person with “good” motives. This isn’t to fall back on some subjectivist nonsense, i.e. I have a right to believe that gay people are people and you have a right to want to murder them. Some people are wrong, and need to shut the fuck up — or be made to shut the fuck up. But again, that’s a tricky argument, because what does it even mean?
WHAT DOES ANY OF IT EVEN MEAN???
Kate: I KNOW. I’ve been worried that by coming down squarely on the side of “free speech” in the Stacey/Woods debacle, people would think that I was like, “Racism/sexualized comments about an abducted child with cerebral palsy LOL WHATEVER”, which is not AT ALL the case. But the thing is, any time you make a subjective judgment when it comes to something legal (of course some people could argue that our entire legal system is based upon subjective moral judgments, but…ugh) it gets really complicated. I mean, I think that there is a big difference between some ignorant fuckwad spouting racist vitriol and an evil woman manipulating a teenager who suffers from depression (cyberbullying, another fun/agonizing topic!), mainly that we can choose to ignore the ignorant fuckwads. Yes, we want to support the good guys and combat the bad guys, and for all of my criticism of it, I think that’s what Section 127 is trying to do. My main concern is that laws like that are the types of things that end up getting abused, because “good”, “bad”, “indecent”, etc. can mean a lot of things to a lot of people.
Whitney: I definitely agree with the point that subjective judgments about legal behavior are tricky tricky, but the thing about Brutsch is, I don’t see how what he did WAS legal. At the very least it falls – seems to fall – into an (un)comfortable legal grey area, because although what he posted may not have “technically” met the legal threshold of child pornography (what a disgusting semantic distinction to draw), it certainly constitutes child endangerment, which the last time I checked is very much illegal. So it’s not just that our personal opinions are complicated and often context dependent, the laws themselves are complicated and often context dependent. So… I don’t know.
What I do know is that the rallying around Brutsch in support of “free speech” (again, eye roll) is completely bizarre, and tethers an ultimately facile political philosophy (“I have the right to do and say whatever I want and so do you, unless you’re using your speech to criticize my speech”) to the sexual exploitation of minors. It’s one tiger trap wrapped in another tiger trap, this story is.
October 15, 2012
Today I published an article about trolling generally and Violentacrez specifically on The Atlantic. Here’s a snippet, which follows a longish discussion of the homologous and, ultimately, symbiotic relationship between trolls and sensationalist corporate media:
I am not arguing that members of the media are trolls, at least not in the subcultural sense. I am however suggesting that trolls and sensationalist corporate media have more in common than the latter would care to admit, and that by engaging in a grotesque pantomime of these best corporate practices, trolls call attention to how the sensationalist sausage is made. This certainly doesn’t give trolls a free pass, but it does serve as a reminder that ultimately, trolls are symptomatic of much larger problems. Decrying trolls without at least considering the ways in which they are embedded within and directly replicate existing systems is therefore tantamount to taking a swing at an object’s reflection and hanging a velvet rope around the object itself.
Full article here; shitstorm, I suspect, is imminent.
Update: Several people have mentioned that I didn’t acknowledge pre-4chan trolling/trollish behaviors. I agree; I didn’t talk much about that (although I do mention Usenet briefly in this linked post), because…well because that’s not what I was talking about. When I made the statement “Within the ranks of self-identifying trolls, a class of troublemaker whose roots can be traced back to 4chan’s infamous /b/ board” I was literally talking about the brand of self-identifying trolls whose roots can be traced back to 4chan. In other words, “modern” trolling subculture. There’s lots more to say about those earlier behaviors (which I discuss in greater detail in my dissertation), but that wasn’t the focus of the Atlantic piece. Someone else, please write that article!