April 19, 2013
NBC.com’s Nidhi Subbaraman just posted an article discussing the public’s role in the Boston Bombing case. I am quoted at the end of the piece and want to clarify my position, because what it says I said is not entirely what I meant (word limits!).
…Shirky and others, such as Whitney Phillips, an Internet scholar and lecturer at NYU who has studied the trolling behavior of 4chan, think caution is necessary at times like this.
“Strong moderation is just what you need to keep this kind of ridiculous gossip at bay,” said Phillips, but strong moderation is not the kind of thing usually seen on Reddit.
“I think crowdsourcing is a terrible idea during criminal investigations,” Phillips told NBC News.
I do think that crowdsourcing is a terrible idea during criminal investigations, under certain circumstances — namely when the crowdsourcers are making up the rules as they go along, without any consideration for the serious personal and legal repercussions of their actions. Vigilante crowdsourcing, in other words. Once law enforcement has actual confirmed information and is looking for public feedback, great! I’m with Shirky; if you see something, say something. But until then, stop trying to solve the crime using MS Paint. As the Salaheddin Barhoum and Sunil Tripathi cases highlight, this sort of approach has the potential to do far more harm than good. In fact, nothing good has come of Reddit’s involvement in the story. (Friday night update: see this detailed thread criticizing participating Redditors’ haphazard sleuthing.)
As for the issue of moderation — my full quote explained that platform moderators have, and should exercise, the ability to quash falsely incriminating details and/or surreptitiously acquired personal information before that information gets snatched up by lazy journalists and splashed across the front page of The New York Post. This is not a violation of “free speech,” as many Redditors might argue. Rather, it is a preemptive protective measure. As Shirky explains in the NBC article, the cost of failure is very high — and above and beyond being the responsible thing to do, it is (or at least seems like it should be) in platform moderators’ best interest to prevent, for example, being sued for libel.
April 8, 2013
The other day (how did I miss this?) Eric Benson at NY Mag posted a rundown of the increasingly elastic definition of the word “troll.” He interviewed me for the piece, which is always odd because these kinds of interviews are usually 30-45 minutes long but only yield one or two sentences. Media!
Quoth the me:
As with other robust Internet terms, trolling lends itself to more general meanings far removed from its origins. “To hear people talk about trolls in April 2013 is so different than people talked about it even in 2011,” says Whitney Phillips, an NYU lecturer in media studies who wrote her dissertation on Internet trolls. “You now encounter the word all day long.”
It’s a brave new world, kids!
January 1, 2013
Yesterday I appeared on Al Jazeera’s The Stream program alongside Aaron James of UC Irvine. The subject of the show (overview here) was TROLLS, and the degree to which they are killing the internet. As always, the term “troll” was contested, to the point of near-empty signification; throughout the broadcast it referred in turn to subcultural trolling, name-calling, racist online abuse, harassment, and identity theft. In almost every case, these behaviors were decried as antagonistic, destructive, and wholly deserving of immediate government or corporate intervention (ranging, as more than one Google hangout guest suggested, from passing laws to implementing on-site policies like upvoting systems). Anonymity was cited by many as the ultimate root of the problem, because people are never hateful towards each other under their real names, and no one is ever violent or racist in real life.
One thing that was not discussed – and something I wish I’d had the chance to talk about, since the conversation was primarily focused on the DARK SIDE of trolling, or perhaps more accurately on what many racists and assholes have taken to calling trolling (in other words: OH! My real life identity was linked to the hateful shit I’ve been spewing on Twitter under a pseudonym??? Just kidding everyone, I was only trolling!!!) – is the complicating fact that trolling is, or at least can be, an extremely effective tool against precisely the assbaggery to which this program was devoted. I know several trolls – and one troll in particular – whose greatest joy is to out or otherwise torment racists, homophobes and sexists who deserve to have their dumb asses handed to them, placing them directly in line (well, perhaps a bit uncomfortably in line) with the anti-troll crusaders who claim that the best response to trolling is to punish trolls. The funny thing is that many trolls wholeheartedly agree (though they might take issue with the definition of the term “troll,” as many reject the idea that being a bigot on the internet qualifies as trolling), and are more than happy to take up what many would regard as a righteous, anti-douchebag cause. This is where conversations about trolling (and more specifically, conversations about what to do about trolling) brush up against conversations about vigilante justice, immediately thickening the plot.
December 30, 2012
From Day 6 with Brent Bambury:
Scores of web users took it upon themselves this year to call out people, groups, even countries for what they saw as bad behaviour. A sort of vigilante morality police raged against bullies, despots, racists and trolls this year, while sites like Google+ and YouTube encouraged users to drop the shield of anonymity. Did the Internet find a kind of moral conscience this year? We put that question to internet trolling expert Whitney Phillips. Whitney just published a lengthy essay exploring many of the themes we discuss with her. Read it here.
And/or go to there to hear the interview!
October 19, 2012
I was just on HuffPost Live and boy are my undefined terms tired. The segment title was “Is Anonymity Good For Free Speech,” and the segment description is as follows:
With the recent unmasking of Reddit troll “ViolentAcrez” and Amanda Todd’s harasser, exposing anonymous trolls seems beneficial. But does anonymity protect free speech?
The problem is, this doesn’t make any sense. First of all, it isn’t clear that Amanda Todd’s real harasser was exposed — at the very least, there are enough red flags for me to slap on a big flashing “unconfirmed” sign. Secondly, and more annoyingly, “anonymity” and “free speech” are, on their own, empty abstract nouns. Do you mean anonymity like you’re posting to Twitter under an alt account not publicly attached to your name, but which is linked to your real account on Tweetdeck? Do you mean anonymity like posting anonymously to a blog that requires an email to register but then allows you to post under whatever name you want? Do you mean anonymity like the site mods can’t see your IP address, but maybe the paid staff can? Do you mean anonymity like not in any way traceable to your own name or IP address or anything that could possibly, in anyone’s hands, be connected back to you, ever? Do you mean anonymity like some sort of self-evident invisibility cloak you assume gives you the right to say whatever you want on the internet, because you’re white and you feel like it? Because the first few things are all things, in that they exist, but the last thing is not a thing; no one has the legal right to be anonymous on a privately-owned website. Not to say that anonymity online can’t be important and good; it’s just that there’s no law in the Constitution that says we all get to BE that, on Reddit. Unless you’re a federal whistleblower, in which case those protections already exist before you even go online (and even then, what you’re guaranteed isn’t anonymity but legal immunity).
And don’t even get me started on “free speech,” since…………that’s not a thing people have either, at least not in the way they might think. Free speech in the legal sense is about keeping the government in check (you know, that whole thing about the American Revolution, and tyranny and all), and ensures that Congress shall pass no laws restricting speech — except in the cases of fighting words, copyright infringement, child pornography, libel, incitement to violence, and threats. Those things are all illegal; Congress can pass all the laws they want restricting unprotected speech. So, it makes sense to talk about free speech if the government arrests you for engaging in speech that THEY say is restricted and YOU say is not (or shouldn’t be). It does not make sense to talk about free speech when a journalist exposes the identity of a pedophile on a privately-owned website that thinks pedophiles are pretty cool guys. Which isn’t to say that there aren’t ethical issues involved in the Violentacrez clusterwhoops; of course there are, jesus. But these aren’t free speech issues, and shouldn’t be described as such until the feds press charges against Brutsch (fingers crossed), and he contests the arrest on first amendment grounds.
So, any conversation that is predicated on the assumption that “anonymity” can “protect” “free speech” (because that’s what anonymity does best, action verbs) is doomed to end up chasing its own tail. In this particular case, I had no idea what was being discussed, and even less of an idea of how to connect these ideas to trolling. Other than anonymity, and free speech. WHICH ARE NOT ANSWERS, THEY ARE ABSTRACT NOUNS.
October 16, 2012
Australia’s SBS Insight program recently filmed a show about trolls. They brought me on as a guest. It was 4:00 in the morning! The full episode is here.
October 15, 2012
Today I published an article about trolling generally and Violentacrez specifically on The Atlantic. Here’s a snippet, which follows a longish discussion of the homologous and, ultimately, symbiotic relationship between trolls and sensationalist corporate media:
I am not arguing that members of the media are trolls, at least not in the subcultural sense. I am however suggesting that trolls and sensationalist corporate media have more in common than the latter would care to admit, and that by engaging in a grotesque pantomime of these best corporate practices, trolls call attention to how the sensationalist sausage is made. This certainly doesn’t give trolls a free pass, but it does serve as a reminder that ultimately, trolls are symptomatic of much larger problems. Decrying trolls without at least considering the ways in which they are embedded within and directly replicate existing systems is therefore tantamount to taking a swing at an object’s reflection and hanging a velvet rope around the object itself.
Full article here; shitstorm, I suspect, is imminent.
Update: Several people have mentioned that I didn’t acknowledge pre-4chan trolling/trollish behaviors. I agree; I didn’t talk much about that (although I do mention Usenet briefly in this linked post), because…well because that’s not what I was talking about. When I made the statement “Within the ranks of self-identifying trolls, a class of troublemaker whose roots can be traced back to 4chan’s infamous /b/ board” I was literally talking about the brand of self-identifying trolls whose roots can be traced back to 4chan. In other words, “modern” trolling subculture. There’s lots more to say about those earlier behaviors (which I discuss in greater detail in my dissertation), but that wasn’t the focus of the Atlantic piece. Someone else, please write that article!
September 6, 2012
My segment focuses on Holmies and the (I’d say natural and necessary) relationship between participating trolls (or whatever they were) and the media who made them a story. Also featured are the lovely Amanda Brennan of Know Your Meme; a strapping young lad named Chris Menning, whom I may have mentioned here before; fan fiction writer Alexa Dacre; Naomi Novik of the Organization for Transformative Works; and Professor Francesca Coppa of Muhlenberg College.
July 28, 2012
Over at the Atlantic, the lovely Kate Miltner (here she is on ROFLcon III’s “Adventures in Aca-meme-ia” panel, alongside myself, Jordan Lefler and Patrick Davidson) has written a very thoughtful article about the more upsetting macros/memes to emerge from last week’s shootings in Aurora, CO. Speaking of recent memetic output on Memegenerator, Miltner writes:
Over the past week, the site’s usual jokes about video games, forum culture, and bodily functions became interspersed with a darker humor: images that poked fun at the Aurora massacre. This trend surfaced across several different memes (Success Kid, Aaand It’s Gone, Y U NO, Condescending Wonka), but the tone was the same: irreverent and insensitive, mocking the victims as well as the crime itself. In one Condescending Wonka macro, the anonymous author snarked, “You shot at 71 people and only killed 12? Glad I don’t have your gamertag.”
In the face of a senseless tragedy like Aurora, such seemingly contemptuous humor is hard to understand: How can people be so glib about something so terrible?
Though we may think of these memes as something birthed to us by the Internet, this genre of humor predates contemporary online culture and stretches back alongside the rise of media that expose us to tragedies to which we have no direct connection. According to folklorist Bill Ellis, “disaster humor” is an important part of a response to tragedy, particularly the type of tragedy that becomes a media spectacle. In the aftermath of the 1986 Challenger disaster, such humor was common, and often verged on brutal: One of the most popular jokes to emerge after that event was, “What was the last thing that went through Christa McAuliffe’s mind? The control panel.”
Miltner goes on to discuss the ways in which the internet (and its constituent culture) has affected the production and dissemination of disaster humor. About midway through, she considers the thorny relationship between trollish and more “legitimate” (for lack of a better term) forms of subversive engagement, and quotes me as saying that “while some of these images might be offensive or upsetting to some, they’re created to make a social or political point, and not necessarily to offend,” i.e. to “merely” troll (of course there’s no such thing as “merely” trolling, but that’s a conversation for another day).
Seriously, go read the article.
Also, as a point of possible trollish interest, via my best, brightest and most shadowy of all possible consultants: the above Wonka image can actually be traced back to trollish engagement with the Pike River Mine collapse in 2010, making that particular sentiment as old as RIP trolling itself (well, 2010-2011’s wave of Facebook memorial page trolling).