Comment Moderation, “Free Speech,” and Feminism: Interview with Sam Meier
October 18, 2013
Earlier this week I was approached by Sam Meier, editor of news and analysis startup PolicyMic. She asked to interview me for an article she’s writing about online commenting, and I was happy to oblige. She’ll be integrating my answers into her forthcoming article, but encouraged me to publish the full exchange, which I’m pasting below. This is a pretty long read, so go get some popcorn or something. More after the jump!
Sam: A lot of internet-based writers will express that they are at best wary of commenters, if they read their comments at all. As a person hired to write feminist analysis, I’ve run into my fair share of commenters who I found it best to simply ignore. For every commenter who pointed out a flaw in my argument, there was one who had obviously not read the article at all.
In one truly bizarre instance, a commenter went from telling me that women should be against feminism to complimenting me for being so pretty to writing, when I explained that to me being a feminist meant fighting for equality for all, “That makes you the Martin Luther King Jr of of your cause. A dream that can be lived …a means to an end watch out for the al sharptons and jesse jackson of fem. they arnt going to let the dream be realized because then they wont have a job!!!” He then went on to call a fellow feminist writer a cum dumpster who should die, and got himself banned from the site.
As these examples illustrate, commenters are a pretty mixed bag for writers. At the Daily Dot, you wrote: “On an elemental level, comment moderation policies call into question the ideal relationship between writer and commenters.” What do you think the ideal relationship is? Who defined this ideal relationship, how, and when? When you talk about commenters here, do you mean readers who respond to material in the broadest (pre-internet/offline) sense, or just internet commenters?
Whitney: The ideal relationship between writer (the person whose online article it is) and commenter (the people who read and respond to said online article in the comments section)* is one characterized by open, respectful, and engaged discourse that simultaneously fosters a sense of community and generates further discussion among participants. This is a good answer, and I would argue that it is the right answer, but it is also an oversimplified and ultimately meaningless answer — the devil is in the variables. For example, who’s the writer? Who are the commenters? What are the existing on-site moderation policies? Whose voices do these policies favor? Whose voices do they silence? What is the subject of the post? What are the political stakes of whatever argument? Does it matter if 95% of the commenters are male/white/whatever other group? Where was the article posted? Was the writer compensated? Considered individually, not all these variables are equally (or even necessarily) significant. But in order to furnish a proper response to the question “what is the ideal relationship between writer and commenter,” you have to take everything into account; otherwise you’re just speaking in platitudes.
*There are myriad arrangements within the writer/commenter dichotomy. A writer could be the author of a book, a writer for stage and screen, a blogger, or even a marketing person, while commenters could be the people who write newspaper reviews, or the people who holler obscenities at the television screen (me), or academic panel discussants, and so on. Here I’m restricting my focus to your everyday online arrangement (the writer writes articles and the commenter posts comments in the comment section).
Sam: As you note, certain writers will point out that commenters add value or knowledge to a post. People who defend comments sections as “valuable” tend to hold this view; Nick Denton has been banking on this through 8 million different commenting systems. A recent implicit dispute at the Atlantic between James Fallows and Ta-Nehisi Coates shows the “knowledge value” of comments is a significant reason why we want to make comments better.
One issue, as Anna North points out, is that comments sections are not created equal. At PolicyMic, what I’ve noticed is that commenters will form roving “packs” of sorts, based on political viewpoints, and follow each other around from article to article, essentially drowning out any dissent, or collectively crying out against censorship if they are prevented from commenting or their comments are removed.
When I spoke with Dan Whaley from hypothes.is, he pointed out that these tactics are well known to people who build reputation systems. For instance, in the “Sybil attack,” one commenter will create a large number of pseudonymous accounts to reinforce whatever set of points they want to iterate (and reiterate!) at your site. It’s hard to know, as an editor/moderator, when you’re dealing with real dissent, or pseudo-controversy.
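To make the Sybil attack concrete, here is a toy sketch (not from the interview — the account names, numbers, and the naive tallying scheme are all invented for illustration) of why a reputation system that simply counts one vote per account is vulnerable to a single attacker with many pseudonymous registrations:

```python
# Toy illustration of a Sybil attack on a naive vote-count
# reputation system. All names and counts are hypothetical.
from collections import Counter

def tally(votes):
    """Naive reputation: every account's vote counts equally."""
    score = Counter()
    for account, comment_id in votes:
        score[comment_id] += 1
    return score

# Ten genuine readers each upvote the comment "dissent".
genuine = [(f"reader{i}", "dissent") for i in range(10)]

# One attacker registers 50 pseudonymous "sock" accounts and
# upvotes their own comment "astroturf" from each of them.
sybils = [(f"sock{i}", "astroturf") for i in range(50)]

scores = tally(genuine + sybils)
# The manufactured consensus outranks the genuine one,
# even though it reflects a single person's view.
assert scores["astroturf"] > scores["dissent"]
```

This is why reputation-system designers weight votes by something costlier than account creation (account age, verified identity, prior contribution history) — and why, as an editor, raw comment volume is such a poor signal of real dissent.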
In what situations do you believe productive commenting occurs?
Whitney: Again, this question is almost entirely dependent on the details of the story/community/writer/article subject. In some cases, productive commenting occurs when commenters are respectful and measured. In other cases, particularly where the article is itself an obscenity, productive commenting occurs when commenters call the writer out for their sloppy or otherwise offensive work (although I’m a staunch opponent of needless, hate-based harassment, I’m no pacifist — sometimes aggressive or otherwise trollish rhetorical tactics are the best and most effective option). This is where power dynamics really matter. Are we talking fifty feminists berating an adult misogynist, or fifty adult misogynists berating a teenaged girl? Those details make all the difference. The other detail that matters is how you’re using the term “productive.” Depending on the circumstance, “productive” might mean “erudite and witty.” In another, it might mean “ability to raise political awareness” (which itself doesn’t need to be positive, but could mean, simply, that some injustice has been brought to public attention).
Sam: There’s been quite a bit of talk about the Twitter “report abuse” button, which came about after Caroline Criado-Perez received thousands of abusive Tweets. The basic argument goes that companies like Facebook and Twitter are obliged to deal with these incidents because they violate the platform’s terms of service. But it seems that many people believe that the platform has to go beyond mere enforcement of the TOS and into taking action to prevent cyberstalking or cyberharassment, particularly for vulnerable populations.
For instance, activists Soraya Chemaly, Laura Bates and others wrote in an open letter regarding rape imagery at Facebook, “Although Facebook claims, in a narrowly-defined defense of free speech, not to be involved in challenging norms or censoring people’s speech, you have in place procedures, terms and community guidelines that you interpret and enforce. Facebook prohibits hate speech and your moderators deal with content that is violently racist, homophobic, Islamophobic, and anti-Semitic every day. Your refusal to similarly address gender-based hate speech marginalizes girls and women, sidelines our experiences and concerns, and contributes to violence against them. Facebook is an enormous social network with more than a billion users around the world, making your site extremely influential in shaping social and cultural norms and behaviors.” This argument seems to imply that Facebook is so vast and influential that it’s not just a social network; it’s a virtual society, which must be regulated as such.
Keeping these kinds of issues in mind, when you talk about a platform’s “responsibility” to deal with racist/sexist/homophobic etc. trolls, is this a legal, civic, or business-oriented responsibility?
Whitney: All of the above, though the financial responsibility (i.e. to one’s shareholders) is the most cynical and politically empty. It goes something like “hey websites, do you want to make more money? Then don’t alienate women, gays, lesbians, trans people, and people of color, because they too can be commoditized through advertisements.” Again, cynical, and not an argument I like, though it happens to be true. A friend and former University of Oregon colleague made a similar point in response to a recent On the Media article in which the author lamented Amazon’s decision to remove rape and pedophilia fiction from its virtual shelves. As my friend states in the comments section of this blog post:
Amazon removed these books not exactly or just because of offensive content they are taking some moral stand against, but because they don’t sell enough to justify what the backlash might cost. The tens of people who will buy this garbage isn’t worth the potential for boycotts from angry people who don’t like them carrying it.
The legal and civic questions are more complicated, particularly in cases where the posted content falls into the gray area of protected speech (I mean “protected speech” in the actual Constitutional sense, not the colloquial sense, which I’ll get to in a second). If someone posts, say, child pornography to 4chan, or if you’re YouTube and someone posts a video featuring copyrighted content, eventually that content gets yanked — if it didn’t, the website would risk losing its safe harbor protections. But when it comes to racist or homophobic or otherwise hateful slurs, private platforms can choose to keep the offending content, or choose to remove it. My basic argument is that it would be better if platforms spared their readers from having to wade through the sort of bile that accomplishes nothing but to harass, alienate, and intimidate other readers — an argument I make knowing full well that some platforms are explicitly, and perhaps gleefully, hateful spaces, and therefore wouldn’t in a million years even entertain the thought of removing offensive content. I get that, and wouldn’t waste my breath arguing with those types anyway.
But as I said, comment moderation is a CHOICE. It might be an expensive choice (proper moderation –beyond simple word filtering– requires human eyeballs, and human eyeballs cost money), but a particular platform can say, at the very least, that they’re not interested in preemptively siding with racists and misogynists, and take whatever measures they can given their circumstances. It’s a pretty simple line in the sand, actually, but is one people tend to dance around. This is unsurprising, since on the internet (in the U.S. anyway) any whiff of censorship sends people into a panic about FREE SPEECH (that is, free speech in the colloquial sense) — despite the fact that the refusal to censor hateful speech has the unfortunate tendency to silence or scare away underrepresented voices, thereby lessening, not increasing, the overall amount of speech. Which is odd, especially given that the whole point of “free speech” is (ostensibly) to ensure that speech is free for everyone. But that’s not how it works in practice.
Sam: Commenters love to invoke “free speech” as a justification for saying whatever it is that a moderator has opted to remove. When I’ve moderated comments at PolicyMic, I’ve had to explain over and over the basic right of community participation on a privately owned site: If you’ve agreed to the terms of service for a platform where I have the authority to remove comments based on our policies, you don’t have “free speech.” If you want to protest my removing your comment, you can do it on your own website, Twitter, or Facebook. (And I’ve seen that happen, too!)
But I’m not The Man; I’m an editor. I’m not censoring you as an agent of the state; I’m removing a comment that doesn’t fit with the goals of a space that I curate and attend to.
My suspicion is that people are using the words “free speech” to invoke a different concept related to the right to speak and be heard. I’ve talked about this a great deal with Michael Erard, who wrote this great history of commenting at the New York Times magazine. I think it’s what Habermas termed the “public sphere” — a space where people can gather as equals to address matters of collective concern, and issue their verdict. Jodi Dean, a political theorist who has written powerfully about the limits of networked communication, is my main guide in understanding this phenomenon. Erard pointed me to Citizen Critics, by Rosa Eberly, which articulates similar ideas about “public discussion among private individuals.”
Based on your Daily Dot article, it’s clear that you’re interested in the more colloquial understanding of free speech. Can you elaborate on what you believe this understanding is, and how people come by it?
Whitney: Though I am hardly the first to have done so, I draw a distinction between Free Speech in the Constitutional sense and free speech in the colloquial internet sense in this Daily Dot article. The nutshell version is as follows: Free Speech in the Constitutional sense is a legal protection from government interference (and/or agents of the state, as you describe above), not carte blanche freedom to say anything, ever, under all possible circumstances (for one thing, there are a handful of exceptions to Constitutionally-protected speech, enumerated here). Private citizens and businesses (websites very much included) can tell you to shut your filthy cakehole all day long and your First Amendment protections will emerge unscathed. Constitutional issues might arise if a person is taken into custody and/or disciplined (in the institutional sense) for something they’ve said or done (“speech” in the context of the First Amendment includes flag burning and other forms of symbolic protest), or when the government attempts to pass sweeping laws restricting certain forms of speech. In those cases you’re looking at a proper capital F, capital S Free Speech issue — not when an online moderator decides to delete one of your TOS-violating comments on a privately-owned platform. That’s just housekeeping.
But on the internet –and again, I’m restricting my focus here to the U.S., where entirely unfettered speech is taken to be a kind of spiritual birthright, at least for the people whose speech has historically been the most free (namely straight white men)– this sort of housekeeping is regarded by many as being outright un-American, because…free speech (as I note in the Daily Dot article, “free speech” is often the –ahem, circular– reason someone gives in support of their claim to free speech).
It’s difficult to pinpoint the exact origins of overdetermined cultural tropes, but I suspect that concern over free speech is actually born of a slippery-slope paranoia in which any act of censorship threatens, or could potentially threaten, the speaker’s speech. From that perspective, fears about free speech are actually fears about MY speech: because I want to be able to say what I want without outside interference, I will defend with my life your right to say what you want, regardless of how awful. I might be talking about you and your speech, but ultimately it’s about ME, which would explain why people get so damn defensive the second anyone starts talking about comment moderation.
Furthermore, the idea that free speech is synonymous with MY speech would explain why white men tend to be the most vocal proponents of free speech on the internet, and why you just don’t see very many women, gays and lesbians, and people of color expressing similar concerns. Because unlike white men, whose speech has always been the most free, and who therefore have something to lose, members of historically underrepresented groups haven’t enjoyed the same kind of privilege, and consequently are less likely to throw a temper tantrum at the thought of not being heard. Unfortunately, that’s a cross they’ve borne their whole lives.
Sam: One thing that’s striking to me in my work at PolicyMic is that, as far as I can tell, my generation immediately turns to web tools for news, and simply to get our voices heard — perhaps because we correctly fear that we will be dismissed when we try to take other forms of action. This helps explain why we care about commenters at all — they’re part of this new knowledge ecosystem; what they can and can’t say has a direct impact on the overall public discourse.
How do you think the internet has colored or continues to color our understanding of speech? Is internet speech different than offline speech? If so, how?
Whitney: Although I would stop well short of suggesting that the internet has fundamentally changed anything about human behavior, it does tend to amplify existing cultural logics. In the context of questions about and around free speech (or “free speech,” as the case may be), I’d say that the internet makes already-entitled people feel more entitled, largely because social media incentivizes self-involvement and solipsism. Online, your social world quite literally revolves around you, like spokes on a wheel; furthermore people are encouraged to pick and choose the content they encounter, and are frequently –and often unbeknownst to them– picked and chosen for by social behemoths like Facebook and Google (Eli Pariser refers to this phenomenon as “online filter bubbles”). Because this world revolves around you –in the process making lots of platform owners lots of money– it’s very easy to assume that you OWN the place, and should be able to do (and say) what you want. If you come to that arrangement with an already-overgrown sense of entitlement (likely born of intersecting social and economic privilege), you are simply more likely to act like an asshole.
So no, I wouldn’t say that there’s any real difference between online and offline speech — it’s just that online, people often have different expectations for their speech, which has as much to do with the technological affordances and behavioral expectations of the online spaces they frequent as it does with the individual person whose speech it is.
Sam: As you note in your Awl piece about William Shatner, many platforms are privately owned, and thus can institute whatever rules they see fit, which I referred to above. Do you think many or most commenters understand this? I’ve seen signs which suggest … well, sort of. This article about Berkeleyside’s new commenting policies, for example.
But what do you think the implications of having a growing number of private platforms for speech are? Here, I’m thinking of sites like Gawker, which as I mentioned before is actively expanding its commenting into a site of sorts in its own right. The New York Times has experimented with highly structured and curated comments that are almost content-like, especially when you look at content platforms like Medium. Would we still consider that “free speech” if all the content belongs, in a sense, to these other platforms?
Whitney: It’s hard to say what many or most commenters understand. Given the frequency with which discussions of comment moderation evolve (or devolve, depending on how you look at it) into chest-beating free speech jam sessions, I’d say that a significant percentage of internet users don’t know what the First Amendment actually does (the difference between a “Congress shall make no law…” versus a “don’t tell me what to do” rhetorical framing). But how much of this is genuine ignorance and how much is, say, trolling, is another matter entirely (I’ve long suspected that there’s a considerable trollish undertone to many an online free speech flamewar — at the very least, “BUT FREE SPEECH!” works great as troll bait).
In terms of the implications of the growing number of private platforms, this is hardly a new phenomenon; the Web has always been overrun with private platforms. And although they are frequently described as such, these publicly-visible but privately-owned websites and forums are not akin to the public square, not exactly; they’re more like a series of grocery stores. You may have a Constitutionally-protected right to stand on a street corner and spew rabid nonsense, but if you try to do the same thing in the canned food aisle of your local Safeway, you’ll likely be asked to leave or even escorted out — which might not be awesome, but isn’t a capital F, capital S Free Speech issue. Those sorts of decisions are at the manager’s discretion, per the store’s security policies, just as deleting a comment or user profile is at the platform moderator’s discretion, per the site’s terms of service agreement.
Let me be clear, though, I’m not saying that Free Speech is a perfectly straightforward legal category; it isn’t (and that’s an understatement). But in the context of online platform moderation, it’s helpful to draw a basic distinction between Free Speech in the Constitutional sense and free speech in the colloquial sense. Each category comes with its own historical and political baggage, and those are wires we should avoid crossing, for the sake of clarity if nothing else.
Another critical point: by suggesting we draw this basic distinction and criticizing the ways in which “free speech” gets bandied about by bros on the internet, I am not, I repeat, NOT suggesting that the general issue of online censorship –considered independently of First Amendment speech issues– isn’t a big deal. It absolutely is; who gets to delete what is every bit as loaded an issue as who gets to say what. Because what if the person deleting whatever comments has a politically regressive, bigoted agenda? What if the person deleting whatever comments hates women, and doesn’t want people with feminine-sounding names posting to his site? What if the person deleting whatever comments simply doesn’t agree with you on a particular issue, and wants to stack the rhetorical deck to make it look like there’s more agreement over whatever issue than there actually is? In these cases, comment moderation might cause more harm than good. But who gets to determine what qualifies as harm? To whom should we ultimately defer?
I do not know the answer to these questions. In fact I actively struggle to reconcile my often-conflicting attitudes towards “appropriate” applications of bias in comment moderation. On the one hand, I understand the dangers of trying to universalize a particular politics. On the other hand, I refuse to slide back into wishy-washy relativist language that values all politics equally — for the simple reason that some politics suck. Homophobia sucks. Racism sucks. Misogyny sucks. They should not be enabled or encouraged, even indirectly (which is one consequence of lax moderation policies). Those two perspectives –bias can be dangerous, but so can relativism– are tricky to negotiate, and I don’t pretend to have any concrete solutions.
Those complications aside, I would argue without qualification that the way online speech issues are framed, particularly when they hinge on cartoon versions of “free speech,” often complicate the conversation to such an extent that they’re rendered essentially meaningless. And that’s a huge problem, because hateful speech online is a huge problem.
Sam: In the same piece, you delve into the complication of multiple commenting communities on sites like Reddit, which hosts very well-defined communities that abide by their own rules. Do you think homogenous/like-minded commenting communities produce better or different comments, or more valuable comments given the common interests of commenters?
What does homogeneity imply for “civic debate” in comments sections?
Whitney: I’m reluctant to generalize; whether comments produced by homogenous/like-minded communities are better or worse depends in large part on who’s reading them. Outsiders –people who don’t identify with whatever community– might think homogenous comments are boring, while insiders might appreciate having an echo-chamber for their thoughts. Or not; sometimes listening to people talk about something you already understand or accept can be so boring you might be tempted to eat your own face. It really depends on your relationship to the subject matter and to the community itself. The same goes for the question of civic debate — in some cases, in response to certain issues, wildly heterogeneous comments might facilitate stronger engagement through critical thinking. In other cases, homogenous comments might help drive a particular point home, which can either be a good thing or a bad thing depending on the topic and who is listening.
In conclusion, the devil of comment moderation is in the details. This isn’t a particularly satisfying or easily-tweeted conclusion, but then again, any argument that can be distilled down to 140 characters is probably not asking the right questions.
Sam: Actually, “the devil of comment moderation is in the details” is pretty easily Tweeted, but the details are not.
Whitney: lol, first of all, and yes — it’s the details that make all the difference. Everything else is just words.
More on Sam: Samantha Meier serves as the Identities editor at PolicyMic, an analysis and criticism website geared at millennials. At PolicyMic, she edits and curates content about the intersections of identity and culture, and she writes on political activism, gender, and new media. She is primarily interested in social and historical conceptions of democratic media, particularly as they relate to the internet and/or radical politics. Before coming to PolicyMic, she was profiled in the New York Times for her work co-founding Sexual Health Education and Advocacy Throughout Harvard College, the organization which ran Harvard’s first ever Sex Week. She is at work on a book about women in the underground comix movement. Originally from Flagstaff, Arizona, she currently lives in New York. You can reach her at @sameier12 or via email at email@example.com.