An article by Eva Frederick in today’s edition of Science gives some context and color commentary on a new study published in Nature that shows, among other things, how clusters of online hate activity centered around misogyny, anti-Semitism, racism, and the like circulate in ways that are sophisticated, complex, and shape-shifting, both across topic areas and targets and from platform to platform. I encourage you to read Frederick’s article and analysis, as well as the study.
I was asked to provide commentary on the study, which I did. My comments could not, of course, be included in full, but I wanted to share them nonetheless with those who may be interested. They are as follows:
“I find it very interesting that these models can be used to definitively show the organized and often predictable circulation and activity of hate behavior online, including the nature of some of its discourse and the sophisticated way in which users who coalesce around these issues travel from platform to platform, often exploiting each site’s features (weaknesses) as they go.
I am, however, more than a little concerned about the feasibility and ethics of some of the proposed interventions. Well-meaning though the suggestion may be, asking individuals to create (likely secondary, so-called “sock puppet”) accounts for the express purpose of engaging and “turning,” as it were, people on the other side of the political spectrum may in fact violate site terms of service, to say nothing of the profound emotional toll such engagement might take on someone’s psyche. What support systems would these users have to deal with such encounters? What conflict resolution training? And what would happen if these interactions spread from online exchanges into real-world interpersonal violence? This suggestion seems to me to be worrisome on many fronts.
That said, there are already groups that do this on an ad hoc basis. So is the suggestion that the platforms themselves sponsor such activity? Again, I think there are logistical and ethical problems here, not the least of which is the reluctance of numerous high-profile platforms to take a definitive political stance – preferring, as the researchers point out, to stay in the granular weeds to an extent that, as I found in my own research, can reach levels of absurdity at times.
The study itself admits its shortcomings, pointing out that it is a “highly idealized” model built in the absence of significant real-world data. Here again, the platforms are a barrier to problem-solving, both because they treat each other as competitors rather than collaborators, even on matters such as stemming hate speech and activity on their sites, and because they restrict access to data that might elucidate, prove, or disprove the researchers’ hypotheses.
But the fundamental assumption that seems to underpin the study in the first place is that the platforms themselves have definitively and unequivocally taken a stand against “hate” on their sites. I don’t know that this assumption can be made, meaning that a number of the proposed solutions will have to be as ad hoc and organically nurtured as the researchers describe the hate actors themselves as being. I also note an absence of discussion of other relevant issues, such as the other kinds of organization and material support that some of these clusters may be receiving offline, and the social media business model itself, based largely on the solicitation, circulation, and proliferation of user-generated content – the more salacious, often the better.
Again, such organic movements are already taking place, many of which focus on activities like “deplatforming” the hate clusters rather than engaging them in discourse. The former is widely criticized by a variety of camps, and it’s not clear to me that the latter has proven effective.
I’m afraid I find the outlook largely grim unless and until the platforms, sites, and firms that own them are held to account for their own role in not simply transmitting such content but also circulating it as a commodity. The study I read is, of course, trying to prove other things about the nature of online behavior in something of a theoretical sense, but when it bridges over into pragmatic calls to action, it is missing a great deal of the context that created and exacerbated this problem in the first place.”