Part I of this post discussed Jenna Marbles’ YouTube channel and its comments, which led to an understanding of the term ‘troll’ as akin to online hunting. It ended with this afterthought:
…if the content that the troll publishes is problematic and potentially defamatory, who is truly responsible for it: the commentator (i.e. Full Natty Brah); the channel owner (Jenna Marbles); or, the platform owner (YouTube/Google)?
If Jenna Marbles uploads a video that is found to be defamatory, then as its owner she would be liable, in much the same way as the owner of a defamatory tweet: see Mickle v Farley (2013), where Farley was ordered to pay damages for defamatory tweets and Facebook posts in NSW. In Greg Jericho’s article he outlines that comments left on news organisations’ websites are essentially their responsibility. Related to this is Clarke v Nationwide News (2012), where it was found that a party is liable as a publisher for comments on its Facebook page. In this sense, Jenna Marbles is responsible for her channel. But matters become murky, as her channel is hosted on a third-party platform.
Is this platform also responsible for any defamatory or vilifying content published on it? In other words, can YouTube be sued for what Full Natty Brah publishes? Can Twitter be held accountable for its users’ tweets? I struggled to find Australian legal cases involving YouTube, so in this post I’ve looked to Facebook and Twitter instead (although Google’s Brazil chief was arrested in September 2012 for refusing to remove two YouTube videos that allegedly defamed an election candidate). If anyone knows of other examples, do let me know in the comments section.
In 2012, Joshua Meggitt was defamed on Twitter, and the owner of the tweet paid damages privately. Afterwards, Meggitt set out to sue Twitter itself, though the case has not reached court (see Bernard Keane’s quirky list, “Who sued Twitter? The list so far”). Peter Black sets out in his article that, under current Australian law, Twitter and Facebook could be held liable for posts made by their users. So if the owner of a Facebook page can be held liable and Facebook itself can be held liable, could we assume, in the same vein, that both the owner of a YouTube channel and YouTube itself can be held liable?
YouTube’s Terms of Service (ToS) state: “YouTube reserves the right to remove Content without prior notice”. The Community Guidelines outline the types of behaviour that YouTube does not tolerate, which can lead to being banned from the site. With this degree of control over content, surely YouTube has both awareness of and responsibility for what is published on its platform? YouTube attempts to absolve itself of this responsibility in sections 5 and 6 of the ToS, but a ToS is not always legally binding.
This area of jurisprudence is in its early stages, and it will be interesting to see how it develops over the coming decade. Currently, it seems that if a troll leaves a defamatory or vilifying comment, all parties (the commentator, the channel owner and the platform) are potentially liable. And if everyone is potentially liable, then everyone has a responsibility to moderate and regulate. Jericho’s article proposes the New York Times model for confronting “online toxicity”: encouraging good commentary through clear incentives and a moderation policy, and being active in shaping the discussion. But is this form of imposed self-censorship problematic? Are we giving these platforms the power to construct our rhetoric? Are we falling for the moral panic of the troll?
Drawing on Jason Wilson’s article, I would suggest that trolls are being turned into modern-day “folk devils”, defined as “presenting an existential threat to social order”. Wilson goes on to say that these folk devils cause a moral panic which generates “consent among the governed to extend state power”. I would go further and propose that this is part of the “‘mean world’ syndrome” discussed by Lievrouw, whereby the Internet is cast as a site of serious risk to individuals and the established order, which in turn justifies the expansion of surveillance and government power over the individual. This Orwellian-style power play raises a concern for the future: how do we develop a global online public sphere that is free to debate all ideas while still protecting the vulnerable from vilifying or defamatory content?
Only time will tell how these ideas unfold on the ever-changing Internet. In the meantime, I’d love to read what everyone thinks: can online hunting be moderated or controlled?