In a world where we can Tweet, Facebook or TikTok our opinions out near-instantly to the internet at large, we perhaps take for granted this ability to broadcast our thoughts so freely. So long as the idea you’re promoting isn’t criminal, doesn’t infringe on things like intellectual property and doesn’t violate a platform’s terms of service, you’re free to put content out in whatever manner you’d like. The Supreme Court, however, has recently been hearing arguments about holding platforms more responsible for what their users create. That would almost certainly mean new restrictions, as platforms grow averse to the risk of litigation. How does the law look today, then, and what’s at stake for user-created content?
Online platforms right now aren’t held liable for the content created by their users, due to a provision of the 1996 Communications Decency Act called Section 230. Section 230 is specifically concerned with the liability of online service providers, such as social media platforms, for content that is posted by their users. Section 230 states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This means that online service providers are generally not held legally responsible for content that is posted by their users, even if that content is defamatory, libelous, or otherwise illegal.
Section 230 has been credited with helping to foster the growth of the internet and the development of user-generated content platforms, by shielding these platforms from the risk of lawsuits that might arise from the actions of their users. Critics argue, however, that Section 230 also shields online platforms from responsibility for policing harmful or illegal content, leading to concerns about the spread of hate speech, misinformation, and other harmful material on the internet. On the flip side, without the protections of this provision, online platforms would be forced to intensively censor what could be posted through their services. Some would undoubtedly look at the legal risk and decide not to allow user-created content at all.
Two cases in particular are the focus of the Supreme Court’s examination. One involves a 23-year-old student who was killed in the 2015 terrorist attacks in Paris, and the other involves a death during a 2017 mass shooting in Istanbul. YouTube and Twitter, respectively, are accused of allowing content to be posted by the terrorist group ISIS, which was responsible for both attacks. Under current law, which frees companies from repercussions for the content their users create, there is no recourse against these platforms. The question, however, is whether, now that we live in a world of recommendation algorithms, more could be done to police irrefutably harmful content.
For me, as with so many things, the focus should be on intent. People seldom see censorship as an issue until it’s their own ideals being scrutinized, and I think it’s wrong to put this responsibility onto the providers of platforms for user-created content. If an algorithm is unbiased and simply surfaces things it decides might be relevant to your interests, based on what you’ve previously interacted with online, I don’t believe that should constitute an unlawful act. While better safeguards should exist for vulnerable people online, that is ultimately not the responsibility of a service that is simply hosting a video, a blog, or indeed an article. The Supreme Court is expected to rule on these cases this year, with, I’m sure, more to follow. It’s my sincerest hope that the internet remains a bastion of free speech.