Moderation isn’t the Problem

Sean Stewart · Published in The Startup · Feb 8, 2021

Social Media’s Moderation Problem

There is no question that social media faces a nearly impossible problem with content moderation. Extreme points of view and objectionable content are amplified by an algorithmically enhanced funnel faster than any team of moderators can respond.

When we cite objectionable material that thrived on a social platform, why do we say the problem is moderation? Objectionable content has existed since the dawn of time, and when the Internet came to be, it quickly found a home there. What is different today from the '90s, when online forums and message boards were first exploding onto the scene? Let's go back a ways and try to find the difference.

In the '90s there was a big hoopla because bad stuff was getting put on the Internet, and two distinct cases set extremely dangerous precedents:

  1. Cubby, Inc. v. CompuServe Inc.
  2. Stratton Oakmont, Inc. v. Prodigy Services Co.

In CompuServe's case, the company was found not liable for content published via its service because it chose not to moderate anything that was posted. Conversely, Prodigy Services decided to moderate its content and, as a result, was found directly liable for any objectionable content that made it past its moderation. In response to these findings, Section 230 was created as part of the larger Communications Decency Act.

In the context of Section 230: “Social Media” or “Social Networks” are interactive computer services.

Interactive computer services provide a medium for users to log on, communicate, and share content. Under this definition, everything from Slack and iMessage to Google to Facebook counts as an interactive computer service. Where Google/YouTube, Facebook/Instagram, and other Social Media platforms differ is that they aren't just a service for accessing user-generated content. In an effort to drive engagement (and therefore revenue), they've applied proprietary algorithms to their user-generated content in order to actively curate a user's experience.

This is the difference between the technology that was available when Section 230 was written and the technology in use today. This is the pervasive problem we face.

Section 230 and Moderation

So if the problem is automated content suggestion, then why do we keep hearing all the talking heads squawk about Moderation?

To understand why, we need to take a quick look at the actual text of Section 230. You may have heard the name bandied about, but do you know what it actually says? At its very core:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

The important word here is publisher. Prior law differentiates publishers from distributors — in very general terms, a distributor is not directly liable for the content it disseminates, but a publisher is directly liable for the content it produces. This is because the distributor has little to no control over the source material, whereas the publisher directly controls the source material.

Additionally, Section 230 includes an important set of provisions, collectively called the Good Samaritan Clause:

No provider or user of an interactive computer service shall be held liable on account of —

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

Essentially, not only are our social platforms never to be considered publishers, they're also never to be held liable for attempting to moderate content, even if that content is constitutionally protected. This is important because at the end of the day, we want our computer services to enforce moderation and provide access to moderation tools. We don't want porn showing up on a platform for children. We don't want racists virtually harassing individuals simply because of the color of their skin. If social platforms were liable for any content that slipped through their moderation efforts, they likely wouldn't try to moderate content at all.

Our legal experts, legislators, and talking heads are all focused on Moderation and Section 230 because Moderation, as it applies to Section 230, is a convenient conversation for two opposing sides:

On the one hand, we have many public figures demanding the immediate repeal of Section 230. This is definitely a Bad Thing™️. These individuals want Section 230 gone because then they can use the Prodigy Precedent to litigate against these social networks which have chosen to wade into the mire of moderation. They want to keep the conversation on moderation so that if or when Section 230 is repealed, they have a strong case.

On the other hand, we have our Tech giants against changing Section 230 at all. This is also a Bad Thing™️. As long as Section 230 remains completely as it is, these platforms can continue to unabashedly tailor individual user experiences without any repercussions. They want to keep the conversation on moderation so that no one is looking too closely at how they’re manipulating users in order to make a buck.

Neither option is tenable.

Okay, So What’s the Solution?

I was having a discussion the other day with a close group of former coworkers (via Slack, an interactive computer service 🌐). We're all alums from the same ad-tech startup, and while most of us have moved on to other employers, we maintain close contact. The issue of Section 230 and the infinite problem of content moderation came up, as it has many times with our group. The reality is there's no real way to automatically guarantee that all of the content posted to a platform will conform to the platform's community standards. That, coupled with automated recommendation engines, means violating content regularly goes viral and spreads exponentially until the humans trying to moderate manage to react, which is far too slow. At this point in the conversation, I came to the understanding that moderation would never solve this problem, because the platform being moderated is actively working against moderation.

At risk of repeating myself, social media platforms are in a class of interactive computer service all their own: not only do they enable anyone to publish anything, they automatically curate and promote that published content to other users in order to drive engagement.

In a classical forum or message board, engagement is entirely user-driven. Voting up or down and the number of replies drive popularity and therefore visibility, and an individual user has complete control over how the content they see is prioritized. This is user-driven engagement for which the service should not be liable: it has done nothing to manipulate the priority or popularity of content for an individual or bloc of users.
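To make that concrete, here is a minimal sketch of purely user-driven ranking, the kind a classic forum uses. Everything here is hypothetical and illustrative (the Post fields and rank_posts function are my own invention, not any real forum's API): visibility is just a function of other users' votes and replies, plus the sort order this user explicitly chose.

```python
from dataclasses import dataclass

# Hypothetical post record for a classic forum; fields are illustrative only.
@dataclass
class Post:
    title: str
    upvotes: int
    downvotes: int
    reply_count: int

def rank_posts(posts, sort_by="votes"):
    """Order posts by a criterion the user explicitly picked.

    The service applies no judgment of its own: visibility is a direct
    function of other users' actions (votes, replies) and this user's
    chosen sort order.
    """
    if sort_by == "votes":
        key = lambda p: p.upvotes - p.downvotes
    elif sort_by == "replies":
        key = lambda p: p.reply_count
    else:
        raise ValueError(f"unknown sort: {sort_by}")
    return sorted(posts, key=key, reverse=True)

# Example: the user chooses to sort the front page by net votes.
front_page = rank_posts(
    [Post("A", 10, 2, 5), Post("B", 3, 0, 40), Post("C", 7, 1, 2)],
    sort_by="votes",
)
print([p.title for p in front_page])  # ['A', 'C', 'B']
```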

In today's algorithm-driven platforms, these services are taking an active role in deciding what content gets promoted based on an individual user's behavior. We can call this algorithm-driven associative ranking. A vastly simplified example (a code sketch follows after this walkthrough):

  1. User A viewed/liked/commented on Post A.
  2. User B viewed/liked/commented on Post A.
  3. User A viewed/liked/commented on Post B.
  4. User B should look at Post B too, since they interacted with Post A.

In the above chain of events, User B finds Post A on their own but is then actively shown Post B. In many cases, Post B is surfaced to User B automatically unless User B explicitly navigates away from the page in question.
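Here is a minimal sketch of that associative ranking, assuming nothing more than a log of (user, post) interactions. The names and the simple co-occurrence scoring are my own simplification for illustration, not any platform's actual algorithm:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical interaction log: (user, post) pairs, illustrative only.
interactions = [
    ("user_a", "post_a"),
    ("user_b", "post_a"),
    ("user_a", "post_b"),
]

# Group each user's interactions.
posts_by_user = defaultdict(set)
for user, post in interactions:
    posts_by_user[user].add(post)

# Count how often two posts are touched by the same user.
co_counts = defaultdict(int)
for posts in posts_by_user.values():
    for p1, p2 in combinations(sorted(posts), 2):
        co_counts[(p1, p2)] += 1
        co_counts[(p2, p1)] += 1

def recommend(user):
    """Promote posts that co-occur with the user's history.

    No one asked for these posts; the platform decides to surface them.
    """
    seen = posts_by_user[user]
    scores = defaultdict(int)
    for post in seen:
        for (a, b), count in co_counts.items():
            if a == post and b not in seen:
                scores[b] += count
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("user_b"))  # ['post_b'] -- User B is steered toward Post B
```

Scale this up to billions of interactions and weights tuned purely for engagement, and you get the spiral described next.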

This is not moderation; this is active promotion. A user has no direct control over the content they see or choose to interact with, because viewing a single piece of content begins a spiral into ever more specific viewing lanes. The more a user remains in that lane, the harder it is for that user to break free. It allows content which the average person would simply ignore, or never even see, to thrive. It normalizes fringe thoughts and behavior by making the content containing them seem ever-present.

We have to draw the line. Our social platforms need to have a risk associated with the act of automated recommendation. At the moment they have none, and they won't change their behavior significantly because there are no real legal ramifications for promoting child exploitation, for instance. These algorithms are the unregulated back door which social media has unabashedly exploited. We must hold them responsible for that exploitation.

Social platforms should not and cannot be held liable for every piece of content posted to their services, but they should be held responsible for the content they actively suggest to other users by means that are not directly user-driven. They have developed associative algorithms to drive engagement and keep users visiting and clicking through, driving revenue. These algorithms ultimately decide what a user will see, and it's damn near impossible to control from a user's perspective. These companies own these algorithms; they are proprietary software. If a company can take ownership of the automated decision-making that chooses which content to promote to a user, it should be held responsible for the end result of those automated decisions, just as publishers are held liable for the content their editors choose to include in their publications.

Does this break social media as it is today? Possibly. But as the Zuck said: “Move fast and break things.” Now is not the time to be deliberative, now is the time to act.

Change the Conversation

We have to change the conversation. We have to stop allowing social media giants to shrug and say “it’s the algorithms’ fault” and then say they’re trying to catch the bad guys. They’re actively promoting the bad guys. We should hold them accountable for that. Stop talking about moderation and start talking about algorithm-driven engagement.

Yes, they should moderate their content.

Yes, they should remain immune from liability for every post from every person in the world.

Yes, they should be held liable for the content they choose to promote and drive users toward.
