
The Moderation System in UGC Services

How we process a colossal amount of content

Social media channels and content-sharing platforms across the internet have paved the way for a whole new era of self-expression. Today, anyone can make videos, live stream, or otherwise create and share content. In fact, according to a study from Comscore, user-generated content shared on social media earns 28% higher engagement than standard company posts.

But why is UGC moderation vital, and how can you approach it? Moderation is an essential part of any UGC service: not every user is ready to follow a service's rules or to take the interests of other users into account.

(Photo by Rodion Kutsaev on Unsplash)

Moderation first appeared in our UGC service iFunny in 2012, as a simple reaction to users' complaints. There was little, if any, harmful content at the time, and this approach worked well.

In 2013, when the application had gained popularity and the community had grown, we introduced a pre-moderation system and expanded the moderation team to comply with the app stores' rules.

That solution marked the start of the continuous development and improvement of the moderation system in iFunny. Today, it has been rolled out to all of FunCorp's UGC services.

In this article, we will cover:

  • What the moderation system consists of
  • How its parts work and interact
  • How neural networks are utilized

What else regulates moderation

Besides app store rules, several other vital factors shape how content is handled within UGC services. When creating our moderation instructions, we consider:

  • User experience — Users must get the most positive experience from viewing content in the service. That's why we have added sections to our rules stating that clickbait, selfies, photographic content, news screenshots, and other images and videos lacking meme context are not allowed.

  • Legislation — We strive to ensure that our service's content conforms to the laws of the countries where our active audience lives. The moderators' instructions include rules about symbols of prohibited organizations and photos of tragic events and international conflicts.

  • Advertising partners — An important regulator for content uploaded to our services is our interaction with advertising networks. The content feed must be free of illegal content to attract top global brands as advertisers.

Moderation system

What happens to content when a user uploads it via the mobile application? All content coming into the service enters the content moderation system. This system's task is to prevent content that does not comply with the moderation guidelines from ever reaching the application and the site.

The whole moderation system can be divided into several logical parts:

  • Automated decision-making systems
  • A fast content labeling service with human help
  • User feedback for correcting moderation mistakes
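
Put together, the flow can be sketched in a few lines of Python; every function name below is hypothetical and merely illustrates how these parts hand content to one another:

    def moderate(item: dict) -> str:
        """Illustrative top-level flow; all function names here are hypothetical."""
        verdict = automated_decision(item)   # automated systems: duplicates, neural networks
        if verdict is None:                  # no confident automatic verdict
            verdict = human_label(item)      # fast labeling service
        return verdict                       # user complaints may later revise it

    def automated_decision(item: dict):
        return None                          # stub: defer to a human in this sketch

    def human_label(item: dict) -> str:
        return "approved"                    # stub: stands in for a moderator's decision

    print(moderate({"id": 1}))               # -> approved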

The first version of our moderation system consisted mainly of the fast labeling and feedback services and had minimal capacity for automatic processing.

The labeling service consists of a web application that lets people quickly decide whether content complies with the service's rules, plus an extensive reference guide for the moderator with a detailed description of each rule.

This is how fast labeling works:

  1. Content that has passed the automatic checks enters the moderation queue.
  2. The moderator on duty takes a batch of content from the queue.
  3. According to the content moderation rules, there are three possible verdicts:
    • One click — the content has a controversial connotation and requires additional verification;
    • Two clicks — the content does not comply with the service rules;
    • Three clicks — the content does not comply with the service rules, and the uploader is penalized as well.

Without any click, the content goes into rotation and into the viewing mechanics of the Collective feed (a feed for new content, where every item gets its share of views).

If content receives the one-click verdict twice, it goes only into the uploader's feed and is seen only by subscribers. This is what the moderator's labeling tool looks like:

(by iFunny / The moderator's labeling tool)
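
To make the click scheme concrete, here is a minimal sketch of how the verdicts could be routed. The enum, function, and destination names are ours for illustration, not the actual tool's code:

    from enum import Enum

    class Verdict(Enum):
        NO_CLICK = 0      # compliant content
        ONE_CLICK = 1     # controversial: needs additional verification
        TWO_CLICKS = 2    # violates the service rules
        THREE_CLICKS = 3  # violates the rules; the uploader is penalized too

    def route(content_id: int, verdict: Verdict, prior_one_clicks: int = 0) -> str:
        """Return the destination of a labeled item (placeholder logic)."""
        if verdict is Verdict.NO_CLICK:
            return "collective_feed"        # full rotation for new content
        if verdict is Verdict.ONE_CLICK:
            # a second 'controversial' verdict restricts the item to subscribers
            return "subscribers_only" if prior_one_clicks >= 1 else "extra_review"
        if verdict is Verdict.TWO_CLICKS:
            return "banned"
        return "banned_plus_user_penalty"   # THREE_CLICKS

    print(route(42, Verdict.ONE_CLICK, prior_one_clicks=1))  # -> subscribers_only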

The second component of the labeling service is our moderation rules, which contain a wide-ranging description of the classes of harmful content with examples. They are frequently modified and updated as new negative cases appear.

These are the main content classes considered negative on iFunny and our other entertainment services:

  • Porn, including cartoon characters
  • Violence and calls for it
  • Fascism and Nazism
  • Mentions of various prohibited organizations
  • Political calls and news falsification

But despite the detailed descriptions and convenient tools, a person's attention cannot cover the entire range of harmful content. That is why the application also has a user complaint system.

(by iFunny / Chart: Distribution of user complaints in iFunny in October)

Correcting mistakes

You can see from the chart that content complaints come in all the time, so we use them in two post-moderation systems. There are several reasons why, despite pre-moderation, destructive content still gets into the feeds of UGC services:

  • Meme context — Labeling moderators do not work with the context of memes. Their main task is to process as many general cases of forbidden content as possible.

  • New negative trends — New trends always surface first in the complaint system and are only then added to the labeling instructions.

  • Copyrighted content — Users can upload someone else's content without knowing that it belongs to someone and is protected by copyright. The moderation team quickly removes such content from the service.

  • Human factor — It is natural for a person to get tired and make mistakes. That is why even the most responsible labeling moderator can slip up.

The complaint system is an essential feedback tool that allows mistakes made in pre-moderation to be corrected. However, we cannot block content automatically based on a complaint, because that would let random (and often deliberately false) complaints remove legitimate content. For this reason, we have two internal solutions that work with complaints inside the application.

Abuse Neutralization Bureau

We have a system internally called the Abuse Neutralization Bureau (ANB), operated by the US moderation team.

It works in the following way:

  1. Complaints about content are collected in the database
  2. Content with complaints is queued up for processing by an editor
  3. The content remains in the application until the editor's decision
  4. Each editor dedicates part of their working time to processing the ANB queue
  5. As a result of the editor's check, the content receives one of two verdicts:
    • A ban — such content is removed from the service rotation.
    • Immunity to complaints — if the content follows the moderation rules and is not malicious.

(by iFunny / Moderator's content view)
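
A simplified model of the ANB flow might look like this; the queue, class, and verdict names are assumptions made for the sketch:

    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class ReportedItem:
        content_id: int
        complaints: int = 0
        immune: bool = False          # immune items ignore further complaints

    anb_queue: deque = deque()

    def register_complaint(item: ReportedItem) -> None:
        """Steps 1-3: record the complaint; the content stays live until an editor decides."""
        item.complaints += 1
        if not item.immune and item not in anb_queue:
            anb_queue.append(item)

    def process_queue(editor_verdict) -> None:
        """Steps 4-5: an editor drains the queue, issuing one of two verdicts."""
        while anb_queue:
            item = anb_queue.popleft()
            if editor_verdict(item) == "ban":
                print(f"content {item.content_id}: removed from rotation")
            else:
                item.immune = True    # compliant content becomes immune to complaints
                print(f"content {item.content_id}: immunity granted")

    item = ReportedItem(content_id=7)
    register_complaint(item)
    process_queue(lambda i: "immunity")   # -> content 7: immunity granted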

The moderation staff is involved in this system. They look at the context of a meme, work with the community, and can assess whether a picture or video is actually humorous.

Our editors do not follow a typical working-day schedule, so the ANB queue may respond to negative trends with some delay, but no complaint is left without an editor's reaction.

Community moderation

An essential component of any UGC service is its community, which actively participates in developing the service and filling it with content.

We use this channel to get as much feedback as possible: in 2018, we created the community moderation service. Community moderators are active iFunny users who have been verified by the community team and can ban users, content, and comments.

Community moderators use the service like everyone else, and the mobile application gives them a dedicated ban button for content.

(by iFunny / Ban button)

The ban button performs the following:

  1. The content is banned
  2. The user who uploaded the content gets a temporary ban on uploading new content
  3. An appeal is created for the user in the community team's portal
  4. If an editor accepts the appeal, the user is unblocked and can upload new content
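
Roughly, these four effects could be modeled like this; the data schema and the one-day ban length are assumptions for the sketch, not the service's real values:

    from datetime import datetime, timedelta, timezone

    def community_ban(content: dict, uploader: dict, appeals_portal: list,
                      ban_days: int = 1) -> dict:
        """The four effects of the ban button; schema and ban length are assumed."""
        content["banned"] = True                             # 1. the content is banned
        uploader["upload_blocked_until"] = (                 # 2. temporary upload ban
            datetime.now(timezone.utc) + timedelta(days=ban_days)
        )
        appeal = {"user_id": uploader["id"], "content_id": content["id"]}
        appeals_portal.append(appeal)                        # 3. appeal for the community team
        return appeal

    def accept_appeal(uploader: dict) -> None:
        """4. An editor accepts the appeal: the user may upload again."""
        uploader["upload_blocked_until"] = None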

Evolution of the moderation system

When designing the first versions of the labeling system, we built in automation based on duplicate detection and the labeling moderators' previous verdicts. This allows us to process several percent of the content automatically.
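
One common way to implement duplicate-based automation is perceptual hashing: if an upload's hash nearly matches content a moderator has already judged, the earlier verdict is reused. Here is a sketch using the ImageHash library as an example; our production mechanism may differ:

    import imagehash              # pip install ImageHash
    from PIL import Image         # pip install Pillow

    # verdicts for previously moderated images, keyed by perceptual hash
    verdict_cache = {}            # imagehash.ImageHash -> "approved" | "banned"

    def moderate_upload(path, max_distance=4):
        """Reuse an earlier moderator verdict if the upload is a near-duplicate."""
        h = imagehash.phash(Image.open(path))
        for known_hash, verdict in verdict_cache.items():
            if h - known_hash <= max_distance:    # Hamming distance between hashes
                return verdict                    # automatic decision, no human needed
        return None                               # no match: send to the labeling queue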

However, with the growth of uploads and the launch of new UGC services, it has become especially crucial for us to:

  • Quickly and accurately process the incoming stream of content.
  • Effectively scale moderation capacity.
  • Minimize the chance of human error in content evaluation.

All of this pointed us toward solutions based on neural networks for working with pictures, videos, and texts.

Within a few iterations, we managed to cut the negative load on moderators in half.

Neural networks also have a certain error rate, but as we increase the number of correct bans, we monitor the percentage of wrong bans and make sure it does not grow.

This is an excellent result and a good reason to continue implementing convolutional neural networks for the auto-moderation of pictures and video.

The neural network has, in fact, already become the first stage of content moderation.

The auto-moderator's control panel lets us monitor and visually assess the neural network's performance.
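
Stripped to its essence, such an image auto-moderator is a thresholded classifier. The sketch below uses an untrained torchvision ResNet as a stand-in for our fine-tuned network, and the 0.95 threshold is an assumed value, not our production setting:

    import torch
    from PIL import Image
    from torchvision import models, transforms

    # hypothetical fine-tuned binary classifier: class 1 = "harmful"
    model = models.resnet50(weights=None, num_classes=2)
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ])

    def auto_moderate(path: str, ban_threshold: float = 0.95) -> str:
        """Auto-ban only high-confidence predictions; route the rest to humans."""
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            p_harmful = torch.softmax(model(x), dim=1)[0, 1].item()
        # a high threshold keeps wrong bans rare while correct bans grow
        return "auto_ban" if p_harmful >= ban_threshold else "human_review"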

The neural network team has already trained the third version of the network on 800,000 content items and built an internal analytics system into the auto-moderator. The auto-moderation pipeline now removes 75–80% of all harmful content.

(by iFunny / Chart: Percentage of positive predictions of bad content by the third version of the auto-moderator over a week)

As moderation volumes grow, auto-moderators will help us process all the content of our entertainment services quickly and efficiently, so we will keep developing this system.

The neural network team has a large backlog for improving the networks and the auto-moderator's heuristics. We will certainly share our results when the system is fully developed and tuned.

The moderation system today

Let us sum up what our moderation system looks like in 2020.

In its entirety, the moderation system involves the following participants:

  1. Moderators of labeling systems
  2. iFunny users
  3. Community management team
  4. Community moderators
  5. An auto-moderator based on neural networks and heuristics

The first four components form a well-adjusted mechanism that is challenging to improve and scale, but it allows us to collect a dataset for training neural networks.
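
For example, the accumulated human verdicts can be exported as labeled rows for training, roughly like this (the schema is illustrative):

    import csv

    def export_training_rows(verdicts, out_path="moderation_dataset.csv"):
        """Turn accumulated human verdicts into (content_id, label) training rows."""
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["content_id", "label"])
            for v in verdicts:
                # binary target for the auto-moderator: 1 = harmful, 0 = compliant
                writer.writerow([v["content_id"], 1 if v["verdict"] == "ban" else 0])

    export_training_rows([{"content_id": 7, "verdict": "ban"},
                          {"content_id": 8, "verdict": "immunity"}])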

We see further improvement of the system in raising the quality of the neural networks and of the mechanisms for interaction between the networks and humans.

Conclusion

Keeping our iFunny feeds compliant is one of the key performance indicators of our service.

We strive to reduce the amount of negativity in our service and continuously improve both automatic and manual content moderation.

Moderation helps prevent the following in our services:

  • Displays of fascism and Nazism
  • Calls for violence
  • Political manipulations
  • Other illegal content

It is impossible to build an ideal moderation system. The requirements of users and regulators and the world agenda keep changing, so an essential part of the whole system rests on user feedback in the form of content complaints and community moderation.

An active community allows our UGC services to respond quickly to new negative cases, providing data for the auto-moderator and new instructions for the pre-moderation system.

Thorough moderation reduces the level of negativity in our services and leaves users more time for smiles and happiness. That is why we are continually working on our moderation system, improving and perfecting it.