The Value of Human Moderation Amid the Rise of AI

ModSquad was built around the importance of moderation: the careful monitoring and maintenance of digital spaces and communities. Moderation is, to put it simply, the backbone of who we are as a company and the work we do every day. In the 11 years since ModSquad’s founding, the use of AI and algorithms to moderate online communities has grown significantly.

With artificial intelligence applications continuing to expand, we’ve been closely following the evolution of AI technology to determine how these tools best complement our practices. We’re already seeing AI employed for content moderation, which opens up new avenues of intelligent moderation. But the best moderation of the future will combine learning algorithms with real-world, real-time human intelligence. We expect great results as AI and human moderators work in tandem.

When we discuss the presence of AI in daily life, we’re usually talking about assistants like Siri, Cortana, and Alexa. At their inception, we’d ask them some arbitrary question just to see what they’d answer; now, we have them ordering groceries or organizing our calendars for the week ahead. With AI rapidly spreading into more facets of our daily lives, what is the impact on content moderation?

AI is nothing new for mainstream social media platforms, which have used it to remove offensive and copyrighted content. The speed at which AI works is certainly a major perk. Given the volume of questionable content flagged and taken down each day by Facebook’s and YouTube’s existing AI moderation, it’s clear that quick action is essential to keep inappropriate content from ever reaching viewers’ eyes. That speed is something human moderation on its own simply can’t match.

However, what human moderation lacks in speed, it makes up for in understanding. A program that flags keywords, images, or video content is fantastic for ensuring content is actioned quickly. But AI isn’t yet at the point where it can understand subtle nuances in language; it can easily be convinced that acceptable content requires action just because it contains a certain word or phrase. Online communities often have their own unique vernacular and slang, which can evolve over time. This is where human moderators have a distinct advantage: they’re able to interpret not just what is being said but also what is intended.
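To illustrate why context-blind keyword matching misfires, here’s a minimal sketch in Python. The keyword list, sample messages, and flag_message helper are invented for illustration, not drawn from any production system:

```python
# Minimal sketch of naive keyword flagging and the false positives it produces.
# The keyword list and sample messages are hypothetical.
BLOCKED_KEYWORDS = {"kill", "trash"}

def flag_message(text: str) -> bool:
    """Flag a message if it contains any blocked keyword, ignoring context."""
    words = set(text.lower().replace("!", "").replace(",", "").split())
    return bool(words & BLOCKED_KEYWORDS)

messages = [
    "I will kill you",               # hostile: flagged, correctly
    "Nice kill on that last boss!",  # friendly gaming banter: flagged anyway
    "My build is trash, any tips?",  # self-deprecating question: flagged anyway
]

for msg in messages:
    print(f"flagged={flag_message(msg)}  |  {msg}")
```

A human moderator reading those same three lines would action only the first; the other two are exactly the kind of community vernacular a word filter can’t parse.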

This idea of intent is a huge part of moderating online communities. To determine whether something requires moderation, context is key. Where AI can flag something as potentially offensive, human moderators can determine whether it really is offensive or can be chalked up to friendly banter between users who know each other and how they communicate. This is particularly relevant in gaming communities, especially for games focused on PvP (player-versus-player) content.

Let’s suppose the AI does flag and remove a comment that falls outside a community’s accepted conduct guidelines. Even if the moderation is warranted, AI can only notify the user through prewritten scripts. A human moderator, however, can analyze the issue and the context surrounding it, and write a note that educates the individual. Perhaps another user baited them into breaking the rules and they were only responding, or they’re a new user who isn’t yet familiar with the code of conduct.
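As a hypothetical sketch of that gap, an automated system typically maps a violation type to a canned message. The violation codes and wording below are invented for illustration:

```python
# Hypothetical canned-response table an automated moderator might use.
CANNED_RESPONSES = {
    "harassment": "Your post was removed for violating our harassment policy.",
    "spam": "Your post was removed because it was identified as spam.",
}

def automated_notice(violation: str) -> str:
    """Return the prewritten script for a violation type: no context, no nuance."""
    return CANNED_RESPONSES.get(
        violation, "Your post was removed for violating our code of conduct."
    )

# The automated path can only ever say this:
print(automated_notice("harassment"))

# A human moderator, by contrast, might write something no template anticipates:
# "Hi! We removed your reply because it crossed into personal attacks. It looks
#  like the other user was baiting you, and we've actioned them as well. Please
#  take a look at our code of conduct, and reach out if you have any questions."
```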

A personalized note sent by a real human moderator not only explains why something was removed but also leaves the door open for discussion in case the user has questions about the action taken or how it might impact their participation in the community. Human moderators can also converse directly with the community, and those honest, transparent conversations lead to better customer experiences.

The internet is a strange place, fostering new developments and uses of language, many of which can’t be entirely understood by AI. Using the power of AI to catch potential infractions of copyright or terms of service and to issue auto-bans gets you only part of the way there. Understanding flagged content and processing user appeals requires human moderation. Consider also that AI is not infallible; there may be times when moderators will correct mistakenly flagged or pulled material. When you couple the swiftness of automated AI with thoughtful human moderation, the result is a powerful combination.
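One way to picture that coupling is a triage pipeline: the AI acts instantly only on high-confidence matches, and everything it touches lands in a queue where a human can review, overturn, or handle appeals. This is a sketch under assumed names; ai_score, ReviewQueue, and the threshold are all hypothetical:

```python
# Sketch of a hybrid moderation pipeline: automated triage for speed,
# human review for judgment. All names and thresholds are hypothetical.
from dataclasses import dataclass, field

AUTO_REMOVE_THRESHOLD = 0.95  # act instantly only when the model is very sure

@dataclass
class Post:
    author: str
    text: str

@dataclass
class ReviewQueue:
    """Everything the AI touches waits here for a human decision."""
    pending: list = field(default_factory=list)

    def submit(self, post: Post, score: float, action: str) -> None:
        self.pending.append((post, score, action))

def ai_score(post: Post) -> float:
    """Stand-in for a trained classifier's probability that a post is abusive."""
    return 0.97 if "kill" in post.text.lower() else 0.60

def triage(post: Post, queue: ReviewQueue) -> str:
    score = ai_score(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        # Fast path: remove now, but keep it reviewable so a human can
        # overturn a false positive or handle an appeal.
        queue.submit(post, score, "auto-removed")
        return "removed automatically; queued for human audit"
    # Uncertain cases stay up and wait for human judgment.
    queue.submit(post, score, "needs review")
    return "left up; queued for human review"

queue = ReviewQueue()
print(triage(Post("anna", "Nice kill on that boss!"), queue))
print(triage(Post("ben", "You all know where to find my stream"), queue))
print(f"{len(queue.pending)} items awaiting a human moderator")
```

Note that the friendly “Nice kill” line gets auto-removed here, which is exactly why the human audit queue matters: the AI supplies the speed, and the moderator supplies the correction.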
