Moderators: The Quiet Community Protectors

Bullying. It’s a thing.

From elementary school playgrounds to high school cafeterias to office water coolers, bullying is a problem for the ages. Of late, it has found an inviting home online: the anonymity, the wide visibility of every opinion-inducing topic, the bullhorn-and-soapbox nature of it all. The digital world both invites bullying behavior and cries out for help in stopping it.

So how do we curtail toxic behavior on social platforms and in online communities? Aim for a healthy environment and hold a firm line. Maintaining a healthy virtual community requires knowing how to prevent problematic behavior, and when it comes to cyberbullying, an effective way to protect users is through moderation. Here are some high-level actions your company can take to help protect your audience against bullying.

Give the community the tools it needs to report bullying and other antisocial behavior. This is a fundamental element of community management. Most online communities provide reporting forms and flagging tools that let users submit problematic content, including instances of bullying, for review. User reports also give those on the receiving end a way to address the behavior without escalating the situation. Since bullying thrives where there is a power imbalance, this equalizing force helps stabilize the community.
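As a rough illustration, a user-report pipeline might look something like the sketch below. The `Report` structure, `ReportReason` values, and `submit_report` helper are hypothetical names used for the example, not any particular platform's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class ReportReason(Enum):
    BULLYING = "bullying"
    HARASSMENT = "harassment"
    SPAM = "spam"
    OTHER = "other"


@dataclass
class Report:
    reporter_id: str          # who filed the report (visible only to moderators)
    content_id: str           # the post or comment being flagged
    reason: ReportReason
    details: str = ""         # optional free-text context from the reporter
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def submit_report(queue: list[Report], report: Report) -> None:
    """Place a user report in the moderation queue for human review."""
    queue.append(report)


# Example: a user flags an abusive comment for the moderators to review.
queue: list[Report] = []
submit_report(queue, Report(reporter_id="user_42",
                            content_id="comment_9001",
                            reason=ReportReason.BULLYING,
                            details="Repeated name-calling in replies."))
```

Keeping the reporter's identity hidden from other users, while still visible to moderators, is one way a design like this lets targets flag abuse without escalating the confrontation.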

One of the more important responsibilities in a moderator’s day-to-day role is holding people accountable for bad decisions, aggression, bullying, and dangerous or inappropriate behavior. Following a predetermined set of consequences for bullying (typically established by a community manager), the moderator upholds community rules and shields users from abuse. Bullies, cyber or otherwise, generally believe they will face no lasting consequences for their actions. They thrive in environments where they feel powerful and their targets do not. By consistently moderating problematic material, moderators can dispel that notion.

Once problematic material has been surfaced, whether by a manual sweep of content, AI-generated reports, or user reports, moderators enforce the guidelines and may choose to communicate with problem users. Depending on the community, posts can be hidden, deleted, or edited, meaning abusive messages can be effectively purged. Users who continue to abuse or harass others may face further repercussions, including having their account flagged or suspended. Harassers may find themselves banned entirely, which both protects the community and reinforces the fact that there are consequences for such behavior.
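To make that escalation path concrete, here is a minimal, hypothetical sketch of how a moderation system might map a user's violation history to consequences. The thresholds and action names are placeholders, not a recommended policy; the real rules would come from the community manager.

```python
from enum import Enum


class Action(Enum):
    HIDE_POST = "hide_post"
    WARN_USER = "warn_user"
    SUSPEND_ACCOUNT = "suspend_account"
    BAN_ACCOUNT = "ban_account"


def decide_actions(prior_violations: int) -> list[Action]:
    """Map a user's violation count to escalating consequences.

    The cutoffs below are illustrative; setting them ahead of time
    keeps enforcement consistent across moderators.
    """
    actions = [Action.HIDE_POST]                 # always remove the abusive content
    if prior_violations == 0:
        actions.append(Action.WARN_USER)         # first offense: warn and document
    elif prior_violations < 3:
        actions.append(Action.SUSPEND_ACCOUNT)   # repeat offense: temporary suspension
    else:
        actions.append(Action.BAN_ACCOUNT)       # persistent abuse: permanent ban
    return actions


# Example: a user with two prior violations has the post hidden and
# their account suspended.
print(decide_actions(prior_violations=2))
```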

Online moderation is a tool that helps to empower rule-followers and disempower bullies. When maintaining a virtual community, it’s essential to bear in mind the needs of your users and ways to make them feel safe and welcome. An effective community management plan will protect users by making bullies feel unwelcome. Turn the tables on bullies and proactively establish a safe haven for your audience. Be prepared to step in quickly to address problematic situations and to escalate or report problematic people. Strong moderation can go a long way toward weeding out such behavior.

