
Should Human Moderation Be Replaced by Bots?

For nearly two decades, brands have been using social media platforms to educate and interact with customers and prospects online. Eventually, this communication turned into “community building” and has become one of the most important aspects of a brand’s external communication. These communities build loyalty, create a lifestyle out of a brand, and provide access to the type of fast customer service today’s consumers are accustomed to.

Building an online community takes an enormous amount of effort and oversight. A robust community across various social platforms doesn’t happen overnight, but is carefully fostered, managed, and encouraged over time.

Increasingly, these online communities are being infiltrated by trolls, abusive content, and spam. This type of content directly undermines the purpose of your brand’s social channels: to provide a respectful place for individuals to chat about your product or service, ask questions, and receive feedback from moderators.

This is such a problem that some brands have ultimately decided to turn off their commenting functions. With what result? This move prompts lower engagement, breeds less loyal customers, and negatively affects your brand’s bottom line. In fact, according to an MIT Sloan Management Review study, engagement directly affects conversion rate.

The alternative is human moderation, an excellent method for quelling the haters and quieting the trolls. People can easily gauge if a comment is abusive, inflammatory, or simply spam. However, this type of moderation is costly and time-consuming, and unless you’re using a social management agency with true, 24/7 coverage, your community is left vulnerable after hours.

Your brand may choose to enable pre-moderation, where comments must be approved by a moderator before being posted to the community. There are two problems with this method. First, there is an obvious time delay. Individuals may become frustrated when attempting to reply to a comment thread in real time, only to realize that their thoughts are not displayed immediately. This type of moderation is harmful to the image of an honest and free community. Second, someone has to manually approve or deny each request. This work is a slog, and a mighty expensive way to fall short of your moderation goals.

In short, human moderation is a tool every brand needs to be using. However, if you’re moderating only from 9 to 5, you’re either left exposed after hours or your community engagement tools end up less than effective. Neither is a great scenario.

Enter bots. A bot is a software program powered by rudimentary artificial intelligence (AI) that, in this scenario, helps to moderate an online community.

Humans and bots are good at very different things. A bot can process and correlate data at lightning speed, and answer simple questions without costing an organization much or using up valuable human energy. Bots cannot, however, process human emotion. They cannot listen, they cannot use their intuition, and they are unable to provide that extra “over-and-above” that so often accompanies great customer service. Their logic is likewise imperfect when it comes to moderating comments on social media, so leaving them to approve or deny comments on their own is a recipe for disaster.

They are imperfect, yes, but is there a place for bot moderation across your social platforms? Thanks to recent improvements in the algorithms used to program mod bots, they are increasingly able to detect unwanted content in the form of abusive language, slander, and spam, and flag it. Those comments are filtered out and sent to a human moderator for approval or denial before publishing, 24 hours a day, 7 days a week.
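
To make that workflow concrete, here is a minimal sketch of a flag-and-review pipeline in Python. The blocklist patterns, function names, and review queue are hypothetical stand-ins for a real trained classifier and moderation dashboard; the point is the shape of the flow: clean comments publish immediately, while suspect ones are held for a human.

```python
import re
from dataclasses import dataclass
from queue import Queue

# Hypothetical patterns standing in for a real abuse/spam classifier.
FLAGGED_PATTERNS = [
    re.compile(r"\bbuy now\b", re.IGNORECASE),  # spam-like phrasing
    re.compile(r"\bidiot\b", re.IGNORECASE),    # abusive language
    re.compile(r"https?://\S+"),                # unsolicited links
]

@dataclass
class Comment:
    author: str
    text: str

def looks_unwanted(comment: Comment) -> bool:
    """Return True if the comment matches any flagged pattern."""
    return any(p.search(comment.text) for p in FLAGGED_PATTERNS)

def publish(comment: Comment) -> None:
    """Stand-in for posting the comment to the community."""
    print(f"[published] {comment.author}: {comment.text}")

def moderate(comment: Comment, review_queue: Queue) -> None:
    """Publish clean comments immediately; route flagged ones to a human."""
    if looks_unwanted(comment):
        review_queue.put(comment)  # held for round-the-clock human review
    else:
        publish(comment)           # appears in real time

if __name__ == "__main__":
    review_queue: Queue = Queue()
    moderate(Comment("alice", "Love this product!"), review_queue)
    moderate(Comment("spambot", "BUY NOW at http://example.com"), review_queue)
    print(f"{review_queue.qsize()} comment(s) awaiting human review")
```

In practice, the `looks_unwanted` check would be a machine-learned model rather than a handful of regular expressions, but the publish-or-hold decision stays the same: the bot never rejects anything on its own, it only decides what a human needs to look at.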

Reducing opportunities for a social media crisis should be a priority for your brand. Using bots to flag suspect content, and allowing a human to review it, is a method that lets you scale without taking away the real-time feel and flow of the conversation.

According to the New York Times, a number of tech companies are developing bots that can detect bullying or abuse on social media, hoping to reduce online harassment and protect our children. The real-world opportunities for these bots are immense.

The question becomes, what will you, human moderator, do with all this free time your little bot friends are providing? Spend it building loyalty, delivering incredible customer service, and fostering engagement – things no bot can do!

While the last two decades have been all about the human element on social media, this next decade will likely see increasingly sophisticated bots used in moderation and customer service. This is not about replacing humans; rather, it frees their time for genuine interaction and provides a safe place for our communities to communicate.

