Periscope is taking new steps to shut down internet trolls. Last week the live-streaming app announced that it is rolling out a new way for users to moderate and report abuse and spam during a broadcast: Periscope comment moderation.

Until last week, Periscope offered only a set of tools similar to Twitter's for moderating harassment and abuse on the app. Users could report abuse through in-app mechanisms or block individual users. They could also restrict comments to people they know, but that option is less appealing for those who want to engage with a wider, more public community on the app.

With the new system update, the Twitter-owned app now lets audience members of a real-time broadcast vote on whether a comment is abuse, spam, or fine. Periscope designed the system to be “lightweight” and quick: “the entire process above should last just a matter of seconds.”

“We want our community to be safe on Periscope. Comments are a vital part of the experience and we’ve been working hard on a system that still feels true to the live and unfiltered nature of our platform,” Periscope said in a statement posted on Medium.

Here’s How Periscope Comment Moderation Will Work

  • During a broadcast, viewers can report comments as spam or abuse. The viewer who reports the comment will no longer see messages from that commenter for the remainder of the broadcast. The system may also identify commonly reported phrases.
  • When a comment is reported, a few viewers are randomly selected to vote on whether they think the comment is spam, abuse, or looks okay.
  • The result of the vote is shown to voters. If the majority votes that the comment is spam or abuse, the commenter will be notified that their ability to chat in the broadcast has been temporarily disabled. Repeat offenses will result in chat being disabled for that commenter for the remainder of the broadcast (a rough sketch of this flow in code follows the list).
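For readers who want to picture the mechanics, here is a minimal Python sketch of the report-jury-mute flow described above. Everything here is an illustrative assumption: the class names, the jury size, and the mute behavior are stand-ins, since Periscope has described the flow only at a high level and has not published implementation details.

```python
import random
from collections import Counter
from dataclasses import dataclass, field

JURY_SIZE = 3  # assumption: Periscope says only that "a few viewers" vote


@dataclass
class Viewer:
    name: str
    opted_out: bool = False            # viewers can opt out of voting in Settings
    muted: set = field(default_factory=set)

    def vote(self, comment: str) -> str:
        """Stand-in for the in-app voting prompt: 'spam', 'abuse', or 'ok'."""
        return "ok"


@dataclass
class Broadcast:
    viewers: list
    offenses: Counter = field(default_factory=Counter)
    chat_disabled: set = field(default_factory=set)

    def report(self, reporter: Viewer, author: str, comment: str) -> Counter:
        # 1. The reporter immediately stops seeing this commenter.
        reporter.muted.add(author)

        # 2. A few eligible viewers are randomly selected as the jury.
        pool = [v for v in self.viewers if not v.opted_out and v is not reporter]
        jury = random.sample(pool, k=min(JURY_SIZE, len(pool)))
        votes = Counter(v.vote(comment) for v in jury)

        # 3. If the majority calls it spam or abuse, chat is disabled:
        #    temporarily on a first offense, for the whole broadcast after that.
        if votes["spam"] + votes["abuse"] > votes["ok"]:
            self.offenses[author] += 1
            if self.offenses[author] > 1:
                self.chat_disabled.add(author)
        return votes  # the result of the vote is shown back to the jury
```

Note that this sketch omits the “commonly reported phrases” signal Periscope mentions, and real voters would respond through the app's voting prompt rather than a stubbed vote() method.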

As iDigitalTimes points out, this moderation tool does have some loopholes. For one, both broadcasters and viewers can choose not to participate in the moderation process: broadcasters can opt not to have their live stream moderated, and viewers can opt out of voting from their Settings. As the outlet notes, “There is also no accountability for the fact that someone could report a harmless comment or voters choosing not to ban an abusive message.”

Periscope says the new moderation feature is designed to work alongside the tools it already has in place to protect users. For example, users can still report ongoing harassment or abuse, block and remove people from their broadcasts, and restrict comments to people they know.

The company added in its blog post, “There are no silver bullets, but we’re committed to developing tools to keep Periscope a safe and open place for people to connect in real-time. We look forward to working closely with you and everyone else in the community to improve comment moderation.”

With Periscope’s real-time content moderation update, you can be sure rivals like Facebook Live will be paying close attention to this interesting new approach.

Ready to Level Up?

Our social media experts are ready to supercharge your social platforms. Click the Get Started button to learn more.

Get Started