Twitter will suspend repeat offenders posting abusive comments on Periscope live streams

As part of Twitter’s attempted crackdown on abusive behavior across its network, the company announced on Friday afternoon a new policy targeting those who repeatedly harass, threaten, or otherwise make abusive comments during a Periscope broadcaster’s live stream. According to Twitter, the company will begin to more aggressively enforce its Periscope Community Guidelines by reviewing and suspending accounts of habitual offenders.

The plans were announced via a Periscope blog post and tweet that said everyone should be able to feel safe watching live video.

We’re committed to making sure everyone feels safe watching live video, whether you’re broadcasting or just tuning in. To create safer conversation, we're launching more aggressive enforcement of our guidelines. https://t.co/dQdtnxCfx6

— Periscope (@PeriscopeCo) July 27, 2018

Currently, Periscope’s comment moderation policy involves group moderation.

That is, when a viewer reports a comment as “abuse,” “spam,” or “other reason,” Periscope’s software randomly selects a few other viewers to take a look and decide whether the comment is abuse, spam, or looks okay. The random selection prevents a person (or group) from using the reporting feature to shut down conversations. Only if a majority of the randomly selected voters agree the comment is spam or abuse does the commenter get suspended.
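For illustration, here is a minimal Python sketch of how such a flash-jury vote could work. The viewer pool, vote labels, jury size, and strict-majority rule are assumptions made for the example; Periscope has not published its actual implementation.

```python
import random
from collections import Counter

# Hypothetical viewer pool: each viewer id maps to a stub vote() function
# standing in for the "abuse / spam / looks okay" prompt shown in the app.
viewers = {
    "ana": lambda comment: "abuse",
    "bo":  lambda comment: "looks okay",
    "cy":  lambda comment: "abuse",
    "dee": lambda comment: "spam",
    "eli": lambda comment: "looks okay",
}

def flash_jury(comment, reporter, jury_size=3):
    """Randomly pick a small jury (excluding the reporter), collect their
    labels, and return True only on a majority abuse/spam verdict."""
    eligible = [v for v in viewers if v != reporter]
    jury = random.sample(eligible, min(jury_size, len(eligible)))
    votes = Counter(viewers[j](comment) for j in jury)

    abusive_votes = votes["abuse"] + votes["spam"]
    return abusive_votes > len(jury) / 2   # strict majority required

if __name__ == "__main__":
    suspended = flash_jury("an abusive comment", reporter="bo")
    print("Suspend commenter's chat for this broadcast:", suspended)
```

The random draw is the key design choice: because the reporter never knows who will be asked to vote, a coordinated group can’t reliably weaponize the report button against a broadcaster or a viewpoint.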

However, this suspension only disables the offender’s ability to chat during that broadcast – it doesn’t prevent them from continuing to watch other live broadcasts and making further abusive remarks in the comments. Though they risk another temporary ban by doing so, they can still disrupt the conversation and make the video creator – and their community – feel threatened or otherwise harassed.

Twitter says that accounts that are repeatedly suspended for violating its guidelines will soon be reviewed and may be suspended outright. This enhanced enforcement begins on August 10, and it is one of several user-safety changes the company is making across Periscope and Twitter.
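A rough sketch of how repeat offenses might be tracked and escalated to an account-level review is below. The strike threshold, storage, and function names are hypothetical; Twitter has not said how many in-broadcast suspensions trigger a review.

```python
from collections import defaultdict

# Assumed value for the example; Twitter has not published a threshold.
REVIEW_THRESHOLD = 3

# In-memory counter of per-broadcast chat suspensions per account.
suspension_counts = defaultdict(int)

def record_chat_suspension(account_id):
    """Record one in-broadcast chat suspension and report whether the
    account should now be queued for a full account review."""
    suspension_counts[account_id] += 1
    return suspension_counts[account_id] >= REVIEW_THRESHOLD

# Example: the third strike queues the account for review.
flagged = False
for _ in range(3):
    flagged = record_chat_suspension("user_123")
print("Queue user_123 for account review:", flagged)
```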

To what extent those changes have been working is questionable. Twitter may have policies in place around online harassment and abuse, but its enforcement has been hit-or-miss. Still, ridding its platform of unwanted accounts – including spam, despite the impact on monthly active user numbers – is something the company must do for its long-term health. The fact that so much hate and abuse is seemingly tolerated or overlooked on Twitter has been an issue for some time, and the problem continues today. It could also be a factor in Twitter’s stagnant user growth. After all, who willingly signs up for harassment?

The company is at least attempting to address the problem, most recently by acquiring the anti-abuse technology provider Smyte. Smyte’s transition to Twitter didn’t go smoothly, but its technology could help Twitter address abuse at a greater scale in the future.
