May 6, 2020

Twitter Test Asks Users to Revise Tweets With ‘Harmful’ Language

Sean Burch

Twitter wants you to watch your language while tweeting: the company said Tuesday it has started running a test that prompts users to revise reply tweets that contain "harmful" words.

"When things get heated, you may say things you don't mean," Twitter's support account tweeted. "To let you rethink a reply, we're running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it's published if it uses language that could be harmful."

— Twitter Support (@TwitterSupport) May 5, 2020
Reps for Twitter did not immediately respond to questions about how many users may see the new prompt as part of the test, or about what the company considers "harmful."

Still, Twitter’s hateful conduct rules offer some insight into what the company considers off-limits language.

“We prohibit targeting individuals with repeated slurs, tropes or other content that intends to dehumanize, degrade or reinforce negative or harmful stereotypes about a protected category,” Twitter’s rules outline. “This includes targeted misgendering or deadnaming of transgender individuals.”

Twitter has spent much of the last few years rolling out new rules to police its service. In 2018, Twitter started banishing mean tweets to the “show more replies” section at the bottom of reply threads.

It's unclear when, or whether, Twitter will roll out the think-before-you-tweet notification to all users.

Read original story Twitter Test Asks Users to Revise Tweets With ‘Harmful’ Language at TheWrap
