Twitter Confirms It’s Trying To Identify Abusive Accounts Before Anyone Reports Them
In February, Twitter said it would make it harder for abusive users to create new accounts, launch a safe-search function, and collapse tweet replies deemed abusive so they are hidden from immediate view.
The company has now unveiled new tools to help you keep trolls out of your feed.
Once Twitter detects a troll, it will work to limit their reach.
Twitter CEO Jack Dorsey has said that the company is trying to improve safety on the social media site. While you probably shouldn’t expect Twitter to get too specific with its follow-ups, it’ll be nice to be kept in the loop nonetheless.
The latest updates are based on feedback users have provided since late December on how to improve Twitter.
The updates will roll out “in the coming weeks”. As part of its effort to stop abusive users from creating new accounts, the company will now keep a watchful eye on the fields you fill in when registering. For more granular control, you can also now mute “eggs” – accounts that still use the default Twitter egg avatar.
People don’t have to use their real names on Twitter, and that same openness makes the platform easier for bullies and abusers as well.
Through changing and challenging times, having social platforms that take abuse seriously is not just desirable; it is imperative if things are to move forward.
These features have only just launched, so how widely they will be used is still unknown, but simplicity tends to do well on Twitter. “This will throttle abusers even before someone will even recognize they are being abused”.
The mute feature, which Twitter launched in November, lets users filter out keywords, phrases and conversations they do not want to see in their notifications.
And finally, Twitter will begin updating people about the status of the reports they file with its support team, an apparent response to complaints that users seldom knew whether anyone had received their reports of harassment and were never told what action had been taken. Try hurling that kind of abuse or trolling in real life, and you are very likely to be thrown out on your butt just about anywhere. Facebook also announced a suite of new suicide prevention tools today, which leverage AI to reach out to at-risk users based on their activity in comments, posts, and Facebook Live videos.