Twitter users poised to dive into a heated online debate will be warned they are about to enter an “intense” conversation, under a safety trial.
The social media platform is testing a feature that drops a notice under a potentially contentious exchange, stating: “Heads up. Conversations like this can be intense.” Another prompt, which appears to be aimed at people making a reply, goes to greater lengths to calm down users and urges the tweeter to “look out for each other”, “remember the human” and note that “diverse perspectives have value”.
The trial is being conducted with a small group of users, on English-language settings, on Apple’s iOS platform.
In testimony to US senators this week, the Facebook whistleblower Frances Haugen referred to Twitter’s attempts to take the heat out of some interactions as an example that her former employer could follow. Haugen, who said Facebook was too focused on making its platform “twitchy” and “viral”, said Twitter had reduced angry interactions by introducing a feature that asked users whether they wanted to tweet a link they had not tapped on.
The “intense” conversation test is Twitter’s latest attempt to limit abuse on its platform, an issue that came into renewed focus in the UK this year after England football players were racially abused by Twitter users during the Euro 2020 tournament.
Other initiatives being tested by the US company include a feature that allows users to remove unwanted followers without officially blocking them, and a “safety mode” that blocks accounts for seven days if the tech firm’s systems spot them using harmful language or sending repetitive, uninvited replies and mentions. The safety mode feature is being trialled initially among a small group of users, with a particular emphasis on female journalists and members of marginalised communities.
Twitter is also considering giving users the ability to archive old tweets and remove them from public view after a set period of time, such as 30, 60 or 90 days.
Online abuse is coming into sharp legislative focus in the UK with the draft online safety bill, which imposes a duty of care on social media companies to protect users from harmful content. Social media firms are required under the draft bill to submit to Ofcom, the communications watchdog, a “risk assessment” of content that causes harm. A joint committee of MPs and peers is scrutinising the bill and is due to report at the end of the year.