It is the social platform’s latest attempt to fight misinformation. Twitter has chosen a test group of users in the US, South Korea and Australia who can already use the new feature, before making it available to its almost 200 million users worldwide.
The company said it is still working out the most ‘effective approach’ to the upcoming option.
‘We’re testing a feature for you to report Tweets that seem misleading – as you see them,’ the Jack Dorsey-led company tweeted, unveiling the test. ‘Starting today, some people in the US, South Korea, and Australia will find the option to flag a Tweet as ‘It’s misleading’ after clicking on Report Tweet.’
‘We may not take action on and cannot respond to each report in the experiment, but your input will help us identify trends so that we can improve the speed and scale of our broader misinformation work.’
Twitter did not disclose how many people are in the test group. DailyMail.com has reached out to Twitter with a request for comment.
To report a misleading tweet, users in the test group can click on the three dots on the right side of a tweet, where the option ‘It’s misleading’ will appear.
It’s not Twitter’s first effort to moderate disputed claims made by users on the highly politicized platform.
In January, Twitter launched Birdwatch, another approach to fight misinformation.
Through Birdwatch, users can add notes to tweets they deem misleading. Rather than appearing directly in the tweet, the notes and the sources users cite are linked below it.
‘I think ultimately over time, [misinformation] is a problem best solved by the people using Twitter itself,’ said CEO Jack Dorsey on the company’s 2020 fourth-quarter earnings call, held in February.
In the aftermath of 2020’s controversial US presidential election, Twitter made another move against false claims by flagging tweets alleging election fraud.
‘Official sources stated that this is false and misleading,’ read one warning.
Another warning, displayed below a tweet from Representative Marjorie Taylor Greene, read: ‘Official sources may not have called the race when this was tweeted.’
The initial stages of the COVID-19 pandemic, and the uncertainty that came with them, exacerbated the dangers of misleading content on the platform.
In response, Twitter created three categories of potentially harmful tweets in May 2020 on which it would take action: misleading information, disputed claims and unverified claims.
Warnings reading ‘Some or all of the content shared in this tweet conflicts with guidance from public health experts regarding COVID-19,’ or ‘Get the facts about COVID-19,’ appeared under tweets suspected of spreading misinformation.
Other social media platforms are also stepping up in the fight against false claims.
Facebook launched ‘fake news’ labels in October 2019, while Google has worked alongside third-party fact-checkers to curb misinformation since 2016.
On March 25, Dorsey, along with Facebook CEO Mark Zuckerberg and Google CEO Sundar Pichai, faced questions in Congress about their platforms’ role in the current spread of misinformation.
Dorsey admitted that Twitter had played a role in the January 6 Capitol riot, after rioters used the platform to organize and plan their insurrection.
Lawmakers subsequently urged the tech companies to introduce stricter and more effective ways to fact-check information on social media.