
Facebook’s algorithm ‘promoted toxic and hateful content’

Facebook’s algorithm promoted ‘toxic and hateful’ content by awarding five points to posts that received emoji reactions such as ‘angry’ and ‘sad’, but only one point to those that received likes, leaked documents claimed today. 

The firm’s algorithm, which decides what people see on a newsfeed, was allegedly programmed to use the reaction emoji as a sign to push more provocative content.

The five emojis of ‘love,’ ‘haha,’ ‘wow,’ ‘sad’ and ‘angry’ were launched five years ago to give users an alternative way to react to content aside from the traditional ‘like’.

But a ranking algorithm meant emoji reactions were treated as five times more valuable than ‘likes’, according to internal papers revealed by the Washington Post.

The idea behind this was that high numbers of reaction emojis on posts kept users more engaged – a crucial element of Facebook’s business model.

But it meant content that provoked strong reactions such as hate and anger was shown to more people than more benign posts that users merely ‘liked’ – amplifying online arguments.  
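The weighting described in the leaked papers can be sketched in a few lines of code. This is a hypothetical illustration based only on the reported five-to-one ratio – the weight values, function name and data layout are assumptions, not Facebook’s actual implementation.

```python
# Hypothetical sketch of the reported reaction weighting: each emoji
# reaction counts five times as much as a 'like' toward a post's
# engagement score used to rank the newsfeed.

REACTION_WEIGHTS = {
    "like": 1,
    "love": 5,
    "haha": 5,
    "wow": 5,
    "sad": 5,
    "angry": 5,
}

def engagement_score(reactions):
    """Sum weighted reaction counts; higher-scoring posts rank higher."""
    return sum(REACTION_WEIGHTS.get(kind, 0) * count
               for kind, count in reactions.items())

# A post with 100 likes scores 100, while a post with 100 angry
# reactions scores 500 - so under this scheme, provocative content
# outranks equally popular content that people merely 'liked'.
print(engagement_score({"like": 100}))   # 100
print(engagement_score({"angry": 100}))  # 500
```

The sketch makes the reported problem concrete: any scheme that weights all emoji reactions equally and above likes will, as a side effect, boost posts that provoke anger just as much as those that provoke love.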


And the company’s own researchers and scientists found that posts prompting angry reactions were far more likely to include misinformation and low-quality news.

One staffer allegedly wrote that favouring ‘controversial’ posts such as those making people angry could open ‘the door to more spam/abuse/clickbait inadvertently’.

Another is said to have replied: ‘It’s possible’. In 2019, its data scientists confirmed the link between posts sparking the angry emoji and toxicity on its platform.

This means Facebook stands accused of promoting the worst parts of its site for three years – making it more prominent and seeing it reach a much bigger audience.

It would have also had a negative effect on the work of its content moderators who were trying to reduce the amount of toxic and harmful posts being seen by users.

Facebook whistleblower Frances Haugen told MPs yesterday that the firm is ‘unquestionably’ making online hate worse because it is programmed to prioritise extreme content


The discussions between staff were revealed in papers given to the Securities and Exchange Commission and provided to Congress by the lawyers of Frances Haugen.

How Facebook’s profits shot up as daily active users hit 1.93 billion

Facebook profits shot higher as the number of daily active users on its site and apps hit 1.93 billion on average in September.

This was 6 per cent up on last year.

Around 3.6 billion people used Facebook or one of its other platforms – which include WhatsApp and Instagram – last month.

Facebook’s profits shot 17 per cent higher to £6.7 billion in the third quarter amid the jump in users.

But the company’s revenues fell short of Wall Street forecasts as Apple’s new privacy rules hit sales.

Since April, Apple has required all apps to ask users if they want to be tracked, which has made it harder for advertisers to target the right audiences. Facebook said Apple’s new regime would continue to hit its business for the rest of the year.

Facebook’s total revenue – most of which comes from advertising – rose to £21 billion in the third quarter.

This was £400 million below expectations – though it was more than a third higher than the same period of last year, when companies had put their marketing budgets on ice during the pandemic.

The whistleblower said in London yesterday that Facebook was ‘unquestionably’ making online hate worse because it is programmed to prioritise extreme content.

Miss Haugen told MPs and peers that bosses at the firm were guilty of ‘negligence’ in not accepting how the workings of their algorithm were damaging society.

The American data scientist claimed the tech giant was ‘subsidising hate’ because its business model made it cheaper to run angry and divisive adverts.

She said there was ‘no doubt’ the platform’s systems would drive more violent events because its most extreme content is targeted at the most impressionable people.

Miss Haugen also issued a stark warning to parents that Instagram, owned by Facebook, may never be safe for children as its own research found it turned them into addicts. 

She also told the joint committee on the draft Online Safety Bill that it was a ‘critical moment for the UK to stand up’ and improve social media.

The Bill will impose a duty of care on social media companies to protect users from harmful content and give watchdog Ofcom the power to fine them up to 10 per cent of their global turnover.

Facebook is currently battling a crisis after Miss Haugen, a former product manager at the firm, leaked thousands of internal documents that revealed its inner workings.

Its founder Mark Zuckerberg has previously rejected her claims, saying her attacks on the company were ‘misrepresenting’ the work it does.

Yesterday the committee highlighted how the tech giant had previously claimed it removes 97 per cent of hateful posts on the platform.

But leaked research showed its own staff estimated that it took down only around 3 to 5 per cent of hate speech and 0.6 per cent of content that breached its rules on violence and incitement.


Facebook founder Mark Zuckerberg (pictured) has previously rejected the claims made by Miss Haugen, saying her attacks on the company were ‘misrepresenting’ the work it does

Asked about hate speech, Miss Haugen said: ‘Unquestionably it is making hate worse.’ 

She said Facebook was ‘very good at dancing with data’ to make it seem as though it was on top of the problem but was reluctant to sacrifice even a ‘slither of profit’ to make the platform safer.

The committee also heard how Facebook’s research found that 10 to 15 per cent of ten-year-olds were on the platform – despite the minimum age being 13.

Lord Black of Brentwood noted that although the Bill exempts legitimate news publishers from its scope, there is no obligation on Facebook and other platforms to carry such journalism, since they would have to observe the regulator’s codes.

AI would effectively be making these decisions, he said, and asked if Miss Haugen trusted AI to make these types of judgment. 

Miss Haugen said the Bill should not treat a ‘random blogger’ the same way as a recognised news source as this would dilute users’ access to high quality news on the platform.

She said: ‘I’m very concerned that if you just exempted across the board you will make the regulations ineffective.’ 

She further warned that ‘any system where the solution is AI is a system that’s going to fail’.


The thumbs up ‘Like’ logo is shown on a sign at Facebook’s offices in Menlo Park, California

In response to the Washington Post report about emojis, a Facebook spokesman told MailOnline today: ‘We continue to work to understand what content creates negative experiences, so we can reduce its distribution. This includes content that has a disproportionate amount of angry reactions, for example.’

Mr Zuckerberg also spoke about the issue on October 6, saying: ‘The argument that we deliberately push content that makes people angry for profit is deeply illogical. 

‘We make money from ads, and advertisers consistently tell us they don’t want their ads next to harmful or angry content. And I don’t know any tech company that sets out to build products that make people angry or depressed. The moral, business and product incentives all point in the opposite direction.’

And in response to yesterday’s hearing on the draft Online Safety Bill, a Facebook spokesman said today: ‘Contrary to what was discussed at the hearing, we’ve always had the commercial incentive to remove harmful content from our sites. 

‘People don’t want to see it when they use our apps and advertisers don’t want their ads next to it. That’s why we’ve invested $13 billion and hired 40,000 people to do one job: keep people safe on our apps. 

‘As a result we’ve almost halved the amount of hate speech people see on Facebook over the last three quarters – down to just 0.05 per cent of content views. 

‘While we have rules against harmful content and publish regular transparency reports, we agree we need regulation for the whole industry so that businesses like ours aren’t making these decisions on our own. The UK is one of the countries leading the way and we’re pleased the Online Safety Bill is moving forward.’  

And finally, on the subject of safety, misinformation and harmful content, a Facebook spokesman said: ‘Every day our teams have to balance protecting the ability of billions of people to express themselves openly with the need to keep our platform a safe and positive place. 

‘We continue to make significant improvements to tackle the spread of misinformation and harmful content. To suggest we encourage bad content and do nothing is just not true.’
