Martyn Landi, PA Technology Correspondent
Mark Zuckerberg has defended Facebook’s record of combating misinformation on the social network during the coronavirus outbreak.
The Facebook founder and chief executive said the platform removed all content which “puts people in imminent risk of physical harm”.
But he argued that freedom of expression was a consideration for other content, such as posts promoting the anti-vaccination movement, which he called a more “sensitive topic” and which therefore did not need to be completely removed.
Social media and internet companies have come under increased scrutiny during the Covid-19 pandemic, with platforms including Facebook and WhatsApp being criticised for allowing misleading claims to spread.
“We break this (misinformation) into two categories: so there’s harmful misinformation that puts people in imminent risk of physical harm, so things like saying that something is a proven cure for the virus when in fact it isn’t, we will take that down,” Mr Zuckerberg told BBC Radio 4’s Today programme.
“Even if something isn’t going to cause an imminent risk of physical harm, we don’t want misinformation to be the content that is broadly going viral across the networks.
“So if you’re seeing something that’s going to put people in imminent risk of harm we take that down.
“If you’re seeing something that is just wrong we don’t take that down but we stop it from spreading generally.
“That’s (anti-vax content) a much more sensitive topic because there are a lot of things in society that someone thinks is bad but other people are on the other side of that issue and think it’s good, and I think unless something is very clear, that is going to cause real damage to someone in the near term, I think you want generally to allow as wide an aperture of expression as possible across the internet.”
Instead of taking such content down, Facebook uses a warning label system which obscures a post with a label warning viewers it contains details which independent fact-checkers have found to be false.
Mr Zuckerberg cited figures released by Facebook earlier this month which said the company had placed misinformation warning labels on around 50 million posts across its services. He argued that because 95% of people did not click through these labels when they encountered them, the system was slowing the spread of misinformation.
The Facebook founder added that 5G misinformation, which he acknowledged had been “very prevalent” in the UK and has led to a number of phone masts being attacked, is considered an imminent threat and would be removed immediately.
However, a number of anti-5G groups promoting conspiracy theories about the technology remain active on the site.
Since the coronavirus outbreak, a number of online services have introduced tools to try to direct people to accurate information from official health authorities.
Facebook, Twitter and Google all show links to health organisations at the top of search results linked to the virus, while WhatsApp has launched several chatbot services which can directly provide users with up-to-date health advice.
Asked about combating misinformation ahead of the US presidential election later this year, Mr Zuckerberg said Facebook was better prepared for attempts to influence voters than it had been before the previous election, four years ago.
“We’ve learned a lot about how politics works online since 2016. One big area that we were behind on in 2016, but now I think are quite advanced at, is identifying and fighting these coordinated information campaigns that come from different state actors around the world, whether it’s Russia or Iran or in some cases China,” he said.
“Countries are going to continue to try and interfere, we’re going to see issues like that, it’s a little bit of an arms race in that way.
“But I certainly think our systems are a lot more advanced now.
“I think in many ways it’s more advanced than any other company or a lot of governments around the world, and I feel pretty confident about our ability to help protect the integrity of the upcoming elections.”