Tech giants’ misinformation code ‘limp and pointless’
A voluntary code allowing tech giants to police misinformation and disinformation has been slammed as "pointless and shameless", with commentators saying its opt-in conditions would make little difference to the false information regularly served up to unknowing social media users in Australia.
Companies including Facebook, Google and TikTok signed up to the voluntary code today, fulfilling another recommendation from the Australian Competition and Consumer Commission's 18-month inquiry into digital platforms.
But the code comes into effect just as Facebook has removed all fact-checked Australian and international news from its platform, along with content from many non-government organisations, support groups and charities, a move experts said was likely to increase the spread of fake news.
The Digital Industry Group (DIGI) released the voluntary code of practice for tech platforms on Monday, with managing director Sunita Bose saying it had been developed with help from the University of Technology Sydney and First Draft News.
Twitter, Google, Facebook, Microsoft, Redbubble and TikTok have signed up to the code, which describes seven "objectives" including tackling disinformation in advertisements, prioritising fact-checked news content, improving political advertising, and funding research into disinformation.
"Companies are committing to robust safeguards against harmful misinformation and disinformation that also protect privacy, freedom of expression and political communication," Ms Bose said.
But the code is voluntary, and signatories can opt out of individual provisions as they wish.
The Australian Communications and Media Authority is due to review the code's provisions and effectiveness by June 20.
Communications Minister Paul Fletcher said the Government would be carefully scrutinising whether the code actually tackles "serious harms that arise from the spread of disinformation and misinformation on digital platforms".
But Reset Australia executive director Chris Cooper slammed the guidelines as "limp, toothless" and "pointless," and said a voluntary code of conduct would achieve little to stop the spread of disinformation on social networks because doing so was often against the companies' commercial interests.
"This code attempts to suggest it can help 'empower consumers to make better informed choices' when the real problem is the algorithms used by Facebook and others actively promote disinformation because that's what keeps users engaged," he said.
"Any voluntary, opt-in code is inherently untrustworthy because we know it's not in the business interests of these platforms to take real action on misinformation."
Mr Cooper said it was also "laughable" that signatories could choose to opt in or out of provisions within the code "if it starts hurting their bottom line".
The Australia Institute's Centre for Responsible Technology director Peter Lewis said it was particularly galling that Facebook had signed the code against misinformation after repeatedly failing to tackle it, and just days after stripping news content from its platform.
"In recent days Facebook has shown it is capable of removing huge swathes of content from its site to forward its own political agenda yet continues to claim it cannot discharge a general responsibility to manage damaging and dangerous content on its platform," he said.
"Without a legally enforceable obligation to actively manage misinformation and disinformation, we fear this code will simply become a digital fig leaf."
Mr Lewis said disinformation should be regulated as part of the existing Online Safety Act.
Originally published as Tech giants' misinformation code 'limp and pointless'