In the war on disinformation, the enemy can be hard to identify. Journalists, politicians, governments and even grandparents have been accused of enabling the spread of online falsehoods.
None of these groups is entirely innocent, but the real adversary is more mundane. As Facebook whistleblower Frances Haugen testified late last year, social media’s own algorithms are what make disinformation accessible.
Since its launch in 2004, Facebook has grown from a student social networking site into a surveillance monster that destroys social cohesion and democracy worldwide. Facebook collects troves of user data, including intimate facts such as body weight and pregnancy status, to map the social DNA of its users. The company then sells this information to anyone who wants to “micro-target” its 2.9 billion users, from shampoo manufacturers to Russian and Chinese intelligence services. In this way, Facebook allows third parties to manipulate minds and trade in “human futures”: predictive models of the choices individuals are likely to make.
Around the world, Facebook has been used to sow distrust in democratic institutions. Its algorithms have facilitated real-world violence, from genocide in Myanmar to the recruitment of terrorists in South America, West Africa, and the Middle East. Lies about election fraud in the US, promoted by former president Donald Trump, inundated Facebook in the lead-up to the 6 January riots last year. Meanwhile in Europe, Facebook enabled Belarusian strongman Aleksandr Lukashenko’s perverse efforts to use migrants as weapons against the EU.
In the Czech Republic, disinformation originating from Russia and shared on the site has swamped Czech cyberspace, thanks to Facebook’s malicious code. One analysis by my company found that the average Czech citizen is exposed to 25 times more Covid-19 vaccine disinformation than the average American. The situation is so dire, and government action so inept, that Czechs rely on civil society, including volunteers known as the Czech Elves, to fight back.
Efforts to mitigate Facebook’s threat to democracy so far have failed miserably. In the Czech Republic, Facebook entered into a partnership with Agence France-Presse (AFP) to identify harmful content. But with just one part-time employee and a monthly quota of only 10 dubious posts, these efforts are a drop in the disinformation ocean. The “Facebook files”, published by The Wall Street Journal, confirm that Facebook takes action on “as little as 3% to 5% of hate speech”.
Facebook has given users the ability to opt out of custom and political ads, but this is a token gesture. Some organisations, like Ranking Digital Rights, have called for the platform to disable ad targeting by default. That is not enough. The micro-targeting at the root of Facebook’s business model relies on artificial intelligence to attract users’ attention, maximise engagement, and disable critical thinking.
In many ways, micro-targeting is the digital equivalent of the opioid crisis. But the US congress has moved aggressively to protect people from opioids through legislation designed to increase access to treatment, education and alternative medication. To stop the world’s addiction to fake news and lies, legislators must recognise the disinformation crisis for what it is and take similar action, starting with appropriate regulation of micro-targeting.
The problem is that no one outside Facebook knows how the company’s complex algorithms work, and it could take months, if not years, to decode them. This means that regulators will have no choice but to depend on Facebook’s own people to guide them through the factory. To encourage this co-operation, congress must offer blanket civil and criminal immunity and financial indemnification.
Regulating social media algorithms seems complicated, but it is low-hanging fruit compared to even greater digital hazards on the horizon. “Deep fakes”, the large-scale manipulation of videos and images by artificial intelligence (AI) to sway opinion, are barely a topic of conversation in congress. While legislators fret over the threats posed by traditional content, deep fakes pose an even greater challenge to individual privacy, democracy and national security.
Meanwhile, Facebook is becoming more dangerous. A recent investigation by MIT Technology Review found that Facebook funds misinformation by “paying millions of ad dollars to bankroll clickbait actors” through its advertising platform. And chief executive Mark Zuckerberg’s plans to build a metaverse, “a convergence of physical, augmented, and virtual reality”, should frighten regulators everywhere. Just imagine the potential damage those unregulated AI algorithms could cause if they are permitted to create a new immersive reality for billions of people.
In a statement after recent hearings in Washington, DC, Zuckerberg repeated an offer he has made before: regulate us. “I don’t believe private companies should make all of the decisions on their own,” he wrote on Facebook. “We’re committed to doing the best work we can, but, at some level, the right body to assess tradeoffs between social equities is our democratically elected congress.”
Zuckerberg is correct: congress does have a responsibility to act. But Facebook has a responsibility to act as well. It can show congress what social inequities it continues to create and how it creates them. Until Facebook opens its algorithms to scrutiny, guided by the know-how of its own experts, the war on disinformation will remain unwinnable, and democracies around the world will continue to be at the mercy of an unscrupulous, renegade industry. - Project Syndicate