BANGKOK/MANILA/MUMBAI — U.S. President Donald Trump’s push for unfettered free speech on social media and Meta’s response to it are sparking concerns about serious safety risks, and even threats to more lives, particularly in South and Southeast Asia.
“Mark Zuckerberg uses the word ‘censorship.’ I would replace that with ‘safety,’” Maria Ressa, a Nobel Peace Prize winner and the CEO of Rappler, a Philippine digital investigative media company, said in a recent interview.
“We now have a platform for more than 3.2 billion people around the planet that has just decided that profit is more important than safety.”
Just a day after taking office as U.S. president on Jan. 20, Trump signed an executive order outlawing government policing of speech on social media. Zuckerberg, CEO and founder of Meta Platforms, and Elon Musk, owner of X, were seated near Trump at his inauguration ceremony.
“After years and years of illegal and unconstitutional federal efforts to restrict free expression, I will also sign an executive order to immediately stop all government censorship and bring back free speech to America,” Trump said at the ceremony.
Shortly before that, Zuckerberg had announced plans to scale back Meta’s content filters, saying they caused mistakes and censorship.
“We’re going to work with President Trump to push back on governments around the world that are going after American companies and pushing to censor more,” he said. “We’re going to dramatically reduce the amount of censorship on our platforms.”
Zuckerberg added that Meta would “get rid of fact-checkers and replace them with Community Notes, similar to X, starting in the U.S.”
U.S. social media platforms — including Meta’s Facebook and WhatsApp — have in the past been blamed for misinformation, strife and violence in multiple Asian countries. They include Bangladesh, India, the Philippines, Sri Lanka, Indonesia and, most seriously, Myanmar — where human rights groups have slammed Facebook for facilitating persecution of the Rohingya Muslim minority in Rakhine state.
The massive forced exodus of Rohingya followed a brutal 2017 military sweep, variously characterized as ethnic cleansing and genocide, that left at least 10,000 dead and some 6,000 women and girls raped.
Meta’s policy change is reviving concerns among rights activists that a free-speech crusade against controlling governments in Asia, such as China, Vietnam and, to a lesser degree, Thailand, could provide cover for spreading hatred and inciting violence in vulnerable territories.
Amnesty International on Jan. 23 said it expected the abolition of independent fact-checking in the U.S. to be “rolled out internationally” and even further exacerbate “Meta’s contributions to human rights harms and offline violence, as egregious as the crimes against the Rohingya.”
Taylor Robb-Mccord, Meta’s Asia-Pacific policy communications manager, confirmed that the company will end its third-party fact-checking program in the U.S. over the next couple of months and phase in “a more comprehensive Community Notes system.”
Community Notes enables contributors to attach context and other material to posts, which is intended to serve as a form of content moderation.
“We will continue to improve it over the course of the year before expansion to other countries,” Robb-Mccord told Nikkei Asia. “There are no changes to other countries at this time.”
“Stopping fact-checking at a time of massive disinformation is a bit like pulling down a Los Angeles fire station during a major fire: You don’t know whether the fire station would have been enough to contain the fire, but you’re certainly depriving yourself of a proven and valuable tool to fight it,” Fabrice Fries, the chair and CEO of Agence France-Presse, wrote last week in Le Monde.
Fries noted that AFP has the world’s broadest fact-checking network with 150 full-time fact-checkers working in 30 countries and 26 languages, and “is the most affected by this U-turn.”
“It is clearly wrong of Zuckerberg to speak of fact-checking as censorship — it is not,” Laetitia van den Assum, a retired Dutch diplomat and former member of a Rakhine advisory commission, told Nikkei. “I fear that the U.S. tech oligarchs have not learned the Myanmar lessons of 2013-18. Social media platforms remain attractive for conflict parties and politicians of all types.”
Robb-Mccord said Meta has made “a substantial investment” in safety and security, and conducted human rights due diligence to “identify and address potential risks in Myanmar.”
In India, no application has turbocharged the spread of misinformation and hate speech as much as WhatsApp, triggering rioting, lynching and civil strife over the past few years. The app is India’s primary instant messaging platform, boasting well over half a billion users in the country.
With WhatsApp, the challenges are multipronged. Its built-in end-to-end encryption means that moderators can only review messages that have been flagged by users or by machine-learning systems. Indians also speak dozens of languages and hundreds of dialects, complicating moderation further.
Indeed, in its first annual report in 2022, Meta’s oversight board raised concerns about “whether Meta has invested sufficient resources in moderating content in languages other than English.”
The lack of effective moderation has prompted the Indian government to impose frequent internet shutdowns. According to digital rights watchdog Access Now, India led the world in internet shutdowns for the sixth consecutive year in 2023, with “at least 116 recorded shutdowns.”
Social media, particularly Facebook, has been a breeding ground for hate speech and disinformation in the Philippines despite Meta’s efforts to police such content, a dynamic seen most starkly during populist firebrand Rodrigo Duterte’s presidency from 2016 to 2022.
Duterte’s followers flooded Facebook with messages in support of him, deploying disinformation and propaganda and attacking anyone they deemed critical of his policies, including his blood-soaked drug war.
VERA Files, a Philippine independent news and fact-checking outfit, found in a 2018 study that disinformation on Facebook was “overwhelmingly political.”
Facebook banned pro-Duterte “fake news” sites that year and two years later took down accounts that supported Duterte and his policies, sparking his ire.
“If government cannot use [social media] for the good of the people, then we have to talk. We have to talk sense,” Duterte said in 2020.
Source: Nikkei Asia