Pro-government news outlets and influencers in Bangladesh have in recent months promoted AI-generated disinformation created with cheap tools offered by artificial intelligence start-ups, according to a report by the Financial Times.
The country of 170 million is heading to the polls in early January, a contest marked by a bitter and polarising power struggle between incumbent Prime Minister Sheikh Hasina and her rivals, the opposition Bangladesh Nationalist Party (BNP), the report said.
The FT report cited several examples of disinformation created using artificial intelligence.
Under pressure, Google and Meta have recently announced policies requiring campaigns to disclose whether political adverts have been digitally altered.
But the examples from Bangladesh show not only how these AI tools can be exploited in elections but also the difficulty in controlling their use in smaller markets that risk being overlooked by American tech companies.
In Bangladesh, such disinformation is fuelling an already tense political climate ahead of the vote.
In one video posted on X in September by BD Politico, an online news outlet, a news anchor for “World News” presented a studio segment — interspersed with images of rioting — in which he accused US diplomats of interfering in Bangladeshi elections and blamed them for political violence.
The video was made using HeyGen, a Los Angeles-based AI video generator that allows customers to create clips fronted by AI avatars for as little as $24 a month.
Other examples include anti-opposition deepfake videos posted on Meta’s Facebook, including one that falsely purports to be of an exiled BNP leader suggesting the party “keep quiet” about Gaza to not displease the US. The Tech Global Institute, a think-tank, and media non-profit Witness both concluded the fake video was likely AI-generated.
AKM Wahiduzzaman, a BNP official, said that his party asked Meta to remove such content but “most of the time they don’t bother to reply”. Meta removed the video after being contacted by the Financial Times for comment.
Experts in Bangladesh said the problem is exacerbated by the lack of regulation or its selective enforcement by authorities. Bangladesh’s Cyber Security Act, for example, has been criticised for giving the government draconian powers to crack down on dissent online.
Sabhanaz Rashid Diya, a Tech Global Institute founder and former Meta executive, said a greater threat than the AI-generated content itself was the prospect that politicians and others could use the mere possibility of deepfakes to discredit uncomfortable information.
In neighbouring India, for example, a politician responded to a leaked audio recording, in which he allegedly discussed corruption in his party, by claiming it was fake, a claim subsequently dismissed by fact-checkers.
“It’s easy for a politician to say, ‘This is deepfake’, or ‘This is AI-generated’, and sow a sense of confusion,” she said. “The challenge for the global south . . . is going to be how the idea of AI-generated content is being weaponised to erode what people believe to be true versus false.”