Coming into 2024, there were plenty of reasons to worry about how platforms would handle the Indian election. It’s the world’s largest election, with nearly a billion people eligible to vote in an increasingly polarized nation. Prime Minister Narendra Modi’s party is seen as the heavy favorite, amid a campaign in which he has stirred up his base with genuinely violent rhetoric against Muslims. It wasn’t hard to see the danger of that rhetoric spilling out online. Combine that with the widespread availability of AI-powered video and audio manipulation tools, and you had the ingredients for misinformation to thrive.
What we’ve actually seen has been a little different. For just over a month, I’ve been working on an AI Election Tracker with our features director, Victoria Turk, compiling more than three dozen verified uses of AI in elections across the world in 2024. But while we can verify that AI was used in various cases, it’s harder to classify those uses as misinformation. We’ve seen dead politicians resurrected to campaign for their children and famous actors pulled into bogus endorsements, but it’s generally done with a wink and a nod. With voting drawing to a close this week, there’s a whole lot of parody and little actual malice.
That’s not to downplay the violent rhetoric at work in Modi’s campaign, which has put Muslims at risk and incited real-world violence. In some cases, that rhetoric has overlapped with generative AI tools, which have made the messages easier to spread and raised the stakes for platforms that fail to remove them swiftly. But the harm in those cases clearly stems from the hateful rhetoric itself, not the AI tools used to spread it. And when we focus solely on AI-generated content, as we have in the tracker, what we see is something significantly less dangerous.
This face-swap video is a perfect example. In broad strokes, it’s a classic case of misinformation: putting politically damaging words in the mouths of prominent politicians who didn’t actually say them. In this case, the video shows Rahul Gandhi, the leader of India’s main opposition party, criticizing a member of his coalition, making the whole political alliance seem like a shambolic mess. (An American comparison might be Joe Biden complaining about Alexandria Ocasio-Cortez.)
But while the video would be politically damaging if it actually deceived voters, deception doesn’t seem to be the point. The original, unmanipulated footage comes from a prominent political speech that most Indian viewers would recognize, hardly the choice you’d make if you wanted to concoct a convincing fake. The effect is closer to trolling than outright deception, like a Saturday Night Live sketch or a grotesque political cartoon. Sure, it’s fake, but the fakery is the point.
Parodies of this kind are all over Facebook these days, and while they’re alarming in emergencies, it’s hard to say how platforms should respond to them during elections. For better or worse, a lot of democratic discourse consists of this kind of petty name-calling and baiting. That’s part of why platform policies at Meta and YouTube are careful to only prohibit “deceptive” content, rather than banning AI manipulations entirely. If Modi’s supporters want to watch him take the stage at a Lil Yachty concert, why shouldn’t they be able to?
To be clear, this isn’t an example of successful platform moderation. Even where content shouldn’t be removed, platforms should still be labeling it as AI-generated, as their own policies recommend. Most of the examples in the tracker aren’t labeled, despite significant public attention and days of lead time, a clear sign that platforms still haven’t figured out how to recognize AI-generated content. The good news, then, is that their failure to keep it in check may be causing less damage than originally feared.
Source: Rest of World