
“We hadn’t even identified Komi’s body. However, the AI video, stitched together using the selfie Komi had sent us aboard the fateful flight, went viral,” Bhatt told Nikkei Asia. Within days of the crash, manipulated visuals based on the selfie had flooded WhatsApp chats and hijacked the family’s collective mourning.
“They were getting forwarded like they were facts,” Bhatt said.
One fake video showed the family laughing before takeoff, an apparent attempt to evoke empathy and drive shares.
“Many trusted the hyperrealistic nature of the visuals more than actual news reports. It was AI’s word against ours,” he said, calling the experience “a form of digital exploitation,” one where AI decides how victims are to be remembered — or misremembered.
Boom, an independent journalism and fact-checking platform, flagged a barrage of visuals as AI-generated, many appearing within three hours of the crash. One viral image showed the aircraft ablaze above a residential building. Another depicted wreckage in front of a sign misspelling the name of Ahmedabad Airport, from which the crashed plane had just taken off. The Air India logo, however, remained conspicuously unscathed.
“Tragedies create mass hysteria, and influencers know that any content posted around it guarantees social media engagement,” said Archis Chowdhury, senior correspondent at Boom. Platforms such as Instagram and X, in turn, reward people who post such content with reach.
The Air India crash is not the first tragedy in India to unleash a flood of fake, AI-generated content: Similar images and videos proliferated in the wake of a terrorist attack in the Jammu and Kashmir region in late April. Chowdhury pointed to chilling, AI-enhanced versions of an image of a female survivor sitting next to the body of her slain husband, recast against a massive pool of red liquid in one version and a war zone in another.
“With the right prompts, AI lets you create content that aligns with any tragedy in seconds,” Chowdhury added.
Real images are not even necessary. Saurabh Shukla, founder and editor-in-chief of NewsMobile, an independent news organization and fact-checker operating in the Indian and Asia-Pacific markets, said that most of the AI-generated pictures and videos his team debunked after the Air India crash were created from scratch. National tragedies like the crash provide ideal AI fodder, Shukla said.
“In times of crisis, there’s an emotional vacuum, and AI visuals fill that void quickly,” he said. “Because these are created to complement the narrative of the tragedy, they tend to go viral instantly. Unlike recycled content that could raise suspicion, AI visuals appear compelling, especially in the crucial first hours when factual data is sparse.”
The toolkit to deal with such content is at best evolving and at worst essentially nonexistent. India’s legal framework, for one, has not kept up with developments.
“India lacks a dedicated statute for AI or synthetic media,” said Apar Gupta, founder and director of the Internet Freedom Foundation (IFF), an Indian nonprofit organization focused on defending digital rights. “We’re still leaning on the IT Act 2000, the 2021 Intermediary Rules and a scattering of ministry advisories, none of which were drafted with text-to-image diffusion systems in mind.”
After the crash, AI renderings of the cockpit were all over WhatsApp hours before the Directorate General of Civil Aviation had issued its first statement. “Yet, neither platforms nor the state had any clear, binding duties to label, slow or remove them. That’s a structural failure, not a glitch,” Gupta said.
In the battle against AI-created disinformation, traditional tools such as reverse image searches and metadata analysis are less effective. “AI-generated content often has no prior digital footprint. Today, we use advanced deepfake detectors and frame-by-frame video analysis,” Shukla said.
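Why does metadata analysis fall short? The minimal Python sketch below, written with the Pillow imaging library, illustrates the problem. It is an illustration only, not NewsMobile’s actual tooling, and the input file name is hypothetical: it checks an image for camera EXIF tags, which AI-generated files typically lack. But screenshots and images re-encoded by platforms such as WhatsApp also lack them, so an empty result proves little on its own.

```python
# A minimal sketch, not a fact-checker's real pipeline: look for camera EXIF
# metadata in an image file. AI-generated files usually carry none, but
# neither do screenshots or photos re-encoded by messaging apps, so an
# empty result is a weak signal, never proof.
from PIL import Image, ExifTags  # requires Pillow: pip install Pillow

CAMERA_TAGS = {"Make", "Model", "Software", "DateTime"}  # common IFD0 tags

def camera_exif(path: str) -> dict:
    """Return camera-related EXIF tags found in the file, keyed by name."""
    exif = Image.open(path).getexif()
    return {
        ExifTags.TAGS.get(tag_id, str(tag_id)): value
        for tag_id, value in exif.items()
        if ExifTags.TAGS.get(tag_id) in CAMERA_TAGS
    }

if __name__ == "__main__":
    tags = camera_exif("suspect_image.jpg")  # hypothetical input file
    if tags:
        print("Camera EXIF present:", tags)
    else:
        print("No camera EXIF found; metadata analysis is inconclusive here.")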
As post-tragedy AI content becomes the norm, the technology is also making its way into more legitimate corners of the information ecosystem, with several Indian news outlets turning to it for the production of YouTube videos, for example.
“AI is already mainstream in Indian newsrooms, strengthening article summaries and visual presentations,” Shukla said.
Indians’ broad comfort with AI is reflected in the Reuters Institute’s Digital News Report 2025, which found that 44% of Indian respondents were comfortable with AI-generated news produced with some human oversight, the highest share among the 48 countries surveyed. Such oversight is critical, given that chatbots are prone to “hallucinations,” presenting false, misleading or nonsensical information as fact.
In addition, almost 18% of respondents said they use chatbots such as ChatGPT and Google Gemini to access news weekly.
This growing comfort with AI does not necessarily mean that trust in chatbots extends to visuals, however.
“I don’t think there is a direct correlation,” Chowdhury said. “But an MIT Media Lab study found that extensive use of ChatGPT could dull critical thinking. This makes users more susceptible to believing any content they see, even visuals.”
The spread of AI into other areas of daily life, such as policing and aviation, could bolster trust in such systems despite concerns about inaccuracies.
“With Indians getting comfortable with AI tools, there’s a growing tendency to assume that anything produced with AI must be credible. This blind trust can be dangerous,” Shukla said.
This article first appeared on Nikkei Asia (asia.nikkei.com).