Pakistan’s use of fake videos, doctored photos, and AI deepfakes targets India’s military credibility, from Rafale jets to naval tests, revealing a deliberate doctrine of information warfare.
New Delhi: In today’s India–Pakistan rivalry, one pattern stands out clearly: whenever Pakistan cannot match India on the battlefield, it tries to fight on the information front instead.

Fake videos, edited photos, false claims and now AI-generated deepfakes are used again and again to attack India’s image and military credibility.
What might look like random fake clips online is actually part of a steady line of behaviour. The same methods used earlier against India’s Rafale fighter jets are now being used to boost Pakistan’s naval image and to undercut India’s armed forces.
Put simply, Pakistan’s use of doctored media is no accident. It has become a doctrine: a way of conducting information warfare that spans the air and sea domains.
The Rafale Smear Campaign: The Early Phase
The clearest early example of this doctrine appeared when India started inducting the Rafale fighter jets.
As Rafales joined the Indian Air Force and improved its combat power, a wave of false stories and misleading visuals emerged from Pakistan-linked sources.
Videos were shared claiming Rafales had crashed, suffered serious technical problems, or been easily outclassed by Pakistani aircraft.
On closer inspection, many of these clips turned out to be old footage from other countries, video-game scenes, or heavily edited material with fake captions.
Indian fact-checkers and defence reporters repeatedly showed that these claims were wrong. Yet the campaign continued. The goal was not to present real evidence; it was to plant doubt in the minds of ordinary people and to tarnish the reputation of India’s new fighter.
Experts in strategic studies pointed to this as an early sign of a deliberate information warfare strategy: when you cannot stop the Rafale in the air, you try to damage it in people’s minds.
False PoW Claims and Faked Strike Footage
Over time, the same ecosystem pushed other kinds of disinformation. During periods of tension, Pakistan-linked networks circulated videos alleging that Indian pilots had been captured as prisoners of war, even when no such capture had taken place. Other clips claimed that Indian bases had been destroyed or that successful Pakistani airstrikes had caused massive losses.
Investigations showed that many of these videos were built from unrelated material—old war footage, foreign training clips, or simulation graphics. Edits were used to change context, add explosions, or overlay fake commentary.
Think-tanks and independent analysts began to note a pattern: every time a crisis occurred, Pakistan’s response included not just military moves, but a wave of doctored visuals aimed at shaping public opinion. When real combat results were limited or unclear, fabricated “proof” was created to fill the gap.
From Simple Edits to AI Deepfakes
In the last few years, this propaganda machine has become more advanced thanks to artificial intelligence. Simple cuts and pasted clips have given way to deepfakes: video and audio generated or altered by AI tools to make people appear to say or do things they never actually did.
During and after Operation Sindoor, a series of deepfakes targeted Indian military leaders. Some videos showed senior officers “admitting” to losses.
Others portrayed them criticising Indian operations or questioning drills and deployments. None of these statements were real.
Indian fact-checking agencies such as PIB Fact Check, BOOM, Newschecker and Vishvas News documented a sharp surge in this sort of synthetic content.
Many of the earliest shares were traced back to Pakistan-linked social media networks. Analysts concluded that Pakistan’s disinformation strategy now leans heavily on AI-generated media.
The tools have changed from cut-and-paste editing to deep learning, but the basic intention remains the same: use false visuals and audio to attack India’s military reputation.
The Doctrine Reaches the Sea: Naval Hype and Deepfakes
The same propaganda ecosystem that once focused on Rafale has now turned to the sea.
When the Pakistan Navy tested its P-282 “SMASH” missile, fan pages and propaganda handles immediately claimed it was a “hypersonic”, “800 km-range” “carrier-killer”. None of these phrases appeared in Pakistan’s official statement, and no credible defence database backed them up.
During Operation Sindoor, when Pakistan’s actual naval movements were limited and its ships mostly stayed close to Karachi, these online networks tried to compensate by pushing exaggerated or fake content.
Imaginary strikes, false claims of Indian ship losses, and misleading graphics were circulated to create a sense of Pakistani naval success that did not exist in reality.
In this way, the doctrine shifted smoothly from air warfare propaganda to naval propaganda, using the same techniques: doctored visuals, invented achievements and now AI-enhanced tricks.
A Consistent Information Warfare Strategy
When all these episodes are viewed together, the pattern is hard to miss. Pakistan’s information warfare doctrine works on a few simple rules: if real capability is limited, manufacture the image of capability; if there is no proof, create fake proof; and if India achieves something, counter it not with facts but with noise.
Whether the topic is Rafale fighters, ground operations, or naval missile tests, the method is similar.
Pakistan uses manipulated media to try to pull the narrative in its favour.
This is not random trolling. It is a strategy.
Why This Doctrine Matters
Some people might see these fake videos and AI clips as just online mischief. However, in reality, they carry serious risks. They can mislead citizens, distort news coverage, and force governments to waste time responding to fabrications. In a crisis, a convincing deepfake can increase tensions, trigger public anger, or complicate diplomatic efforts.
For India, this means any future conflict will likely come with a parallel flood of digital disinformation, much of it polished and AI-driven. The fight for truth will run alongside any real military engagement.
The Real Takeaway
From the Rafale smear campaign to falsified strike footage and now naval hype and deepfakes, Pakistan has shown a clear pattern. Its information warfare is not a set of isolated incidents; it is a doctrine.
That doctrine now extends across domains: from air power to naval power, from false crash videos to fake admirals and generals on screen, and from edited news clips to fully synthetic AI media.
Pakistan’s loudest “new weapons” are not always missiles or ships.
Increasingly, they are manufactured images, doctored audio and AI-crafted stories aimed not at winning battles on the ground or at sea, but at winning battles in the minds of those watching.


