ChatGPT Slip Reveals Alleged Chinese Smear Campaign On Japan PM

Published: Feb 26, 2026, 08:30 AM IST
ChatGPT misuse exposed

Synopsis

A covert influence operation allegedly linked to China was exposed after an operator treated ChatGPT like a diary, revealing plans to create and refine disinformation targeting global figures, including a prominent Japanese politician.

A covert influence operation allegedly linked to Chinese authorities has been exposed in an unexpected way: through the misuse of ChatGPT as a personal log or “diary.” According to reports, the campaign came to light after a user inadvertently revealed sensitive operational details while interacting with the AI chatbot, allowing investigators at OpenAI to detect and shut down the activity.

The uncovered operation was part of a broader attempt to shape global narratives and carry out targeted disinformation efforts. One of the key objectives reportedly included crafting and refining content aimed at discrediting prominent international figures, including a senior Japanese political leader. The user relied on ChatGPT to draft, edit, and polish messaging that could later be deployed across social media platforms as part of a coordinated propaganda push.

Secret Operation Exposed

What made the case particularly striking was the manner in which the operation was exposed. Instead of using highly secure or encrypted channels, the individual treated ChatGPT interactions as a running record of plans, effectively documenting intentions, strategies, and narratives within the system. This critical error enabled OpenAI’s monitoring mechanisms to identify patterns of misuse, ultimately leading to the account being banned and the campaign disrupted.

OpenAI’s findings highlight how artificial intelligence tools are increasingly being explored for both legitimate and malicious purposes. In this instance, the chatbot was used not to execute the campaign directly but to assist in content generation and refinement, demonstrating how AI can act as a force multiplier in influence operations. However, safeguards built into the system prevented the execution of explicitly harmful tasks, and the misuse was flagged before it could fully materialize.

The incident is part of a larger trend identified in OpenAI’s threat reports, which have documented various attempts by malicious actors to exploit AI technologies. These range from political propaganda and social media manipulation to scams and impersonation schemes. While many such operations remain limited in scale, their sophistication and frequency are steadily increasing, raising concerns about the evolving role of AI in global information warfare.

Experts say the episode underscores both the risks and the built-in accountability of AI systems. While tools like ChatGPT can be misused, they also leave digital traces that can expose wrongdoing when proper safeguards are in place. The accidental disclosure serves as a reminder that even covert operations can unravel through simple mistakes in an increasingly monitored digital ecosystem.
