How AI Is Shaping the South China Sea Narrative — And Why It Matters to the Philippines

Artificial intelligence is no longer just a technological buzzword — it has become a force in shaping public perception of geopolitical disputes, including the ongoing tensions in the South China Sea. In the Philippines, emerging evidence shows that AI-generated content and algorithm-driven propaganda are increasingly intertwined with narratives about the maritime dispute with China, complicating both public understanding and policy responses.


AI-Driven Disinformation in the South China Sea Dispute

The territorial dispute between the Philippines and China remains a focal point of regional tension, involving overlapping claims to rich fisheries, strategic shipping lanes, and potential energy reserves. Beyond diplomatic negotiations and maritime patrols, artificial intelligence is now amplifying narratives that distort reality and fuel public confusion.

AI tools have made it easier to create highly convincing deepfake videos, manipulated images, and tailored propaganda, which can mimic public figures, synthesize fabricated events, and spread misleading accounts of military actions. For example, a widely shared deepfake falsely depicted Philippine President Ferdinand Marcos Jr. ordering military action against China. While government authorities quickly debunked the video, the incident sparked widespread panic — illustrating how such content can sway public opinion and escalate tensions.

Platforms including YouTube and Facebook have hosted channels that publish AI-amplified narratives, such as exaggerated accounts of U.S. military involvement in disputed waters or provocative military scenarios that never happened. Although some of these channels carry disclaimers labeling their content "entertainment," research found that many users accepted the narratives as factual — often reinforcing strong anti-China sentiment or calls for escalation among viewers.


Propaganda, Algorithms, and Public Perception

Beyond individual videos, AI also underpins coordinated networks of fake accounts and automated profiles that amplify specific viewpoints. Reports by analysts such as those at the Digital Forensic Research Lab and the Australian Strategic Policy Institute note that such AI-generated networks frequently push narratives that align with Chinese geopolitical positions — even if direct state sponsorship has not been conclusively proven.

The financial incentive is also notable: some disinformation networks use AI to generate content that earns revenue through ads and engagement, attracting millions of followers while blurring the line between profit-driven misinformation and geopolitically loaded narrative shaping.


Challenges for the Philippines

A study by the Social Weather Stations found that over half of Filipinos struggle to identify fake news, and nearly nine in ten report difficulty navigating digital information — a vulnerability that AI-driven propaganda can exploit to deepen societal divisions or manufacture consent for policies that may not align with national interests.

To counter these threats, the Philippine Congress is considering legislation aimed at regulating AI’s role in media manipulation, particularly in the lead-up to crucial elections. Such legislation seeks to establish legal consequences for the creation and dissemination of harmful deepfakes and other manipulated content.


Tech Platforms and Fact-Checking Dynamics

At the same time, major social media platforms are scaling back dedicated fact-checking efforts in the region. Meta has discontinued third-party fact-checking in some areas, and X (formerly Twitter) has shifted toward community-driven annotation tools that analysts say lack consistency. These changes remove a key safeguard against misinformation and make it easier for AI-generated narratives to spread without immediate challenge.


The Path Ahead: Regulation, Literacy, and Cooperation

Addressing AI-driven misinformation in the South China Sea context will require more than reactive debunking. Experts recommend a multi-pronged approach:

  • Stronger regulatory frameworks to hold malicious actors accountable
  • Investment in AI detection and transparency tools from technology providers
  • Media literacy initiatives to equip the public with critical evaluation skills
  • Regional and international cooperation to counter cross-border digital influence operations

Only through coordinated policy, technological standards, and public awareness can the influence of AI-amplified propaganda be mitigated, preserving the integrity of information surrounding one of Asia’s most consequential geopolitical disputes.

Read the full article here: FULCRUM