Artificial intelligence has become the defining arena of US-China great-power competition — a technological rivalry with no precedent in speed, scope, or strategic consequence, and one that threatens to compress the time available to avert miscalculation.
Throughout history, the technologies that define an era have also defined its conflicts. The steam engine and steel production shaped the imperial rivalries of the 19th century. Nuclear fission divided the world into superpowers during the Cold War. Today, artificial intelligence has emerged as the technology most likely to determine the distribution of power in the 21st century — and the United States and China are locked in a competition for AI supremacy that carries consequences far beyond Silicon Valley or Shenzhen.
The Thucydides Trap describes the structural tension that arises when a rising power threatens to displace a ruling one. For most of the post-Cold War era, this dynamic between the US and China played out primarily in economic output, military hardware, and diplomatic influence. But the AI revolution has introduced a new variable — one that amplifies every existing dimension of the rivalry while adding dangers that are genuinely without precedent. Whoever leads in AI will possess advantages in economic productivity, military capability, intelligence gathering, and technological innovation that could prove decisive in the broader contest for global primacy.
What makes the AI competition uniquely dangerous is its dual-use nature. Unlike nuclear weapons, which were developed explicitly for military purposes and then subjected to arms control regimes, artificial intelligence is simultaneously a commercial technology, a scientific research tool, and a military capability. The same large language models that power consumer chatbots can be adapted for intelligence analysis, propaganda generation, or cyber warfare. The same computer vision systems that enable autonomous vehicles can guide autonomous weapons. This blurring of civilian and military applications makes the AI arms race far more difficult to monitor, regulate, or control than any previous technological competition between great powers.
Russian President Vladimir Putin captured the stakes succinctly in 2017 when he declared that "whoever becomes the leader in this sphere will become the ruler of the world." While the statement was characteristically blunt, it reflected a strategic assessment shared by policymakers in both Washington and Beijing: leadership in AI is not merely desirable but existential. It is this perception — that falling behind in AI could mean permanent strategic subordination — that gives the AI arms race its most dangerous quality. When both sides believe they cannot afford to lose, the logic of competition becomes self-reinforcing, and the space for cooperation narrows dramatically.
"AI is the rare technology that simultaneously transforms economic productivity, military power, and intelligence capability. The nation that leads in AI will hold compounding advantages across every dimension of great-power competition."
China's pursuit of AI dominance is not an accident of market forces. It is the product of deliberate, centrally coordinated state policy on a scale that has no parallel in the democratic world. The 2017 "New Generation Artificial Intelligence Development Plan" set an explicit goal: China would become the world's leading AI power by 2030, with a domestic AI industry worth over $150 billion. This plan was not an aspiration; it was a directive backed by massive state investment, regulatory support, and the full weight of the Chinese Communist Party's institutional machinery.
The roots of this ambition trace to the broader "Made in China 2025" initiative, which identified artificial intelligence, along with robotics, aerospace, and advanced materials, as strategic industries in which China must achieve self-sufficiency and global leadership. For Beijing, technological dependence on the West is not merely an economic inconvenience — it is a strategic vulnerability that could be exploited in a crisis. The lesson of the US sanctions against Huawei and ZTE was not lost on Chinese policymakers: reliance on American technology is a form of geopolitical leverage that Washington has shown it is willing to use. AI self-sufficiency is therefore understood in Beijing as a national security imperative, not merely an industrial policy goal.
China brings formidable assets to this competition. Its population of 1.4 billion generates data at a scale that no other nation can match, and data is the raw material on which modern AI systems are trained. Chinese technology companies — Baidu, Alibaba, Tencent, ByteDance, and a growing ecosystem of AI-focused startups — have invested billions in AI research and development. The Chinese government has built massive AI research parks, funded national AI laboratories, and integrated AI development into its military modernization program. Chinese universities now produce more AI research papers than their American counterparts, and China leads the world in certain AI application areas, including facial recognition, natural language processing for Mandarin, and autonomous systems for logistics.
Perhaps most significantly, the Chinese system possesses a structural advantage in the deployment of AI for surveillance and social control. The integration of AI-powered facial recognition, natural language processing, and behavioral analysis into China's domestic security apparatus — most controversially in the surveillance of Uyghur populations in Xinjiang — has given Chinese companies and researchers unparalleled real-world testing grounds for technologies that have both civilian and military applications. This willingness to deploy AI in ways that would be politically or legally impossible in democratic societies gives China a developmental advantage in specific AI domains, even as it raises profound ethical concerns.
The Chinese military has also made AI a centerpiece of its modernization strategy. The PLA's concept of "intelligentized warfare" envisions a future battlefield in which AI-enabled systems handle target identification, logistics optimization, electronic warfare, and decision support at speeds that exceed human cognitive capacity. China has invested in autonomous drones, AI-powered cyber warfare tools, and machine learning systems for satellite imagery analysis. The PLA's Strategic Support Force, established in 2015, consolidates space, cyber, and electronic warfare capabilities under a single command — a structure explicitly designed to leverage AI across multiple domains of conflict.
"China's AI strategy is not a market phenomenon. It is an act of state will — a centrally directed campaign to ensure that the defining technology of the 21st century serves Beijing's strategic interests."
Despite China's rapid advances, the United States retains significant advantages in the AI competition — advantages that are structural rather than merely quantitative. Understanding the nature of these advantages, and their vulnerabilities, is essential to assessing the trajectory of the AI arms race and its implications for the broader Thucydides Trap dynamic.
The most fundamental American advantage is the innovation ecosystem centered in Silicon Valley and extending through research universities, venture capital networks, and corporate R&D laboratories across the country. The foundational breakthroughs in modern AI — the transformer architecture that powers large language models, the reinforcement learning techniques behind systems like AlphaGo, the diffusion models that generate images and video — have overwhelmingly originated in American companies and universities. OpenAI, Google DeepMind, Anthropic, Meta AI, and Microsoft Research represent a concentration of AI talent and capability that no other nation possesses. This advantage is not merely historical; it reflects deep structural factors including academic freedom, access to capital, a culture of risk-taking, and immigration policies that have historically attracted the world's best researchers.
Talent is perhaps the single most critical input in the AI race, and here the United States benefits from a dynamic that no amount of state investment can easily replicate. The world's leading AI researchers — disproportionately trained at American and British universities — have overwhelmingly chosen to work in the United States, drawn by higher compensation, better research infrastructure, and the intellectual freedom of the American academic and corporate environment. China has made enormous investments in AI education and has produced a growing number of world-class researchers, but the talent flow remains asymmetric. A significant fraction of China's most accomplished AI scientists have been educated in the US, and many have chosen to remain, though tightening immigration policies and geopolitical tensions are beginning to alter this dynamic in ways that could prove costly for America.
The third pillar of American advantage is compute infrastructure — the specialized hardware required to train and run advanced AI systems. Training a frontier AI model requires tens of thousands of advanced GPU chips running for months at enormous expense. The United States, through NVIDIA, AMD, and the broader semiconductor ecosystem, controls the design and production of the most advanced AI chips in the world. NVIDIA's H100 and successor chips have become strategic commodities, and the US government has leveraged this dominance through export controls designed to limit China's access to cutting-edge compute. This hardware advantage, combined with access to massive cloud computing infrastructure operated by Amazon, Microsoft, and Google, gives the United States a lead in raw computational capability that China is working urgently to close but has not yet overcome.
The US government has also moved aggressively to consolidate its AI advantage through policy. The CHIPS and Science Act, signed into law in 2022, directed over $52 billion toward domestic semiconductor manufacturing and research. Executive orders on AI safety and security have established frameworks for managing the risks of advanced AI while maintaining American competitiveness. The Department of Defense has created dedicated AI offices, and the intelligence community has made AI integration a top priority. These efforts represent a recognition — belated, in the view of many analysts — that AI competition is not merely a commercial matter but a core dimension of national security.
If AI is the arena of competition, semiconductors are the ammunition. The global semiconductor supply chain — extraordinarily concentrated, technically demanding, and strategically vital — has become the single most important chokepoint in the US-China technology rivalry. Understanding this chokepoint is essential to understanding both the AI arms race and its connection to the broader Thucydides Trap, particularly the Taiwan flashpoint.
At the center of the semiconductor story is Taiwan Semiconductor Manufacturing Company (TSMC), which fabricates approximately 90% of the world's most advanced chips. These are the chips that train and run frontier AI models, power advanced military systems, and enable the data center infrastructure on which both economies depend. TSMC's dominance is the product of decades of specialized investment and engineering expertise that cannot be replicated quickly or cheaply. No other company — not Samsung, not Intel, not China's SMIC — can match TSMC's ability to manufacture chips at the 3-nanometer and 2-nanometer process nodes that define the cutting edge of AI hardware.
The strategic implications are profound. TSMC is located in Taiwan — the island that sits at the very center of the US-China rivalry. A Chinese takeover of Taiwan would give Beijing control over the world's most advanced chip fabrication capacity, fundamentally altering the technological balance of power. Conversely, a conflict that damaged or destroyed TSMC's fabrication facilities would inflict a catastrophic shock on the global technology industry, one from which recovery would take years and cost trillions of dollars. This is the "silicon shield" thesis: Taiwan's semiconductor industry makes the island so economically critical that no rational actor would risk its destruction. But the Thucydides Trap teaches us that rationality is often the first casualty of great-power rivalry.
The United States has used its leverage over the semiconductor supply chain as a primary tool in its technology competition with China. The October 2022 export controls imposed by the Bureau of Industry and Security were among the most consequential economic warfare measures in decades. These controls restricted the sale of advanced AI chips to China, limited Chinese access to semiconductor manufacturing equipment, and even prohibited American citizens from working in Chinese chip fabrication facilities. The measures were explicitly designed to freeze China's AI capabilities at their current level while the US pulled ahead. Dutch and Japanese governments, whose companies — ASML and Tokyo Electron — produce essential lithography and fabrication equipment, were pressured to impose parallel restrictions.
China's response has been a massive mobilization of resources toward semiconductor self-sufficiency. Beijing has poured tens of billions of dollars into domestic chip development through its "Big Fund" and successor investment vehicles. Huawei's HiSilicon division has developed the Ascend series of AI chips, and SMIC has made progress toward more advanced process nodes, though it remains generations behind TSMC. China has also stockpiled AI chips ahead of export control deadlines and developed workarounds to acquire restricted technology through intermediaries. The semiconductor competition has thus become an arms race within the arms race — a contest over the tools needed to build the tools of AI supremacy.
"The semiconductor supply chain is the most strategically consequential chokepoint in the global economy. Whoever controls the production of advanced chips holds leverage over the pace of AI development itself — and both Washington and Beijing understand this with perfect clarity."
The military applications of artificial intelligence represent the most immediately dangerous dimension of the AI arms race. While the economic and commercial aspects of AI competition are consequential, it is the integration of AI into weapons systems, surveillance networks, cyber warfare tools, and command-and-control structures that most directly connects the AI arms race to the risk of great-power conflict identified by the Thucydides Trap.
Autonomous weapons systems are the most visible and controversial application. Both the United States and China are developing AI-enabled drones, unmanned submarines, and robotic ground vehicles capable of operating with varying degrees of autonomy. The US military's Replicator initiative aims to field thousands of small, autonomous drones across multiple domains. China has become the world's leading exporter of military drones and has demonstrated swarm technology capable of coordinating hundreds of unmanned aerial vehicles in synchronized operations. The progression toward greater autonomy in weapons systems raises fundamental questions: At what point does a machine make a lethal decision without meaningful human oversight? What happens when AI-controlled systems on both sides interact in ways that neither side fully understands or can predict?
AI-powered surveillance and intelligence represent another critical domain. Machine learning systems can analyze satellite imagery, intercept communications, process open-source intelligence, and identify patterns in vast datasets at speeds that dwarf human analytical capability. Both nations are deploying AI to monitor the other's military movements, detect nuclear submarine deployments, track missile launches, and assess strategic intentions. The advantage of faster and more comprehensive intelligence is obvious. But there is a corresponding danger: AI systems trained on incomplete or biased data may produce assessments that are confident but wrong, and the speed at which those assessments arrive leaves less time for human judgment to intervene.
Cyber warfare is being transformed by AI in ways that are difficult to observe but potentially decisive. AI can be used to discover software vulnerabilities, generate sophisticated phishing attacks, automate the exploitation of network weaknesses, and adapt malware in real time to evade defenses. Conversely, AI-powered defensive systems can detect intrusions, patch vulnerabilities, and respond to attacks faster than human operators. The result is an escalating cycle of AI-on-AI competition in cyberspace, where the offense-defense balance shifts unpredictably and the line between espionage and act of war is perpetually blurred. A sufficiently sophisticated AI-enabled cyber attack on critical infrastructure — power grids, financial systems, military communications — could create the conditions for a conventional military response, with escalation dynamics that neither side may be able to control.
Decision-making speed may be the most consequential and least appreciated dimension of military AI. Throughout the nuclear age, strategists have grappled with the problem of compressed decision timelines. The introduction of ICBMs reduced warning time from hours to minutes. AI threatens to compress decision cycles even further. If AI systems are used to detect, assess, and recommend responses to perceived threats, the time available for human deliberation shrinks toward zero. In a crisis between nuclear-armed great powers, the pressure to delegate authority to faster-acting AI systems could create a dynamic in which machines effectively make decisions that determine the fate of nations — not because anyone intended this outcome, but because the speed of AI-enabled warfare left no alternative.
"The central danger of military AI is not that machines will decide to start a war. It is that the speed of AI-enabled conflict will compress decision timelines to the point where human judgment is effectively excluded — and errors become irreversible before anyone can intervene."
The AI arms race between the United States and China finds its most instructive historical parallel in the Anglo-German naval rivalry of the early 20th century — a competition that contributed directly to the outbreak of World War I and remains one of the clearest examples of the Thucydides Trap in action.
In 1906, Britain launched HMS Dreadnought, a battleship so technologically advanced that it rendered every existing warship in the world obsolete overnight. The irony was cruel: Britain, which possessed the world's largest navy, had just invalidated its own fleet. Germany, which had been building a conventional navy to challenge British supremacy, suddenly found itself on nearly equal footing. The result was a frantic arms race as both nations poured resources into building dreadnought-class battleships, each new vessel raising the stakes and deepening the mutual suspicion that made war increasingly likely.
The parallels to the AI competition are striking and instructive. Like the dreadnought, advanced AI is a potentially transformative technology that threatens to reset the competitive balance between an established power and a rising one. Just as Germany's naval buildup was driven by a combination of national prestige, strategic ambition, and industrial capability, China's AI investment reflects its growing power and its determination to achieve technological parity with or superiority over the United States. Just as Britain viewed Germany's fleet as an existential threat to its maritime supremacy, the United States views Chinese AI advancement as a challenge to its technological and military preeminence.
The arms race dynamic itself shares common features. In both cases, each side's investments provoked reciprocal investments by the other, creating an escalatory spiral that consumed enormous resources and heightened tensions without making either side more secure. The British feared that Germany's growing fleet signaled aggressive intent; the Germans insisted their buildup was purely defensive. Today, the United States frames its export controls and AI investment as defensive measures to maintain technological advantage, while China characterizes its AI development as a legitimate pursuit of national development goals. Each side interprets the other's actions through the lens of threat, reinforcing the cycle of competition.
The dreadnought parallel also illuminates a critical danger: the possibility that a technological arms race can foreclose diplomatic options and make war appear inevitable. By 1914, the Anglo-German naval rivalry had so poisoned bilateral relations that diplomatic resolution of the underlying geopolitical tensions became nearly impossible. The sunk costs of the arms race — the treasure spent, the industries mobilized, the public expectations raised — created political incentives to justify the investment through confrontation rather than accommodation. If the AI arms race follows a similar trajectory, the economic and political commitments made by both the US and China could similarly narrow the space for compromise and increase the probability of conflict.
But the parallel has limits that are equally important to recognize. The dreadnought race played out over approximately a decade before war came. The AI race is unfolding at a pace that compresses timelines dramatically. Moore's Law and its analogues in AI development mean that technological advantages are gained and lost in years, not decades. The decision space for statesmen is correspondingly compressed. And unlike battleships, which were visible, countable, and subject to arms limitation treaties, AI capabilities are inherently difficult to verify, measure, or constrain through traditional arms control mechanisms. The AI arms race is, in this sense, more dangerous than the dreadnought race — faster, harder to monitor, and more resistant to diplomatic management.
If the AI competition between the US and China carries the risks described above, the question becomes whether those risks can be mitigated through arms control, diplomatic frameworks, or cooperative agreements. The history of great-power technology competitions offers both cautionary tales and grounds for measured hope.
The most successful precedent is nuclear arms control. The Nuclear Non-Proliferation Treaty, the Strategic Arms Limitation Talks, and subsequent frameworks demonstrated that great-power rivals can negotiate constraints on destabilizing technologies when both sides recognize that uncontrolled competition poses unacceptable risks. The Cuban Missile Crisis served as the catalyzing event that convinced both the US and the Soviet Union that the alternative to arms control was mutual annihilation. No comparable catalyzing event has yet occurred in the AI domain — and it is far from clear that one could occur without catastrophic consequences.
The challenges of AI arms control are formidable. First, verification is extraordinarily difficult. Nuclear warheads can be counted; missile silos can be photographed from satellites; nuclear tests produce seismic signatures that can be detected worldwide. AI capabilities are embodied in software running on commercial hardware, often indistinguishable from civilian applications. How do you verify that an adversary is not developing autonomous weapons when the same neural networks power both commercial image recognition and military target identification? Second, the dual-use nature of AI means that restricting military AI development would require constraints on civilian AI research — a politically and economically unacceptable proposition for both nations. Third, the pace of AI development outstrips the pace of diplomacy. By the time a treaty is negotiated, the technology it addresses may already be obsolete.
Despite these obstacles, several frameworks for managing the AI competition have been proposed. Some analysts advocate for "rules of the road" rather than comprehensive arms control — agreements on specific high-risk applications such as AI control of nuclear weapons, AI-enabled cyber attacks on nuclear command and control systems, or fully autonomous lethal weapons. These targeted agreements could reduce the most catastrophic risks without requiring comprehensive verification or constraining commercial AI development. The US and China have engaged in preliminary discussions on AI safety, and both have expressed support for the principle that AI should not be given autonomous authority over nuclear launch decisions — though translating this principle into binding agreements remains a distant prospect.
International institutions could also play a role. The establishment of an international AI safety body, analogous to the International Atomic Energy Agency, has been proposed by multiple governments and AI researchers. Such a body could establish standards, conduct research on AI risks, and provide a forum for dialogue between competing powers. The challenge is that neither the US nor China has shown a willingness to submit its AI development to international oversight, and the structural incentives of the Thucydides Trap actively discourage such cooperation. When both sides fear that the other's gain is their loss, the logic of competition tends to override the logic of cooperation.
Unilateral restraint is another possibility, though one with significant limitations. Both the US and China have issued AI ethics guidelines and announced policies restricting certain AI applications. The US Department of Defense has adopted ethical principles for military AI, including requirements for human oversight of lethal decisions. China has published its own AI governance principles emphasizing "human-centered" values. But in the absence of mutual verification, unilateral restraint carries the risk that one side adheres to self-imposed limits while the other does not — a recipe for strategic disadvantage that no great power will accept for long.
"The fundamental paradox of AI arms control is that the technology most in need of regulation is also the technology most resistant to the verification mechanisms on which all previous arms control regimes have depended."
The Thucydides Trap has always been, at its core, a story about miscalculation. The Peloponnesian War began not because Athens and Sparta wanted to fight, but because the structural dynamics of their rivalry created conditions in which misperceptions, fear, and the logic of preemption drove both sides toward a war that served neither's interests. The AI arms race threatens to make this dynamic more dangerous than at any previous point in history, because artificial intelligence compresses the time available for human judgment precisely when the stakes are highest.
Consider a crisis scenario in the Taiwan Strait. Chinese AI-powered surveillance systems detect what they interpret as preparations for a US military intervention. American AI systems simultaneously flag Chinese military movements as preparations for an invasion. Both sides' AI systems generate threat assessments and recommend responses at machine speed. Decision-makers on both sides are presented with AI-generated analysis that is confident, data-rich, and urgent — but potentially based on incomplete information or flawed pattern recognition. The time available to verify, deliberate, and seek diplomatic off-ramps is measured in hours rather than days or weeks. In this environment, the risk that a false alarm, a misinterpreted exercise, or a minor incident spirals into a major confrontation is dramatically elevated.
This is not science fiction. The history of the Cold War is replete with incidents in which technical systems generated false alarms that nearly triggered nuclear war. In 1983, Soviet officer Stanislav Petrov correctly identified a satellite warning of incoming American missiles as a false alarm, preventing a retaliatory strike. In 1995, Russian President Boris Yeltsin activated the nuclear briefcase after radar detected a Norwegian research rocket that was briefly mistaken for a Trident missile. In both cases, human judgment — the willingness to question what the machines were reporting — averted catastrophe. But in an AI-accelerated decision environment, the time for such human intervention is precisely what is being eliminated.
The interaction of AI systems on opposing sides introduces an additional layer of unpredictability. When two AI systems, each optimizing for its own side's strategic objectives, interact in a competitive environment, the emergent behavior can be impossible to predict even by the designers of those systems. Game theory has long recognized that the interaction of individually rational strategies can produce collectively irrational outcomes — the Prisoner's Dilemma being the canonical example. AI systems operating in a strategic environment could produce flash-crisis dynamics analogous to the "flash crashes" that have occurred in AI-driven financial markets, where algorithmic interactions create sudden, extreme, and destabilizing events that no individual actor intended or anticipated.
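The Prisoner's Dilemma logic invoked above can be made concrete in a few lines of code. This is a minimal sketch with entirely illustrative payoff values (the numbers, move names, and `best_response` helper are assumptions for exposition, not a model of any actual strategic assessment): each side chooses between restraint and escalation, escalation is the dominant strategy for each side individually, and yet the resulting equilibrium leaves both sides worse off than mutual restraint would have.

```python
# Illustrative payoff matrix for a one-shot escalation dilemma.
# Values are hypothetical and chosen only to exhibit the structure:
# escalating dominates, but mutual escalation is collectively worse
# than mutual restraint.

RESTRAIN, ESCALATE = "restrain", "escalate"

# PAYOFF[(my_move, their_move)] -> my payoff (higher is better)
PAYOFF = {
    (RESTRAIN, RESTRAIN): 3,  # stable balance: both conserve resources
    (RESTRAIN, ESCALATE): 0,  # unilateral restraint: strategic disadvantage
    (ESCALATE, RESTRAIN): 5,  # unilateral edge: temporary supremacy
    (ESCALATE, ESCALATE): 1,  # arms race: costly, and neither side more secure
}

def best_response(their_move: str) -> str:
    """Return the move that maximizes my payoff given the other side's move."""
    return max((RESTRAIN, ESCALATE), key=lambda my: PAYOFF[(my, their_move)])

# Escalation is the best response regardless of what the other side does...
assert best_response(RESTRAIN) == ESCALATE
assert best_response(ESCALATE) == ESCALATE

# ...so two individually rational optimizers converge on mutual escalation,
# even though both would prefer the mutual-restraint outcome (3 > 1).
equilibrium = (ESCALATE, ESCALATE)
print(equilibrium, "payoff each:", PAYOFF[equilibrium])
```

The point of the sketch is structural, not predictive: when each side's optimizer conditions only on the other's possible moves, the individually dominant strategy produces a collectively irrational outcome — the same trap the pull quotes in this chapter describe in prose.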
The implications for the Thucydides Trap are sobering. The original trap described by Thucydides involved human emotions — fear, honor, and interest — driving states toward war. The AI-accelerated version of the trap adds a new dimension: the possibility that the speed and opacity of machine-driven decision processes will outpace the human capacity for judgment, empathy, and restraint that has historically provided the last line of defense against catastrophic miscalculation. If the Thucydides Trap is a diagnosis of the structural conditions that make great-power war possible, the AI arms race is the accelerant that could make those conditions lethal before statesmen have time to intervene.
The path forward requires recognizing that the AI competition between the US and China is not merely a technological contest but a strategic and existential challenge that demands the same level of diplomatic creativity and political courage that prevented nuclear war during the Cold War. The structural pressures of the Thucydides Trap are real, but they are not deterministic. What is deterministic is the consequence of complacency — the assumption that the AI arms race can be won without managing its risks, or that the speed of technological change will somehow slow to accommodate the pace of human wisdom. History suggests otherwise. The dreadnoughts were built, the alliances were locked in, and the crisis came. Whether the AI arms race follows the same trajectory depends on choices that are being made now, in Washington and Beijing, in the laboratories and in the corridors of power, by leaders who may not fully understand the technology they are racing to deploy.
"The Thucydides Trap has always been about the gap between the speed of structural change and the speed of political adaptation. Artificial intelligence threatens to widen that gap to the point where adaptation becomes impossible — and the trap snaps shut before anyone sees it coming."