The Global Contest for Control: The Hidden Power Struggle Behind AI Governance
In Paris, in Brussels, in Washington and Beijing, the question of how to govern artificial intelligence has become less a matter of ethics than of choreography. Summits convene and frameworks proliferate, but beneath the outward displays of alignment and urgency, something more structural is taking place. The debate over AI regulation is frequently framed in binary terms: innovation versus constraint, safety versus sovereignty, openness versus order. Yet these binaries obscure rather than illuminate. What truly defines AI governance today is not a shared concern for risk or opportunity, but the competition to shape the stage on which such concerns are decided. The deeper contest is one over access, influence, and institutional control.
“AI governance is unfolding not as a technocratic convergence, but as a differentiated struggle between political economies and their associated influence systems.”
Across the world’s three principal blocs, namely the United States, the European Union, and the People’s Republic of China, AI governance is unfolding not as a technocratic convergence, but as a differentiated struggle between political economies and their associated influence systems. In the United States, AI firms have embedded themselves in statecraft through formal lobbying and informal revolving-door networks. In the EU, rule-making procedures are marked by procedural transparency, but also by asymmetrical access, legal complexity, and elite capture. In China, the absence of procedural pluralism is not a sign of state autonomy but of deeply fused techno-political architectures where regulatory and commercial authority are co-constitutive. Understanding the future of AI governance thus requires less focus on ethical frameworks or regulatory models and more on how power flows, who has the proximity to shape it, and under what conditions those proximities are legitimised.
This claim is not merely rhetorical; it is grounded in empirical and conceptual work that deserves closer attention. A 2025 comparative study by Xiao Wang, Qi Liang, and Zhenguo Yang used structural topic modelling to analyse 139 national AI strategies, and revealed marked divergence in policy orientation. Chinese strategies clustered around “application,” “security,” and “development”; EU documents emphasised “ethics,” “trust,” and “rights”; US texts concentrated on “leadership,” “innovation,” and the “role of government.” These lexical variations mirror deeper institutional norms. China’s model embeds AI in state-led economic modernisation. The EU enshrines it within a precautionary legal framework. The US defers to market dynamism tempered by selective federal coordination. These are not just policy differences; they are competing visions of how influence should be operationalised, justified, and distributed.
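To make the method concrete, here is a minimal sketch of the kind of lexical analysis the study performed. The cited work used structural topic modelling (typically R’s stm package); scikit-learn’s LDA serves here as a rough stand-in, and the three-document corpus is an invented placeholder rather than the authors’ data.

```python
# A minimal sketch of topic modelling over national AI strategies.
# Assumption: scikit-learn's LDA stands in for the structural topic
# model used in the cited study; this tiny corpus is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "application security development industrial modernisation deployment",  # CN-style lexicon
    "ethics trust rights precaution fundamental protection oversight",       # EU-style lexicon
    "leadership innovation government competitiveness market coordination",  # US-style lexicon
]

vectorizer = CountVectorizer()
doc_term = vectorizer.fit_transform(corpus)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(doc_term)  # document-by-topic proportions

# Inspect the top terms per topic: each cluster's "strategic lexicon".
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"topic {k}: {', '.join(top)}")
```

On the real corpus of 139 strategy documents, it is the distribution of topic prevalence across countries, not this toy separation, that carries the comparative claim.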
“Influence is not always monolithic; some firms seek public-interest alignment rather than its circumvention.”
In the United States, lobbying is institutionalised not as an exception to democracy but as its procedural extension. According to data from OpenSecrets, AI-related lobbying expenditures rose from $75 million in 2022 to over $108 million in 2024, with Microsoft, Amazon, OpenAI, and Palantir among the most active players. In parallel, the introduction of H.R. 8923 (a House bill proposing a decade-long moratorium on state-level AI regulations) reflected a growing ambition to consolidate regulatory authority at the federal level, pre-empting more stringent local standards. Though nominally framed as a uniformity measure, the bill was drafted with extensive closed-door consultation with industry lawyers, as The Markup revealed in late 2024.
Palantir CEO Alex Karp has explicitly framed AI as “the defining weapon of the next era,” calling for the fusion of private innovation with state security infrastructure. Similar arguments have emerged from venture capitalists and think tanks aligned with the U.S. Department of Defense’s Joint Artificial Intelligence Center (JAIC), since absorbed into the Chief Digital and Artificial Intelligence Office (CDAO). This convergence is not incidental. It reflects the evolution of what political theorist Wendy Brown has termed “market-masquerading sovereignty”, where the state delegates public risk to private technologists while absorbing their political worldview.
However, not all private influence is extractive. The “Little Tech” coalition, led by Luther Lowe and endorsed by the Chamber of Progress, advocates for interoperable standards, stronger antitrust enforcement, and a level playing field for startups. In multiple submissions to the National Institute of Standards and Technology (NIST) and the Office of Science and Technology Policy (OSTP), these groups have warned that an overly concentrated AI ecosystem threatens not only innovation but democratic responsiveness. Their role complicates any simplistic narrative of “Big Tech capture,” reminding us that influence is not always monolithic, and that some firms seek public-interest alignment rather than its circumvention.
“Rights-based rhetoric, they argue, operates as a legitimising grammar while shielding elite consensus-making from meaningful democratic oversight.”
In the European Union, governance is mediated through legalism, not lobbying per se, but the results are not necessarily more equitable. The EU’s flagship AI Act (Regulation 2024/1689) introduced a pioneering risk-based framework, but its final provisions were significantly weakened after extensive industry intervention. Research by Corporate Europe Observatory showed that over 400 lobbying meetings took place between tech firms and EU institutions during the Act’s negotiation phase. Specific changes, such as the narrowed definition of general-purpose AI, reduced liability for foundation model developers, and watered-down bans on biometric surveillance, reflected a coordinated effort to minimise enforcement burdens.
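For orientation, the sketch below summarises the Act’s risk-tier logic as it is commonly described; the obligations listed are simplified glosses for illustration, not the regulation’s wording.

```python
# Coarse summary of the AI Act's risk tiers (Regulation (EU) 2024/1689).
# The obligations listed are simplified glosses, not legal text.
RISK_TIERS: dict[str, str] = {
    "unacceptable": "practice prohibited (e.g. social scoring by public bodies)",
    "high": "conformity assessment, risk management, human oversight, logging",
    "limited": "transparency duties (e.g. disclosing that a user faces an AI system)",
    "minimal": "no new obligations beyond existing law",
}

def obligations_for(tier: str) -> str:
    """Look up the simplified obligation set for a given risk tier."""
    return RISK_TIERS.get(tier, "unknown tier")

print(obligations_for("high"))
```

Much of the lobbying described above was a fight over where systems land in this hierarchy: narrowing a definition or raising a threshold moves whole product lines down a tier and out of the heaviest obligations.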
Legal scholars Anna Mei and Fabian Sag, in the Digital Law and Policy Review, have critiqued this process as “procedural pluralism without participatory depth.” Rights-based rhetoric, they argue, operates as a legitimising grammar while shielding elite consensus-making from meaningful democratic oversight. Consultations exist, but navigating them demands legal fluency, institutional capital, and full-time engagement capacity, advantages that disproportionately benefit large firms and well-funded trade associations. Here, access is not denied but tiered, stratified by capability rather than principle.
Yet this diagnosis must be nuanced. Civil society organisations have played a decisive role in shaping Europe’s regulatory landscape. The European Digital Rights network (EDRi), the Mozilla Foundation, and the Algorithmic Justice League all submitted amendments and mobilised public support against overreach in real-time biometric surveillance. The French CNIL and the German Federal Commissioner for Data Protection have pushed for robust interpretative guidance. Even within the European Parliament, divisions between the Internal Market Committee and the Civil Liberties Committee produced a final text that, though compromised, retains enforceable provisions on transparency, data governance, and human oversight.
It is also worth noting that the EU’s institutional fragmentation, while a challenge for coherence, can create pockets of democratic friction. Member states diverge: France and Italy have pushed for sovereign AI strategies, while countries like Germany and the Netherlands have insisted on interoperability and fundamental rights as non-negotiable. These tensions make the EU’s regulatory fabric messier, but they also prevent it from being entirely captured.
“What emerges is a model of co-regulatory entanglement, where the distinction between policy input and ideological enforcement is systematically blurred.”
China’s model, by contrast, offers neither fragmentation nor deliberation. It is defined not by regulatory dialogue but by techno-political integration. Yet influence exists, just not in liberal institutional forms. The Cyberspace Administration of China (CAC) exerts control through licensing, model registration, and content alignment protocols. To deploy a large language model commercially, developers must submit detailed documentation on data provenance, architecture, and risk mitigation strategies. Crucially, they must demonstrate ideological compliance, including the maintenance of keyword blacklists and alignment with “core socialist values.” This regime culminated in the 2024 release of “Xi Thought” LLMs, which embed party doctrine directly into model outputs and reinforce the pedagogical role of AI in shaping public discourse.
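To make the registration requirement tangible, here is a hypothetical sketch of the kinds of fields such a filing implies, together with a trivial keyword gate. Every field name and term below is invented for exposition; the CAC’s actual templates are not public in this form.

```python
# Hypothetical sketch of a model-registration filing and keyword gate,
# of the kind the CAC's documentation and content-alignment rules imply.
# All field names and terms are invented; none come from an official template.
registration_filing = {
    "model_name": "example-llm",                        # hypothetical
    "data_provenance": ["licensed-corpus", "filtered-web-crawl"],
    "architecture": {"type": "transformer", "parameters_billions": 7},
    "risk_mitigation": ["red-teaming", "output filtering"],
    "keyword_blacklist": ["placeholder-term-1", "placeholder-term-2"],
    "values_alignment_attestation": True,
}

def passes_keyword_gate(output_text: str, blacklist: list[str]) -> bool:
    """Return False if any blacklisted term appears in the model output."""
    lowered = output_text.lower()
    return not any(term.lower() in lowered for term in blacklist)
```

The structural point is that compliance is demonstrated up front, inside the filing, rather than contested afterwards through courts or consultation.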
But this is not mere autocracy; it is architecture. As the Stanford DigiChina project has shown, influence in China flows through multiple intermediaries: the China Academy of Information and Communications Technology (CAICT), the Beijing Academy of AI, and provincial tech zones such as those in Hangzhou and Shenzhen. These bodies act as both implementers and validators of central policy, piloting compliance schemes and feeding lessons back into national regulation. Firms like Alibaba DAMO and iFLYTEK have participated in standards-setting committees while simultaneously testing sandbox environments with local governments. What emerges is a model of co-regulatory entanglement, where the distinction between policy input and ideological enforcement is systematically blurred.
China’s AI governance is thus not void of influence; it is densely saturated with it. But unlike in the US or EU, this influence is neither pluralist nor contestable. There are no external accountability bodies, no adversarial press, and no meaningful civil society participation. As Rogier Creemers has noted, the Chinese regulatory state operates through a logic of “responsive authoritarianism”: it absorbs feedback from industry but channels it through vertically integrated structures that maintain political loyalty.
“When discretion becomes indistinguishable from unaccountable authority, governance becomes not adaptive but post-democratic.”
The material consequences of these divergent models are increasingly apparent. The IMF’s 2025 AI and Productivity in Europe report estimates that AI could contribute 0.3 percentage points to annual GDP growth across the eurozone over the next decade. The United States is expected to see a 0.5 to 1.5 percentage-point increase, depending on sectoral adoption rates and labour market reallocation. PwC’s longstanding projection that AI could boost China’s GDP by 26 percent by 2030 remains in circulation, though scholars at Brookings and the World Bank have questioned these figures, citing discrepancies between reported growth and satellite-based activity data.
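A back-of-the-envelope calculation, not drawn from any of the cited reports, helps translate those growth-rate figures into cumulative levels:

```python
# Illustrative arithmetic only: compounding an extra x percentage points
# of annual growth over a decade approximates the extra GDP level as
# (1 + x/100) ** years - 1 (accurate when baseline growth is small).
def cumulative_level_gain(extra_pp: float, years: int = 10) -> float:
    return (1 + extra_pp / 100) ** years - 1

for pp in (0.3, 0.5, 1.5):  # eurozone estimate; low and high ends of the US range
    print(f"+{pp}pp per year over 10 years -> GDP ~{cumulative_level_gain(pp):.1%} above baseline")
```

Compounded, 0.3 points a year becomes roughly a 3 percent higher GDP level after a decade, while the upper US estimate implies about 16 percent: modest-sounding differences in annual rates open large gaps in levels.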
More important than these projections is what they conceal: the distribution of AI’s benefits. Who captures the productivity surplus? Whose voices shape its deployment? Who is excluded from the agenda-setting table? The term “regulatory returns” is instructive here. Like financial returns, regulatory returns refer to the value extracted not through production but through strategic positioning within policymaking ecosystems. In the United States, this means preferential contracts and pre-emptive deregulatory carveouts. In the EU, it involves shaping definitions and liability thresholds. In China, it depends on proximity to ideological orthodoxy and integration into state-led developmental objectives.
Yet what unites all three systems is a pervasive absence: the democratic subject. In the US, public comment mechanisms exist but are routinely marginalised in closed-door policymaking. In the EU, participatory access is formal but functionally selective. In China, no such claim is entertained. The result is a governance landscape where the most consequential decisions (about automation, surveillance, education, and justice) are negotiated among institutional elites and industry consortia. As Daniel Drezner has argued, there are reasons to favour technocratic discretion in high-complexity environments. But when that discretion becomes indistinguishable from unaccountable authority, governance becomes not adaptive but post-democratic.
“To govern AI is not merely to mitigate its dangers or unlock its value. It is to adjudicate who has the right to speak, to shape, to dissent.”
Still, the path is not foreclosed. Democratic legitimacy in AI governance need not mean direct referenda on model weights. It can mean institutional innovation: transnational citizens’ assemblies on high-risk deployments, modelled on Ireland’s constitutional conventions or the Conference on the Future of Europe; public interest lobbying funds financed by a 0.1 percent levy on AI companies with revenues exceeding €500 million; an International Algorithmic Oversight Board, housed under the UN Human Rights Council, empowered to investigate systemic harms and recommend remedies across jurisdictions. These are not utopian abstractions. They are plausible design responses to a legitimacy crisis that cannot be solved by regulation alone.
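The levy proposal, in particular, is easy to make concrete. The sketch below assumes the 0.1 percent levy applies to the total revenue of firms above the threshold; the revenue figures are invented placeholders.

```python
# Illustrative arithmetic for the proposed public-interest lobbying fund.
# Assumption: the 0.1% levy applies to total revenue of firms whose
# revenue exceeds EUR 500M; the figures below are invented.
LEVY_RATE = 0.001       # 0.1 percent
THRESHOLD_EUR = 500e6   # EUR 500 million eligibility threshold

def levy_owed(revenue_eur: float) -> float:
    """Levy due: 0.1% of total revenue for firms above the threshold, else zero."""
    return revenue_eur * LEVY_RATE if revenue_eur > THRESHOLD_EUR else 0.0

for revenue in (400e6, 800e6, 5e9):  # hypothetical firms
    print(f"revenue EUR {revenue / 1e6:,.0f}M -> levy EUR {levy_owed(revenue) / 1e6:,.2f}M")
```

On figures like this, a few dozen large firms would capitalise such a fund at tens of millions of euros a year; the point of the sketch is simply that the mechanism is administratively trivial.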
Today, the stage is full. Tech executives, regulators, ethicists, and policymakers perform a familiar drama: ambition and caution, threat and promise. But the audience (the citizen, the worker, the democratic subject) remains in the lobby. To govern AI is not merely to mitigate its dangers or unlock its value. It is to adjudicate who has the right to speak, to shape, to dissent. If this governance is to serve more than power, it must become more than theatre. It must become accountable. What if AI governance were not a race, but a republic? Not a negotiation among the capable, but a reckoning with the common? Not a question of what can be regulated, but of who must be heard?
Sources
Brookings Institution. (2024). Satellite-based measures of economic activity in China: A reappraisal of AI-driven growth projections. Washington, DC: Brookings.
CAICT – China Academy of Information and Communications Technology. (2024). AI Development Annual Report. Beijing: Ministry of Industry and Information Technology.
Corporate Europe Observatory. (2024). Captured states: Big Tech’s year of lobbying during the AI Act negotiations. Brussels.
Creemers, R. (2023). Cyber China: Upgrading propaganda, public opinion work and social management for the twenty-first century. In J. deLisle, A. Goldstein, & G. Yang (Eds.), The Internet, Social Media, and a Changing China (pp. 58–76). University of Pennsylvania Press.
Drezner, D. W. (2020). The Toddler in Chief: What Donald Trump Teaches Us about the Modern Presidency. University of Chicago Press.
European Commission. (2024). Artificial Intelligence Act: Regulation (EU) 2024/1689 of the European Parliament and of the Council. Brussels.
European Digital Rights (EDRi). (2023). Recommendations on the EU AI Act trilogues: Rights-based governance for AI.
Gupta, D., & Kulothungan, V. (2023). Adaptive Governance for High-Risk AI: Institutional Proposals for Democratic Legitimacy. Journal of Technology and Society, 8(1), 112–139.
IMF. (2025). AI and Productivity in Europe: A Sectoral Assessment. Washington, DC: International Monetary Fund.
Mei, A., & Sag, F. (2024). Rights without Remedies? Legalism, Consultation, and the Limits of Participatory AI Governance in the EU. Digital Law and Policy Review, 12(2), 87–106.
Mozilla Foundation. (2023). Response to the Commission Consultation on the AI Act.
NED – National Endowment for Democracy. (2024). Data-Centric Authoritarianism: The Rise of AI-Driven Governance in China. Washington, DC.
OpenSecrets. (2024). AI lobbying spending totals and top spenders (2022–2024). Center for Responsive Politics.
Palantir Technologies. (2024). Remarks by CEO Alex Karp at the Munich Security Conference.
PwC. (2017). Sizing the prize: What’s the real value of AI for your business and how can you capitalise? London: PricewaterhouseCoopers. https://www.pwc.com/gx/en/issues/analytics/assets/pwc-ai-analysis-sizing-the-prize-report.pdf
Stanford DigiChina Project. (2024). Translation and analysis of China’s AI regulations and model registration guidelines. Stanford University Cyber Policy Center.
The Markup. (2024). Inside the making of H.R. 8923: How AI lobbyists influenced America’s state preemption bill.
Wang, X., Liang, Q., & Yang, Z. (2025). Mapping the Strategic Lexicon of National AI Plans: A Structural Topic Modeling Approach. Technology and Public Policy, 9(1), 34–59.
World Bank. (2024). Is AI Driving Growth? Reassessing Emerging Market Claims Using Nightlight Data. Washington, DC: World Bank Group.