Artificial intelligence is advancing faster than any regulatory system designed to govern it. Adoption of paid AI among US businesses rose from 5.2% in January 2023 to 43.8% by September 2025. Since the start of 2025 alone, AI adoption has outpaced the combined growth of the previous two years. Yet governance has lagged sharply behind deployment. A 2025 survey of 1,500 companies across regions and sectors found that 81% remain in the two earliest stages of responsible AI maturity, leaving the gap between technological uptake and institutional preparedness to widen.
As governments, businesses, and societies grapple with the implications of this acceleration, the core governance question is no longer limited to what rules should exist. Increasingly, it centers on how those rules should interact with innovation, enforcement capacity, institutional competence, and public trust. And yet, the more fundamental question persists: What is the purpose of AI?
These questions were explored on 31 March at an expert panel on “Governing AI Under Constraint: Innovation, Scale, and Regulation in China and Europe,” hosted by European Guanxi at the Peking University School of Transnational Law (STL) in Shenzhen. The discussion brought together legal, policy, and business perspectives, with contributions from Gilad Abiri, Associate Professor at Peking University; Yajun Zhang, Founder of Zhang Global Advisory; and Eric Stryson, Managing Director of the Global Institute for Tomorrow (GIFT).
What emerged was neither a binary comparison between “strict” and “flexible” regulation nor a competition between regions. Instead, the discussion revealed a more nuanced reality: AI governance is shaped by enforcement capability, economic structure, and institutional design as much as by the text of regulation itself.
Similar Laws, Unequal Leverage
At a formal level, China and the European Union share many regulatory concerns. Data protection, content moderation, cross‑border data flows, and algorithmic accountability feature prominently in both systems. For example, China’s Personal Information Protection Law (PIPL) mirrors many principles found in Europe’s General Data Protection Regulation (GDPR), and a detailed comparative analysis by China Briefing confirms clear parallels across consent frameworks, data protection roles, impact assessment obligations, and cross‑border transfer mechanisms.
On closer inspection, however, the differences between the Chinese and European frameworks become more apparent.
A 2025 survey of 273 EU decision‑makers found that only 15.8% believe Europe will achieve digital sovereignty within five years, with structural dependency on US vendors cited as the principal constraint. Thus, European regulation, while normatively ambitious, often struggles to translate principle into consistent practice – particularly when the dominant AI platforms operating within Europe are developed and controlled elsewhere. The result is a glaring paradox: powerful AI systems remain widely available in the region despite operating within legal grey zones, producing uncertainty without meaningfully altering outcomes.
China, by contrast, has pursued a more iterative and execution‑oriented model. Regulatory frameworks usually evolve alongside deployment, supported by continuous engagement between regulators, legal professionals, and technology firms. A Hong Kong Legislative Council brief notes that China “adopts a sector‑specific approach focused on security and alignment with national priorities,” deploying regulations progressively across generative AI, algorithmic recommendations, and deepfake technologies – each refined as deployment matures. While this approach carries risks of its own, it allows regulators to adjust more quickly as capabilities emerge and threats materialise.
The distinction between the EU and Chinese regulatory frameworks, therefore, is not simply one of precaution versus speed; it is the difference between regulation without leverage and regulation with operational control.
Rethinking the Regulation-Innovation Tradeoff
One of the most persistent claims in AI policy debates is that regulation inherently stifles innovation. Yet the comparison between Europe and China challenges this assumption.
China combines extensive regulation with one of the world’s most dynamic AI ecosystems. Early protection of domestic markets, the cultivation of indigenous platforms, and direct public‑sector involvement have enabled firms to scale rapidly while remaining embedded within national governance frameworks.
Europe, by contrast, lacks large indigenous digital platforms at scale. According to the European Parliament’s 2025 report on technological sovereignty, the EU “relies on non‑EU countries for over 80% of digital products, services, infrastructure and intellectual property”. This dependence reduces regulatory bargaining power and weakens enforcement, while simultaneously constraining domestic innovation capacity.
The implication is that effective domestic enforcement capacity over digital technologies may be a precondition for innovation rather than an obstacle to it. Clear rules, applied consistently, can create the conditions for domestic capability‑building, whereas regulatory ambiguity tends to advantage incumbents and external providers.
Governing Under Uncertainty
Across regions, the same challenge persists: AI development moves faster than the legislative process. This temporal mismatch means governments are attempting to govern technologies whose future form, scale, and impact remain deeply uncertain. As panellists noted during the European Guanxi event, even AI developers themselves often struggle to predict how their products or business models will evolve over the next 12 to 18 months.
In this context, governance must be reinforced by experimentation, feedback loops, and real‑world learning. Subnational pilots, regulatory sandboxes, and continuous review mechanisms offer one pathway to narrowing the gap between innovation and oversight.
Systemic Risks and Distributed Misuse
The panel discussion highlighted a set of increasingly urgent yet under‑examined risks – particularly criminal misuse, misinformation, and low-cost deception. Between 2023 and 2025, deepfake files surged from 500,000 to 8 million – a 16-fold increase in just two years. Fraud attempts involving deepfakes rose 3,000% in 2023 alone, while human detection rates for high‑quality deepfake video stand at just 24.5%, making procedural safeguards, rather than individual vigilance, the only reliable defense.
A crucial point is that these risks are not confined to large foundation models. Smaller, open‑source systems can be deployed locally, beyond the reach of platform‑level regulation. This challenges traditional governance approaches: while large platforms can be regulated, distributed misuse cannot be governed in the same way.
Addressing these risks will require new forms of international coordination, information‑sharing, and public literacy.
Capability as a Binding Constraint
A recurring theme throughout the discussion was that effective AI governance depends less on legal text than on human capability. Regulators, policymakers, and corporate leaders are often tasked with overseeing systems they do not fully understand.
While this skills gap is global, responses to it differ by region. China’s policy process draws heavily on technical expertise from universities, research institutes, and industry, embedding domain knowledge directly into governance structures. Many technically trained graduates enter public institutions and state‑linked organisations, strengthening institutional capacity from within.
By contrast, many other governments rely principally on external consultation while internal understanding remains limited. An OECD report on governing AI in the public sector found that “skills gaps and issues with accessing and sharing quality data are widespread across governments,” with many AI initiatives stalled in pilot phases due to insufficient internal capacity to demonstrate return on investment. The same report observed that adoption “trails some firms in the private sector, slowed by skill gaps, legacy IT systems, limited data, tight budgets, and stricter requirements for privacy, transparency, and representation”.
This implies that AI governance is not only a regulatory challenge but also an education and workforce challenge.
Looking Ahead: AI as a Means, Not an End
Underlying the entire conversation lies a deceptively simple question: What is AI for?
Here it is important to note that AI is not an end in itself. Its impact depends on the problems it is applied to and the values that guide those choices. If left solely to market incentives, it risks accelerating extractive growth models, increasing resource consumption, and deepening inequality. With targeted and consistent direction, it could instead strengthen public services, enhance environmental stewardship, and build societal resilience.
The panel converged on a shared conclusion: governing AI requires clarity of purpose. Without an explicit orientation toward public value, governance frameworks risk becoming symbolic at best and reactive at worst.
In response, GIFT has developed The Thinking Leader Programme (TLP) – a six‑week, virtual, open‑enrolment programme designed to strengthen judgment, decision‑making, and leadership influence. Drawing on over 20 years of experience, the TLP equips participants with practical tools, original thought outputs, and AI‑supported cognitive frameworks they can apply well beyond the programme.
For organisations and individuals ready to lead with clarity, the Thinking Leader Programme begins this June.
Samyuktha is a Research and Content Associate at GIFT Hong Kong. She is involved in converting insights into impactful written material, assisting leadership engagements, and coordinating across GIFT’s global offices to foster content collaboration. Prior to joining GIFT, Samyuktha completed her Bachelor’s degree at the Hong Kong University of Science and Technology, where she double-majored in Global Business and Economics.