A Bird’s-eye View of the Paris AI Action Summit: Regulation, Power, and Alternatives

Lucas Anjos

Tech Policy Fellow

The recently concluded AI Action Summit, held in Paris, France on February 10-11, highlighted the widening gap between the AI strategies of the Global North and the shared concerns of the Global South. While world leaders, major tech CEOs, and policymakers debated regulatory frameworks and economic competitiveness, often framing regulation and innovation as mutually exclusive, the summit largely mirrored previous gatherings in its focus on maintaining the dominance of established players. Some would argue that there was a slight shift toward more immediate risks (instead of a heavy focus on existential ones), including environmental harms.

However, beneath the surface of corporate influence and on the sidelines of state-driven AI nationalism, we saw some alternative models and perspectives—particularly from the Global South—emerging to challenge the status quo.

Global North’s AI dilemma: Caught between regulation and deregulation

The summit made clear that AI governance in the Global North is at a crossroads, at least in terms of discourse. Seemingly led by France and Germany, the European Union continues to push for regulatory oversight through initiatives such as the Digital Markets Act, the Digital Services Act, and expanded AI risk management rules. Meanwhile, the United States, represented by Vice President JD Vance, warned against “overregulation,” rehearsing the old trope that stringent AI rules (worldwide) could stifle innovation and harm U.S. firms. This divide is not merely about regulation but about who controls the AI market.

The U.S. approach, aligning with Big Tech interests (its national champions), seeks to maintain its technological hegemony by resisting European-style regulation, particularly regarding misinformation and algorithmic accountability. Meanwhile, the EU, despite its regulatory ambitions, faces internal tensions between economic protectionism and fostering a competitive and flourishing AI landscape. 

There is a growing shift in the discourse coming from the (newly composed) European Commission, one that seeks to dismantle the public image of excessive red tape and focuses on the “pro-business” opportunities of government-driven innovation. The recent withdrawal of the proposed AI Liability Directive is a testament to that. After the Draghi Report in 2024, it is all about the competitiveness discourse now.

Obviously, China’s AI ecosystem loomed in the background of all these discussions, with its DeepSeek AI assistant emerging as a major competitor, promising capabilities equivalent to U.S. models but at lower costs. As the AI race intensifies, concerns over monopolization, surveillance, and global AI value chain dependencies become even more relevant.

Missing voices: Global South exclusion and the coloniality of AI

A recurring critique of AI governance summits is their selective participation. This is hardly new for tech conferences, assemblies, and events, with a few exceptions such as the Internet Governance Forum. As observed in previous forums, the Paris AI Action Summit once again prioritized major Western tech firms, European regulators, and a handful of Global North states, both in its main event at the Grand Palais and on its sidelines, while largely excluding critical perspectives from civil society and Global South nations. Though not new, this exclusion is symptomatic of a broader issue: AI governance remains structured around the interests of developers, reinforcing historical asymmetries in technological power.

For many countries, the prevailing AI governance model represents a new form of digital colonialism—where data, infrastructure, and algorithmic control are concentrated in a few dominant economies, leaving developing nations dependent on external technologies. The European Commission’s InvestAI initiative, which commits over €200 billion to AI development, illustrates how capital-intensive AI ecosystems continue to be shaped by states and corporations that already wield disproportionate influence.

This reality has spurred growing discussions on AI sovereignty. Brazil and several African nations have begun advocating for alternative AI models, including open-source AI, decentralized data governance, reflections on the environmental sustainability of these industries, and regional AI training infrastructure to reduce reliance on Silicon Valley and Beijing.

Emerging alternatives toward a (somewhat) decentralized AI future

Despite the dominance of Global North AI models, alternative frameworks are gradually gaining traction. Various approaches are being explored, reflecting a growing determination to establish AI ecosystems that are both more independent and tailored to regional needs.

One significant trend is the development of state-led AI initiatives. Countries such as Brazil and South Africa have begun investing in public AI infrastructure, recognizing that reliance on pre-trained models developed in the United States or China risks perpetuating technological dependency. By creating state-backed AI systems (directly and indirectly, through tax incentives, research and development, capacity-building, and training), these nations aim to ensure that AI technologies are somewhat aligned with local economic, linguistic, and social realities, rather than being optimized solely for the interests of foreign tech conglomerates.

Alongside these governmental efforts, a strong push toward decentralized and open AI has emerged. Initiatives such as Africa’s Masakhane NLP project and India’s AI4Bharat exemplify this movement, focusing on the creation of AI models that better reflect linguistic and cultural diversity. These projects seek to build technologies that are more accessible, inclusive, and responsive to the needs of historically underrepresented communities.

At the same time, concerns over data sovereignty and algorithmic transparency have led to stronger regulatory frameworks, even if administrative in nature. In Latin America, regulators like Brazil’s ANPD have taken steps to ensure that AI systems trained on local data remain subject to national legal oversight. This shift reflects a growing recognition that control over data is integral to technological sovereignty. 

Without clear regulatory protections, AI models risk becoming contemporary tools of external influence, extracting value from local populations without accountability to domestic institutions.

Taken together, these approaches challenge the prevailing perception that AI development must be dictated by a handful of powerful firms concentrated in the Global North. Instead, they point toward governance models that emphasize democratic participation and technological self-determination. While still in their early stages, these efforts mark an important step toward a more decentralized and equitable AI landscape, one that does not merely reproduce the existing hierarchies of the digital economy, but instead seeks to redefine them on more just and inclusive terms.

A missed opportunity for AI and public interest

A key takeaway from the AI Action Summit was the limited discussion of AI’s role in serving the public interest. Mariana Mazzucato and Tommaso Valletti, in their commentary on AI governance, warned that governments risk ceding too much power to private firms under the guise of innovation. Their argument is simple: AI should not be treated merely as a commercial asset, but as a general-purpose technology capable of transforming healthcare, education, governance, and many other fields.

Unfortunately, the summit did not prioritize these concerns. It largely reinforced a competitive AI arms race between the U.S., the EU, and China, even if subtly. While sustainability concerns were acknowledged in the final statement (which the U.S. and the U.K. refused to sign), concrete commitments to reduce AI’s environmental footprint or ensure equitable access to AI technologies were missing.

Toward a more inclusive AI future

If AI is to be a tool for global development, rather than another mechanism of technological dependency, then countries outside the traditional power centers need to assert their agency in shaping AI policies. This means investing in alternative AI models, fostering international South-South cooperation, and demanding a seat at the regulatory table(s).

As it stands, AI governance continues to reflect the economic and geopolitical ambitions of a few dominant actors. But the emergence of open-source initiatives and discourse, regional AI alliances, and public AI infrastructures suggests that a shift, however incremental, is possible. The challenge ahead is ensuring that this shift translates into meaningful and sustainable AI policies that do not merely reproduce old power structures in a new, digital and algorithmic form.

The Paris AI Action Summit may have reaffirmed long-standing divides, but it also underscored the need for a more pluralistic and inclusive AI future, one where the Global South is not merely an observer, but a defining voice in shaping the technologies that will govern the decades to come.


Lucas Anjos is a legal scholar specializing in technology, privacy, and international law, working as a postdoctoral researcher at Sciences Po Law School in Paris.