[ISAFIS Gazette #4] Who Regulates the Robots? The Global Struggle for AI Governance and Legitimacy
Written by: Fatimah Azzahra, Staff of Research and Development
Artificial intelligence is developing faster than the global systems built to regulate it. From ChatGPT’s viral boom to AI-driven surveillance, the world is witnessing a digital revolution, but there is still no clear international rulebook in place. While global institutions like the UN, WTO, and ITU are slowly starting to engage with the issue, there is no binding, centralised legal framework to govern AI at the global level (Schmitt, 2021). The result is a regulatory gap in which policies are outdated, fragmented, or based on voluntary principles with no legal weight.
In the absence of a coordinated international response, powerful countries and tech corporations are stepping in to fill the gap. For instance, the European Union is leading with its AI Act, while the United States continues to adopt a more decentralised, market-friendly approach. Meanwhile, private actors like OpenAI, Google, and Microsoft are setting global norms simply because their technologies are the ones most widely used (Almeida et al., 2021). This raises a major question in international political economy, and in global AI governance specifically: who decides the rules, and whose values are embedded in these technologies?
What makes the situation even more complicated is that AI operates across borders, but enforcement authority does not. A chatbot designed in California can influence elections in Jakarta. An algorithm developed in Beijing can collect data from Nairobi. Yet there is no globally recognised mechanism to hold these systems accountable when something goes wrong (Schmitt, 2021). The result? A growing crisis of legitimacy. International law is struggling to keep up, and there is still no agreement on who should be responsible for enforcing ethical or legal boundaries in the AI space.
This paper contends that without greater global coordination, the world will continue to face a fragmented and unequal AI governance structure, one in which enforcement is uneven, accountability is lacking, and the voices of less powerful states are marginalised. In the sections that follow, I discuss the fragmented nature of AI regulation today, examine the jurisdictional problems associated with cross-border technology, and explain why soft law alone is insufficient to hold powerful players accountable.
Fragmented Global Regulatory Landscape
The conversation around AI governance today is like a puzzle with too many missing pieces. Countries are scrambling to regulate AI, but each is following a different rulebook. One significant difficulty is fragmentation: instead of converging on a single framework, states are taking divergent approaches. The EU, for example, has implemented a risk-based AI Act that categorises systems depending on their risk level, while the United States has a sector-specific, decentralised model, with laws differing by state and agency (Clifford Chance, 2024).
However, disagreement is not the only issue; many regulations contain weaknesses. The EU’s risk-based model has been criticised for failing to account for context of use. Deepfakes, for example, are deemed “low risk” under the Act, despite being used for disinformation during the Russia-Ukraine conflict. This shows how current frameworks fail to address AI’s multipurpose nature and the need for regulation that considers real-world intent and impact.
The US approach, meanwhile, tends to be reactive, focusing on remedying harm after it occurs rather than preventing it. The US also shows little interest in global leadership on AI governance, possibly because its tech giants already comply with EU-style rules like the GDPR when operating in Europe (Siegmann & Anderljung, 2022).
This creates a situation in which the EU sets worldwide norms, but its regulations are not always enforced or respected. Many US-based companies nominally fall under EU law but frequently fail to comply fully, particularly in areas such as data protection and algorithmic transparency (Veale & Zuiderveen Borgesius, 2021).
What we’re left with is a patchwork of laws that don’t align and don’t always work. This creates loopholes for companies to exploit (Cole, 2024), undermines global standard-setting, and contributes to a growing “governance vacuum” (The Guardian, 2025), raising further concerns about cross-border accountability in the age of AI.
Cross-Border Jurisdictional Challenges
AI does not respect borders. Its algorithms and data often flow through multiple countries at once, making jurisdictional authority a major grey area. If an AI system developed in one country causes harm in another, who should be held responsible? Which country’s laws should apply? These are not hypothetical concerns; they are real problems affecting how we assign accountability in an increasingly digitised world (Draper & Gillibrand, 2023). Legal uncertainty like this not only delays justice but also discourages cooperation between states.
To make matters worse, international AI norms are often shaped by dominant actors whose technologies are widely adopted, raising concerns about unbalanced influence and the risk of digital soft power being used to entrench global dominance (Perc et al., 2019; Arora et al., 2023). In this context, the lack of cross-border enforcement doesn’t just hinder legal responses—it highlights an urgent need for more countries to step up and assert their own regulatory voices.
If jurisdictions beyond the EU and the US remain passive, the global AI legal landscape will continue to lack balance and fail to reflect the diversity of global legal traditions and human rights standards.
Soft Law and Voluntary Principles Fail to Ensure Accountability
Most global AI frameworks today fall under what’s called ‘soft law’—guidelines that sound good on paper but aren’t legally binding. Initiatives like the OECD AI Principles and the G7 Hiroshima AI Process advocate for ethical AI, but they rely on voluntary compliance (Leslie & Perini, 2024).
In practice, this means companies can publicly support ethical AI while continuing to develop or deploy systems with known biases or harmful effects, a tactic known as ‘ethics washing’ (Bietti, 2019). Since private companies, not governments, are leading much of AI’s development, the absence of enforceable regulation becomes even more problematic. Self-regulation might sound ideal in theory, but without penalties or oversight, it often falls short in practice (Schultz & Seele, 2022). And even where actors try to comply, their domestic capabilities, including technological and legal instruments, may not be strong enough to support their ambitions.
As a result, public trust in both AI technologies and those who govern them continues to erode. In short, while there are efforts being made to regulate AI, the current governance landscape remains fragmented, uneven, and largely unenforceable.
Conclusion
AI is moving faster than the rules meant to guide it, and this gap is growing harder to ignore. As technologies continue to cross borders, so too must the responsibilities that come with them. Without clearer legal rules and stronger cross-border cooperation, we risk creating a world where no one is truly accountable for the harms AI can cause.
The current reliance on voluntary guidelines and patchy regulations is no longer enough. To protect human rights and uphold the rule of law, more countries must join the conversation, not just the usual powers. By putting forward their own ideas, legal standards, and ethical values, other nations can help build a more balanced global framework that reflects a wider range of perspectives.
Until there’s a stronger, binding framework that reflects diverse global interests—not just those of major powers—AI regulation will remain fragmented and unfair. The goal isn’t to slow down innovation, but to make sure it’s shaped by fairness, transparency, and shared responsibility. AI might be borderless, but accountability shouldn’t be.
References
Almeida, P., Santos, C., & Farias, J. (2021). Artificial intelligence regulation: A framework for governance. Ethics and Information Technology. https://www.researchgate.net/publication/351039094_Artificial_Intelligence_Regulation_a_framework_for_governance
Arora, A., Barrett, M., Lee, E., Oborn, E., & Prince, K. (2023). Risk and the future of AI: Algorithmic bias, data colonialism, and marginalization. Information and Organization, 33(3), 100478. https://doi.org/10.1016/j.infoandorg.2023.100478
Bietti, E. (2019, December 1). From Ethics Washing to Ethics Bashing: A View on Tech Ethics from Within Moral Philosophy. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3513182
Birkstedt, T., Minkkinen, M., Tandon, A., & Mäntymäki, M. (2023). AI governance: themes, knowledge gaps and future agendas. Internet Research, 33(7), 133–167. https://doi.org/10.1108/intr-01-2022-0042
Clifford Chance. (2024). Global developments in AI regulation. https://www.cliffordchance.com/content/dam/cliffordchance/briefings/2024/05/global-developments-in-ai.pdf
Cole, M. D. (2024). International ∙ AI Regulation and Governance on a global scale: An overview of international, regional and national instruments. AIRe, 1(1), 126–142. https://doi.org/10.21552/aire/2024/1/16
Draper, C., & Gillibrand, N. (2023). The potential for jurisdictional challenges to AI or LLM training datasets. Workshop on Artificial Intelligence for Access to Justice, AI4AJ 2023. https://ceur-ws.org/Vol-3435/paper2.pdf
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2
Leslie, D., & Perini, A. M. (2024). Future shock: Generative AI and the international AI policy and governance crisis. Harvard Data Science Review, Special Issue 5. https://doi.org/10.1162/99608f92.88b4cc98
Perc, M., Ozer, M., & Hojnik, J. (2019). Social and juristic challenges of artificial intelligence. Palgrave Communications, 5(1). https://doi.org/10.1057/s41599-019-0278-x
Schmitt, L. (2021). Mapping global AI governance: A nascent regime in a fragmented landscape. AI and Ethics, 2(2), 303–314. https://doi.org/10.1007/s43681-021-00083-y
Schultz, M., & Seele, P. (2022). Digital ethics washing: A systematic review. CSR Communication Conference 2022, September 14–16, 157–160. https://csr-com.org/wp-content/uploads/2023/07/csrcom-proceedings-2022.pdf#page=159
Siegmann, C., & Anderljung, M. (2022). The Brussels Effect and artificial intelligence: How EU regulation will impact the global AI market. arXiv. https://doi.org/10.48550/arxiv.2208.12645
The Guardian. (2025, February 7). AI is developing fast, but regulators must be faster. https://www.theguardian.com/technology/2025/feb/07/ai-is-developing-fast-but-regulators-must-be-faster
Veale, M., & Zuiderveen Borgesius, F. (2021). Demystifying the Draft EU Artificial Intelligence Act: Analysing the good, the bad, and the unclear elements of the proposed approach. Computer Law Review International, 22(4), 97–112. https://doi.org/10.9785/cri-2021-220402
Zaidan, E., & Ibrahim, I. A. (2024). AI governance in a complex and rapidly changing regulatory landscape: A Global perspective. Humanities and Social Sciences Communications, 11(1). https://doi.org/10.1057/s41599-024-03560-x