The Democratic Illusion: Why AI Governance Lacks a Public Mandate
- Prof. Prince Sarpong

- Feb 19
While narrow AI systems have long operated in specialised domains, the rise of Generative AI represents a structural shift. Arguably the most consequential cognitive infrastructures of our time, large language models (LLMs) such as GPT-5 and Claude were built, trained, and deployed without public consent or meaningful democratic oversight. This absence exposes a deeper paradox at the heart of modern governance: we live in formal democracies, yet the infrastructures shaping public thought, discourse, and decision-making are controlled by private, unelected actors.

From Regulation to Authorisation
Most policy conversations treat AI as a technical, legal or ethical issue. Whether through the EU’s risk-based approach, the US’s market-led model, or China’s state-led security framework, the overarching emphasis remains on managing downstream risks rather than establishing a foundational public mandate.
The primary focus of AI governance is regulating bias, ensuring privacy, and preventing misinformation. While necessary, these are downstream interventions that assume the legitimacy of AI systems themselves. The more fundamental democratic question, “Who authorised their existence, and under what mandate?”, too often remains unasked.
Political theorist Nancy Fraser calls this kind of omission misframing: a tendency of liberal democracies to define participation too narrowly, thereby excluding the structural conditions under which decisions are made. In the case of AI, governance frameworks routinely centre “safety” or “transparency,” but they rarely address the right of publics to determine the boundaries of technological power in the first place.
As technology firms frame AI development as innovation rather than governance, questions of legitimacy are displaced by narratives of inevitability and progress. The public is recast as a beneficiary, not a stakeholder. Terms of service replace consent, and market adoption substitutes for deliberation. The act of using AI systems such as chatbots, for instance, constitutes participation without sovereignty: while users are 'included' in the ecosystem as consumers and data providers, they lack any meaningful agency over the system’s fundamental architecture or social purpose. This is a simulation of engagement that substitutes market adoption for democratic authorisation: the collective right of a public to determine the terms under which a technology is permitted to operate within their society.
The absence of a public mandate is not a static oversight but a structural process of disenfranchisement. To understand how AI has escaped democratic authorisation, we must examine the cycle of its development: first, the unauthorised extraction of the public's cognitive resources; second, the insulation of these systems from public oversight through para-sovereign governance; and third, the substitution of genuine accountability with private ethical frameworks.
The Extraction of Consent
Generative AI systems rely on vast datasets scraped from human expression: books, posts, code, art, and archives. These data are often used without consent, attribution, or compensation, thereby transforming human cognition into private capital. In many cases, individuals whose words or works were used to train these systems never agreed to their use, nor to the ways in which those systems might later influence people’s lives, labour markets, or political realities.
This structure mirrors what has been described as data (or surveillance) capitalism: an economic system that converts social activity into monetisable assets. Under this system, consent becomes both individualised and symbolic:
clicking “I agree” to a privacy notice is treated as a substitute for political authorisation of the technology’s foundational legitimacy, that is, the collective democratic decision about whether such pervasive cognitive infrastructures should be built, how they should be governed, and what role they should play in public life.
In reality, it is a mechanism of what I term “consent extraction”: a process in which individuals relinquish rights without understanding the scale or consequences of the systems they enable. In this way, a private contract replaces a social contract, a phenomenon that legal scholar Daniel J. Solove describes as the consent dilemma, where the complexity of data processing renders individual self-management impossible. This extraction is not merely an economic transaction but a political one. By treating human expression as raw material, firms bypass the need for a mandate, framing the process as a technical necessity of innovation rather than a claim on human knowledge. When citizens click “I agree”, they are not authorising a new algorithmic regime; they are navigating a forced choice within a regime of consent extraction.
The extraction of consent constitutes a profound democratic deficit as it bypasses the imperative of public authorisation. In a democracy, the deployment of transformative social infrastructure, whether physical roads or cognitive algorithms, requires a public mandate to ensure it serves the common good. Sheila Jasanoff, for example, argues that we are in a constitutional moment, where the rules for governing science and technology are being rewritten, and that renewed forms of public engagement are needed to counter anti-democratic tendencies. The democratic deficit here is epistemic as much as legal. When knowledge itself becomes privatised, and when public discourse, collective reasoning, and social memory are captured as training data, citizens lose the ability to define the terms of their own cognition.
The result is epistemic disenfranchisement, where the conditions of thought are governed by technological protectorates that citizens neither built nor approved. Once this cognitive capital is consolidated, the democratic deficit shifts from economic to institutional. The wealth and technical complexity generated by data extraction create a barrier that traditional democratic institutions struggle to pierce, leading to a reliance on the very industry that staged the initial extraction.
Para-Sovereign Power
Even in well-functioning democracies, the institutions tasked with AI regulation remain structurally dependent on the very industries they oversee. Because the state lacks the independent computational capacity and technical expertise required to audit frontier models, it must rely on ‘corporate discretion’ and industry-led standards. This creates a state of digital dependency where the regulator becomes an auxiliary to the developer, rather than a sovereign voice for the public.
The most influential policy frameworks, from the EU’s AI Act to the OECD AI Principles, exhibit different levels of formal legitimacy. For example, while parliamentary involvement in the EU AI Act may constitute an indirect public mandate, these processes remain largely 'downstream': they regulate the effects of systems whose foundational existence and deployment were never subjected to a direct public vote or constitutional deliberation.
This has created what might be called para-sovereign governance, or what Julie Cohen theorises as a condition that amounts to joint sovereignty, where governance shifts from democratic politics to managerial, technocratic coordination. In this state, legitimacy circulates within a closed loop of technocrats, consultants, and corporate ethicists, while publics remain spectators to their own regulation. This 'closed loop' effectively creates a state where policy is co-authored by the regulated and the regulator, ensuring that even landmark frameworks risk codifying the status quo rather than subjecting it to a public vote. The problem reflects a conceptual blind spot: elections and courts continue to operate as if the way citizens form opinions and deliberate were outside the purview of governance. Yet, because AI systems now mediate the very information flows used for democratic deliberation, their privatisation means that democracy is losing its epistemic foundation.
As states become structurally dependent on private 'cognitive infrastructure', they effectively outsource the 'digital nervous system' of government. Digital sovereignty erodes, the state loses the capacity to govern independently of the platforms it relies upon, and the para-sovereign status of technology firms becomes further entrenched.
The lack of a public mandate further entrenches systemic global exclusion. As noted in the final report of the United Nations High-level Advisory Body on AI, there is a profound “global governance deficit” as a handful of multinational companies in a few countries dictate the trajectory of AI, while the impacts are “imposed on most people without their having any say in the decisions for doing so”. Out of 193 UN Member States, 118—primarily in the Global South—are entirely missing from recent prominent AI governance initiatives. This exclusion represents 'epistemic disenfranchisement' on a planetary scale. When the raw materials of AI, such as training data, are globally sourced but decision-making remains concentrated in a few developed economies, the result is a 'patchwork of norms' that fails to represent the linguistic, cultural, and political diversity of the global majority.
Ethics Without Mandate
In response to mounting public pressure, many technology firms have launched internal ethics boards, “responsible AI” divisions, or multi-stakeholder initiatives. These efforts produce valuable discourse, but they are not democratic mechanisms. Most lack binding authority, operate behind closed doors, and answer to corporate governance rather than public mandate. They offer what could be called “ethics without mandate”, a simulation of accountability that preserves legitimacy without redistributing power.
The result is a profound inversion of democratic logic. Instead of technology serving as an instrument of public will, publics are redefined as risk factors to be managed, or as data sources to be mined. The social contract becomes one-sided: citizens are governed by cognitive systems whose goals and parameters they cannot contest.
This transformation reduces the citizen to a 'user': a subject whose agency is limited to the binary choice of 'accept or decline' within a digital ecosystem designed elsewhere. In this configuration, the public is no longer the author of its social rules, but a population to be optimised, nudge by nudge, toward corporate or state objectives that remain shielded from democratic scrutiny. This is not merely a failure of oversight but a structural displacement of the public from the seat of sovereignty.
The Need for Epistemic Democracy
The problem, then, is not simply the absence of regulation but the absence of authorisation. The legitimacy of AI systems must rest not on their utility or performance, but on the collective processes that determine their existence, scope, and direction. What is needed is a model of epistemic democracy: a political framework that extends democratic accountability into the cognitive systems that mediate social life.
I would therefore argue that a transition to epistemic democracy requires three fundamental shifts in the way we conceive of technological power.
First, we must move from transparency to participation. Rather than merely disclosing how AI works (an 'after-the-fact' transparency that provides no agency), governance must include public deliberation on whether and how such systems should exist. Participatory design assemblies and citizen juries could form the basis of a procedural legitimacy that asks for permission before deployment, rather than forgiveness after harm.
Second, we must pivot from regulation to restitution. Since the value of AI is extracted from the digital commons, we must develop what I describe as “cognitive royalties”: a system of micropayments for data usage, treating data as labour (sketched in code below). These systems would treat the 'digital nervous system' as a public utility. If a model is trained on a community's collective knowledge, that community should hold equity in the system's benefits or have the power to restrict its commercial use through 'data trusts' that act as collective bargaining units for our digital expression.
Finally, governance must migrate from industry ethics to institutional authority. While the EU AI Act moves in this direction, it primarily establishes compliance benchmarks for developers. A true epistemic democracy requires independent citizen assemblies with the interdisciplinary expertise and binding authority to evaluate not just a model’s safety, but its social and democratic 'fit'—the right to veto systems that are fundamentally incompatible with human autonomy.
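To make the restitution idea concrete, here is a minimal sketch, assuming a 'data trust' that splits a royalty pool pro rata by each contributor's attributed share of a training corpus. The names (`Contribution`, `allocate_royalties`) and figures are hypothetical; a real cognitive-royalties scheme would require provenance tracking, attribution standards, and legal machinery that this toy example deliberately omits.

```python
# Hypothetical sketch of "cognitive royalties": a data trust splits a royalty
# pool pro rata by each contributor's attributed share of the training corpus.
# All names and numbers here are illustrative assumptions, not an existing system.
from dataclasses import dataclass


@dataclass
class Contribution:
    contributor: str  # an individual, or a community represented by a data trust
    tokens: int       # training-corpus share attributed to this contributor


def allocate_royalties(pool: float, contributions: list[Contribution]) -> dict[str, float]:
    """Split a royalty pool pro rata by attributed training-data share."""
    total = sum(c.tokens for c in contributions)
    if total == 0:
        return {}
    return {c.contributor: round(pool * c.tokens / total, 2) for c in contributions}


# Example: one licensing period's pool, split across two community archives
# and an individual author.
corpus = [
    Contribution("community_archive_A", 4_000_000),
    Contribution("community_archive_B", 1_500_000),
    Contribution("individual_author", 500_000),
]
print(allocate_royalties(10_000.00, corpus))
# -> {'community_archive_A': 6666.67, 'community_archive_B': 2500.0, 'individual_author': 833.33}
```

Note that the unit of allocation can be a community-level trust rather than an individual, reflecting the point above that restitution is a matter of collective bargaining, not another click-through contract.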
Critics may argue that shifting toward an epistemic democracy by incorporating participatory design assemblies and public audits will inevitably slow the pace of innovation. However, this is a category error that confuses speed with progress. These measures will not stifle innovation; they will ground it in social legitimacy. Innovation that lacks a public mandate is inherently fragile, as it rests on the extraction of consent rather than the cultivation of trust. Without these democratic safeguards, AI will continue to evolve as a para-sovereign force that operates outside any framework of collective authorisation, eventually undermining the very social stability and cognitive autonomy required for a functioning society. By re-founding democratic practice to include the governance of cognitive infrastructures, we ensure that technological advancement serves the public will rather than merely treating the public as a risk factor to be managed.
The Democratic Moment
We are living through what Sheila Jasanoff and J. Benjamin Hurlbut call a constitutional moment for science and technology: a juncture when the boundaries of authority must be renegotiated between knowledge, power, and the public.
The question is no longer how intelligent machines have become, but whether we can build democratic institutions capable of governing that intelligence.
We are at a point where the technical sophistication of our tools has far outpaced the democratic sophistication of our governance. If we fail to draw a democratic line now, we may end up becoming the subjects of a cognitive regime we can neither understand nor control.
The illusion of democracy in AI lies in believing that existing institutions suffice to govern systems that were never subject to public authorisation. Restoring legitimacy requires more than better rules or safer code. It demands a re-founding of democratic practice itself, one that recognises cognition as a public domain and insists that those who build the infrastructures of thought remain accountable to the societies that think within them. If AI is trained on us, it must also answer to us. That is the democratic line yet to be drawn.

DISCLAIMER: This blog is a reproduction of a blog post that forms part of the Symposium on AI & Democracy. Other contributions can be found here. The views expressed in this blog are those of the author and do not reflect the official position of the original authors or any institution with which they are affiliated.

The author
Prince Sarpong is an Associate Professor of Finance at the University of the Free State. He serves as an Advisory Board Member at both AI2030, a global initiative on Responsible AI, and Epistemica, an epistemic engineering firm (LinkedIn).

