AI Governance in Higher Education: Your University’s AI Policy Is Punishing the Wrong People
- Dr. Victor Osei Kwadwo

We need to talk about the elephant in the lecture hall: universities are struggling to respond to AI, and many are getting it wrong. Walk into any faculty meeting these days and you will hear the same anxious refrains. “Students are using ChatGPT to write their essays!” “How do we detect AI-generated work?” “We need stricter policies!” It is 2026, and higher education is responding to transformative technology the same way it often has: with panic, prohibition, and a scramble to preserve the status quo. The backwardness we cultivate in the name of integrity will be punished by the very environments we claim to be preparing students for.

To be fair, some of that anxiety is warranted. AI does raise real questions about academic integrity and what learning means when a chatbot can draft a passable essay in seconds. But the dominant response, centering on surveillance and prohibition, is not just ineffective. It is actively harmful to students, counterproductive to learning, and a poor use of institutional resources.
The Burden on Teachers and the Failure of Detection
In many institutions, graders and teachers are now expected to flag possible AI use and report it to a board of examiners. In practice, this transforms grading. Instead of engaging with the substance of a student’s argument, the grader’s attention drifts toward a different question: “Did a human write this?” Boards of examiners become overwhelmed with reports based on gut feeling. In response, they ask teachers to be more circumspect, adding more labour to an already demanding process.
Everyone does more work now, and no one does the work that matters: evaluating whether students are learning.
The detection tools themselves are deeply unreliable. A 2023 Stanford study by Liang et al. found that AI detectors misclassified over 61% of TOEFL essays as AI-generated, with non-native English speakers up to 30% more likely to be falsely flagged. An independent study in the International Journal for Educational Integrity found that none of the 14 detectors tested achieved accuracy above 80%.
The consequences are real. At the University of North Georgia, a student was placed on academic probation for using Grammarly. A Yale student sued the university over a wrongful suspension based on GPTZero. And in the most significant ruling yet, Orion Newby, a freshman with autism at Adelphi University, had his history paper flagged as 100% AI-generated by Turnitin. Two other detectors cleared it as human-written. A Nassau County judge ruled the punishment was “without valid basis and devoid of reason” and ordered the university to expunge his record.
If a single tool can flag a paper as entirely AI-generated while others clear it completely, the tool is not evidence. It is a coin flip with consequences.
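To see why a flagged paper is such weak evidence, consider the base rates. Below is a minimal back-of-the-envelope sketch of my own, not taken from the studies above, and every number in it is an assumption chosen purely for illustration: even a detector that catches most AI-written text and wrongly flags only a modest share of human text will, when genuinely AI-written submissions are rare, accuse innocent students nearly as often as guilty ones.

```python
# Illustrative sketch only: assumed numbers, not measurements from any study.
# Question: if a detector flags a paper, what is the chance the paper was
# actually AI-written?

def positive_predictive_value(sensitivity: float,
                              false_positive_rate: float,
                              ai_share: float) -> float:
    """Probability that a flagged paper really was AI-written."""
    true_flags = sensitivity * ai_share                  # AI papers correctly flagged
    false_flags = false_positive_rate * (1 - ai_share)   # human papers wrongly flagged
    return true_flags / (true_flags + false_flags)

if __name__ == "__main__":
    # Assumptions: the detector catches 80% of AI text, wrongly flags 10% of
    # human text, and 1 in 10 submissions is substantially AI-written.
    ppv = positive_predictive_value(sensitivity=0.80,
                                    false_positive_rate=0.10,
                                    ai_share=0.10)
    print(f"Chance a flagged paper is really AI-written: {ppv:.0%}")  # ~47%
```

Under these assumed numbers, roughly half of all flags land on students who wrote their own work. Tweak the assumptions and the exact figure moves, but the lesson does not: a flag on its own tells you very little without knowing how often students actually use AI.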
In the Netherlands, where generative AI has become what the University of Twente has called a “Gordian knot” for examination boards, institutions have shifted toward disclosure rather than detection. But this carries its own contradiction: why would a student voluntarily declare AI use when they are unlikely to be caught, and when declaration triggers cumbersome documentation requirements? Disclosure-based systems deserve credit for acknowledging the failure of detection, but policing, whether by algorithm or honour system, is a poor substitute for pedagogy that makes the question irrelevant.
Ethics Without Mandate
When detection fails, institutions fall back on policy: revamped honour codes, lengthy AI-use statements, required declarations on every assignment. This is procedural compliance masquerading as academic integrity.
Prince Sarpong, writing on the broader governance of AI in “The Democratic Illusion”, describes a pattern he calls “ethics without mandate”: the tendency of institutions to simulate accountability through internal review processes that lack binding authority and answer to institutional governance rather than to the publics they claim to serve. The parallel to university AI policy is striking.
Honour codes, disclosure checkboxes, and AI-use declarations offer the appearance of ethical engagement while preserving the existing power structure entirely.
Students are governed by rules they had no role in shaping, enforced through tools whose accuracy they cannot challenge. Sarpong frames this as a structural displacement of the public from the seat of sovereignty. In the classroom, it is a displacement of the student from the centre of learning. If a take-home essay can be satisfactorily completed by pasting a prompt into ChatGPT or any generative AI, the assessment is the problem, not the student.
The Assessment Opportunity
Rather than treating AI as a cheating tool, we should recognize it as a signal that traditional assessment models need redesigning.
Some solutions are not new at all. Italian universities have relied on oral examinations for over a century. As someone who experienced this system during a master’s programme at Politecnico di Milano between 2013 and 2015, I found it uncomfortably subjective at the time. In hindsight, though, the Italian tradition was ahead of the curve. An oral exam is inherently AI-proof: a skilled examiner probes deeper in real time, something no written submission allows. A 2025 study in College Teaching confirmed that oral assessment reliably distinguishes genuine understanding from superficial knowledge. Oral exams carry risks for anxious students and non-native speakers, but these are design problems, not fundamental flaws.
Having studied at Maastricht University as well, I saw how its Problem-Based Learning (PBL) model illustrates the point differently. When pedagogy is built around small-group discussion and collaborative reasoning, AI becomes difficult to misuse because the learning happens in the room, not on the page. You cannot paste a group discussion into ChatGPT. Maastricht has accordingly focused its efforts on researching how AI interacts with PBL rather than on detection.
Prince Sarpong’s Cognitive Growth Index (CGI), a framework for AI-integrated assessment in higher education, pushes the assessment opportunities further: rather than measuring whether students used AI, it tracks how they think in AI-mediated environments, evaluating cognitive growth and adaptive reasoning over time. The shift is fundamental: from policing tools to measuring learning. The question is not “how do we stop students from using AI?” but “how do we design assessments that require the kind of thinking AI cannot replace?”
Students graduating today will enter workplaces where AI is ubiquitous. They will be expected to use these tools effectively, critically, and ethically. These are learnable skills, and universities should be teaching them, not prohibiting them.
The Equity Dimension
Perhaps most troubling is how AI policing amplifies existing inequities. The University of Nebraska-Lincoln has reported higher false positive rates among neurodivergent students, including those with ADHD and autism. African and Chinese students studying in English-medium programmes often carry the writing gaps that come with working in a second or third language.
That has always been the reality. But now, when these same students produce polished work, they become the usual suspects of AI detection. The very improvement that should be celebrated becomes grounds for suspicion. A student who previously submitted rough but earnest writing and now turns in something well-structured is not necessarily cheating. They may simply have found a tool that helps them communicate what they already know.
This extends to academic publishing. Manuscript rejections citing “poor English” are familiar to scholars from the Global South, even when the contributions are sound. Are we error hunting or evaluating contributions to knowledge? AI tools that help researchers communicate more clearly should be welcomed as a levelling force. Instead, they have become another reason for suspicion.
The UN High-level Advisory Body on AI has noted that 118 of 193 member states are absent from prominent AI governance initiatives. When Global South universities adopt AI policing frameworks designed in Western contexts, they import the biases embedded within them. For decades, the system tolerated a playing field tilted toward those who could pay for help. Now that a free tool has flattened some of that inequality, we are consumed with policing it.
We Have Been Here Before
This apprehension feels familiar to me, because I have lived through a version of it.
In 2007, during my bachelor’s degree in Ghana (Kwame Nkrumah University of Science and Technology), some lecturers were firmly against students using PowerPoint. The argument was that it would make us lazy and leave us helpless without a computer. Resilience meant pen and paper and the chalkboard.
Then in 2013, I arrived at Politecnico di Milano and walked into a gap I had not anticipated. My classmates were fluent in data visualization tools and design software I had never encountered. I was resilient, certainly. But I was also visibly behind. The resilience my lecturers had cultivated came at the cost of competence in the tools my peers considered basic.
The logic today is the same: deny the tool, and the student becomes stronger. But what actually happens is that students in restrictive environments fall behind while their peers elsewhere learn to use the tools critically and effectively. If we are not careful, the students we “protect” from AI will arrive at their first jobs and discover the rest of the world moved on.
The backwardness we cultivate in the name of integrity will be punished by the very environments we claim to be preparing students for.
Some institutions are charting a different course. In the Netherlands, universities have pursued a collaborative model through SURF, building institutional AI environments at the University of Amsterdam and VU Amsterdam. Harvard, Cornell, Michigan, and Florida have moved toward human-centred policies anchored in flexibility rather than prohibition.
The academic AI panic is not really about protecting learning. It is about protecting a system that was already under strain before ChatGPT came along. The sooner we admit that, the sooner we can build something better.

Next in series: “The Assessment Crisis: When Everything Can Be Automated”
About this series: This is a critical examination of how universities are responding to AI technology, written for educators, administrators, students, and anyone invested in the future of higher education. Comments and debate welcome.
DISCLAIMER: The views expressed in this blog are those of the author and do not necessarily reflect the official position of Governance and Development Advisory, or any institution with which the author is affiliated. This piece is written in a personal capacity to contribute to critical dialogue on AI and higher education.


