Why universities must set clear rules for AI use before trust in academia erodes
Vendan Ananda Kumararajah
- Published
- Opinion & Analysis

While AI tools are now embedded in everyday university life across Europe, the absence of clear institutional rules, shared standards and enforceable governance is creating ambiguity, inequity and a gradual erosion of trust at the heart of academic systems, writes Vendan Ananda Kumararajah
Universities across Europe are already operating in an AI-enabled reality. Students use large language models to draft essays, summarise readings, generate code, and structure arguments. Academics, meanwhile, use AI to accelerate research, review literature, and prepare teaching materials. Yet despite this widespread adoption, academia still lacks something fundamental: clear rules of engagement. In the absence of that clarity, ambiguity takes hold and steadily erodes trust in academic systems.
The current state of play is paradoxical. AI tools are everywhere in higher education, yet governance is nowhere. Institutions oscillate between informal tolerance, outright bans, and vague guidance that shifts responsibility onto individual lecturers and students.
This leaves everyone uncertain. What counts as acceptable assistance? Where does learning end and automation begin? Who is accountable when AI-generated work enters assessment systems? Without clear boundaries, neither students nor educators are operating on a level playing field.
Academic integrity has always rested on transparency, attribution, and demonstrable understanding. AI, however, disrupts all three.
When students submit work partially or wholly shaped by automated systems, the ethical question is not merely whether AI was used, but how, where, and with what justification. In the absence of formal AI infrastructure and governance rules, ethical judgement becomes subjective, inconsistent, and vulnerable to dispute.
This is unfair to students, who are left guessing what is permissible, and to educators, who are asked to police behaviour without institutional backing. Ethics cannot be enforced through ambiguity. They must be designed into the system.
One of the least discussed consequences of unmanaged AI use in academia is inequity. Some students have access to premium tools, stronger prompts or informal guidance, while others rely on free versions or avoid AI altogether out of uncertainty. Departments vary in their tolerance, with some encouraging experimentation and others treating it as misconduct. This landscape creates structural unfairness and weakens the basis of academic meritocracy.
When institutions fail to define shared rules of engagement, advantage accrues unevenly and the principle of fair assessment is steadily undermined.
Perhaps the most serious concern of all is pedagogical.
AI can accelerate learning, but it can also bypass it. When automated prompts generate polished answers, students may produce convincing work without acquiring underlying domain knowledge. Critical thinking, synthesis, and argumentation risk being replaced by surface coherence.
The question universities must confront is not whether AI is “good” or “bad”, but whether current usage supports or undermines learning objectives. If students cannot defend, critique, or explain AI-assisted outputs, the educational contract has already been broken.
Universities are trusted institutions because they produce knowledge that is aligned with disciplinary standards, critically reasoned, and defensible under scrutiny. Unregulated AI use weakens that trust, and when neither students nor institutions can clearly account for how knowledge is produced, confidence in academic outputs erodes internally and in the eyes of employers, regulators, and society. This is happening already.
The solution, in my view, is not prohibition. Blanket bans are unenforceable and intellectually dishonest. Nor is laissez-faire adoption viable. What is needed is institutional AI governance that is practical, transparent, and enforceable.
Two steps are essential.
First, internal, institute- or course-specific AI platforms. Rather than relying on uncontrolled external tools, institutions should provide curated AI environments aligned with course objectives, assessment methods, and ethical standards. This creates traceability, consistency, and shared expectations.
Second, clear rules of engagement. Students should know when AI may be used, for what purposes, with what disclosure, and how they are expected to defend and critically analyse AI-assisted work. AI should be treated as an intellectual instrument, not a shortcut, and its use should always remain visible, discussable, and examinable.
European academia has navigated previous technological shifts, such as calculators, the internet, and digital libraries, by updating norms and governance, not by pretending change could be stopped. AI is no different, except in scale.
The choice now is stark: either institutions design AI governance intentionally, or they allow trust, integrity, and learning outcomes to erode by default.
In academia, as in society, trust depends on structure rather than silence.

Vendan Ananda Kumararajah is an internationally recognised transformation architect and systems thinker. The originator of the A3 Model—a new-order cybernetic framework uniting ethics, distortion awareness, and agency in AI and governance—he bridges ancient Tamil philosophy with contemporary systems science. A Member of the Chartered Management Institute and author of Navigating Complexity and System Challenges: Foundations for the A3 Model (2025), Vendan is redefining how intelligence, governance, and ethics interconnect in an age of autonomous technologies.