1. Headline & intro
Teaching has always depended on a fragile social contract: students agree to do the hard work, teachers agree to guide and fairly judge that work. Generative AI has blown a hole straight through that contract. What’s collapsing first is not content or curriculum, but trust.
In this piece I’ll use an Ars Technica column by Earth science instructor Scott K. Johnson as a starting point to look at how large language models (LLMs) are quietly destabilising higher education, why online learning is particularly at risk, how institutions (especially in Europe) are misreading the problem, and what a realistic, "AI-aware" future of assessment might look like.
2. The news in brief
As reported by Ars Technica, part‑time college instructor Scott K. Johnson describes how teaching asynchronous online Earth science courses has become dramatically more discouraging since the arrival of tools like ChatGPT.
He explains that in online settings, where student engagement is already weaker, generative AI now lets learners submit assignments that look legitimate but are largely or entirely machine‑written. In a College Board survey of 600 high school students that he cites, 84 percent said they had used generative AI for schoolwork.
Johnson notes that classic plagiarism (copy–paste from Wikipedia) feels almost quaint compared to LLMs, which can generate plausible answers to almost any prompt, including reflective and higher‑order questions that educators previously used to make cheating harder. He recounts redesigning and even abandoning formerly effective assignments because investigating suspected AI use takes hours per case, and misuse is almost impossible to prove conclusively. He also points to surveys of thousands of faculty showing widespread concern that LLMs are undermining critical‑thinking skills, while administrators push institutional AI tools and urge staff to "teach students how to use AI."
3. Why this matters: education is turning into a mistrust economy
The most important point in Johnson’s account is not that students are cheating. Students have always cheated. What’s different now is that the basic mechanisms by which we observe and verify learning are breaking.
Modern education rests on three fragile assumptions:
- Outputs roughly track effort. A good essay usually implies real cognitive work.
- Teachers can spot corner‑cutting. Plagiarism detectors, suspicious patterns, or inconsistent writing used to be enough.
- Assessment is scarce but meaningful. Grades from a course are trusted by employers and other institutions.
LLMs attack all three.
A student can now produce a passable answer in seconds with negligible effort; the signal (output) is almost decoupled from the thing we care about (learning). Detection tools are unreliable and, crucially, cannot offer courtroom‑grade proof—so every borderline case becomes a draining argument. And as this scales, the value of a grade issued in 2026 quietly erodes: how much of that transcript reflects the student’s mind, and how much reflects OpenAI’s or Anthropic’s weights?
In this environment, everyone loses in the long term:
- Students lose genuine skills and confidence. Many already treat AI as “workload management” rather than learning, as Johnson notes from student conversations.
- Teachers, especially precarious adjuncts, are pushed into a permanent policing role, with rising administrative risk and no extra time.
- Serious learners are hurt because instructors can no longer distinguish them cleanly from those outsourcing everything.
The only short‑term winners are AI vendors and, to a lesser extent, university marketing departments that can brag about "AI‑powered campuses" while quietly offloading the pedagogical fallout onto teaching staff.
4. The bigger picture: from calculators to certification crisis
Optimists like to say: "People panicked about calculators too." But that analogy misses two crucial differences.
- Calculators are narrow and verifiable. They automate arithmetic but do not invent fictitious numbers. LLMs improvise. In education, that means a "personal tutor" that confidently teaches you wrong facts if your prompt is slightly off.
- Curricula adapted before calculators became ubiquitous. We re‑emphasised conceptual math, problem setup, and understanding. With generative AI, adoption in the wild is outpacing curriculum re‑design by years.
Recent developments underline the shift:
- Edtech companies are racing to add "AI assistants" to every LMS and homework platform. Many of these tools are essentially institutionalised shortcut engines: they generate solutions, outlines, and even entire lab reports.
- Traditional homework businesses like Chegg have already suffered because students now bypass them in favour of free LLMs. That should be a warning sign: if your business model is “do the assignment for the student,” you are now competing with a near‑infinite supply of free automation.
- On the other side, there’s a boom in "AI detection" and online proctoring systems. These promise certainty they cannot deliver, and they create new privacy and bias problems while failing to restore trust.
Underlying all of this is a more existential question: What is the university credential actually certifying? In a world where text, code, and even data analysis can be outsourced to a machine, a degree needs to represent something much closer to capability under scrutiny: can this person reason, communicate, and adapt in a setting where shortcuts are constrained?
If institutions fail to make that shift, alternative credentialing systems—industry certificates, project portfolios, or rigorous bootcamps—will look increasingly attractive to employers who can no longer interpret a conventional transcript.
5. The European angle: AI, regulation, and the online learning dilemma
For Europe, this crisis intersects directly with regulation.
Under the EU AI Act, whose high‑risk obligations phase in through 2026, AI systems used to evaluate learning outcomes or to monitor students during exams are explicitly classed as "high‑risk" and subject to strict requirements on transparency, robustness, and human oversight. That’s a double‑edged sword. On one hand, it can help curb some of the worst ideas, such as fully automated grading of nuanced work. On the other, it may further incentivise institutions to pretend that AI isn’t already deeply embedded in informal student workflows.
European universities also operate in a context of strong GDPR enforcement and a privacy‑conscious public. That makes heavy surveillance (aggressive proctoring, keystroke monitoring, biometric verification) politically and legally difficult, especially in countries like Germany or Austria. So the "lock everything down" route—the instinctive US response in some cases—is not really viable here.
Then there is language and scale. Much of the LLM ecosystem is still optimised for English. Students in Central and Eastern Europe, or in smaller language communities, face a mixed picture: AI may be slightly less capable in their language, delaying the cheating wave a bit, but it is also less useful as a legitimate learning tool. Meanwhile, EU‑funded online programmes that were supposed to widen access—from rural Spain to the Balkans or the Baltics—are the very formats most vulnerable to AI‑based outsourcing of work.
In other words, Europe is simultaneously better protected against surveillance overreach and more exposed to the erosion of online learning, precisely in regions where flexibility and distance education were key to catching up.
6. Looking ahead: towards AI‑aware assessment, or quiet collapse
Over the next three to five years, universities have three broad paths—consciously chosen or drifted into.
AI denial. Keep existing assignments, publish vague policies against "unauthorised AI use," and rely on gut feeling plus unreliable detectors. This is the path of least resistance and leads directly to a slow, silent devaluation of grades and degrees.
AI‑aware redesign. This requires serious work but is the only sustainable route. It likely includes:
- More in‑class, supervised assessment: oral exams, whiteboard problem‑solving, timed writing, practical labs.
- Assignments that require process evidence: drafts, version histories, reflection on decisions, and live defence of projects (see the sketch after this list).
- Explicit teaching of AI literacy: when and how to use tools, how to check them, and where use is forbidden.
- Clear separation between tasks where AI is encouraged (e.g., brainstorming, language polishing) and those where it is banned (core reasoning in gateway courses).
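To make "version histories" concrete, here is a minimal sketch in Python. It assumes a hypothetical workflow, not one Johnson describes, in which students submit written work as a Git repository; the script summarises commit cadence so an instructor can see at a glance whether a draft grew over several days or landed in one bulk commit:

```python
# process_evidence.py — a minimal sketch, assuming students submit coursework
# as a Git repository (a hypothetical workflow, purely illustrative).
# Many small commits spread over days suggest a real writing process;
# one bulk commit the night before the deadline invites a conversation.

import subprocess
from collections import Counter

def drafting_summary(repo_path: str) -> None:
    """Print one line per commit (date plus change size), then a cadence summary."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--reverse",
         "--pretty=format:%h %ad", "--date=short", "--shortstat"],
        capture_output=True, text=True, check=True,
    ).stdout

    days = Counter()   # commits per calendar day
    current = None     # most recent "<hash> <date>" header seen
    for line in log.splitlines():
        line = line.strip()
        if not line:
            continue
        if "changed" in line:            # stat line, e.g. "3 files changed, 120 insertions(+)"
            print(f"  {current}: {line}")
        else:                            # commit header: "<hash> <date>"
            current = line
            days[line.split()[-1]] += 1

    print(f"\nActive writing days: {len(days)}; commits per day: {dict(days)}")

if __name__ == "__main__":
    drafting_summary(".")  # run inside a submission repository
```

A history like this proves nothing on its own (a single late commit may simply mean the student drafted elsewhere), but it shifts borderline conversations from accusation towards evidence about process.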
AI outsourcing at scale. Some institutions will lean into full AI integration, marketing "hyper‑efficient" learning with automated tutoring, grading, and content generation. The risk is that these become diploma mills with better branding, issuing certificates that employers quietly learn to discount.
For readers—whether students, parents, or academics—the key things to watch are:
- Policy clarity: Does your institution have concrete, course‑level rules on AI, or only high‑level slogans?
- Assessment mix: Are most high‑stakes grades coming from unsupervised essays and problem sets, or from work where real‑time performance matters?
- Support for teachers: Are instructors given time, training, and legal backing to redesign courses, or just told to "embrace AI"?
The biggest unresolved question is what happens to asynchronous online programmes. If institutions cannot find credible ways to ensure that the person enrolled is the one doing the work—without violating privacy—some forms of mass online education may simply lose their status as serious pathways to a degree.
7. The bottom line
LLMs did not just add another cheating tool; they broke the already‑fragile link between visible output and invisible effort. Unless universities rapidly move to AI‑aware assessment that foregrounds supervised performance, process, and genuine human interaction, we risk turning education into a mistrust economy where everyone is suspect and credentials mean less each year. The uncomfortable question for students and institutions alike is simple: what, exactly, do you want a degree to prove in the age of ChatGPT?