Reflection on AI in Education — The Blind Spots

Critical Pedagogy in the Digital Age

A country may distribute devices, tablets, or AI platforms widely, but unless it cultivates critical thinking, cultural grounding, and ethical judgment, the technology becomes an accelerator of confusion rather than understanding.

AI – The key is to keep it helpful. Pic – Australian Council for Educational Research


By Nandini Bhautoo

There is a famous short story by E.M. Forster, “The Machine Stops” (1909), that speaks uncannily to our present moment. In Forster’s imagined world, humans live underground, isolated in individual cells. All learning, entertainment, and human interaction are mediated by the Machine, which acts as government, religion, and collective brain. It offers comfort and convenience but gradually erodes human capacity.

The story becomes a myth for our age because, although it appears to be about a future technology, it is really about how infrastructure shapes consciousness. In Forster’s world, people communicate only through screens, instant lectures, and automated feeds. Everything is effortless, and yet that very ease becomes a form of confinement. Our relationship with Artificial Intelligence (AI) today mirrors this dynamic. We celebrate the promise of unlimited information and instant answers, but we rarely notice that ease can harden into habit, and habit into dependence. Architecture becomes destiny.

Forster traces a progression in his characters’ relationship to the Machine: they first use it, then consult it, then depend on it, then worship it, and finally forget that life ever existed without it. It is a sequence that feels increasingly familiar. When knowledge becomes fully mediated, critical thought atrophies. Students in the story “think ideas” without ever leaving their rooms. Culture becomes commentary on commentary. Physical reality is dismissed as irrelevant. The radical act becomes stepping outside. Predictably, the humanities die first, because they depend on lived experience, moral imagination, and embodied uncertainty. When the Machine eventually collapses, society collapses with it — not because of malice, but because humans have surrendered the habits of mind that would have allowed them to live without it. You can survive a machine breaking down; you cannot survive forgetting how to think.

This Forsterian anxiety increasingly shadows our own moment, especially when one examines current policy papers introducing AI into education — whether the government’s recent MyGPT initiative for schools or the Higher Education Commission’s proposals for AI in universities. The intent of these policies is good. They aim to normalise AI use, democratise access, reduce digital divides, and offer equal technological footing. But the core weakness is consistent across documents: they focus on tools rather than cognition. Most treat AI as if it were a library or a calculator — a neutral resource waiting to be used.

But AI is not inert. It is a thinking partner that shapes reasoning, narrative structure, intellectual style, and even the boundaries of what students consider acceptable thought. Current policy foregrounds infrastructure, access, and training in usage, but rarely discernment. Almost nowhere is there serious consideration of how AI may flatten originality, weaken epistemic stamina, reorganise academic inquiry, or subtly reshape the experience of thinking itself. The phenomenology of AI — the way it enters and restructures consciousness — remains largely unexamined.

This oversight carries significant risks. Higher education, if it proceeds along this path, may soon produce a generation of students who can gather information yet cannot interpret, synthesise, or generate an original idea from it. AI dependence is especially dangerous because it is invisible: AI makes everything feel correct, even when it is not. By focusing on technological access while ignoring cognitive formation, current policy overlooks how AI can accelerate shallow thinking, mask gaps in understanding, generate plausible but unfounded claims, and remove productive struggle — a core component of genuine learning. Students and teachers alike require training not only in how to use AI, but in how to critique it: recognising bias, interrogating sources, testing claims, and designing tasks that require reasoning beyond what a model can supply. These dimensions are missing from nearly every document.

The deeper blind spot, however, concerns the human mind itself. Educational philosophy has long understood that tools do not produce intelligence; they amplify whatever intelligence — or confusion — is already present. Yet most policy statements ignore intention, attention, epistemic humility, and the gradual formation of judgment. This same oversight appears in the government's MyGPT initiative in primary education. The focus is on access, not on developing critical thinking, research habits, media literacy, or the ability to recognise bias and uneven data quality. Providing children with AI-generated content without teaching them to interrogate it — against existing narratives, academic standards, or local knowledge — risks cultivating superficial learners. If young students come to treat AI as the origin of thought, the consequences for epistemology are severe. Instead of supplementing learning, AI may eclipse deeper intellectual engagement and local intellectual traditions.

AI feels deceptively simple, like a pen or calculator. But unlike a pen, it is not neutral. It structures the very act of inquiry: what questions we think to ask, how we frame them, what counts as a satisfactory answer, which sources appear, and what tonal norms emerge. In this sense, the tool is not merely used; it co-authors the cognitive experience. AI does not create wisdom. It magnifies whatever habits exist. A thoughtful student becomes more thoughtful; a superficial student becomes more superficial. A student lacking foundations receives an avalanche of content but no orientation. This is the paradox of every cognitive technology. Writing expanded memory but weakened mnemonic discipline. The printing press democratised knowledge but eroded oral traditions. Google made facts immediately available but hollowed out long-form reading. AI now accelerates thought but risks making reflection optional. Without grounding, the user becomes passively intelligent — knowledgeable through outsourcing rather than internal effort.

Technology itself never transforms society; human intention does. When students use AI, the outcomes depend on their purpose, humility, values, and prior intellectual habits. AI is like fire: if the inner structure is stable, it illuminates; if unstable, it destroys. This idea is ancient — echoed in the Upaniṣads — and equally affirmed by modern cognitive science: tools alter cognition only through the mental frameworks already present.

This has direct implications for policy. A country may distribute devices, tablets, or AI platforms widely, but unless it cultivates critical thinking, research discipline, epistemic humility, cultural grounding, and ethical judgment, the technology becomes an accelerator of confusion rather than understanding. It multiplies misinformation, substitutes automation for reflection, and becomes a hegemonic filter rather than an opening for inquiry. If the mind is untrained, AI will think on its behalf. If the mind is trained, AI becomes a powerful extension of thought.

What is missing from current frameworks is a recognition that AI literacy is not merely technical literacy. Students must learn how to think with AI, not simply how to operate it. Teachers must become mentors in discernment — in why we ask questions and how we interpret answers. And technology must be anchored within cultural depth, standing beside rather than above our intellectual traditions. Mauritius has rich epistemologies that AI should complement, not overshadow.

The most striking absence in current AI policy documents is the neglect of the arts and humanities. These fields are not optional add-ons; they are the intellectual disciplines that prevent technology from becoming unmoored. Humanities determine not just how we think, but what and why we think. Without philosophy, AI amplifies confusion; without ethics, harm; without history, amnesia; without literature, literalism; without art, sterility. The capacities AI cannot replicate — interpretive judgment, ethical imagination, emotional intelligence, symbolic reasoning — are precisely those cultivated by the humanities. Students need inner depth before they can handle outer abundance. And true AI literacy requires meta-literacy: an understanding of how knowledge is constructed, how authority is formed, how bias operates, and what qualifies as evidence.

Importantly, AI threatens the humanities precisely because the humanities threaten AI. They question assumptions, expose biases, protect human subjectivity, and preserve ambiguity — all inconvenient to a society increasingly obsessed with efficiency. Without them, AI education risks collapsing into mere workforce training, shrinking public discourse and weakening citizenship. The strongest international AI strategies already integrate ethics, philosophy, media literacy, and cultural competence, not as soft skills but as structural safeguards.

Three correctives follow: embed the humanities at the centre of AI integration, build a culturally grounded model of AI literacy, and avoid the pitfalls of purely techno-economic adoption. This is the essential corrective we must bear in mind when discussing AI in education.


Mauritius Times ePaper Friday 28 November 2025
