It's unavoidable. 86% of students are using AI for their studies. 57% of universities see AI in education as a strategic priority. We are excited about the possibilities of AI, but this is not just a question of how the technology gets applied.
From adaptive learning platforms to AI-driven skills assessment, the technology is entering classrooms, lecture halls, and training programmes at remarkable speed. Yet a UNESCO report released this year reminds us of something vital: AI in education is not just about technology. It’s about ideology.
UNESCO's report AI and the Future of Education sets out a framework of six imaginaries, drawing on the latest research and forward-looking perspectives. Imaginaries are the big, competing stories, visions, or ideologies that people use to imagine the future of AI in education.
They aren’t predictions. They’re not policies.
They're narratives: mental models that shape how researchers, policymakers, educators, and technologists talk about AI's role in schools and universities.
Think of them as lenses: each imaginary highlights certain opportunities and risks while downplaying others. They're powerful because the way we imagine AI directly influences the way we design and implement it. The report highlights six imaginaries:
1. The Utopian: AI for Democratized Learning
The vision:
AI creates personalised, equitable, and inclusive education. Systems tailor learning to individual strengths, weaknesses, and preferences, while providing adaptive pathways and wellbeing monitoring. Research suggests that personal AI assistants linked to broader educational ecosystems could enable learners to set their own goals, track progress, and receive tailored interventions.
Arizona State University has piloted adaptive AI platforms that provide personalised study paths for thousands of students. Early results show improved retention and engagement, particularly for first-generation learners. Similar initiatives in Europe link AI tutors with mental health dashboards, giving educators early warnings when students disengage.
Unilever has tested AI-driven career guidance platforms that analyse employee skills against market trends, recommending reskilling and upskilling opportunities. Employees report feeling empowered to design their own career trajectories.
Risks and critiques:
Access remains unequal: students in rural or underfunded schools may lack reliable devices or connectivity.
Without inclusive design, personalisation algorithms may embed biases that disadvantage already marginalised learners.
Future vignette (2035):
Every student has a "learning cloud", a personal AI companion that curates educational resources across their lifetime. At 16, it helps a student choose STEM electives aligned with their strengths. At 22, it guides them through internships, and at 35, it recommends reskilling for a green energy career. The dream: education that adapts seamlessly to life. The risk: dependence on corporate-owned ecosystems that monetise every learning step.
2. The Perfect Educator: AI as Autonomous Teacher
The vision:
This imaginary sees AI as a near-perfect educator — a tireless tutor, evaluator, and lecturer capable of delivering education with little or no human intervention. It promises consistency, efficiency, and instant feedback at scale.
At Tsinghua University, pilot AI teaching assistants already grade essays, answer student questions via chatbots, and provide personalised resources. Students in large courses report faster feedback than they would from human lecturers.
PwC has trialled AI tutors for entry-level accounting trainees. These systems deliver instant answers to technical questions, guide learners through practice cases, and free senior staff from repetitive training tasks.
Risks and critiques:
Over-automation risks reducing education to transactional exchanges.
AI lacks emotional intelligence, cultural sensitivity, and the ability to mentor.
There is a danger of “teacher replacement” narratives undermining the dignity and professional agency of educators.
Future vignette (2035):
In a fully autonomous classroom, 40 students sit in front of personalised learning dashboards. Their AI tutor detects confusion, adjusts lessons, and provides feedback instantly. Yet when a student struggles with anxiety or feels unmotivated, the system offers only generic coping prompts. Without human presence, learning risks becoming efficient but soulless. The likely future is hybrid: AI takes over repetitive tasks, while humans focus on mentorship and community.
3. The Techno-Solutionist: AI as the Fix
The vision:
Techno-solutionism assumes that technology, if deployed widely enough, can solve the challenges of access, quality, and scale in education. AI is portrayed as the ultimate efficiency tool.
MOOCs (Massive Open Online Courses) have been revitalised with AI. Coursera now uses AI to auto-generate quizzes, personalise content sequencing, and provide language translation — serving millions simultaneously.
Amazon employs AI to deliver real-time training to warehouse employees via handheld devices. The system pushes micro-learning modules at scale, ensuring compliance and efficiency across a global workforce.
Risks and critiques:
Techno-solutionism often treats education as a logistics problem rather than a human and social process.
It risks “scaling mediocrity,” where efficiency is prioritised over depth of learning.
It neglects the structural roots of inequality, such as underfunded schools or systemic discrimination.
Future vignette (2035):
A national government rolls out an AI-powered learning system for all secondary schools, promising equal access for millions. Test scores rise in the short term, but creativity and critical thinking decline as students adapt to rigid, standardised AI-driven assessments. The promise of “scale” risks replacing quality with uniformity.
4. The Cyberlibertarian: AI Without Regulation
The vision:
Cyberlibertarianism imagines an education market where AI flourishes free from government regulation. Companies innovate rapidly, offering AI tutors, assessment platforms, and career guidance without oversight. The assumption: competition produces the best outcomes.
Start-ups offer AI-based admissions counsellors that promise to optimise applications to elite universities. These services thrive in deregulated environments, but concerns arise around data privacy and fairness.
Fast-growing tech firms use unregulated AI platforms to train staff cheaply. The lack of oversight accelerates adoption but raises questions about quality, bias, and exploitation of learner data.
Risks and critiques:
Unregulated AI risks reinforcing inequality by privileging those who can afford premium tools.
Without data protection, student and employee information can be misused or commodified.
Accountability is weak; errors or biases may go unchecked.
Future vignette (2035):
Global tech giants control most of the education market. Learners subscribe to AI tutors owned by corporations, their learning histories monetised for targeted advertising and recruitment. Governments, having ceded oversight, struggle to regain control. Education risks becoming a commodity rather than a public good.
5. The Dystopian: AI as Surveillance
The vision:
In this imaginary, AI is used to monitor, control, and discipline learners. Education becomes less about growth and more about compliance.
Chinese universities have tested AI systems that monitor classroom attention via facial recognition and track library attendance. Students report feeling constantly watched, with reduced willingness to experiment or challenge authority.
Some companies use AI to track keystrokes, monitor webcam activity during training, and assess engagement through micro-expressions. While marketed as productivity tools, employees often experience them as surveillance.
Risks and critiques:
Creates a climate of mistrust.
Undermines creativity and autonomy.
Risks mental health consequences, from anxiety to self-censorship.
Future vignette (2035):
In a “smart classroom,” every student wears biometric sensors. Heart rate, attention span, and micro-expressions are fed into a dashboard for administrators. The system flags “low engagement” students for intervention. While marketed as early support, the effect is chilling: learners adapt by performing compliance, not by pursuing genuine curiosity.
6. The Ecological Warning: AI’s Energy Costs
The vision:
AI offers immense power — but at immense ecological cost. Large models consume vast amounts of electricity to train and operate, straining institutions’ sustainability commitments.
The University of Cambridge has begun calculating the carbon footprint of its AI research and teaching tools, recognising that “green AI” must become a strategic priority.
Global corporations adopting AI-powered training must now account for energy-intensive data centres in ESG reporting. Some invest in carbon offsets, but this remains a stopgap.
Risks and critiques:
AI may undermine universities’ climate pledges.
Greenwashing — claims of sustainability without substantive change.
Equity issue: energy-intensive tools may only be affordable to wealthy institutions.
Future vignette (2035):
International regulations require universities and corporations to report AI energy consumption alongside emissions. Institutions unable to meet “green learning” standards face reputational and financial penalties. Sustainability becomes a competitive advantage in AI adoption.
Beyond the Six Imaginaries: The Future of AI and Education
Again, the imaginaries are not predictions; they are lenses. In reality, education's future will contain elements of each. Looking ahead, several trends stand out:
AI-powered lifelong learning: Personal AI “learning companions” guide careers, reskilling, and wellbeing throughout life.
Immersive education: AI + AR/VR create experiential learning, from medical simulations to global teamwork exercises.
Continuous assessment: Exams may fade as AI enables ongoing evaluation across all stages of learning.
Global inequality risks: Wealthy institutions may accelerate, while others fall behind, creating a new education divide.
Ethical charters: Universities and corporates will adopt AI ethics charters, balancing innovation with values.
A Cautious Optimism
At Navilo, we believe the answer is not to accept any single imaginary at face value. Instead, leaders must cultivate cautious optimism: embracing AI’s potential while shaping its use with evidence, equity, and sustainability.
Evidence over hype: Prioritise research-driven pilots with measurable outcomes.
Human-centred design: Keep teachers, mentors, and facilitators at the heart of learning.
Ethical foresight: Make sustainability, privacy, and inclusion core to AI adoption.
AI will not simply “fix” education. It will magnify the systems and values we already hold. University leaders and corporate L&D professionals must treat AI not as a neutral tool but as a mirror reflecting priorities: equity or exclusion, empowerment or control, innovation or exploitation.
The question is not whether AI will shape education; it is already doing so. The question is: whose values will it amplify?
We encourage university leaders and corporate learning professionals to read UNESCO’s AI and the Future of Education report in full. More importantly, we invite you to engage in dialogue, across sectors, across borders, and across disciplines, about how we ensure AI in education serves as a force for equity, dignity, and sustainability.
Because AI is not the answer. It is the amplifier.
And the future of education depends on what, and whose, values we choose to amplify.