The debate around generative AI in education oscillates between two extremes: on one side, promises of efficiency; on the other, concerns about gradual skill erosion. For decision-makers at universities, academies, and continuing education institutions, however, this binary view falls short. The real strategic lever lies in an often overlooked phenomenon: newskilling.
Current research shows that the nature of human-AI interaction is decisive in determining whether learners develop new competencies or lose existing ones. This presents education leaders with a central design challenge: How can learning environments be created that deliberately promote newskilling?
The Spectrum of Competency Development in the AI Era
When students or employees work with generative AI, different usage patterns emerge with vastly different effects on competency development. Researchers at Harvard Business School have identified three characteristic modes:
- Self-Automators delegate problem-solving entirely to AI without critically examining results. This risks stagnation or even decline of existing skills.
- Centaurs use AI selectively for well-defined tasks while retaining control over strategic decisions. This mode promotes traditional upskilling.
- Cyborgs continuously integrate AI as a thinking partner in their work processes. In doing so, they develop qualitatively new competencies – true newskilling.
For educational institutions, this differentiation means that simply using AI tools does not guarantee learning progress. What matters is the pedagogical framework that encourages certain interaction patterns while preventing others.
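As a rough illustration, the three modes could in principle be inferred from interaction logs. The sketch below is a hypothetical heuristic: the mode names come from the research cited above, but the metrics and thresholds are illustrative assumptions, not empirical values.

```python
def classify_interaction_mode(acceptance_rate: float,
                              revision_rate: float,
                              ai_consulted_share: float) -> str:
    """Heuristic sketch mapping simple interaction metrics to the
    three modes. All thresholds are illustrative assumptions.

    acceptance_rate    -- share of AI outputs adopted without edits
    revision_rate      -- share of AI outputs critically revised
    ai_consulted_share -- share of work steps where AI is consulted
    """
    if acceptance_rate > 0.8 and revision_rate < 0.2:
        # Wholesale delegation without critical examination
        return "self-automator"
    if ai_consulted_share > 0.7 and revision_rate >= 0.5:
        # AI continuously integrated as a thinking partner
        return "cyborg"
    # Selective, task-specific use under human control
    return "centaur"
```

Even such a crude classifier would give instructors a first signal about which interaction patterns dominate in a course.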
Understanding Newskilling: More Than Technical Operation
The term newskilling describes an emergent learning process that arises through interaction with generative AI systems. At its core, it involves two dimensions that must be developed together:
Cognitive enhancement encompasses the ability to purposefully integrate AI systems into problem-solving processes. This ranges from understanding how the underlying algorithms operate, through effective prompting, to the critical validation of AI outputs. This instrumental-technical competency forms the foundation.
Metacognitive self-regulation goes beyond this. It describes the ability to observe, evaluate, and deliberately steer one's own thinking and learning in dialogue with AI. This is the core of what is termed AI Leadership: leveraging AI potential without surrendering human agency.
Both dimensions are interdependent. Without technical competency, interaction remains superficial. Without metacognitive reflection, there is a risk that thinking processes are outsourced to the machine before they have been internalized.
Strategic Implications for Educational Institutions
Establishing newskilling requires more than providing AI tools. Education leaders face the task of designing learning environments that systematically promote productive human-AI interaction.
A central approach is intelligent scaffolding: AI-supported learning aids that are gradually withdrawn as competency increases. This creates space for higher-order cognitive processes such as critical reflection and creative problem-solving without overwhelming learners.
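The fading logic behind intelligent scaffolding can be made concrete. The following is a minimal sketch, assuming a mastery score between 0.0 and 1.0 derived from assessment data; the support levels and thresholds are illustrative assumptions, not a prescribed model.

```python
def scaffold_level(mastery: float) -> str:
    """Sketch of fading scaffolding: as demonstrated mastery
    (0.0-1.0) grows, AI support is gradually withdrawn.
    Levels and thresholds are illustrative assumptions."""
    if mastery < 0.3:
        return "worked example"        # AI models the full solution path
    if mastery < 0.6:
        return "targeted hints"        # AI prompts only the next step
    if mastery < 0.85:
        return "reflection questions"  # AI asks, the learner solves
    return "no scaffold"               # learner works independently
```

The point of such a rule is pedagogical, not technical: support is tied to evidence of competency, so higher-order work is demanded exactly when the learner can sustain it.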
At the same time, the role of instructors is fundamentally changing. They become orchestrators of hybrid collaboration, moderating the interaction between humans and machines. In concrete terms, this means:
- Using data to identify where learners struggle in AI interaction
- Designing teaching and learning settings that demand technical judgment
- Being transparent about their own AI use and serving as role models
- Actively guiding and coaching metacognitive reflection
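The first of these tasks, using data to find struggle points, can be sketched with a few lines of analysis. The event format `(topic, resolved)` is a hypothetical logging schema, not an existing LMS export; thresholds are illustrative.

```python
from collections import Counter

def struggle_hotspots(events, min_attempts=3):
    """Sketch: flag topics where learners repeatedly re-prompt the
    AI tutor without reaching a resolved state. 'events' is a list
    of (topic, resolved) pairs -- a hypothetical logging schema."""
    attempts = Counter()
    resolved = Counter()
    for topic, ok in events:
        attempts[topic] += 1
        if ok:
            resolved[topic] += 1
    # Flag topics with enough attempts but a low resolution rate
    return [t for t, n in attempts.items()
            if n >= min_attempts and resolved[t] / n < 0.5]
```

An instructor reviewing such a hotspot list can target exactly those points in class where AI interaction alone is not carrying learners forward.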
This expanded role requires appropriate infrastructure. AI systems that exist in isolation alongside regular instruction can hardly achieve the necessary pedagogical integration.
AI Tutors as Enablers of Structured Newskilling
This is where the strategic value of AI tutors directly integrated into existing learning management systems becomes apparent. An AI tutor embedded in Moodle enables instructors to deliberately orchestrate human-AI interaction rather than leaving it to chance.
The benefits of such integration operate on multiple levels: Learners receive structured access to AI support tailored to their specific course context. Instructors maintain oversight and can intervene when necessary. And the institution gains insights into which interaction patterns occur in different learning situations.
An AI tutor available around the clock as a learning companion can fulfill various functions: answering comprehension questions, prompting reflection, providing feedback on interim results, or supporting the structuring of complex tasks. The crucial factor is that this support is anchored in the overall pedagogical concept.
From Risk Debate to Design Challenge
The current discussion around deskilling and skill skipping points to real risks. When AI results are adopted without understanding the underlying thought processes and solution paths, sustainable learning fails to occur. These dangers do not disappear through bans or ignorance.
For education leaders, however, this is not grounds for passivity but rather a clear design challenge. The question is not whether AI will be used – that is already happening. The question is how interaction can be designed to promote newskilling rather than cause deskilling.
The key lies in the deliberate orchestration of human-AI collaboration. Integrated AI tutors offer a practical starting point: They enable controlled interaction, pedagogical embedding, and continuous guidance by instructors.
Universities and continuing education institutions that embrace this design challenge position themselves not only for the immediate challenges of the AI transformation; they also prepare their learners for a working world in which proficient collaboration with AI systems becomes a core competency. Newskilling is therefore not an option but a strategic necessity.