Thursday, 16 April 2026

What do we measure and assess in the age of AI and digitalisation?

AI Has Already Changed Assessment—Higher Education Just Hasn’t Caught Up

AI has already transformed how learners think, solve problems, and produce knowledge—but higher education assessment systems remain largely unchanged.

A clear gap is emerging between how students actually work and how they are evaluated.

Students today use AI to accelerate learning, iterate rapidly, and access knowledge beyond formal curricula. In many cases, they are becoming more adaptive and efficient than the systems designed to assess them—yet they remain discreet, largely because current models still penalize or misunderstand AI-assisted work.

This creates a fundamental misalignment: we claim to value critical thinking and real-world readiness, yet continue to assess controlled, decontextualized outputs.

Frameworks such as Learning Analytics and Computerized Adaptive Testing show that more dynamic, process-oriented assessment is possible, while Automated Scoring Systems highlight both the scalability and risks of current approaches. At the policy level, OECD and UNESCO continue to call for competency-based, ethical, and transparent systems.

Yet institutions remain slow to adapt.

If this continues, assessment risks becoming increasingly performative, while students become more strategic—and less transparent.

Final thought: The real issue is not that students are using AI, but that they may already be learning and evolving faster than the systems meant to measure them.

#Assessment #HigherEducation #AI #EdTech #FutureOfLearning


Monday, 13 April 2026

Using recorded PODCAST instruction instead of MOCK exams

Beyond mock exams in a university context: no longer a tenable method of assessment in the age of AI and digitalisation!

As an educator in multilingual education and AI-assisted pedagogies, I am challenging the long-standing memorization paradigm that has dominated classrooms. Mock exams belong to an earlier era of practice, rooted in the standardisation and industrial paradigm that treated education like assembly-line production. Here's why I reject them and embrace dynamic alternatives.

Why Mock Exams Fail

Mock exams promote rote cramming, turning students into prisoners of repetition. They kill cognitive dynamism and ignore how multilingual skills, such as contextual fluency and cultural adaptation, demand real-world application rather than one-shot tests from an outdated industrial model.



My Approach: Progressive, In-Class or Remote Mastery

I favor continuous activities and mini-projects that build competencies situationally. Students apply multilingual skills through collaborative tasks, simulations, and AI-enhanced workflows, fostering deep, layered understanding.

Innovation: NotebookLM Audio Guides

Today, I used NotebookLM to create an audio overview of the final exam structure: an elaborate narrative linking each question back to the full lecture network. Students engage multisensorially, listening, associating content (e.g., the module on multilingual practices and ethics), and reflecting. This immersive preparation boosts retention without mocks.

In multilingual education's AI era, such tools augment cognition, reducing anxiety and sparking motivation. Yet existing structures and policies, especially those plagued by micromanagement philosophies and various control-mania practices, do not allow these illuminating methods to be nurtured and implemented.
