How Tech Can Move Us Beyond Rote Learning
EdTech opportunities to classify assessment questions according to Bloom's Taxonomy
Welcome to the 39th edition of The Discourse. In this edition, we explore the rot in education that is rote learning. What's a novel way to grade questions? And what EdTech opportunities could this transition open up?
If you’re new here, please subscribe and get insights about product, design, no-code delivered to your inbox every week.
Let's talk about rote learning
The education system has let us down, especially in India. And the cornerstone of that failure has been rote learning. We've all had education experiences that focused on memorizing facts and figures rather than understanding or applying concepts.
In the age of Google, do we really need to memorize things?
I remember when I was growing up, History was such a boring subject. All we had to do was memorize the dates of the Battle of Plassey or Waterloo. But in fact, it's such an interesting subject. I learned more about history by reading Sapiens and watching Crash Course history videos on YouTube.
If you think the situation is bad in school, it’s even worse in engineering colleges.
The examination system at our higher-ed institutions does not achieve its intended purpose. Most exam questions are based on rote learning. The predictability of the exam system erodes the incentive to study and absorb the course material well. - From the National Employability Report - Engineers 2019
What’s a good way to solve it?
If you break education down into two parts, you get training and assessment. To solve the problem of rote learning, you have to address both ends.
Because the incentives of assessment drive the instruction.
“Show me the incentive, and I will show you the outcome.” – Charlie Munger
The New Education Policy 2020, our first education reform in 34 years, calls out these concerns directly.
Student assessment will shift from summative, which primarily tests rote memorization skills, to a more regular and formative mode, which tests higher-order skills like analysis, critical thinking, and conceptual clarity.
Eashwari Nair writes about how the NEP 2020 is taking steps to replace rote learning with a more holistic approach.
While it’s good to see the NEP calling out the problem, the specifics of how are missing.
Let's take a look at one way we can address the gap.
Bloom's Taxonomy is a classification of learning outcomes that is used to structure curriculum, assessments, and activities.
The aim is a well-distributed assessment that covers everything from basic recall at the bottom of the taxonomy to the higher-order cognitive skills at the top.
This is not to say that memorization isn't important. It's good to have a strong base of memorization on which the student can build higher-order skills. But it should never be the primary focus of assessment.
Implications for education
This fundamentally changes how written papers are assessed: each exam question would be tagged with the appropriate level of Bloom's taxonomy.
What's the benefit of classifying the questions?
From the school's perspective, they get to see how the assessment questions are spread across all the levels. They will know when the questions are heavily biased towards memory-based recall and when they are more evenly distributed.
What’s the benefit for the student?
The student is given a report card of how well they are doing across these different levels. They can then use these insights to identify and improve their performance across the different skills for each subject.
Most assessment questions contain action verbs, e.g., list, compare, describe, explain, derive. Using these verbs, you can tag questions according to the taxonomy.
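A minimal sketch of what that verb-based tagging could look like. The verb lists below are illustrative examples, not a standard mapping (as the Swansea research discussed next shows, no such standard exists):

```python
# Illustrative verb lists per Bloom's level; real lists vary by institution.
BLOOM_VERBS = {
    "Remember": {"list", "define", "name", "recall", "state"},
    "Understand": {"describe", "explain", "summarize", "classify"},
    "Apply": {"solve", "demonstrate", "use", "calculate"},
    "Analyze": {"compare", "contrast", "differentiate", "examine"},
    "Evaluate": {"justify", "critique", "assess", "argue"},
    "Create": {"design", "derive", "construct", "formulate"},
}

def tag_question(question: str) -> str:
    """Return the first Bloom's level whose verb appears in the question."""
    words = {w.strip(".,?!").lower() for w in question.split()}
    for level, verbs in BLOOM_VERBS.items():
        if words & verbs:
            return level
    return "Untagged"

print(tag_question("Compare the Battle of Plassey and Waterloo."))   # Analyze
print(tag_question("List the dates of the Battle of Plassey."))      # Remember
```

Even this toy version surfaces the core problem: the output depends entirely on which verb list you start from.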
But tagging by individual teachers or institutes can create inconsistencies, as research conducted by Swansea University in 2020 showed:
A 2020 analysis showed that these verb lists showed no consistency between educational institutions, and thus learning outcomes that were mapped to one level of the hierarchy at one educational institution could be mapped to different levels at another institution.
This creates an opportunity to create a standard assessment platform that automates the tagging of questions by a machine learning model to provide consistency while saving time and effort.
I ran a proof of concept a few years ago, classifying questions with ML models to identify the correct Bloom's level using 1,200 data points. It was 87% accurate. So the tech is there.
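To give a flavour of the approach (this is a generic text-classification sketch using scikit-learn, not the actual POC; the questions and labels below are toy data, not the 1,200-point training set):

```python
# Sketch: classify exam questions into Bloom's levels with TF-IDF features
# and logistic regression. Toy training data for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

questions = [
    "List the dates of the Battle of Plassey.",
    "Define photosynthesis.",
    "Explain why the sky appears blue.",
    "Summarize the causes of World War I.",
    "Compare monolithic and microservice architectures.",
    "Contrast the French and American revolutions.",
]
labels = ["Remember", "Remember", "Understand",
          "Understand", "Analyze", "Analyze"]

# TF-IDF turns each question into a weighted bag-of-words vector;
# logistic regression learns which words signal which level.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(questions, labels)

prediction = model.predict(["Explain the water cycle."])[0]
print(prediction)
```

With a gold-standard dataset of real tagged questions in place of the toy examples, this is the shape of pipeline that produced the 87% accuracy.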
While the technology works, the main challenge for the ML model will be the quality of the training data. As they say, garbage-in, garbage-out. This data set has to be a gold standard.
The next big challenge is adoption. Let's face it: educational institutions don't like overhead, and teachers' incentives are not aligned with taking on additional workload. A transformation like this faces serious adoption and change-management challenges.
Long term, we want education to be completely experiential and adaptive to the needs of individual students. Think Duolingo, but for all subjects.
But until then, Bloom's taxonomy can help us cross the bridge from rote memorization to higher-order thinking skills.
New Education Policy 2020 - Careratings
Design for Semi-Automatic Generation of Question Paper from A Semantically Tagged Distributed Question Repository
Exam Questions Classification Based on Bloom’s Taxonomy Cognitive Level using Classifiers Combination
Analyzing the Cognitive Level of Classroom Questions Using Machine Learning Techniques
A Pragmatic Master List of Action Verbs for Bloom's Taxonomy - Swansea University
Thanks to Sreedhar for providing feedback on early drafts of this piece.
📘 Read of the week: The Power of Incentives: The Hidden Forces That Shape Behavior - Farnam Street (11 min)
That's it for today, thanks for reading!
What's your experience with rote learning, and do you think this approach would solve the problem? Reply or comment below, and I'll be happy to respond. Give feedback and vote on the next topic here.
Talk to you soon!
Press the ♥️ button if you liked this edition.