The world we inhabit today is changing rapidly and shedding old ways of doing things. It has become necessary to equip future generations with the learning and skills that will help them not just survive but meet the challenges of the knowledge century with rigour.
Often pegged as the next superpower, India will have to be ready with a workforce that can think outside the box. It will not be enough for individuals to be good at their chosen vocations; they must also be able to think, reason, and innovate.
But is our education system equipped to groom such individuals? Probably not.
The World Bank reports that since 2001, India's Education for All programme - Sarva Shiksha Abhiyan (SSA) - has brought nearly 20 million children into primary school. Yet India's performance in the Programme for International Student Assessment (PISA) in 2009 showed that while we may have succeeded in bringing millions of children to school, their learning levels remain abysmal.
As a country that has pledged to achieve the Sustainable Development Goals, we are obliged to improve the quality of learning. Like many other countries, India has embarked on the journey to ensure that all its children acquire quality education.
While achieving quality learning appears daunting if one considers the sheer volume of 200 million students in our system, it is not impossible if a planned approach is taken. India seems aware that tracking student learning is important. In the annual budget speech this year, Finance Minister Arun Jaitley proposed to introduce a system of measuring annual learning outcomes in schools.
But does all measurement of student learning result in improving learning outcomes? The models of assessment used so far do not seem to have brought any discernible shift towards improving learning quality. Further, they seem to be focussed on monitoring how many children passed, or how much they scored, but not on diagnosing what or how well the students have learnt.
We have to bear in mind that the point of an assessment exercise is not just to rank students or slot them into different grades but to enhance the quality of learning outcomes.
Large-scale assessments of student learning can be effective in improving learning quality if certain practices are followed:
1. Pay attention to the quality of questions used in assessments
Most questions in large-scale assessments - be it the board exams or otherwise - promote rote learning, which only requires students to reproduce what teachers have taught them, without questioning, thinking or reasoning.
For example, in a national survey, the question “Which part of a carrot plant is eaten?” led 81 percent of class 5 students to choose “root” from the answer choices of “leaves, root and stem”. It is troubling that no one seemed to question the validity of the question in the first place; the students did not think to ask “eaten by whom - man or animal?”, and 81 percent simply gave the expected answer. Further, no one administering the survey or using its results seemed to notice that the question is technically incorrect: all parts of a carrot plant are eaten, including the leaves, which are a delicacy in many cuisines across the world. This is a typical example of how rote questions promote rote learning in a system.
To understand the quality of student learning, questions need to go beyond the rote to check whether students have understood the concepts and can apply them and be creative.
Further, the focus of such questions should be not only on checking whether students can give the right answers, but also on identifying the types of wrong answers, in order to surface the misconceptions students hold and the common errors they make.
2. Use credible test administration protocols to collect reliable learning outcomes data
Whenever large-scale data are collected for policy decisions, it is worth understanding how a system will behave in such an exercise. In 1976, American social scientist Donald T. Campbell published a paper - “Assessing the Impact of Planned Social Change” - in which he described the effect of quantitative measurements on decision-making processes. His observation came to be known as Campbell’s law: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”
In other words, the authenticity and credibility of data is often influenced by the stakes attached to the use of findings from such data. In a country where some teachers are known to dictate answers during board examinations in schools, the risk of data corruption in any large-scale assessment which gathers information for higher-ups (read policymakers) is real.
It is important to differentiate between the protocols used to collect learning outcomes data that will be aggregated for policy purposes and those used by teachers to understand the learning levels of students in their own classrooms. For data that feeds into policy decisions, trained external evaluators who can administer tests and gather credible data are critical.
3. Ensure that the analysis goes beyond ranks and averages to provide information meaningful for change and dialogue
The data from diagnostic assessments are mines of information on what children think, learn and assume. Most often no meaningful analysis is done on such data beyond calculating student marks, aggregating and ranking them at school, district, state or national level. Even in a few rare cases when advanced techniques are used, the purpose seems to be only to put the marks of different tests on a common scale for comparison, rather than to extract information regarding the learning gaps at different ability levels of students.
Analysis of the data has to show the way for policy: where resources need to be spent, and which skills and concepts teachers need to be trained on. Teachers need to know where their students hold misconceptions and make errors, and students should get clarity on the strengths and weaknesses of their performance. If diagnostic assessments bring out such actionable insights, the resulting feedback loop will pave the way for each stakeholder segment to address them.
4. Disseminate widely and encourage use of findings in all stages from policy to classroom
We have to ask ourselves whether the information is actually reaching the various stakeholders, and how we are disseminating it. The findings from a large-scale assessment exercise shouldn’t be allowed to languish in a report that a select few have access to. Technology should be leveraged to ensure that this information reaches every classroom, every teacher, and every stakeholder in the system in the manner most useful to them. Smart portals can present data in a user-friendly way while keeping its privacy intact. Policymakers, education administrators, researchers, teachers and students should be given access to whatever kind of data they need.
For the government, the intent to better understand student outcomes is commendable. But unless care is taken to introduce appropriate assessments that diagnose learning issues and provide insights that can be acted upon, measuring learning outcomes will remain limited in its capacity to improve learning quality.
As a society, all of us also need to shift the focus from marks to actual learning. We have to understand that an integral part of learning is the ability to not just recall a concept but also understand and apply it in our day-to-day lives. Unless we do this, we cannot say that our children are getting quality education.
Vyjayanthi Sankar is a leading education, assessment and management expert. A Fulbright Humphrey fellow and an Ashoka fellow, Vyjayanthi regularly consults for the Brookings Institution, The World Bank, UNICEF and the Learning Metrics Task Force. She currently heads the Centre for Science of Student Learning, a Delhi-based education research organisation.