As we move towards adopting outcomes-based education, there is growing concern among all stakeholders about what students are learning at school. The National and State Achievement Surveys (henceforth NAS/SAS) were therefore instituted with the sole purpose of measuring what students know and can do, assessing their performance at grades 3, 5, and 8 against the expected learning outcomes (LOs) at these stages of school.

The results of the survey provide information on students’ learning levels, which also serves as an indicator of the health of the education system.

Currently, the National/State Achievement Surveys are sample-based, representing all types of schools: government, aided, and private. Besides students’ performance data, the survey collects other relevant data such as school environment, teaching methods, and student demographics. The data helps the government identify learning gaps wherever they exist, across school, district, state, and national levels. This information aids educational functionaries in making important decisions about remedial measures, resource allocation to schools in need, and professional development for teachers, as well as in informed policymaking. It is therefore extremely important that the data captured in the survey is a true reflection of ground realities so that these decisions are not misguided.

Ideally, a survey should be administered in a manner that yields responses that are as candid as possible, reflecting the status quo. However, owing to the logistical and administrative constraints that such large-scale surveys entail, the dates of the NAS, as well as the schools and students involved, are public knowledge.

As a result, there is a propensity among all stakeholders to ‘prepare’ students to obtain better results. The current practices of announcing the learning outcomes to be tested, creating ‘workbooks’, ‘worksheets’, and ‘question banks’, and arranging ‘tutorials’ are deleterious: they encourage teaching to the test, promote rote learning, and produce inflated scores that neither reflect students’ actual competencies nor capture the effectiveness of the education system, thereby defeating the very purpose of conducting the survey.

It is worthwhile to examine why such practices have gained prevalence. For the most part, stakeholder education is lacking. Very few people are aware of the purpose of the surveys, so much so that they are often referred to as exams, bringing with them the pressure to perform, not just for students and teachers but also for district and school leaders. Besides the social desirability factor, there is an equally strong factor of apprehension and anxiety at play. Teachers, school heads, and other functionaries fear the public shaming and punitive action they might face if the survey results fall below expectations. As a result, there is an inclination to ensure their students and schools do well by drilling them on questions related to the specific learning outcomes to be tested.

Similarly, a misguided district or state could attempt to ‘improve’ scores on the NAS/SAS by directing schools to ‘prepare’ students using materials created for this specific purpose. Unfortunately, the apparent ‘gains’ that such quick-fix measures produce are akin to creating Potemkin villages. They create a false impression of high performance and prevent the timely interventions that could address the learning gaps that exist.

Educators and others concerned must ensure these practices are nipped in the bud before they become entrenched in the system and turn into an evil that is hard to expunge. While spreading awareness about the purpose of the NAS is an obvious step in this direction, perhaps a more effective measure would be to reiterate that the data will be used to identify schools in need of developmental support, not punitive action. This would eliminate the fear of humiliation that drives teachers, school leaders, and district officials to adopt coaching practices to improve their scores. When reassured that under-achievement on the test will not be seen as their personal ‘failure’ but will instead bring them the help and support they need to improve, they are likely to ensure the survey results are a more accurate representation of students’ learning levels.

This approach will also help the government at various levels to be better informed about the actual status of student learning and to make evidence-based decisions about how learning levels can be improved. It can save a great deal of time, effort, and resources by ensuring the right support is extended to the places and people who need it most.

Supporting low-performing districts and schools with capacity-building initiatives on teaching methods, formative assessments, and creating equitable learning cultures can truly boost learning levels over time and improve systemic health in a sustainable way.

(The writer teaches at Azim Premji University, Bengaluru.)