What the External Evaluation Exams Are (Not) Telling Us

The external evaluation exams after 7th and 12th grade are still not measuring the competencies that the education system claims to be developing
Author: Neli Koleva, Head of Strategic Partnerships at Teach For Bulgaria

For thousands of students across Bulgaria, late spring marks the start of exam season, and two exams loom largest: the national external evaluation exams at the end of 7th and 12th grade. They have become a rite of passage, and thousands of futures depend on the results. The significance of these exams rests, to a large extent, on the shared understanding that they are a good indicator of the skills developed at school and hence of the quality of education.
A report published by the OECD in March calls this understanding into question. Its main criticism is that the exams are not aligned with the competency-based approach that our education system claims to have adopted. In other words, the standardized tests do not measure the skills students are supposed to acquire, such as critical thinking and emotional intelligence; they measure the students’ ability to memorize information.
The stakes are especially high for the exams after 7th and 12th grade. Students, their parents, and teachers are all deeply invested in good results, and principals prioritize them too. This is not surprising: the results are publicly available and shape a school’s reputation, which in turn affects the number of newly enrolled students and the amount of public funding.

Consequently, key actors in the education system chase the coveted good results, even though the exams do not test skills essential for professional and personal development, but rather the ability to memorize information that is easily available anyway.
Design Flaws
The national external evaluation exams should serve a single purpose: to provide data about the effectiveness of the education system. Students’ educational progression should not in any way depend on them. In Bulgaria, however, the national external evaluation exams (especially the ones after 7th grade) serve both purposes, which puts additional pressure on the experts who design them.

The selective nature of the exams puts serious pressure on the students and encourages the practice of teaching to the test. This is a problem because the tests do not reflect the competency-based approach adopted by the education system.
“At the moment, the external evaluation exams are primarily measuring students’ comprehension of the curricula. The most basic comprehension – which author wrote which novel,” commented Ivelina Pashova, Head of School Training at Teach For Bulgaria.
This means that the education system does not have any data on the extent to which students develop crucial skills such as “the ability to identify important information and to create a high-quality original text.”
The external evaluation exams do not measure progress made between grades; they only provide a snapshot of the current situation. This limits schools’ ability to evaluate the impact of the education they offer. Even if they wanted to improve the quality of that education, they would not know where to start, because they have no data.
“Even if you tried to compare the results of these exams from 7th and 12th grade, you wouldn’t get any information about the added value of the education that these students have received, because the evaluation standards of the two exams are incompatible,” added Ivelina Pashova.
Identity Crisis in 7th Grade
The leading recommendation of the OECD report is to separate the high school entry exam from the external evaluation exam at the end of 7th grade. This would allow each exam to be better suited to its distinct purpose.

When it comes to the high school entry exam, the Ministry of Education and Science should communicate its purpose very clearly.
“Is the goal of this entry exam to select the students with the highest academic achievements in order to place them in the most prestigious high schools, or is it a tool to help students pick a high school based on their skills and interests?” the report asks.
Given that these entry exams identify the top 5% of the most academically advanced students, the authors of the report suggest that the Ministry of Education and Science focus on the latter goal.
If helping students pick the right school became the goal of the entry exam, the ministry would have an opportunity to tackle other problems in the education system. One such problem is “horizontal stratification”: students choose a school not based on their interests, but based on their socioeconomic background.
What Are We Actually Testing?
According to the OECD report, the questions in the external evaluation exams are not aligned with the competency-based approach adopted by the education system. They mostly measure students’ ability to memorize large amounts of information.
For example, to get a good score on the open-ended questions, students do not need to construct arguments; they only need to quote specific memorized phrases.
Modifying the exams to measure complex skills and competencies requires a completely different design approach. The first step would be to formulate criteria and design questions that can actually measure students’ skills. Such criteria would guide both the wording of the questions and their consistent evaluation.
“The questions should be validated by piloting them before the official exams,” pointed out Ivelina Pashova.
Since 2012, all PISA tests, for example, have included a component which measures key competencies. Last year’s test was focused on creative thinking. One of the four areas which indicate creativity is creative writing. The tools through which the tests check for this skill include generating ideas in different written formats such as scripts, essays, or dialogues.
The Competency-Based Approach at School
“The evaluation of a cluster of competencies is often beyond the scope of standardized tests,” commented Mila Ivanova, Senior Research and Development Specialist at Teach For Bulgaria.
One of the main reasons is that skills such as creative thinking are multilayered. Consequently, it would take a variety of tasks and questions, designed especially for this purpose, to measure to what extent students possess them.
At the same time, not everything that can be measured needs to be measured nationwide. “It is entirely possible for schools to define what the development of a specific skill would mean for the students they serve,” added Ivelina Pashova.
This is because schools operate in different contexts (socioeconomic, cultural, etc.) and sometimes aim to train different types of specialists. For example, at a vocational technology school, creativity could mean coding to solve specific everyday problems, whereas at a foreign language high school the same notion could involve constructing an original argument in an essay. In both cases, the development of these skills could be traced more thoroughly not with a single exam, but with a multitude of assignments collected in a portfolio.
Developing these competencies, however, requires certain universal prerequisites, including students’ mindsets, their engagement in class, and whether they feel safe at school.
“Children cannot learn if they feel threatened or socially excluded,” added Ivelina Pashova.