First Grade Assessments

Posted on January 14, 2012

According to the MES, “The first graders showed a high level of academic performance” on recent assessments of their English and computer skills.

This is good news, of course, and I don’t doubt that progress is being made – after all, the netbooks these kids are getting are great; the new English books are fantastic; and of course the full force of TLG being directed exclusively at grades 1-6 was bound to have some effect on the English skills of first graders.

However, something in the article gave me pause: “70% of the first graders received the highest scores.”

Now, I don’t know exactly what this means, partly because the English is somewhat nonspecific, and partly because these kinds of test scores are often arbitrary and meaningless. What I do know is that 70% of respondents getting “the highest scores” generally indicates not high performance, but a bad assessment rubric. In other words, it’s not that the students are really good; it’s that the test is not really good.

In general, assessments should be challenging to most students. They should contain questions of varying difficulty: the weakest students should only be able to answer the easy questions, average students should be able to answer the easy and medium questions, and only the above-average students should be able to answer the hard questions. This is really the only way to measure the achievement of students who are not average performers – in other words, to accurately gauge the progress of both below-average and above-average students. If all of the questions are of medium difficulty, it creates a “race to the middle” in which the goal is to produce average students, which is demotivating to everyone involved in education.

In a test in which 70% of students received “the highest grades,” I am tempted to believe that all of the questions were of medium difficulty. Thus the average and above-average students (about 70%) were able to answer all of them, whereas the below-average students (the 30% who did not get “highest scores”) were completely left behind. This is the sort of grade distribution I see in my classes at school all the time – a huge slew of 9s and 10s from the students who pay even a tiny bit of attention in class, and then a bunch of 5s and 6s from students who know absolutely no English.
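To make that reasoning concrete, here is a toy simulation. Every number in it – the ability levels, the question difficulties, the 30/40/30 split of weak, average, and strong students – is invented for illustration; nothing here comes from the actual MES assessment. The model is deliberately crude: a student answers a question correctly whenever their ability meets or exceeds its difficulty.

```python
# Toy model of how question difficulty shapes score spread.
# All numbers are assumptions for illustration, not MES data.

def correct(ability, difficulties):
    """Count of questions answered correctly by a student of this ability."""
    return sum(1 for d in difficulties if ability >= d)

# Hypothetical class of 10: 30% weak, 40% average, 30% strong students,
# with abilities on the same 1-10 scale as question difficulty.
abilities = [3] * 3 + [5] * 4 + [8] * 3

tests = {
    "medium-only": [5] * 10,                     # every question is medium
    "mixed": [2] * 3 + [5] * 4 + [8] * 3,        # easy, medium, and hard
}

for name, difficulties in tests.items():
    scores = [correct(a, difficulties) for a in abilities]
    perfect = sum(s == len(difficulties) for s in scores) / len(scores)
    print(f"{name}: scores={scores}, perfect scores={perfect:.0%}")
```

The medium-only test produces exactly the bimodal distribution described above: 70% of students get a perfect score and the other 30% get zero, so the result tells us nothing about the spread of ability within either group. The mixed test spreads the same students across three distinct score levels, with only the strongest 30% reaching the top.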

How can a student earn a 6 – which is a passing grade – without even the most basic competence in the subject? Grade inflation and cheating. I am also tempted to believe that the 70% result was a collaborative effort – in other words, that the first graders who were assessed had a significant amount of help from parents, teachers, and each other in completing this questionnaire.

Of course, these suspicions could be completely unjustified. This assessment could well have been challenging and rigorous and administered fairly – but in the absence of information, we are left to wonder. Unfortunately, if the only thing we know about a particular assessment is that 70% of people did really really well on it, the natural conclusion is to doubt the quality of that assessment.

I want to say again that I have no doubt that progress is being made in Georgian education – I can see the improvements and the impact they are having, as they occur – but I don’t think that anyone is served by publicizing these kinds of assessments that don’t really tell us anything. We don’t know what the questions were, how the assessment was administered, what “the highest scores” mean, and thus ultimately we have no information about the performance of the students in question. Unfortunately, when some PR flack puts out a story that nominally covers progress in education but that factually doesn’t contain any substance whatsoever, this opens them up to the criticism that maybe there just isn’t any substance to report. These puff pieces are ammunition for the cynical.

As long as the Ministry is making progress in education, they should also be measuring that progress with meaningful and credible assessments (if possible, performed in conjunction with some sort of independent, disinterested entity, like an NGO), because frankly, the idea that 70% of first graders aced an English exam is so incredible that I don’t think any reasonably well-informed person would believe it without looking more closely at some of the details.

For our part, TLG is working on ways to illustrate the progress we are making in English language instruction and student outcomes – ways that are compelling and indisputable. The Teacher Portfolios were one such opportunity, as they encouraged individual teachers to present and document their success stories. And on that note, if you are a volunteer with a story about the difference you’ve made in your classrooms, we want to hear about it. Drop a comment or make a forum post, or check out our call for submissions for more ideas on how to contribute your stories and ideas.