Unless we know how things are counted, we don’t know if it’s wise to count on the numbers … Consider the plan to evaluate the progress of New York City public schools inaugurated by the city a few years ago. While several criteria were used, much of a school’s grade was determined by whether students’ performance on standardized state tests showed annual improvement. This approach risked putting too much weight on essentially random fluctuations and induced schools to focus primarily on the topics on the tests. It also meant that the better schools could receive mediocre grades because they were already performing well and had little room for improvement. Conversely, poor schools could receive high grades by improving just a bit.
Each year the formula has been significantly revamped because of the absurdity of the previous year’s grades — most recently this year’s grade inflation, in which 84% of elementary and middle schools received "A’s". If the authors of this system were to receive a grade themselves, it would be an "F".
The school grades are based 85% on the previous year's state test scores, which themselves have been widely derided as unreliable. The formula has also been shown to unfairly penalize schools with large numbers of high-need special education students, despite the DOE's claim that it fully controls for the student population.
Yet, inexplicably, the DOE refuses to conform to reason and alter the formula so that it is based on more than one year’s data, despite the fact that Jim Liebman promised at the grades’ inception to base them on three years’ worth of test scores.
Other troubling problems relate to the way in which the grades also rely in part on survey results from teachers and parents. Recent articles in the Daily News have shown how several principals have pushed teachers into giving them favorable reviews, warning that otherwise the DOE might close their schools based on low grades. Parents also commonly report the same sort of pressure, whether externally or internally imposed.
The school grading system also ignores critical factors that differ widely among schools and yet are largely outside their control, such as class size or overcrowding, which can work against gains in achievement.
Yet the teacher data reports are themselves problematic, and their formula has never been publicly released. I submitted a FOIL request more than a year ago for the formula, as well as for the identity of the “independent” panel that the DOE claimed had attested to its reliability, and I have still received nothing in return.
As the National Academy of Sciences pointed out in its comments on Secretary Duncan’s misbegotten grant program, “Race to the Top”, no system for evaluating teachers on the basis of test scores has yet been established that is ready for prime time, given all the inherently complex and imponderable factors that go into test scores, particularly at the classroom level. Any attempt to implement such a program, the Academy urged, should be carefully tested and independently vetted, because it could very well have unfair and damaging consequences, not just for teachers but for our kids as well.
We have already seen how art, science, music, and other untested subjects have been minimized in our children’s schools since the over-emphasis on high-stakes tests was imposed, with weeks more spent on test prep and less time spent on learning.
All parents should closely watch the evolution of the recent agreement between the New York teachers union and the state to base 25% of teacher evaluations on state test scores and another 15% on “locally selected measures of achievement that are rigorous and comparable across classrooms.”