Thursday, August 25, 2011

A court decision on the teacher data reports that will hurt our kids


It is unfortunate that, just one day after a court decision held that NY teachers should be evaluated using multiple assessments, with student scores on state standardized tests only a minor factor, an appellate court ruled today that the DOE could release to the public teacher data reports based solely on those same test scores.
Most testing experts agree that these reports are highly unreliable and reductionist, and that they will unfairly tarnish the reputations of many excellent teachers:
1. The state tests were never designed for such a purpose, and they cannot technically support year-to-year judgments of "progress" or value added.
2. Many studies have shown the extreme volatility of these measures, and how the results differ even from one sort of test to another; a toy simulation after this list illustrates why. See Juan Gonzalez's column on how DOE consultants themselves believe these reports are highly unreliable; here are links to the original documents revealing this, obtained through a FOIL request.
3. As John Ewing, former executive director of the American Mathematical Society, recently concluded, "if we drive away the best teachers by using a flawed process, are we really putting our students first?" Mike Winerip reported on a top-notch NYC teacher who was denied tenure in just this manner.
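To see why experts call these measures volatile, here is a minimal, purely hypothetical simulation, not the DOE's actual model: it assumes a teacher's true effect on scores is small relative to ordinary classroom noise (all parameter values are invented for illustration), computes a simple value-added estimate for each teacher in two consecutive years, and measures how weakly the two years agree.

import random

random.seed(0)

N_TEACHERS = 200
CLASS_SIZE = 25
TRUE_EFFECT_SD = 2.0   # spread of true teacher effects, in test-score points (assumed)
NOISE_SD = 12.0        # per-student noise in yearly score gains (assumed)

def estimated_value_added(true_effect):
    """One year's estimate: the classroom's average gain, which mixes the
    teacher's true effect with sampling noise from that year's students."""
    gains = [true_effect + random.gauss(0, NOISE_SD) for _ in range(CLASS_SIZE)]
    return sum(gains) / CLASS_SIZE

true_effects = [random.gauss(0, TRUE_EFFECT_SD) for _ in range(N_TEACHERS)]
year1 = [estimated_value_added(t) for t in true_effects]
year2 = [estimated_value_added(t) for t in true_effects]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def top_quintile(scores):
    cutoff = sorted(scores, reverse=True)[N_TEACHERS // 5 - 1]
    return {i for i, s in enumerate(scores) if s >= cutoff}

# The same teachers, the same true skill, two different years of students:
print("year-to-year correlation of estimates:", round(pearson(year1, year2), 2))
stayed = len(top_quintile(year1) & top_quintile(year2))
print("top-quintile teachers who stayed on top:", stayed, "of", N_TEACHERS // 5)

Under these assumptions the year-to-year correlation comes out far below 1.0, and many "top" teachers fall out of the top group the following year even though no teacher's real skill changed at all; that is the kind of instability the studies cited below describe.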
If NYC goes ahead and releases this data, it would likely be the first school district in the country to do so willingly and enthusiastically; when the LA Times generated its own value-added data for Los Angeles teachers, the paper was widely criticized. Chris Cerf, former Deputy Chancellor and now acting State Superintendent of NJ schools, was originally in charge of creating the teacher data reports; he promised that they would never be used for teacher evaluations and that the DOE would fight any effort to disclose them publicly. In a 2008 letter to Randi Weingarten, Cerf wrote: "It is the DOE's firm position and expectation that Teacher data reports will not and should not be disclosed or shared outside the school community."
Chancellor Walcott should think twice before releasing this data if he cares about real accountability, the morale of teachers, and the potential damage to our kids.
Here are some of the recent studies from experts on the unreliability of this evaluation method:
Sean P. Corcoran, Can Teachers Be Evaluated by Their Students' Test Scores? Should They Be? The Use of Value-Added Measures of Teacher Effectiveness in Policy and Practice. As the author concluded from his analysis, "The promise that value-added systems can provide a precise, meaningful, and comprehensive picture is much overblown. … Teachers, policy-makers and school leaders should not be seduced by the elegant simplicity of value-added measures. Given their limitations, policy-makers should consider whether their minimal benefits outweigh their cost."
National Research Council, Henry Braun, Naomi Chudowsky, and Judith Koenig, eds., Getting Value Out of Value-Added: Report of a Workshop, 2010: "Value-added methods involve complex statistical models applied to test data of varying quality. Accordingly, there are many technical challenges to ascertaining the degree to which the output of these models provides the desired estimates."
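For readers curious what such a statistical model looks like, here is a generic, much-simplified value-added specification of the kind discussed in this literature; the notation is illustrative only and is not the DOE's actual formula:

y_{ijt} = \beta\, y_{i,t-1} + \gamma' X_{it} + \theta_j + \varepsilon_{ijt}

Here y_{ijt} is the year-t test score of student i assigned to teacher j, y_{i,t-1} is that student's prior-year score, X_{it} stands for student characteristics, \theta_j is the estimated "teacher effect," and \varepsilon_{ijt} is everything the model cannot explain. Because a class has only twenty to thirty students, the noise term weighs heavily in each estimate of \theta_j, which is exactly the imprecision these reports warn about.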
John Ewing, former executive director of the American Mathematical Society and current president of Math for America, Mathematical Intimidation: Driven by the Data: "Why must we use value-added even with its imperfections? Aside from making the unsupported claim (in the very last sentence) that 'it predicts more about what students will learn…than any other source of information', the only apparent reason for its superiority is that value-added is based on data. Here is mathematical intimidation in its purest form—in this case, in the hands of economists, sociologists, and education policy experts…And if we drive away the best teachers by using a flawed process, are we really putting our students first?"
Sean P. Corcoran, Jennifer L. Jennings, Andrew A. Beveridge, Teacher Effectiveness on High- and Low-Stakes Tests, April 10, 2011: "To summarize, were teachers to be rewarded for their classroom's performance on the state test or, alternatively, sanctioned for low performance, many of these teachers would have demonstrated quite different results on a low-stakes test of the same subject. Importantly, these differences need not be due to real differences in long-run skill acquisition…
That is, teachers deemed top performers on the high-stakes test are quite frequently average or even low performers on the low-stakes test. Only in a minority of cases are teachers consistently high or low performers across all metrics… Our results… highlight the need for additional research on the impact that high-stakes accountability has on the validity of inferences about teacher quality."
