In the midst of all the massive budget cuts, DOE intends to contract with a company to expand production of its controversial teacher performance reports. Check out Gotham Schools: City will spend $1.5M to extend judging of teachers via test scores.
Lots of people have disputed the utility of these reports -- despite the comments in Gotham Schools from a DOE contractor. As Skoolboy points out here, even if the model is correct, "the Teacher Data Report provides no evidence whatsoever about why a teacher is successful--the many daily practices that promote student learning."
But I'd like to point out another problem. The model used to evaluate teacher effectiveness, as pointed out on our blog here, includes class size at the school and classroom level, meaning that DOE indeed recognizes that teachers should be expected to produce smaller gains the larger their classes. In fact, this is the only external factor included in the model that is assumed to affect teacher effectiveness – the rest are teacher and student characteristics.
That's fine and reasonable, of course -- and far fairer than the school progress reports, and the teacher bonuses based on them, which judge all schools as though they had an equal chance to succeed, despite the fact that some may have classes of under 20 and others of 30 or more.
The problem is that the reported class size data in NYC middle schools, and especially at the high school level, is extremely unreliable – so much so that these evaluation reports are likely to be wrong.
Most CTT (inclusion) classes in high school are still misreported as two classes, years after we first pointed out this problem to DOE. In middle school, the reported class size is in most cases actually that of the homeroom or advisory, not of academic classes. Many high school classes – quite frequently, two different levels of a subject taught by one teacher at the same time to the same group of students – are counted as two separate classes. This means that some teachers may be judged as if they had class sizes of 20 or less, when their real class sizes are 34 or more.
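To see why bad class size numbers matter so much, consider a toy sketch – not the DOE's actual model, whose exact formula and coefficients are not given here – of how a value-added rating adjusts for class size. The adjustment coefficient, baseline gain, and class sizes below are invented purely for illustration.

```python
# Toy illustration only -- NOT the DOE's actual model. A value-added rating is
# roughly "actual gain minus expected gain," where the expected gain is adjusted
# for factors such as class size. The numbers below are made up.

CLASS_SIZE_PENALTY = 0.3   # hypothetical: expected gain drops 0.3 points per extra student
REFERENCE_SIZE = 25.0      # hypothetical reference class size

def expected_gain(baseline_gain: float, class_size: float) -> float:
    """Expected test-score gain, adjusted downward for larger classes (toy formula)."""
    return baseline_gain - CLASS_SIZE_PENALTY * (class_size - REFERENCE_SIZE)

def teacher_effect(actual_gain: float, baseline_gain: float, class_size: float) -> float:
    """'Value added' credited to the teacher: actual gain minus expected gain."""
    return actual_gain - expected_gain(baseline_gain, class_size)

actual = 10.0     # points of growth the teacher's students actually showed (invented)
baseline = 12.0   # hypothetical citywide baseline gain at the reference class size

# Same teacher, same students, same results -- only the reported class size differs.
print(round(teacher_effect(actual, baseline, class_size=34), 1))   # true class of 34  ->  +0.7
print(round(teacher_effect(actual, baseline, class_size=20), 1))   # misreported as 20 ->  -3.5
```

In this made-up example, the very same test results put the teacher above expectations with the true class of 34, but well below expectations when the class is misreported as having only 20 students.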
I’d like to hear from teachers who have received these reports how useful they are – and also whether the reports are sufficiently transparent about the data they rely on, especially when it comes to class size. Otherwise, it would be impossible to check them for accuracy. Please leave a comment.
4 comments:
Class size AND curriculum are factors in a teacher's success, neither of which is under his or her control.
1. It doesn't work.
2. They are using it to scare teachers.
The program should be scrapped.
Jonathan
Gauging how far a student moves is impossible if all you're looking at is test performance.
Technically, moving a student could mean getting them from a 35% to a 58%. That is a significant academic leap, but they still fail their exam, which is all they seem to care about (despite their attention to class size and ratios).
If class size is such a significant factor affecting student success that they built it into the teacher performance algorithm... then why doesn't the city just take that $1.5M and devote it to reducing class sizes?? Why do we devote more resources to measuring results than to shaping them?