Yesterday, the DOE released an evaluation of the second year of its literacy coach program, created under Chancellor Fariña in 2016-2017. The program is budgeted at $85.7 million this year, for a total of approximately $235 million since its inception. Each literacy coach earns a salary of $90,000 to $150,000, and there are approximately 515 of them working with K-2 teachers in selected elementary schools across the city. The number of schools and coaches has expanded each year.
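As a rough back-of-the-envelope check, using only the figures cited above and setting aside any costs beyond the coaches themselves, this year's budget works out to roughly:

$$\frac{\$85{,}700{,}000 \text{ per year}}{515 \text{ coaches}} \approx \$166{,}000 \text{ per coach per year}$$

That is above even the top of the stated salary range; the difference presumably covers benefits, training, and program administration.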
In August 2018, the DOE released a brief PowerPoint which purportedly contained the sole written evaluation of the first year of the program. By analyzing the growth scores from October to May of second graders who were administered the Gates-MacGinitie Reading Tests (GMRT) at schools that received literacy coaches, compared to students at similar schools without coaches, the DOE found no positive results in word decoding, word knowledge, or comprehension.
The results were expressed in grade equivalents, which allowed one to see that rather than progressing, both groups had fallen further behind in terms of grade level.
I submitted another FOIL request in August 2019, and on Monday, Dec. 9, 2019, the DOE told me it would delay any substantive response until January 16, 2020, with this excuse: "[4] the need to review records to determine the extent to which they must be disclosed, [and 5] the number of requests received by the agency."
Three days later, on Dec. 12, they released the second year evaluation, at which point there were 236 literacy coaches serving 298 elementary schools, spending an average of 20 periods with each teacher. Of these, 168 reading coaches were assigned to a single elementary school and 68 served two schools.
The study found a slight but statistically significant advantage in average reading comprehension growth among second graders tested in a sample of about one third of the first and second cohorts of schools that had access to literacy coaches, as compared to second graders at similar schools that did not. No significant gains were found overall, or in the other two areas of word decoding and word knowledge.
1. One would expect that the DOE would not merely test second graders in the program, but also analyze the scores of these same students as third graders, to see if the slight gains they experienced earlier had persisted or disappeared. They could do this without administering more tests by comparing their third grade state test scores to those of students at similar schools without coaching. No such analysis is mentioned in the paper.
2. Curiously, the DOE also seemed to switch its methodology for reporting test score gains compared to the first-year study. See this:
“...in this report we use extended scale scores. In previous reporting, we used grade equivalent scores. Scale scores refer to the continuous scale on which GMRT results are measured, from Pre-Reading to Adult Reading. While grade equivalents are more easily understandable, scale scores are more precise and are used for analyses. Scale scores on the GMRT capture students’ reading ability on a linear scale that is useful for both comparison across grades and for analysis.”
I'm not sure why they made this change, unless in comprehension, the one area where there appeared to be a significant difference, the gain was too small to show up in terms of grade equivalents. Or perhaps they didn't want to reveal that even the students at schools that achieved significant gains still fell behind in terms of grade level?
3. The report makes other claims without any data to support them, and overall it is surprisingly sparse in actual statistics. See, for example, this claim: "Students of teachers who received ULit coaching grew more than students of teachers who did not."
The study does not include any data to back this up, and does not actually say that the difference in growth was significant.
What this may suggest is that while the growth scores of students at schools with access to coaches were significantly greater in the area of comprehension, the same significance did not hold when the scores of students whose teachers actually received coaching were compared to those of students whose teachers did not.
This passage is similarly hard to understand:
Moreover, the more coaching a teacher received, the more growth the students had, on average. The difference between students whose teachers had more coaching and those who did not was statistically significant.
Again, no data is provided to back up either claim, and no definition of “more coaching” is offered that would allow one to evaluate the claim made by the second statement. How much more coaching was needed for a teacher's students to exhibit statistically significant test score gains?
4. The report also doesn't provide the full questions or results of the teacher/coach/principal surveys. Though it says that almost half of teacher respondents said that if they had a reading coach the following year, they would like the coach to work with them "one-on-one," it doesn't report whether teachers were asked if they wanted coaching at all, or whether they believed the funding might be better spent on more classroom teachers to lower class size, or on intervention teachers who would work directly with struggling readers.
No information is provided either as to whether the coaches believed their training was useful, even though a third of school leaders reported that "the coach was out of the building too often for professional learning."
5. We are now in the midst of the fourth year of the program. Why is the DOE just now releasing second-year results, instead of the third-year results that should be available, especially given that we are spending nearly $100 million a year on the program? Or do the results have to be carefully sifted and parsed before being released, as these appear to be?
All in all, there needs to be a more comprehensive study with much more data provided before one can be assured that the program is providing benefits to kids worth the cost. And such an analysis would be far more credible coming from an experienced and independent research outfit like RAND.