
Wednesday, September 28, 2011

Why the school progress reports and NYC education reporters deserve a big fat “F”


Gary Rubinstein, a math teacher at Stuyvesant, totally outclasses the numerous education reporters in this city in his analysis of the recent school grades.  

In the graph at right, and in the accompanying blog post, he shows that there is little or no correlation between schools' rank order on last year's progress reports and their rank order on this year's.
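
For readers who want to try this check themselves, here is a rough sketch of the kind of comparison Rubinstein ran (this is not his actual code; the file and column names are hypothetical), assuming a table with one row per school and each year's overall progress-report score:

# Sketch only: the file name and column names are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

scores = pd.read_csv("progress_report_scores.csv")          # one row per school
scores = scores.dropna(subset=["score_2010", "score_2011"])

rho, p = spearmanr(scores["score_2010"], scores["score_2011"])
print(f"Year-to-year rank correlation: {rho:.2f}")
# A value near 1 means schools keep roughly the same rank from year to year;
# a value near 0 means this year's ranking says little about next year's.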

(Incidentally, in a subsequent post, Rubinstein also points out that NYC charters are twice as likely as non-charters to receive an “F” for progress.)

Unfortunately, the mainstream media continue to repeat, without challenge, Suransky’s claim that the progress reports were much more “stable” this year, even though 60% of schools changed grades.

Not one reporter, to my knowledge anyway, has bothered to point out that experts have shown that 32-80% of the annual gains or losses in scores at the school level are essentially random – and yet 60% of the school grade is based upon these annual gains or losses.

See the Daily News op-ed I wrote in 2007, “Why parents and teachers should reject the new grades,” in which I offer still more criticisms of the progress reports, including their inherent instability:

 Researchers have found that 32 to 80% of the annual fluctuations in a typical school’s scores are random or due to one time factors alone, unrelated to the amount of learning taking place. Thus, given the formula used by the Department of Education, a school’s grade may be based more on chance than anything else.
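
To see what that means in practice, here is a toy simulation of my own (the numbers are invented for illustration and are not DOE parameters): give every school a stable “true” score, add a modest dose of random, one-time error each year, and look at the one-year “gains” that drive most of the grade.

# Toy simulation: random error swamps one-year score gains.
# All numbers are illustrative assumptions, not DOE parameters.
import numpy as np

rng = np.random.default_rng(0)
n_schools = 1500

true_level = rng.normal(650, 20, n_schools)    # each school's stable "true" score
noise_sd = 10                                  # random, one-time factors each year

year1 = true_level + rng.normal(0, noise_sd, n_schools)
year2 = true_level + rng.normal(0, noise_sd, n_schools)
year3 = true_level + rng.normal(0, noise_sd, n_schools)

gain_this_year = year2 - year1                 # the "progress" a one-year grade rewards
gain_next_year = year3 - year2

print("Spread of one-year gains with no real change:",
      round(gain_this_year.std(), 1))
print("Correlation of gains across years:",
      round(np.corrcoef(gain_this_year, gain_next_year)[0, 1], 2))
# Even though no school actually improved or declined, the measured gains vary
# widely, and the correlation comes out negative: schools that "gained" this year
# tend, purely by chance, to "drop" next year -- the same flip-flopping seen in the grades.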

Yet here is one typical headline from last week: “School report cards stabilize after years of unpredictability.” And here is the NY Times account:

“We have a really high level of stability this year, which is a good thing,” said Shael Polakow-Suransky, chief academic officer for the city’s Department of Education…. “There is movement and that’s good because we are measuring one year of data and we expect schools will go up and down, but we don’t want to see movement caused by something that’s external to the kids,” Mr. Polakow-Suransky said, referring to changes in the state exams that caused incredible increases and then a drop-off in schools’ grades.

Of course, if one year’s movement up and down is primarily random, that by definition is “external.” 

Nor have any education reporters bothered to report that Jim Liebman, who designed the system, testified to the City Council when the grades were introduced that the DOE would improve the reliability of the system by incorporating three years’ worth of test score data instead of one – which both he and Suransky have since refused to do.

Indeed, as recounted on p. 121 of Beth Fertig’s book, Why can’t U teach me 2 read, Liebman responded this way to Michael Markowitz’s observation that the grading system was designed to produce essentially random results:

“‘There’s a lot I actually agree with,’ he said in a concession to his opponent…He then proceeded to explain how the system would eventually include three years’ worth of data on every school so the risk of big fluctuations from one year to the next wouldn’t be such a problem.”
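
The statistical logic behind that promise is straightforward: if each year’s measured gain is the true gain plus independent random error, averaging three years cuts the random variance to roughly a third. A minimal sketch, with made-up numbers of my own:

# Why averaging several years of data stabilizes a measure.
# The error size below is a made-up assumption for illustration.
import numpy as np

rng = np.random.default_rng(1)
true_gain = 2.0          # a school's real average yearly improvement
noise_sd = 10.0          # random error in any single year's measured gain

one_year = true_gain + rng.normal(0, noise_sd, 100_000)
three_year_avg = true_gain + rng.normal(0, noise_sd, (100_000, 3)).mean(axis=1)

print("Spread of one-year estimates:  ", round(one_year.std(), 2))        # about 10
print("Spread of three-year averages: ", round(three_year_avg.std(), 2))  # about 10/sqrt(3), roughly 5.8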

And yet no one, including Fertig, has mentioned this discrepancy, or DOE’s lack of any rationale for intentionally allowing a single, essentially unreliable grade to determine a school’s future – with 10% of schools now guaranteed a failing grade and thus liable to be closed, based largely on chance.


Saturday, November 27, 2010

The Not-Quite-Good-Enough-Chancellor and her Sidekick?


Bowing to the painfully obvious, even the stacked panel assembled by Commissioner Steiner voted to deny Cathie Black the waiver she needs to overcome her utter lack of qualifications to be NYC's schools chancellor. But our very clever Commissioner had something up his sleeve: he gave the panel a third choice besides "yes" or "no": a co-chancellorship of sorts with someone who actually knows something about education. Bloomberg promptly submitted a revised waiver application, adding Shael Polakow-Suransky, a man with actual education credentials, to help out the corporate exec formerly billed as the only person who could do the tough job of NYC schools chancellor.

What good can come of this scenario?

Looking through the very long list of things Suransky will be responsible for, one can’t help but ask: will there be anything left for Cathie Black to do besides wielding the budget ax? That certainly entails “difficult decisions” (as Bloomberg never tires of reminding us), but it’s hardly worth the highest salary in the city. Ms. Black should have the decency to cut her salary to $1/year, which she can certainly afford and which would go some way toward plugging that gaping “public interest” hole in her résumé (at the press conference announcing her appointment, Bloomberg actually talked about her husband’s public service, LOL).

And why is this new position--formally, Senior Deputy and Chief Academic Officer--necessary? At the press conference, Bloomberg dismissed all questions about Black’s lack of credentials or prior interest in education by claiming she would rely on the formidable cadre of education experts put together by Klein, especially the deputy chancellors. Suransky is already a Deputy Chancellor and Chief Accountability Officer--why does he need a different title if Black was going to rely on him and the rest of her team (including, presumably, former Klein heir-apparent Eric Nadelstern) for all things education anyway? It's also worth noting that the very qualities that make a good CEO don’t make a good team player (which is quite different from getting subordinates to work as a team); many companies have tried the dual-CEO route, often after a merger--it doesn’t work. The Economist recently summed it up this way ("The Trouble with Tandems"):

Almost all these relationships have ultimately come unstuck. That should hardly come as a surprise because joint stewardships are all too often a recipe for chaos. Rather than allowing companies to get the best from both bosses, they trigger damaging internal power struggles as each jockeys for the upper hand. Having two people in charge can also make it tougher for boards to hold either to account. At the very least, firms end up footing the bill for two chief-executive-sized pay packets.

Why should we believe the DOE is any different, especially since the very logic of naming Cathie Black to lead it is that education should be run like a business?

Thursday, September 30, 2010

Why the school grading system, and Joel Klein, still deserve a big "F"

Amidst all the hype and furor over the release of today’s NYC school “progress reports,” everyone should remember that the grades are not to be trusted. By their inherent design, the grades are statistically invalid, and the DOE must be fully aware of this fact. Why?

See this Daily News op-ed I wrote in 2007, “Why parents and teachers should reject the new grades,” in which all the criticisms still hold true:
In part, this is because 85% of each school’s grade depends on one year’s test scores alone – which according to experts, is highly unreliable. Researchers have found that 32 to 80% of the annual fluctuations in a typical school’s scores are random or due to one time factors alone, unrelated to the amount of learning taking place. Thus, given the formula used by the Department of Education, a school’s grade may be based more on chance than anything else.
(Source: Thomas Kane and Douglas O. Staiger, “The Promise and Pitfalls of Using Imprecise School Accountability Measures,” The Journal of Economic Perspectives, Autumn 2002.)

Now, Jim Liebman admitted this fact, that one year’s test score data is inherently unreliable, in testimony to the City Council and to numerous parent groups, including CEC D2, as recounted on p. 121 of Beth Fertig’s book, Why can’t U teach me 2 read. Responding to Michael Markowitz’s observation that the grading system was designed to produce essentially random results, he admitted:

“‘There’s a lot I actually agree with,’ he said in a concession to his opponent…He then proceeded to explain how the system would eventually include three years’ worth of data on every school so the risk of big fluctuations from one year to the next wouldn’t be such a problem.”

Nevertheless, the DOE and Liebman have refused to keep this promise, which reveals a basic intellectual dishonesty. This is what Suransky emailed me about the issue a couple of weeks ago, when I asked him about it before our NY Law School “debate”:

“We use one year of data because it is critical to focus schools’ attention on making progress with their students every year. While we have made gains as a system over the last 9 years, we still have a long way to reach our goal of ensuring that all students who come out of a New York City school are prepared for post-secondary opportunities. Measuring multiple years’ results on the Progress Report could allow some schools to “ride the coattails” of prior years’ success or unduly punish schools that rebound quickly from a difficult year.”

Of course, this is nonsense. No educators would “coast” on a prior year’s “success”, but they would be far more confident in a system that didn’t give them an inherently inaccurate rating.

Given the fact that school grades bounce up and down each year, most teachers, administrators and even parents have long since figured out that they should be discounted, and justifiably believe that any administration that would punish or reward a school based on such invalid measures is not to be trusted.

That DOE has changed the school grading formula in other ways every year for the last three years doesn’t inspire confidence either, though they refuse to fix the most fundamental flaw. Yet another major problem is that while the teacher data reports take class size into account as a significant factor limiting how much schools can raise student test scores, the progress reports do not.

There are many more problems with the school grading system, including the fact that the grades are primarily based upon state exams that we know are themselves completely unreliable. As behavioral economist Dan Ariely recently wrote about the damaging nature of value-added teacher pay, which rests on similarly unreliable measurements:

…What if, after you finished kicking [a ball] somebody comes and moves the ball either 20 feet right or 20 feet left? How good would you be under those conditions? It turns out you would be terrible. Because human beings can learn very well in deterministic systems, but in a probabilistic system—what we call a stochastic system, with some random error—people very quickly become very bad at it.

So now imagine a schoolteacher. A schoolteacher is doing what [he or she] thinks is best for the class, who then gets feedback. Feedback, for example, from a standardized test. How much random error is in the feedback of the teacher? How much is somebody moving the ball right and left? A ton. Teachers actually control a very small part of the variance. Parents control some of it. Neighborhoods control some of it. What people decide to put on the test controls some of it. And the weather, and whether a kid is sick, and lots of other things determine the final score.

So when we create these score-based systems, we not only tend to focus teachers on a very small subset of [what we want schools to accomplish], but we also reward them largely on things that are outside of their control. And that's a very, very bad system.”

Indeed. The invalid nature of the school grades is just one more indication of the fundamentally dishonest nature of the Bloomberg/Klein administration, and yet another reason for the cynicism, frustration and justifiable anger of teachers and parents.
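
Ariely’s point can be made concrete with a toy variance decomposition (my own illustration; the shares below are invented, not estimates from any study): if the teacher’s contribution is only a small slice of what moves a student’s score, a reward or punishment tied to that score mostly tracks the other slices.

# Toy decomposition of what moves a student's test score.
# The variance shares below are invented for illustration only.
import numpy as np

rng = np.random.default_rng(2)
n_students = 10_000

teacher       = np.sqrt(0.10) * rng.normal(0, 1, n_students)  # assume ~10% of variance
family_etc    = np.sqrt(0.45) * rng.normal(0, 1, n_students)  # parents, neighborhood, ...
test_and_luck = np.sqrt(0.45) * rng.normal(0, 1, n_students)  # item choice, sickness, weather

score = teacher + family_etc + test_and_luck
teacher_share = teacher.var() / score.var()
print(f"Share of score variance the teacher controls: {teacher_share:.0%}")
# Rewarding or punishing on `score` alone means most of what drives the reward
# lies outside the teacher's control.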

Also be sure to check out this Aaron Pallas classic: Could a Monkey Do a Better Job of Predicting Which Schools Show Student Progress in English Skills than the New York City Department of Education?

Sunday, September 19, 2010

My unvideotaped debate with DOE's Suransky re NCLB, testing, and NYC's dismal results

On Wednesday, September 15, I was invited to New York Law School to debate Shael Suransky, NYC's Deputy Chancellor for Accountability, about NCLB and the negative effects of high-stakes accountability systems.

I also took the opportunity to rebut the claims of impressive progress in student achievement in NYC that DOE continues to make, even after the state test score bubble has burst, and to point out the many errors in Chancellor Klein's written statements concerning this issue.

Unfortunately, NY Law School did not allow Lindsey Christ of NY1 or Norm Scott of Education Notes to videotape the event, reportedly because of pressure from DOE.

Lindsey was quite annoyed, and said she had never been barred from taping any such forum, whether at NYU, Columbia, the New School, CUNY or SUNY.

For more on what transpired, you can see Norm Scott's accounts here and here, and the email exchange between Lindsey, the very testy VP for PR at NY Law School, and me.

As many people have asked for it, I am posting my PowerPoint here, in Part I and Part II. If you would like me to present it to your organization, please email me at classsizematters@gmail.com.


-- Leonie Haimson, Executive Director, Class Size Matters