
Monday, September 3, 2018

Bob Hughes, now at Gates & formerly New Visions, provides $14M to New Visions & $2M to Jim Liebman for "evaluation"


Last week, Bob Hughes, appointed director of the Gates Foundation K12 division in 2016, made his first big move.   
He announced $92 million in grants for the new Networks of School Improvement initiative to be given to 19 organizations, collaboratives and districts, out of 532 applications submitted.   
New Visions, the NYC-based organization that Hughes ran before coming to the Foundation, received the second largest grant at $14 million – to work with 75 NYC schools, as yet unidentified.
The grant was more than the amount given to the entire Baltimore system of public schools – despite New Visions’ spotty record.
Though Hughes admitted that there’s not much evidence behind the theory of network improvement, he’s determined to push forward nonetheless:
 “I don’t think the research base is fully developed, and that’s one reason we’re making these investments,” said Hughes.
Asked by EdWeek reporter Steven Sawchuk how the results of this new initiative would be evaluated, Hughes replied that “the foundation is still formulating its research approach.” And:
"We don't have details for you, but we remain deeply committed to a third party evaluation of all our work and transparency about the results of those evaluations so we can enable the field to understand what we do well and what we don't do well," he said. 
Yet it appears that The Center for Public Research and Leadership (CPRL) at Columbia Law School has already been chosen by Hughes to evaluate the networks initiative.
As the CPRL website notes, “In January 2018, CPRL received a two and one-half year grant to report on the research underlying the NSI [Networks for School Improvement] initiative and to use the research to design and conduct a formative evaluation of the initiative’s initial implementation.”
(Sure enough, the Gates Foundation lists a grant of $1.9 million over 31 months awarded to “Columbia University” to “support evaluation.”)
The first Gates-funded CPRL study was a literature review of network impacts.  The findings were described by Sawchuk this way:
A Gates-commissioned review of the research on the topic from Columbia University's Center for Public Research and Leadership noted that there are more studies on the norms and conditions needed to support healthy networks than on how they affect K-12 outcomes; most of the 34 studies were case studies or qualitative, rather than quasi-experimental designs that sought to answer cause-and-effect questions.

CPRL is headed by Columbia Law professor James Liebman, who was appointed head of the NYC Department of Education’s Accountability Office under Joel Klein, despite the fact that he had no K12 education experience either as a teacher, administrator or researcher.   
Liebman made a mess of the School Progress Reports at DOE, instituting a volatile, unstable system in which school grades veered wildly from year to year. A blog post by Professor Aaron Pallas in Edweek was memorably entitled, “Could a Monkey Do a Better Job of Predicting Which Schools Show Student Progress in English Skills than the New York City Department of Education?” Under Liebman’s direction, DOE’s efforts were statistically inept, and I would not trust his ability to undertake a credible evaluation.
Liebman also commissioned the expensive ARIS data system, which lived up to none of its promises. It was rarely used by parents or teachers and was finally ditched in 2015 after costing the city $95 million.
In any case, I hope the Gates Foundation has not decided against commissioning an evaluation from a more experienced, credible organization like RAND.
RAND recently released a highly critical analysis of the results of the Gates-funded Teacher Evaluation Initiative and, before that, a skeptical evaluation of the Gates-funded Next Generation Learning Challenge schools, those that feature “personalized [online] learning.”
John F. Pane, senior scientist at RAND and chief author of the latter study, frankly pointed out to Ed Week that the evidence base for personalized learning is still "very weak."
Hughes himself doesn’t have the greatest reputation for transparency. In 2005, he tried to suppress a Gates-funded research study that contained negative findings about the New Visions Gates-funded small schools initiative in New York City, a study that was subsequently leaked to the NY Times.
In 2007, it was revealed that New Visions threatened these small schools that they would not receive their full Gates grants unless they chose New Visions as their DOE “partnership support network” and paid the organization a fee in return.
"I thought, 'Oh, my God, what a huge conflict of interest,'" a principal said. "We have to join their PSO and pay them for support in order to get this grant that we qualified for?"
Only time will tell, but the hints of insular cronyism in these decisions by Hughes to award grants to New Visions and to Jim Liebman’s outfit do not bode well for the future.

Thursday, September 30, 2010

Why the school grading system, and Joel Klein, still deserve a big "F"

Amidst all the hype and furor over the release of today’s NYC school "progress reports," everyone should remember that the grades are not to be trusted. By their inherent design, the grades are statistically invalid, and the DOE must be fully aware of this fact. Why?

See this Daily News oped I wrote in 2007, “Why parents and teachers should reject the new grades”; all of its criticisms still hold true.
In part, this is because 85% of each school’s grade depends on one year’s test scores alone – which according to experts, is highly unreliable. Researchers have found that 32 to 80% of the annual fluctuations in a typical school’s scores are random or due to one time factors alone, unrelated to the amount of learning taking place. Thus, given the formula used by the Department of Education, a school’s grade may be based more on chance than anything else.
(Source: Thomas Kane and Douglas O. Staiger, “The Promise and Pitfalls of Using Imprecise School Accountability Measures,” The Journal of Economic Perspectives, Autumn 2002.)
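The practical consequence is easy to demonstrate with a toy simulation. This is not the DOE's actual formula; the noise and gain magnitudes below are my own illustrative assumptions. But they show how often a grade based on a single year's change in scores can point in the wrong direction entirely:

```python
import random

def fraction_misgraded(n_schools=1000, true_gain_sd=1.0, noise_sd=1.5, seed=0):
    """Share of schools whose one-year measured score change has the
    opposite sign of their real improvement.

    Assumed, illustrative parameters: each year's measured score carries
    random error (noise_sd) on top of any real improvement (true_gain_sd).
    """
    rng = random.Random(seed)
    flips = 0
    for _ in range(n_schools):
        true_gain = rng.gauss(0, true_gain_sd)
        # measured gain = real gain plus independent error from each of the two years
        measured = true_gain + rng.gauss(0, noise_sd) - rng.gauss(0, noise_sd)
        if (true_gain > 0) != (measured > 0):
            flips += 1
    return flips / n_schools

print(f"{fraction_misgraded():.0%} of schools' one-year 'progress' points the wrong way")
```

With the assumed noise level, roughly a third of schools get a "progress" signal with the wrong sign; setting noise_sd to zero drives the error rate to zero, which is the point: the misgrading comes from measurement error, not from the schools.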

Now, Jim Liebman admitted this fact, that one year’s test score data is inherently unreliable, in testimony to the City Council and to numerous parent groups, including CEC D2, as recounted on p. 121 of Beth Fertig’s book, “Why can’t U teach me 2 read?” In responding to Michael Markowitz’s observation that the grading system was designed to provide essentially random results, he admitted:

“‘There’s a lot I actually agree with,’ he said in a concession to his opponent… He then proceeded to explain how the system would eventually include three years’ worth of data on every school so the risk of big fluctuations from one year to the next wouldn’t be such a problem.”

Nevertheless, the DOE and Liebman have refused to keep this promise, which reveals a basic intellectual dishonesty. This is what Suransky emailed me about the issue a couple of weeks ago, when I asked him about it before our NY Law School “debate”:

“We use one year of data because it is critical to focus schools’ attention on making progress with their students every year. While we have made gains as a system over the last 9 years, we still have a long way to reach our goal of ensuring that all students who come out of a New York City school are prepared for post-secondary opportunities. Measuring multiple years’ results on the Progress Report could allow some schools to “ride the coattails” of prior years’ success or unduly punish schools that rebound quickly from a difficult year.”

Of course, this is nonsense. No educators would “coast” on a prior year’s “success”, but they would be far more confident in a system that didn’t give them an inherently inaccurate rating.

Given the fact that school grades bounce up and down each year, most teachers, administrators and even parents have long since figured out that they should be discounted, and justifiably believe that any administration that would punish or reward a school based on such invalid measures is not to be trusted.

That DOE has changed the school grading formula in other ways every year for the last three years doesn’t inspire confidence either, though they refuse to fix the most fundamental flaw. Yet another major problem is that while the teacher data reports take class size into account as a significant limiting factor in how much schools can improve student test scores, the progress reports do not.

There are lots more problems with the school grading system, including the fact that the grades are primarily based upon state exams that we know are themselves completely unreliable. As behavioral economist Dan Ariely recently wrote about the damaging nature of value-added teacher pay schemes, which are based on highly unreliable measurements:

…What if, after you finished kicking [a ball] somebody comes and moves the ball either 20 feet right or 20 feet left? How good would you be under those conditions? It turns out you would be terrible. Because human beings can learn very well in deterministic systems, but in a probabilistic system—what we call a stochastic system, with some random error—people very quickly become very bad at it.

So now imagine a schoolteacher. A schoolteacher is doing what [he or she] thinks is best for the class, who then gets feedback. Feedback, for example, from a standardized test. How much random error is in the feedback of the teacher? How much is somebody moving the ball right and left? A ton. Teachers actually control a very small part of the variance. Parents control some of it. Neighborhoods control some of it. What people decide to put on the test controls some of it. And the weather, and whether a kid is sick, and lots of other things determine the final score.

So when we create these score-based systems, we not only tend to focus teachers on a very small subset of [what we want schools to accomplish], but we also reward them largely on things that are outside of their control. And that's a very, very bad system.”

Indeed. The invalid nature of the school grades is just one more indication of the fundamentally dishonest nature of the Bloomberg/Klein administration, and yet another reason for the cynicism, frustration and justifiable anger of teachers and parents.

Also be sure to check out this Aaron Pallas classic: Could a Monkey Do a Better Job of Predicting Which Schools Show Student Progress in English Skills than the New York City Department of Education?

Thursday, July 9, 2009

A Failing Grade for Mr. Liebman

Several articles appeared today about James Liebman's resignation after serving three years as head of Tweed's Office of Accountability -- finally returning to Columbia University law school full time: Chief Accountability Officer for City Schools Resigns (NY Times); and New accountability chief says he’ll carry on Liebman’s legacy (Gotham Schools).

Let us remember that this man had no qualifications for the job, and proved this repeatedly over the years. In fact, the only person who probably knew less about education and how to nurture the conditions for learning was the man who hired him: Chancellor Klein. Columbia University finally woke up to the fact that he had been double-dipping: while holding the office of Chief Accountability Officer at Tweed, he was also supposedly on the full-time law faculty for the last year.

The progress reports he designed were widely derided as unreliable and statistically untenable; the quality reviews were an expensive waste of time and paperwork, and ignored when DOE was deciding which schools to close and which schools to commend; the $80 million supercomputer called ARIS was a super-expensive super-mugging by IBM, according to techies who found it laughable how much DOE was taken for a ride.

The surveys were badly designed, and counted for only a small percentage of school grades. Yet because principals were terrified of bad results, parents were pressured into giving favorable reviews for fear their schools would otherwise be punished. And the top priority of parents on these surveys — class size reduction — was ignored; worse, it was repeatedly derided by Liebman et al. as a goal not worth pursuing.

Under his leadership or lack thereof, the Accountability office continued to mushroom with more and more high-priced educrats, "Knowledge Managers" and the like, few of whom, like him, had any experience or qualifications for the job, much less an understanding of statistics or the limitations of data.

One would think that a man who had focused professionally on the large error rate in capital punishment cases would have a little humility in terms of recognizing the fallibility of human judgment -- but no such luck. When confronted with the question of why schools should be given single grades, rather than a more nuanced system that might recognize their variety of attributes, he opined that a single grade, from A to F was useful "to concentrate the mind."

The ostensible point of the test score data from the periodic assessments and standardized tests, collected and spewed out by ARIS to be analyzed by each school's "data inquiry teams" and "Senior Achievement Facilitators," was to encourage "differentiated instruction," although this goal was severely hampered by the fact that, under Klein's leadership or lack thereof, overcrowding and excessive class sizes have continued.

No matter how much data is available — even assuming it is statistically reliable— the best way to allow differentiated instruction to occur is to lower class size.

And let us not forget Liebman’s cowardly run out the back door of City Hall in order to escape parents and hundreds of petitions collected by Time Out from Testing — even though City Council Education Chair Robert Jackson had specifically requested that he leave through the front door of the chambers after he testified, so that he could receive the petitions with the respect that they deserved. A perfect emblem of his three years at DOE.

Saturday, September 20, 2008

The absurd NYC school grade system


There is now abundant evidence to show that the grades that DOE awards schools – their so-called “progress reports” -- are meaningless and rely on chance, like throwing dice, because they rely predominantly on whether a school’s test scores last year were above or below the year before.

Sixty percent of the grade depends on so-called “progress,” which means annual gains or losses in test scores; 25% on the test scores themselves; and only 15% on survey results. As a result, this year, many schools went from failing grades to grades of A, and just as occurred last year, some highly regarded schools received “Fs.”
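The arithmetic of that weighting can be sketched in a few lines. The 60/25/15 weights come from the formula described above; the component values and the 0-100 scale are my own illustrative assumptions:

```python
def composite_score(progress, performance, environment):
    """Weighted composite as described in the post: 60% 'progress'
    (one-year change in test scores), 25% performance level, 15% surveys.
    All inputs are on an assumed 0-100 scale; the values fed in below
    are hypothetical, not real school data."""
    return 0.60 * progress + 0.25 * performance + 0.15 * environment

# Hold performance and surveys fixed; let only the volatile
# one-year 'progress' component swing, as annual score changes do.
bad_year  = composite_score(20, 90, 70)
good_year = composite_score(80, 90, 70)
print(bad_year, good_year)
```

Nothing about the school changed except the noisy one-year "progress" number, yet the composite swings by 36 points out of 100, easily the difference between an A and an F under any fixed grading curve.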

Last year, in a Daily News oped entitled “Why parents & teachers should reject the new school grades,” I pointed out that research shows that 30-80% of the annual gains or losses in test scores are random – and sure enough, this is how the grades turned out. Last week, Daniel Koretz, professor at Harvard and a national expert on testing, wrote:

“My advice to New Yorkers is to … ignore the letter grades and to push for improvements to the evaluation system… It does not make sense for parents to choose schools, or for policymakers to praise or berate schools, for a rating that is so strongly influenced by error.”

And yet according to the DOE, eighteen schools that received a D or an F last year have new principals this year – showing how lousy decisions are being made based upon these inherently unreliable measures.

As Ellen Foote, the principal at IS 89 (a school that received a “D” last year despite being selected as the only NYC middle school to receive an award for its achievements from the federal government), points out:

Last year’s grades often reversed the state’s opinion of schools… She knew of one District 2 middle school that got a B last year but was on the state’s list of schools that need improvement. Under No Child Left Behind, many students in that school transferred to I.S. 89, a Blue Ribbon school. That meant the students transferred from a B school to a D school. This year, both schools received an A.

Indeed, as Eduwonkette has written, the proportion of NYC schools receiving an “F” that are in good standing according to federal or state government standards is larger than the proportion of schools receiving an “A”.

“How do you reconcile those discrepancies?” Foote said. “How do parents make sense of that? It just is all over the place — it’s such a disservice to schools and to parents…. It’s so simplistic at best and confusing and probably invalid at the worst, at a very high cost in terms of money, resources and morale.”

The smaller the school or the number of students tested, the more unreliable its annual gains or losses; and as this post in Gotham Schools reveals, the smaller the school, the more likely it received an extremely high or low grade this year.
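That pattern is exactly what sampling error predicts: an average over fewer students is noisier. A quick sketch makes the point; the sample sizes, noise scale, and "extreme" threshold below are all assumed for illustration, not drawn from DOE data:

```python
import random

def extreme_rate(n_students, n_schools=2000, threshold=0.2, seed=1):
    """Fraction of simulated schools whose average score change exceeds
    `threshold`, when every student's change is pure noise (mean 0, sd 1),
    i.e. no school actually improved or declined at all.
    All parameters are illustrative assumptions."""
    rng = random.Random(seed)
    extreme = 0
    for _ in range(n_schools):
        avg = sum(rng.gauss(0, 1) for _ in range(n_students)) / n_students
        if abs(avg) > threshold:
            extreme += 1
    return extreme / n_schools

for n in (50, 200, 800):
    print(f"{n:>4} students: {extreme_rate(n):.1%} of schools look 'extreme' by chance")
```

With pure noise and no real change at all, roughly one in six of the 50-student schools crosses the "extreme" threshold, while essentially none of the 800-student schools do.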

In response to this sort of criticism, Jim Liebman, the head of the DOE accountability office, said last spring that this year’s progress score would be based on two years of data instead of one – which would have improved its reliability – yet, for some reason, he went back on his word.

Why? To my knowledge, the DOE has never explained.

Rather than making things better, Liebman made the grade even more unreliable by increasing the importance of the “progress” score to account for 60% of a school's grade, an even larger percentage -- up from 55% last year. Why? Again, this has not been explained.

Was this to take advantage of last year’s anomalously high increases in scores? Were Klein and Liebman themselves gaming the system to ensure that 80% of schools would receive “A”s or “B”s --so that they could claim great credit for their so-called but illusory improvements? Who’s to know?

The idea that 80% of NYC schools could be rated “A” or “B” is in itself absurd, given the fact that our school system has among the highest class sizes in the nation and among the lowest graduation rates. This is grade inflation of the highest order, apparently crafted so the administration can pat itself on the back.

I wish those reporters who uncritically reported on the $20 million teacher bonus program on Friday – which also relies on improving a school’s score in the unreliable progress category – would now dig a little deeper, to examine how these cash rewards were likely awarded randomly as well.

Saturday, August 2, 2008

New report on special education and D75 schools

See this new report, "Improving Special Education in NYC's District 75," about the multiple problems with the separate special education district known as D75, which also reveals how dysfunctional the entire system of special education is in NYC public schools. The study was carried out by specialists and administrators from other large school districts, under the aegis of the Council of the Great City Schools, and recommends keeping the special nature and services of D75 intact, but integrating them into the rest of the school system more effectively and efficiently.

The report reveals:
• a far higher proportion of NYC special education students in separate, segregated settings than in other districts throughout the state;
• pervasive conflicts between the principals of D75 schools and of the regular schools in which they are located, over the way D75 students are deprived of access to essential facilities;
• how low-achieving students are referred to D75 placements as the time for state assessments grows near, so their school of origin won’t be saddled with their test scores;
• how D75 students have few opportunities to be incorporated into the regular school’s classes or to interact with the rest of the student population, even when located in the same building;
• the way in which the new schools started by DOE commonly exclude and discriminate against D75 students;
• the manner in which the Leadership Academy graduates principals with no apparent interest or training in improving outcomes for these students;
• the inappropriate practice of referring to D75 schools eighth grade students who have been retained multiple times, as well as high school students with few course credits and students with “challenging behavior”;
• and the failure of DOE to include “anything in the accountability system pertaining to incentives or sanctions for the achievement of students with disabilities.”

Here's just one meaty excerpt, detailing yet another inadequate aspect of the school progress reports, as designed by Columbia Law Professor James Liebman, head of DOE's Accountability office:

Community schools can be recognized for the exemplary performance of students with disabilities (as well as other high-need students), but the process does not recognize the differences among students with disabilities ranging from relatively small-impact speech-language impairment to challenging sensory, emotional, or cognitive impairments.

A school can gain only three extra credit points on its overall score for exemplary gains among high-need students, such as those with disabilities.

• The Progress Report does not appear to take into consideration the extent to which students—

Are referred to other schools just prior to state assessments

Are enrolled by community schools following a District 75 placement

Are included and supported in the general education program

Are given access to general education programs and activities, if they are in District 75.


But check out the whole report; it is an excellent if rather depressing read.

Tuesday, December 18, 2007

Tweed's consultation with parents, or lack thereof

Mike Meenan of NY1 had an excellent story last night about the Chancellor’s failure to consult with any Community Education Councils before closing schools in their districts – contrary to the language and the spirit of state law, which says the following:

"The chancellor shall consult with the affected community district education council before... substantially expanding or reducing such an existing school or program within a community district.”

Yet Klein never bothered to do so in this case – nor in previous years. James Liebman said that these decisions were made because of long-term educational failure over many years -- yet several of the schools being closed are in good standing with both the state and federal government.

Liebman also claimed in his testimony to the City Council that he had conferred with many experts, union officials, and other “educational and community leaders” who provided “valuable feedback” during the development of the school grading system. Several of those cited in his testimony contradicted him later that same day, including Ernest Logan, the President of the principals’ union, Amanda Gentile, a VP of the UFT, and Ann Cook from the Consortium for Performance Standards.

His testimony also included the statement that his office had “directly reached over 20,000 parents in conversation about explanation of the Progress report and associated accountability tools.”

Yet according to an article in today's NY Sun, these sessions included visits to Laundromats and trips on the subway! It turns out that DOE employees had handed out flyers about the parent surveys at many of these places -- but there's no evidence that any real “consultations” or “conversations” about the school progress reports took place, and it’s not clear how many of the 20,000 people they supposedly contacted were actually parents.

Check out this list, provided by the DOE, which includes not only visits to laundromats, but also a dentist’s office (each time supposedly involving 50 parents), and assorted trips on the #1, #2, #3, and #4 trains – and just so the IND didn’t feel left out, also the E, F, G, V, and R trains.

Many of these direct “conversations” appear to have occurred during trainings of parent coordinators – who after all, are not necessarily parents themselves.

In short, another incredible product of the unaccountable Accountability office at Tweed.

Monday, December 17, 2007

Voices in opposition to the school grading system grow louder

Diane Ravitch has an oped in today's NY Sun about the new school grades:

Is the grading system accurate and reliable? Did the grading system identify the worst schools? Is the closure of the lowest-performing schools likely to improve public education? Could the Department have taken other actions that might have been more effective than closing schools?

The answer to all of these questions, she suggests, is no. Diane also provides an important critique of the whole notion that simply closing schools is the best way to make significant progress:

Nor is it enough to turn out the lights. Schools are not a franchise operation. They are deeply embedded community institutions. They should be improved with additional resources, smaller classes, and additional training for educators. The starting point in reforming schools is to have a valid evaluation system that correctly identifies the schools that need extra help. It may not be easy to transform the schools that are in trouble, but if we want a good public education system, there really is no alternative.

Indeed, this is an essential element of the school reform process for which Tweed no longer feels accountable -- their responsibility to provide the support and resources schools need to improve.

See the show on PBS about the NYC school grading controversy, including parents and principals at some of the schools that got low marks, and one that got high marks, talking about the meaning and impact of these grades. The show also includes an interview with the Chancellor, in which he attempts to explain the "F" that PS 35, a Staten Island neighborhood school, received despite having 98% of students at grade level in math, by comparing it unfavorably to the Anderson School – a highly selective gifted and talented school.

The interviewer, Rafael Pi Roman, points out that William Sanders, the father of value-added accountability systems, told him that the sort of one-year test score gains that the NYC grades are based upon are not meaningful. Klein responds that the school grade is nevertheless a positive motivational factor in getting schools to work harder on improving test scores.

You can also listen to audio clips from the City Council hearings on the school grades from December 10, now posted on YouTube:

Public Advocate Betsy Gotbaum, who points out that closing schools unilaterally, as the Chancellor has done, without first consulting Community Education Councils is potentially illegal.

City Council Education Chair Robert Jackson (Part 1, Part 2, and Part 3), who aggressively questions James Liebman on many issues, including whether the DOE reached out to parents sufficiently.

Council Member Lew Fidler of Brooklyn, who flunks the school grades for their lack of transparency. (Part 1 and Part 2.)

And Council Member John Liu, who is masterful in showing that these grades are derived primarily from the results of only two tests -- though Liebman keeps trying to argue that these are really "multiple assessments" given out over "multiple days." (Part 1 and Part 2.)

Finally, watch the Channel 2 news segment featuring the hearings and showing Liebman fleeing from parents, now also posted on YouTube.

UPDATE: see also this article in City Limits:
PARENTS, COUNCIL STILL ANGRY ABOUT SCHOOL GRADES

Friday, December 14, 2007

Mark Weprin's City Council Testimony

The following is Assemblyman Mark Weprin's magnificent testimony before the City Council Education Committee on Dec. 10:


Good morning. I am Mark Weprin and I represent the 24th Assembly District in Eastern Queens. As a father of two public school students and a champion of New York City public schools, I submit the following testimony to the New York City Council on the subject of the New York City Department of Education (DOE)’s recently released school progress reports.

The progress reports are an attempt to inform the public about the
performance of New York City public schools. While I agree with DOE’s
focus on academic excellence, I take issue with its methodology and its
failure to fully explain the assessments to the public. The grades,
which were supposed to provide parents with valuable information, have
mostly generated confusion, and the media has exacerbated the situation
with fuzzy terminology: DOE’s Progress Reports have been regularly
referred to as report cards, which is a misnomer. The grades are meant
to show schools’ progress – which is not the same as school quality –
and they do not achieve even that much. While I support evaluating
public schools, I believe that DOE’s recent attempt falls far short of
its goals.

The first problem is that the category of “student progress” accounted
for fifty-five percent of a school’s grade, and the DOE equated student
progress with changes in test scores from one year to the next. So a
school in which the students scored the same for two years in a row is
considered to have shown no progress, even if most students did well
both years, while a school in which the students’ test scores increased,
even if they remained low, gets points for improvement. This method of
grading unfairly penalizes high-performing schools such as those in
Eastern Queens.

Even worse, DOE’s definition of academic progress is based on the idea
that high-stakes standardized tests accurately assess how much students
have learned, but there are several reasons to doubt that premise. As I
have often stated, the extreme emphasis on test preparation has taken
away from real learning in classrooms across the City. So if the
students in a school increased their test scores from one year to the
next, their “improvement” is just as likely to be a result of excessive
test preparation drills as a reflection of academic progress. And if
higher test scores stem from more time spent on test preparation, they
may in fact indicate that less learning has taken place.

On the other hand, a decrease in test scores could mean that a few
students were not feeling well on the day of the test, or that they
happened to choose the wrong answers on a couple of multiple choice
questions. If students’ scores went down from third grade to fourth
grade, maybe it’s because the third graders take each State test for two
days while the fourth graders spend three days per test. (New York’s bar
exam is only two days.) Test scores can decline for a number of reasons,
but the change does not mean that students and teachers in a school are
suddenly performing at a lower level than they did the previous year.

I also have serious reservations about the surveys of parents, students,
and teachers that the DOE used to evaluate the portion of a school’s
grade that reflects “school environment.” Every community has a few
naysayers who are always full of criticism. Unfortunately, they are the
most likely to submit surveys and to influence others to share in their
negativism. Such individuals can have a disproportionate impact on the
school’s grade.

The blatant inconsistencies in the grades reveal how ridiculous they
really are. Some schools that did well on their Quality Reviews did
poorly on the Progress Reports; some schools that were listed as among
the most persistently dangerous in New York received A’s and B’s from
DOE. What are parents to think when they receive such contradictory
information?

I have no qualms about the concept of issuing progress reports for New
York City schools. Any institution that uses taxpayer dollars must be
accountable to the public. But a single letter grade cannot possibly
represent everything the public needs to know about a school and its
progress. Fair evaluations would take into account student safety,
parent involvement, teacher qualifications, art and music offerings, and
the school’s learning environment. Feedback from parents and teachers
should come from large groups of survey respondents who filled out clear,
intuitive questionnaires. Most of all, we should not rely on scores from
high-stakes standardized tests. Changes in test results from one year to
the next do not reveal what we really need to know about our schools:
how hard teachers and principals have worked and how much students have
learned. The Progress Reports are not report cards, and the DOE grades
simply are not accurate assessments of our schools.

Assemblymember Mark S. Weprin
56-21 Marathon Parkway
Little Neck, New York 11362
Telephone (718) 428-7900
Facsimile (718) 428-8575
weprinm@assembly.state.ny.us

Wednesday, December 12, 2007

James Liebman on the run

On Monday, at the City Council hearings on the school grades, James Liebman, the chief accountability czar and former law professor, faced fierce criticism from Council Members. No wonder; his testimony was evasive, full of misleading statements and outright errors.

Liebman went on to make many questionable statements, among them that a school at which "hundreds of children on average lost 10 percent of a proficient level in a year almost certainly has a significant problem."
Experts say, to the contrary, that one year's gains or losses in test scores at the school level are 34-80% random, and unrelated to the amount of learning taking place.
Liebman also claimed that factors related to overcrowding and class size were taken into account when devising the grades, when they clearly weren't.
In his testimony and PowerPoint, he claimed that he had consulted with many groups and experts in devising these grades, including the United Federation of Teachers, the Council of Supervisors and Administrators (the principals' union), CPAC, Community Education Councils, and the NY Performance Standards Consortium.

Ann Cook, the co-chair of the Consortium, later testified that this was untrue. Her group had asked for and gotten a meeting about the interim assessments, but the topic of the school grades never even came up.

Ernest Logan, president of the CSA, also denied that he had ever been consulted, and laughed when Council Member Robert Jackson asked him this question. (See this letter from Logan to the Chancellor about the many flaws in the school grades.) The UFT VP, Aminda Gentile, said they had "conversations" with the DOE about the school grades, but there was no consultation.

Betsy Gotbaum, the Public Advocate, also criticized the unreliability of the school grades, and said that the Chancellor's decision to close schools without first consulting Community Education Councils is against the law. She cited the state law (2590-h), which says that the Chancellor has the authority to:

Establish, control and operate new schools or programs…or…discontinue any such schools and programs as he or she may determine; provided however, that the chancellor shall consult with the affected Community District Education Council before substantially expanding or reducing such an existing school or program within a community district. (The law is posted here.)

Yet, she added, this has not happened in this case. "And the truth is, I can't think of an example where it has happened."

When asked by the chair of the Education Committee, Robert Jackson, Liebman admitted that CECs had not been consulted before the announcement to close schools. Instead, they had been consulted afterwards, "entirely consistent with the process that has applied for the last several years."

Did he believe that parents should be consulted? Liebman said that the process that was used "was sufficient and adequate and very comprehensive."

Jackson said this response was "totally unacceptable," and that if this was the direction the Chancellor was going, he was in "big trouble." Liebman also claimed that the method he used was very "transparent," with very "clear rules," and that the results of the Quinnipiac polls showed that parents understood the methods used. (!!)
Liebman kept returning to the results of this poll in his defense, though it turns out that only 143 public school parents were polled.

City Council Member Lew Fidler was concerned that stigmatizing schools with failing grades would likely accelerate the decline of these schools, rather than help them improve. Melinda Katz said it best: in her 14 years as an elected official, she's never seen an agency so sure it's right when all the parents she has spoken to believe it's wrong.

John Liu was very effective, repeatedly asking Liebman whether 85% of each school's grade was not based on just a single measure: the results of a test taken once a year. Liebman kept evading the issue, saying these grades were not based on one measure but actually on "many measures" from a "series of assessments" that take place over a "series of days," and that each assessment "cuts across many hundreds of different items, and many skill areas." Liu pointed out that it's still only one test!

Finally, Liebman blurted out, "Life is one test," and everyone booed. Liu concluded that not only was Liebman trying to obfuscate, but that his entire testimony was an obfuscation.

At the end of Liebman's three-hour testimony, the Chair, Robert Jackson, politely requested that he step outside the hearing room to receive petitions from Time Out from Testing and Class Size Matters, signed by nearly 7,000 parents, calling for a halt to the school grades. (Thanks so much to those of you who signed.)

In preparation, we filed out in an orderly fashion (see above photo from the NY Times), but rather than confront us directly, Liebman slipped out a side door and out the back exit of City Hall, running away from us like a thief in the night as we tried to catch up. He then entered the private gates to Tweed, but refused to let us in.

Liebman’s flight from parents was captured on video on many of the nightly news shows. As Lisa Donlan was quoted as saying in the Daily News, all this is symbolic of DOE’s arrogant and dismissive attitude. "He wouldn't even stay to hear our questions ... after we sat for three hours and listened to his testimony."

Here is an excerpt from today’s Times story, “Defending School Report Cards, Over a Chorus of Boos”:

“Mr. Liebman, whose title is chief accountability officer of the Education Department, ducked out a side door, leaving parents to chase him out the back of City Hall to behind the Education Department’s headquarters at Tweed Courthouse.

There, several education officials ran in circles for several minutes to avoid Jane Hirschmann, the director of Time Out From Testing, an advocacy group, as well as parents and reporters.”

Later, in a phone interview, Liebman claimed to the Times reporter that "he had not deliberately avoided the parents." This claim is about as trustworthy as the school grades themselves.

See also the Daily News article, Escape from NY parents, the CBS newsclip here, and NY1 here.

The CBS story repeats the erroneous statement that Liebman has met with Time Out From Testing "many times"; in fact, according to Jane Hirschmann, head of the group, he has refused to ever meet with them.

I also gave testimony posted here about how unfair, inaccurate and destructive these school grades are, and entered into the record the comments criticizing the school grades from many of you, including parents, teachers, and at least one retired principal, that were posted online at our petition.

Update: Erin Einhorn of the NY Daily News pointed out today, in "Only in NY can schools get an 'A' & 'F'," that of the 26 SURR schools on the state failing list, nine got As or Bs.

"The city can do whatever they want to do, but at the end of the day, I think the public deserves better," said Merryl Tisch, the vice chancellor of the state Board of Regents and a longtime supporter of Mayor Bloomberg.