The Princeton Review described its much-derided NYC interim assessments as "formative": these are the tests that the DOE paid millions for and later rejected. See this Princeton Review presentation:
"Interim Assessment with Instructional Impact: How to use the formative, low-stakes testing system to support teaching and learning in ..."
The subsequently renamed "periodic assessments," also known as "Acuity," which the Accountability office under James Liebman contracted out to McGraw-Hill at $22 million annually, are commonly characterized as "formative" by Liebman et al. See this recent PDF document from the DOE Accountability office:
"Periodic Assessments support schools by providing ... formative, instructionally valuable feedback to support differentiation of instruction, determination of professional development needs, and selection of instructional resources."
See the long list of periodic assessments now required in all NYC public schools in the chart above.
Unfortunately, they appear to be lying to a very eager clientele at the DOE.
Thanks for posting this. Very timely and relevant.
The links to the BOE PDF and the Princeton Review don't appear to be working. Can you repost the links?
Thanks!
I have reposted the links, but they are also here:
Princeton Review:
http://schools.nyc.gov/daa/InterimAssessments/ela-math/NYC%20Interim%20Assessment.pdf
DOE:
http://schools.nyc.gov/NR/rdonlyres/8A63337A-7EA7-4A3E-89AB-675547C3AC20/37808/20082009_DYO_Periodic_Assessment_Policy1.pdf
There is no reason why testing companies cannot produce formative tests. The fact that formative tests are usually designed by teachers does not mean that they have to be.
In fact, given the poor ability of most teachers to design good tests, owing to a lack of training in what makes for good test design, a testing company could likely do a much better job.
The problem is that these tests being sold as formative are not. Repackaging summative tests as formative tests does not make them so. Trying to use them as such does not make them so.
This is another aspect of one of the huge problems with modern testing: the misuse of tests. That is, tests designed for one purpose are being used for another. For example, tests supposedly designed to judge schools are being used to judge teachers, and tests designed to judge students in a broad way are being used to judge schools.
Is this a result of testing companies trying to oversell their products and maximize profit? Perhaps, but I have spoken to enough of them not to believe it. They, too, are very concerned about misuse of their tests. At the same time, there is a growing appetite/demand for tests that they do not have ready to sell. In response to this demand, they are mis-selling, leading to misuse.
Let's be clear here. This is a demand-side problem, not a supply-side problem. It's not the pushers; it's the junkies.
No one is twisting the testing companies' arms; rather, districts are banging on their doors, and the companies ARE answering.
No one wishes more than I do that testing be done wisely and well. But, as a former teacher and someone who has studied educational measurement, I am under no illusions that teacher-made tests are very good products, either.
Didn't Photo Anagnostopoulos develop the Acuity tests? Could this possibly be a conflict of interest?
Yes, see this from the McGraw-Hill website:
http://www.mcgraw-hill.com/releases/education/20060329.shtml
"We designed Acuity for teachers and classroom use," said Photeine Anagnostopoulos, president of McGraw-Hill Digital Learning. "The Acuity Diagnostic Benchmarks are tailored to district curriculum pacing and assess student retention and knowledge of core content areas. Diagnostic reports show specific mistakes students make so teachers can target instruction to improvement needs -- a powerful way to accelerate student performance and help educators meet achievement goals."
"Multiple forms of the Acuity benchmarks are administered within a classroom period every two months to monitor student progress. These benchmarks are on a common scale so that student growth can be observed within and across the grades of a content area. Empirically-based predictions to expected performance on the state NCLB tests become increasingly accurate over the academic year with the administration of each Predictive Benchmark form."