My wife and I are graduates of the specialized high schools, and our daughter currently attends one. We are supporters of public schools and would not want to see these schools weakened. But every student, regardless of race, gender, or ethnicity, deserves a properly vetted system for determining who is admitted to these schools. And that's not what the NYC Department of Education currently provides.
I recently published a study -- High Stakes, but Low Validity? A Case Study of Standardized Tests and Admissions into
Using test results from the 2005 and 2006 administrations of the Specialized High Schools Admissions Test (SHSAT), which is the sole determinant of admission, I found a number of glaring violations of widely accepted educational testing standards and practices.
For example, thousands of students of all backgrounds are rejected with scores that are statistically indistinguishable from those who are admitted. And the NYC Department of Education fails to provide estimates of how well the SHSAT is able to differentiate among students who score close to the admission/rejection line, or whether other criteria could be used to reduce these uncertainties. I made several requests for this information to senior officials at the NYCDOE, to no avail.
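The effect of measurement error near a cutoff can be sketched in a few lines of Python. All numbers below are hypothetical, since the DOE does not publish the SHSAT's standard error of measurement (SEM) or cutoff scores; the point is the standard psychometric rule that two observed scores are statistically indistinguishable when their difference is small relative to the standard error of the difference between them:

```python
import math

def indistinguishable(score_a, score_b, sem, z=1.96):
    """Two observed scores are statistically indistinguishable when their
    gap is within z standard errors of the difference score. For two
    independent scores with the same SEM, that standard error is
    sqrt(2) * SEM."""
    se_diff = math.sqrt(2) * sem
    return abs(score_a - score_b) < z * se_diff

# Hypothetical numbers: an SEM of 15 scaled-score points.
rejected = 555  # just below a hypothetical cutoff of 560
admitted = 565  # just above it
print(indistinguishable(rejected, admitted, sem=15))  # True: a 10-point gap is well within noise
```

By this conventional rule, a student a few points below the cutoff cannot be reliably distinguished from one a few points above it; with an SEM of 15, gaps of roughly 40 scaled points or less fall inside the noise band.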
Different test versions are used, but no details are provided about how these versions are statistically equated or how accurate that equating is (again, despite requests, and in violation of testing standards and practices). The scaled scores vary across versions more than chance alone can explain, which suggests the equating may not be leveling the playing field across versions of varying difficulty; students who received certain versions may be more likely to gain admission than those who received others.
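What equating is supposed to accomplish can be sketched with the simplest textbook method, mean-sigma linear equating; this is an illustration with made-up numbers, not the DOE's actual (undisclosed) procedure:

```python
def mean_sigma_equate(x, mu_x, sigma_x, mu_y, sigma_y):
    """Map a score x from form X onto form Y's scale so that the two
    forms end up with matching means and standard deviations
    (mean-sigma linear equating, a standard textbook method)."""
    return sigma_y / sigma_x * (x - mu_x) + mu_y

# Hypothetical forms: form X is harder (lower mean score), so a raw 60
# on X should map to a higher equivalent score on the easier form Y.
print(mean_sigma_equate(60, mu_x=55, sigma_x=10, mu_y=62, sigma_y=10))
```

Even this simplest method requires publishing the form statistics it relies on; without them, there is no way to check whether a given raw score on a hard form and an easy form really land in the same place on the common scale.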
The SHSAT exhibits an unusual scoring feature that is not widely known and may give an edge to those who have access to expensive tutors. A student with a very high score in math and a relatively poor score in English, or vice versa, has a better chance of admission than one with relatively strong performances in both. Alternative scoring systems would yield markedly different results, and no evidence is offered to support the current system.
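The mechanics of that quirk can be illustrated with a hypothetical conversion curve. The actual SHSAT raw-to-scaled tables are not published, but the reported behavior (extra raw points are worth more scaled points at the extremes) is what any convex conversion produces:

```python
def scale(raw):
    """Hypothetical convex raw-to-scaled conversion: each extra raw
    point is worth more scaled points the further the score sits from
    the middle of the range. This is an illustrative stand-in, not the
    real SHSAT table."""
    return raw + 0.01 * (raw - 50) ** 2

def composite(math_raw, verbal_raw):
    """Composite = sum of the two scaled section scores."""
    return scale(math_raw) + scale(verbal_raw)

# Same total raw points (149), very different composites:
lopsided = composite(99, 50)   # very strong math, middling verbal
balanced = composite(75, 74)   # strong in both
print(lopsided > balanced)     # the lopsided student outranks the balanced one
```

Under any convex conversion like this, concentrating points in one section beats spreading the same points across both, which is exactly the edge available to students coached on how the scoring works.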
No predictive validity studies have ever been done, not only to see whether the test suffers from prediction bias across genders and ethnic groups, but also to see whether the test is linked to any desired outcomes. In fact, the NYCDOE has never established what specific, measurable objectives the SHSAT is supposed to achieve. Without well-specified objectives and carefully constructed validity studies, there is no way to know whether these admissions criteria are serving their purpose, or whether an alternative system would be more reliable.
The SHSAT is widely assumed to produce clear-cut, valid, and equitable results. But many rejected students might have been admitted if they had been assigned a different test version, if the winds of random variation had blown a bit differently, if a slightly different scoring system had been used, or if they had known in advance how the scoring was done.
Of course, no admissions criterion is "perfect." Uncertainty and imprecision are inherent in all decisions, whether based on test scores, grades, portfolios, or a combination of these. Standard psychometric practice is to choose criteria that minimize those uncertainties and let students demonstrate the skills needed to succeed in ways a single standardized test cannot capture.
The only systematic, objective way to do this is by conducting predictive validity studies, as are regularly carried out for tests like the SAT to help refine the test, and help colleges decide how much weight to put on SAT scores, grades, and other factors in their admission decisions. Overwhelmingly, studies have found that multiple criteria, used in tandem, provide a better guide to future student performance than a single one. Indeed, it’s partly because of such validity studies that psychometric standards caution strenuously against using any single metric as the sole criterion for admission, and virtually all educational institutions use multiple criteria to determine admissions decisions.
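What such a study measures can be sketched with synthetic data. Nothing below uses real SHSAT or outcome data; it simply shows, on a made-up cohort, how adding a second predictor (labeled "grades" here) raises the share of outcome variance explained beyond what the test alone achieves:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Synthetic cohort (purely illustrative): the later outcome depends on
# both a test score and prior grades, plus noise.
test = rng.normal(size=n)
grades = rng.normal(size=n)
outcome = 0.5 * test + 0.5 * grades + rng.normal(scale=0.7, size=n)

def r_squared(X, y):
    """Fraction of outcome variance explained by a least-squares fit."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_test_alone = r_squared(test.reshape(-1, 1), outcome)
r2_combined = r_squared(np.column_stack([test, grades]), outcome)
print(round(r2_test_alone, 2), round(r2_combined, 2))
```

In this toy setup the combined model explains roughly twice the variance of the test alone, which is the kind of result actual SAT validity studies report when grades are added to scores; whether the same holds for the SHSAT is precisely the question the DOE has never examined.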
The DOE violates accepted psychometric standards by refusing to provide detailed information about these exams, to carry out any validity studies for them, or even to reveal what the tests are designed to accomplish.
We should press the DOE for answers, and more importantly, to reform the system. Formal predictive validity studies need to be carried out. Based on the results of these studies, a determination should be made as to what admissions process is most likely to achieve a specific, quantifiable admissions goal in a transparent, equitable way.
If these studies conclude that it is best to use additional criteria along with a standardized test, the admissions process should be revised to incorporate them.
As parents, we should bring these issues to the attention of our elected representatives, on the City Council and in the State Legislature. I sent a copy of my study to my City Council member Jessica Lappin, my State Senator Liz Krueger, and State Senator Kenneth LaValle, but have yet to hear back from any of them. This is unacceptable. --Josh Feinman
You can contact Josh for more information at email@example.com