Fred Smith, testing expert and fellow Leonie Haimson Skinny Award winner, describes another problem in NY State achievement test results:
I believe the opt-out movement is viable and capable of growth in NYC, even though we have a mayor and chancellor who are advocates of mass testing in grades 3-8.
The Grade 6 ELA results for New York City are screwy. They strike me as a weak link in Questar's testing chain. The percentage of students deemed proficient this year is 48.9%. It was 32.3% last year. That's a 16.6-point difference, or a shift from nearly one-third to one-half of the city's (65,000) sixth graders now deemed "proficient." In no other grade is the year-over-year change more than 8.0 points.
Surprisingly, differences of the same magnitude hold for all ethnic groups.
[I know we were warned not to compare the 2018 results directly with the 2017 results. Still, that's a singular difference, since the same publisher, Questar, produced both tests under a $44 million, five-year contract with SED.]
[Chart: NYC ELA Percent Proficient by Grade, 3-8]

[Chart: NYC Grade 6 ELA Percent Proficient by Race/Ethnicity]
And how does this useless testing program serve educators who are judged by such inexplicable data, and who must design programs to meet the academic needs of students based on such shaky (as in meaningless) information?
An outcome like this is an example of why we need to have timely information about how the items on the examination functioned. Yet, SED and DOE have not provided data at their disposal that would shed light on the matter. Instead, NYC parents are expected to march their children off to the deadening testing drumbeat for the next three years uninformed about the workings of the exams.
We must figure out a way to demand and obtain the information hidden behind the curtain of the test questions.* If SED and the DOE are unwilling to disclose the facts, this would give impetus to a citywide campaign that builds on the reported four percent (4%) opt out rate and escalates it in 2019.
*The information consists of item-level statistics that SED and DOE routinely keep. It would allow multiple-choice items and constructed-response questions to be studied to see how students answered them. For M-C items, we should have classical item analysis data on the percentage of students selecting each option. For CRQs, we should have the percentage of students receiving each score from trained raters. Having both sets of information would give us a picture of the response and scoring distributions generated by students and lead us to evidence-based insights into the quality of the exams. SED and the City must already have such overall data; they also have, or should be able to produce, the same statistics by subgroup (for ELLs, students with disabilities, and students by race/ethnicity), which would give us further understanding.
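The item-level statistics described above are simple to compute once the raw responses are available, which underscores that withholding them is a choice, not a technical obstacle. A minimal sketch, using entirely hypothetical response data (the function names and sample answers below are illustrative, not SED's or Questar's actual format):

```python
from collections import Counter

def option_percentages(responses):
    """Classical item analysis for one multiple-choice item:
    the percentage of students selecting each answer option."""
    counts = Counter(responses)
    total = len(responses)
    return {opt: round(100 * n / total, 1) for opt, n in sorted(counts.items())}

def score_percentages(scores):
    """For one constructed-response question: the percentage of
    students receiving each rater-assigned score point."""
    counts = Counter(scores)
    total = len(scores)
    return {s: round(100 * n / total, 1) for s, n in sorted(counts.items())}

# Hypothetical answers from ten students to one M-C item (options A-D)
# and rater scores on one CRQ (0-2 points).
mc_item = ["A", "C", "C", "B", "C", "D", "C", "A", "C", "C"]
crq_item = [2, 1, 0, 1, 2, 1, 1, 0, 2, 1]

print(option_percentages(mc_item))   # -> {'A': 20.0, 'B': 10.0, 'C': 60.0, 'D': 10.0}
print(score_percentages(crq_item))   # -> {0: 20.0, 1: 50.0, 2: 30.0}
```

The same tallies, run separately for each subgroup, would yield the breakdowns by ELL status, disability status, and race/ethnicity that the footnote calls for.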
(If you agree, please post and share the above with allies and potential allies in places I am incapable of reaching.)