I have written on several occasions (such as here) about deceptive test score reporting practices in the Jefferson County school system.
For several years, Jefferson County has turned the CATS Kentucky Core Content Test results on their ear, claiming that any child who scores above the “Novice” level is doing “At or Above Grade Level” work.
That is absolutely untrue.
Kentucky’s CATS program has no “At or Above Grade Level” scoring, and no one at the Kentucky Department of Education who is responsible for the assessment says it is credible to claim that anything above Novice indicates At or Above Grade Level performance.
Now, the deception may be expanding.
In response to yesterday’s blog post, “Would you call this ‘Substantial’ progress?,” an anonymous commentator claimed the National Assessment of Educational Progress also reports scores in an “At or Above Grade Level” format.
That is absolutely untrue, as well.
The anonymous commentator cited supposed “At or Above Grade Level” NAEP results that exactly matched the percentage of students who scored at or above the NAEP “Basic” level. While NAEP “Basic” is not tied to CATS “Apprentice” scores in any way, the logical deception used by the anonymous commentator is chillingly similar to the one Louisville’s schools have been using with the CATS scores.
Now, here is a possible key to all this.
As I also recently reported here and here, Jefferson County Public Schools participated as a separate entity in the recent 2009 NAEP Trial Urban District Assessment in Mathematics. The NAEP Proficiency rates were quite low, disastrously so for black students, only seven percent of whom reached Proficient in eighth grade. So, it’s no wonder some in the Louisville school bureaucracy would like to confuse the truth.
Is the Louisville system now trying to redefine the NAEP down just as they did with the CATS results so people in Kentucky’s biggest city will be confused about how bad the performance there really is?
Stay tuned for more.
Tuesday, December 22, 2009
Is the school scoring deception expanding?
11 comments:
Here is the problem. In 1990, the National Assessment Governing Board set and defined three achievement levels, i.e., Basic, Proficient and Advanced, as goals to which the nation should aspire. In 2001, the National Assessment Governing Board made it perfectly clear that NAEP Proficient is more than “grade-level” performance. Also in 2001, the No Child Left Behind Act required states to establish achievement levels with the same labels (i.e., proficient and advanced) but had the states define those labels differently. The No Child Left Behind Act required states to identify grade-level expectations for their students, and to define “proficient” as meeting grade-level expectations (to the satisfaction of an on-site peer-review team assigned by the U.S. Department of Education).
As a result, there are two federal definitions of “proficient,” one for the national assessment and one for state tests. When NAEP reports percent at or above Proficient, it looks at students whose work is more than (or above) grade-level performance. When states report percent at or above proficient, they look at students whose work is at or above grade-level performance. It is pure nonsense to compare NAEP “percent at or above Proficient” to a state’s “percent at or above proficient.” While the Bluegrass Policy Blog is free to ignore the No Child Left Behind Act when convenient, a state must always attend to the law's demands.
RE: Anonymous 22Dec09 at 2:51 PM
This comment just repeats what Anonymous said about another post from several days ago.
As I mentioned in that comment string, I doubt the National Center for Education Statistics would have issued multiple reports comparing NAEP results to state test results if it was pure nonsense to do such comparisons. Anonymous is in very limited company with his or her opinion.
Also, Anonymous made the same accusation before about the Bluegrass Institute ignoring NCLB.
Here, again, Anonymous does not provide anything of substance to back up his or her opinions.
As always, if Anonymous can come up with valid examples where the Bluegrass Institute has erred in our discussions of NCLB, we will be happy to acknowledge that, correct our position and move forward. Our kids and all the people of the Commonwealth deserve that.
Six rejoinders from “several days ago” (format: [what the blog said], followed by my reply):
[1] There has indeed been considerable concern about what passes for proficient work on state assessments like CATS versus what the technical experts who create the NAEP consider to be truly proficient performances.
It is surprising after all this time that the Bluegrass Policy Blog still doesn’t know that “proficient” on the state test had to pass an on-site review conducted by a team of out-of-state experts in curriculum, measurement and statistics before the term could be implemented (a review of technical experts, if you will). The U.S. Department of Education assigned membership to the team, not the state.
[2] We see dramatically different results for state and NAEP testing today because the states violated the trust of Congress to do the right thing for kids versus doing things to protect the status quo for the education bureaucracy.
We see dramatically different results for state and NAEP testing when someone makes apples-to-oranges comparisons between the two tests. The dramatic differences reside in the organizations and individuals who conduct and publish bogus comparisons that “violate the trust of Congress,” not in anything the states have done.
[3] You're entitled to your opinion that it is pure nonsense to compare NAEP and state tests, but that opinion isn't very widely shared.
You totally misrepresent my opinion. It is pure nonsense to compare NAEP apples to state oranges. The NAEP Validity Studies Panel (in the American Institutes for Research), for example, has statistically demonstrated that NAEP Basic is the most appropriate comparison for state proficient. It seems these NAEP experts don’t measure up to the Bluegrass Policy Blog’s credibility standards.
to be continued . . .
[4] For example, NCES has completed several studies that directly compare state results to the NAEP. NCES probably wouldn't have done that if it was inappropriate.
I am unaware of any NCES studies that “directly compare state results to NAEP.” NCES has conducted studies using the NAEP scale to define and compare the “rigor” of the various state standards, and to compare the rigor of the state standards to the NAEP achievement level cut-scores. [NCES mapping study No. 2010-456, for example.] The only clear finding from these NCES mapping studies is that there is little or no correlation between the rigor of a state’s standards and the overall student achievement in the state.
[5] If you try to sell the idea that NCLB required states to adopt low testing standards, I think you will find yourself in very limited company.
My position is that NCLB required states in 2001 to define the term “proficient” by the common meaning of the term, namely meeting the state’s grade-level expectations for each grade (and enforced that requirement via a mandatory review process conducted by curriculum, measurement, and statistics experts). NAEP did not use the common language definition for the term in 1990. In fact, the National Assessment Governing Board stipulated a definition for NAEP that describes some of the best students you know as “Proficient.”
One NCES technical expert responsible for implementing NAEP policy has made it clear, in conjunction with the release of the most recent NAEP mapping study, that "NAEP's policy definition of its 'Proficient' achievement level is 'competency over challenging subject matter' and is implicitly intended to be higher than grade-level performance." Unfortunately, it seems this NCES expert, a senior technical advisor, fails to meet the Bluegrass Policy Blog’s credibility standards.
[6] One last point: how, exactly, do you think the Bluegrass Institute is ignoring NCLB? Please provide some specific examples. Your general accusation won't do.
The Bluegrass Policy Blog, for instance, as illustrated by all of the above, absolutely ignores the NCLB burden placed on the states to define “proficient” differently than NAEP defined “Proficient.” Ignoring this has led to bogus studies (i.e., NAEP Proficient vs. state proficient) that misinform and undermine public dialog by creating and publishing magical “dramatic differences” in student achievement in NAEP and state reports that do not exist in reality.
the end . . .
RE: Anonymous' Other Repetitive Posts
These are also seriously off the mark. Anonymous has a very inaccurate view of the research, who is doing it, and what it really shows.
See my expanded comments in the earlier blog post, “Would you call this ‘Substantial’ progress?”
From the most recent NCES mapping study, here are the "rigor score" for the state test and the "average achievement score" for each state on the eighth grade reading test, both on the NAEP scale. I invite you to run the correlation and see if you get something other than the 0.18 that I computed (a short calculation sketch follows the table). In my world, a Pearson r of 0.18 is very close to no correlation. It seems the NCES study has demonstrated there is little to no correlation between the rigor of a state's standards and the overall achievement of students in the state.
State Rigor Average
Alabama 234 252
Alaska 233 259
Arizona 245 255
Arkansas 249 258
California 261 251
Colorado 230 266
Connecticut 245 267
Delaware 240 265
Florida 262 260
Georgia 215 259
Hawaii 245 251
Idaho 233 265
Illinois 236 263
Indiana 251 264
Iowa 252 267
Kansas 241 267
Kentucky 251 262
Louisiana 246 253
Maine 261 270
Maryland 250 265
Massachusetts 252 273
Michigan 238 260
Minnesota 265 268
Mississippi 251 250
Missouri 272 263
Montana 250 271
Nevada 247 252
New Hampshire 258 270
New Jersey 252 270
New Mexico 248 251
New York 260 264
North Carolina 217 259
North Dakota 251 268
Ohio 240 268
Oklahoma 232 260
Oregon 251 266
Pennsylvania 245 268
Rhode Island 253 258
South Carolina 281 257
South Dakota 249 270
Tennessee 211 259
Texas 222 261
Vermont 263 273
Virginia 239 267
Washington 253 265
West Virginia 229 255
Wisconsin 231 264
Wyoming 247 266
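For anyone who wants to check that figure, here is a minimal sketch (Python, one possible way to do it) of the calculation. The data pairs are simply transcribed from the table above, Alabama through Wyoming, and the pearson_r helper is an illustrative name of my own, not anything from the NCES study.

```python
# Minimal sketch: recompute the Pearson r between each state's "rigor" score
# and its average achievement score (grade 8 reading, 2007, NAEP scale).
from math import sqrt

# (rigor, average) pairs transcribed from the table above, Alabama through Wyoming.
data = [
    (234, 252), (233, 259), (245, 255), (249, 258), (261, 251), (230, 266),
    (245, 267), (240, 265), (262, 260), (215, 259), (245, 251), (233, 265),
    (236, 263), (251, 264), (252, 267), (241, 267), (251, 262), (246, 253),
    (261, 270), (250, 265), (252, 273), (238, 260), (265, 268), (251, 250),
    (272, 263), (250, 271), (247, 252), (258, 270), (252, 270), (248, 251),
    (260, 264), (217, 259), (251, 268), (240, 268), (232, 260), (251, 266),
    (245, 268), (253, 258), (281, 257), (249, 270), (211, 259), (222, 261),
    (263, 273), (239, 267), (253, 265), (229, 255), (231, 264), (247, 266),
]

def pearson_r(pairs):
    """Textbook Pearson correlation coefficient."""
    n = len(pairs)
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson_r(data), 2))  # comes out close to the 0.18 reported above
```

A spreadsheet CORREL over the two columns should give the same answer.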
---------------------------
RE: Anonymous 23Dec09 at 2:50 PM
The correlation is interesting and could raise a number of questions.
One of those is that maybe reliance on testing alone isn't a good way to improve education.
If so, that would be problematic for KERA, because the fundamental philosophy of Kentucky's reform was that change would be driven by the KIRIS and CATS assessments.
It is also possible the correlation just indicates there hasn't been enough time for differences in state standards to have real impacts on the level of instruction and resulting student performance.
NCLB wasn't enacted until December 2001, and it wasn't really implemented by the states for another year or so. The 2007 data Anonymous cites is basically fourth-year evidence.
In that same four-year period, many states altered their testing standards, which further complicates analysis of what the correlation might mean.
Still, this correlation is worth watching as NCLB continues.
I don't have the equipment with me at present to run correlations on the other test results from 2007, so I don't know if the eighth grade reading situation is repeated for math and fourth grade reading.
Anonymous, have you shared this interesting finding with NCES and the technical community?
NCES and the technical community are well aware of this "interesting finding."
Andrew Kolstadt (Senior Technical Advisor in the Assessment Division at NCES) made a PowerPoint presentation via WebEx about "Comparing State Proficiency Standards" in conjunction with the release of the most recent NCES mapping study. Consider two slides from that presentation.
Slide 9: Meaning of "proficient":
//State assessments often define “proficiency” as solid grade-level performance, often indicating readiness for promotion to the next grade.
//NAEP’s policy definition of its “Proficient” achievement level is “competency over challenging subject matter” and is implicitly intended to be higher than grade-level performance.
Slide 14: State Standards Not Related to Students’ Performance
//Now that we’ve looked at where states set their standard for proficiency, it’s important to understand that where a state sets its standard is not necessarily related to how its students perform on NAEP.
//North Carolina [259], South Carolina [257], and Georgia [259] students all scored about the same on NAEP.
//Now let’s look at their NAEP equivalent score. We see that North Carolina [217] and Georgia [215] set their standard for proficiency at about the same level. But, South Carolina, with a NAEP Equivalent score of 281, has a more rigorous standard.
Sorry, slide 14 was about eighth grade reading in 2007.
I just calculated the Pearson r for the other grade-level tests: rigor vs achievement.
Reading Grade 4 - 0.35
Reading Grade 8 - 0.18
Mathematics Grade 4 - 0.36
Mathematics Grade 8 - 0.34
When r is around 0.35, the rigor of the standard "accounts" for less than 15 percent of the variance in achievement, which in my world is "little correlation" between the rigor of the state's standard and overall achievement in the state.
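As a quick illustration of that "variance accounted for" arithmetic, here is a small sketch (Python) that just squares the four correlations listed above; the test labels are my own shorthand.

```python
# Small sketch: squaring each reported correlation gives the share of
# achievement variance "accounted for" by the rigor of the state standard.
correlations = {
    "Reading grade 4": 0.35,
    "Reading grade 8": 0.18,
    "Mathematics grade 4": 0.36,
    "Mathematics grade 8": 0.34,
}

for test, r in correlations.items():
    print(f"{test}: r = {r:.2f}, r^2 = {r * r:.1%}")
# Every value lands at or below roughly 13 percent, consistent with the
# "less than 15 percent" figure above.
```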
Regarding Anonymous' postings in the 8th to 10th comments.
As I mentioned earlier, the correlations are interesting, but they might only show that No Child Left Behind testing hasn't been going on long enough to create a notable change. That doesn't mean such change isn't going on, however.
Also, the "other" correlations in Anonymous' latest comment, while small, are notably larger than the original one for grade 8 reading. So, maybe changes are starting to occur.