Tuesday, April 7, 2009

Will Kentucky’s New Public School Assessments be Driven by the Right Research?

Last week, the headline story in Education Week's print edition (subscription required) reported some real surprises. These surprises came from the latest experimentally based studies from the US Institute of Education Sciences (IES), which is part of the US Department of Education.

Quite simply, once rigorous split-sample testing of different education programs is conducted, most of those programs show no improvement effects.

The new IES studies covered a number of different topics:

- Assigning student mentors (in general, no positive impacts)
- Elementary school math programs (two showed some improvement, two others didn't)
- The effectiveness of software designed to help teach math and reading (out of 10 programs evaluated, only three showed promise)
- Whether teachers from alternate certification programs did better or worse than regularly certified teachers (finding: no difference)
- Impacts of different professional development programs for teachers on the reading performance of their students (no improvement)

Obviously, these findings caused a real stir in the education world, which until recently hardly ever conducted this sort of rigorous, scientific research. Educators have normally done much less insightful work, conducting unsophisticated studies that cannot actually establish cause-and-effect relationships or prove that anything really works.

Of course, that didn't stop KERA-era educators from telling us time and again that "the research shows" this or that education idea, be it overusing calculators or deemphasizing instruction in fractions, somehow was working wonders in classrooms.

Well, a lot of those ideas were wrong, and the "research that showed" didn't.

That brings us to an important provision in Senate Bill 1, which throws out our CATS assessment and sets up the process to replace it with something hopefully better.

The legislation specifically requires the people who will revise the assessment and its underlying content standards to use evidence-based research. However, most of the research that brought us CATS probably cited some sort of evidence, too. That isn't enough.

What our new assessment's creators need to do is intelligently sift through the mountain of unscientific research on education. They will need to know what real, scientific research looks like, feels like and maybe even tastes like. Then, these working groups will need to mostly ignore all the rest of the "stuff" educators have written. Otherwise, we will just wind up with CATS II.
