Wednesday, December 3, 2008

Who’s Really Watching Our Failing School Systems?

Next week the Kentucky Board of Education (KBE) considers progress reporting requirements for Kentucky’s three worst performing school districts (Jefferson County, Christian County, and Covington Independent). Signaling where the worst of the worst problems lie, the Kentucky Department of Education recommends that Jefferson and Christian Counties report to the KBE only twice a year, while the clearly more problematic Covington district will have to report four times a year.

But, is this really sufficient oversight for chronically low performance?

To explore that question, I talked to Rick Loghry, past president of Mason & Hanger – a high-tech firm previously headquartered in Lexington that has extensive expertise and experience in improving production plant management. Loghry was very emphatic that seriously under-performing organizations need much more frequent oversight – monthly – to create an effective turn-around in a reasonable amount of time.

Then, I asked Loghry a key question – given that our highly criticized CATS assessment only provides results for schools and districts annually, and actually only provides a final judgment on schools and districts every other year, what could be used as a basis to make meaningful reports on monthly progress? We both quickly realized that Kentucky’s current school assessment system is totally inadequate to support any sort of meaningful monitoring function, be it monthly, once a quarter, or even once a year.

As things stand, any monitoring in these troubled school systems will have to rely on other indicators and testing programs. However, there has been lots of controversy about how well other tests and measures do, or do not, mirror Kentucky’s curriculum and core content for assessment documents. Who knows if the results from these alternate measures will have any validity?

It’s obvious that Kentucky’s failure to establish an effective school assessment program has implications that run far deeper than questions about how frequently failing schools or districts need to report on progress. It’s the development of the measurement tools to make such reporting meaningful, not how often they are used, that should be the KBE’s first order of business.

2 comments:

Anonymous said...

Richard, you are exactly right that CATS is not a good short- (or even perhaps long-)term monitoring tool. The solution, while not complicated, is also not easy to implement in a district that is so far behind. The state curriculum (ensconced in the Core Content for Assessment and Program of Studies) has to be translated into meaningful learning objectives that can then become the basis of daily teacher lesson and unit plans. Teachers who teach common grades or courses must teach to exactly the same learning targets. High-quality classroom assessments have to be designed that measure student progress toward the learning targets, and again, the assessments have to be common among like grades or courses so every classroom's outcome can be measured against the same common learning targets. Immediate, short-term interventions have to be designed for students who are not making progress toward the standards, and enrichments should be available for students who are.

It sounds more complicated than it is. It is truly not rocket science. But this level of coordination goes directly against the culture of professional autonomy and isolation that teachers and school administrators are used to. If a district has been consistently doing this work all along, it's not that big a deal. Chronically struggling districts are usually years and years behind on this work, and that's why they continue to struggle.

By the way, KDE knows all of this. They could easily force this work to be done in the failing districts in question, but they would have to engage in vigorous monitoring to make sure it happens.

Anonymous said...

Great comments, Gary.

Your suggestions mesh very well with the instructional development courses I took in the United States Air Force, an agency highly focused on good student outcomes (poor student learning caused accidents) and not hampered by misguided arguments about autonomy and isolation.