Sunday, April 09, 2006

Troubles at What Works

I was recently pointed to an article by Alan Schoenfeld in Educational Researcher. The article recounts why Schoenfeld resigned from his position as a "senior content advisor" at the What Works Clearinghouse.

It's an ugly story. Basically, Schoenfeld wrote an article mildly critical of the WWC for a special issue of a journal about the WWC. The Department of Education withdrew funding for the journal, so the article never got published.

This is really sad.

First, Schoenfeld's criticism of the WWC was exactly the kind of constructive criticism that they should welcome. He basically said that the WWC needs to focus more on the construct validity of evaluations. (If you don't want to follow the link, the "construct validity" issue is basically about whether the exam used to test students really corresponds to what we mean when we say a student knows mathematics).

For what it's worth, I think Schoenfeld's criticism goes too far. Sure, you can argue that the FCAT (the Florida state exam) isn't a good measure of mathematical knowledge, but it's clear that the FCAT is a measure of what the state of Florida considers mathematical knowledge (or, at least, as close a measure as they were able to produce, given all the constraints on creating the exam). So, sure, warn people that they may or may not care about FCAT scores. But show them how the kids did on that measure.

Except that, well, here's where the politics gets in the way. You see, Schoenfeld's criticism isn't really of the WWC; it's of NCLB itself. The WWC has an easy way to address construct validity. They can get guys like Schoenfeld to evaluate the FCAT (and other exams) and see if it aligns with what NCTM thinks math is about. And you can get Mathematically Correct to evaluate the FCAT (and other exams) and say whether that's what they think math is about. Hell, they could set up a Wiki and let anyone blab on about how great or pathetic a particular exam is.

And then teachers and administrators can go to the WWC and see which curricula seem to do well on what they, personally, think math is about.

But that's not how NCLB works. Teachers and administrators don't get to say what their mathematical goals are. Only the state gets to say that. And the state says it by constructing exams that embody those goals.

So, a school district might like a curriculum that shows strong performance on the FCAT, but if the district's in Maryland, and the curriculum doesn't do well in Maryland, then the district would be foolish to use it.

Bringing up the issue of construct validity just makes this flaw in NCLB too obvious. You just can't have people questioning whether the tests that are at the heart of NCLB accountability are really testing what we say they're testing. And that, I bet, is the Department of Education's real problem with the criticism.

If you'll bear with me, though, here's my final twist. On both the WWC and NCLB fronts, the Department of Education is being way too sensitive. I'll bet the reality is that most exams don't differ too much from each other. Carnegie Learning has been fairly reckless in supporting evaluations on any "reasonable" measure of mathematics - ETS, NWEA, FCAT, SAT, Iowa, whatever. The fact is, there's a core of mathematics in common in these and, I think, we address that core. Sure, we do better on more problem-solving focused exams than skills-based ones, but we tend to do well on all of them. That's our goal. And that should be everyone's goal.
