Our tests are flunking

Posted on June 15, 2007 | Type: In the News | Author: Robert Robb

What do we know about how Arizona students are doing?

After reviewing a flurry of recent national and state studies, the best answer appears to be: not much.

The Center on Education Policy tried to answer the question of whether the student testing provisions of No Child Left Behind were increasing student learning. It examined test-score trends in all 50 states.

The results were moderately encouraging. Student performance on state achievement tests had modestly improved and the trend had accelerated slightly after NCLB.

And were Arizona students part of this moderately positive trend? According to the Center on Education Policy, there were too many changes in the Arizona testing scheme to tell. It cited changes in the passing score, but the test itself changed rather significantly as well.

Meanwhile, a study published this week by the Goldwater Institute's Matt Ladner and psychometrician Gregory Stone raises some serious questions about the state's current achievement test.

There are two things that are important to know about student achievement. You want to know whether students are learning what the state is trying to teach them. In the field, that's known as a criterion-referenced test. And you want to know how students stack up against other students from around the country. That's known as a norm-referenced test.

Arizona used to give two tests to ascertain that information. AIMS was the criterion-referenced test. The Stanford 9 was the norm-referenced test.

To reduce the time spent testing, Superintendent of Public Instruction Tom Horne proposed combining the criterion-referenced test and the norm-referenced test into a single, dual-purpose test.

Some questions come from AIMS. Some norm-referenced questions are drawn from a different test, the TerraNova. Some questions serve both assessments.

The claim is that this single test yields valid criterion-referenced and norm-referenced results.

According to Ladner and Stone, there's no way to know whether that is true. The only way to know would be to test the claim: have students take all three tests and see if the dual-purpose test truly replicates the results from the two stand-alone tests.
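
To make that concrete, here is a minimal sketch, in Python, of the kind of comparison Ladner and Stone have in mind. The scores below are made up for illustration; a real validation would use actual student results on all three tests.

    import numpy as np

    # Hypothetical paired scores for the same students on the stand-alone
    # TerraNova and on the dual-purpose AIMS (illustrative numbers only).
    terranova = np.array([610, 655, 590, 700, 640, 675, 620, 585])
    dual_purpose = np.array([615, 650, 600, 690, 645, 670, 610, 595])

    # First check: do the two tests rank students the same way?
    r = np.corrcoef(terranova, dual_purpose)[0, 1]
    print(f"Correlation: {r:.3f}")

    # Second check: are the score levels comparable, not just the rankings?
    mean_diff = (dual_purpose - terranova).mean()
    print(f"Mean difference: {mean_diff:.1f} points")

A high correlation and a small mean difference would support the claim; anything else would mean the dual-purpose test is not a faithful substitute for the stand-alone tests.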

This sensible validation exercise wasn't done. As a result, according to Ladner and Stone, the dual-purpose AIMS test doesn't really tell us how Arizona students stack up against other students from across the country.

The Goldwater study suggests that we do know that from the National Assessment of Educational Progress results published by the federal Department of Education. Those consistently show Arizona students performing below the national average, in contrast to both the Stanford 9 and TerraNova, which show Arizona students performing above the national average.

The implicit conclusion is that, since the TerraNova results are suspect, we should trust the NAEP and conclude that Arizona students are doing lousy.

Now, I'm boxing well above my weight class here. Both Ladner and Stone are Ph.D.s. I had an undergraduate statistics course more than 30 years ago. Nevertheless, I don't buy it.

Either the Stanford 9 or the TerraNova was given to all Arizona students in all grades every year. The NAEP is given every couple of years in just two grades and to a sample of only around 2,500 students.

My memory of that statistics course is dusty, but I don't recall a theory under which a sample was regarded as more representative than the whole set.
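
For what the arithmetic is worth, survey statistics can say how precise a sample of 2,500 is. Here is a back-of-the-envelope sketch, assuming simple random sampling (a simplification; NAEP's actual multistage design produces somewhat wider margins):

    import math

    # 95% margin of error for an estimated proportion (say, the share of
    # students scoring "proficient") from a simple random sample.
    n = 2500   # approximate NAEP sample size in a state
    p = 0.5    # worst-case proportion, which gives the widest margin
    z = 1.96   # z-value for 95% confidence

    margin = z * math.sqrt(p * (1 - p) / n)
    print(f"95% margin of error: +/- {100 * margin:.1f} percentage points")

That works out to roughly plus or minus 2 percentage points, while a census of every student carries no sampling error at all for the students actually tested.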

Moreover, the extent to which Arizona lags behind on the NAEP is exaggerated. On average scale scores, Arizona lags on fourth-grade results but has largely caught up by eighth grade, particularly if demographic groups are compared: White students to White students, Latino students to Latino students.

In reality, the NAEP doesn't reveal a wide range of differences between the states in educational achievement.

Moreover, the federal agency that administers the NAEP recently reviewed all the state achievement tests to determine how they compared to the NAEP standards. Arizona did OK, with passing scores set slightly above the NAEP standard for basic achievement.

I apologize for the tedium of this discussion. However, Arizona has invested a lot in establishing an accountability-through-testing regimen. Knowing how students are doing is critically important. There is mounting evidence that accountability-through-testing can produce at least modest improvements in student performance.

Finding out at this stage that, apparently, Arizona is still largely flying blind is terribly distressing and frustrating.
