
If You Control the Test, You Control the Results (Dave Johnston)

As I started reading this Sacramento Bee column by Peter Schrag, I was feeling pretty good. I thought that Peter and I were on the same page regarding the "Getting Down to Facts" study. Then Peter took a turn out of nowhere that landed us on opposite sides of the street. He wrote:

But its iteration of California’s weak student test scores vis-à-vis other states on NAEP, the National Assessment of Educational Progress, ignores the big gains California has made on the very standards and tests it expects to drive the system.

In the years between 1999, when California’s accountability system was put in place, and 2005, California’s lowest ranking elementary schools — the bottom 10 percent — gained 190 points on an 800 point scale, exceeding the academic scores of the state’s midrange schools in 1999. All schools in the ninth decile in 2005 would have been in the top rank in 1999. More improvements are expected when new rankings are released next week.

Similarly, in the period from 2003 to 2006, the number of students rated proficient in algebra, regarded as essential to later success, rose from 103,000 to 163,000; in biology the number went from 124,000 to 174,000.

What the heck? Peter is so wrong on this one. He’s being so selective with his statistics that it made me wonder if he was Jack O’Connell’s long-lost brother.

First, the Academic Performance Index (API) is not a good year-to-year measure of success. Actually, it isn’t a good snapshot measure of success either, but that’s a diatribe for another day. The API gets adjusted each year by the California Department of Education (CDE). They change the weights of tests, add tests, drop tests, change confidence intervals, and make a myriad of other "adjustments" that make it impossible to fairly compare one year to the next. That’s why CDE uses a weird, two-phase approach for setting scores, with a Base API built on last year’s tests and a Growth API built on this year’s tests, and re-adjusts that base every single year. CDE itself used to insist that you shouldn’t compare APIs from year to year, despite doing exactly that to show "growth."

The API, as a composite score, doesn’t reflect the achievement of individual groups of students from ethnic minorities or students living in poverty. The API also awards the greatest score increases for moving students from the far below basic to the below basic category. So schools can show large API improvement without getting a single new student to grade level.
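To see why the bottom bands dominate, consider how a composite, weighted-average score behaves. This is a minimal sketch, not CDE’s exact formula: the band weights below (200/500/700/875/1000) are the commonly cited API performance-level weights, and since CDE adjusted weightings over the years, treat them as illustrative.

```python
# Sketch of how an API-style weighted-average score rewards movement at
# the bottom of the scale. Weights are the commonly cited API
# performance-level weights -- illustrative, not CDE's exact formula.
WEIGHTS = {
    "far below basic": 200,
    "below basic": 500,
    "basic": 700,
    "proficient": 875,
    "advanced": 1000,
}

def api_score(band_counts):
    """Weighted average of students across performance bands."""
    total = sum(band_counts.values())
    return sum(WEIGHTS[band] * n for band, n in band_counts.items()) / total

# A school of 100 students, all far below basic
before = {"far below basic": 100}
# Move 40 students up one band -- still not one of them at grade level
after = {"far below basic": 60, "below basic": 40}

print(api_score(before))  # 200.0
print(api_score(after))   # 320.0
```

Because the far-below-basic-to-below-basic gap (300 points) is the widest in the table, this school gains 120 points of composite score with zero students proficient, which is exactly the distortion described above.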

In the "Getting Down to Facts" report, the report that looked at schools who were outperforming other schools chose to use the California Standards Test (CST) and California High School Exit Exam (CAHSEE) as their measures of student success because they found that the API was a "moving target." So as far as I’m concerned, Peter’s claim of "big gains" can’t be supported by improvement in the API.

Now, on to this idea that a rising number of proficient students is a good measure of success. That might be true if the number of students tested remained the same year after year. That’s simply not the case.

I’m not sure where he got his Algebra I figure above, but DataQuest shows growth from 106,235 proficient students in 2003 to 162,676 in 2006. Of course, the number of students tested went from 505,883 in 2003 to 707,285 in 2006. The actual proficiency rate went from 21% in 2003 to 23% in 2006. That two-percentage-point gain doesn’t sound nearly as exciting as an extra 55,000 proficient students. Too bad an extra 200,000 students were taking the test in 2006.
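The rate arithmetic above is easy to check against the DataQuest counts cited:

```python
# Algebra I figures cited above, from DataQuest:
# proficient students and total students tested
prof_2003, tested_2003 = 106_235, 505_883
prof_2006, tested_2006 = 162_676, 707_285

rate_2003 = prof_2003 / tested_2003
rate_2006 = prof_2006 / tested_2006

print(f"{rate_2003:.1%}")  # 21.0%
print(f"{rate_2006:.1%}")  # 23.0%
```

The raw count of proficient students grew by about 53%, but once you divide by the (much larger) number of test-takers, the rate barely moves, which is the whole point.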

So, there are two examples of getting the results you want by controlling the test in this story. First, CDE’s "adjustment" of the API each year allows them to get the growth that they need in order to show how well California’s schools are doing. Second, Peter got the results that he wanted, "proof" that California schools are improving, by selectively picking the data.

I believe that the percentage of students at grade level, as shown by proficiency on the CST, is the only fair measure of student success. You can look at the proficiency rate of each subgroup of students and compare it to their peers’ to see whether achievement gaps are closing or widening. You can compare it year to year without worrying about CDE "adjustments." It only makes sense. Now it is up to CDE to quit trying to justify the API, get with the program, and adopt grade-level proficiency as the sole accountability measure in California.

Dave Johnston lives in Ukiah, California. This previously appeared on his blog, Friends of Dave.
