What the NCES study says about itself (and some other odds 'n' ends)
This comes directly from the study. First off, from the executive summary, p. v:
When interpreting the results from any of these analyses, it should be borne in mind that private schools constitute a heterogeneous category and may differ from one another as much as they differ from public schools. Public schools also constitute a heterogeneous category. Consequently, an overall comparison of the two types of schools is of modest utility. The more focused comparisons conducted as part of this study may be of greater value. However, interpretations of the results should take into account the variability due to the relatively small sizes of the samples drawn from each category of private school, as well as the possible bias introduced by the differential participation rates across private school categories.
In other words, there’s as much variation within each sector as there is between public and private schools, which makes this overall comparison, in the authors’ words, "of modest utility" (that is, usefulness).
There are a number of other caveats. First, the conclusions pertain to national estimates. Results based on a survey of schools in a particular jurisdiction may differ. Second, the data are obtained from an observational study rather than a randomized experiment, so the estimated effects should not be interpreted in terms of causal relationships. In particular, private schools are “schools of choice.” Without further information, such as measures of prior achievement, there is no way to determine how patterns of self-selection may have affected the estimates presented. That is, the estimates of the average difference in school mean scores are confounded with average differences in the student populations, which are not fully captured by the selected student characteristics employed in this analysis. (emphasis added)
This is particularly important. Far too many people (and, er, unions) want to give these sorts of studies the same weight as a controlled, randomized experiment, when in point of fact such studies just can’t carry that weight. But as Greg Forster points out, the studies (there have been seven in all) that take a hard look at school choice programs by directly comparing students who received vouchers with those who applied for them but missed out conclude that, at worst, school choice doesn’t hurt. In the overwhelming majority of cases, the programs produce substantial benefits across the board, in both standardized test scores and parent satisfaction. The unions tend to sidestep these sorts of findings. I wonder why?
Andrew Coulson pointed out a number of flaws in the study, but one in particular, the lack of any examination of per-pupil spending, caught my eye:
Private school tuition, according to the NCES itself, is about half of the average public school expenditure per pupil. While private schools have some other sources of revenue, they still spend thousands of dollars less per pupil than public schools even after taking these other revenues into account, and so may be dramatically more efficient even if their absolute achievement levels are comparable to those in public schools. Hence it is possible that, if spending were equalized, private schools would raise student learning substantially compared to current levels (while it has been shown that spending and achievement are largely unrelated in the public sector, this has not been demonstrated in the private sector. In fact, evidence from developing countries suggests that higher spending in private schools DOES increase student achievement).
Even assuming that the study’s findings are correct, public schools still come out looking rather bad. They get roughly double the funding of private schools, yet can’t do any better than a statistical dead heat? Given that level of funding, if public schools really are as "outstanding" as NEA President Reg Weaver claims, shouldn’t we expect their NCES scores to be at least somewhat higher?
One final point. Per-pupil spending, church-and-state separation, taking potshots at the "privatization" bogeyman: all these sideshow fistfights tend to obscure a bigger principle, namely the right of parents to make decisions for their children. We all seem incredibly eager (frankly, in some cases a bit desperate) to cling to some statement from the latest policy-wonk wise man of the minute who has descended from on high with a pronouncement. But parents, apparently, just can’t be trusted to know what’s best or most effective for their children, or believed when they say that a given school or teacher just isn’t working for their child. That’s a facet of the debate that, oddly enough, just keeps getting lost in the shuffle.
UPDATE: My mistake–there have been eight studies on the choice programs, not seven. Greg Forster sends a helpful clarification:
The eighth study found no statistically significant difference between voucher users and the control group. But it only achieved this result by flagrantly violating several fundamental rules of social science; what’s more, Paul Peterson has demonstrated that their analysis was not only done scientifically wrong, but that it had to be done wrong in exactly the way they did it in order to produce a null result; if they had gone wrong in any other way they would still have gotten a positive result for vouchers. So that’s pretty suggestive of what was really going on in that analysis. Unfortunately, most people won’t take your word for it if you tell them this, so my usual approach is to treat this study as totally discredited (which it is) and not mention it unless I have to.