D.C. Voucher Students: Higher Graduation Rates and Other Positive Outcomes

July 28, 2010

Jason Richwine, Ph.D.
Senior Research Fellow

Congress put school vouchers to the test in 2004 when it authorized the D.C. Opportunity Scholarship Program (DCOSP), a federally funded voucher program serving low-income students in the nation’s capital. The program has awarded $7,500 scholarships to more than 3,700 students over the past six years.

Congress mandated a formal evaluation of the program, and researchers hired by the Department of Education have now released their latest report. Its findings should help determine the future of DCOSP: without reauthorization from Congress, the program will expire after next year.

What the Facts Demonstrate

Among the report’s key findings:

  • Parental satisfaction. School satisfaction was higher among parents of voucher students.
  • School safety. Parents of voucher students were more likely to describe their children’s schools as safe and orderly.
  • Graduation rates. Voucher-using students achieved a graduation rate of 91 percent, compared to 70 percent for non-voucher students.
  • Test scores. On reading tests, voucher students scored slightly higher (by 0.13 standard deviations[1]) than non-voucher students, but the difference is not statistically significant.[2] DCOSP did not produce any gains in mathematics scores.

These results should be considered in light of the study’s methodological quality and its consistency with past findings.

Quality of Methodology

Because parents, teachers, or the students themselves must elect to participate in programs like DCOSP, participants tend to be different from non-participants in terms of ability, motivation, family background, and many other variables. An essential part of any program evaluation is to avoid mistaking these initial differences for the effect of the program itself. To do this, evaluators need a control group that is as similar as possible to the students who participate in the program.

The DCOSP evaluation uses the best possible control group, one constructed from a random lottery. Among 5,547 eligible applicants, 3,738 were randomly selected to receive a voucher. The evaluation then compared students who used a voucher with those who were denied one by random chance.[3]

A lottery is the “gold standard” of evaluation methods, producing results that deserve the most attention. If statistically significant differences between participants and non-participants emerge from this strict comparison, policymakers can be confident that the program in question has had an impact.

Without a lottery, the next most desirable evaluation method is careful matching of participants and non-participants on as many background variables as possible. Since researchers can never account for every variable, matching is less reliable than the lottery method, but it can still be informative when performed carefully. Recent examples of effective matching include the evaluations of the Milwaukee Parental Choice Program and the Edgewood Voucher Program in Texas. Each came to conclusions similar to those of the DCOSP report.[4]

Less rigorous studies use raw comparisons or insufficient matching of participants and non-participants. These evaluations are rarely informative. A recent example is a comparison of voucher and non-voucher students in New Orleans published by the non-profit group Educate Now.[5] That comparison involved no matching at all, making its results uninterpretable.

Consistency with Past Findings

Increases in graduation rates and parental satisfaction are frequent findings in the school choice literature. Recent examples are the aforementioned Edgewood and Milwaukee evaluations, which used matching techniques. As the most rigorous evaluation to date, the DCOSP study can be viewed as confirming the positive effects on graduation and parental satisfaction observed in other voucher studies.

In prior years of this evaluation series, DCOSP did produce small reading gains that reached statistical significance, but the gains were insignificant in this round. Test scores are notoriously hard to raise through any intervention: increasing funding for public schools (through class size reduction, teacher training, stricter certification requirements, and so on) also rarely results in significant test score improvement.[6]

Policy Implications

Because Congress funded the DCOSP and mandated the evaluation, the results outlined above—greater parental satisfaction, safer schools, and higher graduation rates—should inform upcoming policy debates. But if scholars and policymakers focus on DCOSP’s modest test score effects, they may overlook the broader benefits of school choice.

Even holding test scores constant, improving high school graduation rates has a substantial positive impact on students’ earnings later in life.[7] And given the higher levels of parental satisfaction produced by DCOSP, test scores are clearly only one factor parents consider in evaluating schools.

In fact, parents probably understand the limitations of social policy better than most academics and policymakers do. Rather than obsessing over elusive test score gains, parents seem to have a more nuanced and child-specific set of criteria: They want schools that are safe, cultivate a positive attitude about learning, and best fit their children’s abilities and interests. Only school choice programs can satisfy these diverse preferences and expectations.

The Big Picture

In summary, the DCOSP evaluation uses the gold standard of scholarly rigor and reliability, and its findings corroborate past school choice studies. Among the positive outcomes of DCOSP are greater parental satisfaction—especially regarding school safety—and a significant increase in high school graduation rates. As the DCOSP evaluation makes clear, school choice offers real benefits to students and their families.

Jason Richwine is Senior Policy Analyst in the Center for Data Analysis at The Heritage Foundation.



[1]Different tests are graded on different scales, which makes it difficult to interpret the size of gains. For this reason, scores are often reported in terms of standard deviations (SD), which puts results from different tests on a consistent scale. One SD is roughly equivalent to the difference between scoring 650 on the SAT in math and scoring 760. In terms of raising test scores, effect sizes between zero and 0.2 SDs are generally considered small, effects between 0.2 and 0.4 SDs are moderate, and effects greater than 0.4 SDs are large.
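
In formula terms, an effect size in SD units is the difference between the two groups’ average scores divided by the standard deviation of scores. This is a sketch of the conventional standardized-mean-difference definition; the evaluators may apply further statistical adjustments:

\[ d = \frac{\bar{x}_{\text{voucher}} - \bar{x}_{\text{non-voucher}}}{s} \]

Here \(\bar{x}\) denotes each group’s average test score and \(s\) the standard deviation of scores; the 0.13 reading effect cited above is this quantity.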

[2]A “statistically significant” finding is one that is highly unlikely to have occurred by chance. A finding significant at the 99 percent level, for example, is one that random chance would have produced only 1 percent of the time. The minimum level typically used by statisticians to establish significance—and the one required by Congress for DCOSP—is 95 percent. All of the findings mentioned in this memo meet that requirement, except where noted.
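
In conventional notation, significance at the 95 percent level corresponds to a p-value below 0.05, which for a two-sided test requires the estimated effect to be large relative to its statistical uncertainty. The following is a sketch of the usual criterion, not necessarily the evaluators’ exact procedure:

\[ \left| \frac{\hat{d}}{\mathrm{SE}(\hat{d})} \right| > 1.96 \quad \Longleftrightarrow \quad p < 0.05 \]

where \(\hat{d}\) is the estimated effect and \(\mathrm{SE}(\hat{d})\) its standard error. The 0.13 SD reading gain did not clear this bar in the current report.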

[3]Throughout this memo, a “voucher student” is someone who used a voucher, and a “non-voucher student” is someone in the lottery who was not offered a voucher. The distinction is important because not everyone offered a voucher actually uses it. Results for voucher users indicate the extent to which students benefited when they took advantage of the DCOSP. Results for students offered a voucher (regardless of whether they used it) give a sense of the community-wide impact of the DCOSP. Deciding which set of results to emphasize is a classic dilemma in program evaluation.
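
A standard way to relate the two sets of results, assuming that students who were not offered a voucher could not use one, is the so-called Bloom adjustment: scale the effect of the voucher offer up by the share of offered students who actually used their vouchers. This is a sketch of the general technique, not necessarily the evaluators’ exact method:

\[ \text{Effect on voucher users} = \frac{\text{Effect of the voucher offer}}{\text{Fraction of offered students who used a voucher}} \]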

[4]Patrick J. Wolf, “The Comprehensive Longitudinal Evaluation of the Milwaukee Parental Choice Program: Summary of Third Year Reports,” School Choice Demonstration Project, April 2010, at http://www.uark.edu/ua/der/SCDP/Milwaukee_Eval/Report_14.pdf (July 28, 2010); John Merrifield and Nathan Gray, “An Evaluation of the CEO Horizon, 1998-2008, Edgewood Tuition Voucher Program,” University of Texas at San Antonio, August 31, 2009, at http://faculty.business.utsa.edu/jmerrifi/evp.pdf (July 28, 2010).

[5]Leslie Jacobs, “RSD Schools Significantly Outperform Voucher Program,” Educate Now, June 25, 2010, at http://educatenow.net/2010/06/25/rsd-schools-significantly-outperform-voucher-program/#more-74 (July 27, 2010).

[6]See Eric A. Hanushek, “The Failure of Input-Based Schooling Policies,” The Economic Journal, Vol. 113 (February 2003), pp. F64–F98, at http://web.missouri.edu/~podgurskym/Econ_4345/syl_articles/hanushek_failure_of_input_EJ_2003.pdf (July 28, 2010).

[7]Samuel Bowles, Herbert Gintis, and Melissa Osborne, “The Determinants of Earnings: A Behavioral Approach,” Journal of Economic Literature, Vol. 39 (2001), pp. 1137–1176.
