Over the past 50 years, there has been a tremendous amount of research into the plight of the nation's poor children. We know that many disadvantaged children, compared with children raised in more fortunate circumstances, enter school with deficits in social and academic skills.
Commenting on the creation of Head Start in 1965—a Great Society preschool program intended to help disadvantaged children catch up to children living in more fortunate circumstances—President Lyndon Johnson asserted, "I believe this response reflects a realistic and a wholesome awakening in America. It shows that we are recognizing that poverty perpetuates itself." Ever since, the federal government has been actively devoted to helping the nation's poor children catch up—spending more than $202.5 billion on Head Start.
Advocates automatically assume that early-childhood education programs, such as Head Start, level the playing field by helping disadvantaged children arrive at school without learning deficits. From time to time, an early-childhood education program will appear to work. When a particular innovative early-childhood education program seems to produce compelling evidence of success, policymakers and advocates of government social programs around the country appropriately take notice.
Based on the clearly limited evaluations of small-scale programs such as the Perry Preschool and Abecedarian Projects, President Obama called for a large expansion in federal funding of early-childhood education programs. In his fiscal 2015 budget proposal, Obama stated: "Research shows that one of the best investments we can make in a child's life is high-quality early education. This year, we will invest in new partnerships with states and communities across the country to expand access to high-quality early education, and I am again calling on the Congress to make high-quality preschool available to every 4-year-old child."
The president's proposal is well-meaning. However, his reasoning rests on the "single-instance fallacy." This fallacy occurs when a person believes that a small-scale social program that appears to work in one instance will yield the same results when replicated elsewhere. Compounding the effects of this fallacy, we often do not truly know why an apparently effective program worked in the first place. So how can we replicate it?
There are good reasons to question the assumption that the federal government can replicate the beneficial outcomes purported to have been caused by the Perry and Abecedarian Projects. Even setting aside the fact that these were not well-implemented random-assignment studies, the evaluations of these small-scale programs are outdated. And despite all the hoopla, the results have never been replicated. In more than 50 years, not a single experimental evaluation of the Perry approach applied in another setting or on a larger scale has produced the same results. The same holds true for the Abecedarian program, which began in 1972.
Simply put, there is no evidence that these programs can produce the same results today. If we really knew how these programs produced success, wouldn't those results have been replicated elsewhere?
In addition, the federal government has a poor track record of replicating successful programs on a national scale. This point is almost never raised by advocates of expanding the federal government's involvement in early-childhood education programs. Just consider what we really know about Head Start and Early Head Start.
Despite Head Start's long life, the program never underwent a thorough, scientifically rigorous evaluation of its effectiveness until Congress mandated one in 1998. Advocates of Head Start asserted that such an evaluation was unnecessary because, in their view, the Perry Preschool Project had already shown that Head Start worked.
After decades of claiming the Perry Preschool Study included all the information the country needed, the Health and Human Services Department commissioned a study of a nationally representative sample of Head Start centers that randomly assigned almost 5,000 children to treatment and control groups. The short-term results measuring the impact of the program at the end of Head Start were published in 2005. In 2010, researchers published findings on the same group of children after they completed kindergarten and first grade. They followed this up with a 2012 study evaluating the students after third grade.
The results have been disappointing. The Head Start evaluation showed that almost all of the benefits of participating in Head Start disappear by kindergarten. Alarmingly, Head Start actually had a harmful effect on 3-year-old participants once they entered kindergarten, with teachers reporting that nonparticipating children were better prepared in math than the children who attended Head Start.
Early Head Start has not fared much better. Created during the 1990s, Early Head Start is a federally funded, community-based program that serves low-income families with pregnant women, infants, and toddlers up to age 3. The program was inspired by the findings of the Abecedarian Project. A pair of larger-scale and better-quality studies, made public in 2005 and 2010, examined the lives of children who attended 17 Early Head Start sites selected by HHS. Researchers evaluated a group of 3,001 treatment and control families when participating children reached age 3 and again in fifth grade.
It's true that by the time participants reached age 3, Early Head Start had a few modest beneficial impacts on children's cognitive and language development and social-emotional growth. However, these modest short-term effects disappeared by fifth grade.
Not all federal government programs are failures. NASA's Apollo program was wonderfully successful. Sending Americans safely to and from the moon was a singular achievement. For that matter, does anyone doubt that the initial creation of the interstate highway system was a success?
However, federal social programs intended to improve human behavior, like Head Start and Early Head Start, have not produced similar successes. The reality is that many Americans have an inflated sense of what the federal government can achieve when it comes to social engineering. In many ways, social engineering is a much more elusive human endeavor than building rockets and paving highways.
Further, the debate over expanding the federal role in early-childhood education is missing a much more significant problem faced by young children. As political scientist Charles Murray aptly pointed out earlier this year, "America has far too many children born to men and women who do not provide safe, warm, and nurturing environments for their offspring—not because there's no money to be found for food, clothing, and shelter, but because they are not committed to fulfilling the obligations that child-bearing brings with it."
The federal government is about as likely to create a social program that solves the dilemma Murray identifies as it is to successfully scale up early-childhood education programs. Yet advocates of increased federal spending on early-childhood education programs ignore the federal government's poor track record in replicating small-scale programs originally thought to be successful.
And here is the problem. Advocates cannot answer the following question with any scientific certainty: Will increased federal spending on early-childhood education programs improve children's futures? Instead, the decision to favor a federal expansion of preschool learning opportunities is most often based on the answer to a less scientifically rigorous question: Will proposing increased federal spending on early-childhood programs make advocates feel that they are making a difference in the lives of children?
The answer to the latter, simpler question is almost certainly yes. Unfortunately, this faulty decision-making process often results in federal boondoggles like Head Start and its sibling, Early Head Start.
For the sake of taxpayers, let's not create more of them.
- David B. Muhlhausen, Ph.D., is a research fellow in empirical policy analysis in the Center for Data Analysis at the Heritage Foundation.
Originally appeared in the National Journal