Highlights

  • A new study by David Rea and Tony Burton, using an extensive set of data about various programs’ efficacy, fails to support the idea that programs targeting kids are systematically more cost-effective.
  • In terms of the benefit-to-cost ratios we’re so focused on, how long are people being followed in evaluations, and are we always measuring outcomes the right way?

For about two decades, the economist James Heckman has been advancing a striking narrative about human capital. Programs that invest in disadvantaged young children, he claims, have a much higher payoff than do similar investments in disadvantaged adults. Indeed, in his reading of the evidence, investments in adults “past a certain age and below a certain skill level” don’t produce enough in benefits to exceed the costs at all. To take the canonical example, he believes preschool programs deliver far more rewards, per dollar spent, than job training.

Thus the “Heckman Curve” is downward-sloping. The older the person, the less the benefit of trying to improve their human capital.

This is intuitive in some ways: The younger someone is at the time of an intervention, the more years he has left to benefit from it, and we tend to think of kids as “impressionable.” But it’s counterintuitive in another: The effects of an intervention often wear off over time, and the outcomes we care about most—crime, employment, unintended pregnancy, etc.—occur long after preschool age. And a new study by David Rea and Tony Burton, using an extensive set of data about various programs’ efficacy, fails to support the idea that programs targeting kids are systematically more cost-effective.

Before getting to the new research, it’s worth noting that both sides of the preschool-is-awesome/job-training-is-worthless claim are disputable. The effectiveness of preschool is in fact hotly contested, with Grover Whitehurst of the Brookings Institution being a leading skeptic. In a fascinating recent report, Whitehurst advanced a theory that the early-childhood environment merely needs to be “good enough”: we can gain a lot by helping the most deprived kids, but beyond a certain minimal level of safety and stimulation, kids’ environments don’t make all that much of a difference to their later outcomes. Thus, preschool interventions can help, but to a far more limited extent than many think.

It’s also important to note that a good deal of Heckman’s faith in preschool, reflected in the high value of early-childhood interventions he reports, stems from just two interventions, the Abecedarian and Perry Preschool Projects. These experiments produced long-term improvements so strong that many find them implausible; they may have suffered from methodological difficulties (such as incorrect randomization of which kids were given the treatment); and because they were very small, very focused, and very expensive, they may not scale up to a nationwide level.

Meanwhile, job training may not be generally useless: some newer programs have had decent results. In fact, a recent report on child poverty from the National Academies included one program, WorkAdvance, on its list of solutions. WorkAdvance is a “sectoral” training approach, meaning the program works with employers to get people into job sectors where employment opportunities are strong. There’s good evidence that men in particular benefit, though we could use a lot more research on women, as well as longer-term follow-up to see how long the program boosts employment and earnings.

Anyhow, to get at the Heckman Curve’s core assertion in a more comprehensive way, the new study relies on a database maintained by the Washington State Institute for Public Policy. This is a compilation of information about various interventions, and conveniently it includes estimated benefit-to-cost ratios gathered from the relevant academic research.

Here’s how the institute itself describes its methods (there’s a technical report as well):

First, we systematically assess all high-quality studies from the United States and elsewhere to identify policy options that have been tested and found to achieve improvements in outcomes. Second, we determine how much it would cost Washington taxpayers to produce the results found in Step 1, and calculate how much it would be worth to people in Washington State to achieve the improved outcome. That is, in dollars and cents terms, we compare the benefits and costs of each policy option. It is important to note that the benefit-cost estimates pertain specifically to Washington State; results will vary from state to state. Third, we assess the risk in the estimates to determine the odds that a particular policy option will at least break even.
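
To make that third step concrete, here is a minimal sketch, with purely illustrative numbers, of how a break-even risk calculation of this kind can work; this is not the institute’s actual code or model:

```python
import random

def break_even_probability(benefit_mean, benefit_sd, cost_mean, cost_sd,
                           n_draws=100_000, seed=1):
    """Estimate the odds that a program at least breaks even (benefits >= costs).

    A toy Monte Carlo in the spirit of the institute's risk analysis:
    treat the benefit and cost estimates as uncertain (here, normally
    distributed) and count the share of simulated draws in which the
    program pays off. All inputs are illustrative, not WSIPP's.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_draws):
        benefit = rng.gauss(benefit_mean, benefit_sd)
        cost = max(rng.gauss(cost_mean, cost_sd), 0.01)  # keep costs positive
        if benefit >= cost:
            wins += 1
    return wins / n_draws

# Hypothetical program: mean benefit $3,000 (SD $2,000), mean cost $1,500 (SD $300).
print(break_even_probability(3000, 2000, 1500, 300))  # roughly 0.77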

Within this data set, it turns out there isn’t much of a systematic relationship between the value of an intervention and the age of the people it targets.

As Rea and Burton write:

The average benefit cost ratios for interventions targeted at those aged 5 years and under are lower than for other age groups. However, it is important to note there are large standard errors for many of the estimates, and the difference is not always statistically significant. At a minimum the data suggests that interventions targeted at young children do not have higher rates of return than those targeted at older age groups.
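
To see what Rea and Burton mean by large standard errors, here is a minimal sketch, with invented numbers, of the kind of comparison involved: average the benefit-cost ratios within each target-age group and ask whether the differences between groups exceed the noise.

```python
import statistics

# Invented benefit-cost ratios, grouped by the age band each program targets.
# The real figures come from the WSIPP database Rea and Burton analyze.
bcr_by_age_group = {
    "0-5":   [2.1, 0.8, 5.3, 1.2, 3.0],
    "6-17":  [4.5, 1.9, 0.6, 7.2, 2.4],
    "18-65": [3.8, 2.2, 6.1, 0.9, 4.4],
}

for group, ratios in bcr_by_age_group.items():
    mean = statistics.mean(ratios)
    # Standard error of the mean: sample SD over the square root of n.
    se = statistics.stdev(ratios) / len(ratios) ** 0.5
    print(f"{group:>6}: mean BCR = {mean:.2f}  (SE = {se:.2f})")
```

With toy numbers like these, the under-5 average comes out lowest, but the standard errors are wide enough that you can’t confidently rank the groups at all, which is exactly the authors’ point.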

It would be an overstatement to say this debunks Heckman’s narrative, especially the idea that early-childhood development is incredibly important. The authors themselves stop short of saying that, and as Andrew Gelman notes, some of the benefit-to-cost ratios included in the study seem preposterously high, raising questions about the underlying data.

Interested readers can go through the numbers estimate-by-estimate here. I was especially troubled by a ratio of 94.89:1 for “growth mindset interventions”—“psychological interventions that encourage students to believe that intelligence is malleable and can be changed with experience and learning,” which apparently cost $40 but deliver $3,765 in benefits—given that such interventions proved weak in an enormous meta-analysis last year. In the institute’s defense, however, its methods have been extensively peer-reviewed, as described in the technical document linked above.
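
For context, a benefit-to-cost ratio is just estimated benefits divided by costs, so the reported figures can be checked directly (the small mismatch with the published 94.89 presumably reflects rounding in the per-participant dollar amounts):

```python
benefits, costs = 3765, 40   # per-participant dollar figures as reported
print(benefits / costs)      # 94.125 -- close to the published 94.89:1;
                             # the gap presumably reflects rounding
```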

But this certainly throws some cold water on the Heckman Curve. It also encourages us to think a little more deeply about what the Heckman Curve means. Even if we do find a systematic relationship between the age ranges an intervention targets and its cost-effectiveness as estimated in academic research, what can we truly conclude from it?

To raise just a few of the difficulties inherent in evaluating a claim like this, there’s the question of which interventions are even being tried and with how much enthusiasm, since you can’t plot the effectiveness of interventions that aren’t being tried. Maybe we’re failing to help some age groups because we’re not trying hard enough or haven’t hit on the right ideas (“sectoral” job training is one candidate), not because those age groups are a lost cause. Or maybe we try too hard with some groups, funding ideas that aren’t that promising to begin with. Or maybe researchers are willing to “tweak” their methods a little more to justify programs they like, and which programs they like depends on the folks targeted.

And don’t forget Whitehurst’s argument that some children might benefit far more from interventions than others, which raises another issue: the degree to which a given age group appears helpable might also be a function of how well programs are targeting the most helpable people within them. Rea and Burton highlight the same thing regarding efforts to reduce youth offending: “While early prevention programs may be effective at reducing offending, they are not necessarily more cost effective than later interventions if they require considerable investment in those who are not at risk.”

Further, in terms of the benefit-to-cost ratios we’re so focused on, how long are people being followed in evaluations, and are we always measuring outcomes the right way? Some interventions, such as moving a child to a new neighborhood, might have little impact at first yet improve outcomes years down the line. Other times, an effort to increase test scores might increase softer skills instead, or getting a dad a job might also improve his kids’ outcomes, which the researchers may or may not have thought to measure. And are we “discounting” the benefits we expect in the future too much or too little?
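
That last question matters more than it may seem. Here is a minimal sketch, with made-up numbers, of how the discount rate alone can swing a benefit-to-cost ratio for a program whose benefits arrive years after its costs:

```python
def present_value(cash_flows, rate):
    """Discount a stream of future benefits back to today's dollars.

    cash_flows[t] is the benefit received t years from now; a higher
    discount rate shrinks far-off benefits more aggressively.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical intervention: costs $1,000 today and pays $200 a year in
# years 10 through 19 (say, higher adult earnings after a childhood program).
cost = 1_000
benefits = [0] * 10 + [200] * 10

for rate in (0.00, 0.03, 0.07):
    bcr = present_value(benefits, rate) / cost
    print(f"discount rate {rate:.0%}: benefit-cost ratio = {bcr:.2f}")
```

With these made-up numbers, the very same program looks like a clear winner at a 0% discount rate (a ratio of about 2.0) and a money-loser at 7% (about 0.76).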

Most of these issues can’t be addressed in a completely satisfactory way. Maybe this entire project—of plotting the “effectiveness” of whatever interventions happen to have been tried, as measured by one’s necessarily subjective assessment of the available studies, against the ages of the people the interventions target—is not helpful. There may not be a systematic relationship between the two at all, as the new study suggests, and even if there is one, it may mean any number of things.

It would be better to look at the results of each program by itself, paying very close attention to the methods of the studies used to evaluate it, and increasing or reducing funding from there—and continuing to experiment with promising ideas targeted at all ages.

Robert VerBruggen is a research fellow at the Institute for Family Studies and a deputy managing editor of National Review.