Dustin Houchin: ‘Evidence-based’ claims can be dodgy when questioned

We hear the term “evidence-based” in the criminal justice system a lot — as in, evidence-based practices or evidence-based decision-making. The phrase most commonly comes up in reference to programs claiming to reduce recidivism.

“You can rely on our program,” they say. “It’s evidence-based.”

In my experience, what most people running these programs mean by “evidence” is this: They administer a pre-test, send the defendant through their program, then administer a post-test. The defendants tend to do better on the post-test than they did on the pre-test. The only thing this is evidence of, however, is that defendants learn what the program administrators want them to say on the post-test. It is not evidence that the program has changed behavior.
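
To see why that is weak evidence, consider a toy simulation (my own illustration; every number in it is invented). Post-test scores can jump simply because participants learned what answers are expected, while the behavior the program is supposed to change, modeled here as a fixed reoffense probability, stays exactly where it was.

```python
import random

random.seed(1)

N = 1_000
# Invented assumption: each defendant has a fixed 30% chance of
# reoffending, and the program does nothing to change it.
REOFFEND_PROB = 0.30

# Pre-test: defendants average about 40 out of 100.
# Post-test: having learned what administrators want to hear,
# they average about 85. Knowing the "right" answers is the
# only thing that changed.
pre_scores = [random.gauss(40, 10) for _ in range(N)]
post_scores = [random.gauss(85, 10) for _ in range(N)]

reoffenders = sum(random.random() < REOFFEND_PROB for _ in range(N))

print(f"mean pre-test score:  {sum(pre_scores) / N:.1f}")
print(f"mean post-test score: {sum(post_scores) / N:.1f}")
print(f"recidivism rate:      {reoffenders / N:.1%} (unchanged by design)")
```

The test scores “improve” dramatically; recidivism does not move, because the simulation never let it. A pre-test, post-test comparison cannot tell these two worlds apart.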

Some programs, though, have been studied a bit more rigorously than the standard pre-test, post-test maneuver. Even so, upon closer examination, most of these evaluation methods turn out to be unreliable as well. Megan T. Stevenson, a professor of law and economics at the University of Virginia School of Law, points out the many flaws in criminal justice research in a recent Boston University Law Review article.

First, many of these studies are marred because they are performed by people with a vested interest in the outcome. For instance, a study saying that drug testing works may be paid for by a drug testing company. While this does not make the study unreliable per se, it is a conflict of interest, not sound research practice.

Next, many studies are compromised by selection bias. Selection bias occurs when the sample tested does not reflect the relevant population. A researcher may, for instance, include in the study only those individuals who successfully completed the program, then claim the program is effective.
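
A short sketch makes the distortion concrete (again my own illustration, with invented numbers): if the enrollees most likely to stay out of trouble anyway are also the ones most likely to finish, then grading a do-nothing program on its completers alone flatters it considerably.

```python
import random

random.seed(2)

N = 10_000
enrollees = []
for _ in range(N):
    # Invented assumption: each enrollee has an underlying reoffense
    # risk, and the program itself changes nothing.
    risk = random.uniform(0.1, 0.6)
    # Lower-risk enrollees are more likely to complete the program.
    completed = random.random() < (1 - risk)
    reoffended = random.random() < risk
    enrollees.append((completed, reoffended))

def recidivism_rate(rows):
    return sum(reoffended for _, reoffended in rows) / len(rows)

completers = [row for row in enrollees if row[0]]

print(f"recidivism among completers only: {recidivism_rate(completers):.1%}")
print(f"recidivism among all enrollees:   {recidivism_rate(enrollees):.1%}")
```

The completer-only number looks much better, yet the program helped no one; the sample simply filtered itself.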

Another issue is publication bias. When researchers perform a study, they want to get the results published in an academic journal. This desire to get published creates bias in many ways.

One way is that researchers may manipulate the study to get the outcome they want. Professor Stevenson writes:

“In one recent anonymous survey of quantitative criminologists, 39% reported having changed the analysis after an earlier one wasn’t statistically significant and 43% said they failed to report null results. … In other anonymous surveys, some 2%-5% of social scientists reported having gone so far as to falsify data.”
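
The mechanics of “changing the analysis” are easy to demonstrate. In the sketch below (a simulation of my own, not drawn from the survey), a program with zero true effect is analyzed several different ways, as if trying different subgroups, and the researcher keeps whichever comparison clears the conventional p < 0.05 bar. Giving yourself multiple tries sharply inflates the chance of a publishable false positive.

```python
import math
import random

random.seed(3)

def two_proportion_p(hits_a, n_a, hits_b, n_b):
    """Two-sided p-value from a standard two-proportion z-test."""
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (hits_a / n_a - hits_b / n_b) / se
    return math.erfc(abs(z) / math.sqrt(2))

def one_study(n=200, analyses=5):
    """Null program: treated and control both reoffend at 30%.
    The 'researcher' tries several subgroup slices of the same
    data and keeps the smallest p-value."""
    treated = [random.random() < 0.30 for _ in range(n)]
    control = [random.random() < 0.30 for _ in range(n)]
    p_values = []
    for _ in range(analyses):
        t = random.sample(treated, n // 2)  # an arbitrary "subgroup"
        c = random.sample(control, n // 2)
        p_values.append(two_proportion_p(sum(t), len(t), sum(c), len(c)))
    return min(p_values)

studies = 2_000
false_positives = sum(one_study() < 0.05 for _ in range(studies))
print(f"'significant' results from a program with zero effect: "
      f"{false_positives / studies:.1%} of studies")
```

With a single pre-registered analysis, the false-positive rate would sit near 5%; letting the analyst shop among several slices of the same data pushes it well higher.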

A related publication bias problem is that only studies showing a statistically significant causal relationship tend to get published. It has been found that for every study finding a causal effect for a given intervention, there are 19 studies of the same intervention that find no effect. Yet only the single positive result is published; the 19 null results are not. Put simply, if a study shows that a criminal justice program works, there are, on average, 19 other studies of that same program, showing it did not work, that we will never see.
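
One way to understand a ratio like 19-to-1: journals conventionally call a result “significant” when p < 0.05, and at that threshold an intervention with no effect at all will still clear the bar in roughly 1 study in 20 purely by chance, with the other 19 finding nothing. The simulation below (my own illustration of that threshold arithmetic, not Stevenson’s data) runs many honest studies of a do-nothing program.

```python
import math
import random

random.seed(4)

def two_proportion_p(hits_a, n_a, hits_b, n_b):
    """Two-sided p-value from a standard two-proportion z-test."""
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (hits_a / n_a - hits_b / n_b) / se
    return math.erfc(abs(z) / math.sqrt(2))

# Many honest studies of a program with zero true effect:
# both arms reoffend at the same invented 30% rate.
studies, n = 10_000, 400
significant = 0
for _ in range(studies):
    treated = sum(random.random() < 0.30 for _ in range(n))
    control = sum(random.random() < 0.30 for _ in range(n))
    if two_proportion_p(treated, n, control, n) < 0.05:
        significant += 1

print(f"studies finding a 'significant' effect: {significant / studies:.1%}")
print("about 1 in 20 -- and that one is the study most likely to be published")
```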

A final publication bias problem noted by Stevenson is that researchers are rewarded for new and exciting findings. A finding that a program works makes the news. When positive findings have been retested, however, later studies very often fail to replicate the results. This has become such a problem that social scientists refer to it as the “replication crisis.”

Exacerbating the problem, these failure-to-replicate studies often go unpublished or, if they are published at all, appear in more obscure journals. It’s kind of like when a newspaper makes a mistake on the front page and prints the correction on page 10 of a later edition. Under this incentive structure, there is little motive to retest positive claims, even though doing so would likely debunk them.

Given all these flaws in the way criminal justice programs are evaluated, how can we tell which programs work? Well, it’s fairly straightforward: You ignore the junk and evaluate programs against the highest standards available in the research field. Fortunately, there are many published studies that meet such standards, and they show that almost none of the programs work. The common claim that a program is “evidence-based” is not worth much. As judges, legislators and policymakers, we must be more discerning.•

__________

Houchin is the Washington County Superior Court judge in Salem. He also is the publisher of Judex, a Substack newsletter on conservative judicial issues at judex.substack.com. Opinions expressed are those of the author.
