About No-Spin

No-Spin Evidence Review is a project of the Coalition for Evidence-Based Policy. The Coalition is a nonprofit, nonpartisan organization that is unaffiliated with any social program, enabling us to serve as an impartial reviewer of the evidence. Funding for No-Spin is provided by Arnold Ventures, a philanthropic organization. (In cases where No-Spin reports on a study that Arnold Ventures funded, we note this fact in our evidence report.)

Questions? Please contact Brontë Forsgren, Director of Evidence-Based Policy (bforsgren@evidencebasedpolicy.org).   

No-Spin's Review Process

We systematically monitor the literature for rigorous program evaluations, with a particular focus on sizable randomized controlled trials (RCTs) because, when feasible and well conducted, they are considered the strongest method of evaluating program effectiveness. We select studies to summarize and report on based on factors such as policy importance, study duration (i.e., whether the study measured more than just short-term effects), publication in a leading journal, and level of press or policy attention.

To gauge the validity of a study’s findings and the accuracy of its reporting, we look at factors such as whether – 

  • The study had an adequate sample size – one large enough to detect meaningful effects of the program (see the power-calculation sketch after this list).
  • The treatment and control groups were highly similar in their baseline characteristics.
  • The program was well implemented in the treatment group, with high program take-up rates, and few or no control group members received the program.
  • The study used well-established, valid outcome measures, rather than (for example) tests of educational or psychological outcomes developed by the researchers for purposes of the study.
  • The study measured outcomes that are of policy or practical importance (e.g., employment and earnings, in the case of a job training program), and not just intermediate outcomes (e.g., completion of a training credential) that may or may not predict important outcomes.
  • The study obtained outcome data for a high proportion of the sample members originally randomized (i.e., the study had low sample “attrition”) and this proportion was similar in the treatment and control groups (i.e., attrition was not “differential”; see the attrition sketch after this list).
  • The study, in estimating the program’s effects, kept sample members in the original group – treatment or control – to which they were randomly assigned (i.e., it used an “intention-to-treat” analysis).
  • The study publicly preregistered one or a few primary outcomes, as well as its primary analysis methods, which then serve as the basis for any strong claims about program effectiveness in the reporting of study results.
  • In reporting results, the study characterizes effects on other (non-primary) outcomes as exploratory in nature – i.e., not reliable until confirmed in future studies – per established standards (e.g., those of the Institute of Education Sciences and the U.S. Food and Drug Administration).
  • In reporting on the program’s effects, the study includes the size, duration, and statistical significance of each effect, using significance tests that account for key features of the study design (e.g., whether individuals or groups were randomly assigned, and the greater potential for false findings if the study has multiple primary outcomes; see the multiple-testing sketch after this list).
  • The study abstract reports the effects on the study’s primary outcome(s) and on other (non-primary) outcomes consistent with the hierarchy described above; and identifies any factors that could cast doubt on the validity of the study’s findings.
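
To illustrate the sample-size criterion: below is a minimal power-calculation sketch in Python, using the statsmodels library. The effect size, power level, and significance level are illustrative assumptions, not values No-Spin prescribes.

    # Illustrative power calculation: sample size needed per group to detect
    # a standardized effect of 0.20 with 80% power at a two-sided alpha of 0.05.
    # (All numbers here are assumptions chosen for illustration.)
    from statsmodels.stats.power import TTestIndPower

    n_per_group = TTestIndPower().solve_power(
        effect_size=0.20,         # standardized mean difference (Cohen's d)
        alpha=0.05,               # two-sided significance level
        power=0.80,               # chance of detecting a true effect this size
        ratio=1.0,                # control group the same size as treatment
        alternative="two-sided",
    )
    print(f"Required sample size per group: about {n_per_group:.0f}")  # ~394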
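
To illustrate the attrition criterion: a short sketch, with hypothetical counts, of how overall and differential attrition are computed from the originally randomized sample.

    # Hypothetical counts, for illustration only.
    randomized = {"treatment": 500, "control": 500}    # originally assigned
    followed_up = {"treatment": 460, "control": 420}   # with outcome data

    attrition = {g: 1 - followed_up[g] / randomized[g] for g in randomized}
    overall = 1 - sum(followed_up.values()) / sum(randomized.values())
    differential = abs(attrition["treatment"] - attrition["control"])

    print(f"treatment attrition: {attrition['treatment']:.0%}")   # 8%
    print(f"control attrition: {attrition['control']:.0%}")       # 16%
    print(f"overall attrition: {overall:.0%}")                    # 12%
    print(f"differential attrition: {differential:.0%}")          # 8%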
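
And to illustrate the multiple-testing point: a sketch, with hypothetical p-values, of a Holm adjustment (one standard method among several) that accounts for the greater chance of a false finding when a study tests several primary outcomes.

    from statsmodels.stats.multitest import multipletests

    # Hypothetical unadjusted p-values, one per primary outcome.
    p_values = [0.012, 0.030, 0.041, 0.250]

    reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
    for raw, adj, sig in zip(p_values, p_adjusted, reject):
        print(f"raw p = {raw:.3f} -> adjusted p = {adj:.3f}, significant: {sig}")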