The importance of good evaluation design, and how the design stage can be used to prevent problems that are much harder to fix using statistical methods at the analysis stage;
Key problems with data collection that can arise when conducting an impact analysis, in particular low survey response rates and social desirability bias;
Internal and external validity, and the extent to which each is threatened by problems with data collection; and
Statistical power: the consequences of having insufficient power, and the importance of estimating the statistical power of an evaluation design at the outset.
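The last point can be made concrete with a back-of-the-envelope calculation. The sketch below uses the standard normal-approximation formula for the sample size per arm of a two-group comparison; the design parameters (two-sided test at α = 0.05, target power 0.80, standardized effect size d) are illustrative assumptions, not figures from the Teen ACTION evaluation itself.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-sample comparison of means.

    Normal-approximation formula:
        n = 2 * (z_{1 - alpha/2} + z_{power})^2 / d^2
    where d is the standardized effect size (Cohen's d).
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = z.inv_cdf(power)           # quantile for the target power
    return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# A "small" effect (d = 0.2), typical of youth-program outcomes,
# requires roughly 400 participants per arm:
print(n_per_group(0.2))  # -> 393
# A "medium" effect (d = 0.5) needs far fewer:
print(n_per_group(0.5))  # -> 63
```

The practical lesson is that running this calculation before fielding the study reveals whether the planned enrollment can plausibly detect the effects the program might realistically produce; discovering insufficient power at the analysis stage cannot be fixed statistically.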
PART 1: Provide a detailed and critical policy assessment of this evaluation gone awry, and of what can be learned from it to become better analysts. Be specific about what you would do to conduct a better analysis. Make sure to draw on readings from previous
The New York City Teen ACTION Program: A Flawed Evaluation and Lessons Learned
The evaluation of New York City’s Teen ACTION Program serves as a cautionary tale for policymakers and analysts. Here’s a breakdown of the problems with the study and how a better analysis could be conducted:
Flaws in Evaluation Design:
A Better Approach:
Data Collection Issues:
A Better Approach:
Threats to Validity:
Lessons Learned:
What We Can Do Better:
By learning from the shortcomings of the Teen ACTION Program evaluation, we can design future evaluations that are more rigorous, more reliable, and ultimately more useful in informing evidence-based policy decisions for youth programs.
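The link between the data-collection problems and the power problem can also be quantified. The sketch below inverts the usual sample-size formula to show how a low survey response rate erodes the power a study was designed to have; the planned sample of 393 per arm and effect size d = 0.2 are carried over from a conventional 80%-power design, not from the Teen ACTION study, and a complete-case analysis is assumed.

```python
from statistics import NormalDist

def achieved_power(n_planned, response_rate, d=0.2, alpha=0.05):
    """Power actually achieved when only a fraction of the planned
    respondents per arm complete the survey (normal approximation,
    two-sided two-sample test, complete-case analysis)."""
    z = NormalDist()
    n_eff = n_planned * response_rate           # effective sample per arm
    z_alpha = z.inv_cdf(1 - alpha / 2)
    return z.cdf(d * (n_eff / 2) ** 0.5 - z_alpha)

# Designed for 80% power at full response...
print(round(achieved_power(393, 1.0), 2))  # -> 0.8
# ...but a 50% response rate cuts power to roughly a coin flip:
print(round(achieved_power(393, 0.5), 2))  # -> 0.51
```

Note that this captures only the loss of precision; nonresponse that is correlated with outcomes (as social desirability bias often is) additionally biases the estimate itself, which no increase in sample size repairs.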