Mathematica published RCT findings for SSI PROMISE, a $230M federal demonstration program to improve education and workforce outcomes for youth with disabilities. Quick take: High-quality RCTs found that none of the 6 programs in the demonstration improved education or earnings over 5 years.
Program:
PROMISE funded 6 state agencies to implement programs for youth ages 14-16 with disabilities who were receiving Supplemental Security Income (SSI). The programs offered educational, vocational, and other services, and coordinated state and local agencies to improve service delivery.
Study Design:
Each of the 6 programs was evaluated in a large RCT, with sample sizes ranging from 1,900 to 3,100 youth.
Findings:
Unfortunately, over the 5-year follow-up, none of the programs had a significant impact on youth education (e.g., high school completion) or earnings, or on parent earnings.
The study also found that program costs exceeded benefits by $16k-$38k per youth across the 6 programs. Based on a careful review, the 6 RCTs were well conducted (e.g., low sample attrition, good baseline balance).
Comment:
The results are extremely disappointing and, I believe, reflect a flaw in how federal agencies often select programs for large RCTs/demonstrations: they underestimate the difficulty of finding programs that are truly effective (many plausible-sounding ideas simply don't work in practice).
If the goal is - as I think it should be - to build the body of proven-effective programs that, if scaled up, could improve many thousands of lives, the federal government would do better to focus large RCTs on programs that don't just sound like good ideas (as with PROMISE) but are backed by highly promising prior evidence - e.g., from smaller RCTs or quasi-experiments. Doing so could achieve a much higher success rate, as discussed here.