AERJ published a widely cited systematic review and meta-analysis of RCTs of tutoring in grades PreK-12. Quick take: Although valuable in certain respects, the study likely overstates (by a lot) the average impact of tutoring.
Study Design & Findings:
The study reviewed 89 RCTs of tutoring programs - most in reading, some in math - published between 1980 and 2020. Pooling the results through meta-analysis, the study found a substantial average effect size: 0.29 standard deviations, equivalent to moving the average student from the 50th to the 61st percentile.
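For readers who want to check that conversion: assuming normally distributed test scores, an effect size of d standard deviations moves a student at the control-group median to the Φ(d) percentile of the control distribution. A minimal sketch (mine, not the authors' calculation):

```python
from scipy.stats import norm

# An effect size d (in standard deviations) moves a student at the
# control-group median (50th percentile) to the Phi(d) percentile,
# assuming normally distributed scores.
d = 0.29
percentile = 100 * norm.cdf(d)
print(f"d = {d} -> percentile {percentile:.1f}")  # ~61st percentile
```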
Comments:
I believe this finding is likely overstated. First, for unknown reasons, the study didn't include some large, high-quality RCTs with disappointing results - e.g., the UK Switch-on Reading effectiveness trial, which found zero impact in a sample of 184 schools.
Second, in many cases the meta-analysis included RCT effects on pre-reading skills (e.g., the ability to sound out nonsense words) rather than actual ability to read with understanding (i.e., comprehension).
Effects on pre-reading are typically much larger than effects on reading. For example, a newly posted, high-quality RCT of Tutoring with the Lightning Squad - a small-group tutoring program in grades 2 and 3 - found a sizable effect (0.18) on Word Attack skills (a pre-reading outcome) but a negligible effect on reading comprehension. Other RCTs have found that effects on pre-reading don't necessarily translate into later (downstream) effects on reading - the hoped-for outcome of greatest policy importance.
For that reason, I'd encourage the authors of the review paper to do a supplemental analysis of tutoring's effects on actual reading.
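To make that suggestion concrete, one workable approach is to tag each effect size by outcome type and re-pool only the comprehension outcomes under a standard random-effects model (here, DerSimonian-Laird). The sketch below is mine, and the study-level effects in it are invented placeholders, not numbers from the review:

```python
import numpy as np

def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate and standard error."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v                                    # fixed-effect weights
    mu_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fixed) ** 2)            # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)        # between-study variance
    w_star = 1.0 / (v + tau2)                      # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se

# Hypothetical study-level effect sizes, tagged by outcome type.
studies = [
    {"d": 0.40, "var": 0.010, "outcome": "pre-reading"},
    {"d": 0.35, "var": 0.015, "outcome": "pre-reading"},
    {"d": 0.08, "var": 0.012, "outcome": "comprehension"},
    {"d": 0.05, "var": 0.020, "outcome": "comprehension"},
    {"d": 0.12, "var": 0.018, "outcome": "comprehension"},
]

# Re-pool using only the outcome of greatest policy importance.
comp = [s for s in studies if s["outcome"] == "comprehension"]
pooled, se = pool_random_effects([s["d"] for s in comp],
                                 [s["var"] for s in comp])
print(f"Pooled comprehension-only effect: {pooled:.2f} (SE {se:.2f})")
```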
The paper makes an important contribution in showing that tutoring's effects vary greatly based on program features (e.g., frequency and duration of tutoring sessions, use of paid versus volunteer tutors). But its estimate of the average effect, pooled across all programs, is likely much overstated.
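For readers curious how such moderator analyses are typically estimated: a bare-bones version is a meta-regression of effect sizes on program features, weighted by inverse variance. The sketch below uses invented data and a fixed-effect simplification (a full analysis would also model between-study variance):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: effect sizes, sampling variances, and a
# paid-tutor indicator (1 = paid, 0 = volunteer). Not from the review.
d = np.array([0.35, 0.30, 0.28, 0.10, 0.08, 0.15])
var = np.array([0.012, 0.015, 0.010, 0.018, 0.020, 0.016])
paid = np.array([1, 1, 1, 0, 0, 0])

X = sm.add_constant(paid)                       # intercept = volunteer baseline
model = sm.WLS(d, X, weights=1.0 / var).fit()   # inverse-variance weights
print(model.params)  # [volunteer mean effect, paid-vs-volunteer difference]
```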