Social Science & Medicine published an RCT of an Indianapolis program that partners police officers with mental health clinicians in responding to behavioral health emergencies. Quick take: The study findings (no significant impacts) are unreliable due to study limitations.
Program & Study Design:
  • The study randomized 686 behavioral health 911 calls to receive either (1) co-response from a police officer + mental health clinician (treatment) or (2) police-as-usual (control). The program aimed to divert persons in crisis from arrest & hospitalization, & connect them with needed services.


Findings:
  • Over a one-year follow-up, the study found none of the hoped-for impacts (e.g., no discernible decrease in subsequent EMS/911 calls, emergency department visits, or jail bookings; no increase in behavioral health treatment). But I think these results aren't reliable due to study limitations.


  • The most important limitation was that the researchers dropped from the sample those 911 calls that didn't receive their assigned intervention (co-response or police-as-usual). This led to the loss of 30% of calls in the treatment (T) group & 14% of calls in the control (C) group.


  • This problem (an "intent-to-treat" violation) can easily undermine T & C group equivalence, & lead to inaccurate results. As an illustrative example, suppose mental health clinicians chose (due to staffing limitations) not to respond to T group calls they deemed less serious.


  • Dropping these no-response (less serious) cases from the T group would distill the T group down to more serious cases, potentially leading to an underestimate of program impacts (the simulation sketch below illustrates this). Other study limitations included a sample that was likely too small to detect meaningful impacts (see the power sketch that follows).
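To make the selection-bias concern concrete, here is a minimal simulation sketch. None of its numbers come from the study: the 10-point effect of receiving a co-response, the link between call severity & bad outcomes, & the assumption that clinicians skipped the least serious T-group calls are all hypothetical, chosen only to reproduce the 30%/14% drop rates above.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_trial(n=686):
    """One hypothetical trial; every parameter here is an assumption, not study data."""
    treat = rng.integers(0, 2, n)        # 1 = assigned co-response, 0 = assigned police-as-usual
    severity = rng.uniform(0, 1, n)      # latent seriousness of the call

    # Who actually receives a co-response (assumed): clinicians skip the least
    # serious 30% of treatment-group calls; 14% of control calls get one anyway.
    got = np.where(treat == 1,
                   severity > np.quantile(severity[treat == 1], 0.30),
                   rng.random(n) < 0.14)

    # Bad outcome (e.g., jail booking): more likely for serious calls; actually
    # receiving a co-response cuts the probability by 10 points (assumed true effect).
    bad = rng.random(n) < (0.2 + 0.4 * severity - 0.10 * got)

    # Intent-to-treat: compare groups as randomized, keeping every call.
    itt = bad[treat == 1].mean() - bad[treat == 0].mean()

    # What the study did (per the post): drop calls that didn't get their assigned condition.
    kept = got == (treat == 1)
    dropped = bad[(treat == 1) & kept].mean() - bad[(treat == 0) & kept].mean()
    return itt, dropped

estimates = np.array([one_trial() for _ in range(2000)])
print("True effect of receiving a co-response (built in): -0.10")
print(f"Average intent-to-treat estimate:                  {estimates[:, 0].mean():+.3f}")
print(f"Average estimate after dropping non-compliers:     {estimates[:, 1].mean():+.3f}")
```

Averaged over many simulated trials, the analysis that drops non-compliant calls recovers only about a 4-point reduction, versus the 10-point effect built in, because the retained T calls are systematically more serious than the retained C calls. The intent-to-treat comparison is diluted by non-compliance but stays unbiased for the effect of being assigned a co-response.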

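On the sample-size point, a back-of-the-envelope power calculation suggests why ~686 calls may be too few. The 20% control-group booking rate below is a hypothetical placeholder (the post doesn't cite the study's actual base rates); the sketch simply asks how large an effect a two-arm trial of this size could reliably detect at 80% power.

```python
import numpy as np
from scipy.stats import norm

n_per_arm = 686 // 2                      # ~343 calls per arm
z_alpha = norm.ppf(1 - 0.05 / 2)          # two-sided test at alpha = 0.05
z_power = norm.ppf(0.80)                  # 80% power

# Minimum detectable effect size (Cohen's h) for comparing two proportions
# with equal-sized arms: h = (z_alpha + z_power) * sqrt(2 / n_per_arm).
h_min = (z_alpha + z_power) * np.sqrt(2 / n_per_arm)

# Translate into plain proportions, assuming a hypothetical 20% booking rate
# in the control group (not a number from the study).
p_control = 0.20
phi_control = 2 * np.arcsin(np.sqrt(p_control))
p_treat_detectable = np.sin((phi_control - h_min) / 2) ** 2

print(f"Minimum detectable Cohen's h: {h_min:.2f}")
print(f"A {p_control:.0%} control-group rate would have to drop to about "
      f"{p_treat_detectable:.1%} before this trial could reliably detect it.")
```

Under these assumptions, only a drop of roughly 8 percentage points (about a 40% relative reduction in bookings) would be reliably detectable, which is why a null finding from a trial this size is weak evidence that the program has no effect.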
 
Comment:
  • The study is valuable in illustrating how RCT designs might be used to evaluate co-response programs. But unfortunately I think it fell short in execution. Disclosure: My former employer, Arnold Ventures, funded this study.
