Wednesday, October 30, 2019 - 4:00pm to 4:30pm
Event Calendar Category
LIDS & Stats Tea
Consider the setting where there are N units and I interventions (i.e., treatments) of interest. The aim is to find the best personalized intervention for each unit. When the units are homogeneous, the randomized control trial (RCT) framework (i.e., dividing the N units into I subgroups of size N/I and measuring the average treatment effect across the I subgroups) remains the "gold standard". In many settings, however, units are heterogeneous. As a result, a single RCT does not suffice for prescribing personalized recommendations. Instead, one needs to, in principle, perform N × I experiments to provide personalized treatments. Unfortunately, attempting all I interventions on each unit is unlikely to ever be feasible in practice.

To that end, we propose a method, "Multi-Action, Multi-Dimensional, Robust Synthetic Control" (MA-mRSC), in which we perform 2N experiments (instead of N experiments as in an RCT), yet enjoy the information gain of having performed all N × I experiments simultaneously. MA-mRSC extends the widely used synthetic control framework, originally developed to measure treatment effects when RCTs are not feasible. In doing so, MA-mRSC solves an important open problem in the synthetic control literature: constructing effective synthetic controls when there are multiple interventions of interest.

Theoretically, we prove the soundness of our algorithm under a generalized factor model (i.e., a latent variable model), and provide finite-sample guarantees on the post-intervention prediction error of MA-mRSC across all interventions and metrics of interest. Experimentally, we verify the efficacy of our algorithm on (i) a large development economics study, which measures the treatment effects of 20 different interventions on immunization rates across 1302 villages in Haryana, India; and (ii) A/B testing at a large web-based fantasy sports company aimed at increasing customer engagement.
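To give a flavor of the synthetic control idea the talk builds on, here is a minimal sketch, not the speaker's actual algorithm: a treated unit's counterfactual outcomes are predicted as a linear combination of untreated "donor" units, with weights fit on pre-intervention data. The low-rank data model, the hard singular-value threshold, and all names and numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

T0, T1 = 40, 20          # pre- and post-intervention periods
n_donors = 8

# Latent factor model (a latent variable model): each unit's outcome trajectory
# is an inner product of unit factors and time factors, plus noise.
unit_factors = rng.normal(size=(n_donors + 1, 3))
time_factors = rng.normal(size=(3, T0 + T1))
outcomes = unit_factors @ time_factors + 0.1 * rng.normal(size=(n_donors + 1, T0 + T1))

donors, treated = outcomes[1:], outcomes[0]

# "Robust" step: denoise the pre-intervention donor matrix by keeping only
# the top-k singular values (k = assumed rank of the factor model).
U, s, Vt = np.linalg.svd(donors[:, :T0], full_matrices=False)
k = 3
donors_pre_denoised = U[:, :k] * s[:k] @ Vt[:k]

# Learn donor weights on the pre-intervention period via least squares.
w, *_ = np.linalg.lstsq(donors_pre_denoised.T, treated[:T0], rcond=None)

# Counterfactual: what the treated unit would have looked like post-intervention
# had it not been treated; here the "treated" unit received no treatment, so
# the prediction error should be small if the model assumptions hold.
counterfactual_post = donors[:, T0:].T @ w
err = np.abs(counterfactual_post - treated[T0:]).mean()
print(err)
```

In an actual study, `counterfactual_post` would be compared against the treated unit's observed post-intervention outcomes to estimate the treatment effect.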
We also provide a data-driven diagnostic tool to verify the set of interventions and metrics for which our framework will provide accurate post-intervention predictions.
Anish Agarwal is a PhD student in EECS at MIT, co-advised by Munther Dahleh and Devavrat Shah. His research interests are in high-dimensional statistics and mechanism design.