Fubon Center Doctoral Fellow Research

Making Automatic Intervention Decisions Using Machine Learning

Carlos Fernández


Machine learning models are increasingly deployed to make automated decisions. Large-scale examples include targeting online advertisements, targeting retention incentives to customers at high risk of attrition, and making online recommendations. While these tasks are typically modeled as predictive problems, their underlying goal is often to improve an outcome by means of an intervention. For example, in many advertising settings, the goal is not to target ads to people who will purchase after seeing the ad, but to target people whom the ad will cause to purchase. So, for tasks such as these, instead of simply predicting the outcome, we ideally would like to estimate whether an individual's outcome can be changed by an intervention.

An increasingly popular approach to making these decisions is to use machine learning models to estimate intervention effects, and then use those estimates to make intervention decisions automatically. However, estimating intervention effects and making intervention decisions are not the same thing. As an example, suppose that a firm is willing to send an offer to any customer for whom it increases the probability of purchasing by at least 1%. Then, precisely estimating intervention effects is desirable but not necessary; the only concern is identifying the individuals for whom the effect is greater than 1%. Importantly, overestimating (underestimating) the effect has no bearing on decision making when the focal individuals have an effect greater (smaller) than 1%.
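
To make this concrete, here is a small sketch with made-up numbers (they are illustrative, not figures from the dissertation): even when several effect estimates are off by a factor of two or more, every customer still lands on the correct side of the 1% threshold, so the targeting decisions are unchanged.

```python
# Toy illustration: estimation error that does not change the decision.
# The numbers below are made up for illustration; they are not from the study.
THRESHOLD = 0.01  # target a customer if the offer lifts purchase probability by >= 1%

true_effects      = [0.030, 0.004, 0.015, -0.002, 0.020]
estimated_effects = [0.060, 0.008, 0.011,  0.001, 0.035]  # noisy, sometimes badly off

for true, est in zip(true_effects, estimated_effects):
    correct_decision = true >= THRESHOLD
    model_decision = est >= THRESHOLD
    print(f"true={true:+.3f} est={est:+.3f} "
          f"decision_matches={model_decision == correct_decision}")
# Every decision matches, even though several estimates are far from the true effects.
```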

My research shows that estimates of intervention effects are important for making decisions only to the extent that they help to classify individuals according to their preferred interventions. My dissertation project reveals three critical implications that this has for modeling and decision making:
  1. We should carefully consider the objective function of the machine learning algorithm used to learn targeting models; if possible, it should optimize for accurate decisions rather than for accurate effect-size estimation. Thus, a practitioner using a machine learning algorithm that is geared toward accurately estimating effects (as most are) may be making a mistake. In the context of music recommendation systems at Spotify, my research shows that learning models by optimizing for intervention decisions can increase user engagement by 28% more than learning models by optimizing for intervention effects. (A sketch contrasting the two objectives appears after this list.)
  2. Biases in the data-generating process do not affect decision making in the same way as they affect effect estimation. The upshot is that biased data may be just as good as unbiased data when the goal is to learn models for automated decision making. This is an encouraging insight, given that acquiring unbiased data for modeling can be quite costly and often simply impracticable. Furthermore, even when unbiased data are available (for example, from a randomized controlled trial), there is often much more biased data, so there is a bias/variance tradeoff to be considered.
  3. Causal statistical modeling may not be necessary at all to support intervention decisions, because there may be some proxy target variable that identifies good candidates for the intervention. For example, my research shows that targeting ads based on the likelihood of a purchase (the proxy) can work very well when the goal is to increase purchases, meaning that it is possible to make good targeting decisions even without any data on how people actually behave when targeted with the ad. This also helps to explain at least one common marketing practice that seems "wrong" at first blush: the widespread use of non-causal models for targeting interventions. (The second sketch after this list illustrates the proxy idea.)
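
To illustrate point 1, here is a toy sketch. It uses constructed effect estimates rather than fitted models, and it is not the method from the dissertation; its only purpose is to show why the two objectives can pull in different directions. A "model" with far worse effect-estimation error can nonetheless make strictly better targeting decisions than a "model" with nearly perfect estimates, which is exactly why the learning objective matters.

```python
import numpy as np

# Toy illustration (constructed estimates, not fitted models): effect-estimation
# error and decision quality are different yardsticks and can disagree.
THRESHOLD = 0.01                       # target if the effect is at least 1%
tau = np.linspace(-0.01, 0.04, 501)    # true (synthetic) intervention effects

# "Model A": wildly inaccurate effect estimates, but every estimate stays on
# the correct side of the threshold.
est_a = tau + 0.05 * np.sign(tau - THRESHOLD)

# "Model B": very accurate estimates, but a small downward shift pushes
# customers with effects just above 1% to the wrong side of the threshold.
est_b = tau - 0.004

def report(name, est):
    mse = np.mean((est - tau) ** 2)                                  # effect-estimation error
    decisions_ok = np.mean((est >= THRESHOLD) == (tau >= THRESHOLD))  # decision quality
    print(f"{name}: MSE={mse:.6f}  correct decisions={decisions_ok:.1%}")

report("Model A", est_a)   # large MSE, 100% correct decisions
report("Model B", est_b)   # tiny MSE, roughly 92% correct decisions
```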
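To illustrate point 3, the following toy simulation shows how a proxy can work. The key assumption is mine, made purely for illustration: the ad's lift in purchase probability grows with the customer's baseline purchase propensity. Under that assumption, ranking customers by purchase probability (the proxy) selects exactly the same customers as ranking by the true uplift, with no data at all on behavior under treatment.

```python
import numpy as np

# Toy simulation of the "proxy" idea. Key assumption (mine, for illustration
# only): the ad's lift in purchase probability grows with the customer's
# baseline purchase propensity (uplift = 0.1 * baseline here). We also use the
# true baseline as a stand-in for a well-calibrated purchase-probability model.
rng = np.random.default_rng(0)
baseline = rng.uniform(0.0, 0.5, size=1000)   # purchase probability without the ad
uplift = 0.1 * baseline                       # purchase-probability lift from the ad

# Target the 100 customers with the highest purchase probability (the proxy)
# versus the 100 customers with the highest true uplift.
k = 100
by_proxy  = set(np.argsort(-baseline)[:k])
by_uplift = set(np.argsort(-uplift)[:k])
print(f"overlap between proxy targeting and uplift targeting: "
      f"{len(by_proxy & by_uplift) / k:.0%}")
# Under this assumption the two rankings coincide, so targeting on the proxy
# selects exactly the customers with the largest effects.
```
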
Overall, this dissertation develops the perspective that what would traditionally be considered "good" estimates of intervention effects are not necessary for making good intervention decisions. The three implications above are quite important in practice, because acquiring data to estimate effects accurately is often complicated and expensive, and empirically we see that results can be considerably better when modeling decisions rather than effects.