Articles | February 21, 2018

Manipulating and Measuring Model Interpretability

Article by Microsoft researchers Forough Poursabzi-Sangdeh, Daniel G. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, and Hanna Wallach.


With the increased use of machine learning in decision-making scenarios, there has been growing interest in creating human-interpretable machine learning models. While many such models have been proposed, there have been relatively few experimental studies of whether these models achieve their intended effects, such as encouraging people to follow the model's predictions when the model is correct and to deviate when it makes a mistake. We present a series of randomized, pre-registered experiments comprising 3,800 participants in which people were shown functionally identical models that varied in only two factors thought to influence interpretability: the number of input features and the model's transparency (clear or black-box). Predictably, participants who were shown a clear model with a small number of features were better able to simulate the model's predictions. However, contrary to what one might expect when manipulating interpretability, we found no improvements in the degree to which participants followed the model's predictions when it was beneficial to do so. Even more surprisingly, increased transparency hampered people's ability to detect when the model made a sizable mistake and correct for it, seemingly due to information overload. These counterintuitive results suggest that decision scientists creating interpretable models should harbor a healthy skepticism of their intuitions and empirically verify that interpretable models achieve their intended effects.