Research in human-centered AI has shown the benefits of systems that can
explain their predictions. Methods that allow humans to tune a model in
response to these explanations are similarly useful. While both capabilities are
well-developed for transparent learning models (e.g., linear models and GA2Ms),
and recent techniques (e.g., LIME and SHAP) can generate explanations for
opaque models, no method for tuning opaque models in response to explanations
has been user-tested to date. This paper introduces LIMEADE, a general
framework for tuning an arbitrary machine learning model based on an
explanation of the model's prediction. We demonstrate the generality of our
approach with two case studies. First, we use LIMEADE to enable human tuning
of opaque image classifiers. Second, we apply our framework to a
neural recommender system for scientific papers on a public website and report
on a user study showing that our framework leads to significantly higher
perceived user control, trust, and satisfaction. Analyzing 300 user logs from
our publicly deployed website, we uncover a tradeoff between canonical greedy
explanations and diverse explanations that better facilitate human tuning.