Marketing managers often evaluate model-based marketing decision support systems (MDSSs) far more poorly than the objective performance of those systems warrants. We show that a reason for this discrepancy may be that MDSSs are often not designed to help users understand and internalize the underlying factors driving the MDSS results and related recommendations. Thus, there is likely to be a gap between a marketing manager's mental model and the decision model embedded in the MDSS. We suggest that this gap is an important reason for poor subjective evaluations of MDSSs, even when the MDSSs are of high objective quality, ultimately resulting in unreasonably low levels of MDSS adoption and use. We propose that to have impact, an MDSS should not only be of high objective quality but should also help reduce any gap between the user's mental model and the MDSS model. We evaluate two design characteristics that together lead users to update their mental models and reduce this gap, resulting in better MDSS evaluations: providing feedback on the upside potential for performance improvement, and providing specific suggestions for corrective actions that better align the user's mental model with the MDSS. We hypothesize that, in tandem, these two types of feedback induce marketing managers to update their mental models, a process we call deep learning, whereas individually each type of feedback has a much smaller effect on deep learning. We validate our framework in an experimental setting, using a realistic MDSS in the context of a direct marketing decision problem. We then discuss how our findings can lead to design improvements and better returns on investments in MDSSs such as CRM systems, revenue management systems, pricing decision support systems, and the like.