In explainable artificial intelligence (XAI), researchers aim to alleviate the opacity of high-performing but incomprehensible machine learning models and thereby improve their adoption in practice. While many XAI techniques have been developed, their impact on users is rarely investigated. Hence, it is neither apparent whether an XAI-based model is perceived as more explainable than existing alternative machine learning models, nor is it known whether the explanations actually increase users' comprehension of the problem and, thus, their problem-solving ability. In an empirical study, we asked 165 participants about the perceived explainability of different machine learning models and an XAI augmentation. We further tasked them with answering retention, transfer, and recall questions in three scenarios of different stakes. The results reveal higher comprehensibility and problem-solving performance for the XAI augmentation than for the tested machine learning models.