Deconstructing Review Deception: A Study on Counterfactual Explanation and XAI in Detecting Fake and GPT-Generated Reviews

Abstract

Our models not only deliver high-performing predictions but also illuminate the decision-making processes underlying them. Experiments on five datasets showcase our framework's ability to generate diverse and specific counterfactuals, thereby enhancing deception detection capabilities and supporting review authenticity assessments. Our robust model offers a novel contribution to the understanding of AI applications, marking a significant step forward in both the detection of deceptive reviews and the broader field of AI interpretability.