Detecting and explaining unfairness in consumer contracts through memory networks

Abstract

Recent work has demonstrated how data-driven AI methods can support consumer protection by enabling the automated analysis of legal documents. However, a shortcoming of data-driven approaches is poor explainability. We posit that in this domain useful explanations of classifier outcomes can be provided by resorting to legal rationales. We thus consider several configurations of memory-augmented neural networks in which rationales are given a special role in the modeling of context knowledge. Our results show that rationales not only contribute to improving classification accuracy, but also offer meaningful, natural-language explanations of otherwise opaque classifier outcomes.

Published online: 11 May 2021

Sponsor information

Francesca Lagioia has been supported by the European Research Council (ERC) Project “CompuLaw” (Grant Agreement No. 833647) under the European Union’s Horizon 2020 research and innovation programme. Paolo Torroni has been partially supported by the H2020 Project AI4EU (Grant Agreement No. 825619). Marco Lippi would like to thank NVIDIA Corporation for the donation of the Titan X Pascal GPU used for this research. Open access funding provided by Alma Mater Studiorum - Università di Bologna within the CRUI-CARE Agreement.
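The abstract describes memory-augmented networks in which legal rationales act as context memory that the classifier attends over when labeling a contract clause. The sketch below is a minimal, hypothetical illustration of that general idea in PyTorch, not the authors' architecture: the module name, the bag-of-words encoder, the single memory hop, and all dimensions are assumptions made purely for illustration. The attention weights over the rationales are returned so they can be inspected as a rudimentary explanation.

```python
# Illustrative sketch only (assumed names and sizes), not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RationaleMemoryClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, num_classes=2):
        super().__init__()
        # Simple bag-of-words encoder shared by clauses and rationales.
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim)
        self.out = nn.Linear(2 * embed_dim, num_classes)

    def forward(self, clause, rationales):
        # clause: (batch, clause_len) token ids of the contract clause.
        # rationales: (num_rationales, rationale_len) token ids of the legal rationales (the memory).
        q = self.embed(clause)            # (batch, embed_dim) clause representation
        m = self.embed(rationales)        # (num_rationales, embed_dim) memory slots
        scores = q @ m.t()                # (batch, num_rationales) clause-rationale relevance
        attn = F.softmax(scores, dim=-1)  # attention over the rationales
        context = attn @ m                # (batch, embed_dim) attention-weighted rationale summary
        logits = self.out(torch.cat([q, context], dim=-1))
        return logits, attn
```

In this toy setup, the rationale receiving the highest attention weight for a clause predicted as unfair would be the natural-language explanation surfaced alongside the prediction.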
