Explaining Expert Search and Team Formation Systems with ExES

Abstract

Expert search and team formation systems operate on collaboration networks: nodes represent individuals and are labeled with their skills, and edges denote collaboration relationships. Given a query corresponding to a set of desired skills, these systems identify experts or teams that best match the query. However, state-of-the-art solutions to this problem lack transparency and interpretability. To address this issue, we propose ExES, an interactive tool designed to explain black-box expert search systems. Our system leverages saliency and counterfactual methods from the field of explainable artificial intelligence (XAI). ExES enables users to understand why individuals were or were not included in the query results, and what those individuals could do, by perturbing their skills or connections, to be included in or excluded from the results. Through several experiments on real-world datasets, we verify the quality and efficiency of our explanation generation methods. We demonstrate that ExES takes a significant step toward interactivity, achieving an average latency reduction of 50% compared to an exhaustive approach while maintaining over 82% precision in producing saliency explanations and over 70% precision in identifying optimal counterfactual explanations.
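
To make the setting concrete, the sketch below builds a toy collaboration network and a hypothetical skill-overlap scorer, then searches for the smallest set of missing query skills a person could acquire in order to enter the top-k results. The data, the scoring function, and the brute-force search are illustrative assumptions only; they are not the ExES algorithms or datasets described in the paper.

```python
from itertools import combinations

# Toy collaboration network (illustrative only, not an ExES dataset):
# each person is labeled with a skill set; edges record past collaborations.
skills = {
    "ana":   {"nlp"},
    "bob":   {"databases"},
    "carol": {"ml", "databases"},
}
edges = {("ana", "carol"), ("bob", "carol")}

def neighbors(person):
    return {v for u, v in edges if u == person} | {u for u, v in edges if v == person}

def score(person, query):
    """Hypothetical black-box scorer: own skill overlap with the query,
    plus a smaller bonus for query skills covered by collaborators."""
    own = len(skills[person] & query)
    via = sum(len(skills[n] & query) for n in neighbors(person))
    return own + 0.5 * via

def top_k(query, k=2):
    return sorted(skills, key=lambda p: -score(p, query))[:k]

def counterfactual_skills(person, query, k=2):
    """Smallest set of missing query skills whose addition would place
    the person in the top-k results (brute-force, for illustration)."""
    missing = query - skills[person]
    for size in range(1, len(missing) + 1):
        for added in combinations(missing, size):
            skills[person] |= set(added)      # perturb the person's skill labels
            included = person in top_k(query, k)
            skills[person] -= set(added)      # undo the perturbation
            if included:
                return set(added)
    return None

query = {"ml", "nlp", "databases"}
print(top_k(query))                          # baseline top-2 experts for the query
print(counterfactual_skills("bob", query))   # skills bob would need to add to be included
```

A counterfactual over connections could be sketched analogously by toggling entries of the edge set instead of skill labels; the paper's reported speedups over an exhaustive approach indicate that ExES does not rely on this kind of brute-force enumeration.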
