In Mobility Data Science, interpreting models trained on trajectory data and
explaining the spatio-temporal movement of entities has remained a persistent
challenge. Conventional XAI techniques, despite their potential, frequently
overlook the distinct structure and nuances of trajectory data. To address this
gap, we introduced a comprehensive framework that combines key XAI techniques:
LIME (Local Interpretable Model-agnostic Explanations), SHAP (SHapley Additive
exPlanations), saliency maps, attention mechanisms, direct trajectory
visualization, and Permutation Feature Importance (PFI). Unlike conventional
strategies that apply these methods in isolation, our unified approach exploits
their complementary strengths, yielding deeper and more granular insights into
models that operate on trajectory data. By combining the techniques, we account
for the multifaceted nature of trajectories, achieving not only greater
interpretability but also a nuanced, contextually rich understanding of model
decisions.
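To illustrate how two of these techniques can be paired in practice, the sketch
below combines SHAP with Permutation Feature Importance for a trajectory
classifier. It is a minimal sketch rather than the full framework: it assumes
trajectories have been resampled to a fixed number of timesteps and flattened
into tabular vectors of (latitude, longitude, speed) per timestep, and it uses
synthetic data and a RandomForestClassifier as stand-ins for the real
trajectory model.

```python
# Minimal sketch: pairing SHAP with Permutation Feature Importance for a
# trajectory classifier. The data, model, and feature layout are illustrative
# stand-ins, not the framework's actual components.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_traj, n_steps, n_channels = 400, 20, 3             # 20 timesteps x (lat, lon, speed)
X = rng.normal(size=(n_traj, n_steps * n_channels))  # flattened synthetic trajectories
y = (X[:, :n_channels].sum(axis=1) > 0).astype(int)  # synthetic binary labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Global view: permutation feature importance on held-out trajectories,
# aggregated from individual features to whole timesteps.
pfi = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
pfi_per_step = pfi.importances_mean.reshape(n_steps, n_channels).sum(axis=1)


def predict_pos(data):
    """Probability of the positive class, as the 1-D output SHAP explains."""
    return model.predict_proba(data)[:, 1]


# Local view: SHAP values for a single trajectory, aggregated per timestep.
explainer = shap.KernelExplainer(predict_pos, shap.sample(X_train, 50))
shap_values = explainer.shap_values(X_test[:1], nsamples=200)
shap_per_step = np.abs(shap_values).reshape(n_steps, n_channels).sum(axis=1)

print("PFI per timestep: ", np.round(pfi_per_step, 3))
print("SHAP per timestep:", np.round(shap_per_step, 3))
```

Reading the two per-timestep profiles side by side gives both a global and a
local view of which parts of a trajectory drive the model's decision.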
To validate and refine the framework, we conducted a survey to gauge
preferences and reception across different user groups. The results revealed a
clear split: professionals with an academic orientation, particularly Data
Scientists, IT Experts, and ML Engineers, demonstrated a deep technical
understanding and often preferred combined methods for interpretability.
Conversely, end-users and respondents less familiar with AI and Data Science
preferred simpler representations, such as bar plots indicating the
significance of individual timesteps or visualizations highlighting the pivotal
segments of a vessel's trajectory.
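As a hypothetical illustration of this simpler, end-user-facing view, the
snippet below renders a bar plot of per-timestep importance; the values are
placeholders and would in practice come from an aggregation such as the
SHAP/PFI scores in the earlier sketch.

```python
# Hypothetical end-user view: a bar plot of per-timestep importance along a
# vessel trajectory. The importance scores are placeholders, not survey or
# model output.
import numpy as np
import matplotlib.pyplot as plt

n_steps = 20
per_step_importance = np.random.default_rng(1).random(n_steps)  # placeholder scores

fig, ax = plt.subplots(figsize=(8, 3))
ax.bar(range(n_steps), per_step_importance)
ax.set_xlabel("Trajectory timestep")
ax.set_ylabel("Importance")
ax.set_title("Timestep importance along a vessel trajectory")
fig.tight_layout()
plt.show()
```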