
    Reasoning and learning services for coalition situational understanding

    Situational understanding requires an ability to assess the current situation and anticipate future situations, requiring both pattern recognition and inference. A coalition involves multiple agencies sharing information and analytics. This paper considers how to harness distributed information sources, including multimodal sensors, together with machine learning and reasoning services, to perform situational understanding in a coalition context. To exemplify the approach, we focus on a technology integration experiment in which multimodal data (including video and still imagery, geospatial and weather data) is processed and fused in a service-oriented architecture by heterogeneous pattern recognition and inference components. We show how the architecture: (i) provides awareness of the current situation and prediction of future states, (ii) is robust to individual service failure, (iii) supports the generation of ‘why’ explanations for human analysts (including from components based on ‘black box’ deep neural networks, which pose particular challenges for explanation generation), and (iv) allows for the imposition of information-sharing constraints in a coalition context where there are varying levels of trust between partner agencies.
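    The abstract above describes a service-oriented architecture that tolerates individual service failure and enforces coalition information-sharing constraints. As a minimal sketch of those two properties (not the paper's implementation; all service names, agencies, and outputs are hypothetical), a fusion step might query a registry of services, skip any service a partner agency is not cleared to consume, and treat failures as missing evidence:

```python
from dataclasses import dataclass
from typing import Callable, Dict, Set

@dataclass
class Service:
    """A pattern-recognition or inference service registered in the coalition."""
    name: str
    agency: str                       # coalition partner that owns the service
    releasable_to: Set[str]           # agencies permitted to consume its output
    infer: Callable[[dict], dict]     # maps an observation to an assessment

def fuse(observation: dict, services, consumer: str) -> Dict[str, dict]:
    """Query every service the consumer is cleared for, tolerating failures."""
    assessments = {}
    for svc in services:
        if consumer not in svc.releasable_to:
            continue                  # information-sharing constraint
        try:
            assessments[svc.name] = svc.infer(observation)
        except Exception:
            pass                      # robust to individual service failure
    return assessments

# Hypothetical services for illustration only.
video = Service("video_classifier", "agency_a", {"agency_a", "agency_b"},
                lambda obs: {"activity": "crowd_forming", "confidence": 0.82})
weather = Service("weather_forecast", "agency_b", {"agency_a"},
                  lambda obs: {"rain_next_hour": False})

# agency_b sees only the video assessment; weather is not releasable to it.
print(fuse({"cell": "city_centre"}, [video, weather], consumer="agency_b"))
```

    The robustness comes from treating each service as optional evidence: a failed or withheld service narrows, but does not break, the fused picture.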

    Conversational control interface to facilitate situational understanding in a city surveillance setting

    In this paper we explore the use of a conversational interface to query a decision support system providing information relating to a city surveillance setting. Specifically, we focus on how the use of a Controlled Natural Language (CNL) can provide a method for processing natural language queries whilst also tracking the context of the conversation in relation to past utterances. Ultimately, we propose that our conversational approach leads to a versatile tool for providing decision support with a learning curve low enough that untrained users can operate it either within a central command location or when operating in the field (at the tactical edge). The key contribution of this paper is an illustration of applied concepts of CNLs, as well as furthering the art of conversational context tracking whilst using such a technique.
    Keywords: Natural Language Processing (NLP), Conversational Systems, Situational Understanding
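    To make the CNL idea concrete, here is a deliberately small sketch assuming a toy two-pattern grammar and a single-slot conversational context; the patterns, intents, and pronoun rule are illustrative assumptions, not the grammar used in the paper:

```python
import re

class CNLSession:
    """Toy controlled-natural-language interpreter with conversational context."""

    def __init__(self):
        self.last_entity = None  # context carried across utterances

    def interpret(self, utterance: str) -> dict:
        utterance = utterance.strip().rstrip("?").lower()
        # Pattern 1: "show cameras near <place>" stores <place> as context.
        m = re.fullmatch(r"show cameras near (.+)", utterance)
        if m:
            self.last_entity = m.group(1)
            return {"intent": "list_cameras", "place": self.last_entity}
        # Pattern 2: "what happened there" resolves 'there' via context.
        if utterance == "what happened there" and self.last_entity:
            return {"intent": "event_history", "place": self.last_entity}
        return {"intent": "unrecognised", "hint": "query is outside the CNL"}

session = CNLSession()
print(session.interpret("Show cameras near the railway station?"))
print(session.interpret("What happened there?"))  # 'there' -> railway station
```

    The point is that each interpreted utterance updates a context store that later utterances resolve against, which is what lets a restricted grammar still feel conversational.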

    Integrating learning and reasoning services for explainable information fusion

    We present a distributed information fusion system able to integrate heterogeneous information processing services based on machine learning and reasoning approaches. We focus on higher (semantic) levels of information fusion, and highlight the requirement for the component services, and the system as a whole, to generate explanations of their outputs. Using a case study approach in the domain of traffic monitoring, we introduce component services based on (i) deep neural network approaches and (ii) heuristic-based reasoning. We examine methods for explanation generation in each case, including both transparency methods (e.g., saliency maps, reasoning traces) and post-hoc methods (e.g., explanation in terms of similar examples, identification of relevant semantic objects). We consider trade-offs in terms of the classification performance of the services and the kinds of available explanations, and show how service integration offers more robust performance and explainability.
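    As one hedged illustration of the post-hoc ‘explanation in terms of similar examples’ mentioned above (a sketch, not the paper's system; all vectors and labels are made up), a classifier's decision can be justified by retrieving the nearest labelled training examples in its embedding space:

```python
from math import dist  # Python 3.8+

# Hypothetical labelled embeddings from a traffic-monitoring classifier's
# training set; the values and labels are illustrative only.
TRAINING_SET = [
    ([0.90, 0.10, 0.20], "congestion"),
    ([0.20, 0.80, 0.10], "free_flow"),
    ([0.85, 0.20, 0.30], "congestion"),
]

def explain_by_example(query_embedding, k=2):
    """Post-hoc explanation: the k training examples nearest to the query
    in embedding space, offered as evidence for the classifier's decision."""
    ranked = sorted(TRAINING_SET,
                    key=lambda ex: dist(ex[0], query_embedding))
    return [(label, round(dist(vec, query_embedding), 3))
            for vec, label in ranked[:k]]

# "Why was this scene classified as congestion?" -> its nearest neighbours.
print(explain_by_example([0.88, 0.15, 0.25]))
# -> [('congestion', 0.073), ('congestion', 0.077)]
```

    Unlike transparency methods such as saliency maps, which look inside the model, an example-based explanation needs only access to the embedding space, which is why it suits ‘black box’ components.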

    Stakeholders in explainable AI

    There is general consensus that it is important for artificial intelligence (AI) and machine learning systems to be explainable and/or interpretable. However, there is no general consensus over what is meant by ‘explainable’ and ‘interpretable’. In this paper, we argue that this lack of consensus is due to there being several distinct stakeholder communities. We note that, while the concerns of the individual communities are broadly compatible, they are not identical, which gives rise to different intents and requirements for explainability/interpretability. We use the software engineering distinction between validation and verification, and the epistemological distinctions between knowns/unknowns, to tease apart the concerns of the stakeholder communities and highlight the areas where their foci overlap or diverge. It is not our purpose to ‘take sides’ (we count ourselves, to varying degrees, as members of multiple communities) but rather to help disambiguate what stakeholders mean when they ask ‘Why?’ of an AI.