8 research outputs found

    Deep learning for situational understanding

    Situational understanding (SU) requires a combination of insight — the ability to accurately perceive an existing situation — and foresight — the ability to anticipate how an existing situation may develop in the future. SU involves information fusion as well as model representation and inference. Commonly, heterogeneous data sources must be exploited in the fusion process, often including both hard and soft data products. In a coalition context, data and processing resources will also be distributed and subject to restrictions on information sharing. It will often be necessary for a human to be in the loop in SU processes, to provide key input and guidance and to interpret outputs, which necessitates a degree of transparency in the processing: systems cannot be “black boxes”. In this paper, we characterize the Coalition Situational Understanding (CSU) problem in terms of fusion, temporal, distributed, and human requirements. There is currently significant interest in deep learning (DL) approaches for processing both hard and soft data. We analyze the state of the art in DL in relation to these requirements for CSU, and identify areas of considerable promise as well as key gaps.

    Why the Failure? How Adversarial Examples Can Provide Insights for Interpretable Machine Learning

    Recent advances in Machine Learning (ML) have profoundly changed many detection, classification, recognition and inference tasks. Given the complexity of the battlespace, ML has the potential to revolutionise how Coalition Situation Understanding is synthesised and revised. However, many issues must be overcome before its widespread adoption. In this paper we consider two: interpretability and adversarial attacks. Interpretability is needed because military decision-makers must be able to justify their decisions. Adversarial attacks arise because many ML algorithms are very sensitive to certain kinds of input perturbations. In this paper, we argue that these two issues are conceptually linked, and that insights into one can provide insights into the other. We illustrate these ideas with relevant examples from the literature and our own experiments.
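    The input sensitivity the abstract refers to can be illustrated with a minimal fast-gradient-sign (FGSM-style) sketch; this is a generic example, not the paper's experiment, and all weights and inputs below are hypothetical.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def predict(x, w, b):
        # Logistic-regression score for input x.
        return sigmoid(np.dot(w, x) + b)

    def fgsm_perturb(x, w, eps):
        # For logistic regression, the gradient of the score w.r.t. the
        # input is proportional to the weight vector, so the attack
        # direction that raises the score is simply sign(w).
        return x + eps * np.sign(w)

    w = np.array([1.0, -2.0, 0.5])
    b = 0.0
    x = np.array([0.1, 0.2, -0.1])   # scored below 0.5: class "negative"

    p_clean = predict(x, w, b)
    x_adv = fgsm_perturb(x, w, eps=0.3)
    p_adv = predict(x_adv, w, b)

    # A small, bounded perturbation flips the predicted class.
    print(p_clean < 0.5, p_adv > 0.5)  # → True True
    ```

    Even this toy model changes its decision under a perturbation of magnitude 0.3 per feature, which is the core phenomenon linking adversarial fragility to the need for interpretable decision boundaries.
    
    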

    Distributed opportunistic sensing and fusion for traffic congestion detection

    Our particular research in the Distributed Analytics and Information Science International Technology Alliance (DAIS ITA) is focused on “Anticipatory Situational Understanding for Coalitions”. This paper takes the concrete example of detecting and predicting traffic congestion in the UK road transport network from existing generic sensing sources, such as real-time CCTV imagery and video, which are publicly available for this purpose. This scenario has been chosen carefully: we believe that in a typical city, not all data relevant to transport network congestion is available from a single unified source, and that different organizations in the city (e.g. the weather office, the police force, the general public, etc.) have their own sensors which can provide information potentially relevant to the traffic congestion problem. In this paper we look at the problem of (a) identifying congestion using cameras that, for example, the police department may have access to, and (b) fusing that with data from other agencies in order to (c) augment any base data provided by the official transportation department feeds. This coalition approach requires using standard cameras for supplementary tasks such as car counting, and we examine how well those tasks can be performed with RNNs/CNNs and other distributed machine learning approaches. We provide details of an initial four-layer architecture and potential tooling to enable rapid formation of human/machine hybrid teams in this setting, with a focus on opportunistic and distributed processing of the data at the edge of the network. In future work we plan to integrate additional data sources to further augment the core imagery data.
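    The multi-agency fusion step described above can be sketched very simply; the following is an assumed illustration (confidence-weighted averaging, not the paper's four-layer architecture), with hypothetical source readings.

    ```python
    # Fuse congestion evidence from heterogeneous city sources -- e.g. a
    # CCTV car-count estimate, a weather-adjusted prior, and lower-confidence
    # crowd-sourced reports -- into one congestion probability.

    def fuse_congestion(estimates):
        """estimates: list of (probability, confidence) pairs, one per source.
        Returns the confidence-weighted mean congestion probability."""
        total_conf = sum(c for _, c in estimates)
        if total_conf == 0:
            raise ValueError("no usable evidence")
        return sum(p * c for p, c in estimates) / total_conf

    # Hypothetical readings: (congestion probability, source confidence).
    sources = [(0.8, 0.9),   # police CCTV car counting
               (0.5, 0.4),   # weather-office prior
               (0.9, 0.2)]   # public incident reports

    print(round(fuse_congestion(sources), 3))  # → 0.733
    ```

    Weighting by per-source confidence is one simple way to let unreliable opportunistic sources contribute without dominating the official feeds they augment.
    
    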

    Asking 'why' in AI: Explainability of intelligent systems - perspectives and challenges

    Recent rapid progress in machine learning (ML), particularly so‐called ‘deep learning’, has led to a resurgence of interest in the explainability of artificial intelligence (AI) systems, reviving an area of research dating back to the 1970s. The aim of this article is to view current issues concerning ML‐based AI systems from the perspective of classical AI, showing that the fundamental problems are far from new, and arguing that elements of that earlier work offer routes to making progress towards explainable AI today.

    Learning and reasoning in complex coalition information environments: a critical analysis

    In this paper we provide a critical analysis, with metrics, that will inform guidelines for designing distributed systems for Collective Situational Understanding (CSU). CSU requires both collective insight (i.e., accurate and deep understanding of a situation derived from uncertain and often sparse data) and collective foresight (i.e., the ability to predict what will happen in the future). When it comes to complex scenarios, the need for distributed CSU naturally emerges, as a single monolithic approach is not only infeasible but also undesirable. We therefore propose a principled, critical analysis of AI techniques that can support specific tasks for CSU, in order to derive guidelines for designing distributed systems for CSU.