
    Conversational intelligence analysis

    Social networks foster the development of social sensing to gather data about situations in the environment. Making sense of this information is, however, a challenge because the process is not linear and additional sensed information may be needed to better understand a situation. In this paper we explore how two complementary technologies, Moira and CISpaces, operate in unison to support collaboration among human-agent teams to iteratively gather and analyse information to improve situational awareness. The integrated system is developed for supporting intelligence analysis in a coalition environment. Moira is a conversational interface for information gathering, querying and evidence aggregation that supports cooperative data-driven analytics via Controlled Natural Language. CISpaces supports collaborative sensemaking among analysts via argumentation-based evidential reasoning to guide the identification of plausible hypotheses, including reasoning about provenance to explore credibility. In concert, these components enable teams of analysts to collaborate in constructing structured hypotheses with machine-based systems and external collaborators.
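
    The pairing described above lends itself to a simple pipeline view: a conversational turn yields structured evidence with provenance, which an argumentation-style step then weighs against a hypothesis. The sketch below is a minimal, hypothetical illustration of that flow; the class names, the canned query and the naive pro/con aggregation are assumptions, not the Moira or CISpaces implementations.

```python
# Minimal sketch (hypothetical names throughout) of a conversational query
# step feeding an evidential analysis step. Not the authors' code.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    claim: str          # natural-language claim extracted from a reply
    source: str         # provenance: who or what reported it
    supports: bool      # True if it supports the hypothesis, False if it attacks

@dataclass
class Hypothesis:
    text: str
    evidence: list = field(default_factory=list)

def conversational_query(question: str) -> list:
    """Stand-in for a CNL dialogue turn that returns structured evidence."""
    canned = {
        "water contamination?": [
            Evidence("Hospital reports spike in illness", "field_report_17", True),
            Evidence("Water samples tested clean", "lab_team_A", False),
        ]
    }
    return canned.get(question, [])

def assess(hypothesis: Hypothesis) -> str:
    """Naive evidential aggregation: compare supporting vs attacking evidence."""
    pro = sum(1 for e in hypothesis.evidence if e.supports)
    con = sum(1 for e in hypothesis.evidence if not e.supports)
    return "plausible" if pro > con else "contested" if pro == con else "doubtful"

if __name__ == "__main__":
    h = Hypothesis("The water supply is contaminated")
    h.evidence.extend(conversational_query("water contamination?"))
    for e in h.evidence:
        print(f"[{e.source}] {'+' if e.supports else '-'} {e.claim}")
    print("Assessment:", assess(h))
```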

    Coalitions of things: supporting ISR tasks via Internet of Things approaches

    In the wake of the rapid maturing of Internet of Things (IoT) approaches and technologies in the commercial sector, the IoT is increasingly seen as a key ‘disruptive’ technology in military environments. Future operational environments are expected to be characterized by a lower proportion of human participants and a higher proportion of autonomous and semi-autonomous devices. This view is reflected in both US ‘third offset’ and UK ‘information age’ thinking and is likely to have a profound effect on how multinational coalition operations are conducted in the future. Much of the initial consideration of IoT adoption in the military domain has rightly focused on security concerns, reflecting similar cautions in the early era of electronic commerce. As IoT approaches mature, this initial technical focus is likely to shift to considerations of interactivity and policy. In this paper, rather than considering the broader range of IoT applications in the military context, we focus on roles for IoT concepts and devices in future intelligence, surveillance and reconnaissance (ISR) tasks, drawing on experience in sensor-mission resourcing and human-computer collaboration (HCC) for ISR. We highlight the importance of low training overheads in the adoption of IoT approaches, and the need to balance proactivity and interactivity (push vs pull modes). As with sensing systems over the last decade, we emphasize that, to be valuable in ISR tasks, IoT devices will need a degree of mission-awareness in addition to an ability to self-manage their limited resources (power, memory, bandwidth, computation, etc.). In coalition operations, the management and potential sharing of IoT devices and systems among partners (e.g., in cross-coalition tactical-edge ISR teams) becomes a key issue due to heterogeneous factors such as language, policy, procedure and doctrine. Finally, we briefly outline a platform that we have developed in order to experiment with human-IoT teaming on ISR tasks, in both physical and virtual settings.
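
    As an illustration of the push-vs-pull balance and resource self-management mentioned above, the sketch below shows a hypothetical mission-aware device that transmits an observation proactively only when its estimated mission relevance justifies the energy cost, and otherwise waits to be polled. All thresholds, field names and relevance scores are assumptions made for the example, not a fielded ISR design.

```python
# Minimal sketch of a mission-aware push-vs-pull decision on a resource-
# constrained device. Illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Observation:
    kind: str
    relevance: float   # 0..1, estimated relevance to current mission tasking

@dataclass
class Device:
    battery: float     # remaining energy budget, 0..1
    push_cost: float = 0.02

    def should_push(self, obs: Observation, threshold: float = 0.6) -> bool:
        # Push proactively only if the observation is mission-relevant enough
        # and the device can afford the transmission.
        affordable = self.battery - self.push_cost > 0.1
        return obs.relevance >= threshold and affordable

    def handle(self, obs: Observation) -> str:
        if self.should_push(obs):
            self.battery -= self.push_cost
            return f"PUSH {obs.kind} (battery now {self.battery:.2f})"
        return f"HOLD {obs.kind} until pulled"

if __name__ == "__main__":
    dev = Device(battery=0.35)
    for o in [Observation("vehicle_detected", 0.9),
              Observation("ambient_temperature", 0.1)]:
        print(dev.handle(o))
```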

    EMIL: Extracting Meaning from Inconsistent Language

    Developments in formal and computational theories of argumentation reason with inconsistency. Developments in Computational Linguistics extract arguments from large textual corpora. Both developments head in the direction of automated processing and reasoning with inconsistent, linguistic knowledge so as to explain and justify arguments in a humanly accessible form. Yet, there is a gap between the coarse-grained, semi-structured knowledge-bases of computational theories of argumentation and fine-grained, highly-structured inferences from knowledge-bases derived from natural language. We identify several subproblems which must be addressed in order to bridge the gap. We provide a direct semantics for argumentation. It has attractive properties in terms of expressivity and complexity, enables reasoning by cases, and can be more highly structured. For language processing, we work with an existing controlled natural language (CNL), which interfaces with our computational theory of argumentation; the tool processes natural language input, translates it into a form for automated inference engines, outputs argument extensions, and then generates natural language statements. The key novel adaptation incorporates the defeasible expression ‘it is usual that’. This is an important, albeit incremental, step towards incorporating linguistic expressions of defeasibility. Overall, the novel contribution of the paper is an integrated, end-to-end argumentation system which bridges between automated defeasible reasoning and a natural language interface. Specific novel contributions are the theory of ‘direct semantics’, motivations for our theory, results with respect to the direct semantics, an implementation, experimental results, the tie between the formalisation and the CNL, the introduction into a CNL of a natural language expression of defeasibility, and an ‘engineering’ approach to fine-grained argument analysis.
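
    The end-to-end shape described above (natural language statements in, automated defeasible reasoning in the middle, accepted conclusions verbalised back out) can be illustrated with a toy abstract argumentation step. The sketch below uses standard grounded semantics as a stand-in for the paper's ‘direct semantics’; the CNL-style statements, argument names and attack relation are invented for the example.

```python
# Minimal sketch: defeasible statements become arguments and attacks, a
# grounded extension is computed, and accepted arguments are verbalised.
def grounded_extension(arguments, attacks):
    """Iteratively accept arguments whose attackers are all defeated."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:            # every attacker already out
                accepted.add(a)
                changed = True
            elif attackers & accepted:           # attacked by an accepted argument
                defeated.add(a)
                changed = True
    return accepted

if __name__ == "__main__":
    statements = {
        "a": "It is usual that birds fly.",
        "b": "Tweety is a penguin, so Tweety does not fly.",
        "c": "The penguin report about Tweety is unreliable.",
    }
    attacks = {("b", "a"), ("c", "b")}           # c attacks b, b attacks a
    for arg in sorted(grounded_extension(statements, attacks)):
        print("ACCEPTED:", statements[arg])
```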

    Supporting scientific enquiry with uncertain sources

    In this paper we propose a computational methodology for assessing the impact of trust associated with sources of information in scientific enquiry activities, building upon recent proposals of an ontology for situational understanding and results in computational argumentation. Often trust in the source of information serves as a proxy for evaluating the quality of the information itself, especially in cases of information overload. We show how our computational methodology, composed of an ontology for representing uncertain information and sources as well as an argumentative process of conjecture and refutation, supports human analysts in scientific enquiry and highlights issues that demand further investigation.
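
    A minimal way to picture trust acting as a proxy for information quality, as described above, is to weight each report by the trust placed in its source and let a conjecture stand only while trusted support outweighs trusted refutation. The sketch below is a hypothetical illustration only; the source names, trust scores and the additive weighting are assumptions, not the paper's ontology-based methodology.

```python
# Minimal sketch of trust-weighted conjecture and refutation. Assumed
# numbers and source names; illustrative only.
from dataclasses import dataclass

@dataclass
class Report:
    source: str
    trust: float      # 0..1 confidence placed in the source
    refutes: bool     # True if the report refutes the conjecture

def evaluate(conjecture: str, reports: list) -> str:
    support = sum(r.trust for r in reports if not r.refutes)
    refutation = sum(r.trust for r in reports if r.refutes)
    if refutation > support:
        return f"'{conjecture}' is refuted (weight {refutation:.2f} vs {support:.2f})"
    if support == refutation == 0:
        return f"'{conjecture}' is untested"
    return f"'{conjecture}' stands for now (weight {support:.2f} vs {refutation:.2f})"

if __name__ == "__main__":
    reports = [
        Report("peer_reviewed_study", 0.9, refutes=False),
        Report("anonymous_blog", 0.2, refutes=True),
        Report("replication_attempt", 0.8, refutes=True),
    ]
    print(evaluate("Compound X inhibits enzyme Y", reports))
```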

    Intentional dialogues in multi-agent systems based on ontologies and argumentation

    Some areas of application, for example, healthcare, are known to resist the replacement of human operators by fully autonomous systems. It is typically not transparent to users how artificial intelligence systems make decisions or obtain information, making it difficult for users to trust them. To address this issue, we investigate how argumentation theory and ontology techniques can be used together with reasoning about intentions to build complex natural language dialogues to support human decision-making. Based on such an investigation, we propose MAIDS, a framework for developing multi-agent intentional dialogue systems, which can be used in different domains. Our framework is modular, so that it can be used in its entirety or only the modules that fulfil the requirements of each system to be developed. Our work also includes the formalisation of a novel dialogue-subdialogue structure with which we can address ontological or theory-of-mind issues and later return to the main subject. As a case study, we have developed a multi-agent system using the MAIDS framework to support healthcare professionals in making decisions on hospital bed allocations. Furthermore, we evaluated this multi-agent system with domain experts using real data from a hospital. The specialists who evaluated our system strongly agree or agree that the dialogues in which they participated fulfil Cohen’s desiderata for task-oriented dialogue systems. Our agents have the ability to explain to the user how they arrived at certain conclusions. Moreover, they have semantic representations as well as representations of the mental state of the dialogue participants, allowing the formulation of coherent justifications expressed in natural language that are therefore easy for human participants to understand. This indicates the potential of the framework introduced in this thesis for the practical development of explainable intelligent systems as well as systems supporting hybrid intelligence.
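
    The dialogue-subdialogue structure mentioned above can be pictured as a stack of conversation contexts: a clarification exchange (for example about an ontology term) is pushed, resolved, and popped so the dialogue returns to the main subject. The sketch below is a simplified, hypothetical illustration, not the MAIDS formalisation; the class, method names and bed-allocation topic strings are assumed.

```python
# Minimal sketch of a dialogue-subdialogue stack. Hypothetical API.
class DialogueManager:
    def __init__(self, main_topic: str):
        self.stack = [main_topic]          # top of stack = current (sub)dialogue

    def open_subdialogue(self, topic: str):
        print(f"  -> opening subdialogue: {topic}")
        self.stack.append(topic)

    def close_subdialogue(self):
        finished = self.stack.pop()
        print(f"  <- closing subdialogue: {finished}; back to '{self.stack[-1]}'")

    def current(self) -> str:
        return self.stack[-1]

if __name__ == "__main__":
    dm = DialogueManager("allocate a bed for patient P-12")
    print("discussing:", dm.current())
    dm.open_subdialogue("clarify ontology term: 'isolation bed'")
    print("discussing:", dm.current())
    dm.close_subdialogue()
    print("discussing:", dm.current())
```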

    Role of affect and task category-knowledge sharing tools fit on behavioural intention to use KS tools among knowledge workers

    Knowledge sharing is an essential practice for organizations of the 21st century. To leverage knowledge sharing activities and cultivate a knowledge-based ecosystem, organizations have invested in and deployed many types of Knowledge Sharing tools (KS tools). KS tools allow knowledge workers to share and use knowledge in organizations. The low usage of KS tools justifies the need to study the usage of, and the intention to use, these tools in organizations, particularly among knowledge workers. In addition, the decline of the Knowledge Economy Index (KEI) and Knowledge Index (KI) for Malaysia showed that knowledge sharing and knowledge contribution in education, innovation, and ICT are deteriorating. Investigating knowledge workers' intention to use knowledge sharing tools to support knowledge practices is therefore a reasonable research goal. This research focuses on the behavioral intention to use KS tools among knowledge workers in Multimedia Super Corridor (MSC) status organizations. MSC-status organizations play a key role in contributing to the national KEI and KI for Malaysia. The main objective of this study is to identify factors that influence the intention to use KS tools among knowledge workers. To answer the research objective, the Affective Technology Acceptance Model (A.T.A Model) is developed to examine the antecedents that influence the attitude and behavioral intention of knowledge workers to use KS tools in their day-to-day tasks. The A.T.A Model integrates the Technology Acceptance Model with Task-Technology Fit to examine technology acceptance by hypothesizing a fit between Task Category and KS tools that influences Behavioral Intention to use KS tools. The proposed research model also incorporates the role of affect, drawing on Russell’s Circumplex Model of Affect and Watson, Clark and Tellegen’s Consensual Model of Affect, and considers organizational and motivational factors that influence the Behavioral Intention to use KS tools among knowledge workers. A quantitative method using a survey approach was adopted to collect data from respondents. The proposed A.T.A model is empirically examined using two hundred and ninety-five (295) respondents, comprising knowledge workers from a sampling frame of two thousand five hundred and five (2505) knowledge workers in the twenty-three (23) MSC-status organizations that participated in this research. The outcomes of the analysis support the overall structure of the model, whereby sixteen (16) of the twenty-two (22) hypotheses are supported. Behavioral Intention to use KS tools is explained by knowledge workers' Attitude, Task Category-KS tools fit, Positive Affect and Trust. In this research, Attitude has the highest impact on Behavioral Intention, followed by Task Category-KS tools fit, Positive Affect and Trust. Negative Affect, on the other hand, influences the Behavioral Intention of knowledge workers at only three (3) points in time ("At the Moment", "Past Few Days", and "Past Few Weeks"), while Extrinsic and Intrinsic Rewards are found to have no influence on Behavioral Intention to use KS tools. The findings highlight that, besides Attitude and TCK fit, a change in Positive Affect can create a positive impact on the Behavioral Intention of knowledge workers to use KS tools. The findings also show that Positive Affect influences Perceived Usefulness, whereas Negative Affect does not; however, both Positive and Negative Affect influence Perceived Ease of Use. The results further show that Task Category-KS tools fit significantly influences Behavioral Intention, consistent with past research claiming that integrating Task Category-KS tools fit into an acceptance model provides a better explanation of individuals' intention to use KS tools. On the contrary, this research found that Extrinsic and Intrinsic Rewards, and Management Support, have no significant relationship with Behavioral Intention to use KS tools in the proposed A.T.A model. Overall, the results of this study contribute to the technology acceptance literature by shedding light on the behavioral intention to use KS tools among knowledge workers.
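
    To make the reported pattern of effects concrete, the sketch below regresses a simulated Behavioural Intention score on Attitude, Task Category-KS tools fit, Positive Affect and Trust, giving Attitude the largest simulated weight in line with the findings above. The data, coefficients and ordinary-least-squares analysis are purely illustrative assumptions; the thesis itself used survey data and its own statistical modelling.

```python
# Minimal sketch: estimating path weights for hypothesised predictors of
# Behavioural Intention on synthetic data. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 295                                   # sample size reported in the study
attitude = rng.normal(0, 1, n)
tck_fit  = rng.normal(0, 1, n)
pos_aff  = rng.normal(0, 1, n)
trust    = rng.normal(0, 1, n)

# Simulated outcome with Attitude given the largest weight, as in the findings.
intention = (0.45 * attitude + 0.30 * tck_fit + 0.20 * pos_aff
             + 0.15 * trust + rng.normal(0, 0.5, n))

X = np.column_stack([np.ones(n), attitude, tck_fit, pos_aff, trust])
coef, *_ = np.linalg.lstsq(X, intention, rcond=None)
for name, b in zip(["intercept", "Attitude", "TCK fit", "Positive Affect", "Trust"], coef):
    print(f"{name:16s} {b:+.3f}")
```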
