    Algorithmic transparency of conversational agents

    A lack of algorithmic transparency is a major barrier to the adoption of artificial intelligence technologies in contexts that involve high-risk, high-consequence decision making. In this paper, we present a framework for providing transparency of algorithmic processes. We include important considerations not identified in research to date for the high-risk, high-consequence context of defence intelligence analysis. To illustrate the core concepts of our framework, we explore an example application (a conversational agent for knowledge exploration) that demonstrates shared human-machine reasoning in a critical decision-making scenario. We include new findings from interviews with a small number of analysts, together with recommendations for future research.

    Designing for Conversational System Trustworthiness: The Impact of Model Transparency on Trust and Task Performance

    Designing for system trustworthiness promises to address the opaqueness and uncertainty introduced by Machine Learning (ML)-based systems by allowing users to understand and interpret a system's underlying working mechanisms. However, empirical exploration of trustworthiness measures and their effectiveness is scarce and inconclusive. We investigated how varying model confidence (70% versus 90%) and making confidence levels transparent to the user (explanatory statement versus no explanatory statement) may influence perceptions of trust and performance in an information retrieval task assisted by a conversational system. In a field experiment with 104 users, our findings indicate that neither model confidence nor transparency seems to impact trust in the conversational system. However, users' task performance is positively influenced by both transparency and trust in the system. While this study considers the complex interplay of system trustworthiness, trust, and subsequent behavioral outcomes, our results call into question the relationship between system trustworthiness and user trust.
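
    The transparency manipulation in this study can be pictured as a simple rendering choice at response time. The sketch below is a minimal illustration of the 2x2 design described in the abstract, not the study's actual materials; the wording of the explanatory statement and all names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Condition:
    """One cell of the study's 2x2 design: confidence level x transparency."""
    model_confidence: float  # 0.70 or 0.90, per the abstract
    transparent: bool        # explanatory statement shown or withheld

def render_response(answer: str, cond: Condition) -> str:
    """Attach an explanatory confidence statement only in transparent conditions."""
    if cond.transparent:
        return (f"{answer}\n"
                f"(I am {cond.model_confidence:.0%} confident in this answer.)")
    return answer

# The four experimental cells implied by the abstract (wording is invented):
for confidence in (0.70, 0.90):
    for transparent in (True, False):
        print(render_response("The file was registered on 12 March.",
                              Condition(confidence, transparent)))
        print("---")
```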

    Developing conversational agents for use in criminal investigations

    The adoption of artificial intelligence (AI) systems in environments that involve high-risk, high-consequence decision making is severely hampered by critical design issues. These issues include system transparency and brittleness, where transparency relates to (i) the explainability of results and (ii) the ability of a user to inspect and verify system goals and constraints, and brittleness relates to (iii) the ability of a system to adapt to new user demands. Transparency is a particular concern for criminal intelligence analysis, where significant ethical and trust issues arise when algorithmic and system processes are not adequately understood by a user; this prevents the adoption of potentially useful technologies in policing environments. In this article, we present a novel approach to designing a conversational agent (CA) AI system for intelligence analysis that tackles these issues. We discuss the results and implications of three studies: a Cognitive Task Analysis to understand analyst thinking when retrieving information in an investigation, an Emergent Themes Analysis to understand the explanation needs of different system components, and an interactive experiment with a prototype conversational agent. Our prototype conversational agent, named Pan, demonstrates transparency provision and mitigates brittleness by evolving new CA intentions. We encode interactions with the CA using human factors principles for situation recognition and use interactive visual analytics to support analyst reasoning. Our approach enables complex AI systems, such as Pan, to be used in sensitive environments, and our research has broader application than the use case discussed.
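
    The abstract does not specify how Pan encodes its intentions, so the sketch below is only a plausible shape for the idea: an intention is recognised from an utterance together with the attributes extracted for it, and the recognition can explain itself for inspection, which is one way to provide the transparency the paper argues for. All names (Intention, Recognition, find_person) are hypothetical, not Pan's actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Intention:
    """Hypothetical CA intention: a name plus the attributes it needs to act."""
    name: str
    required_attributes: frozenset

@dataclass
class Recognition:
    """What the agent recognised in one analyst utterance."""
    intention: Intention
    extracted: dict  # attribute -> value pulled from the utterance

    def missing(self):
        return self.intention.required_attributes.difference(self.extracted)

    def explain(self) -> str:
        """Expose the recognition for user inspection and verification."""
        got = ", ".join(f"{k}={v}" for k, v in self.extracted.items()) or "nothing"
        need = ", ".join(sorted(self.missing())) or "nothing"
        return (f"Recognised intention '{self.intention.name}'. "
                f"Extracted: {got}. Still needed: {need}.")

find_person = Intention("find_person", frozenset({"name", "last_seen_location"}))
print(Recognition(find_person, {"name": "J. Smith"}).explain())
```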

    Pan: conversational agent for criminal investigations

    We present an early prototype conversational agent (CA), called Pan, for retrieving information to support criminal investigations. Our approach tackles the issue of algorithmic transparency, which is critical in unpredictable, high-risk, high-consequence domains. We present a novel method to flexibly model CA intentions and to provide transparency of attributes, underpinned by human recognition. We propose that Pan can be used for experimentation to probe analyst requirements and to evaluate the effectiveness of our explanation structure.

    How analysts think: a preliminary study of human needs and demands for AI-based conversational agents

    For conversational agents to benefit intelligence analysis, they need to recognise and respond to analysts' intentions. Furthermore, they must provide transparency of their algorithms and be able to adapt to new situations and lines of inquiry. We present a preliminary analysis as a first step towards developing conversational agents for intelligence analysis: understanding and modeling analyst intentions so that they can be recognised by conversational agents. We describe in-depth interviews conducted with experienced intelligence analysts and the implications for designing conversational agent intentions using Formal Concept Analysis.
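
    Formal Concept Analysis derives formal concepts, pairs of an extent (a set of objects) and an intent (the attributes those objects share), from an object-attribute table. The brute-force sketch below uses an invented toy context of analyst utterances and intention attributes; it illustrates the technique named in the abstract, not the authors' analysis.

```python
from itertools import combinations

# Invented toy context: analyst utterances (objects) x intention attributes.
incidence = {
    "u1": {"person", "location"},
    "u2": {"person", "vehicle"},
    "u3": {"location", "vehicle"},
}
attributes = {"person", "location", "vehicle"}

def extent(intent):
    """Objects that carry every attribute in the intent."""
    return {o for o, attrs in incidence.items() if intent <= attrs}

def intent_of(ext):
    """Attributes shared by every object in the extent."""
    return set.intersection(*(incidence[o] for o in ext)) if ext else set(attributes)

# Every formal concept arises as (B', B'') for some attribute set B,
# where ' denotes the derivation operators above.
concepts = set()
for r in range(len(attributes) + 1):
    for b in combinations(sorted(attributes), r):
        e = extent(set(b))
        concepts.add((frozenset(e), frozenset(intent_of(e))))

for e, i in sorted(concepts, key=lambda c: (-len(c[0]), sorted(c[1]))):
    print(sorted(e), "<->", sorted(i))
```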

    Providing a foundation for interpretable autonomous agents through elicitation and modeling of criminal investigation pathways

    Criminal investigations are guided by repetitive, time-consuming information retrieval tasks that often carry high risk and high consequence. If artificial intelligence (AI) systems can automate lines of inquiry, they could reduce the burden on analysts and allow them to focus their efforts on analysis. However, there is a critical need for algorithmic transparency to address ethical concerns. In this paper, we use data gathered from Cognitive Task Analysis (CTA) interviews of criminal intelligence analysts and apply a novel analysis method to elicit question networks. We show how these networks form an event tree, where events are consolidated by capturing analyst intentions. The event tree is simplified with a Dynamic Chain Event Graph (DCEG) that provides a foundation for transparent autonomous investigations.
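
    One way to picture the elicitation step is a prefix tree that merges the question sequences from each interview into a single event tree, counting how often each question follows a shared history; a DCEG would then simplify such a tree by merging positions whose conditional futures coincide. The sketch below uses invented pathways and illustrates only the event-tree step, not the authors' method.

```python
class Node:
    """One question asked after a shared history of earlier questions."""
    def __init__(self):
        self.count = 0
        self.children = {}  # question text -> Node

def build_event_tree(pathways):
    """Merge question sequences into a prefix tree, counting traversals."""
    root = Node()
    for pathway in pathways:
        node = root
        for question in pathway:
            node = node.children.setdefault(question, Node())
            node.count += 1
    return root

def show(node, depth=0):
    for question, child in node.children.items():
        print("  " * depth + f"{question} (x{child.count})")
        show(child, depth + 1)

# Invented pathways standing in for CTA interview data:
pathways = [
    ["who is the suspect?", "where were they last seen?", "who were they with?"],
    ["who is the suspect?", "where were they last seen?", "what vehicle was used?"],
    ["who is the suspect?", "any prior offences?"],
]
show(build_event_tree(pathways))
```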