    Intelligence analysis support guide: development and validation

    Research shows that intelligence analysts do not routinely follow a logical workflow, do not always use critical thinking, and that training and experience are unrelated to analytic performance. The Analysis Support Guide (ASG) aims to capture, communicate, and encourage good analytic practice. The ASG is informed by organizational intelligence doctrine and past research on intelligence analysis. It includes the generic analytic workflow, prompts for good practice at each stage of the workflow, indicators of good and poor analytic practice, and an analytic investigation questionnaire. The findings of a small-scale content validation study of the ASG are reported here. Fourteen analysts provided detailed feedback on its content. The results informed a revision of the ASG that is currently used to train new and experienced analysts. The ASG can also inform the development of analytic technologies and future research on the psychology of intelligence analysis.

    A survey of intelligence analysts’ strategies for solving analytic tasks

    Analytic performance may be assessed by the nature of the process applied to intelligence tasks, and analysts are expected to use a ‘critical’ or deliberative mindset. However, there is little research on how analysts actually do their work. We report the findings of a quantitative survey of 113 intelligence analysts who were asked to report how often they would apply strategies involving more or less critical thinking when performing representative tasks along the analytic workflow. Analysts reported using ‘deliberative’ strategies significantly more often than ‘intuitive’ ones when capturing customer requirements, processing data, and communicating conclusions. Years of experience working in the intelligence community, skill level, analytic thinking training, and time spent working collaboratively (as opposed to individually) were largely unrelated to reported strategy use. We discuss the implications of these findings both for improving intelligence analysis and for developing an evidence-based approach to policy and practice in this domain.

    Profiling exploratory browsing behaviour with a semantic data browser.

    Semantic Web technologies are increasingly being adopted for aggregating Web data. Tools such as semantic data browsers have been proposed to help users access and make sense of the vast semantic space. However, further investigation is needed to understand how users make use of the additional semantic features provided by this new breed of browsers, and how effective those features are in supporting exploration of a domain. Measures of browsing behaviour in a semantic space are also needed. Using log data from a semantic browser (MusicPinta) for the music domain, this paper takes a first step in profiling users’ browsing behaviour in a semantic space and compares the outcome against their task performance. Two exploratory search tasks were designed for the experiment. Users’ movements as they traversed the semantic links provided in the browser were captured, and the patterns of clicks between abstract and concrete concepts were analysed.
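
    The click-pattern analysis described above lends itself to a simple transition count. The Python sketch below is a minimal illustration under assumed inputs: the log format, the user and concept fields, and the abstract-versus-concrete labels are invented here, not taken from the MusicPinta logs.

        from collections import Counter

        # Hypothetical click log: one (user, concept, is_abstract) record per
        # semantic-link traversal, in chronological order within each user's
        # session. Field names and labels are illustrative assumptions.
        clicks = [
            ("u1", "Instrument", True),
            ("u1", "Guitar", False),
            ("u1", "Flamenco guitar", False),
            ("u2", "Guitar", False),
            ("u2", "String instrument", True),
        ]

        def transition_profile(log):
            """Count abstract/concrete click transitions per user."""
            by_user = {}
            for user, _concept, is_abstract in log:
                by_user.setdefault(user, []).append(is_abstract)
            profile = {}
            for user, levels in by_user.items():
                counts = Counter()
                for prev, curr in zip(levels, levels[1:]):
                    counts[("abstract" if prev else "concrete",
                            "abstract" if curr else "concrete")] += 1
                profile[user] = counts
            return profile

        for user, counts in transition_profile(clicks).items():
            print(user, dict(counts))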

    Exploratory information searching in the enterprise: a study of user satisfaction and task performance.

    No prior research has been identified that investigates the causal factors for workplace exploratory search task performance. The impact of user, task, and environmental factors on user satisfaction and task performance was investigated through a mixed-methods study with 26 experienced information professionals using enterprise search in an oil and gas enterprise. Some participants found 75% of the high-value items, others found none, with an average of 27%. No association was found between self-reported search expertise and task performance, and many participants tended to overestimate their search expertise. Successful searchers may have more accurate mental models of both the search system and the information space. Organizations may not have effective feedback loops for exploratory search task performance (a lack of learning). This may be caused by management bias towards technology rather than capability (a lack of systems thinking). Furthermore, organizations may not “know” they “don’t know” their true level of search expertise (a lack of knowing). A metamodel is presented that identifies the causal factors for workplace exploratory search task performance. Semistructured qualitative interviews with search staff from the defense, pharmaceutical, and aerospace sectors indicate the potential transferability of the finding that organizations may not know their search expertise levels.

    Trade-Offs Under Pressure: Heuristics and Observations of Teams Resolving Internet Service Outages

    The increasing complexity of software applications and architectures in Internet services challenges the reasoning of operators tasked with diagnosing and resolving outages and degradations as they arise. Although a growing body of literature focuses on how failures can be prevented through more robust and fault-tolerant design of these systems, little research explores the cognitive challenges engineers face when those preventative designs fail and they are left to think and react to scenarios that had not been imagined. This study explores the heuristics, or rules of thumb, that engineers employ when faced with an outage or degradation in a business-critical Internet service. A case study approach was used, focusing on an actual outage of functionality during a period of high buying activity on a popular online marketplace. Heuristics and other tacit knowledge were identified, and they provide a promising avenue for both training and future interface design. Three diagnostic heuristics were identified as being in use: (a) initially look for a correlation between the behaviour and any recent changes made to the software; (b) upon finding no correlation with a software change, widen the search to any potential contributor that can be imagined; and (c) when choosing among diagnostic directions, narrow the set by focusing on the one that most easily comes to mind, either because the symptoms match those of a difficult-to-diagnose event in the past or those of a recent event. A fourth heuristic is coordinative in nature: when making changes to the software to mitigate the untoward effects or to resolve the issue completely, rely on peer review of the changes more than on automated testing (if at all).
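
    Read as a procedure, the three diagnostic heuristics can be sketched in code. The toy Python function below only illustrates their ordering; the inputs recent_changes, candidate_causes, and past_incidents are assumptions made for the example, and the study itself reports human judgement under pressure, not an executable algorithm.

        # Toy encoding of the three diagnostic heuristics; data model invented.
        def pick_diagnostic_direction(recent_changes, candidate_causes, past_incidents):
            # Heuristic (a): look for a recent software change that correlates
            # with the observed behaviour.
            for change, correlates_with_symptoms in recent_changes:
                if correlates_with_symptoms:
                    return f"investigate recent change: {change}"
            # Heuristic (b): no correlated change, so widen the search to any
            # potential contributor imagined...
            if not candidate_causes:
                return "keep gathering signals"
            # Heuristic (c): ...then narrow by availability, preferring a
            # candidate that resembles a past or recent incident.
            for cause in candidate_causes:
                if cause in past_incidents:
                    return f"investigate familiar cause: {cause}"
            return f"investigate first candidate: {candidate_causes[0]}"

        print(pick_diagnostic_direction(
            recent_changes=[("deploy 142", False)],
            candidate_causes=["cache stampede", "database failover"],
            past_incidents={"database failover"},
        ))  # -> investigate familiar cause: database failover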

    Assisting People to Become Independent Learners in the Analysis of Intelligence

    Section 1: What Makes Intelligence Analysis Difficult? A Cognitive Task Analysis of Intelligence Analysts, by Susan G. Hutchins, Peter L. Pirolli, and Stuart K. Card; Section 2: Evaluation of a Computer Support Tool for Analysis of Competing Hypotheses, by Peter Pirolli, Lance Good, Julie Heiser, Jeff Shrager, and Susan Hutchins; Section 3: Collaborative Intelligence Analysis with CACHE and its Effects on Information Gathering and Cognitive Bias, by Dorrit Billman, Gregorio Convertino, Jeff Shrager, J.P. Massar, and Peter Pirolli. The purpose of this project was to conduct applied research with exemplary technology to support post-graduate instruction in intelligence analysis. The first phase of research used Cognitive Task Analysis (CTA) to understand the nature of subject matter expertise in this domain, as well as leverage points for technology support. Results from the CTA and advice from intelligence analysis instructors at the Naval Postgraduate School (NPS) led us to focus on the development of a collaborative computer tool (CACHE) to support a method called the Analysis of Competing Hypotheses (ACH). We first evaluated a non-collaborative version of an ACH tool in an NPS intelligence classroom setting, followed by an evaluation of the collaborative tool, CACHE, at NPS. These evaluations, along with similar studies conducted in coordination with NIST and MITRE, suggested that ACH and CACHE can support intelligence activities and mitigate confirmation bias. However, collaborative analysis involves trade-offs: it incurs overhead costs, and it can mitigate or exacerbate confirmation bias, depending on the mixture of predisposing biases among collaborators.
    Office of Naval Research
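
    Since the abstract assumes familiarity with ACH, a minimal sketch of its bookkeeping may help: each piece of evidence is scored for consistency against each hypothesis, and the analyst concentrates on rejecting the hypotheses with the most inconsistent evidence rather than confirming a favourite. The scoring scale and the example hypotheses and evidence below are invented for illustration; they do not reflect how the ACH or CACHE tools are implemented.

        # Minimal ACH matrix sketch. Scores: -1 = evidence inconsistent with the
        # hypothesis, 0 = neutral, +1 = consistent. ACH ranks hypotheses by how
        # little evidence contradicts them, not by how much supports them.
        hypotheses = ["H1: insider leak", "H2: external intrusion"]
        evidence = {  # invented example scores, one per hypothesis
            "E1: badge logs show after-hours access": [+1, 0],
            "E2: no anomalous network traffic": [0, -1],
            "E3: file copied to removable media": [+1, -1],
        }

        def inconsistency_scores(evidence, n_hypotheses):
            """Sum only the inconsistent (-1) scores for each hypothesis."""
            totals = [0] * n_hypotheses
            for scores in evidence.values():
                for i, score in enumerate(scores):
                    if score < 0:
                        totals[i] += score
            return totals

        totals = inconsistency_scores(evidence, len(hypotheses))
        # The least-contradicted hypothesis survives scrutiny longest.
        for hyp, total in sorted(zip(hypotheses, totals),
                                 key=lambda pair: pair[1], reverse=True):
            print(f"{hyp}: inconsistency {total}")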