    Critical review of analytic techniques

    In this paper, we classify 75 analytic techniques in terms of their primary function. We then highlight where, across the stages of the generic analytic workflow, the techniques might be best applied. Importantly, most of the techniques have some shortcomings, and none guarantees an accurate or bias-free analytic conclusion. We discuss how the findings of the present paper can be used to develop criteria for evaluating analytic techniques as well as the performance of analysts. We also discuss which sets of techniques ought to be consolidated, and we reveal gaps that need to be filled by new techniques.

    A survey of intelligence analysts’ strategies for solving analytic tasks

    Analytic performance may be assessed by the nature of the process applied to intelligence tasks, and analysts are expected to use a 'critical' or deliberative mindset. However, there is little research on how analysts actually do their work. We report the findings of a quantitative survey of 113 intelligence analysts who were asked to report how often they would apply strategies involving more or less critical thinking when performing representative tasks along the analytic workflow. Analysts reported using ‘deliberative’ strategies significantly more often than ‘intuitive’ ones when capturing customer requirements, processing data, and communicating conclusions. Years of experience working in the intelligence community, skill level, analytic thinking training, and time spent working collaboratively (as opposed to individually) were largely unrelated to reported strategy use. We discuss the implications of these findings for both improving intelligence analysis and developing an evidence-based approach to policy and practice in this domain.

    Using scenarios to forecast outcomes of a refugee crisis

    The Syrian civil war has led to millions of Syrians fleeing the country and has resulted in a humanitarian crisis. By considering how such socio-political events may unfold, scenarios can lead to informed forecasts that can be used for decision-making. We examined the relationship between scenarios and forecasts in the context of the Syrian refugee crisis. Forty Turkish students trained to use a brainstorming technique generated scenarios that might follow within six months of the Turkish government banning Syrian refugees from entering the country. Participants generated between three and six scenarios. Over half were rated as ‘high’ quality in terms of completeness, relevance/pertinence, plausibility, coherence, and transparency (order effects). Scenario quality was unaffected by scenario quantity. Even though no forecasts were requested, participants’ first scenarios contained between 0 and 17 forecasts. Mean forecast accuracy was 45%, and this was unaffected by forecast quantity. Therefore, brainstorming can offer a simple and quick way of generating scenarios and forecasts that can potentially help decision-makers tackle humanitarian crises.

    Scenario generation and scenario quality using the cone of plausibility

    The intelligence analysis domain is a critical area for futures work. Indeed, intelligence analysts’ judgments of security threats are based on considerations of how futures may unfold, and as such play a vital role in informing policy- and decision-making. In this domain, futures are typically considered using qualitative scenario generation techniques such as the cone of plausibility (CoP). We empirically examined the quality of scenarios generated using this technique on five criteria: completeness, context (otherwise known as ‘relevance/pertinence’), plausibility, coherence, and order effects (i.e., ‘transparency’). Participants were trained to use the CoP and then asked to generate scenarios that might follow within six months of the Turkish government banning Syrian refugees from entering the country. On average, participants generated three scenarios, and these could be characterized as baseline, best case, and worst case. All scenarios were significantly more likely to be of high quality on the ‘coherence’ criterion compared to the other criteria. Scenario quality was independent of scenario type. However, scenarios generated first were significantly more likely to be of high quality on the context and order effects criteria compared to those generated afterwards. We discuss the implications of these findings for the use of the CoP as well as other qualitative scenario generation techniques in futures studies.

    Intelligence analysis support guide: development and validation

    Research shows that intelligence analysts do not routinely follow a logical workflow, that they do not always use critical thinking, and that their training and experience are unrelated to their performance. The Analysis Support Guide (ASG) aims to capture, communicate, and encourage good analytic practice. The ASG is informed by organizational intelligence doctrine and past research on intelligence analysis. The ASG includes the generic analytic workflow, prompts for good practice at each stage of the workflow, indicators of good and poor analytic practice, and an analytic investigation questionnaire. The findings of a small-scale content validation study of the ASG are reported here. Fourteen analysts provided detailed feedback on its content. The results informed a revision of the ASG that is currently used to train new and experienced analysts. The ASG can also inform the development of analytic technologies and future research on the psychology of intelligence analysis.

    Boosting intelligence analysts’ judgment accuracy: what works, what fails?

    A routine part of intelligence analysis is judging the probability of alternative hypotheses given available evidence. Intelligence organizations advise analysts to use intelligence-tradecraft methods such as Analysis of Competing Hypotheses (ACH) to improve judgment, but such methods have not been rigorously tested. We compared the evidence evaluation and judgment accuracy of a group of intelligence analysts who were recently trained in ACH and then used it on a probability judgment task with another group of analysts from the same cohort who were neither trained in ACH nor asked to use any specific method. Although the ACH group assessed information usefulness better than the control group, the control group was slightly more accurate (and more coherent) than the ACH group. Both groups, however, exhibited suboptimal judgment and were susceptible to unpacking effects. Although ACH failed to improve accuracy, we found that recalibration and aggregation methods substantially improved accuracy. Specifically, mean absolute error (MAE) in analysts’ probability judgments decreased by 61% after first coherentizing their judgments (a process that ensures judgments respect the unitarity axiom) and then aggregating them. The findings cast doubt on the efficacy of ACH and show the promise of statistical methods for boosting judgment quality in intelligence and other organizations that routinely produce expert judgments.
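
    The coherentize-then-aggregate procedure mentioned in the abstract above can be illustrated with a short sketch. The Python example below is a hypothetical rendering rather than the paper's implementation: it assumes coherentization is a simple proportional rescaling so that judgments over mutually exclusive, exhaustive hypotheses sum to 1 (the unitarity axiom), aggregation is an unweighted mean across analysts, and accuracy is scored as mean absolute error against 0/1 outcomes. The function names (coherentize, aggregate, mean_absolute_error) and the toy data are illustrative only.

    import numpy as np

    def coherentize(judgments):
        # Rescale one analyst's probability judgments over mutually exclusive,
        # exhaustive hypotheses so they sum to 1 (the unitarity axiom).
        judgments = np.asarray(judgments, dtype=float)
        return judgments / judgments.sum()

    def aggregate(judgment_sets):
        # Coherentize each analyst's judgments, then take the unweighted mean
        # for each hypothesis across analysts.
        return np.mean([coherentize(j) for j in judgment_sets], axis=0)

    def mean_absolute_error(judgments, outcomes):
        # Mean absolute error between probability judgments and the 0/1 outcome
        # vector (1 marks the hypothesis that turned out to be true).
        return float(np.mean(np.abs(np.asarray(judgments) - np.asarray(outcomes))))

    # Toy example: three analysts judge the same two-hypothesis problem.
    analysts = [[0.7, 0.5],   # incoherent: probabilities sum to 1.2
                [0.6, 0.3],
                [0.8, 0.4]]
    pooled = aggregate(analysts)
    print(pooled, mean_absolute_error(pooled, [1, 0]))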
