
    Communicating uncertainty using words and numbers

    Life in an increasingly information-rich but highly uncertain world calls for an effective means of communicating uncertainty to a range of audiences. Senders prefer to convey uncertainty using verbal probabilities (e.g. likely) rather than numeric ones (e.g. a 75% chance), even in consequential domains such as climate science. However, verbal probabilities can convey something other than uncertainty, and senders may exploit this: their vagueness, for instance, lets senders maintain credibility after making erroneous predictions. While verbal probabilities afford ease of expression, they are easily misunderstood, and the potential for miscommunication is not effectively mitigated by assigning (imprecise) numeric probabilities to words. When making consequential decisions, recipients prefer (precise) numeric probabilities.

    Crime as risk taking

    Engagement in criminal activity may be viewed as risk-taking behaviour, since it has both benefits and drawbacks that are probabilistic. In two studies, we examined how individuals' risk perceptions can inform our understanding of their intentions to engage in criminal activity. Study 1 measured youths' perceptions of the value and probability of the benefits and drawbacks of engaging in three common crimes (shoplifting, forgery, and buying illegal drugs), and examined how well these perceptions predicted youths' forecasted engagement in these crimes, controlling for their past engagement. We found that intentions to engage in criminal activity were best predicted by the perceived value of the benefits that might be obtained, irrespective of their probabilities or of the drawbacks that might also be incurred. Study 2 specified the benefit and drawback that youths were to consider and examined another crime (drinking and driving); the findings of Study 1 were replicated under these conditions. The present research supports a limited-rationality perspective on criminal intentions and has implications for crime prevention and intervention strategies.
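
    To make the contrast concrete, here is a minimal sketch, under assumed 0-1 probability and 0-10 value ratings (all numbers hypothetical), of a full subjective-expected-utility score versus the benefit-value-only predictor that the findings favour:

```python
# Hypothetical ratings for one crime; the study's actual measures differ.
def seu_score(p_benefit, v_benefit, p_drawback, v_drawback):
    """Full risk-taking model: probability-weighted benefits minus drawbacks."""
    return p_benefit * v_benefit - p_drawback * v_drawback

def benefit_only_score(v_benefit):
    """Predictor suggested by the findings: perceived benefit value alone."""
    return v_benefit

print(seu_score(0.6, 7.0, 0.3, 9.0))   # 1.5
print(benefit_only_score(7.0))         # 7.0
```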

    Words or numbers? Communicating probability in intelligence analysis

    Intelligence analysis is fundamentally an exercise in expert judgment made under conditions of uncertainty. These judgments are used to inform consequential decisions. Following the major intelligence failure that led to the 2003 war in Iraq, intelligence organizations implemented policies for communicating probability in their assessments. Virtually all chose to convey probability using standardized linguistic lexicons in which an ordered set of select probability terms (e.g., highly likely) is associated with numeric ranges (e.g., 80-90%). We review the benefits and drawbacks of this approach, drawing on psychological research on probability communication and studies that have examined the effectiveness of standardized lexicons. We further discuss how numeric probabilities can overcome many of the shortcomings of linguistic probabilities. Numeric probabilities are not without drawbacks (e.g., they are more difficult to elicit and may be misunderstood by receivers with poor numeracy). However, these drawbacks can be ameliorated with training and practice, whereas the pitfalls of linguistic probabilities are endemic to the approach. We propose that, on balance, the benefits of using numeric probabilities outweigh their drawbacks. Given the enormous costs associated with intelligence failure, the intelligence community should reconsider its reliance on linguistic probabilities in intelligence assessments. Our discussion also has implications for probability communication in other domains such as climate science.
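
    The standardized-lexicon approach reviewed above can be sketched as a simple lookup table; the terms and ranges below are illustrative only, not any organization's official lexicon:

```python
# Illustrative lexicon mapping probability terms to numeric ranges.
LEXICON = {
    "remote":          (0.00, 0.05),
    "highly unlikely": (0.05, 0.20),
    "unlikely":        (0.20, 0.45),
    "roughly even":    (0.45, 0.55),
    "likely":          (0.55, 0.80),
    "highly likely":   (0.80, 0.90),
    "almost certain":  (0.90, 1.00),
}

def term_for(p):
    """Return the first lexicon term whose range contains probability p."""
    for term, (lo, hi) in LEXICON.items():
        if lo <= p <= hi:
            return term
    raise ValueError(f"probability {p} out of range")

print(term_for(0.85))  # highly likely
```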

    Identifying Parkinson’s Patients: A Functional Gradient Boosting Approach

    Parkinson’s, a progressive neural disorder, is difficult to identify owing to the hidden nature of its associated symptoms. We present a machine learning approach that takes as input a definite set of features obtained from the Parkinson’s Progression Markers Initiative (PPMI) study and classifies each case into one of two classes: PD (Parkinson’s disease) or HC (healthy control). To our knowledge, this is the first work applying machine learning algorithms to classify patients with Parkinson’s disease that involves a domain expert in the feature selection process. We evaluate our approach on 1,194 patients from the PPMI study and show that it achieves state-of-the-art performance with minimal feature engineering.
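
    A minimal sketch of a gradient-boosting classifier in the spirit of the approach above, using synthetic stand-in features rather than PPMI data (the paper's functional gradient boosting method and expert-selected features are not reproduced here):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1194                       # cohort size reported in the abstract
X = rng.normal(size=(n, 10))   # placeholder for expert-selected features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)  # 1 = PD, 0 = HC

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier(n_estimators=200, max_depth=3).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```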

    Towards a pedagogy for critical security studies: politics of migration in the classroom

    International Relations (IR) has increasingly paid attention to critical pedagogy. Feminist, post-colonial and poststructuralist IR scholarship, in particular, have long been advancing discussions about how to create a pluralist and democratic classroom in which ‘the others’ of politics can be heard by students, who can then critically reflect upon complex power relations in global politics. Despite its normative position, Critical Security Studies (CSS) has so far refrained from joining this pedagogical conversation. Drawing on the literatures of postcolonial and feminist pedagogical practice, it is argued that an IR scholar in the area of CSS can contribute, in the ‘uncomfortable classroom’, to the production of a critical political subject who reflects on violent practices of security. Three pedagogical methods are introduced: engaging with students’ life worlds, revealing the positionality of security knowledge claims, and opening up the classroom to choices about how young people’s agency can be performed beyond it. The argument is illustrated through the case of forced migration, with specific reference to IR and Politics students’ perceptions of Syrian refugees in Turkey. The article advances discussions in critical IR pedagogy and encourages CSS scholarship to focus on teaching in accordance with its normative position.

    Scenario generation and scenario quality using the cone of plausibility

    The intelligence analysis domain is a critical area for futures work. Indeed, intelligence analysts’ judgments of security threats are based on considerations of how futures may unfold, and as such play a vital role in informing policy- and decision-making. In this domain, futures are typically considered using qualitative scenario generation techniques such as the cone of plausibility (CoP). We empirically examined the quality of scenarios generated using this technique on five criteria: completeness, context (otherwise known as ‘relevance/pertinence’), plausibility, coherence, and order effects (i.e., ‘transparency’). Participants were trained to use the CoP and then asked to generate scenarios that might follow within six months of the Turkish government banning Syrian refugees from entering the country. On average, participants generated three scenarios, which could be characterized as baseline, best case, and worst case. All scenarios were significantly more likely to be of high quality on the ‘coherence’ criterion than on the other criteria. Scenario quality was independent of scenario type. However, scenarios generated first were significantly more likely to be of high quality on the context and order-effects criteria than those generated afterwards. We discuss the implications of these findings for the use of the CoP, as well as other qualitative scenario generation techniques, in futures studies.
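
    As a rough illustration of the CoP's bookkeeping, the sketch below assumes the technique's usual structure (key drivers, one assumption per driver, and scenarios built by keeping or varying those assumptions); the drivers and wording are invented for the refugee scenario above:

```python
# Illustrative drivers and assumptions; a baseline keeps all assumptions,
# while alternative scenarios vary one or more of them.
drivers = {
    "border policy":      "ban on Syrian refugee entry persists",
    "international aid":  "aid levels remain flat",
    "regional stability": "no major escalation in Syria",
}

def scenario(kind, changes=None):
    assumptions = dict(drivers)
    if changes:
        assumptions.update(changes)
    return {"type": kind, "assumptions": assumptions}

baseline   = scenario("baseline")
best_case  = scenario("best case",  {"border policy": "ban lifted under international pressure"})
worst_case = scenario("worst case", {"regional stability": "escalation drives new displacement"})

for s in (baseline, best_case, worst_case):
    print(s["type"], "->", s["assumptions"]["border policy"])
```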

    Boosting intelligence analysts’ judgment accuracy: what works, what fails?

    A routine part of intelligence analysis is judging the probability of alternative hypotheses given the available evidence. These judgments are used to inform consequential decisions. Intelligence organizations advise analysts to use intelligence-tradecraft methods such as the Analysis of Competing Hypotheses (ACH) to improve judgment, but such methods have not been rigorously tested. We compared the evidence evaluation and judgment accuracy of a group of intelligence analysts who were recently trained in ACH and then used it on a probability judgment task with those of another group of analysts from the same cohort who were neither trained in ACH nor asked to use any specific method. Although the ACH group assessed information usefulness better than the control group, the control group was slightly more accurate (and more coherent) than the ACH group. Both groups, however, exhibited suboptimal judgment and were susceptible to unpacking effects. Although ACH failed to improve accuracy, we found that recalibration and aggregation methods improved it substantially: mean absolute error (MAE) in analysts’ probability judgments decreased by 61% after first coherentizing their judgments (a process that ensures the judgments respect the unitarity axiom) and then aggregating them. The findings cast doubt on the efficacy of ACH and show the promise of statistical methods for boosting judgment quality in intelligence and other organizations that routinely produce expert judgments.
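
    A minimal sketch of the "coherentize, then aggregate" procedure described above, assuming judgments over mutually exclusive and exhaustive hypotheses and simple unweighted averaging (all numbers illustrative; the paper's exact recalibration may differ):

```python
import numpy as np

def coherentize(p):
    """Rescale one analyst's judgment vector so it respects the unitarity
    axiom: probabilities over exhaustive, exclusive hypotheses sum to 1."""
    p = np.asarray(p, dtype=float)
    return p / p.sum()

# Three analysts judging two exhaustive hypotheses; ground truth (0.7, 0.3).
judgments = np.array([[0.8, 0.5], [0.6, 0.6], [0.9, 0.4]])
truth = np.array([0.7, 0.3])

coherent = np.array([coherentize(row) for row in judgments])
aggregated = coherent.mean(axis=0)          # unweighted aggregation

print("MAE raw:", np.abs(judgments - truth).mean())      # ~0.167
print("MAE fixed:", np.abs(aggregated - truth).mean())   # ~0.097
```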

    The "analysis of competing hypotheses" in intelligence analysis

    The intelligence community uses ‘structured analytic techniques’ to help analysts think critically and avoid cognitive bias. However, little evidence exists of how these techniques are applied and whether they are effective. We examined the use of the Analysis of Competing Hypotheses (ACH), a technique designed to reduce ‘confirmation bias’. Fifty intelligence analysts were randomly assigned to use ACH or not while completing a hypothesis-testing task that had probabilistic ground truth. Data on analysts’ judgment processes and conclusions were collected using written protocols that were then coded for statistical analysis. We found that ACH-trained analysts did not follow all of the steps of ACH. There was mixed evidence for ACH’s ability to reduce confirmation bias, and we observed that ACH may increase judgment inconsistency and error. It may be prudent for the intelligence community to consider the conditions under which ACH would prove useful, and to explore alternatives.
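
    For readers unfamiliar with ACH, its core bookkeeping (as commonly described following Heuer) can be sketched as below: each piece of evidence is rated against each hypothesis, and the hypothesis with the fewest inconsistencies is the least disconfirmed. The evidence items and ratings here are invented:

```python
# Each evidence item is rated against each hypothesis:
# "C" = consistent, "I" = inconsistent, "N" = neutral/not applicable.
ratings = {
    "intercepted message": {"H1": "C", "H2": "I"},
    "satellite imagery":   {"H1": "I", "H2": "I"},
    "informant report":    {"H1": "C", "H2": "N"},
}

def inconsistency_scores(ratings):
    """Tally inconsistent marks per hypothesis; lower is less disconfirmed."""
    scores = {}
    for cells in ratings.values():
        for hyp, mark in cells.items():
            scores[hyp] = scores.get(hyp, 0) + (mark == "I")
    return scores

scores = inconsistency_scores(ratings)
print(scores)                        # {'H1': 1, 'H2': 2}
print(min(scores, key=scores.get))   # H1 is least disconfirmed
```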

    Meta-informational cue inconsistency and judgment of information accuracy: spotlight on intelligence analysis

    Meta-information is information about information; it can serve as a cue to guide judgments and decisions. Three types of meta-information routinely used in intelligence analysis are source reliability, information credibility, and classification level. The first two cues are intended to speak to information quality (in particular, the probability that the information is accurate), while classification level is intended to describe the information’s security sensitivity. Two experiments involving professional intelligence analysts (N = 25 and 27, respectively) manipulated meta-information in a 6 (source reliability) × 6 (information credibility) × 2 (classification) repeated-measures design. Ten additional items were retested to measure intra-individual reliability. Analysts judged the probability of information accuracy based on its meta-informational profile. In both experiments, the judged probability of information accuracy was sensitive to ordinal position on the scales and to the directionality of the linguistic terms used to anchor the levels of the two scales. Directionality led analysts to group the first three levels of each scale as positive and the fourth and fifth levels as negative, with the neutral term “cannot be judged” falling between these groups. Critically, as reliability and credibility cue inconsistency increased, there was a corresponding decrease in intra-analyst reliability, inter-analyst agreement, and effective cue utilization. Neither experiment found a significant effect of classification level on probability judgments.
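
    A sketch of the two quality scales and a simple cue-inconsistency measure; the level labels follow common "Admiralty"-style wording, and the rank-gap metric is an assumption rather than necessarily the study's operationalization:

```python
# Assumed scale wordings; the study's exact anchor terms may differ.
RELIABILITY = ["completely reliable", "usually reliable", "fairly reliable",
               "not usually reliable", "unreliable", "cannot be judged"]
CREDIBILITY = ["confirmed", "probably true", "possibly true",
               "doubtfully true", "improbable", "cannot be judged"]

def inconsistency(rel_level, cred_level):
    """Absolute rank gap between the two quality cues (level 1 = best)."""
    return abs(rel_level - cred_level)

profile = ("usually reliable", "doubtfully true")
rel = RELIABILITY.index(profile[0]) + 1    # level 2
cred = CREDIBILITY.index(profile[1]) + 1   # level 4
print(inconsistency(rel, cred))            # 2: a moderately inconsistent pair
```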

    Study of thermal interaction of cell-phone radiations within human head tissues

    In the present investigation, a theoretical model based on Maxwell’s equations, the microscopic form of Ohm’s law, and Joule’s law of heating is proposed for studying penetration depth, attenuation coefficient, and specific absorption rate (SAR) as the distance between the radiation source and the exposed human head tissues (skin, fat, brain, and bone) varies. The corresponding temperature increase inside these tissues is also calculated. The results indicate that the temperature rise in human tissue depends on the specific absorption rate and the duration for which the body is actually exposed to GSM radiation. Assuming a distance of 1 cm and an exposure time of 5 minutes, the highest SAR was estimated for brain tissue: 1681.7 W/kg at 900 MHz and 4038.5 W/kg at 1800 MHz. Among the head tissues studied, the maximum skin depth was found for fat and the maximum attenuation coefficient for brain tissue. The corresponding highest temperature rise for brain tissue was calculated to be 2.31 K at 900 MHz and 5.54 K at 1800 MHz.
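
    The heating relations such a model builds on are standard: SAR = σ|E|²/ρ and, neglecting heat conduction and blood flow, ΔT = SAR·t/c. A minimal sketch with rough, illustrative tissue parameters (not the paper's inputs):

```python
def sar(sigma, e_field, rho):
    """Specific absorption rate (W/kg): conductivity sigma (S/m),
    field strength e_field (V/m), tissue density rho (kg/m^3)."""
    return sigma * e_field**2 / rho

def temperature_rise(sar_value, exposure_s, specific_heat):
    """Adiabatic temperature rise (K); specific_heat in J/(kg*K)."""
    return sar_value * exposure_s / specific_heat

s = sar(sigma=0.94, e_field=100.0, rho=1040.0)   # brain-like values near 900 MHz
print(f"SAR = {s:.2f} W/kg")
print(f"dT over 5 min = {temperature_rise(s, 300, 3630):.2f} K")
```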