
    Risk Assessment of Deadly Economic Socio-Political Crisis with Correlational Network and Convolutional Neural Network

    From social analysis to biology to machine learning, graphs occur naturally in a wide range of applications. In contrast to studying data points one at a time, graphs' capacity to capture structural relationships among data enables them to yield additional insights. Nevertheless, learning from graphs can be difficult: meaningful connectivity must exist between data points, and the form of the data, such as text, numbers or categories, must allow a graph to be built from their relationships. This research investigates hidden patterns in the variation of development indicators and the severe socio-political crises that have occurred in low-income countries. Evidence of a correlation between socio-political crises and development indicators suggests that a method to assess the risk of crisis should consider the context of each country as well as the relative means of crisis. This research reviewed different risk assessment methods and proposed a novel method, based on a weighted correlation network and a convolutional neural network, that generates images representing the signature of development indicators correlating with a crisis. The convolutional neural network, trained to identify changes in indicators, can find countries with similar signatures and provide insight into the important indicators that might reduce the number of deadly crises in a country. This research enhances the knowledge of developing a quantitative risk assessment for crisis prevention with development indicators.
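    As a rough illustration of the approach described above, the sketch below builds a weighted correlation network from one country's development-indicator time series and renders it as a 2D array that a convolutional network could consume as an image. The indicator count, the thresholding rule and the random stand-in data are illustrative assumptions, not the paper's exact construction.

```python
# Hedged sketch of a "correlation signature image": turn a country's
# development-indicator time series into a weighted correlation network
# and render it as a 2D array suitable as CNN input.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: yearly values of K development indicators for one country (assumption).
K, YEARS = 16, 30
indicators = rng.normal(size=(K, YEARS)).cumsum(axis=1)   # random-walk stand-ins

def correlation_signature(series: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    """Return a K x K weighted-correlation 'image' for one country.

    Edges with absolute correlation below `threshold` are zeroed, one common
    way of turning a correlation matrix into a weighted network.
    """
    corr = np.corrcoef(series)                     # K x K Pearson correlations
    weights = np.where(np.abs(corr) >= threshold, corr, 0.0)
    np.fill_diagonal(weights, 0.0)                 # drop self-loops
    # Rescale from [-1, 1] to [0, 1] so the array behaves like an image channel.
    return (weights + 1.0) / 2.0

image = correlation_signature(indicators)
print(image.shape)   # (16, 16)
```

    Stacking one such array per country (or per time window preceding a crisis) would give a labelled image set on which a CNN could then be trained.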

    Judgement Analysis of Patient Management: General Practitioners' Policies and Self-Insight.

    In this thesis, judgement analysis (multiple linear regression techniques) was used to examine both GPs' decisions to prescribe certain types of drug for patients and their judgements of patients' risk of coronary heart disease. All of these were idiographic analyses, in that decision making by each GP was modelled separately. Judgement analysis (paramorphically) describes a subject's judgement or decision making policy in terms of the relative influence of different pieces of information. The amount of information subjects could take into account was limited: for all types of judgement or decision, doctors were influenced on average by only four of the twelve or thirteen cues available. The decision to prescribe one of the types of drug was modelled not only in terms of the individual effects of cues (judgement analysis) but also in terms of the influence of the doctor's assessment of the patient's risk. Doctors agreed more about judgements of risk, and the factors influencing them, than about prescription. Doctors prescribed only to patients they rated as at high risk, but factors such as smoking behaviour led some doctors not to prescribe to individuals in this group. Judgement and decision making policies (explicit policies) were also elicited verbally from doctors. These showed greater agreement than the policies captured using judgement analysis (tacit policies) did. When these explicit policies were compared to tacit policies, a moderate amount of correspondence was found; however, doctors tended to over-rate the importance of certain cues. A number of explanations for this pattern of self-insight were investigated, including the possibilities that doctors have self-insight but are unable to state it, and that the pattern was an artefact of linear modelling. Both of these hypotheses were rejected. Subjects' explicit policies were found to resemble the pattern of selection of information more than the pattern of its use. Both the hypothesis that subjects' explicit policies were based on phenomenal knowledge and the hypothesis that they were based on some ideal model (influencing which cues are selected) were supported.
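    For readers unfamiliar with judgement analysis, the sketch below shows the idiographic form used in the thesis: a separate multiple linear regression is fitted for each doctor, and the standardised coefficients describe that doctor's tacit cue weights. The cue names and the simulated judgements are fabricated stand-ins, not the thesis data.

```python
# Hedged sketch of idiographic judgement analysis: one regression per doctor,
# whose standardised coefficients act as that doctor's captured (tacit) policy.
import numpy as np

rng = np.random.default_rng(1)
CUES = ["age", "smoker", "cholesterol", "blood_pressure"]   # illustrative cues

def capture_policy(cue_matrix: np.ndarray, judgements: np.ndarray) -> np.ndarray:
    """Return standardised regression weights for one doctor's judgements."""
    X = (cue_matrix - cue_matrix.mean(0)) / cue_matrix.std(0)
    y = (judgements - judgements.mean()) / judgements.std()
    X = np.column_stack([np.ones(len(X)), X])               # intercept term
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]                                          # drop the intercept

# Simulate one doctor who mostly attends to smoking and blood pressure
# and ignores cholesterol.
cases = rng.normal(size=(60, len(CUES)))
risk_ratings = (0.1 * cases[:, 0] + 0.7 * cases[:, 1]
                + 0.5 * cases[:, 3] + rng.normal(scale=0.3, size=60))
weights = capture_policy(cases, risk_ratings)
for cue, w in zip(CUES, weights):
    print(f"{cue:15s} relative weight ~ {w:+.2f}")
```

    Comparing these captured weights with the weights a doctor states verbally is, in outline, how tacit and explicit policies are contrasted.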

    Cognitive Foundations for Visual Analytics

    In this report, we provide an overview of the scientific and technical literature on information visualization and visual analytics (VA). Topics discussed include an update and overview of the extensive literature search conducted for this study, the nature and purpose of the field, major research thrusts, and scientific foundations. We also review methodologies for evaluating and measuring the impact of VA technologies, as well as taxonomies that have been proposed for various purposes to support the VA community. A cognitive science perspective underlies each of these discussions.

    CACIC 2015: XXI Congreso Argentino de Ciencias de la Computación. Proceedings

    Proceedings of the XXI Congreso Argentino de Ciencias de la Computación (CACIC 2015), held at Sede UNNOBA, Junín, from 5 to 9 October 2015. Red de Universidades con Carreras en Informática (RedUNCI).

    A NEW TREATMENT OF LOW PROBABILITY EVENTS WITH PARTICULAR APPLICATION TO NUCLEAR POWER PLANT INCIDENTS

    Technological innovation is inescapable if civilisation is to continue in the face of population growth, rising expectations and resource exhaustion. Unfortunately, major innovations confidently thought to be safe occasionally fail catastrophically. The fears so engendered are impeding technical progress generally, and that of nuclear power in particular. Attempts to allay disquiet about these disastrous Low Probability Events (LPEs) by exhaustive studies of nuclear power plant designs have, so far, been less than successful. The New Treatment instead adopts an approach that, after examination of the LPE in its historical and societal settings, combines theoretical design analysis with construction-site and operational realities in pragmatic engineering, the quality of which can be assured by accountable inspection. The LPE is envisaged as a singularity in a stream of largely mundane but untoward incidents, described as 'Event-noise'. Predictions of the likelihood of plant LPEs by frequency-theory probability are illusory because the LPE is unique and not part of a stable distribution, and noise analysis appears to lead to intractable mathematical expressions. While theoretical LPE prognostications depend on identifying fault sequences in design that can either be designed out or reduced to plausibly negligible probabilities, the reality of LPE prevention lies with the plant in operation. As absolute safety is unattainable, the approach aims at ensuring that the perceived residual nuclear risk is societally tolerable. An adaptation of elementary catastrophe theory is proposed to model the prospective Event-noise field to be experienced by the plant, whereby potential, credible LPEs could be more readily discerned and avoided. In this milieu of increasingly sophisticated technology, where management in the traditional administrative mould is proving inadequate, the engineer emerges as the proper central decision-maker; the special intellectual capability needed is acquired during his training and experience, a claim that can draw support from new studies in neuropsychology. The Nuclear Installations Inspectorate is cited as an exemplar of a body practising the kind of engineering inspection needed to apprehend the human fallibilities to which most catastrophic failures of technology are due. Nevertheless, such regulatory systems lack accountability and, as Gödel's theorem suggests, cannot assess their own efficiency. Independent appraisal by Signal Detection Theory is suggested as a remedy.

    Towards Solving the Table Maker's Dilemma on GPU

    Since 1985, the IEEE 754 standard has defined formats, rounding modes and basic operations for floating-point arithmetic. In 2008 the standard was extended, and recommendations were added about the rounding of some elementary functions such as the trigonometric functions (cosine, sine, tangent and their inverses), exponentials, and logarithms. However, to guarantee the exact rounding of these functions one has to approximate them with a sufficient precision. Finding this precision is known as the Table Maker's Dilemma. To determine this precision, it is necessary to find the hardest-to-round argument of these functions. Lefèvre et al. proposed in 1998 an algorithm which improves on the exhaustive search by computing a lower bound on the distance between a line segment and a grid. We present in this paper an analysis of this algorithm in order to deploy it efficiently on GPU. We obtain a speedup of 15.4 on an NVIDIA Fermi GPU over a single high-end CPU core.
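    To make the Table Maker's Dilemma concrete, the sketch below runs the naive exhaustive search in a toy low-precision format: for every representable argument it measures how close exp(x) falls to a rounding boundary (a midpoint between consecutive representable values) and keeps the hardest-to-round case. It only illustrates the problem that Lefèvre et al.'s lower-bound filter, and the GPU deployment analysed in the paper, are designed to accelerate; the precision and search interval are assumptions.

```python
# Hedged sketch: naive exhaustive search for hardest-to-round cases of exp()
# in a toy binary format with P significant bits. This is NOT the paper's GPU
# algorithm; P and the search interval are illustrative assumptions.
import math

P = 12     # toy target precision in bits (assumption)
EXP = -1   # search over arguments x in [2^-1, 2^0) (assumption)

def hardest_to_round(p=P, e=EXP):
    ulp = 2.0 ** (e + 1 - p)          # spacing of p-bit floats in [2^e, 2^(e+1))
    best_x, best_dist = None, float("inf")
    x = 2.0 ** e
    while x < 2.0 ** (e + 1):
        y = math.exp(x)               # reference value (double precision here)
        # Distance from y to the nearest midpoint between consecutive p-bit
        # numbers in y's binade, measured in units of that binade's ulp.
        y_exp = math.floor(math.log2(y))
        y_ulp = 2.0 ** (y_exp + 1 - p)
        frac = (y / y_ulp) % 1.0
        dist = abs(frac - 0.5)        # 0 means y sits exactly on a midpoint
        if dist < best_dist:
            best_x, best_dist = x, dist
        x += ulp
    return best_x, best_dist

if __name__ == "__main__":
    x, d = hardest_to_round()
    print(f"hardest-to-round argument ~ {x}, distance to midpoint = {d:.3e} ulp")
```

    At realistic precisions such as binary64, this brute-force loop is infeasible, which is exactly why the lower-bound filter and its GPU parallelisation matter.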