
    Null hypothesis testing ≠ scientific inference: a critique of the shaky premise at the heart of the science and values debate, and a defense of value‐neutral risk assessment

    Many philosophers and statisticians argue that risk assessors are morally obligated to evaluate the probabilities and consequences of methodological error, and to base their decisions about whether to adopt a given parameter value, model, or hypothesis on those considerations. This argument is couched within the rubric of null hypothesis testing, which I suggest is a poor descriptive and normative model for risk assessment. Risk regulation is not primarily concerned with evaluating the probability of data conditional upon the null hypothesis, but rather with measuring risks, estimating the consequences of available courses of action and inaction, formally characterizing uncertainty, and deciding what to do based upon explicit values and decision criteria. In turn, I defend an ideal of value‐neutrality, whereby the core inferential tasks of risk assessment—such as weighing evidence, estimating parameters, and model selection—should be guided by the aim of correspondence to reality. This is not to say that value judgments be damned, but rather that they should be accounted for within a structured approach to decision analysis, rather than embedded within risk assessment in an informal manner.
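
    As an illustrative aside (not drawn from the paper itself), the contrast can be sketched in a few lines of Python: a significance test asks how surprising the data are under a null hypothesis, whereas a decision-analytic criterion compares the expected consequences of the available actions under explicit values. The threshold, loss figures, and contamination numbers below are hypothetical.

        # Illustrative contrast, not the paper's method. All numbers are hypothetical.
        from statistics import NormalDist

        # Observed mean contaminant level, null-hypothesis mean, and standard error.
        x_bar, mu0, se = 10.4, 10.0, 0.3

        # 1) Null hypothesis testing: how likely are data at least this extreme under H0?
        z = (x_bar - mu0) / se
        p_value = 1 - NormalDist().cdf(z)          # one-sided p-value
        reject_null = p_value < 0.05

        # 2) Decision analysis: compare expected losses of acting vs. not acting,
        #    using an explicit probability that the true level exceeds a safety limit
        #    and explicit (hypothetical) consequence values.
        p_exceeds_limit = 1 - NormalDist(mu=x_bar, sigma=se).cdf(10.5)
        loss_if_act, loss_if_ignore_and_exceeded = 1.0, 20.0
        expected_loss_act = loss_if_act
        expected_loss_ignore = p_exceeds_limit * loss_if_ignore_and_exceeded
        act = expected_loss_ignore > expected_loss_act

        print(p_value, reject_null, p_exceeds_limit, act)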

    Fast and frugal crisis management: an analysis of rule-based judgment and choice during water contamination events

    Drawing on the fast and frugal research program, this paper describes a retrospective field study of decision making during water contamination events. It characterizes three heuristics employed in real-world decision making. The credibility heuristic discriminates between signals from targets and noise from distracters on the basis of the perceived trustworthiness of the message conveyor. With the precedent heuristic, the response to an unfolding event is determined by searching for past analogues (i.e. precedents) and, if found, treating the current event in the same fashion. By contrast, the facts-trump-speculation heuristic discriminates between conflicting explanations or claims according to how they rank on pre-determined hierarchies of evidence (orders of cue validities), neglecting utilities and avoiding the aggregation of competing lines of evidence. Rather than cataloguing the biases that these heuristics lead to, this paper focuses on the structural factors that shape each heuristic’s ecological rationality. In doing so, the study develops ideas about how particular infrastructure systems and forms of social organization structure the validity of cues, the accessibility of information, and the application of particular heuristics. The study also introduces the concept of safeguards to rule-based reasoning, and the idea that heuristics can be used to rationalize decisions and deployed strategically to persuade other social actors. The overarching claim is that the fast and frugal program provides a powerful framework for analyzing judgment and choice in organizations, and offers a bridge between psychological and political models of organizational behavior.
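
    A minimal sketch of how a lexicographic, facts-trump-speculation style rule might be coded is given below. It illustrates the general class of heuristic described (rank conflicting claims on a fixed evidence hierarchy, decide on the first discriminating cue, never aggregate), not the authors' model; the cue hierarchy and claims are invented for the example.

        # Illustrative sketch of a lexicographic evidence rule (not the paper's model).
        # Competing claims are compared on a pre-determined hierarchy of evidence;
        # the first cue that discriminates decides, and no aggregation is performed.

        # Hypothetical hierarchy, ordered from most to least valid cue.
        EVIDENCE_HIERARCHY = ["laboratory_confirmation", "field_measurement",
                              "operator_report", "anecdote"]

        def facts_trump_speculation(claim_a, claim_b):
            """Return the claim supported by the higher-ranked evidence, or None if tied."""
            for cue in EVIDENCE_HIERARCHY:
                a_has, b_has = cue in claim_a["evidence"], cue in claim_b["evidence"]
                if a_has and not b_has:
                    return claim_a
                if b_has and not a_has:
                    return claim_b
            return None  # no cue discriminates; fall back to another heuristic

        claim_a = {"label": "contamination present", "evidence": {"operator_report"}}
        claim_b = {"label": "false alarm", "evidence": {"field_measurement", "anecdote"}}
        print(facts_trump_speculation(claim_a, claim_b)["label"])  # "false alarm"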

    Handling uncertainty in models of seismic and postseismic hazards: toward robust methods and resilient societies

    Earthquakes, tsunamis, and landslides take a devastating toll on human lives, critical infrastructure, and ecosystems. Harnessing the predictive capacities of hazard models is key to transitioning from reactive approaches to disaster management toward building resilient societies, yet the knowledge that these models produce involves multiple uncertainties. The failure to properly account for these uncertainties has at times had important implications, from the flawed safety measures at the Fukushima power plant to the reliance on short‐term earthquake prediction models (reportedly at the expense of mitigation efforts) in modern China. This article provides an overview of methods for handling uncertainty in probabilistic seismic hazard assessment, tsunami hazard analysis, and debris flow modeling, considering best practices and areas for improvement. It covers sensitivity analysis, structured approaches to expert elicitation, methods for characterizing structural uncertainty (e.g., ensembles and logic trees), and the value of formal decision‐analytic frameworks even in situations of deep uncertainty.
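
    By way of illustration (the branch weights and hazard numbers below are invented, not taken from the article), a logic tree combines the exceedance probabilities produced by alternative models as a weighted mixture, with the spread across branches giving one view of structural uncertainty:

        # Illustrative logic-tree combination of hazard estimates (hypothetical numbers).
        # Each branch is an alternative model with an expert-assigned weight; the
        # combined hazard is the weighted average of the branch estimates.

        branches = [
            # (weight, annual probability of exceeding a given ground-motion level)
            (0.5, 2.0e-4),   # model A
            (0.3, 3.5e-4),   # model B
            (0.2, 1.0e-4),   # model C
        ]

        assert abs(sum(w for w, _ in branches) - 1.0) < 1e-9  # weights must sum to 1

        mean_hazard = sum(w * p for w, p in branches)
        low, high = min(p for _, p in branches), max(p for _, p in branches)

        print(f"weighted mean exceedance probability: {mean_hazard:.2e}")
        print(f"range across branches: {low:.2e} to {high:.2e}")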

    Heuristics structure and pervade formal risk assessment

    Lay perceptions of risk appear rooted more in heuristics than in reason. A major concern of the risk regulation literature is that such “error-strewn” perceptions may be replicated in policy, as governments respond to the (mis)fears of the citizenry. This has led many to advocate a relatively technocratic approach to regulating risk, characterized by high reliance on formal risk and cost-benefit analysis. However, through two studies of chemicals regulation, we show that the formal assessment of risk is pervaded by its own set of heuristics. These include rules to categorize potential threats, define what constitutes valid data, guide causal inference, and select and apply formal models. Some of these heuristics lay claim to theoretical or empirical justifications, others are more back-of-the-envelope calculations, while still others purport not to reflect some truth but simply to constrain discretion or perform a desk-clearing function. These heuristics can be understood as a way of authenticating or formalizing risk assessment as a scientific practice, representing a series of rules for bounding problems, collecting data, and interpreting evidence (a methodology). Heuristics are indispensable elements of induction, and so they are not problematic per se, but they can become so when treated as laws rather than as contingent and provisional rules. Pitfalls include the potential for systematic error, masking of uncertainties, strategic manipulation, and entrenchment. Our central claim is that by studying the rules of risk assessment qua rules, we develop a novel representation of the methods, conventions, and biases of the prior art.
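
    One familiar example of such a rule, offered here as a generic illustration of the kind of heuristic at issue rather than one of the specific rules identified in the two studies, is the use of default uncertainty factors to derive a reference dose from an animal no-observed-adverse-effect level:

        # Generic illustration of a rule-of-thumb used in chemicals risk assessment;
        # the NOAEL value is hypothetical, the 10x10 defaults are a common convention.
        noael_mg_per_kg_day = 50.0      # no-observed-adverse-effect level from an animal study

        uf_interspecies = 10.0          # default factor: animal-to-human extrapolation
        uf_intraspecies = 10.0          # default factor: variability among humans

        reference_dose = noael_mg_per_kg_day / (uf_interspecies * uf_intraspecies)
        print(f"reference dose: {reference_dose} mg/kg-day")  # 0.5 mg/kg-day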

    Approaches to evaluating model quality across different regime types in environmental and public health governance

    A reliance on mathematical modelling is a defining feature of modern global environmental and public health governance. Initially hailed as the vanguard of a new era of rational policy-making, models are now habitually subject to critical analyses. Their quality, in other words, is routinely queried, yet what exactly is quality in this context? The prevailing paradigm views model quality as a multi-dimensional concept, encompassing technical dimensions (e.g. precision and bias), value judgments, problem-framing, treatment of 'deep' uncertainties, and pragmatic features of particular decision contexts. Whilst those technical dimensions are relatively simple to characterise, the broader dimensions of quality are less easily formalised and, as a result, are difficult to take account of during model construction and evaluation. Here, we present a typology of governance regimes (risk-based, precautionary, adaptive and participatory) that helps make explicit what these broader dimensions of model quality are, and sketches out how the emphasis placed on them differs by regime type. We show that these regime types hold distinct positions on what constitutes sound evidence, on how that evidence should be used in policy-making, and to what social ends. As such, a model may be viewed within one regime as providing legitimate evidence for action, be down-weighted elsewhere for reflecting a flawed problem-framing, and be rejected outright in another jurisdiction on the grounds that it does not cohere with the preferred ethical framework for decision-making. We illustrate these dynamics by applying our typology to a range of policy domains, emphasising both the disconnects that can occur and the ways that modellers have adapted their practices to ensure that their evidence is brought to bear on policy problems across diverse regime types.

    The First Hour of Extra-galactic Data of the Sloan Digital Sky Survey Spectroscopic Commissioning: The Coma Cluster

    On 26 May 1999, one of the Sloan Digital Sky Survey (SDSS) fiber-fed spectrographs saw astronomical first light. This was followed by the first spectroscopic commissioning run during the dark period of June 1999. We present here the first hour of extra-galactic spectroscopy taken during these early commissioning stages: an observation of the Coma cluster of galaxies. Our data sample the southern part of this cluster, out to a radius of 1.5 degrees, and thus fully cover the NGC 4839 group. We outline in this paper the main characteristics of the SDSS spectroscopic systems and provide redshifts and spectral classifications for 196 Coma galaxies, of which 45 redshifts are new. For the 151 galaxies in common with the literature, we find excellent agreement between our redshift determinations and the published values. As part of our analysis, we have investigated four different spectral classification algorithms: spectral line strengths, a principal component decomposition, a wavelet analysis, and the fitting of spectral synthesis models to the data. We find that a significant fraction (25%) of our observed Coma galaxies show signs of recent star-formation activity, and that the velocity dispersion of these active galaxies (emission-line and post-starburst galaxies) is 30% larger than that of the absorption-line galaxies. We also find no active galaxies within the central (projected) 200 h^-1 kpc of the cluster. The spatial distribution of our Coma active galaxies is consistent with that found at higher redshift for the CNOC1 cluster survey. Beyond the core region, the fraction of bright active galaxies appears to rise slowly out to the virial radius; these galaxies are randomly distributed within the cluster, with no apparent correlation with the potential merger of the NGC 4839 group. [ABRIDGED] Comment: Accepted in AJ, 65 pages, 20 figures, 5 tables
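
    As a rough sketch of the principal-component style of classification mentioned above (illustrative only: the array shapes and data are synthetic, and this is not the paper's pipeline), spectra can be projected onto their leading eigenspectra and classified in that low-dimensional space:

        # Illustrative PCA decomposition of galaxy spectra (synthetic data, not SDSS).
        import numpy as np

        rng = np.random.default_rng(0)
        n_galaxies, n_wavelengths = 196, 500
        spectra = rng.normal(size=(n_galaxies, n_wavelengths))   # stand-in for flux arrays

        # Centre the spectra and compute principal components via SVD.
        centred = spectra - spectra.mean(axis=0)
        _, _, vt = np.linalg.svd(centred, full_matrices=False)

        # Project each spectrum onto the first two eigenspectra; in real analyses the
        # location in this plane helps separate absorption-line from emission-line galaxies.
        projections = centred @ vt[:2].T          # shape (n_galaxies, 2)
        print(projections.shape)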

    Implementation of corticosteroids in treating COVID-19 in the ISARIC WHO Clinical Characterisation Protocol UK: prospective observational cohort study

    BACKGROUND: Dexamethasone was the first intervention proven to reduce mortality in patients with COVID-19 being treated in hospital. We aimed to evaluate the adoption of corticosteroids in the treatment of COVID-19 in the UK after the RECOVERY trial publication on June 16, 2020, and to identify discrepancies in care. METHODS: We did an audit of clinical implementation of corticosteroids in a prospective, observational, cohort study in 237 UK acute care hospitals between March 16, 2020, and April 14, 2021, restricted to patients aged 18 years or older with proven or high likelihood of COVID-19 who received supplementary oxygen. The primary outcome was administration of dexamethasone, prednisolone, hydrocortisone, or methylprednisolone. This study is registered with ISRCTN, ISRCTN66726260. FINDINGS: Between June 17, 2020, and April 14, 2021, 47 795 (75·2%) of 63 525 patients on supplementary oxygen received corticosteroids; administration was higher among patients requiring critical care than among those who received ward care (11 185 [86·6%] of 12 909 vs 36 415 [72·4%] of 50 278). Patients 50 years or older were significantly less likely to receive corticosteroids than those younger than 50 years (adjusted odds ratio 0·79 [95% CI 0·70–0·89], p=0·0001, for 70–79 years; 0·52 [0·46–0·58] for 80 years and older), independent of patient demographics and illness severity. 84 (54·2%) of 155 pregnant women received corticosteroids. Rates of corticosteroid administration increased from 27·5% in the week before June 16, 2020, to 75–80% in January 2021. INTERPRETATION: Implementation of corticosteroids into clinical practice in the UK for patients with COVID-19 has been successful, but not universal. Patients older than 70 years, independent of illness severity, chronic neurological disease, and dementia, were less likely to receive corticosteroids than those who were younger, as were pregnant women. This could reflect appropriate clinical decision making, but the possibility of inequitable access to life-saving care should be considered. FUNDING: UK National Institute for Health Research and UK Medical Research Council.

    Para-infectious brain injury in COVID-19 persists at follow-up despite attenuated cytokine and autoantibody responses

    To understand neurological complications of COVID-19 better, both acutely and for recovery, we measured markers of brain injury, inflammatory mediators, and autoantibodies in 203 hospitalised participants; 111 with acute sera (1–11 days post-admission) and 92 with convalescent sera (56 with COVID-19-associated neurological diagnoses). Here we show that, compared to 60 uninfected controls, tTau, GFAP, NfL, and UCH-L1 are increased with COVID-19 infection at acute timepoints, and NfL and GFAP are significantly higher in participants with neurological complications. Inflammatory mediators (IL-6, IL-12p40, HGF, M-CSF, CCL2, and IL-1RA) are associated with both altered consciousness and markers of brain injury. Autoantibodies are more common in COVID-19 than in controls, and some (including those against MYL7, UCH-L1, and GRIN3B) are more frequent with altered consciousness. Additionally, convalescent participants with neurological complications show elevated GFAP and NfL, unrelated to attenuated systemic inflammatory mediators and to autoantibody responses. Overall, neurological complications of COVID-19 are associated with evidence of neuroglial injury in both acute and late disease, and these correlate with dysregulated innate and adaptive immune responses acutely.

    What can water utilities do to improve risk management within their business functions? An improved tool and application of process benchmarking.

    We present a model for benchmarking risk analysis and risk-based decision making practice within organisations. It draws on behavioural and normative risk research, the principles of capability maturity modelling, and our empirical observations. It codifies the processes of risk analysis and risk-based decision making within a framework that distinguishes between different levels of maturity. Application of the model is detailed within the selected business functions of a water and wastewater utility. Observed risk analysis and risk-based decision making practices are discussed, together with their maturity of implementation. The findings provide academics, utility professionals, and regulators with a deeper understanding of the practical and theoretical underpinnings of risk management, and of how distinctions can be made between organisational capabilities in this essential business process.
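
    A minimal sketch of the kind of maturity ladder such a model uses is shown below; the intermediate level names and the scored practices are invented for illustration (only the "ad hoc" and "adaptive" endpoints appear in the abstracts) and do not reproduce the published tool.

        # Illustrative capability-maturity ladder (hypothetical labels, not the published tool).
        MATURITY_LEVELS = ["ad hoc", "repeatable", "defined", "managed", "adaptive"]

        def maturity_score(observed_practices, required_practices_by_level):
            """Return the highest level whose required practices are all observed."""
            score = 0
            for level, required in enumerate(required_practices_by_level, start=1):
                if required <= observed_practices:
                    score = level
                else:
                    break
            return score, MATURITY_LEVELS[score - 1] if score else "not practised"

        required = [
            {"risk register exists"},
            {"risk register exists", "formal procedure followed"},
            {"risk register exists", "formal procedure followed", "periodic audit"},
        ]
        observed = {"risk register exists", "formal procedure followed"}
        print(maturity_score(observed, required))   # (2, 'repeatable')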

    Benchmarking risk management practice within the water utility sector

    Explicit approaches to risk analysis within the water utility sector, traditionally applied to occupational health and safety and public health protection, are now seeing broader application in contexts including corporate-level decision making, asset management, watershed protection, and network reliability. Our research suggested that neither the development of novel risk analysis techniques nor the refinement of existing ones was of paramount importance in improving the capabilities of water utilities to manage risk. It was thought that a more fruitful approach would be to focus on the implementation of risk management rather than the techniques employed per se. Thus, we developed a prescriptive capability maturity model for benchmarking the maturity of implementation of water utility risk management practice, and applied it to the sector via case study and benchmarking survey. We observed risk management practices ranging from the application of hazard and operability studies to the use of scenario planning in guiding organisational restructuring programmes. We observed methods for their institutionalisation, including the use of initiation criteria for applying risk analysis techniques; the adoption of formalised procedures to guide their application; and auditing and peer reviews to ensure procedural compliance and provide quality assurance. We then built upon this research to develop a descriptive capability maturity model of utility risk analysis and risk-based decision making practice, and described its case study application. The contribution to knowledge of this stage of the research was three-fold: we synthesised empirical observations with behavioural and normative theories to codify the processes of risk analysis and risk-based decision making; placed these processes within a maturity framework which distinguishes their relative maturity of implementation, from ad hoc to adaptive; and provided a comparative analysis of risk analysis and risk-based decision making practices, and their maturity of implementation, across a range of utility functions. The research provides utility managers, technical staff, project managers, and chief finance officers with a practical and systematic understanding of how to implement and improve risk management, and offers preliminary guidance to regulators concerning how improved water utility governance can be made real.