
    Introducing the STAMP method in road tunnel safety assessment

    After the catastrophic accidents in European road tunnels over the past decade, many risk assessment methods have been proposed worldwide, most of them based on Quantitative Risk Assessment (QRA). Although QRAs are helpful for addressing the physical aspects and facilities of tunnels, current approaches in the road tunnel field are limited in their ability to model organizational aspects, software behavior, and the adaptation of the tunnel system over time. This paper reviews the aforementioned limitations and highlights the need to enhance the safety assessment process of these critical infrastructures with a complementary approach that links organizational factors to operational and technical issues, analyzes software behavior, and models the dynamics of the tunnel system. To achieve this objective, this paper examines the scope for introducing a safety assessment method that is based on the systems-thinking paradigm and draws upon the STAMP model. The proposed method is demonstrated through a case study of a tunnel ventilation system, and the results show that it has the potential to identify scenarios that encompass both the technical system and the organizational structure. However, since the method does not provide quantitative estimations of risk, it is recommended as a complementary approach to traditional risk assessments rather than as an alternative.

    Automated measurement of the spontaneous tail coiling of zebrafish embryos as a sensitive behavior endpoint using a workflow in KNIME

    Neuroactive substances are the largest group of chemicals detected in European surface waters. Mixtures of neuroactive substances occurring at low concentrations can induce adverse neurological effects in humans and organisms in the environment. Therefore, there is a need to develop new screening tools to detect these chemicals. Measurements of behavior or motor effects in rodents and fish are usually performed to assess potential neurotoxicity for risk assessment. However, due to the pain and stress inflicted on these animals, the scientific community is advocating for new alternative methods based on the 3R principle (reduce, replace and refine). As a result, behavior measurements in early-stage zebrafish embryos, such as locomotor response, photomotor response and spontaneous tail coiling, are considered a valid alternative to adult animal testing. In this study, we developed a workflow to investigate the spontaneous tail coiling (STC) of zebrafish embryos and to accurately measure the STC effect in the KNIME software. We validated the STC protocol with 3 substances (abamectin, chlorpyrifos-oxon and pyraclostrobin) which have different mechanisms of action. The KNIME workflow, combined with an easy and cost-effective method of video acquisition, makes this STC protocol a valuable method for neurotoxicity testing.

    Wildfire risk for main vegetation units in a biodiversity hotspot : modeling approach in New Caledonia, South Pacific

    Wildfire has been recognized as one of the most ubiquitous disturbance agents impacting natural environments. In this study, our main objective was to propose a modeling approach to investigate the potential impact of wildfire on biodiversity. The method is illustrated with an application example in New Caledonia, where conservation and sustainable biodiversity management represent an important challenge. Firstly, a biodiversity loss index, combining diversity and vulnerability indexes, was calculated for every vegetation unit in New Caledonia and mapped according to its distribution over the New Caledonian mainland. Then, based on spatially explicit fire behavior simulations (using the FLAMMAP software) and fire ignition probabilities, two original fire risk assessment approaches were proposed: a one-off event model and a multi-event burn probability model. The spatial distribution of fire risk across New Caledonia was similar for both indices, with very small localized spots having high risk. The patterns relating to highest risk are all located around the remaining sclerophyll forest fragments and represent 0.012% of the mainland surface. A small part of the maquis and areas adjacent to dense humid forest on ultramafic substrates should also be monitored. Vegetation interfaces between secondary and primary units displayed high risk and should represent priority zones for fire effects mitigation. Low fire ignition probability in anthropogenic-free areas drastically decreases the risk. The one-off event risk model allowed localization of the most likely ignition areas with potential for extensive damage. Emergency actions could aim at limiting the spread of specific fires known to have high impact, or target high-risk areas to limit one-off fire ignitions. Spatially explicit information on burning probability is necessary for strategic fire and fuel management planning. Both risk indices provide clues for preserving the New Caledonian biodiversity hotspot in the face of wildfires.
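    The multi-event burn probability idea above can be illustrated with a toy simulation (not the FLAMMAP model used in the paper; grid size, spread probability, and the 4-neighbour spread rule are all simplifying assumptions): many ignitions are simulated and each cell's burn frequency across events approximates its burn probability.

```python
import random

def simulate_burn_probability(grid_size=10, n_fires=1000, spread_prob=0.3, seed=42):
    """Toy multi-event burn-probability map: repeatedly ignite a random cell,
    spread fire to 4-neighbours with a fixed probability, and record how
    often each cell burned across all simulated events."""
    rng = random.Random(seed)
    burn_counts = [[0] * grid_size for _ in range(grid_size)]
    for _ in range(n_fires):
        start = (rng.randrange(grid_size), rng.randrange(grid_size))
        burned, frontier = {start}, [start]
        while frontier:
            r, c = frontier.pop()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < grid_size and 0 <= nc < grid_size
                        and (nr, nc) not in burned and rng.random() < spread_prob):
                    burned.add((nr, nc))
                    frontier.append((nr, nc))
        for r, c in burned:
            burn_counts[r][c] += 1
    # Burn probability per cell = fraction of simulated events in which it burned.
    return [[n / n_fires for n in row] for row in burn_counts]
```

    In a real application, ignition locations would be drawn from the estimated ignition-probability surface and spread would depend on fuel, wind, and terrain rather than a uniform constant.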

    Towards Validating Risk Indicators Based on Measurement Theory (Extended version)

    Due to the lack of quantitative information, and for cost-efficiency, most risk assessment methods use partially ordered values (e.g. high, medium, low) as risk indicators. In practice it is common to validate risk indicators by asking stakeholders whether they make sense. This form of validation is subjective and thus error-prone. If the metrics are wrong (not meaningful), they may lead system owners to distribute security investments inefficiently. For instance, in an extended enterprise this may mean over-investing in service level agreements, or obtaining a contract that provides a lower security level than the system requires. Therefore, when validating risk assessment methods it is important to validate the meaningfulness of the risk indicators that they use. In this paper we investigate how to validate the meaningfulness of risk indicators based on measurement theory. Furthermore, to analyze the applicability of measurement theory to risk indicators, we analyze the indicators used by a risk assessment method specially developed for assessing confidentiality risks in networks of organizations.
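    The measurement-theoretic point about meaningfulness can be made concrete with a small example (not from the paper; the codings and asset values are invented for illustration): on an ordinal scale, any order-preserving numeric coding is equally admissible, so a conclusion drawn from arithmetic on the codes is only meaningful if it survives every such recoding.

```python
# Two admissible (order-preserving) numeric codings of the same ordinal scale.
coding_a = {"low": 1, "medium": 2, "high": 3}
coding_b = {"low": 1, "medium": 2, "high": 10}

# Hypothetical risk indicator values for two assets.
asset_x = ["high", "low", "low"]        # one severe indicator
asset_y = ["medium", "medium", "medium"]

def mean_score(levels, coding):
    """Average of the numeric codes -- arithmetic that ordinal scales do not license."""
    return sum(coding[level] for level in levels) / len(levels)

# Under coding A, asset Y's mean is higher; under coding B, asset X's is.
# A ranking that flips under an admissible recoding is not meaningful.
```

    This is exactly the kind of hidden assumption that stakeholder "does it make sense?" validation tends to miss.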

    Optimizing the assessment of suicidal behavior: the application of curtailment techniques

    Background: Given their length, commonly used scales to assess suicide risk, such as the Beck Scale for Suicide Ideation (SSI), are of limited use as screening tools. In the current study we tested whether deterministic and stochastic curtailment can be applied to shorten the 19-item SSI without compromising its accuracy. Methods: Data from 366 patients, who were seen by a liaison psychiatry service in a general hospital in Scotland after a suicide attempt, were used. Within 24 h of admission, the SSI was administered; 15 months later, it was determined whether a patient had been re-admitted to a hospital as the result of another suicide attempt. We fitted a Receiver Operating Characteristic curve to derive the best cut-off value of the SSI for predicting future suicidal behavior. Using this cut-off, both deterministic and stochastic curtailment were simulated on the item score patterns of the SSI. Results: A cut-off value of SSI≥6 provided the best classification accuracy for future suicidal behavior. Using this cut-off, we found that both deterministic and stochastic curtailment reduce the length of the SSI without reducing the accuracy of the final classification decision. With stochastic curtailment, on average, fewer than 8 items are needed to assess whether administration of the full-length test would result in an SSI score below or above the cut-off value of 6. Limitations: New studies using other datasets should re-validate the optimal cut-off for risk of repeated suicidal behavior after being treated in a hospital following an attempt. Conclusions: Curtailment can be used to simplify the assessment of suicidal behavior, and should be considered as an alternative to the full scale.
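    Deterministic curtailment as described above can be sketched briefly (a simplified illustration, not the authors' implementation; the 0–2 item scoring and the function name are assumptions): administration stops as soon as the running total either reaches the cut-off or can no longer reach it even if every remaining item scored the maximum.

```python
def curtailed_ssi(item_scores, cutoff=6, max_item_score=2):
    """Deterministic curtailment of a sequentially administered scale:
    stop as soon as the classification (total >= cutoff vs. below)
    can no longer change. Returns (decision, items_administered)."""
    total = 0
    for i, score in enumerate(item_scores, start=1):
        total += score
        remaining = len(item_scores) - i
        if total >= cutoff:
            return "at-risk", i                     # cut-off already reached
        if total + remaining * max_item_score < cutoff:
            return "below-cutoff", i                # cut-off now unreachable
    return ("at-risk" if total >= cutoff else "below-cutoff"), len(item_scores)
```

    For example, a patient scoring 2 on each of the first three items is classified after 3 items, while an all-zero pattern can be stopped once the remaining items can no longer sum to the cut-off. Stochastic curtailment goes further by stopping when the eventual decision is merely highly probable rather than certain.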

    Towards a scope management of non-functional requirements in requirements engineering

    Getting business stakeholders’ goals formulated clearly and project scope defined realistically increases the chance of success for any application development process. As a consequence, stakeholders at early project stages acquire as much knowledge as possible about the requirements, their risk estimates and their prioritization. Current industrial practice suggests that in most software projects this scope assessment is performed on the user’s functional requirements (FRs), while the non-functional requirements (NFRs) remain, by and large, ignored. However, increasing software complexity and competition in the software industry have highlighted the need to consider NFRs as an integral part of software modeling and development. This paper contributes towards harmonizing the need to build the functional behavior of a system with the need to model the associated NFRs, while maintaining scope management for NFRs. The paper presents a systematic and precisely defined model for the early integration of NFRs within requirements engineering (RE). Early experiences with the model indicate its ability to facilitate the process of acquiring knowledge on the priority and risk of NFRs.

    Alert-BDI: BDI Model with Adaptive Alertness through Situational Awareness

    In this paper, we address the problems faced by a group of agents that possess situational awareness but lack a security mechanism, through the introduction of an adaptive risk management system. The Belief-Desire-Intention (BDI) architecture lacks a framework that would facilitate an adaptive risk management system using the situational awareness of the agents. We extend the BDI architecture with the concept of adaptive alertness. Agents can modify their level of alertness by monitoring the risks faced by themselves and by their peers. Alert-BDI enables the agents to detect and assess the risks they face in an efficient manner, thereby increasing operational efficiency and resistance against attacks.
    Comment: 14 pages, 3 figures. Submitted to ICACCI 2013, Mysore, India.

    Run-time risk management in adaptive ICT systems

    We present results of the SERSCIS project related to risk management and mitigation strategies in adaptive multi-stakeholder ICT systems. The SERSCIS approach involves using semantic threat models to support automated design-time threat identification and mitigation analysis. The focus of this paper is the use of these models at run-time for automated threat detection and diagnosis. This is based on a combination of semantic reasoning and Bayesian inference applied to run-time system monitoring data. The resulting dynamic risk management approach is compared to a conventional ISO 27000-type approach, and validation test results are presented from an Airport Collaborative Decision Making (A-CDM) scenario involving data exchange between multiple airport service providers.
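    The Bayesian-inference step mentioned above can be illustrated with a minimal single-observation update (a generic sketch, not the SERSCIS models; the prior and likelihoods are invented numbers): given an anomalous monitoring observation, Bayes' rule combines the base rate of attack with the observation's hit and false-alarm rates.

```python
def posterior_threat(prior, p_obs_given_threat, p_obs_given_benign):
    """Single-observation Bayesian update: probability that a threat is
    present given one anomalous monitoring observation."""
    # Total probability of seeing the observation at all.
    p_obs = prior * p_obs_given_threat + (1 - prior) * p_obs_given_benign
    # Bayes' rule.
    return prior * p_obs_given_threat / p_obs

# A rare threat (1% prior) with a strong indicator
# (90% hit rate, 5% false-alarm rate) still yields a modest posterior,
# which is why run-time diagnosis chains evidence from multiple monitors.
posterior_threat(0.01, 0.90, 0.05)
```

    Chaining several such updates over successive monitoring observations is the usual way a run-time diagnosis engine accumulates confidence before raising an alarm.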