
    Architecture of Environmental Risk Modelling: for a faster and more robust response to natural disasters

    Demands on the disaster response capacity of the European Union are likely to increase, as the impacts of disasters continue to grow both in size and frequency. This has resulted in intensive research on issues concerning spatially-explicit information and modelling and their multiple sources of uncertainty. Geospatial support is one of the forms of assistance frequently required by emergency response centres, along with hazard forecasting and event management assessment. Robust modelling of natural hazards requires dynamic simulations under an array of multiple inputs from different sources. Uncertainty is associated with the meteorological forecast and the calibration of model parameters. Software uncertainty also derives from the data transformation models (D-TM) needed for predicting hazard behaviour and its consequences. On the other hand, social contributions have recently been recognized as valuable in raw-data collection and mapping efforts traditionally dominated by professional organizations. Here an architecture overview is proposed for adaptive and robust modelling of natural hazards, following the Semantic Array Programming paradigm to also include the distributed array of social contributors, called Citizen Sensor, in a semantically-enhanced strategy for D-TM modelling. The modelling architecture proposes a multicriteria approach for assessing the array of potential impacts, with qualitative rapid assessment methods based on a Partial Open Loop Feedback Control (POLFC) schema complementing more traditional and accurate a-posteriori assessment. We discuss the computational aspects of environmental risk modelling using array-based parallel paradigms on High Performance Computing (HPC) platforms, so that the urgency of the response can be taken into account by the system (Urgent-HPC). Comment: 12 pages, 1 figure, 1 text box; presented at the 3rd Conference of Computational Interdisciplinary Sciences (CCIS 2014), Asuncion, Paraguay
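The POLFC idea in this abstract, re-weighting an array of uncertain model parameterizations against incoming observations and rapidly re-assessing expected impact, can be illustrated in miniature. The following is a hypothetical sketch written for this listing; the function names, the toy hazard model, and the likelihood-style weighting are assumptions, not the paper's actual architecture.

```python
def simulate_hazard(params, forcing):
    """Toy hazard model returning a scalar impact estimate (hypothetical)."""
    return params["severity"] * forcing

def polfc_step(param_ensemble, forcing, observed_impact):
    """One heavily simplified Partial Open Loop Feedback Control step:
    re-weight the parameter ensemble against the latest observation,
    then compute a rapid expected-impact score over the re-weighted array."""
    # Likelihood-like weights: parameterizations whose predictions fall
    # closer to the observed impact count for more.
    raw = [1.0 / (1.0 + abs(simulate_hazard(p, forcing) - observed_impact))
           for p in param_ensemble]
    total = sum(raw)
    weights = [w / total for w in raw]
    # Rapid qualitative assessment: weighted expected impact.
    expected_impact = sum(w * simulate_hazard(p, forcing)
                          for w, p in zip(weights, param_ensemble))
    return weights, expected_impact
```

Each new observation would trigger another such step, so the open-loop ensemble forecast is progressively corrected by feedback, which is the essence of the POLFC schema.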

    The development of a rich multimedia training environment for crisis management: using emotional affect to enhance learning

    PANDORA is an EU FP7-funded project developing a novel training and learning environment for Gold Commanders, individuals who carry executive responsibility for the services and facilities identified as strategically critical (e.g. Police, Fire) in crisis management strategic planning situations. A key part of the work for this project is considering the emotional and behavioural state of the trainees, and creating more realistic, and thereby stressful, representations of multimedia information to impact the decision-making of those trainees. Existing training models are predominantly paper-based, table-top exercises, which require an exercise of imagination on the part of the trainees to consider not only the various aspects of a crisis situation but also the impacts of interventions, and remediating actions in the event of the failure of an intervention. However, existing computing models and tools are focused on supporting tactical and operational activities in crisis management, not strategic ones. Therefore, the PANDORA system will provide a rich multimedia information environment, giving trainees the detailed information they require to develop strategic plans to deal with a crisis scenario; it will then provide information on the impacts of the implementation of those plans, and give trainees the opportunity to revise and remediate them. Since this activity is invariably multi-agency, the training environment must support group-based strategic planning activities, and trainees will occupy specific roles within the crisis scenario. The system will also provide a range of non-playing characters (NPCs) representing domain experts, high-level controllers (e.g. politicians, ministers), low-level controllers (tactical and operational commanders), and missing trainee roles, to ensure a fully populated scenario can be realised in each instantiation.
Within the environment, the emotional and behavioural state of the trainees will be monitored, and interventions, in the form of environmental information controls and mechanisms impacting the stress levels and decision-making capabilities of the trainees, will be used to personalise the training environment. This approach enables a richer and more realistic representation of the crisis scenario to be enacted, leading to better strategic plans and providing trainees with structured feedback on their performance under stress.

    CBR and MBR techniques: review for an application in the emergencies domain

    The purpose of this document is to provide an in-depth analysis of current reasoning engine practice and of the integration strategies for Case-Based Reasoning (CBR) and Model-Based Reasoning (MBR) that will be used in the design and development of the RIMSAT system. RIMSAT (Remote Intelligent Management Support and Training) is a European Commission funded project designed to: (a) provide an innovative, 'intelligent', knowledge-based solution aimed at improving the quality of critical decisions, and (b) enhance the competencies and responsiveness of individuals and organisations involved in highly complex, safety-critical incidents, irrespective of their location. In other words, RIMSAT aims to design and implement a decision support system that applies Case-Based Reasoning as well as Model-Based Reasoning technology to the management of emergency situations. This document is part of a deliverable for the RIMSAT project, and although it was written in close contact with the requirements of the project, it provides an overview wide enough to serve as a state of the art in integration strategies between CBR and MBR technologies. Postprint (published version)
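The core of any CBR engine is the "retrieve" phase: finding the stored case most similar to the current situation and reusing its solution. As a concrete illustration only (the case structure, feature names, and similarity measure below are assumptions for this sketch, not part of RIMSAT), nearest-neighbour retrieval over a toy emergency case base might look like this:

```python
def retrieve(case_base, query, k=1):
    """Return the k cases most similar to the query situation.
    Features are assumed to be normalised to [0, 1]; similarity is the
    mean per-feature closeness over the features shared with the query."""
    def similarity(case):
        shared = set(case["features"]) & set(query)
        if not shared:
            return 0.0
        return sum(1.0 - abs(case["features"][f] - query[f])
                   for f in shared) / len(shared)
    # Sort the whole case base by similarity, best first.
    return sorted(case_base, key=similarity, reverse=True)[:k]

# Example: two past incidents and a new windy, dry situation.
case_base = [
    {"features": {"wind": 0.9, "rain": 0.1}, "solution": "deploy fire crews"},
    {"features": {"wind": 0.2, "rain": 0.8}, "solution": "issue flood warning"},
]
best = retrieve(case_base, {"wind": 0.8, "rain": 0.2}, k=1)
```

An MBR component would complement this by reasoning from a first-principles model of the domain when no sufficiently similar case exists, which is precisely the integration question the document surveys.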

    Principles and Concepts of Agent-Based Modelling for Developing Geospatial Simulations

    The aim of this paper is to outline fundamental concepts and principles of the Agent-Based Modelling (ABM) paradigm, with particular reference to the development of geospatial simulations. The paper begins with a brief definition of modelling, followed by a classification of model types, and a comment regarding a shift (in certain circumstances) towards modelling systems at the individual-level. In particular, automata approaches (e.g. Cellular Automata, CA, and ABM) have been particularly popular, with ABM moving to the fore. A definition of agents and agent-based models is given; identifying their advantages and disadvantages, especially in relation to geospatial modelling. The potential use of agent-based models is discussed, and how-to instructions for developing an agent-based model are provided. Types of simulation / modelling systems available for ABM are defined, supplemented with criteria to consider before choosing a particular system for a modelling endeavour. Information pertaining to a selection of simulation / modelling systems (Swarm, MASON, Repast, StarLogo, NetLogo, OBEUS, AgentSheets and AnyLogic) is provided, categorised by their licensing policy (open source, shareware / freeware and proprietary systems). The evaluation (i.e. verification, calibration, validation and analysis) of agent-based models and their output is examined, and noteworthy applications are discussed. Geographical Information Systems (GIS) are a particularly useful medium for representing model input and output of a geospatial nature. However, GIS are not well suited to dynamic modelling (e.g. ABM). In particular, problems of representing time and change within GIS are highlighted. Consequently, this paper explores the opportunity of linking (through coupling or integration / embedding) a GIS with a simulation / modelling system purposely built, and therefore better suited to supporting the requirements of ABM. 
This paper concludes with a synthesis of the preceding discussion.
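The individual-level modelling style this abstract describes reduces to a few essential ingredients: agents with local state and behaviour, an environment, and a scheduler that steps each agent through discrete time. The following toy model is written from scratch purely for illustration; it does not use Swarm, Repast, or any of the systems named above, and all names in it are invented.

```python
import random

class Agent:
    """A minimal mobile agent on a bounded toy grid."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def step(self, width, height, rng):
        # Random-walk behaviour: move to one of the four neighbouring
        # cells, clamped to the grid boundary.
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        self.x = max(0, min(width - 1, self.x + dx))
        self.y = max(0, min(height - 1, self.y + dy))

def run_model(n_agents=10, width=20, height=20, steps=50, seed=42):
    """Simplest possible scheduler: every agent acts once per tick."""
    rng = random.Random(seed)  # seeded for reproducible runs
    agents = [Agent(rng.randrange(width), rng.randrange(height))
              for _ in range(n_agents)]
    for _ in range(steps):
        for agent in agents:
            agent.step(width, height, rng)
    return agents
```

A geospatial version would replace the abstract grid with GIS-derived layers, which is exactly the coupling/integration question the paper goes on to examine.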

    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was training in geospatial data acquisition and processing for students attending Architecture and Engineering courses, in order to start up a team of "volunteer mappers". The project aims to document the environmental and built heritage subject to disaster; the purpose is to improve the capabilities of the actors involved in the activities connected with geospatial data collection, integration and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, registered in the World Heritage List since 1997. The area was affected by a flood on the 25th of October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatic methods and techniques such as terrestrial and aerial Lidar, close-range and aerial photogrammetry, topographic and GNSS instruments, etc., or by non-conventional systems and instruments such as UAVs, mobile mapping, etc. The ultimate goal is to implement a WebGIS platform to share all the data collected with local authorities and the Civil Protection.

    Adaptive Process Management in Cyber-Physical Domains

    The increasing application of process-oriented approaches in new challenging cyber-physical domains beyond business computing (e.g., personalized healthcare, emergency management, factories of the future, home automation, etc.) has led to a reconsideration of the level of flexibility and support required to manage complex processes in such domains. A cyber-physical domain is characterized by the presence of a cyber-physical system coordinating heterogeneous ICT components (PCs, smartphones, sensors, actuators) and involving real-world entities (humans, machines, agents, robots, etc.) that perform complex tasks in the “physical” real world to achieve a common goal. The physical world, however, is not entirely predictable, and processes enacted in cyber-physical domains must be robust to unexpected conditions and adaptable to unanticipated exceptions. This demands a more flexible approach to process design and enactment, recognizing that in real-world environments it is not adequate to assume that all possible recovery activities can be predefined for dealing with the exceptions that can ensue. In this chapter, we tackle this issue and propose a general approach, a concrete framework and a process management system implementation, called SmartPM, for automatically adapting processes enacted in cyber-physical domains in case of unanticipated exceptions and exogenous events. The adaptation mechanism provided by SmartPM is based on declarative task specifications, execution monitoring for detecting failures and context changes at run-time, and automated planning techniques to self-repair the running process, without requiring any specific adaptation policy or exception handler to be predefined at design-time.
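The monitor-detect-repair cycle described here can be sketched in miniature. In the following hypothetical sketch, the dictionary-based task representation and the greedy one-effect-per-goal planner are deliberately simplistic stand-ins, not SmartPM's actual declarative specifications or automated planner; the point is only the control loop: execute a step, sense the real state, and re-plan when it deviates from the expected state.

```python
def execute(task, state):
    """Apply a task's declared effects to a copy of the state (toy model)."""
    new_state = dict(state)
    new_state.update(task["effects"])
    return new_state

def plan(current, goal, available_tasks):
    """Naive planner: for each unmet goal condition, greedily pick the
    first task whose effects satisfy it."""
    steps, state = [], dict(current)
    for key, value in goal.items():
        if state.get(key) != value:
            for task in available_tasks:
                if task["effects"].get(key) == value:
                    steps.append(task)
                    state = execute(task, state)
                    break
    return steps

def enact(process, state, goal, available_tasks, sense):
    """Run a process with execution monitoring: after each task, observe
    the real state; on a deviation, discard the remaining process and
    self-repair by re-planning from the sensed state toward the goal."""
    queue = list(process)
    while queue:
        task = queue.pop(0)
        expected = execute(task, state)
        state = sense(expected)           # monitoring step
        if state != expected:             # exception / exogenous event
            queue = plan(state, goal, available_tasks)
    return state
```

Note that no recovery behaviour is predefined anywhere: the repair plan is synthesised at run-time from the sensed state, which mirrors the design choice the abstract emphasises.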

    Utilization of big data to improve management of the emergency departments. Results of a systematic review

    Background. The emphasis on using big data is growing exponentially in several sectors including biomedicine, life sciences and scientific research, mainly due to advances in information technologies and data analysis techniques. Today, medical sciences can rely on a large amount of biomedical information, and big data can aggregate information across multiple scales, from the DNA to the ecosystem. Given these premises, we wondered whether big data could be useful for analyzing complex systems such as Emergency Departments (EDs) to improve their management and, eventually, patient outcomes. Methods. We performed a systematic review of the literature to identify the studies that implemented the application of big data in EDs, and to describe what has already been done and what the expectations, issues and challenges in this field are. Results. Globally, eight studies met our inclusion criteria, concerning three main activities: the management of ED visits, ED processes and activities and, finally, the prediction of the outcomes of ED patients. Although the results of these studies show good perspectives for the use of big data in the management of emergency departments, some issues still make their use difficult. Most of the predictive models and algorithms have been applied only in retrospective studies, without considering the challenge and the costs of real-time use of big data. Only a few studies highlight the possible usefulness of the large volume of clinical data stored in electronic health records to generate evidence in real time. Conclusion. The proper use of big data in this field still requires better management of the information flow to allow real-time applications.

    Scientific knowledge and scientific uncertainty in bushfire and flood risk mitigation: literature review

    EXECUTIVE SUMMARY The Scientific Diversity, Scientific Uncertainty and Risk Mitigation Policy and Planning (RMPP) project aims to investigate the diversity and uncertainty of bushfire and flood science, and its contribution to risk mitigation policy and planning. The project investigates how policy makers, practitioners, courts, inquiries and the community differentiate, understand and use scientific knowledge in relation to bushfire and flood risk. It uses qualitative social science methods and case studies to analyse how diverse types of knowledge are ordered and judged as salient, credible and authoritative, and the pragmatic meaning this holds for emergency management across the PPRR spectrum. This research report is the second literature review of the RMPP project and was written before any of the case studies had been completed. It synthesises approximately 250 academic sources on bushfire and flood risk science, including research on hazard modelling, prescribed burning, hydrological engineering, development planning, meteorology, climatology and evacuation planning. The report also incorporates theoretical insights from the fields of risk studies and science and technology studies (STS), as well as indicative research regarding the public understandings of science, risk communication and deliberative planning. This report outlines the key scientific practices (methods and knowledge) and scientific uncertainties in bushfire and flood risk mitigation in Australia. Scientific uncertainties are those ‘known unknowns’ and ‘unknown unknowns’ that emerge from the development and utilisation of scientific knowledge. Risk mitigation involves those processes through which agencies attempt to limit the vulnerability of assets and values to a given hazard. 
The focus of this report is the uncertainties encountered and managed by risk mitigation professionals with regard to these two hazards, though literature regarding the natural sciences and the scientific method more generally is also included where appropriate. It is important to note that while this report excludes professional experience and local knowledge from its consideration of uncertainties and knowledge, these are also very important aspects of risk mitigation which will be addressed in the RMPP project’s case studies. Key findings of this report include: Risk and scientific knowledge are both constructed categories, indicating that any individual instance of risk or scientific knowledge should be understood in light of the social, political, economic, and ecological context in which it emerges. Uncertainty is a necessary element of scientific methods, and as such risk mitigation practitioners and researchers alike should seek to ‘embrace uncertainty’ (Moore et al., 2005) as part of navigating bushfire and flood risk mitigation.