Large emergency-response exercises: qualitative characteristics - a survey
Exercises, drills, or simulations are widely used by governments, agencies and commercial organizations to simulate serious incidents and train staff in how to respond to them. International cooperation has led to increasingly large-scale exercises, often involving hundreds or even thousands of participants in many locations. The difference between "large" and "small" exercises is more than one of size: (a) large exercises are more "experiential" and more likely to undermine any model of reality that single organizations may create; (b) they create a "play space" in which organizations and individuals act out their own needs and identifications, and a ritual with strong social implications; (c) group-analytic psychotherapy suggests that the emotions aroused in a large group may be stronger and more difficult to control, and feelings are an unacknowledged major factor in the success or failure of exercises; (d) successful large exercises help improve the nature of trust between individuals and the organizations they represent, changing it from a situational trust to a personal trust; (e) it is more difficult to learn from large exercises or to apply the lessons identified; (f) however, large exercises can help develop organizations and individuals. Exercises (and simulation in general) need to be approached from a broader multidisciplinary direction if their full potential is to be realized.
Practitioner Track Proceedings of the 6th International Learning Analytics & Knowledge Conference (LAK16)
Practitioners spearhead a significant portion of learning analytics, relying on implementation and experimentation rather than on traditional academic research. Both approaches help to improve the state of the art. The LAK conference has created a practitioner track for submissions, which first ran in 2015 as an alternative to the researcher track.
The primary goal of the practitioner track is to share thoughts and findings that stem from learning analytics project implementations. While both large and small implementations are considered, all practitioner track submissions are required to relate to initiatives that are designed for large-scale and/or long-term use (as opposed to research-focused initiatives). Other guidelines include:
• Implementation track record: The project should have been used by an institution or have been deployed on a learning site. There are no hard guidelines about user numbers or how long the project has been running.
• Learning/education related: Submissions have to describe work that addresses learning/academic analytics, either at an educational institution or in an area (such as corporate training, health care or informal learning) where the goal is to improve the learning environment or learning outcomes.
• Institutional involvement: Neither submissions nor presentations have to include a named person from an academic institution. However, all submissions have to include information collected from people who have used the tool or initiative in a learning environment (such as faculty, students, administrators and trainees).
• No sales pitches: While submissions from commercial suppliers are welcome, reviewers do not accept overt (or covert) sales pitches. Reviewers look for evidence that a presentation will take into account challenges faced, problems that have arisen, and/or user feedback that needs to be addressed.
Submissions are limited to 1,200 words, including an abstract, a summary of deployment with end users, and a full description. Most papers in the proceedings are therefore short, and often informal, although some authors chose to extend their papers once they had been accepted.
Papers accepted in 2016 fell into two categories.
• Practitioner Presentations: Presentation sessions are designed to focus on the deployment of a single learning analytics tool or initiative.
• Technology Showcase: The Technology Showcase event enables practitioners to demonstrate new and emerging learning analytics technologies that they are piloting or deploying.
Both types of paper are included in these proceedings.
Games for a new climate: experiencing the complexity of future risks
This repository item contains a single issue of the Pardee Center Task Force Reports, a publication series begun in 2009 by the Boston University Frederick S. Pardee Center for the Study of the Longer-Range Future. This report is a product of the Pardee Center Task Force on Games for a New Climate, which met at Pardee House at Boston University in March 2012. The 12-member Task Force was convened on behalf of the Pardee Center by Visiting Research Fellow Pablo Suarez in collaboration with the Red Cross/Red Crescent Climate Centre to "explore the potential of participatory, game-based processes for accelerating learning, fostering dialogue, and promoting action through real-world decisions affecting the longer-range future, with an emphasis on humanitarian and development work, particularly involving climate risk management."
Compiled and edited by Janot Mendler de Suarez, Pablo Suarez and Carina Bachofen, the report includes contributions from all of the Task Force members and provides a detailed exploration of the current and potential ways in which games can be used to help a variety of stakeholders, including subsistence farmers, humanitarian workers, scientists, policymakers, and donors, to both understand and experience the difficulty and risks involved in decision-making in a complex and uncertain future. The dozen Task Force experts who contributed to the report represent academic institutions, humanitarian organizations, other non-governmental organizations, and game design firms, with backgrounds ranging from climate modeling and anthropology to community-level disaster management, national and global policymaking, and game design.
Interwoven Leadership: the Missing Link in Multi-Agency Major Incident Response
This paper reports on research into the effectiveness of strategic commanders and their multi-agency teams in response to major incidents. It is argued that current models of crisis leadership fail to establish a balance between the requirements for task skills, interpersonal skills, stakeholder awareness and the personal qualities of commanders and their teams. The paper sets out a theoretical model for interwoven leadership combining these features.
Improving disaster response evaluations : Supporting advances in disaster risk management through the enhancement of response evaluation usefulness
Future disasters or crises are difficult to predict and therefore hard to prepare for. However, while a specific event might not have happened, it can be simulated in an exercise. The evaluation of performance during such an exercise can provide important information regarding the current state of preparedness, and can be used to improve the response to future events. For this to happen, evaluation products must be perceived as useful by the end user. Unfortunately, it appears that this is not the case: both evaluations and their products are rarely used to their full extent or, in extreme cases, are regarded as paper-pushing exercises.
The first part of this research characterises current evaluation practice, both in the scientific literature and in Dutch practice, based on a scoping study, document and content analyses, and expert judgements. The findings highlight that, despite a recent increase in research attention, few studies focus on disaster management exercise evaluation. It is unclear whether current evaluations achieve their purpose, or how they contribute to disaster preparedness. Both theory and practice tend to view and present evaluations in isolation. This limited focus creates a fragmented field that lacks coherence and depth. Furthermore, most evaluation documentation fails to justify or discuss the rationale underlying the selected methods and their link to the overall purpose or context of the exercise. The process of collecting and analysing contextual, evidence-based data, and using it to reach conclusions and make recommendations, lacks methodological transparency and rigour. Consequently, professionals lack reliable guidance when designing evaluations.
Therefore, the second part of this research aimed to gain insight into what makes evaluations useful, and to suggest improvements. In particular, it highlights the values associated with the methodology used to record and present evaluation outcomes to end users.
The notion of an "evaluation description" is introduced to support the identification of four components that are assumed to influence the usefulness of an evaluation: its purpose, object description, analysis and conclusion. Survey experiments identified that how these elements, notably the analysis and/or conclusions, are documented significantly influences the usefulness of the product. Furthermore, different components are more useful depending on the purpose of the report (for learning or accountability). Crisis management professionals expect the analysis to go beyond the object of the evaluation and focus on the broader context. They expect a rigorous evaluation to provide them with evidence-based judgements that deliver actionable conclusions and support future learning.
Overall, this research shows that the design and execution of evaluations should provide systematic, rigorous, evidence-based and actionable outcomes. It suggests some ways to manage both the process and the products of an evaluation to improve its usefulness. Finally, it underlines that it is not the evaluation itself that leads to improvement, but its use. Evaluation should, therefore, be seen as a means to an end.
Information practices of disaster preparedness professionals in multidisciplinary groups
OBJECTIVE: This article summarizes the results of a descriptive qualitative study addressing the question: what are the information practices of the various professionals involved in disaster preparedness? We present key results, but focus on issues of choice and adaptation of models and theories for the study. METHODS: Primary and secondary literature on theories and models of information behavior was consulted. Taylor's Information Use Environments (IUE) model, Institutional Theory, and Dervin's Sense-Making metatheory were used in the design of an open-ended interview schedule. Twelve individual face-to-face interviews were conducted with disaster professionals drawn from the Pennsylvania Preparedness Leadership Institute (PPLI) scholars. Taylor's IUE model served as a preliminary coding framework for the transcribed interviews. RESULTS: Disaster professionals varied in their use of libraries, peer-reviewed literature, and information management techniques, but many practices were similar across professions, including heavy Internet and email use, satisficing, and a preference for sources that are socially and physically accessible. CONCLUSIONS: The IUE model provided an excellent foundation for the coding scheme, but required modification to place the workplace in the larger social context of the current information society. It is not possible to confidently attribute all work-related information practices to professional culture. The differences in information practice observed may arise from professional training and organizational environment, while many of the similarities observed seem to arise from everyday information practices common to non-work settings.
Appraising critical infrastructure systems with visualisation
This paper explores the use of system modelling as an approach for appraising critical infrastructure systems. It reports on focus group findings relating to the system modelling aspects of a critical infrastructure security analysis and modelling framework. Specifically, the discussion focuses on the focus group's interpretations of the likely benefits, or otherwise, of system visualisation, with the group concentrating on its perceived value as an educational tool for providing an abstract visual representation of a critical infrastructure system incident.
Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)
The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was to train students attending Architecture and Engineering courses in geospatial data acquisition and processing, in order to start up a team of "volunteer mappers". Indeed, the project aims to document environmental and built heritage subject to disaster; the purpose is to improve the capabilities of the actors involved in the activities connected with geospatial data collection, integration and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, on the World Heritage List since 1997, which was affected by a flood on 25 October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatic methods and techniques such as terrestrial and aerial Lidar, close-range and aerial photogrammetry, and topographic and GNSS instruments, or by non-conventional systems and instruments such as UAVs and mobile mapping. The ultimate goal is to implement a WebGIS platform to share all the data collected with local authorities and the Civil Protection.