Research Directions in Information Systems for Humanitarian Logistics
This article systematically reviews the literature on the use of information technology (IT) in humanitarian logistics, focusing on disaster relief operations. We first discuss problems in humanitarian relief logistics. We then identify the stage and disaster type for each article, as well as the article's research methodology and research contribution. Finally, we identify potential future research directions.
Insurability Challenges Under Uncertainty: An Attempt to Use the Artificial Neural Network for the Prediction of Losses from Natural Disasters
The main difficulty for natural disaster insurance derives from the uncertainty of an event's damages. Insurers cannot precisely assess the weight of natural hazards because of risk dependences. Insurability under uncertainty first requires an accurate assessment of the entire damages. Insured and insurers both win when premiums properly reflect risk; in such cases, coverage will be available and affordable. Using an artificial neural network, a technique rooted in artificial intelligence, insurers can predict annual natural disaster losses. There are many types of artificial neural network models. In this paper we use the multilayer perceptron neural network, the type best suited to the prediction task. If we provide the natural disaster explanatory variables to the developed neural network, it can accurately estimate the potential annual losses for the studied country.
Keywords: natural disaster losses, insurability, uncertainty, multilayer perceptron neural network, prediction.
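The loss-prediction setup the abstract describes can be sketched as a regression problem. The following is a minimal illustration, not the paper's model: the explanatory variables, their number, and the network architecture here are hypothetical stand-ins (the paper does not specify them in this abstract), and the training data are synthetic.

```python
# Minimal sketch of predicting annual losses with a multilayer perceptron.
# Assumptions: three hypothetical explanatory variables and synthetic data;
# the paper's real variables, country data, and architecture are not shown here.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical explanatory variables per observed year, e.g. event frequency,
# exposed asset value, and a hazard-severity index (all normalized to [0, 1]).
X = rng.uniform(0.0, 1.0, size=(200, 3))
# Synthetic annual losses: a nonlinear function of the variables plus noise.
y = 10 * X[:, 0] * X[:, 2] + 5 * X[:, 1] + rng.normal(0, 0.1, 200)

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# One hidden layer of 16 units; purely illustrative, not the paper's choice.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_scaled, y)

# Predict potential annual losses for a new year's explanatory variables.
pred = model.predict(scaler.transform([[0.5, 0.5, 0.5]]))
print(float(pred[0]))
```

In practice the quality of such a predictor hinges on how well the explanatory variables capture risk dependences, which is precisely the insurability difficulty the abstract raises.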
Building Bridges at the Science-Stakeholder Interface: Towards Knowledge Exchange in Earth System Science
This book covers the approaches, applied methods and central participatory processes at the science-stakeholder interfaces embedded in the development of the "Earth System Knowledge Platform (ESKP)". The latter is an initiative of the German Helmholtz Association, synthesizing the expertise of the eight Helmholtz research institutions focusing on Earth System Sciences.
The contributions showcase the approach of the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI) within the ESKP initiative. The central focus is the question of which knowledge transfer processes can be employed to foster meaningful approaches based on science-stakeholder dialogues, data products, and/or modelling. The authors suggest that the tools and approaches for enhancing the vital contributions of science to addressing societal challenges warrant further investigation and development.
Tsunami-Related Data: A Review of Available Repositories Used in Scientific Literature
Various organizations and institutions store large volumes of tsunami-related data, whose availability and quality should benefit society, as they improve decision making before a tsunami occurs, during its impact, and when coping with the aftermath. However, the existing digital ecosystem surrounding tsunami research prevents us from extracting the maximum benefit from our research investments. The main objective of this study is to explore the field of data repositories providing secondary data associated with tsunami research and to analyze the current situation. We analyze the mutual interconnections of references in scientific studies published in the Web of Science database, governmental bodies, commercial organizations, and research agencies. A set of criteria was used to evaluate content and searchability. We identified 60 data repositories with records used in tsunami research. The heterogeneity of data formats, deactivated or nonfunctional web pages, the generality of data repositories, and poor dataset arrangement represent the most significant weak points. We outline the potential contribution of ontology engineering as an example of computer science methods that enable improvements in tsunami-related data management.
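The evaluation the abstract describes amounts to checking each repository against a fixed set of criteria. A minimal sketch of such a record is shown below; the criteria names and the example repository are hypothetical (the study's actual rubric is not reproduced in this abstract).

```python
# Sketch of a repository record evaluated against hypothetical criteria
# mirroring the weak points the review names: nonfunctional pages,
# heterogeneous formats, and poor searchability.
from dataclasses import dataclass, field

@dataclass
class RepositoryRecord:
    name: str
    url: str
    data_formats: list = field(default_factory=list)
    reachable: bool = True           # False for deactivated/nonfunctional pages
    keyword_searchable: bool = False

    def weak_points(self):
        """Return the weaknesses this record exhibits."""
        flags = []
        if not self.reachable:
            flags.append("nonfunctional web page")
        if len(self.data_formats) > 1:
            flags.append("heterogeneous data formats")
        if not self.keyword_searchable:
            flags.append("poor searchability")
        return flags

# Hypothetical example entry, not one of the 60 repositories identified.
repo = RepositoryRecord("Example Tsunami Archive", "https://example.org",
                        data_formats=["netCDF", "CSV"], reachable=False)
print(repo.weak_points())
```

Ontology engineering, which the study proposes, would go further than such flat records by formally relating repositories, datasets, formats, and events so that machines can reason across them.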
IOME, A Toolkit for Distributed and Collaborative Computational Science and Engineering
The internet provides a media-rich communications platform enabling communities to share content. Alongside the increased activity in collaborative work, recent developments in workflow tools now enable researchers from different disciplines to collaborate by feeding data and results between large multi-disciplinary optimization problems. Researchers developing computational models require development kits and tools that enable them to provide simulations with a range of methods facilitating collaboration. This paper presents a unique, multi-purpose toolkit enabling researchers to easily develop simulations that may be run as web services and accessed interactively. The development kit is based on a protocol that uses an XML markup called IOME ML, the "Interactive Object Management Environment Markup Language". The paper describes IOME ML and its development kit. We illustrate the capabilities of IOME with two case studies: first, a medical image processing application that is wrapped as a web service and accessed through a web browser, offering medical professionals image analysis tools; second, a method of collaborative visualisation and computational steering of a tsunami simulation based on a shallow water wave model. The paper concludes with a review of further developments, including refinements to the markup language and the development of a service factory enabling dynamic invocation of published simulations as IOME web service applications.
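The second case study steers a tsunami simulation based on a shallow water wave model. The sketch below shows the kind of computation being steered, not the IOME implementation: a one-dimensional linearized shallow water scheme on a staggered grid, with hypothetical depth, domain, and initial conditions.

```python
# Minimal 1D linearized shallow water model (not IOME's code):
#   d(eta)/dt = -H du/dx,   du/dt = -g d(eta)/dx
# solved forward-backward on a staggered grid under a CFL-limited step.
import numpy as np

g, H = 9.81, 4000.0            # gravity (m/s^2), assumed ocean depth (m)
nx, dx = 200, 1000.0           # 200 km domain, 1 km cells
c = np.sqrt(g * H)             # deep-water tsunami speed, ~198 m/s
dt = 0.5 * dx / c              # time step at Courant number 0.5

x = np.arange(nx) * dx
eta = np.exp(-((x - 100e3) / 10e3) ** 2)   # initial sea-surface hump (m)
u = np.zeros(nx + 1)                       # velocities on cell faces

for _ in range(100):
    # Update interior face velocities from the surface-elevation gradient.
    u[1:-1] -= g * dt * (eta[1:] - eta[:-1]) / dx
    # Update cell elevations from the divergence of the new velocities.
    eta -= H * dt * (u[1:] - u[:-1]) / dx

# The hump splits into two waves travelling in opposite directions,
# each with roughly half the initial amplitude.
print(float(eta.max()))
```

Computational steering, as described in the paper, would let a user adjust such parameters (depth, initial condition, time step) while the simulation runs as a web service, with visualization streamed back to the browser.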
Automated web-based analysis and visualization of spatiotemporal data
Most data are associated with a place, and many are also associated with a moment in time, a time interval, or another linked temporal component. Spatiotemporal data (i.e., data with elements of both space and time) can be used to assess movement or change over time in a particular location, an approach that is useful across many disciplines. However, spatiotemporal data structures can be quite complex, and the datasets very large. Although GIS software programs are capable of processing and analyzing spatial information, most contain no (or minimal) features for handling temporal information and have limited capability to deal with large, complex multidimensional spatiotemporal data. A related problem is how to best represent spatiotemporal data to support efficient processing, analysis, and visualization.
In the era of "big data," efficient methods for analyzing and visualizing large quantities of spatiotemporal data have become increasingly necessary. Automated processing approaches, when made scalable and generalizable, can result in much greater efficiency in spatiotemporal data analysis. The growing popularity of web services and server-side processing methods can be leveraged to create systems for processing spatiotemporal data on the server, with delivery of output products to the client. In many cases, the client can be a standard web browser, providing a common platform from which users can interact with complex server-side processing systems to produce specific output data and visualizations. The rise of complex JavaScript libraries for creating interactive client-side tools has enabled the development of rich internet applications (RIA) that provide interactive data exploration capabilities and an enhanced user experience within the web browser.
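The server-side pattern described above, processing a large spatiotemporal dataset on the server and delivering a small output product to a browser client, can be sketched as follows. The grid, its dimensions, and the function name are hypothetical stand-ins, not the systems built in these projects.

```python
# Sketch of server-side spatiotemporal extraction: the server holds a large
# time-series grid and returns only a small JSON product for one location,
# which a client-side JavaScript charting library could then render.
import json
import numpy as np

# Hypothetical dataset: hourly values on a 10x10 grid over 24 time steps.
# Real systems would hold far larger grids (e.g. climate or tsunami models).
grid = np.random.default_rng(1).normal(size=(24, 10, 10))

def timeseries_at(row, col):
    """Extract one cell's full time series and serialize it for the client."""
    series = grid[:, row, col].round(3).tolist()
    return json.dumps({"row": row, "col": col, "values": series})

payload = timeseries_at(4, 7)
print(len(json.loads(payload)["values"]))   # prints 24 (one value per time step)
```

The design point is that the heavy multidimensional array never leaves the server; the browser receives only the slice it asked for, which is what makes the common web browser a viable client for large spatiotemporal data.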
Three projects involving time-series tsunami simulation data, potential human response in a tsunami evacuation scenario, and large sets of modeled time-series climate grids were conducted to explore automated web-based analysis, processing, and visualization of spatiotemporal data. Methods were developed for efficient handling of spatiotemporal data on the server side, as well as for interactive animation and visualization tools on the client side. The common web browser, particularly when combined with specialized server-side code and client-side RIA libraries, was found to be an effective platform for analysis and visualization tools that interact quickly with complex spatiotemporal data. Although specialized methods were developed for each project, in most cases those methods can be generalized to other disciplines or computational domains where similar problem sets exist.