11 research outputs found

    Designing an Architectural Model of Crisis Management Information System for Natural Disasters in Iran

    Introduction: The crisis management information system (CMIS) is a mission-critical system that enables the crisis management team to understand, diagnose, interpret, analyze, structure, and make decisions faster by providing timely, high-quality information. The purpose of this research is to provide an architectural model of a CMIS for managing natural disasters in the process of finding and relieving trapped victims. Materials and Methods: This applied study was conducted in 2020 in two stages. First, data on the CMISs used in selected countries were collected from electronic resources and digital libraries and then analyzed. Next, a preliminary model of the CMIS architecture, covering the three aspects of informational content, applications, and technological requirements, was prepared from these systems and expert interviews. Finally, the architectural model of the CMIS was validated by the Delphi technique and a focus group. Results: Three rounds of the Delphi test were conducted with national-level experts for the three aspects of informational content, applications, and technological requirements, and a consensus rate above 75% was obtained for 7 modules and 28 proposed components of the CMIS. Conclusion: The architecture of an information system has a direct impact on its performance. Using an appropriate architecture for the CMIS can be an effective step towards reducing the costs and consequences of crises in Iran and in countries with similar conditions, and can have a significant impact on saving human lives in emergency situations.
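
    As a rough illustration of the Delphi consensus rule reported above, the short Python sketch below retains a proposed component only when more than 75% of the expert panel rates it as essential. The module names and ratings are hypothetical placeholders, not the study's data, and the simple agree/disagree vote is an assumption about how the consensus rate was computed.

    # Delphi consensus check: a component is retained when more than 75%
    # of the panel votes mark it as essential. Ratings are illustrative only.
    ratings = {
        "victim_location_module": [1, 1, 1, 0, 1, 1, 1, 1],   # 1 = essential, 0 = not
        "relief_logistics_module": [1, 0, 1, 1, 0, 1, 0, 1],
    }

    THRESHOLD = 0.75
    for component, votes in ratings.items():
        consensus = sum(votes) / len(votes)
        verdict = "retained" if consensus > THRESHOLD else "revisit next round"
        print(f"{component}: {consensus:.0%} consensus -> {verdict}")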

    SEMANTIC LINKING SPATIAL RDF DATA TO THE WEB DATA SOURCES

    Large amounts of spatial data are held in relational databases, and they must be converted to RDF for Semantic Web applications. Spatial data is a key input for creating spatial RDF data. Linked Data is the preferred way to publish and share relational data on the Web. In order to define the semantics of the data, links are provided to vocabularies (ontologies or other external web resources) that are common conceptualizations for a domain. Linking resource vocabulary data with globally published concepts of domain resources combines different data sources and datasets, makes data more understandable, discoverable, and usable, improves data interoperability and integration, provides automatic reasoning, and prevents data duplication. The need to convert relational data to RDF arises from the semantic expressiveness of Semantic Web technologies. One of the key factors of the Semantic Web is ontologies. An ontology is an “explicit specification of a conceptualization”. The semantics of spatial data relies on ontologies. Linking spatial data from relational databases to web data sources is not an easy task when sharing machine-readable interlinked data on the Web. Tim Berners-Lee, the inventor of the World Wide Web and an advocate of the Semantic Web and Linked Data, laid down the Linked Data design principles. Based on these principles, first, spatial data in relational databases must be converted to RDF with the help of supporting tools. Second, spatial RDF data must be linked to upper-level and domain ontologies and related web data sources. Third, external data sources (ontologies and web data sources) must be determined, and spatial RDF data must be linked to those data sources. Finally, spatial linked data must be published on the web. The main contribution of this study is to determine the requirements for finding RDF links and to identify the deficiencies in creating and publishing linked spatial data. To achieve this objective, this study surveys existing approaches, conversion tools, and web data sources for converting relational data to spatial RDF. In this paper, we have investigated the current state of spatial RDF data, standards, open-source platforms (particularly D2RQ, Geometry2RDF, TripleGeo, GeoTriples, Ontop, etc.), and web data sources. Moreover, the process of converting spatial data to RDF and linking it to web data sources is described. Linking spatial RDF data to web data sources is demonstrated with an example use case: road data was linked to one of the most popular related web data sources, DBpedia. SILK, a tool for discovering relationships between data items within different Linked Data sources, was used as the link discovery framework, and other link discovery tools, e.g. LIMES, were also evaluated and their results compared on the matching/linking task. As a result, the linked road data is shared and represented as an information resource on the web, enriched with definitions from related resources. In this way, road datasets are also linked by the related classes, individuals, spatial relations, and properties they cover, such as construction date, road length, and coordinates.
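
    The Python sketch below illustrates the conversion-and-linking steps described above using rdflib: a road record becomes a GeoSPARQL feature with a WKT geometry, and an owl:sameAs triple connects it to DBpedia, the kind of link a discovery tool such as SILK or LIMES would produce. The road URI, geometry, and DBpedia target are illustrative assumptions, not the paper's actual data.

    # Minimal sketch: relational road row -> spatial RDF -> link to DBpedia.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import OWL, RDF, RDFS

    GEO = Namespace("http://www.opengis.net/ont/geosparql#")
    EX = Namespace("http://example.org/roads/")          # hypothetical local namespace

    g = Graph()
    g.bind("geo", GEO)
    g.bind("owl", OWL)

    road = EX["D100"]                                    # hypothetical road resource
    g.add((road, RDF.type, GEO.Feature))
    g.add((road, RDFS.label, Literal("D100 highway")))

    geom = EX["D100/geometry"]
    g.add((road, GEO.hasGeometry, geom))
    g.add((geom, GEO.asWKT,
           Literal("LINESTRING(28.98 41.01, 29.05 41.02)", datatype=GEO.wktLiteral)))

    # Interlink the local resource with a matching external resource.
    g.add((road, OWL.sameAs, URIRef("http://dbpedia.org/resource/D.100_road")))

    print(g.serialize(format="turtle"))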

    Understanding the Use of Heterogenous Data in Tackling Urban Flooding: An Integrative Literature Review

    Data-driven approaches to urban flood management require a comprehensive understanding of how heterogeneous data are leveraged in tackling this problem. In this paper, we conduct an integrative review of related studies, structured around two angles: tasks and data. From the 69 articles selected on this topic, the diverse tasks involved in tackling urban flooding are identified and grouped into eight categories, and the heterogeneous data are summarized by content type and source into eight categories. The links between tasks and data are identified by synthesizing which data are used to support which tasks in the studies. The task–data links form a many-to-many relationship, in the sense that one data category supports multiple tasks and one task uses data from multiple categories. Future research opportunities are also discussed based on our observations. This paper serves as a signpost for researchers who wish to gain an overview of heterogeneous data and their use in this field, and lays a foundation for studies that aim to develop a data-driven approach to tackling urban flooding.
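
    A small sketch of the many-to-many task–data relationship described above: a task-to-data mapping is inverted to show that one data category supports several tasks. The task and category names are illustrative placeholders, not the paper's taxonomy.

    # Task -> data-category links (illustrative names only).
    task_to_data = {
        "flood_forecasting": {"sensor_data", "weather_data"},
        "damage_assessment": {"remote_sensing", "social_media"},
        "evacuation_planning": {"sensor_data", "remote_sensing"},
    }

    # Invert the mapping: one data category supports multiple tasks.
    data_to_tasks: dict[str, set[str]] = {}
    for task, categories in task_to_data.items():
        for category in categories:
            data_to_tasks.setdefault(category, set()).add(task)

    print(data_to_tasks["sensor_data"])
    # {'flood_forecasting', 'evacuation_planning'} (set order may vary)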

    The Acceptance of Using Information Technology for Disaster Risk Management: A Systematic Review

    Natural disaster events continue to affect humans and the world economy. To cope with disasters, several sectors develop frameworks, systems, technologies, and so on. However, little research focuses on the usage behavior of Information Technology (IT) for disaster risk management (DRM). Therefore, this study investigates the factors affecting the intention to use IT for mitigating disaster impacts. This study conducted a systematic review of academic research published during 2011–2018. Two important factors from the Technology Acceptance Model (TAM), along with others, are used to describe individual behavior. In order to investigate the potential factors, the technology platforms are divided into nine types. According to the findings, computer software such as GIS applications is frequently used for simulation and spatial data analysis. Social media is among the first choices during disaster events for communicating about situations and damage. Finally, we found five major potential factors: Perceived Usefulness (PU), Perceived Ease of Use (PEOU), information accessibility, social influence, and disaster knowledge. Among them, the most essential factor in using IT for disaster management is PU, while PEOU and information accessibility are more important on web platforms.

    Automating Global Geospatial Data Set Analysis: Visualizing flood disasters in the cities of the Global South

    Flooding is the most devastating natural hazard, affecting tens of millions of people yearly and causing billions of US dollars in damage globally. The people most affected by flooding are those with a high level of everyday vulnerability and limited resources for flood protection and recovery. Geospatial data from the Global South are severely lacking, and geospatial proficiency needs to be improved at the local level so that geospatial data and analysis can be efficiently utilized in disaster risk reduction schemes and urban planning in the Global South. This thesis focuses on the use of automated global geospatial dataset analysis in disaster risk reduction in the Global South, using the Python programming language to produce an automated flood analysis and visualization model. In this study, the automated model was developed and tested in two highly relevant cases: the city of Bangkok, Thailand, and the urban area of Tula de Allende, Mexico. The results of the thesis show that, with minimal user interaction, the automated flood model ingests flood extent and depth data produced by ICEYE, a global population estimation raster produced by the German Aerospace Center (DLR), and OpenStreetMap (OSM) data, performs multiple relevant analyses of these data, and produces an interactive map highlighting the severity and effects of a flooding event. The automated flood model performs consistently and accurately while producing key statistics and standardized visualizations of flooding events, offering first responders a fast first estimate of the scale of a flooding event and helping to plan an appropriate response anywhere in the world. Global geospatial datasets are often created to examine large-scale geographical phenomena; however, the results of this thesis show that they can also be used to analyze detailed local-level phenomena when paired with supporting data. The advantage of using global geospatial datasets is that, when sufficiently accurate and precise, they remove the most time-consuming part of geospatial analysis: finding suitable data. Fast reaction is of utmost importance in the first hours of a natural hazard like flooding; thus, automated analyses produced on a global scale could significantly help international humanitarian aid and first responders. Using an automated model also standardizes the results, removing human error and interpretation and enabling the accurate comparison of historical flood data over time.
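
    The Python sketch below shows the general shape of such an automated pipeline: read a flood-depth raster and a population raster, estimate the affected population, and emit an interactive map. It is a minimal sketch, not the thesis's model; the file names are placeholders, and it assumes both rasters share the same grid and CRS.

    # Minimal flood-analysis pipeline: rasters in, statistics and map out.
    import numpy as np
    import rasterio
    import folium

    with rasterio.open("flood_depth.tif") as flood_src:    # hypothetical flood raster
        depth = flood_src.read(1)
        bounds = flood_src.bounds

    with rasterio.open("population.tif") as pop_src:       # hypothetical population raster
        population = pop_src.read(1)                       # assumed to match the flood grid

    flooded = depth > 0                                    # cells with any flood water
    affected = float(population[flooded].sum())            # population in flooded cells
    mean_depth = float(depth[flooded].mean()) if flooded.any() else 0.0

    print(f"Estimated affected population: {affected:,.0f}")
    print(f"Mean depth in flooded cells: {mean_depth:.2f} m")

    # Interactive map centered on the flood raster's extent.
    center = [(bounds.top + bounds.bottom) / 2, (bounds.left + bounds.right) / 2]
    m = folium.Map(location=center, zoom_start=12)
    folium.Marker(center, popup=f"Affected population: {affected:,.0f}").add_to(m)
    m.save("flood_report.html")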

    Flood risk in urban areas: modelling, management and adaptation to climate change. A review

    The modelling and management of flood risk in urban areas are increasingly recognized as global challenges. The complexity of these issues is a consequence of the existence of several distinct sources of risk, including not only fluvial, tidal and coastal flooding, but also exposure to urban runoff and local drainage failure, and the various management strategies that can be proposed. The high degree of vulnerability that characterizes such areas is expected to increase in the future due to the effects of climate change, the growth of the population living in cities, and urban densification. An increasing awareness of the socio-economic losses and environmental impact of urban flooding is clearly reflected in the recent expansion of the number of studies related to the modelling and management of urban flooding, sometimes within the framework of adaptation to climate change. The goal of the current paper is to provide a general review of the recent advances in flood-risk modelling and management, while also exploring future perspectives in these fields of research.

    Social media and knowledge integration based emergency response performance model

    Emergency Response (ER) during floods is increasingly characterized as a complex phase in disaster management, as it involves multi-organizational settings. This scenario causes miscommunication, lack of coordination, and difficulty in making life-saving decisions, which decreases organisational performance. Accordingly, Knowledge Integration (KI) can reduce and resolve problems of coordination and communication so that decisions are made at the proper time, thereby increasing the task capabilities of Non-Governmental Organisations (NGOs) to achieve better performance. Moreover, the use of Social Media (SM) provides many advantages that may help eliminate KI's challenges and enhance its dissemination at low cost, particularly for NGOs that work in disparate places. Despite this, current research into improving task performance using KI through SM in the emergency response context is limited. Most studies are not empirical, and there is a lack of theoretical foundation for improving task performance using KI and for using SM to facilitate KI in flood disaster ER. Hence, it is important to address these issues. The main objective of this study is to identify the factors that influence Emergency Response Task Performance (ERTP). The factors affecting the performance of ER tasks were elicited through a review of the literature to identify the essential factors influencing NGOs' emergency response. This study then developed an ERTP model by combining the Knowledge-Based Theory (KBT) of the firm and the Task-Technology Fit (TTF) theory, which addresses technology utilisation. This study applied a quantitative approach to examine these factors. Based on purposive sampling, questionnaires were distributed to over 700 staff and volunteers working for 12 NGOs in Sudan. Smart PLS 2.0 M3 and IBM SPSS Statistics version 24 were used to analyse the data. The results revealed that KI is a significant factor related to ERTP. In addition, the SM usage factor was found to be significantly related to KI. Furthermore, this study discovered significant differences among volunteers and staff with different levels of experience in utilising SM for knowledge integration in the context of ER. The results of the study contribute to the body of knowledge by providing a model that ER managers, team members in NGOs, and decision-makers can use as a guideline for successfully assessing and validating ERTP. Additionally, it sets out guidelines that may be useful for NGOs in the effective use of social media as a platform for integrating knowledge. Finally, this study provides recommendations to flood decision-makers who are considering enhancing the performance of tasks within their organisations.

    Advanced Computer Technologies for Integrated Agro-Hydrologic Systems Modeling: Coupled Crop and Hydrologic Models for Agricultural Intensification Impacts Assessment

    Coupling hydrologic and crop models is increasingly important in studies of agro-hydrologic systems. Whether for resource conservation or cropping system improvement, the complex interactions between the hydrologic regime and crop management components require an integrative approach in order to be fully understood. Nevertheless, the literature offers limited resources on model coupling aimed at environmental scientists; most guides are written primarily for computer specialists, which makes them hard to follow and apply. To address this gap, we present extensive research on crop and hydrologic model coupling that addresses agro-hydrologic modeling studies in their integrative complexity. The primary focus is to understand the relationship between agricultural intensification and its impacts on the hydrologic balance. We provide documentation, classifications, applications, and references for the available technologies and development trends. We applied the results of this investigation by coupling the DREAM hydrologic model with the DSSAT crop model. Both models were upgraded, either in their source code (DREAM) or their operational base (DSSAT), for interoperability and parallelization. The resulting model operates on a grid basis at a daily time step. The model was applied in southern Italy to analyze the effect of fertilizer application on runoff generation between 2000 and 2013. The results of the study show a significant impact of nitrogen application on water yield. Indeed, nearly 71.5 thousand cubic meters of rainwater per kilogram of nitrogen per hectare are lost through a reduction of the runoff coefficient. Furthermore, a significant correlation between the nitrogen application amount and runoff is found on a yearly basis, with a Pearson's coefficient of 0.93.
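
    The yearly correlation check reported above (Pearson's r between nitrogen application and runoff) can be reproduced in a few lines of Python; the sketch below uses synthetic placeholder series, not the study's 2000–2013 data, purely to show the computation.

    # Yearly Pearson correlation between N application and runoff (synthetic data).
    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(2000, 2014)
    nitrogen_kg_ha = 100 + 5 * (years - 2000) + rng.normal(0, 3, years.size)
    runoff_mm = 50 + 0.8 * nitrogen_kg_ha + rng.normal(0, 4, years.size)

    r = np.corrcoef(nitrogen_kg_ha, runoff_mm)[0, 1]
    print(f"Pearson's r between yearly N application and runoff: {r:.2f}")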

    Knowledge hypergraph-based approach for multi-source data integration and querying: Application for Earth Observation domain

    Early warning against natural disasters to save lives and decrease damage has drawn increasing interest in developing systems that observe, monitor, and assess changes in the environment. Over the last years, numerous environmental monitoring systems and Earth Observation (EO) programs have been implemented. Nevertheless, these systems generate a large amount of EO data while using different vocabularies and different conceptual schemas. Accordingly, data reside in many siloed systems and are mainly untapped for integrated operations, insights, and decision-making. To overcome the insufficient exploitation of EO data, a data integration system is crucial to break down data silos and create a common information space in which data are semantically linked. Within this context, we propose a semantic data integration and querying approach, which aims to semantically integrate EO data and provide enhanced query processing in terms of accuracy, completeness, and semantic richness of response. To do so, we defined three main objectives. The first objective is to capture the knowledge of the environmental monitoring domain. To this end, we propose MEMOn, a domain ontology that provides a common vocabulary for the environmental monitoring domain in order to support the semantic interoperability of heterogeneous EO data. While creating MEMOn, we adopted a development methodology based on three fundamental principles. First, we used a modularization approach: the idea is to create separate modules, one for each context of the environment domain, in order to ensure the clarity of the global ontology's structure and guarantee the reusability of each module separately. Second, we used the upper-level ontology Basic Formal Ontology and the mid-level Common Core Ontologies to facilitate the integration of the ontological modules into the global one. Third, we reused existing domain ontologies such as ENVO and SSN to avoid creating the ontology from scratch; this can improve its quality, since the reused components have already been evaluated. MEMOn was then evaluated on real use case studies, according to the requirements of the Sahara and Sahel Observatory experts. The second objective of this work is to break down the data silos and provide a common environmental information space. Accordingly, we propose a knowledge hypergraph-based data integration approach to provide experts and software agents with a virtual, integrated, and linked view of data. This approach generates RML mappings between the developed ontology and the metadata, and then creates a knowledge hypergraph that semantically links these mappings to identify more complex relationships across data sources. One of the strengths of the proposed approach is that it goes beyond combining data retrieved from multiple independent sources and allows virtual data integration in a highly semantic and expressive way, using hypergraphs. The third objective of this thesis concerns the enhancement of query processing in terms of accuracy, completeness, and semantic richness of response, in order to make the returned results more relevant and richer in terms of relationships. Accordingly, we propose knowledge hypergraph-based query processing that improves the selection of the sources contributing to the final result of an input query. Indeed, the proposed approach moves beyond the discovery of simple one-to-one equivalence matches and relies on the identification of more complex relationships across data sources by referring to the knowledge hypergraph. This enhancement significantly increases answer completeness and semantic richness. The proposed approach was implemented in an open-source tool and has proved its effectiveness through a real use case in the environmental monitoring domain.
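
    The Python sketch below conveys the source-selection idea behind the knowledge hypergraph: each hyperedge links an ontology concept to the (source, mapping) pairs that can answer it, and a query over several concepts selects every contributing source. Concept, source, and mapping names are illustrative assumptions, not the thesis's implementation.

    # Toy knowledge hypergraph: concept -> set of (source, mapping) pairs.
    from collections import defaultdict

    hypergraph: defaultdict[str, set[tuple[str, str]]] = defaultdict(set)

    def add_hyperedge(concept: str, members: set[tuple[str, str]]) -> None:
        """Register a hyperedge linking one concept to several source mappings."""
        hypergraph[concept] |= members

    add_hyperedge("memon:FloodObservation",
                  {("sensor_db", "rml_map_1"), ("satellite_api", "rml_map_2")})
    add_hyperedge("memon:RainfallMeasure",
                  {("sensor_db", "rml_map_3")})

    def sources_for(concepts: list[str]) -> set[str]:
        """Select every data source that contributes to any queried concept."""
        return {src for c in concepts for (src, _) in hypergraph[c]}

    print(sources_for(["memon:FloodObservation", "memon:RainfallMeasure"]))
    # {'sensor_db', 'satellite_api'} (set order may vary)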