111 research outputs found

    Developing a Framework for Stigmergic Human Collaboration with Technology Tools: Cases in Emergency Response

    Information and Communications Technologies (ICTs), particularly social media and geographic information systems (GIS), have become a transformational force in emergency response. Social media enables ad hoc collaboration, providing timely, useful information dissemination and sharing and helping to overcome limitations of time and place. Geographic information systems increase the level of situation awareness, serving geospatial data through interactive maps, animations, and computer-generated imagery derived from sophisticated global remote sensing systems. Digital workspaces bring these technologies together and help meet ad hoc and formal emergency response challenges through their affordances of situation awareness and mass collaboration. Distributed ICTs that enable ad hoc emergency response via digital workspaces have arguably made traditional top-down system deployments less relevant in certain situations, including emergency response (Merrill, 2009; Heylighen, 2007a, b). Heylighen (2014, 2007a, b) theorizes that human cognitive stigmergy explains some self-organizing characteristics of ad hoc systems. Elliott (2007) identifies cognitive stigmergy as a factor in mass collaborations supported by digital workspaces. Stigmergy, a term from biology, refers to self-organizing systems whose agents coordinate via perceived changes in the environment rather than direct communication. In the present research, ad hoc emergency response is examined through the lens of human cognitive stigmergy. The basic assertion is that ICTs and stigmergy together make possible highly effective ad hoc collaborations in circumstances where more typical collaborative methods break down. The research is organized into three essays: an in-depth analysis of the development and deployment of the Ushahidi emergency response software platform, a comparison of the ICTs used for emergency response during Hurricanes Katrina and Sandy, and a description of a process model developed from the case studies and relevant academic literature.
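
    As a concrete illustration of the mechanism, consider the following minimal Python sketch (hypothetical and illustrative only, not drawn from the dissertation): agents never exchange messages; each one reads marks left in a shared environment, reinforces them, and follows the strongest traces, so coordinated trails emerge from the environment alone.

    import random

    GRID = 20                                    # size of the shared environment
    marks = [[0.0] * GRID for _ in range(GRID)]  # traces left in the environment
    agents = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(30)]

    def neighbours(x, y):
        """Cells an agent can perceive and move to (4-neighbourhood, wrapping)."""
        return [((x + dx) % GRID, (y + dy) % GRID)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

    for step in range(200):
        for i, (x, y) in enumerate(agents):
            # Perceive the environment only: there is no agent-to-agent messaging.
            options = neighbours(x, y)
            # Prefer cells other agents have already marked (positive feedback).
            nx, ny = max(options, key=lambda c: (marks[c[0]][c[1]], random.random()))
            marks[nx][ny] += 1.0                 # modifying the environment is the signal
            agents[i] = (nx, ny)
        # Traces decay, so stale information fades from the shared workspace.
        marks = [[m * 0.95 for m in row] for row in marks]

    print(max(max(row) for row in marks))        # strong trails indicate emergent order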

    Learning from information crises: Exploring aggregated trustworthiness in big data production

    In a crisis situation, when traditional venues for information dissemination are not reliable and information is needed immediately, "aggregated trustworthiness", that is, data verification through network evaluation and social validation, becomes an important alternative. However, the risk with evaluating credibility through trust and network reputation is that the resulting perspective can become biased. In these socially distributed information systems it is therefore particularly important to understand how data is socially produced, and by whom. The purpose of the research project presented in this position paper is to explore how patterns of bias in online information production can be made more transparent by including tools that analyze and visualize aggregated trustworthiness. The research project consists of two interconnected parts. We will first look into a recent crisis situation, the case of Red Hook after Hurricane Sandy, to see how the dissemination of information took place in the recovery work, focusing on questions of credibility and trust. This case study will then inform the design of two collaborative tools with which we investigate how social validation processes can be made more transparent.
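
    How such an aggregated score might be computed can be sketched briefly. The following Python fragment is a hypothetical illustration (the scoring formula, field names and weighting are assumptions, not the project's actual design): it blends a source's network reputation with crowd validation, and shows how either signal can dominate, which is exactly the kind of bias the proposed tools would surface.

    from dataclasses import dataclass

    @dataclass
    class Report:
        author_reputation: float   # prior standing of the source in the network, 0..1
        endorsements: int          # social validation: confirmations by other users
        contradictions: int        # other users disputing the report

    def aggregated_trustworthiness(r: Report, k: float = 3.0) -> float:
        """Blend network reputation with crowd validation into one 0..1 score.

        k controls how many votes it takes for the crowd signal to
        outweigh the author's prior reputation.
        """
        votes = r.endorsements + r.contradictions
        crowd = r.endorsements / votes if votes else 0.5   # neutral if no votes yet
        weight = votes / (votes + k)                       # crowd weight grows with votes
        return (1 - weight) * r.author_reputation + weight * crowd

    # A well-endorsed report from an unknown source can outrank a poorly
    # validated report from a reputable one, and vice versa.
    print(aggregated_trustworthiness(Report(0.2, endorsements=15, contradictions=1)))
    print(aggregated_trustworthiness(Report(0.9, endorsements=0, contradictions=4)))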

    Technology for Good: Innovative Use of Technology by Charities

    Technology for Good identifies ten technologies being used by charitable organizations in innovative ways. The report briefly introduces each technology and provides examples of how those technologies are being used. Examples are drawn from a broad spectrum of organizations working on widely varied issues around the globe. This makes Technology for Good a unique repository of inspiration for the public and private sectors, funders, and other change makers who support the creation and use of technology for social good.

    Partnering People with Deep Learning Systems: Human Cognitive Effects of Explanations

    Advances in “deep learning” algorithms have led to intelligent systems that provide automated classifications of unstructured data. Until recently these systems could not provide the reasons behind a classification. This lack of “explainability” has led to resistance to applying these systems in some contexts. An intensive research and development effort to make such systems more transparent and interpretable has proposed and developed multiple types of explanation to address this challenge. Relatively little research has been conducted into how humans process these explanations. Theories and measures were selected from several areas of research on social cognition: attribution of mental processes from intentional systems theory, working memory demands from cognitive load theory, and self-efficacy from social cognitive theory. The task was crowdsourced damage assessment of aerial images of natural disasters, guided by a written assessment guideline. The “Wizard of Oz” method was used to generate the damage assessment output of a simulated agent. The output and explanations contained errors consistent with transferring a deep learning system to a new disaster event. A between-subjects experiment was conducted in which three types of natural language explanation were manipulated between conditions. Counterfactual explanations increased intrinsic cognitive load and made participants more aware of the challenges of the task. Explanations that described boundary conditions and failure modes (“hedging explanations”) decreased agreement with erroneous agent ratings without a detectable effect on cognitive load. However, these effects were not large enough to counteract decreases in self-efficacy and increases in erroneous agreement that resulted from providing a causal explanation. The extraneous cognitive load generated by explanations had the strongest influence on self-efficacy in the task. Presenting all of the explanation types at the same time maximized cognitive load and agreement with erroneous simulated output. Perceived interdependence with the simulated agent was also associated with increases in self-efficacy; however, trust in the agent was not associated with differences in self-efficacy. These findings identify effects related to research areas that have developed methods for designing tasks which may increase the effectiveness of explanations.
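
    To make the manipulated explanation styles concrete, the short Python sketch below shows what causal, counterfactual and hedging explanations might look like for a simulated damage-rating agent. The templates are hypothetical illustrations, not the study's actual materials.

    def causal(rating, feature):
        # Causal explanation: states why the agent chose the rating.
        return f"Rated '{rating}' because the image shows {feature}."

    def counterfactual(rating, other, missing):
        # Counterfactual explanation: what would have changed the rating.
        return (f"Rated '{rating}'; it would have been '{other}' "
                f"if the image had shown {missing}.")

    def hedging(rating, limitation):
        # Hedging explanation: boundary conditions and known failure modes.
        return (f"Rated '{rating}', but this model was trained on a different "
                f"disaster and is unreliable when {limitation}.")

    print(causal("major damage", "a collapsed roof section"))
    print(counterfactual("major damage", "minor damage", "an intact roofline"))
    print(hedging("major damage", "roofs are obscured by debris or shadow"))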

    Crowdsourcing Crisis Management Platforms: A Privacy and Data Protection Risk Assessment and Recommendations

    Over the last few years, crowdsourcing has expanded rapidly, allowing citizens to connect with each other and with their governments, coordinate disaster response work, map political conflicts, acquire information quickly, and participate in issues that affect their day-to-day lives. As emerging tools and technologies offer huge potential for quick and timely response during a crisis, crisis responders draw support from these tools and techniques. The ‘Guiding Principles’ of the Sendai Framework for Disaster Risk Reduction 2015-2030 identify that ‘disaster risk reduction requires a multi-hazard approach and inclusive risk-informed decision-making (RIDM) based on the open exchange and dissemination of disaggregated data, including by sex, age and disability, as well as on easily accessible, up-to-date, comprehensible, science-based, non-sensitive risk information, complemented by traditional knowledge’. Addressing Priority Actions 1 and 2, this PhD research aims to identify various risks and present recommendations for the ‘RIDM Process’ in the form of a general Privacy and Data Protection Risk Assessment and Recommendations for crowdsourcing crisis management. It includes legal, ethical and technical recommendations.

    Crisis Crowdsourcing in Government: Characterising efforts by North American Agencies to Inform Emergency Management Operations

    Crowdsourcing has proven to be a useful communication platform during and in the direct aftermath of a disastrous event. While previous research in crisis crowdsourcing demonstrates its wide adoption for aiding response efforts, this research is generally limited to adoption by non-government organizations and members of the general public, not government agencies. There is a gap in understanding the state of crowdsourcing by governments for emergency management. Additionally, there is a noticeable focus on the application of crowdsourcing in the response to and recovery from a given disaster, with less attention paid to mitigation and preparedness. This research aims to classify the use of government crisis crowdsourcing across all phases of the disaster management cycle in Canada and the USA and to identify the barriers and constraints faced by Canadian government agencies when adopting crisis crowdsourcing and social media for emergency management. Semi-structured interviews conducted with 22 government officials at various levels of government in Canada and the USA reveal that crowdsourced crisis information has a place in all phases of the disaster management cycle, though direct crowdsourcing has yet to be applied in the pre-disaster phases. Participating federal agencies appear to be using crowdsourced information for mitigation and preparedness efforts, while lower-tiered agencies are using crowdsourcing for direct response and recovery. A more in-depth analysis of the barriers and constraints faced by participating Canadian agencies looking to adopt crisis crowdsourcing or social media for emergency management reveals three general areas of concern that may be hindering crisis crowdsourcing efforts in Canada: organizational factors, demographic factors, and hazard risk. Based on these three areas, a readiness assessment scheme is presented that allows agencies to pinpoint the most prevalent barriers to their crowdsourcing efforts and to formulate plans to address them.

    Earthquake reconnaissance data sources, a literature review

    Earthquakes are among the most catastrophic natural phenomena. After an earthquake, reconnaissance enables effective recovery by collecting data on building damage and other impacts. This paper aims to identify state-of-the-art data sources for building damage assessment and provide guidance for more efficient data collection. We have reviewed 39 articles that indicate the sources used by different authors to collect data related to damage and post-disaster recovery progress after earthquakes between 2014 and 2021. The current data collection methods fall into seven categories: fieldwork or ground surveys, omnidirectional imagery (OD), terrestrial laser scanning (TLS), remote sensing (RS), crowdsourcing platforms, social media (SM) and closed-circuit television (CCTV) videos. The selection of a particular data source or collection technique for earthquake reconnaissance depends on the questions the data are meant to answer. We conclude that modern reconnaissance missions cannot rely on a single data source; different sources should complement each other, validate collected data, and systematically quantify the damage. The recent increase in the number of crowdsourcing and SM platforms used to source earthquake reconnaissance data demonstrates that these are likely to become an increasingly important data source.

    Guidance note on the application of coastal monitoring for small island developing states: Part of the NOC-led project “Climate Change Impact Assessment: Ocean Modelling and Monitoring for the Caribbean CME states”, 2017-2020; under the Commonwealth Marine Economies (CME) Programme in the Caribbean.

    Small Island Developing States (SIDS) are a diverse group of 51 countries and territories vulnerable to human-induced climate change, due to factors including their small size, large exclusive economic zones and limited resources. They generally have insufficient critical mass in scientific research and technical capability to carry out coastal monitoring campaigns from scratch, and limited access to data. This guidance report goes some way to addressing these issues by providing information on monitoring methods and signposting data sources. Coastal monitoring, the collection, analysis and storage of information about coastal processes and the response of the coastline, provides information on how the coast changes over time, after storm events and due to the effects of human intervention. Accurate and repeatable observational data are essential to informed decision making, particularly in light of climate change, the impacts of which are already being felt. In this report, we review the need for monitoring and the development of appropriate strategies, which include good baseline data and long-term repeatable data collection at appropriate timescales. We identify some of the methods for collecting in situ data, such as tide gauges and topographic surveys, and highlight where resources in terms of data and equipment are currently available. We then explore the range of remote sensing methods available, from satellites to smartphone photography. Both in situ and remotely sensed data are important as inputs into models, which in turn feed into visualisations for decision-making. We review the availability of a wide range of datasets, including details of how to access satellite data and links to international and regional data banks. The report concludes with information on the use of Geographical Information Systems (GIS) and good practice in managing data.
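
    As a small worked example of the kind of analysis such monitoring enables, the Python sketch below fits a linear trend to monthly tide-gauge means. The data here are synthetic, fabricated purely for illustration; real series would come from national gauges or archives such as PSMSL.

    import numpy as np

    # Hypothetical monthly mean sea levels (mm) from a single tide gauge.
    months = np.arange(120)                     # ten years of monthly means
    rng = np.random.default_rng(42)
    sea_level = (7000 + 0.3 * months           # slow rise of 0.3 mm/month
                 + 20 * np.sin(2 * np.pi * months / 12)   # seasonal cycle
                 + rng.normal(0, 15, months.size))        # measurement noise

    # Least-squares linear fit: the slope is the trend in mm per month.
    slope, intercept = np.polyfit(months, sea_level, 1)
    print(f"Estimated trend: {slope * 12:.1f} mm/yr")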

    Towards Automated Analysis of Urban Infrastructure after Natural Disasters using Remote Sensing

    Natural disasters, such as earthquakes and hurricanes, are an unpreventable component of the complex and changing environment we live in. Continued research and advancement in disaster mitigation through prediction of and preparation for impacts have undoubtedly saved many lives and prevented significant amounts of damage, but it is inevitable that some events will cause destruction and loss of life due to their sheer magnitude and proximity to built-up areas. Consequently, development of effective and efficient disaster response methodologies is a research topic of great interest. A successful emergency response is dependent on a comprehensive understanding of the scenario at hand. It is crucial to assess the state of the infrastructure and transportation network, so that resources can be allocated efficiently. Obstructions to the roadways are one of the biggest inhibitors to effective emergency response. To this end, airborne and satellite remote sensing platforms have been used extensively to collect overhead imagery and other types of data in the event of a natural disaster. The ability of these platforms to rapidly probe large areas is ideal in a situation where a timely response could result in saving lives. Typically, imagery is delivered to emergency management officials who then visually inspect it to determine where roads are obstructed and buildings have collapsed. Manual interpretation of imagery is a slow process and is limited by the quality of the imagery and what the human eye can perceive. In order to overcome the time and resource limitations of manual interpretation, this dissertation investigated the feasibility of performing fully automated post-disaster analysis of roadways and buildings using airborne remote sensing data. First, a novel algorithm for detecting roadway debris piles from airborne light detection and ranging (lidar) point clouds and estimating their volumes is presented. Next, a method for detecting roadway flooding in aerial imagery and estimating the depth of the water using digital elevation models (DEMs) is introduced. Finally, a technique for assessing building damage from airborne lidar point clouds is presented. All three methods are demonstrated using remotely sensed data that were collected in the wake of recent natural disasters. The research presented in this dissertation builds a case for the use of automatic, algorithmic analysis of road networks and buildings after a disaster. By reducing the latency between the disaster and the delivery of damage maps needed to make executive decisions about resource allocation and performing search and rescue missions, significant loss reductions could be achieved.
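
    The flood-depth step lends itself to a compact illustration. The Python sketch below is not the dissertation's algorithm, just a minimal version of the idea under stated assumptions: given a ground-elevation DEM and an image-derived flood mask on the same grid, approximate the water-surface elevation from the flood boundary and take depth as water surface minus ground elevation.

    import numpy as np

    # Assumed inputs: a ground-elevation DEM (metres) and a boolean mask of
    # pixels classified as flooded in aerial imagery (both on the same grid).
    dem = np.array([[3.0, 2.5, 2.0, 2.2],
                    [2.8, 2.1, 1.8, 2.0],
                    [2.6, 1.9, 1.7, 1.9]])
    flood_mask = dem < 2.2          # stand-in for an image-derived water mask

    # Flooded pixels adjacent to dry ground sit near the waterline, so the
    # highest flooded ground elevation approximates the water surface.
    water_surface = dem[flood_mask].max()

    # Depth = water surface minus ground elevation; zero on dry land.
    depth = np.where(flood_mask, water_surface - dem, 0.0)
    print(np.round(depth, 2))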

    Rethinking Flood Analytics: Proceedings from the 2017 Flood Analytics Colloquium

    This report documents outcomes from the Flood Analytics Colloquium held at the Renaissance Computing Institute (RENCI) in Chapel Hill, NC, on November 7-9, 2017. The Colloquium was sponsored jointly by the Coastal Resilience Center of Excellence (CRC), RENCI, and two organizations within the U.S. Department of Homeland Security’s Science and Technology Directorate: the First Responders Group (FRG) and the Office of University Programs. The overall purpose of the Colloquium was to support the Flood Apex Program, which is managed by the FRG with the goals of reducing fatalities and property losses from future flood events, increasing community resilience to disruptions caused by flooding, and developing better investment strategies to prepare for, respond to, recover from and mitigate against flood hazards. The Colloquium convened a group of approximately 50 selected persons from a variety of sectors and disciplines to explore the future of flood analytics and how it can better address the increasingly complex needs of society in dealing with flood events.