20 research outputs found

    Opportunities and Risks of Disaster Data from Social Media: A Systematic Review of Incident Information

    Compiling and disseminating information about incidents and disasters is key to disaster management and relief. But due to inherent limitations of the acquisition process, the required information is often incomplete or missing altogether. To fill these gaps, citizen observations spread through social media are widely considered a promising source of relevant information, and many studies propose new methods to tap this resource. Yet the overarching question of whether, and under which circumstances, social media can supply relevant information (both qualitatively and quantitatively) remains unanswered. To shed some light on this question, we review 37 large disaster and incident databases covering 27 incident types, organize the contained data and its collection process, and identify the missing or incomplete information. The resulting data collection reveals six major use cases for social media analysis in incident data collection: impact assessment and verification of model predictions, narrative generation, enabling enhanced citizen involvement, supporting weakly institutionalized areas, narrowing surveillance areas, and reporting triggers for periodical surveillance. Aside from this analysis, we discuss the advantages and disadvantages of using social media data to close information gaps related to incidents and disasters.

    A PageRank-based Reputation Model for VGI Data

    Quality of data is one of the key issues in the domain of Volunteered Geographic Information (VGI). To this end, VGI data has sometimes been compared in the literature with authoritative geospatial data. Evaluating single contributions to VGI databases is more relevant for some applications and typically relies on evaluating the reputation of contributors and using it as a proxy measure for data quality. In this paper, we present a novel approach for reputation evaluation that is based on the well-known PageRank algorithm for Web pages. We use a simple model that describes different versions of a geospatial entity in terms of corrections and completions. Authors and VGI contributions, together with their mutual relationships, are modelled as a graph. To evaluate the reputation of authors and contributions in this graph, we propose an algorithm based on the personalized version of PageRank.
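The abstract above does not give implementation details, but the core idea of personalized PageRank over an author/contribution graph can be sketched as follows. The graph layout, node names, damping factor, and iteration count are all illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch: personalized PageRank over a graph linking authors to
# their VGI contributions, where a newer version links to the version it
# corrects or completes. All names and parameters are illustrative.

def personalized_pagerank(edges, personalization, damping=0.85, iters=50):
    """Iteratively compute PageRank scores.

    edges: dict node -> list of nodes it links to
    personalization: dict node -> restart probability (values sum to 1)
    """
    nodes = set(edges) | {v for targets in edges.values() for v in targets}
    rank = {n: personalization.get(n, 0.0) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) * personalization.get(n, 0.0) for n in nodes}
        for src, targets in edges.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for dst in targets:
                    new[dst] += share
            else:
                # Dangling node: redistribute its mass via personalization
                for n in nodes:
                    new[n] += damping * rank[src] * personalization.get(n, 0.0)
        rank = new
    return rank

# Authors link to their contributions; contribution v2 corrects v1, so
# reputation flows from v2 back to the corrected version v1.
edges = {
    "author_A": ["v1"],
    "author_B": ["v2"],
    "v1": [],
    "v2": ["v1"],
}
personalization = {"author_A": 0.5, "author_B": 0.5}
scores = personalized_pagerank(edges, personalization)
```

In this toy graph, `v1` ends up with the highest score because it receives links both from its author and from the correcting version `v2`; how corrections versus completions should actually weight the edges is a design choice the paper itself addresses.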

    Uncertainty-aware Visual Analytics for Spatio-temporal Data Exploration

    Uncertainty in spatio-temporal data is described as the discrepancy between a measured value of an object and the true value of that object. Common causes of uncertainty in data can be identified as errors of precision in the data measurement devices, inadequate domain knowledge of the data collector, absence of gatekeepers, etc., known in this dissertation as inherent or source uncertainties. These inherent uncertainties further vary depending on the type of data (e.g., geotagged text or image data), as well as the explicit and implicit nature of the spatial dimension in the data. Static and dynamic visualisation methods have been used to communicate uncertainties. However, a gap we see in such uncertainty visualisations is that users have little to no leeway to control the system outcomes (e.g., to weigh in their domain expertise, control to what extent uncertainty plays a role in the analysis, or reduce uncertainty in the data). Visual analytics helps to fill this gap by allowing the user to steer the analysis process through interaction. The challenge of uncertainty analysis with visual analytics is that we not only have to encounter the inherent data uncertainties, but also the uncertainties that keep propagating through every component in a visual analytics system (the data, data models, data visualisations, and model-visualisation couplings), and through every interaction from the user. To address this challenge, this dissertation introduces a framework that defines the role of uncertainty throughout the visual analytics knowledge generation process. At each component of the visual analytics system, guidelines in terms of methods are specified for assessing the uncertainties. Following this framework, four novel visual analytics approaches are introduced that enable a user to explore, assess, and mitigate context-specific uncertainties in heterogeneous data types: image data, text data, location data, and numerical data. By enabling strong interaction between the user and the system, uncertainties are mitigated and trustworthy knowledge is extracted, thereby bridging the gap identified in static and dynamic uncertainty visualisations. The approaches developed are evaluated against anecdotal evidence and in a usability experiment.

    Using reverse viewshed analysis to assess the location correctness of visually generated VGI

    With the increased availability of user-generated data, assessing the quality and credibility of such data becomes important. In this article, we propose to assess the location correctness of visually generated Volunteered Geographic Information (VGI) as a quality reference measure. The location correctness is determined by checking the visibility of the point of interest from the position of the visually generated VGI (the observer point); as an example, we utilize Flickr photographs. To this end, we first collect all Flickr photographs that conform to a certain point of interest through their textual labelling. Then we conduct a reverse viewshed analysis for the point of interest to determine whether it lies within the area of visibility of the observer points. If the point of interest lies outside the visibility of a given observer point, the respective geotagged image is considered incorrectly geotagged. In this way, we analyze sample datasets of photographs and make observations regarding the dependency between certain user/photo metadata and (in)correct geotags and labels. In the future, the dependency relationship between location correctness and user/photo metadata can be used to automatically infer user credibility. In other words, attributes such as profile completeness, together with location correctness, can serve as a weighted score to assess credibility.
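The visibility test at the heart of a viewshed analysis can be sketched as a line-of-sight check against a digital elevation model: the sight line from the observer to the point of interest must clear every terrain cell in between. This is a minimal sketch under assumed simplifications (a unit-spaced raster grid, linear line-of-sight sampling, a fixed eye height), not the article's actual GIS workflow.

```python
# Hypothetical sketch: line-of-sight test on a raster elevation model,
# deciding whether a point of interest (POI) is visible from a photo's
# geotagged observer position. Grid spacing, sampling, and the 1.7 m
# eye height are illustrative assumptions.

def is_visible(dem, observer, poi, eye_height=1.7):
    """Return True if poi is visible from observer over the elevation grid.

    dem: 2D list of elevations; observer and poi are (row, col) cells.
    """
    (r0, c0), (r1, c1) = observer, poi
    steps = max(abs(r1 - r0), abs(c1 - c0))
    if steps == 0:
        return True
    start_h = dem[r0][c0] + eye_height
    target_h = dem[r1][c1]
    for i in range(1, steps):
        t = i / steps
        r = round(r0 + t * (r1 - r0))
        c = round(c0 + t * (c1 - c0))
        # Height of the straight sight line at this fraction of the path
        line_h = start_h + t * (target_h - start_h)
        if dem[r][c] > line_h:
            return False  # terrain blocks the line of sight
    return True

# A ridge (elevation 50) between observer and POI blocks visibility.
ridge = [[10, 10, 50, 10, 10]]
blocked = is_visible(ridge, (0, 0), (0, 4))

# Flat terrain leaves the POI visible.
flat = [[10, 10, 10, 10, 10]]
clear = is_visible(flat, (0, 0), (0, 4))
```

A reverse viewshed in the article's sense would run such a test from the point of interest against every candidate observer point; photographs whose observer point fails the test would be flagged as incorrectly geotagged.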

    Usability of uncertainty visualisation methods : A comparison between different user groups

    This paper presents the results of a web-based survey assessing the usability of the main uncertainty visualisation methods for users belonging to different key domains, such as GIS and climate change research. We assess the usability of the visualisation methods based on the users' performance in selected learnability tasks, in addition to assessing user preferences. A correspondence analysis between these two results was further carried out to find the association between users' performance and preferences. The key outcome of our study is a ranking of uncertainty visualisation methods according to their suitability for the user domains tested within our study. The results are a valuable basis for tools such as our Uncertainty Visualisation Selector (described later), which can recommend the most appropriate uncertainty visualisation methods according to user-defined requirements.

    Moving on Twitter : Using Episodic Hotspot and Drift Analysis to Detect and Characterise Spatial Trajectories

    Today, a tremendous amount of spatio-temporal data is user generated, so-called volunteered geographic information (VGI). Among the many VGI sources, microblogging services such as Twitter are extensively used to disseminate information on a near real-time basis. Interest in the analysis of microblogged data has to date been motivated by many applications, ranging from trend detection and early disaster warning to urban management and marketing. One important analysis perspective in understanding microblogged data is based on the notion of drift, considering a gradual change of real-world phenomena observed across space, time, content, or a combination thereof. The scientific contribution of this paper is a systematic framework that, on the one hand, utilises Kernel Density Estimation (KDE) to detect hotspot clusters of Twitter activity, which are episodically sequential in nature; these clusters help to derive spatial trajectories. On the other hand, we introduce the concept of drift, which characterises these trajectories by looking into changes of sentiment and topics to derive meaningful information. We apply our approach to a Twitter dataset comprising 26,000 tweets and demonstrate how phenomena of interest can be detected. As an example, we use our approach to detect the locations of Lady Gaga’s concert tour in 2013. A set of visualisations allows the analyst to explore the identified trajectories in space, enhanced by optional overlays for sentiment or other parameters of interest.
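The KDE step described above can be illustrated with a minimal Gaussian kernel density estimate over tweet coordinates: density is evaluated on a set of candidate locations, and hotspots are the locations where the estimate peaks. The bandwidth, the evaluation grid, and the toy coordinates are assumptions for illustration, not the paper's parameters.

```python
# Illustrative sketch (not the paper's implementation): Gaussian KDE over
# 2D tweet coordinates. Hotspot cells would be taken as density peaks
# above some threshold; here we only compute the density surface.
import math

def kde_density(points, grid, bandwidth=1.0):
    """Return the Gaussian KDE value at each grid location for 2D points."""
    norm = 1.0 / (2 * math.pi * bandwidth ** 2 * max(len(points), 1))
    out = []
    for gx, gy in grid:
        total = sum(
            math.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2 * bandwidth ** 2))
            for x, y in points
        )
        out.append(norm * total)
    return out

# Three tweets cluster near (0, 0) and one outlier sits at (5, 5):
# the density should peak at the cluster, marking it as the hotspot.
tweets = [(0.1, -0.2), (0.0, 0.3), (-0.2, 0.1), (5.0, 5.0)]
grid = [(0.0, 0.0), (5.0, 5.0)]
dens = kde_density(tweets, grid)
```

Linking the per-episode hotspots into a trajectory, and attaching sentiment or topic drift to its segments, is the part of the framework the abstract leaves to the full paper.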

    Integrated Spatial Uncertainty Visualization using Off-screen Aggregation

    Visualization of spatial data uncertainties is crucial to the data understanding and exploration process. Scientific measurements, numerical simulations, and user-generated content are error-prone sources that gravely influence data reliability. When exploring large spatial datasets, we face two main challenges: data and uncertainty are two different sets which need to be integrated into one visualization, and we often lose the contextual overview when zooming or filtering to see details. In this paper, we present an extrinsic uncertainty visualization as well as an off-screen technique which integrates the uncertainty representation and enables the user to perceive data context and topology in the analysis process. We show the applicability and usefulness of our approach in a use case.