
    Visual analytics of movement: An overview of methods, tools and procedures

    Analysis of movement is currently a hot research topic in visual analytics. A wide variety of methods and tools for the analysis of movement data has been developed in recent years. They allow analysts to look at the data from different perspectives and fulfil diverse analytical tasks. Visual displays and interactive techniques are often combined with computational processing, which, in particular, enables analysis of larger amounts of data than would be possible with purely visual methods. Visual analytics leverages methods and tools developed in other areas related to data analytics, particularly statistics, machine learning and geographic information science. We present an illustrated, structured survey of the state of the art in visual analytics concerning the analysis of movement data. Besides reviewing existing work, we demonstrate, using examples, how different visual analytics techniques can support our understanding of various aspects of movement.

    Visual analytics on eye movement data reveal search patterns on dynamic and interactive maps

    In this paper, the results of a visual analytics approach to eye movement data are described, which allows the detection of underlying patterns in users' scanpaths during a visual search on a map. These patterns give insight into the users' cognitive processes and mental maps while working with interactive maps.

    How context influences the segmentation of movement trajectories - an experimental approach for environmental and behavioral context

    In the digital information age, where large amounts of movement data are generated daily through technological devices such as mobile phones, GPS, and digital navigation aids, the exploration of moving point datasets for identifying movement patterns has become a research focus in GIScience (Dykes and Mountain 2003). Visual analytics (VA) tools, such as GeoVISTA Studio (Gahegan 2001), have been developed to explore large amounts of movement data, based on the contention that VA combines computational methods with the outstanding human capabilities for pattern recognition, imagination, association, and reasoning (Andrienko et al. 2008). However, exploring, extracting, and understanding the meaning encapsulated in movement data from a user perspective has become a major bottleneck, not only in GIScience but in all areas of science where this kind of data is collected (Holyoak et al. 2008). In particular, the inherently complex and multidimensional nature of spatio-temporal data has not been sufficiently integrated into visual analytics tools. To ensure the inclusion of cognitive principles in the integration of space-time data, visual analytics has to consider how users conceptualize and understand movement data (Fabrikant et al. 2008). A review of cognitively motivated work exemplifies the urgent need to identify how humans make inferences and derive knowledge from movement data. In order to enhance visual analytics tools by integrating cognitive principles, we first have to ask to what extent cognitive factors influence our understanding, reasoning, and analysis during movement pattern extraction. It is especially important to comprehend human knowledge construction and reasoning about spatial and temporal phenomena and processes. This paper proposes an experimental approach with human subject testing to evaluate the importance of contextual information in visual displays of movement patterns. This research question is part of a larger research project with two main objectives: (1) getting a better understanding of how humans process spatio-temporal information, and (2) empirically validating guidelines to improve the design of visual analytics tools to enhance visual data exploration.

    Analysing the spatial dimension of eye movement data using a visual analytic approach

    Conventional analyses of eye movement data take into account only eye movement metrics, such as the number or duration of fixations and the length of the scanpaths, on which statistical analysis is performed to detect significant differences. However, the spatial dimension of the eye movements is neglected, even though it is an essential element when investigating the design of maps. The study described in this paper uses a visual analytics software package, the Visual Analytics Toolkit, to analyse the eye movement data. Selection, simplification, and aggregation functions are applied to filter out meaningful subsets of the data so that structures in the movement data can be recognised. Visualising and analysing these patterns provides essential insights into the user's search strategies while working with an (interactive) map.
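
    As a concrete illustration of the kind of aggregation step mentioned above, the sketch below collapses raw gaze samples into fixations using a simple dispersion-threshold (I-DT style) pass. This is a minimal, hypothetical Python example, not code from the Visual Analytics Toolkit; the function name and thresholds are assumptions chosen for illustration.

```python
import numpy as np

def idt_fixations(x, y, t, max_disp=30.0, min_dur=0.1):
    """Aggregate raw gaze samples (pixel coordinates x, y at times t in
    seconds) into fixations: a run of samples whose combined x/y spread
    stays within max_disp pixels and lasts at least min_dur seconds is
    collapsed into a single fixation centroid."""
    fixations, i, n = [], 0, len(t)
    while i < n:
        j = i + 1
        # Grow the window while the samples stay spatially compact
        # (quadratic recomputation of the spread, but fine for a sketch).
        while j < n:
            disp = (x[i:j + 1].max() - x[i:j + 1].min()) \
                 + (y[i:j + 1].max() - y[i:j + 1].min())
            if disp > max_disp:
                break
            j += 1
        if t[j - 1] - t[i] >= min_dur:
            # Keep the window as one fixation: centroid plus start/end time.
            fixations.append((x[i:j].mean(), y[i:j].mean(), t[i], t[j - 1]))
            i = j
        else:
            i += 1
    return fixations

# Toy usage: one second of 100 Hz gaze samples around two screen locations.
rng = np.random.default_rng(1)
t = np.arange(0.0, 1.0, 0.01)
x = np.where(t < 0.5, 300.0, 600.0) + rng.normal(0, 1.5, t.size)
y = np.full(t.size, 200.0) + rng.normal(0, 1.5, t.size)
print(idt_fixations(x, y, t))   # two fixations, one per gaze cluster
```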

    GeoCAM: A geovisual analytics workspace to contextualize and interpret statements about movement

    This article focuses on integrating computational and visual methods in a system that supports analysts in identifying, extracting, mapping, and relating linguistic accounts of movement. We address two objectives: (1) build the conceptual, theoretical, and empirical framework needed to represent and interpret human-generated directions; and (2) design and implement a geovisual analytics workspace for direction document analysis. We have built a set of geo-enabled computational methods to identify documents containing movement statements, and a visual analytics environment that uses natural language processing methods iteratively with geographic database support to extract, interpret, and map geographic movement references in context. Additionally, analysts can provide feedback to improve computational results. To demonstrate the value of this integrative approach, we have realized a proof-of-concept implementation focusing on identifying and processing documents that contain human-generated route directions. Using our visual analytics interface, an analyst can explore the results, provide feedback to improve those results, pose queries against a database of route directions, and interactively represent the route on a map.
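
    To make the document-triage step more concrete, the toy sketch below flags text that looks like human-generated route directions using a handful of regular expressions. This is purely illustrative Python and not the GeoCAM pipeline, which combines natural language processing with geographic database support; the patterns and threshold are assumptions.

```python
import re

# Hypothetical, regex-only stand-in for deciding whether a document
# contains movement statements worth deeper processing.
DIRECTION_PATTERNS = [
    r"\bturn (left|right)\b",
    r"\bhead (north|south|east|west)\b",
    r"\b(continue|go straight) (on|onto|along)\b",
    r"\bexit (at|onto)\b",
    r"\bfor \d+(\.\d+)? (miles|km|blocks)\b",
]

def looks_like_route_directions(text, min_hits=2):
    """Return True if the text matches enough direction-like patterns."""
    hits = sum(bool(re.search(p, text, re.IGNORECASE)) for p in DIRECTION_PATTERNS)
    return hits >= min_hits

doc = "Head north on Atherton St, turn left onto College Ave and continue on it for 2 miles."
print(looks_like_route_directions(doc))   # True
```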

    Visual analytics of delays and interaction in movement data

    Maximilian Konzack, Tim Ophelders, Michel A. Westenberg and Kevin Buchin are supported by the Netherlands Organisation for Scientific Research (NWO) under grant no. 612.001.207 (Maximilian Konzack, Michel A. Westenberg and Kevin Buchin) and grant no. 639.023.208 (Tim Ophelders). The analysis of interaction between movement trajectories is of interest for various domains when the movement of multiple objects is concerned. Interaction often includes a delayed response, making it difficult to detect interaction with current methods that compare movement at specific time intervals. We propose analyses and visualizations, on a local and a global scale, of delayed movement responses, where an action is followed by a reaction over time, on trajectories recorded simultaneously. We developed a novel approach to compute the global delay in subquadratic time using a fast Fourier transform (FFT). Central to our local analysis of delays is the computation of a matching between the trajectories in a so-called delay space, which encodes the similarities between all pairs of points of the trajectories. In the visualization, the edges of the matching are bundled into patches, such that the shape and color of a patch help to encode changes in an interaction pattern. To evaluate our approach experimentally, we have implemented it as a prototype visual analytics tool and applied the tool to three bidimensional data sets. For this, we used various measures to compute the delay space, including the directional distance, a new similarity measure which captures more complex interactions by combining directional and spatial characteristics. We compare matchings produced by various methods for computing similarity between trajectories. We also compare various procedures to compute the matching in the delay space, specifically the Fréchet distance, dynamic time warping (DTW), and edit distance (ED). Finally, we demonstrate how to validate the consistency of pairwise matchings by computing matchings between more than two trajectories.
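
    As a rough illustration of the FFT-based global delay idea, the sketch below estimates the lag between two equally sampled one-dimensional signals (for example, speed profiles derived from two trajectories) via cross-correlation computed in the frequency domain. It is a minimal Python sketch under assumed conditions (synchronous sampling, a single dominant delay) and not the authors' implementation; function and variable names are illustrative. Zero-padding the FFT keeps the correlation linear rather than circular, which is what makes the computation O(n log n) rather than quadratic.

```python
import numpy as np

def estimate_global_delay(a, b):
    """Estimate the lag d (in samples) such that b[i] ~ a[i - d], using
    FFT-based cross-correlation (O(n log n) instead of O(n^2))."""
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    size = 1 << (len(a) + len(b) - 1).bit_length()   # zero-pad to a power of two
    fa = np.fft.rfft(a, size)
    fb = np.fft.rfft(b, size)
    corr = np.fft.irfft(np.conj(fa) * fb, size)      # corr[k] = sum_n a[n] * b[n + k]
    lag = int(np.argmax(corr))
    if lag > size // 2:                              # wrap large indices to negative lags
        lag -= size
    return lag

# Toy check: b is a copy of the same signal shifted 20 samples later in time.
t = np.arange(220) * 0.05
base = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
a, b = base[20:], base[:-20]                         # b[i] == base[i] == a[i - 20]
print(estimate_global_delay(a, b))                   # expected: about 20
```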

    Visual analytics of location-based social networks for decision support

    Recent advances in technology have enabled people to add location information to social networks, called Location-Based Social Networks (LBSNs), where people share their communication and whereabouts not only in their daily lives but also during abnormal situations such as crisis events. However, since the volume of the data exceeds the boundaries of human analytical capabilities, it is almost impossible to perform a straightforward qualitative analysis of the data. The emerging field of visual analytics has been introduced to tackle such challenges by integrating approaches from statistical data analysis and human-computer interaction into highly interactive visual environments. Based on the idea of visual analytics, this research contributes techniques for knowledge discovery in social media data that provide comprehensive situational awareness. We extract valuable hidden information from the huge volume of unstructured social media data and model the extracted information for visualizing meaningful information along with user-centered interactive interfaces. We develop visual analytics techniques and systems for spatial decision support by coupling the modeling of spatiotemporal social media data with scalable and interactive visual environments. These systems allow analysts to detect and examine abnormal events within social media data by integrating automated analytical techniques and visual methods. We provide a comprehensive analysis of public behavioral response to disaster events by exploring and examining the spatial and temporal distribution of LBSNs. We also propose a trajectory-based visual analytics approach to LBSNs for analysing anomalous human movement during crises by incorporating a novel classification technique. Finally, we introduce a visual analytics approach for forecasting the overall flow of human crowds.
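
    As a minimal, hypothetical Python sketch of one exploration step implied above, the snippet below bins geotagged posts into space-time cells so their spatial and temporal distribution can be inspected. It is not the systems described in this work; the cell size, bin width, and names are assumptions chosen for illustration.

```python
import numpy as np

def spacetime_histogram(lats, lons, times, cell_deg=0.01, bin_hours=1.0):
    """Bin geotagged posts into a (lat cell, lon cell, time bin) histogram,
    a simple stand-in for exploring the spatial and temporal distribution
    of LBSN messages. `times` are hours since the start of the period."""
    lat_idx = np.floor(np.asarray(lats) / cell_deg).astype(int)
    lon_idx = np.floor(np.asarray(lons) / cell_deg).astype(int)
    t_idx = np.floor(np.asarray(times) / bin_hours).astype(int)
    counts = {}
    for key in zip(lat_idx.tolist(), lon_idx.tolist(), t_idx.tolist()):
        counts[key] = counts.get(key, 0) + 1
    return counts

# Toy usage: three posts, two of them in the same cell and hour.
lats = [40.4412, 40.4415, 40.4529]
lons = [-79.9959, -79.9961, -80.0122]
times = [0.2, 0.7, 3.4]
print(spacetime_histogram(lats, lons, times))
```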