13,884 research outputs found

    Interactive tag maps and tag clouds for the multiscale exploration of large spatio-temporal datasets

    'Tag clouds' and 'tag maps' are introduced to represent geographically referenced text. In combination, these aspatial and spatial views are used to explore a large structured spatio-temporal data set by providing overviews and filtering by text and geography. Prototypes are implemented using freely available technologies, including Google Earth and Yahoo!'s Tag Map applet. The interactive tag map and tag cloud techniques and the rapid prototyping method used are informally evaluated through the successes and limitations encountered. Preliminary evaluation suggests that the techniques may be useful for generating insights when visualizing large data sets containing geo-referenced text strings. The rapid prototyping approach enabled the techniques to be developed and evaluated, leading to geovisualization through which a number of ideas were generated. Limitations of this approach are reflected upon. Tag placement, generalisation and prominence at different scales are issues that have come to light in this study and warrant further work.
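The core of a tag cloud is a mapping from term frequency to visual prominence. The sketch below, which is illustrative rather than the paper's actual pipeline (that used Yahoo!'s Tag Map applet), linearly scales counts of hypothetical query terms to font sizes:

```python
# Minimal sketch of tag-cloud weighting for geo-referenced text.
# The terms below are invented for illustration.
from collections import Counter

def tag_weights(terms, min_size=10, max_size=48):
    """Map term frequencies to font sizes by linear scaling."""
    counts = Counter(terms)
    lo, hi = min(counts.values()), max(counts.values())
    span = (hi - lo) or 1  # avoid division by zero when all counts are equal
    return {t: min_size + (c - lo) * (max_size - min_size) / span
            for t, c in counts.items()}

sizes = tag_weights(["pizza", "taxi", "pizza", "hotel", "pizza", "taxi"])
# The most frequent term ("pizza") gets max_size; the rarest gets min_size.
```

A tag map would additionally attach a coordinate to each term before rendering; the multiscale placement and generalisation issues the abstract raises arise precisely because this weighting step says nothing about where overlapping tags should go.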

    Interactive visual exploration of a large spatio-temporal dataset: Reflections on a geovisualization mashup

    Exploratory visual analysis is useful for the preliminary investigation of large structured, multifaceted spatio-temporal datasets. This process requires the selection and aggregation of records by time, space and attribute, the ability to transform data and the flexibility to apply appropriate visual encodings and interactions. We propose an approach inspired by geographical 'mashups' in which freely-available functionality and data are loosely but flexibly combined using de facto exchange standards. Our case study combines MySQL, PHP and the LandSerf GIS to allow Google Earth to be used for visual synthesis and interaction with encodings described in KML. This approach is applied to the exploration of a log of 1.42 million requests made to a mobile directory service. Novel combinations of interaction and visual encoding are developed, including spatial 'tag clouds', 'tag maps', 'data dials' and multi-scale density surfaces. Four aspects of the approach are informally evaluated: the visual encodings employed, their success in the visual exploration of the dataset, the specific tools used and the 'mashup' approach. Preliminary findings will be beneficial to others considering using mashups for visualization. The specific techniques developed may be more widely applied to offer insights into the structure of multifarious spatio-temporal data of the type explored here.
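The glue in such a mashup is the exchange standard: database records are transformed into KML that Google Earth can render. The following is a minimal sketch of that step, assuming hypothetical (name, lon, lat) records rather than the authors' MySQL/PHP stack:

```python
# Hedged sketch: emitting KML placemarks from aggregated records, in the
# spirit of the mashup described (the actual system used MySQL/PHP/LandSerf).
from xml.sax.saxutils import escape

def records_to_kml(records):
    """records: iterable of (name, lon, lat) tuples -> KML document string."""
    placemarks = "".join(
        f"<Placemark><name>{escape(name)}</name>"
        f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>"
        for name, lon, lat in records
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            f"{placemarks}</Document></kml>")

kml = records_to_kml([("pizza", -0.1278, 51.5074)])
```

Because KML is plain XML, any scripting language on the server side can produce it, which is what makes the loose coupling between database, GIS and virtual globe workable.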

    Improving Big Data Visual Analytics with Interactive Virtual Reality

    For decades, the growth and volume of digital data collection have made it challenging to digest large volumes of information and extract underlying structure. Coined 'Big Data', these massive amounts of information have quite often been gathered inconsistently (e.g. from many sources, in various forms, at different rates). These factors impede not only the processing of data, but also analyzing and displaying it to the user in an efficient manner. Many efforts have been made in the data mining and visual analytics communities to create effective ways to further improve analysis and achieve the knowledge desired for better understanding. Our approach to improved big data visual analytics is two-fold, focusing on both visualization and interaction. Given geo-tagged information, we explore the benefits of visualizing datasets in the original geospatial domain by utilizing a virtual reality platform. After running proven analytics on the data, we intend to represent the information in a more realistic 3D setting, where analysts can achieve enhanced situational awareness and rely on familiar perceptions to draw in-depth conclusions about the dataset. In addition, developing a human-computer interface that responds to natural user actions and inputs creates a more intuitive environment. Tasks can be performed to manipulate the dataset and allow users to dive deeper on request, adhering to desired demands and intentions. Owing to the volume and popularity of social media, we developed a 3D tool visualizing Twitter on MIT's campus for analysis. Utilizing today's emerging technologies to create a fully immersive tool that promotes visualization and interaction can help ease the process of understanding and representing big data.
    Comment: 6 pages, 8 figures, 2015 IEEE High Performance Extreme Computing Conference (HPEC '15); corrected typo
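A common first step for rendering geo-tagged social-media data in a 3D scene is spatial binning: cell counts over a regular grid can then drive bar heights or density surfaces. The sketch below is illustrative only (not the authors' code), using invented coordinates near the MIT campus:

```python
# Illustrative sketch: binning geo-tagged posts into a regular lon/lat grid
# whose per-cell counts could drive 3D bar heights in a VR scene.
# Coordinates are invented for illustration.
from collections import Counter

def grid_density(points, cell=0.001):
    """points: iterable of (lon, lat) pairs -> Counter keyed by grid cell."""
    return Counter((int(lon // cell), int(lat // cell)) for lon, lat in points)

pts = [(-71.0935, 42.3601), (-71.0935, 42.3601), (-71.0942, 42.3598)]
dens = grid_density(pts)
# Two posts share a cell, so the densest cell has count 2.
```

In a real pipeline the cell size would be tied to the viewing scale, echoing the multiscale concerns raised by the tag-map work above.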

    The Visualization of Historical Structures and Data in a 3D Virtual City

    Google Earth is a powerful tool that allows users to navigate through 3D representations of many cities and places all over the world. Google Earth has a huge collection of 3D models, and it only continues to grow as users all over the world contribute new models. As new buildings are built, new models are also created. But what happens when a new building replaces another? The same thing that happens in reality also happens in Google Earth: old models are replaced with new models. While Google Earth shows the most current data, many users would also benefit from being able to view historical data. Google Earth has acknowledged this with the ability to view historical images by manipulating a time slider. However, this feature does not apply to 3D models of buildings, which remain in the environment even when viewing a time before their existence. I would like to build upon this concept by proposing a system that stores 3D models of historical buildings that have been demolished and replaced by new developments. People may want to view the old cities that they grew up in, which have undergone huge developments over the years. Old neighborhoods may be completely transformed with new roads and buildings. In addition to being able to view historical buildings, users may want to view statistics of a given area. Users can view such data in its raw format, but using 3D visualizations of statistical data allows for a greater understanding and appreciation of historical changes. I propose to enhance the visualization of the 3D world by allowing users to graphically view statistical data such as population, ethnic groups, education, crime, and income. With this feature users will not only be able to see physical changes in the environment, but also statistical changes over time.
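One simple way to encode an area statistic graphically in a 3D globe is to extrude each district's footprint to a height proportional to its value. The function below sketches that scaling step under assumed, hypothetical census figures; it is not part of the proposed system itself:

```python
# A minimal sketch, assuming hypothetical per-district population figures:
# scale a statistic to an extrusion height (in metres) for a 3D bar or
# extruded polygon per district.
def extrusion_heights(stats, max_height=500.0):
    """Linearly scale per-district values so the largest reaches max_height."""
    peak = max(stats.values())
    return {district: value / peak * max_height
            for district, value in stats.items()}

h = extrusion_heights({"Old Town": 12000, "Riverside": 3000})
# The most populous district is extruded to the full 500 m.
```

In KML terms, each height would feed a polygon's altitude with extrusion enabled, so that population, crime or income surfaces can be compared visually across time slices.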

    Seafloor characterization using airborne hyperspectral co-registration procedures independent from attitude and positioning sensors

    Remote-sensing technology and data-storage capabilities have advanced over the last decade to the point of commercial multi-sensor data collection. There is a constant need to characterize, quantify and monitor coastal areas for habitat research and coastal management. In this paper, we present work on seafloor characterization that uses hyperspectral imagery (HSI). The HSI data allow the operator to extend seafloor characterization from multibeam backscatter towards land and thus create a seamless ocean-to-land characterization of the littoral zone.

    Horizontal accuracy assessment of very high resolution Google Earth images in the city of Rome, Italy

    Google Earth (GE) has recently become the focus of increasing interest and popularity among the online virtual globes used in scientific research projects, due to the free and easily accessed satellite imagery it provides with global coverage. Nevertheless, the use of this service raises several research questions on the quality and uncertainty of spatial data (e.g. positional accuracy, precision, consistency), with implications for potential uses like data collection and validation. This paper aims to analyze the horizontal accuracy of very high resolution (VHR) GE images in the city of Rome (Italy) for the years 2007, 2011, and 2013. The evaluation was conducted by using both Global Positioning System ground truth data and cadastral photogrammetric vertices as independent check points. The validation process includes the comparison of histograms, graph plots, tests of normality, azimuthal direction errors, and the calculation of standard statistical parameters. The results show that the GE VHR imagery of Rome has an overall positional accuracy close to 1 m, sufficient for deriving ground truth samples, measurements, and large-scale planimetric maps.
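The accuracy figures such a study reports typically reduce to displacement vectors between image-derived points and independent check points, summarized as a root-mean-square error plus per-point azimuths. A minimal sketch of those two statistics, with invented projected coordinates (metres) standing in for the paper's GPS and cadastral data:

```python
# Sketch of the positional-accuracy statistics described: horizontal RMSE and
# azimuth of displacement between image-derived points and check points.
# Coordinates below are invented for illustration (projected CRS, metres).
import math

def horizontal_errors(pairs):
    """pairs: iterable of ((x_img, y_img), (x_ref, y_ref)) tuples."""
    sq_dists, azimuths = [], []
    for (xi, yi), (xr, yr) in pairs:
        dx, dy = xi - xr, yi - yr          # east and north displacement
        sq_dists.append(dx * dx + dy * dy)
        # azimuth measured clockwise from north, in [0, 360)
        azimuths.append(math.degrees(math.atan2(dx, dy)) % 360)
    rmse = math.sqrt(sum(sq_dists) / len(sq_dists))
    return rmse, azimuths

rmse, az = horizontal_errors([((100.6, 200.8), (100.0, 200.0))])
# A 0.6 m east / 0.8 m north offset gives a 1.0 m error at azimuth ~36.87 deg.
```

Plotting the azimuths as a rose diagram reveals whether displacements are random or systematic (e.g. a consistent georeferencing shift), which is what the azimuthal direction analysis in the abstract checks for.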