1,072 research outputs found

    Concept-driven visualization for terascale data analytics

    Over the past couple of decades, the number of scientific data sets has exploded. The science community has since faced the common problem of being drowned in data yet starved of information. Identifying and extracting meaningful features from large data sets has become one of the central problems of scientific research, for simulation as well as sensory data sets. The problems at hand are multifold and need to be addressed concurrently to provide scientists with the necessary tools, methods, and systems. Firstly, the underlying data structures and management need to be optimized for the kind of data most commonly used in scientific research, i.e. terascale, time-varying, multi-dimensional, multi-variate, and potentially non-uniform grids. This implies avoiding data duplication and providing a transparent query structure built on sophisticated underlying data structures and algorithms. Secondly, for scientific data sets, simplistic queries are not sufficient to describe subsets or features. In time-varying data sets, many features can be described as local events, i.e. spatially and temporally limited regions with characteristic properties in value space. While scientists most often know quite well what they are looking for in a data set, at times they cannot formally or definitively describe their concept to computer science experts, especially when it is based on partially substantiated knowledge. Scientists need to be able to query and extract such features or events directly, without having to rewrite their hypotheses in an inadequately simple query language. Thirdly, tools to analyze the quality and sensitivity of these event queries themselves are required. Understanding local data sensitivity is necessary for enabling scientists to refine query parameters to produce more meaningful findings. Query sensitivity analysis can also be used to establish trends for event-driven queries, i.e. how query sensitivity differs between locations and over a series of data sets. In this dissertation, we present an approach that applies these interdependent measures to help scientists better understand their data sets. An integrated system containing all of the above tools and system parts is presented.
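    The abstract's notion of an "event" as a spatially and temporally limited region with characteristic properties in value space can be illustrated with a minimal sketch; this is not the dissertation's system, and the function and predicate names are illustrative only:

```python
def find_events(frames, predicate):
    """Scan a time-varying 2-D grid (a list of frames, each a list of rows)
    and return (t, x, y, value) tuples wherever the value-space predicate
    holds -- a toy stand-in for querying 'events' directly rather than
    through a simplistic query language."""
    hits = []
    for t, frame in enumerate(frames):
        for y, row in enumerate(frame):
            for x, v in enumerate(row):
                if predicate(v):
                    hits.append((t, x, y, v))
    return hits

# Example: cells whose value exceeds a threshold in a tiny 2-frame series.
frames = [[[0.1, 0.95]], [[0.2, 0.3]]]
events = find_events(frames, lambda v: v > 0.9)  # [(0, 1, 0, 0.95)]
```

    A real event query over terascale grids would of course replace the brute-force scan with the indexed data structures the dissertation argues for; the sketch only shows the query-by-predicate idea.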

    AN EXTENDABLE VISUALIZATION AND USER INTERFACE DESIGN FOR TIME-VARYING MULTIVARIATE GEOSCIENCE DATA

    Geoscience data has unique and complex data structures, and its visualization has been challenging due to a lack of effective data models and visual representations to tackle its heterogeneity. In today’s big data era, the need to visualize geoscience data has become urgent, driven especially by its potential value to human societies, for example in environmental disaster prediction and urban growth simulation. In this thesis, I created a novel geoscience data visualization framework and applied interface automata theory to geoscience data visualization tasks. The framework can support heterogeneous geoscience data and facilitate data operations. The interface automata can generate a series of interactions that guide users efficiently, providing an intuitive method for visualizing and analyzing geoscience data. Besides clearly guiding users to a specific visualization, interface automata also enhance the user experience by eliminating automation surprises, and the maintenance overhead is reduced as well. The new framework was applied to INSIGHT, a scientific hydrology visualization and analysis system developed by the Nebraska Department of Natural Resources (NDNR). Compared to the existing INSIGHT solution, the new framework brings many advantages absent from the existing solution, demonstrating that the framework is efficient and extendable for visualizing geoscience data. Adviser: Hongfeng Y
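    The core idea of driving user interactions with an interface automaton can be sketched as a transition table over (state, action) pairs; this is a generic illustration, not the thesis's implementation, and all state and action names are made up:

```python
def run_interface_automaton(transitions, start, actions):
    """Drive a UI with an interface automaton: a table of
    (state, action) -> next_state determines which visualization view
    each user action may lead to, so the interaction path is always
    predictable (no 'automation surprises')."""
    state = start
    for action in actions:
        if (state, action) not in transitions:
            raise ValueError(f"action {action!r} not allowed in state {state!r}")
        state = transitions[(state, action)]
    return state

# Hypothetical views for a hydrology visualization session.
transitions = {
    ("overview", "select_layer"): "layer_view",
    ("layer_view", "pick_time"): "time_series",
}
final = run_interface_automaton(transitions, "overview",
                                ["select_layer", "pick_time"])  # "time_series"
```

    Because illegal actions raise immediately rather than silently doing something unexpected, the automaton doubles as documentation of every reachable interaction path, which is also where the reduced maintenance overhead comes from.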

    Efficient Large-scale Distance-Based Join Queries in SpatialHadoop

    Efficient processing of Distance-Based Join Queries (DBJQs) in spatial databases is of paramount importance in many application domains. The most representative and well-known DBJQs are the K Closest Pairs Query (KCPQ) and the ε Distance Join Query (εDJQ). These types of join queries are characterized by a number of desired pairs (K) or a distance threshold (ε) between the components of the pairs in the final result, over two spatial datasets. Both are expensive operations, since two spatial datasets are combined under additional constraints. Given the increasing volume of spatial data originating from multiple sources and stored on distributed servers, it is not always efficient to perform DBJQs on a centralized server. For this reason, this paper addresses the problem of computing DBJQs on big spatial datasets in SpatialHadoop, an extension of Hadoop that supports efficient processing of spatial queries in a cloud-based setting. We propose novel algorithms, based on plane-sweep, to perform efficient parallel DBJQs on large-scale spatial datasets in SpatialHadoop. We evaluate the performance of the proposed algorithms in several situations with large real-world as well as synthetic datasets. The experiments demonstrate the efficiency and scalability of our proposed methodologies.
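    The two query types can be made concrete with a small single-node sketch; this is only an illustration of KCPQ and εDJQ semantics with a basic plane-sweep prune, not the paper's parallel SpatialHadoop algorithms:

```python
import heapq

def kcpq_plane_sweep(points_p, points_q, k):
    """K Closest Pairs: sort both 2-D point sets by x; for each p, stop
    scanning Q (also x-sorted) once the x-gap alone already exceeds the
    current K-th best distance -- a basic plane-sweep prune."""
    points_p = sorted(points_p)
    points_q = sorted(points_q)
    heap = []  # max-heap via negated squared distances: best K pairs so far
    for px, py in points_p:
        for qx, qy in points_q:
            dx = qx - px
            # Points further right can only be further away: safe to break.
            if len(heap) == k and dx > 0 and dx * dx > -heap[0][0]:
                break
            d2 = dx * dx + (py - qy) ** 2
            if len(heap) < k:
                heapq.heappush(heap, (-d2, (px, py), (qx, qy)))
            elif d2 < -heap[0][0]:
                heapq.heapreplace(heap, (-d2, (px, py), (qx, qy)))
    return sorted((-nd, p, q) for nd, p, q in heap)

def edjq(points_p, points_q, eps):
    """ε Distance Join: every pair within distance eps (brute force)."""
    e2 = eps * eps
    return [(p, q) for p in points_p for q in points_q
            if (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= e2]
```

    In the paper's setting, each mapper would run such a plane-sweep over one pair of partitions and a reducer would merge the candidate pairs; the sketch only shows the per-partition kernel.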

    Explorative coastal oceanographic visual analytics : oceans of data

    The widely acknowledged challenge to data analysis and understanding, resulting from the exponential increase in volumes of data generated by increasingly complex modelling and sampling systems, is a problem experienced by many researchers, including ocean scientists. The thesis explores a visualization and visual analytics solution for predictive studies of modelled coastal shelf and estuarine hydrodynamics, undertaken to understand sea level rise as a contribution to wider climate change studies, and to underpin coastal zone planning, flood prevention, and extreme event management. These studies are complex and require numerous simulations of estuarine hydrodynamics, generating extremely large datasets of multi-field data. This type of data is acknowledged as difficult to visualize and analyse, as its numerous attributes present significant computational challenges and ideally require a wide range of approaches to provide the necessary insight. These challenges are not easily overcome with the current visualization and analysis methodologies employed by coastal shelf hydrodynamic researchers, who use several software systems to generate graphs, each taking considerable time to operate; it is therefore difficult to explore different scenarios and to examine the data interactively and visually. The thesis consequently develops novel visualization and visual analytics techniques to help researchers overcome the limitations of existing methods (for example, in understanding key tidal components), analyse data in a timely manner, and explore different scenarios. There were a number of challenges to this: the size of the data, resulting in lengthy computing time, and many data values being plotted on one pixel (overplotting).
    The thesis presents: (1) a new visualization framework (VINCA) that uses caching and hierarchical aggregation techniques to make the data more interactive, plus explorative, coordinated multiple views to enable scientists to explore the data. (2) A novel estuarine transect profiler and flux tool, which provides instantaneous flux calculations across an estuary. Measures of flux are of great significance in oceanographic studies, yet are notoriously difficult and time-consuming to calculate with the commonly used tools; this derived data is added back into the database for further investigation and analysis. (3) New views, including a novel, dynamic, spatially aggregated Parallel Coordinate Plot (Sa-PCP), developed to provide different perspectives of the spatial, time-dependent data, along with methodologies for producing high-quality (journal-ready) output from the visualization tool. Finally, (4) the dissertation explores the use of hierarchical data structures and caching techniques to enable fast analysis on a desktop computer and to overcome the overplotting challenge for this data.
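    The instantaneous flux computed by a transect tool of this kind is, in essence, a discrete surface integral: the velocity component normal to the transect in each cell times that cell's cross-sectional area, summed over the transect. A minimal sketch (not the VINCA tool's code; the function name and units are illustrative):

```python
def transect_flux(normal_velocities, cell_areas):
    """Approximate instantaneous volume flux through an estuary transect:
    sum over transect cells of (velocity normal to the transect, m/s)
    x (cell cross-sectional area, m^2), giving m^3/s. Sign convention:
    positive velocities flow one way through the transect, negative
    the other, so opposing flows partially cancel."""
    if len(normal_velocities) != len(cell_areas):
        raise ValueError("expected one normal velocity per transect cell")
    return sum(v * a for v, a in zip(normal_velocities, cell_areas))

# Two-cell toy transect: 1 m/s through 10 m^2 plus 2 m/s through 5 m^2.
q = transect_flux([1.0, 2.0], [10.0, 5.0])  # 20.0 m^3/s
```

    Doing this sum on the fly for every timestep of a large simulation is what makes the hierarchical aggregation and caching in the framework worthwhile.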

    Visualizing a Field of Research: A Methodology of Systematic Scientometric Reviews

    Systematic scientometric reviews, empowered by scientometric and visual analytic techniques, offer opportunities to improve the timeliness, accessibility, and reproducibility of conventional systematic reviews. While increasingly accessible science mapping tools enable end users to visualize the structure and dynamics of a research field, a common bottleneck in current practice is the construction of a collection of scholarly publications as the input to the subsequent scientometric analysis and visualization. End users often face a dilemma in the preparation process: the more they know about a knowledge domain, the easier it is for them to find the relevant data to meet their needs adequately; the less they know, the harder the problem becomes. What can we do to avoid missing something valuable but beyond our initial description? In this article, we introduce a flexible and generic methodology, cascading citation expansion, to increase the quality of constructing a bibliographic dataset for systematic reviews. Furthermore, the methodology simplifies the conceptualization of globalism and localism in science mapping and unifies them on a consistent and continuous spectrum. We demonstrate an application of the methodology to the research of literature-based discovery and compare five datasets constructed from three use scenarios, namely a conventional keyword-based search (one dataset), an expansion process starting with a groundbreaking article of the knowledge domain (two datasets), and an expansion process starting with a recently published review article by a prominent expert in the domain (two datasets). The unique coverage of each dataset is inspected through network visualization overlays with reference to the other datasets in a broad and integrated context.
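    An expansion process of the kind described, growing a dataset outward from seed articles along citation links in fixed rounds, can be sketched as a bounded breadth-first traversal. This is a generic illustration of the idea, not the authors' methodology in detail; `get_links` stands in for whatever citation index is queried:

```python
def cascading_expansion(seed_ids, get_links, rounds=2):
    """Cascading citation expansion, sketched as bounded BFS: starting
    from seed article IDs, repeatedly add articles reachable via
    citation links (get_links(article_id) -> iterable of linked IDs)
    for a fixed number of rounds. Few rounds keep the dataset 'local'
    to the seeds; more rounds push it toward a 'global' map."""
    collected = set(seed_ids)
    frontier = set(seed_ids)
    for _ in range(rounds):
        next_frontier = set()
        for article in frontier:
            for linked in get_links(article):
                if linked not in collected:
                    collected.add(linked)
                    next_frontier.add(linked)
        frontier = next_frontier
    return collected

# Toy citation graph: A cites B and C; B cites D.
graph = {"A": ["B", "C"], "B": ["D"]}
dataset = cascading_expansion(["A"], lambda a: graph.get(a, []))
# Two rounds from seed A collect {"A", "B", "C", "D"}.
```

    The number of rounds is what places a dataset on the localism-globalism spectrum the article describes: it is the only knob in this sketch, which is exactly why the methodology unifies the two views.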