
    What May Visualization Processes Optimize?

    In this paper, we present an abstract model of visualization and inference processes and describe an information-theoretic measure for optimizing such processes. To obtain such an abstraction, we first examined six classes of workflows in data analysis and visualization and identified four levels of typical visualization components, namely disseminative, observational, analytical and model-developmental visualization. We noticed a common phenomenon at different levels of visualization: the transformation of data spaces (referred to as alphabets) usually corresponds to a reduction of maximal entropy along a workflow. Based on this observation, we establish an information-theoretic measure of cost-benefit ratio that may be used as a cost function for optimizing a data visualization process. To demonstrate the validity of this measure, we examined a number of successful visualization processes in the literature and showed that the information-theoretic measure can mathematically explain the advantages of such processes over possible alternatives.
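    As a rough illustration of the idea, the sketch below computes the reduction of maximal entropy between successive alphabets in a workflow and divides it by a per-step cost. The function names, the example alphabet sizes and costs, and the simplified ratio (which omits the potential-distortion term discussed in the paper) are assumptions for illustration, not the paper's exact formulation.

```python
import math

def maximal_entropy(alphabet_size: int) -> float:
    """Maximal Shannon entropy (in bits) of an alphabet with the given size."""
    return math.log2(alphabet_size) if alphabet_size > 0 else 0.0

def cost_benefit(alphabet_sizes, costs):
    """Illustrative cost-benefit ratios for a workflow of alphabet transformations.

    alphabet_sizes: sizes of the data spaces (alphabets) along the workflow.
    costs: processing cost attributed to each transformation step.
    The benefit of a step is taken here as the reduction of maximal entropy;
    the measure in the paper also accounts for potential distortion.
    """
    ratios = []
    for i in range(len(alphabet_sizes) - 1):
        benefit = maximal_entropy(alphabet_sizes[i]) - maximal_entropy(alphabet_sizes[i + 1])
        ratios.append(benefit / costs[i])
    return ratios

# Hypothetical workflow: raw data -> derived statistics -> rendered overview -> decision.
print(cost_benefit([2**20, 2**10, 2**6, 4], costs=[5.0, 2.0, 1.0]))
```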

    Interactive visualization of heterogeneous social networks using glyphs

    There is a growing need for visualizing heterogeneous social networks as new data sets become available. However, existing visualization tools do not address the challenge of reading topological information introduced by heterogeneous node and link types. To resolve this issue, we introduce glyphs into node-link diagrams to conveniently represent the multivariate nature of heterogeneous node and link types. This provides the opportunity to visually reorganize the topological information of heterogeneous social networks without losing connectivity information. Moreover, a set of interaction techniques is provided to give the analyst full control over the reorganization process. Finally, a case study using the InfoVis 2008 data set is presented to show the exploration process.
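    A minimal sketch of the kind of type-to-glyph mapping the abstract describes, assuming hypothetical node and link types; the actual glyph design and the attributes of the InfoVis 2008 data set are described in the paper and not reproduced here.

```python
from dataclasses import dataclass

# Hypothetical node/link types and glyph encodings for illustration only.
NODE_GLYPHS = {"author": "circle", "paper": "square", "conference": "triangle"}
LINK_STYLES = {"writes": "solid", "cites": "dashed", "appears_in": "dotted"}

@dataclass
class Node:
    node_id: str
    node_type: str      # one of NODE_GLYPHS
    attributes: dict    # multivariate attributes shown inside the glyph

@dataclass
class Link:
    source: str
    target: str
    link_type: str      # one of LINK_STYLES

def glyph_for(node: Node) -> dict:
    """Resolve the visual encoding of a node without discarding its attributes."""
    return {"shape": NODE_GLYPHS[node.node_type], "fields": sorted(node.attributes)}

nodes = [Node("a1", "author", {"papers": 12}), Node("p7", "paper", {"year": 2008})]
links = [Link("a1", "p7", "writes")]
for n in nodes:
    print(n.node_id, glyph_for(n))
```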

    Vortex Characterization for Engineering Applications

    Realistic engineering simulation data often have features that are not optimally resolved due to practical limitations on mesh resolution. To be useful to application engineers, vortex characterization techniques must be sufficiently robust to handle realistic data with complex vortex topologies. In this paper, we present enhancements to the vortex topology identification component of an existing vortex characterization algorithm. The modified techniques are demonstrated by application to three realistic data sets that illustrate the strengths and weaknesses of our approach.
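    The abstract does not name the identification criterion, so the sketch below uses the standard Q-criterion on a uniformly sampled 2D velocity field as a generic stand-in for a vortex-region detector; it is not the authors' algorithm, and the grid spacing and analytic example are assumptions.

```python
import numpy as np

def q_criterion_2d(u, v, dx=1.0, dy=1.0):
    """Q-criterion on a uniformly sampled 2D velocity field (u, v).

    Q = 0.5 * (||Omega||^2 - ||S||^2); regions with Q > 0 are rotation-dominated
    and are commonly used as candidate vortex regions.
    """
    dudy, dudx = np.gradient(u, dy, dx)
    dvdy, dvdx = np.gradient(v, dy, dx)
    # Symmetric (strain) and antisymmetric (rotation) parts of the velocity gradient.
    s_norm2 = dudx**2 + dvdy**2 + 0.5 * (dudy + dvdx) ** 2
    omega_norm2 = 0.5 * (dvdx - dudy) ** 2
    return 0.5 * (omega_norm2 - s_norm2)

# Example: a single analytic vortex (solid-body rotation) over the whole domain.
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
u, v = -y, x
q = q_criterion_2d(u, v, dx=2 / 63, dy=2 / 63)
print("rotation-dominated cells:", int((q > 0).sum()))
```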

    Interactive Fusion and Tracking For Multi‐Modal Spatial Data Visualization

    Scientific data acquired through sensors that monitor natural phenomena, as well as simulation data that imitate time-identified events, have fueled the need for interactive techniques to successfully analyze and understand trends and patterns across space and time. We present a novel interactive visualization technique that fuses ground-truth measurements with simulation results in real time to support the continuous tracking and analysis of spatiotemporal patterns. We start by constructing a reference model that densely represents the expected temporal behavior, and then use GPU parallelism to advect measurements on the model and track their location at any given point in time. Our results show that users can interactively fill the spatiotemporal gaps in real-world observations and generate animations that accurately describe physical phenomena.
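    A minimal CPU sketch of the advection step described above, assuming an illustrative analytic velocity field in place of the reference model and explicit Euler integration; the paper performs this per-measurement tracking with GPU parallelism.

```python
import numpy as np

def velocity(points, t):
    """Illustrative time-varying 2D velocity field standing in for the reference model."""
    x, y = points[:, 0], points[:, 1]
    return np.stack([-y + 0.1 * np.sin(t), x + 0.1 * np.cos(t)], axis=1)

def advect(points, t0, t1, dt=0.01):
    """Advect measurement positions from t0 to t1 with explicit Euler steps."""
    points = points.copy()
    t = t0
    while t < t1:
        step = min(dt, t1 - t)
        points += step * velocity(points, t)
        t += step
    return points

measurements = np.array([[0.5, 0.0], [0.0, 0.8]])   # ground-truth sample positions
print(advect(measurements, t0=0.0, t1=1.0))
```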

    Visualização de informação (Information Visualization)

    The theme of this report is information visualization, currently a very active and vital area in teaching, research, and technological development. The basic idea is to use computer-generated images as a means to gain greater comprehension of the information present in the data (geometry) and its relationships (topology); a simple yet powerful concept that has had a strong impact on many areas of engineering and science. The report is divided into two parts. The first part addresses the visualization problem with respect to the subtle correlation between the techniques (and their metaphors), the user, and the data. The second part analyzes several applications, projects, tools, and information visualization systems. To categorize them, the basic data types underlying them are considered: one-dimensional, two-dimensional, three-dimensional, multi-dimensional, temporal, hierarchical, network, and workspace.
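    A small sketch of the categorization the report uses, with hypothetical example data sets assigned to the basic data types listed above.

```python
from enum import Enum

class DataType(Enum):
    """Basic data types used in the report to categorize visualization systems."""
    ONE_DIMENSIONAL = "1D"
    TWO_DIMENSIONAL = "2D"
    THREE_DIMENSIONAL = "3D"
    MULTI_DIMENSIONAL = "multi-dimensional"
    TEMPORAL = "temporal"
    HIERARCHICAL = "hierarchical"
    NETWORK = "network"
    WORKSPACE = "workspace"

# Hypothetical examples of how concrete data sets could be categorized.
examples = {
    "source code listing": DataType.ONE_DIMENSIONAL,
    "census table with many attributes": DataType.MULTI_DIMENSIONAL,
    "file system": DataType.HIERARCHICAL,
    "co-authorship graph": DataType.NETWORK,
}
for name, kind in examples.items():
    print(f"{name}: {kind.value}")
```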

    Feature-Based Uncertainty Visualization

    While uncertainty in scientific data attracts increasing research interest in the visualization community, two critical issues remain insufficiently studied: (1) visualizing the impact of the uncertainty of a data set on its features and (2) interactively exploring 3D or large 2D data sets with uncertainties. In this study, a suite of feature-based techniques is developed to address these issues. First, a framework for feature-level uncertainty visualization is presented to study the uncertainty of features in scalar and vector data. The uncertainty in the number and locations of features, such as the sinks or sources of vector fields, is referred to as feature-level uncertainty, while the uncertainty in the numerical values of the data is referred to as data-level uncertainty. The features of different ensemble members are identified and correlated, and the feature-level uncertainties are expressed as transitions between corresponding features through new elliptical glyphs. Second, an interactive visualization tool is developed for exploring scalar data with data-level uncertainty and two types of feature-level uncertainty: contour-level and topology-level uncertainty. To avoid visual clutter and occlusion, the uncertainty information is attached to a contour tree instead of being integrated with the visualization of the data. An efficient contour tree-based interface is designed to reduce users' workload in viewing and analyzing complicated data with uncertainties and to facilitate a quick and accurate selection of prominent contours. This thesis advances current uncertainty studies with an in-depth investigation of feature-level uncertainties and an exploration of topology tools for effective and interactive uncertainty visualization. With quantified representation and interactive capability, feature-based visualization helps people gain new insights into the uncertainties of their data, especially the uncertainties of extracted features, which would otherwise remain unknown if only data-level uncertainties were visualized.
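    One plausible way to derive parameters for an elliptical glyph from feature-level uncertainty, assuming features have already been matched across ensemble members: take the mean and covariance of the matched positions and use the covariance eigenstructure as the ellipse axes. This is a sketch under those assumptions, not necessarily the glyph construction used in the thesis.

```python
import numpy as np

def ellipse_glyph(feature_positions):
    """Summarize the positional uncertainty of one matched feature across an ensemble.

    feature_positions: (n_members, 2) array with the feature's (x, y) location in
    each ensemble member. Returns the centre, semi-axis lengths, and orientation of
    a one-standard-deviation ellipse that could parameterize an elliptical glyph.
    """
    positions = np.asarray(feature_positions, dtype=float)
    centre = positions.mean(axis=0)
    cov = np.cov(positions, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    semi_axes = np.sqrt(np.maximum(eigvals, 0.0))
    angle = np.degrees(np.arctan2(eigvecs[1, -1], eigvecs[0, -1]))
    return centre, semi_axes, angle

# Example: a sink detected at slightly different locations in five ensemble members.
members = [[0.50, 0.20], [0.52, 0.22], [0.47, 0.18], [0.51, 0.24], [0.49, 0.21]]
print(ellipse_glyph(members))
```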

    Visualization and Analysis of Flow Fields based on Clifford Convolution

    Vector fields from flow visualization often contain millions of data values, so direct inspection of the data by the user is tedious. An automated approach for the preselection of features is therefore essential for a complete analysis of nontrivial flow fields. This thesis deals with the automated detection, analysis, and visualization of flow features in vector fields based on techniques transferred from image processing. The work builds on rotation-invariant template matching with Clifford convolution as developed in the author's diploma thesis. A detailed analysis of the possibilities of this approach is carried out, and further techniques and algorithms, up to a complete segmentation of vector fields, are developed in the process. One of the major contributions is the definition of a Clifford Fourier transform in 2D and 3D and the proof of a corresponding convolution theorem for the Clifford convolution, as well as other major theorems. This Clifford Fourier transform allows a frequency analysis of vector fields and of the behavior of vector-valued filters, as well as an acceleration of the convolution computation, since a fast transform exists. The depth and precision of flow field analysis based on template matching and Clifford convolution are studied in detail for a specific application: flow fields measured in the wake of a helicopter rotor. Determining the features and their parameters in these data is an important step towards a better understanding of the observed flow. Specific techniques dealing with subpixel accuracy and with the parameters to be determined are developed along the way. Regarding the flow as a superposition of simpler features is a necessity for this application, as nearby vortices influence each other; convolution is a linear operation, so it is suited to this kind of analysis. The suitability of other flow analysis and visualization methods for this task is studied as well. The knowledge and techniques developed in this work are finally brought together to compute and visualize feature-based segmentations of flow fields. The resulting visualizations display important structures of the flow and highlight the interesting features, taking a major step towards robust and automatic detection, analysis, and visualization of flow fields.
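    A simplified sketch of vector-valued template matching, assuming plain per-component cross-correlation (via scipy) in place of the Clifford convolution developed in the thesis; the Clifford formulation additionally encodes how the template is rotated relative to the field, which this stand-in does not. The analytic field and template are assumptions for illustration.

```python
import numpy as np
from scipy.signal import correlate2d

def match_template(field_u, field_v, tmpl_u, tmpl_v):
    """Locate a small vector-valued template in a 2D vector field.

    Correlates the u and v components separately and sums the responses; the
    location of the maximum response is the best (rotation-sensitive) match.
    """
    response = (correlate2d(field_u, tmpl_u, mode="same") +
                correlate2d(field_v, tmpl_v, mode="same"))
    return np.unravel_index(np.argmax(response), response.shape), response

# Example: embed a small rotational pattern in an otherwise near-uniform field.
x, y = np.meshgrid(np.linspace(-1, 1, 9), np.linspace(-1, 1, 9))
tmpl_u, tmpl_v = -y, x                              # vortex-like template
field_u = np.ones((64, 64)) * 0.1
field_v = np.zeros((64, 64))
field_u[20:29, 30:39] += tmpl_u
field_v[20:29, 30:39] += tmpl_v
print(match_template(field_u, field_v, tmpl_u, tmpl_v)[0])
```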

    Doctor of Philosophy

    A broad range of applications capture dynamic data at an unprecedented scale. Independent of the application area, finding intuitive ways to understand the dynamic aspects of these increasingly large data sets remains an interesting and, to some extent, unsolved research problem. Generically, dynamic data sets can be described by some, often hierarchical, notion of feature of interest that exists at each moment in time, and those features evolve across time. Consequently, exploring the evolution of these features is considered a natural way of studying such data sets. Usually, this process entails the ability to: 1) define and extract features from each time step in the data set; 2) find their correspondences over time; and 3) analyze their evolution across time. However, due to the large data sizes, visualizing the evolution of features in a comprehensible manner and performing interactive changes are challenging. Furthermore, feature evolution details are often unmanageably large and complex, making it difficult to identify the temporal trends in the underlying data. Additionally, many existing approaches develop these components in a specialized and standalone manner, thus failing to address the general task of understanding feature evolution across time. This dissertation demonstrates that interactive exploration of feature evolution can be achieved in a non-domain-specific manner so that it can be applied across a wide variety of application domains. In particular, it introduces a novel generic visualization and analysis environment that couples a multiresolution, unified spatiotemporal representation of features with progressive layout and visualization strategies for studying feature evolution across time. This flexible framework enables on-the-fly changes to feature definitions, their correspondences, and other arbitrary attributes while providing an interactive view of the resulting feature evolution details. Furthermore, to reduce the visual complexity within the feature evolution details, several subselection-based and localized, per-feature parameter value-based strategies are also enabled. The utility and generality of this framework are demonstrated on several large-scale dynamic data sets.
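    A minimal sketch of step 2 above (finding feature correspondences over time), assuming features are given as integer label fields per time step and are matched by spatial overlap; the threshold, labeling, and example data are illustrative and not the framework's actual implementation.

```python
import numpy as np

def correspondences(labels_t0, labels_t1, min_overlap=1):
    """Match labeled features of two consecutive time steps by spatial overlap.

    labels_t0, labels_t1: integer label fields (0 = background) of the same shape.
    Returns (feature_at_t0, feature_at_t1, overlap_in_cells) tuples, which could
    feed the kind of tracking graph used to study feature evolution.
    """
    matches = []
    for a in np.unique(labels_t0):
        if a == 0:
            continue
        mask_a = labels_t0 == a
        for b in np.unique(labels_t1[mask_a]):
            if b == 0:
                continue
            overlap = int(np.count_nonzero(mask_a & (labels_t1 == b)))
            if overlap >= min_overlap:
                matches.append((int(a), int(b), overlap))
    return matches

# Example: a feature that splits into two between time steps.
t0 = np.zeros((6, 6), dtype=int)
t1 = np.zeros((6, 6), dtype=int)
t0[1:5, 1:5] = 1
t1[1:5, 1:2] = 1
t1[1:5, 3:5] = 2
print(correspondences(t0, t1))
```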