330 research outputs found

    Feature-Based Uncertainty Visualization

    While uncertainty in scientific data attracts increasing research interest in the visualization community, two critical issues remain insufficiently studied: (1) visualizing the impact of a data set's uncertainty on its features and (2) interactively exploring 3D or large 2D data sets with uncertainties. In this study, a suite of feature-based techniques is developed to address these issues. First, a framework for feature-level uncertainty visualization is presented to study the uncertainty of features in scalar and vector data. The uncertainty in the number and locations of features, such as the sinks or sources of vector fields, is referred to as feature-level uncertainty, while the uncertainty in the numerical values of the data is referred to as data-level uncertainty. The features of different ensemble members are identified and correlated, and the feature-level uncertainties are expressed as transitions between corresponding features through new elliptical glyphs. Second, an interactive visualization tool is developed for exploring scalar data with data-level uncertainty and two types of feature-level uncertainty: contour-level and topology-level. To avoid visual clutter and occlusion, the uncertainty information is attached to a contour tree instead of being integrated into the visualization of the data. An efficient contour tree-based interface is designed to reduce users' workload in viewing and analyzing complicated data with uncertainties and to facilitate quick, accurate selection of prominent contours. This thesis advances current uncertainty studies with an in-depth investigation of feature-level uncertainties and an exploration of topology tools for effective, interactive uncertainty visualization. With quantified representation and interactive capability, feature-based visualization helps people gain new insights into the uncertainties of their data, especially the uncertainties of extracted features, which would otherwise remain unknown if only data-level uncertainties were visualized.
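    As a toy illustration of the feature-level idea (not the thesis's actual algorithm), the sketch below detects the critical point of each member of a small hypothetical ensemble of 2D vector fields and summarizes the spread of its location across members; the field, grid, and ensemble are all invented for the example:

```python
import numpy as np

def critical_points(u, v, xs, ys):
    """Coarse critical-point detection: flag grid cells where both
    velocity components change sign (a zero crossing inside the cell)."""
    pts = []
    for i in range(u.shape[0] - 1):
        for j in range(u.shape[1] - 1):
            cu, cv = u[i:i+2, j:j+2], v[i:i+2, j:j+2]
            if cu.min() < 0 < cu.max() and cv.min() < 0 < cv.max():
                # report the cell centre as the approximate feature location
                pts.append(((xs[j] + xs[j+1]) / 2, (ys[i] + ys[i+1]) / 2))
    return pts

def feature_uncertainty(ensemble, xs, ys):
    """Location of the (single) detected feature per member, plus the
    spread of those locations: a toy feature-level uncertainty measure."""
    locs = np.array([critical_points(u, v, xs, ys)[0] for u, v in ensemble])
    return locs.mean(axis=0), locs.std(axis=0)

# Hypothetical ensemble: a source whose centre jitters between members.
xs = ys = np.linspace(-1.0, 1.0, 21)
X, Y = np.meshgrid(xs, ys)
centres = [(0.03, -0.02), (0.07, 0.01), (-0.04, 0.05)]
ensemble = [(X - cx, Y - cy) for cx, cy in centres]
mean_loc, spread = feature_uncertainty(ensemble, xs, ys)
```

    The spread of detected locations is exactly the kind of quantity an elliptical glyph could encode: the mean gives the glyph centre, the per-axis standard deviations its radii.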

    An Empirical Evaluation of Visual Cues for 3D Flow Field Perception

    Three-dimensional vector fields are common datasets throughout the sciences. They often represent physical phenomena that are largely invisible to us in the real world, such as wind patterns and ocean currents. Computer-aided visualization is a powerful tool that can represent data in any way we choose through digital graphics. Visualizing 3D vector fields is inherently difficult due to issues such as visual clutter, self-occlusion, and the difficulty of providing depth cues that adequately support the perception of flow direction in 3D space. Cutting planes are often used to overcome these issues by presenting slices of the data that are more cognitively manageable. The existing literature provides many techniques for visualizing the flow through these cutting planes; however, there is a lack of empirical studies focused on the underlying perceptual cues that make popular techniques successful. The most valuable depth cue for the perception of other kinds of 3D data, notably 3D networks and 3D point clouds, is structure-from-motion (also called the kinetic depth effect); another powerful depth cue is stereoscopic viewing, but neither cue has been fully examined in the context of flow visualization. This dissertation presents a series of quantitative human-factors studies that evaluate depth and direction cues in the context of cutting-plane glyph designs for exploring and analyzing 3D flow fields. The results of the studies are distilled into a set of design guidelines to improve the effectiveness of 3D flow field visualizations, and those guidelines are implemented in an immersive, interactive 3D flow visualization proof-of-concept application.
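    A minimal sketch of the cutting-plane idea, assuming a hypothetical helical flow field (none of the names below come from the dissertation): sample the flow on a grid over the plane, then split each sampled vector into its in-plane components, which drive the 2D glyph direction, and its out-of-plane component, which is often mapped to colour or glyph shape:

```python
import numpy as np

def flow(p):
    """Toy 3D flow: a helix around the z axis with constant upward drift."""
    x, y, z = p
    return np.array([-y, x, 0.5])

def plane_glyphs(origin, u_axis, v_axis, n=5, extent=1.0):
    """Sample the flow on an n-by-n grid over a cutting plane and split
    each vector into in-plane components (glyph direction on the plane)
    and the out-of-plane component (e.g. for colour coding)."""
    normal = np.cross(u_axis, v_axis)
    normal = normal / np.linalg.norm(normal)
    glyphs = []
    for s in np.linspace(-extent, extent, n):
        for t in np.linspace(-extent, extent, n):
            p = origin + s * u_axis + t * v_axis
            vec = flow(p)
            in_plane = (float(np.dot(vec, u_axis)), float(np.dot(vec, v_axis)))
            out_of_plane = float(np.dot(vec, normal))
            glyphs.append((tuple(p), in_plane, out_of_plane))
    return glyphs

# The z = 0 cutting plane, spanned by the x and y axes.
glyphs = plane_glyphs(np.zeros(3), np.array([1.0, 0, 0]), np.array([0, 1.0, 0]))
```

    For this plane the out-of-plane component is the constant drift 0.5, so every glyph would share one colour while the in-plane arrows rotate around the origin.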

    Visualization for the Physical Sciences


    Gaussian Processes for Uncertainty Visualization

    Data is virtually always uncertain in one way or another. Yet uncertainty information is not routinely included in visualizations and, outside of simple 1D diagrams, there is no established way to do so. One major issue is finding a method that shows the uncertainty without completely cluttering the display. A second important question is how uncertainty and interpolation interact. Interpolated values are inherently uncertain because they are heuristic estimates, not measurements. But how much more uncertain are they, and how can this effect be modeled? In this thesis, we introduce Gaussian processes, a statistical framework that allows for the smooth interpolation of data with heteroscedastic uncertainty through regression. Its theoretical foundation makes it a convincing method for analyzing uncertain data and for building a model of the underlying phenomenon and, most importantly, of the uncertainty at and between the data points. For this reason, it is already popular in the GIS community, where it is known as Kriging, and it has applications in machine learning as well. In contrast to traditional interpolation methods, Gaussian processes do not merely create a surface that runs through the data points; they respect the uncertainty in those points, so noise, errors, or outliers do not distort the model inappropriately. Most importantly, the model shows the variance of the interpolated values, which can be higher but also lower than that of the neighboring data points, providing far more insight into the quality of the data and how it influences the uncertainty. This enables uncertainty information to be used in algorithms that interpolate between data points, which includes almost all visualization algorithms.
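    The core computation can be sketched with the standard GP regression equations; per-point noise variances on the diagonal of the covariance matrix give the heteroscedastic behaviour described above. The kernel choice and data here are illustrative, not taken from the thesis:

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential covariance between two sets of 1D inputs."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, noise_var, x_query, length=1.0, var=1.0):
    """GP regression with per-point (heteroscedastic) noise variances.
    Returns the posterior mean and variance at the query points."""
    K = rbf(x_train, x_train, length, var) + np.diag(noise_var)
    Ks = rbf(x_train, x_query, length, var)
    Kss = rbf(x_query, x_query, length, var)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    cov = Kss - v.T @ v
    return mean, np.diag(cov)

x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 0.0])
noise = np.array([0.01, 0.5, 0.01])   # the middle sample is much noisier
xq = np.array([0.0, 1.0])
mean, var_q = gp_posterior(x, y, noise, xq)
```

    Querying at the two training locations shows the effect: the posterior variance stays large where the data point itself was noisy, instead of collapsing to zero as plain interpolation would suggest.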

    High-dimensional glyph-based visualization and interactive techniques.

    The advancement of modern technology and scientific measurement has led to datasets growing in both size and complexity, exposing the need for more efficient and effective ways of visualizing and analysing data. Despite much progress in visualization methods, high-dimensional data still poses significant challenges, both in the technical ability to realise such a mapping and in how accurately the result is actually interpreted. The diverse data sources and characteristics arising from a wide range of scientific domains, as well as specific design requirements, constantly create new challenges for visualization research. This thesis presents several contributions to the field of glyph-based visualization. Glyphs are parametrised objects that encode one or more data values in their appearance (also referred to as visual channels), such as size, colour, shape, and position. They have been widely used to convey information visually and are especially well suited for displaying complex, multi-faceted datasets. Their major strength is the ability to depict patterns of data in the context of a spatial relationship, where multi-dimensional trends can often be perceived more easily. Our research is set in the broad scope of multi-dimensional visualization, addressing several aspects of glyph-based techniques, including visual design, perception, placement, interaction, and applications. In particular, this thesis presents a comprehensive study of one interaction technique, namely sorting, for supporting various analytical tasks. We have outlined the concepts of glyph-based sorting, identified a set of design criteria for sorting interactions, designed and prototyped a user interface for sorting multivariate glyphs, developed a visual analytics technique to support sorting, conducted an empirical study on the perceptual orderability of visual channels used in glyph design, and applied glyph-based sorting to event visualization in sports applications. The content of this thesis is organised into two parts. Part I provides an overview of the basic concepts of glyph-based visualization before describing the state of the art in this field. We then present a collection of novel glyph-based approaches that address challenges arising from real-world applications; these are detailed in Part II. Our first approach involves designing glyphs to depict the composition of multiple error-sensitivity fields. This work addresses the problem of single-camera positioning, using both 2D and 3D methods to support camera configuration based on various constraints in the context of a real-world environment. Our second approach presents glyphs to visualize actions and events "at a glance". We discuss the relative merits of metaphoric glyphs in comparison to other glyph designs for the particular problem of real-time sports analysis. As a result of this research, we delivered visualization software, MatchPad, on a tablet computer. It successfully helped coaching staff and team analysts examine actions and events in detail whilst maintaining a clear overview of the match, and it assisted their decision making during matches. (Abstract shortened by ProQuest.)
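    The basic interaction behind glyph-based sorting can be sketched in a few lines: each glyph record encodes several data values on visual channels, and sorting reorders the glyphs so that layout position reflects rank on a chosen channel. The records and channel names below are hypothetical, not the thesis's actual data model:

```python
# Hypothetical glyph records: each encodes data values on several
# visual channels (size, hue, orientation in degrees).
glyphs = [
    {"id": "A", "size": 0.8, "hue": 0.2, "orientation": 90},
    {"id": "B", "size": 0.3, "hue": 0.9, "orientation": 10},
    {"id": "C", "size": 0.5, "hue": 0.5, "orientation": 45},
]

def sort_glyphs(glyphs, channel, descending=False):
    """Reorder glyphs by one visual channel: after sorting, a glyph's
    position in the layout encodes its rank on that channel."""
    return sorted(glyphs, key=lambda g: g[channel], reverse=descending)

order = [g["id"] for g in sort_glyphs(glyphs, "size", descending=True)]
# order is ["A", "C", "B"]
```

    Whether users can actually read such a ranking depends on the perceptual orderability of the chosen channel, which is exactly what the empirical study mentioned above investigates.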

    Visuelle Analyse großer Partikeldaten (Visual Analysis of Large Particle Data)

    Get PDF
    Particle simulations are an established and widely used numerical method in research and engineering. For example, particle simulations are employed to study fuel atomization in aircraft turbines, and the formation of the universe is investigated by simulating dark-matter particles. The resulting data volumes are immense: current simulations contain trillions of particles that move over time and interact with one another. Visualization offers great potential for the exploration, validation, and analysis of scientific datasets and their underlying models. However, its focus usually lies on structured data with a regular topology. Particles, in contrast, move freely through space and time; this perspective is known in physics as the Lagrangian frame of reference. Particles can be converted from the Lagrangian into a regular Eulerian frame of reference, such as a uniform grid, but for large numbers of particles this conversion is very costly, and it usually causes a loss of precision while increasing memory consumption. In this dissertation I explore new visualization techniques that operate directly in the Lagrangian view, enabling efficient and effective visual analysis of large particle data.
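    The Lagrangian-to-Eulerian conversion mentioned above can be sketched as a simple binning step (a hypothetical example, not the dissertation's method): particle values falling into the same grid cell are averaged, which is where the precision loss comes from, since all particles in a cell collapse to a single value.

```python
import numpy as np

def to_eulerian(positions, values, bounds, shape):
    """Convert 2D Lagrangian particle samples to an Eulerian uniform grid
    by averaging the particle values per cell (empty cells become NaN)."""
    lo, hi = bounds
    idx = ((positions - lo) / (hi - lo) * np.array(shape)).astype(int)
    idx = np.clip(idx, 0, np.array(shape) - 1)
    sums = np.zeros(shape)
    counts = np.zeros(shape)
    for (i, j), v in zip(idx, values):
        sums[i, j] += v
        counts[i, j] += 1
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

rng = np.random.default_rng(0)
pos = rng.random((1000, 2))     # particles scattered in the unit square
val = pos[:, 0]                 # per-particle value: its x coordinate
grid = to_eulerian(pos, val, (0.0, 1.0), (4, 4))
```

    A thousand particles collapse to sixteen cell averages here, illustrating both the memory trade-off and the smoothing that motivates staying in the Lagrangian view.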

    Supporting Quantitative Visual Analysis in Medicine and Biology in the Presence of Data Uncertainty
