    Information theory assisted data visualization and exploration

    This thesis introduces techniques that use information theory, particularly entropy, to enhance data visualization and exploration. The ultimate goal of this work is to enable users to perceive as much of the available information as possible, so that they can recognize objects, detect regular and irregular patterns, and expend less effort on the required tasks. We believe that the metrics used to enhance computer-generated visualizations should be quantifiable, and that this quantification should measure how much information the user perceives. Information theory, and entropy in particular, is well suited to this problem: entropy quantifies the amount of information in a general communication system, in which an information sender and an information receiver are connected by a channel. We take inspiration from this model and adapt it to visualization: the sender is the data to be visualized, the receiver is the viewer, and the channel is the screen on which the visualized image is displayed. In this thesis we explore the use of entropy in three visualization problems: (1) enhancing the visualization of large-scale social networks for better perception, (2) finding the best representational images of a 3D object so it can be visually inspected with minimal loss of information, and (3) automatic navigation over a 3D terrain with minimal loss of information. Visualization of large-scale social networks is still a major challenge for information visualization researchers. When a thousand nodes are displayed on the screen without coloring, sizing, and filtering mechanisms, users generally perceive little at first glance; they resort to pointing devices or the keyboard to zoom and pan until they find the information they are looking for. This thesis presents a visualization approach that uses coloring, sizing, and filtering to help users recognize the presented information. The second problem we tackle is finding the best representational images of 3D models. This problem is highly subjective in a cognitive sense: "best" or "good" are not tied to any metric or quantification, and the same image presented to two different users may be judged differently. In this thesis we map these notions to concrete metrics for representational images, such as showing the maximum number of faces, the maximum saliency, or a combination of both in a single image. The third problem we address is automatic terrain navigation with minimal loss of information. Here the information to be quantified is the surface visibility of the terrain. We refine the visibility problem with the heuristic that users generally focus on city centers, buildings, and other interesting points during terrain exploration; to increase the amount of information conveyed during navigation, the camera should focus on those areas. We therefore employ road network data and assume that intersections of road segments mark residential areas. For this problem, region extraction using road network data, viewpoint entropy for camera positions, and automatic camera path generation methods are investigated.
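
    To make the entropy framing above concrete, here is a minimal sketch of viewpoint entropy, the kind of measure used for selecting representational images and camera positions: the projected areas of the visible faces, plus the background, are treated as a probability distribution, and its Shannon entropy scores how much of the model a given camera position reveals. The face areas below are made-up illustrative values, not data or code from the thesis.

        import math

        def viewpoint_entropy(projected_areas, background_area):
            """Shannon entropy of the visible-face area distribution for one viewpoint."""
            total = background_area + sum(projected_areas)
            entropy = 0.0
            for area in [background_area] + list(projected_areas):
                if area > 0:
                    p = area / total          # relative projected area as a probability
                    entropy -= p * math.log2(p)
            return entropy

        # Hypothetical comparison: view A shows many faces evenly, view B mostly one face.
        view_a = viewpoint_entropy([120, 110, 95, 100, 90], background_area=485)
        view_b = viewpoint_entropy([600, 20], background_area=380)
        print(f"view A: {view_a:.3f} bits, view B: {view_b:.3f} bits")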

    Towards Data-Driven Large Scale Scientific Visualization and Exploration

    Technological advances have enabled us to acquire extremely large datasets, but it remains a challenge to store, process, and extract information from them. This dissertation builds upon recent advances in machine learning, visualization, and user interactions to facilitate exploration of large-scale scientific datasets. First, we use data-driven approaches to computationally identify regions of interest in the datasets. Second, we use visual presentation for effective user comprehension. Third, we provide interactions for human users to integrate domain knowledge and semantic information into this exploration process. Our research shows how to extract, visualize, and explore informative regions on very large 2D landscape images, 3D volumetric datasets, high-dimensional volumetric mouse brain datasets with thousands of spatially-mapped gene expression profiles, and geospatial trajectories that evolve over time. The contributions of this dissertation include: (1) We introduce a sliding-window saliency model that discovers regions of user interest in very large images; (2) We develop visual segmentation of intensity-gradient histograms to identify meaningful components from volumetric datasets; (3) We extract boundary surfaces from a wealth of volumetric gene expression mouse brain profiles to personalize the reference brain atlas; (4) We show how to efficiently cluster geospatial trajectories by mapping each sequence of locations to a high-dimensional point with the kernel distance framework. We aim to discover patterns, relationships, and anomalies that would lead to new scientific, engineering, and medical advances. This work represents one of the first steps toward better visual understanding of large-scale scientific data by combining machine learning and human intelligence.
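
    As a rough illustration of contribution (4), the sketch below lifts each trajectory (a sequence of 2D locations) to a fixed-length feature vector by averaging a Gaussian kernel against a grid of reference points, so that the Euclidean distance between feature vectors approximates a kernel distance between trajectories and standard clustering can be applied. The grid size, bandwidth, and trajectories are arbitrary illustrative choices, not the dissertation's parameters or code.

        import numpy as np

        def trajectory_features(points, grid, bandwidth=1.0):
            """Map an (n, 2) trajectory to one feature vector over the reference grid."""
            diffs = points[:, None, :] - grid[None, :, :]          # (n, m, 2)
            sq_dists = np.sum(diffs ** 2, axis=-1)                 # (n, m)
            kernel = np.exp(-sq_dists / (2.0 * bandwidth ** 2))    # Gaussian kernel values
            return kernel.mean(axis=0)                             # average over the sequence

        # Hypothetical trajectories and a coarse 5x5 reference grid on [0, 10]^2.
        xs, ys = np.meshgrid(np.linspace(0, 10, 5), np.linspace(0, 10, 5))
        grid = np.column_stack([xs.ravel(), ys.ravel()])
        traj_a = np.column_stack([np.linspace(0, 10, 50), np.linspace(0, 10, 50)])
        traj_b = np.column_stack([np.linspace(0, 10, 50), np.full(50, 2.0)])

        fa, fb = trajectory_features(traj_a, grid), trajectory_features(traj_b, grid)
        print("kernel distance estimate:", np.linalg.norm(fa - fb))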

    Automating Camera Placement for In Situ Visualization

    Trends in high-performance computing increasingly require visualization to be carried out using in situ processing. This processing most often occurs without a human in the loop, meaning that the in situ software must be able to carry out its tasks without human guidance. This dissertation explores this topic, focusing on automating camera placement for in situ visualization when there is no a priori knowledge of where to place the camera. We introduce a new approach for this automation process, which depends on Viewpoint Quality (VQ) metrics that quantify how much insight a camera position provides. This research involves three major sub-projects: (1) performing a user survey to determine the viewpoint preferences of scientific users and developing new VQ metrics that predict those preferences 68% of the time; (2) parallelizing VQ metrics and designing search algorithms so they can be executed efficiently in situ; and (3) evaluating the behavior of camera placement for time-varying data to determine how often a new camera placement should be considered. In all, this dissertation shows that automating in situ camera placement for scientific simulations is possible on exascale computers and provides insight into best practices.
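
    A minimal sketch of the kind of search loop such an in situ system might run is shown below: sample candidate camera positions on a sphere around the data and keep the one that maximizes a pluggable viewpoint-quality metric. Both the Fibonacci sampling and the toy metric are assumptions for illustration; the dissertation's actual VQ metrics and parallel search algorithms are not reproduced here.

        import math

        def fibonacci_sphere(n, radius):
            """Generate n roughly uniform candidate camera positions on a sphere."""
            golden = math.pi * (3.0 - math.sqrt(5.0))
            points = []
            for i in range(n):
                y = 1.0 - 2.0 * (i + 0.5) / n
                r = math.sqrt(1.0 - y * y)
                theta = golden * i
                points.append((radius * r * math.cos(theta), radius * y, radius * r * math.sin(theta)))
            return points

        def best_viewpoint(candidates, vq_metric):
            """Exhaustive search: return the candidate with the highest VQ score."""
            return max(candidates, key=vq_metric)

        # Toy VQ metric: prefer views away from the +y direction (e.g., a known occluder).
        toy_metric = lambda cam: -cam[1]
        print(best_viewpoint(fibonacci_sphere(64, radius=5.0), toy_metric))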

    Entropy guided visualization and analysis of multivariate spatio-temporal data generated by physically based simulation

    Flow fields produced by physically based simulations are a subset of multivariate spatio-temporal data and have long been of interest to visualization researchers, since the complexity of the data makes it difficult to extract representative views for interpreting fluid behavior. In this thesis, we use information theory to compute entropy maps for vector flow fields and use these maps to aid visualization and analysis of the flow. Our major contribution is to use Principal Component Analysis (PCA) to find, for each sampling window, the projection with maximal directional variation in polar coordinates and to generate histograms from the projected 3D vector field, producing results with fewer artifacts than traditional methods. Entropy-guided visualizations of several data sets are presented to evaluate the proposed method for generating entropy maps. High-entropy regions and coherent directional components of the flow fields are visible without clutter, revealing fluid behavior in the rendered images. In addition to using data sets that are publicly available for research purposes, we have developed a fluid simulation framework based on Smoothed Particle Hydrodynamics (SPH) to produce flow fields. SPH is a widely used method for fluid simulation, and we use it to generate data sets that are difficult to interpret with direct visualization techniques. We also propose a moderate improvement to the performance and stability of SPH implementations through the use of fractional derivatives, which are known to be useful for approximating the behavior of particles immersed in fluids.
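
    The entropy-map step described above can be sketched as follows: within each sampling window, PCA finds the plane of maximal directional variation, the 3D vectors are projected onto that plane and binned by polar angle, and the Shannon entropy of the resulting histogram becomes the window's entropy value. The bin count and the synthetic window contents are illustrative assumptions, not the thesis implementation.

        import numpy as np

        def window_entropy(vectors, n_bins=16):
            """Entropy of the PCA-projected direction histogram for one sampling window."""
            unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
            centered = unit - unit.mean(axis=0)
            # Principal axes of directional variation (right singular vectors).
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            proj = centered @ vt[:2].T                      # project onto the top-2 plane
            angles = np.arctan2(proj[:, 1], proj[:, 0])     # polar angle in that plane
            hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
            p = hist / hist.sum()
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        # Hypothetical window: mostly coherent flow plus a few swirling vectors.
        rng = np.random.default_rng(0)
        coherent = np.tile([1.0, 0.2, 0.0], (80, 1)) + 0.05 * rng.standard_normal((80, 3))
        swirl = rng.standard_normal((20, 3))
        print("window entropy (bits):", window_entropy(np.vstack([coherent, swirl])))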

    Generative Models for Active Vision

    The active visual system comprises the visual cortices, cerebral attention networks, and oculomotor system. While fascinating in its own right, it is also an important model for sensorimotor networks in general. A prominent approach to studying this system is active inference—which assumes the brain makes use of an internal (generative) model to predict proprioceptive and visual input. This approach treats action as ensuring sensations conform to predictions (i.e., by moving the eyes) and posits that visual percepts are the consequence of updating predictions to conform to sensations. Under active inference, the challenge is to identify the form of the generative model that makes these predictions—and thus directs behavior. In this paper, we provide an overview of the generative models that the brain must employ to engage in active vision. This means specifying the processes that explain retinal cell activity and proprioceptive information from oculomotor muscle fibers. In addition to the mechanics of the eyes and retina, these processes include our choices about where to move our eyes. These decisions rest upon beliefs about salient locations, or the potential for information gain and belief-updating. A key theme of this paper is the relationship between “looking” and “seeing” under the brain's implicit generative model of the visual world
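
    As a toy illustration of the "salience as potential information gain" idea mentioned above, the sketch below scores a candidate fixation location by the expected reduction in entropy of a belief over hidden states after observing the outcome there. The two-state, two-location setup is a hypothetical example, not the paper's generative model.

        import numpy as np

        def entropy(p):
            p = p[p > 0]
            return float(-(p * np.log(p)).sum())

        def expected_information_gain(prior, likelihood):
            """likelihood[o, s]: probability of outcome o given hidden state s."""
            gain = entropy(prior)
            for o in range(likelihood.shape[0]):
                p_o = likelihood[o] @ prior                   # marginal probability of outcome o
                if p_o > 0:
                    posterior = likelihood[o] * prior / p_o   # Bayes' rule
                    gain -= p_o * entropy(posterior)
            return gain

        prior = np.array([0.5, 0.5])
        informative_location = np.array([[0.9, 0.1], [0.1, 0.9]])    # outcome depends on state
        uninformative_location = np.array([[0.5, 0.5], [0.5, 0.5]])  # outcome independent of state
        print(expected_information_gain(prior, informative_location))
        print(expected_information_gain(prior, uninformative_location))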

    ANALYSIS AND VISUALIZATION OF FLOW FIELDS USING INFORMATION-THEORETIC TECHNIQUES AND GRAPH-BASED REPRESENTATIONS

    Three-dimensional flow visualization plays an essential role in many areas of science and engineering, such as the aero- and hydrodynamic systems that dominate many physical and natural phenomena. For popular methods such as streamline visualization to be effective, they should capture the underlying flow features while helping users observe and understand the flow field clearly. My research focuses on the analysis and visualization of flow fields using techniques such as information-theoretic measures and graph-based representations. Since streamline visualization is a popular technique in flow field visualization, selecting good streamlines that capture flow patterns and picking good viewpoints from which to observe the flow become critical. We treat streamline selection and viewpoint selection as symmetric problems and solve them simultaneously using a dual information channel [81]. To the best of my knowledge, this is the first attempt in flow visualization to combine these two selection problems in a unified approach. This work selects streamlines in a view-independent manner, so the selected streamlines do not change across viewpoints. Another work of mine [56] uses an information-theoretic approach to evaluate the importance of each streamline under various sample viewpoints and presents a view-dependent streamline selection that guarantees coherent streamline updates when the view changes gradually. When 3D streamlines are projected to 2D images for viewing, occlusion and clutter become inevitable. To address this challenge, we design FlowGraph [57, 58], a novel compound graph representation that organizes field line clusters and spatiotemporal regions hierarchically for occlusion-free and controllable visual exploration. It enables observation and exploration of the relationships among field line clusters, spatiotemporal regions, and their interconnections in the transformed space. Most viewpoint selection methods consider only external viewpoints outside the flow field, which do not convey a clear view when the field is cluttered near the boundary. We therefore propose a new way to explore flow fields: selecting several internal viewpoints around flow features inside the field and generating a B-spline camera path through these viewpoints to give users close-up, detailed views of hidden or occluded internal flow features [54]. This work is also extended to unsteady flow fields. Beyond flow field visualization, other visualization topics also attract my attention. In iGraph [31], we leverage a distributed system together with a tiled display wall to provide high-resolution visual analytics of large image and text collections in real time. Developing pedagogical visualization tools forms another research focus. Since most cryptography algorithms rely on sophisticated mathematics, it is difficult for beginners to understand both what an algorithm does and how it does it; we therefore develop a set of visualization tools that offer an intuitive way to learn and understand these algorithms.
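
    A hedged sketch of the information-channel view of streamline and viewpoint selection is given below: viewpoints and streamlines form the two ends of a channel, with p(streamline | viewpoint) taken to be proportional to how much of the streamline is visible from that viewpoint, and each streamline's share of the viewpoint-streamline mutual information serving as an importance score. The visibility matrix is made up, and this simple decomposition is only an approximation of the dual-channel approach in [81].

        import numpy as np

        def streamline_importance(visibility, view_prior=None):
            """visibility[v, s]: visible projected length of streamline s from viewpoint v."""
            n_views, n_lines = visibility.shape
            p_v = np.full(n_views, 1.0 / n_views) if view_prior is None else view_prior
            p_s_given_v = visibility / visibility.sum(axis=1, keepdims=True)  # channel p(s|v)
            p_s = p_v @ p_s_given_v                                           # marginal p(s)
            # Per-streamline share of the mutual information I(V; S).
            ratio = np.where(p_s_given_v > 0, p_s_given_v / p_s, 1.0)
            return (p_v[:, None] * p_s_given_v * np.log2(ratio)).sum(axis=0)

        # Hypothetical visibility of 3 streamlines from 3 sample viewpoints.
        visibility = np.array([[5.0, 1.0, 0.5],
                               [4.0, 3.0, 0.5],
                               [0.5, 4.0, 5.0]])
        print("importance per streamline:", streamline_importance(visibility))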

    Medical image registration and soft tissue deformation for image guided surgery system

    In parallel with developments in imaging modalities, image-guided surgery (IGS) can now provide the surgeon with high-quality three-dimensional images depicting human anatomy. Although IGS is now in wide use in neurosurgery, some limitations remain that must be overcome before it can be employed in more general minimally invasive procedures. In this thesis, we make several contributions to medical image registration and brain tissue deformation modeling. Methodologically, medical image registration algorithms can be classified into feature-based and intensity-based methods. One of the challenges of feature-based registration is determining which type of feature is best suited to a given task and imaging modality. For this reason, we propose a point set registration method that uses both point and curve features, combining the accuracy of point-based registration with the robustness of line- or curve-based registration. We also tackle the problem of rigid registration of multimodal images using intensity-based similarity measures. Mutual information (MI) has emerged in recent years as a popular similarity metric and is widely recognized in the field of medical image registration. Unfortunately, it ignores the spatial information contained in the images, such as edges and corners, which can be useful for registration. We introduce a new similarity metric, the Adaptive Mutual Information (AMI) measure, which incorporates gradient spatial information: salient pixels in regions with high gradient values contribute more to the estimation of the mutual information of the image pair being registered. Experimental results show that the proposed method improves registration accuracy and is more robust for noisy images that deviate substantially from the reference image. Continuing in this direction, we further improve the technique to use information from multiple features simultaneously; with multiple spatial features, the algorithm is less sensitive to noise and other inherent variations, giving more accurate registration. Brain shift is a complex phenomenon, and brain deformation has many different causes. We investigate the pattern of brain deformation with respect to location and magnitude and consider the implications of this pattern for correcting brain deformation in IGS systems. A computational finite element analysis is carried out to analyze the deformation and stress tensor experienced by brain tissue during surgical operations. Finally, we develop a prototype visualization display and navigation platform for the interpretation of IGS. The system is based on Qt (a cross-platform GUI toolkit) and integrates VTK (an object-oriented visualization library) as its rendering kernel. This visualization software platform lays a foundation for future research extending the system to incorporate brain tissue deformation.
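
    The gradient weighting behind the AMI measure can be sketched roughly as follows: instead of every pixel counting equally in the joint intensity histogram, each pixel's contribution is weighted by its gradient magnitude, so edge and corner regions dominate the mutual information estimate. The bin count, weighting scheme, and test images are illustrative assumptions, not the thesis's actual formulation.

        import numpy as np

        def weighted_mutual_information(img_a, img_b, bins=32):
            """MI from a joint histogram in which high-gradient pixels are weighted more."""
            ga = np.hypot(*np.gradient(img_a.astype(float)))
            gb = np.hypot(*np.gradient(img_b.astype(float)))
            weights = 1.0 + ga.ravel() + gb.ravel()        # salient pixels count more
            joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                         bins=bins, weights=weights)
            p_ab = joint / joint.sum()
            p_a = p_ab.sum(axis=1, keepdims=True)
            p_b = p_ab.sum(axis=0, keepdims=True)
            mask = p_ab > 0
            return float((p_ab[mask] * np.log2(p_ab[mask] / (p_a @ p_b)[mask])).sum())

        # Hypothetical pair: a lightly corrupted copy should score higher than an unrelated image.
        rng = np.random.default_rng(1)
        reference = rng.random((64, 64))
        aligned = reference + 0.05 * rng.standard_normal((64, 64))
        unrelated = rng.random((64, 64))
        print(weighted_mutual_information(reference, aligned),
              weighted_mutual_information(reference, unrelated))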

    Efficient 3D Segmentation, Registration and Mapping for Mobile Robots

    Sometimes simple is better! For certain situations and tasks, simple but robust methods can achieve the same or better results in the same or less time than related sophisticated approaches. In the context of robots operating in real-world environments, key challenges are perceiving objects of interest and obstacles as well as building maps of the environment and localizing therein. The goal of this thesis is to carefully analyze such problem formulations, to deduce valid assumptions and simplifications, and to develop simple solutions that are both robust and fast. All approaches make use of sensors capturing 3D information, such as consumer RGBD cameras. Comparative evaluations show the performance of the developed approaches. For identifying objects and regions of interest in manipulation tasks, a real-time object segmentation pipeline is proposed. It exploits several common assumptions of manipulation tasks, such as objects resting on horizontal support surfaces and being well separated. It achieves real-time performance by using particularly efficient approximations in the individual processing steps, subsampling the input data where possible, and processing only relevant subsets of the data. The resulting pipeline segments 3D input data at up to 30 Hz. In order to obtain complete segmentations of the 3D input data, a second pipeline is proposed that approximates the sampled surface, smooths the underlying data, and segments the smoothed surface into coherent regions belonging to the same geometric primitive. It uses different primitive models and can reliably segment input data into planes, cylinders, and spheres. A thorough comparative evaluation shows state-of-the-art performance while computing such segmentations in near real time. The second part of the thesis addresses the registration of 3D input data, i.e., consistently aligning input captured from different view poses. Several methods are presented for different types of input data. For the particular application of mapping with micro aerial vehicles, where the 3D input data is particularly sparse, a pipeline is proposed that uses the same approximate surface reconstruction to exploit the measurement topology, together with a surface-to-surface registration algorithm that robustly aligns the data. Optimizing the resulting graph of estimated view poses then yields globally consistent 3D maps. For sequences of RGBD data, this pipeline is extended to include additional subsampling steps and an initial alignment of the data in local windows of the pose graph. In both cases, comparative evaluations show robust and fast alignment of the input data.
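
    The support-surface assumption exploited by the segmentation pipeline can be illustrated with a small sketch: a near-horizontal plane is RANSAC-fitted to the point cloud (the table), and points sufficiently far above it are kept as object candidates. The thresholds, the plane search, and the synthetic scene are simplified stand-ins, not the thesis's pipeline.

        import numpy as np

        def fit_support_plane(points, iters=200, dist_thresh=0.01, max_tilt=0.2, seed=0):
            """RANSAC-fit the best near-horizontal plane n.x + d = 0; returns (normal, d)."""
            rng = np.random.default_rng(seed)
            best, best_count = None, -1
            for _ in range(iters):
                sample = points[rng.choice(len(points), 3, replace=False)]
                normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
                norm = np.linalg.norm(normal)
                if norm < 1e-9:
                    continue
                normal /= norm
                if abs(normal[2]) < 1.0 - max_tilt:          # reject strongly tilted planes
                    continue
                d = -normal @ sample[0]
                count = np.sum(np.abs(points @ normal + d) < dist_thresh)
                if count > best_count:
                    best, best_count = (normal, d), count
            return best

        def points_above_plane(points, plane, margin=0.02):
            normal, d = plane
            signed = (points @ normal + d) * np.sign(normal[2])  # make "above" positive
            return points[signed > margin]

        # Hypothetical scene: a flat table near z = 0 with a small box of points on top.
        rng = np.random.default_rng(2)
        table = np.column_stack([rng.random((500, 2)), 0.003 * rng.standard_normal(500)])
        box = rng.random((100, 3)) * [0.1, 0.1, 0.1] + [0.4, 0.4, 0.02]
        cloud = np.vstack([table, box])
        plane = fit_support_plane(cloud)
        print("object candidate points:", len(points_above_plane(cloud, plane)))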

    An Information-theoretic Framework for Visualization

    In this paper, we examine whether or not information theory can be one of the theoretic frameworks for visualization. We formulate concepts and measurements for quantifying visual information. We illustrate these concepts with examples that manifest the intrinsic and implicit use of information theory in many existing visualization techniques. We outline the broad correlation between visualization and the major applications of information theory, while pointing out the difference in emphasis and some technical gaps. Our study provides compelling evidence that information theory can explain a significant number of phenomena or events in visualization, while no example has been found which is fundamentally in conflict with information theory. We also note that the emphasis of some traditional applications of information theory, such as data compression or data communication, may not always suit visualization: the former typically focuses on the efficient throughput of a communication channel, whilst the latter focuses on effectiveness in aiding the perceptual and cognitive processes of data understanding and knowledge discovery. These findings suggest that further theoretic developments are necessary for adopting and adapting information theory for visualization.