
    Direct Multifield Volume Ray Casting of Fiber Surfaces

    Multifield data are common in visualization. However, reducing these data to comprehensible geometry is a challenging problem. Fiber surfaces, the analogue of isosurfaces for bivariate volume data, are a promising new mechanism for understanding multifield volumes. In this work, we explore direct ray casting of fiber surfaces from volume data without any explicit geometry extraction. We sample directly along rays in domain space and perform geometric tests in range space, where fibers are defined, using a signed distance field derived from the control polygons. Our method requires little preprocessing, and enables real-time exploration of the data, dynamic modification and pixel-exact rendering of fiber surfaces, and support for higher-order interpolation in domain space. We demonstrate this approach on several bivariate datasets, including an analysis of multifield combustion data.
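
    The core idea described above can be sketched as follows: march along a viewing ray in domain space, evaluate the two scalar fields at each sample, and detect where the range-space signed distance to the fiber surface control polygon changes sign. This is a minimal illustration, not the paper's implementation; sample_fields, polygon, and the fixed-step march are assumptions, and a real renderer would refine hits (e.g., by bisection) and respect the interpolant's order.

```python
import numpy as np

def signed_distance_to_polygon(p, polygon):
    # p: (2,) position in range space (f1, f2); polygon: (m, 2) closed control polygon.
    d = np.inf
    m = len(polygon)
    inside = False
    for k in range(m):
        a, b = polygon[k], polygon[(k + 1) % m]
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        d = min(d, np.linalg.norm(p - (a + t * ab)))        # distance to this edge
        if (a[1] > p[1]) != (b[1] > p[1]):                  # even-odd inside test
            x = a[0] + (p[1] - a[1]) * (b[0] - a[0]) / (b[1] - a[1])
            if p[0] < x:
                inside = not inside
    return -d if inside else d

def ray_cast_fiber_surface(sample_fields, polygon, origin, direction, t_max, dt):
    # sample_fields(pos) -> (f1, f2): evaluates both fields at a domain-space point.
    prev = None
    t = 0.0
    while t <= t_max:
        f = np.asarray(sample_fields(origin + t * direction))
        sd = signed_distance_to_polygon(f, polygon)
        if prev is not None and prev * sd < 0.0:
            return t          # sign change: the ray crossed the fiber surface
        prev, t = sd, t + dt
    return None               # no intersection along this ray
```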

    New Algorithmic Techniques for Large Scale Volumetric Data Visualization on Parallel Architectures

    Volume visualization is widely used as an effective approach for the visual exploration, computational analysis, and manipulation of volumetric datasets. Due to the dramatic advances in imaging instruments and computing technologies, such datasets are now appearing at a very fast rate with increasingly larger sizes in many engineering, science, and medical applications. Isosurface rendering and direct volume rendering (DVR) are two of the most widely used techniques for rendering such datasets. This dissertation introduces novel techniques for rendering isosurfaces and volumes, and extends these techniques to multiprocessor architectures. We first focus on cluster-based techniques for isosurface extraction and rendering using polygonal approximation. We present a new, simple indexing scheme and data layout approach, which enable scalable and efficient isosurface generation. This algorithm is the first known parallel algorithm to achieve provable load balancing on multiprocessor systems. We also develop an algorithm to generate isosurfaces using ray-casting on multi-core processors. Our method is based on a hybrid strategy that begins with an object-order traversal of the data, followed by ray-casting on ordered sets of an adaptive number of subcubes, one set for each small group of pixels on the image. We develop a multithreaded implementation, which uses new dynamic load balancing techniques that start with an image partitioning for the initial stage and then perform dynamic allocation of groups of ray-casting tasks among the different threads. The strategy ensures almost equal loads among the cores while maintaining spatial data locality. This scheme is extended to perform direct volume rendering and is shown to achieve similar improvements in terms of overall performance, load balancing, and scalability. We conduct a large number of tests for all our algorithms on the University of Maryland Visualization Cluster and on the 8-core Clovertown platform using a wide variety of datasets, such as the Richtmyer-Meshkov Instability dataset (7.5 GB per time step) and the Visible Human dataset (~1 GB). We obtain results that consistently validate the efficiency and the scalability of our algorithms. In particular, our hybrid ray-casting scheme achieves interactive rendering rates on high-resolution (1024x1024) screens for all the datasets tested.
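
    The dynamic allocation of ray-casting task groups among threads can be sketched in a simplified form as below. This only illustrates the load-balancing idea under assumed names (cast_tile, tiles); the actual scheme additionally performs an object-order traversal and builds adaptive sets of subcubes per pixel group, which are omitted here.

```python
from concurrent.futures import ThreadPoolExecutor
import queue

def render_tiles(cast_tile, tiles, num_threads=8):
    # tiles: pixel groups (e.g. (x0, y0, x1, y1) tuples) from the initial image partitioning.
    # cast_tile(tile) ray-casts one tile and returns its block of pixels.
    work = queue.Queue()
    for tile in tiles:
        work.put(tile)
    results = {}

    def worker():
        # Each thread keeps pulling tiles until the queue is empty, so fast threads
        # take more tiles and an expensive tile does not stall the others.
        while True:
            try:
                tile = work.get_nowait()
            except queue.Empty:
                return
            results[tile] = cast_tile(tile)

    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        for _ in range(num_threads):
            pool.submit(worker)
    return results
```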

    Research on generic interactive deformable 3D models: focus on the human inguinal region

    The goal of this project is to investigate real-time approximate methods for physically-based animation of static polygonal meshes, with the aim of deforming them and simulating elastic behaviour for these meshes. To this end, a software suite has been developed that is capable of performing a wide range of tasks drawn from different computer graphics research fields, making it a versatile project.
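
    The abstract does not state which deformation model is used, so the following is only a sketch of one common real-time approximation: a mass-spring system over the mesh edges with semi-implicit Euler integration. All names and parameter values are illustrative, not taken from the project.

```python
import numpy as np

def step_mass_spring(x, v, edges, rest_len, k=50.0, damping=0.98, dt=1e-3, mass=1.0):
    # x, v: (n, 3) vertex positions and velocities of the polygonal mesh.
    # edges: (m, 2) vertex index pairs treated as springs; rest_len: (m,) rest lengths
    # captured from the undeformed mesh.
    gravity = np.array([0.0, -9.81, 0.0])
    f = np.tile(gravity * mass, (len(x), 1))
    for (i, j), l0 in zip(edges, rest_len):
        d = x[j] - x[i]
        l = np.linalg.norm(d)
        if l < 1e-9:
            continue
        fs = k * (l - l0) * (d / l)       # Hooke's law along the edge direction
        f[i] += fs
        f[j] -= fs
    v = damping * (v + dt * f / mass)     # semi-implicit Euler with crude damping
    x = x + dt * v
    return x, v
```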

    Doctor of Philosophy

    Shape analysis is a well-established tool for processing surfaces. It is often a first step in performing tasks such as segmentation, symmetry detection, and finding correspondences between shapes. Shape analysis is traditionally employed on well-sampled surfaces where the geometry and topology are precisely known. When the surface takes the form of a point cloud containing nonuniform sampling, noise, and incomplete measurements, traditional shape analysis methods perform poorly. Although one may first perform reconstruction on such a point cloud prior to shape analysis, if the reconstructed geometry and topology are far from the true surface, this can have an adverse impact on the subsequent analysis. Furthermore, for triangulated surfaces containing noise, thin sheets, and poorly shaped triangles, existing shape analysis methods can be highly unstable. This thesis explores methods of shape analysis applied directly to such defect-laden shapes. We first study the problem of surface reconstruction, in order to obtain a better understanding of the types of point clouds for which reconstruction methods encounter difficulties. To this end, we have devised a benchmark for surface reconstruction, establishing a standard for measuring reconstruction error. We then develop a new method for consistently orienting normals of such challenging point clouds by using a collection of harmonic functions intrinsically defined on the point cloud. Next, we develop a new shape analysis tool that is tolerant to imperfections, by constructing distances directly on the point cloud, defined as the likelihood that two points belong to a common medial ball, and apply this to segmentation and reconstruction. We extend this distance measure to define a diffusion process on the point cloud, tolerant to missing data, which is used for matching incomplete shapes undergoing a nonrigid deformation. Lastly, we have developed an intrinsic method for multiresolution remeshing of a poor-quality triangulated surface via spectral bisection.
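
    As a rough illustration of the final step, spectral bisection of a mesh can be performed by splitting vertices on the sign of the Fiedler vector of the graph Laplacian. This generic sketch (combinatorial Laplacian, dense eigendecomposition) is not the thesis's intrinsic method and is only practical for small meshes.

```python
import numpy as np

def spectral_bisect(num_vertices, edges):
    # edges: (i, j) vertex index pairs of the triangulated surface.
    A = np.zeros((num_vertices, num_vertices))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A       # combinatorial graph Laplacian L = D - A
    w, V = np.linalg.eigh(L)             # eigenvalues returned in ascending order
    fiedler = V[:, 1]                    # eigenvector of the second-smallest eigenvalue
    # The sign of the Fiedler vector gives a balanced cut of the mesh graph.
    return np.where(fiedler >= 0)[0], np.where(fiedler < 0)[0]
```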

    Scalable, Data-intensive Network Computation

    To enable groups of collaborating researchers at different locations to effectively share large datasets and investigate their spontaneous hypotheses on the fly, we are interested in developing a distributed system that can be easily leveraged by a variety of data-intensive applications. The system is composed of (i) a number of best-effort logistical depots to enable large-scale data sharing and in-network data processing, (ii) a set of end-to-end tools to effectively aggregate, manage, and schedule a large number of network computations with their attendant data movements, and (iii) a Distributed Hash Table (DHT) on top of the generic depot services for scalable data management. The logistical depot is extended by following the end-to-end principles and is modeled with a closed queuing network model. Its performance characteristics are studied by solving the steady-state distributions of the model using local balance equations. The modeling results confirm that the wide area network is the performance bottleneck and that running concurrent jobs can increase resource utilization and system throughput. As a novel contribution, we develop techniques to effectively support resource-demanding data-intensive applications using the fine-grained depot services. These techniques include instruction-level scheduling of operations, dynamic co-scheduling of computation and replication, and adaptive workload control. Experiments in volume visualization have demonstrated the effectiveness of these techniques. Due to the unique characteristics of data-intensive applications and our co-scheduling algorithm, a DHT is implemented on top of the basic storage and computation services. It demonstrates the potential of the Logistical Networking infrastructure to serve as a service creation platform.
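
    The abstract does not detail the DHT design, so the following is an assumed illustration of one standard way to map data keys onto depots: consistent hashing with virtual nodes, chosen so that adding or removing a depot remaps only a small fraction of keys. Class and method names are hypothetical.

```python
import hashlib
from bisect import bisect_right

class ConsistentHashRing:
    """Maps data keys to depots; virtual nodes smooth the key distribution."""

    def __init__(self, depots, replicas=64):
        self._keys = []
        self._nodes = {}
        for d in depots:
            for r in range(replicas):
                h = self._hash(f"{d}#{r}")   # one ring position per virtual node
                self._keys.append(h)
                self._nodes[h] = d
        self._keys.sort()

    @staticmethod
    def _hash(s):
        return int(hashlib.sha1(s.encode()).hexdigest(), 16)

    def lookup(self, key):
        # The first ring position clockwise from the key's hash owns the key.
        h = self._hash(key)
        idx = bisect_right(self._keys, h) % len(self._keys)
        return self._nodes[self._keys[idx]]
```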

    Saliency-guided Graphics and Visualization

    In this dissertation, we show how we can use principles of saliency to enhance depiction, manage visual attention, and increase interactivity for 3D graphics and visualization. Current mesh saliency approaches are inspired by low-level human visual cues but have not yet been validated. Our eye-tracking-based user study shows that the current computational model of mesh saliency approximates human eye movements well. Artists, illustrators, photographers, and cinematographers have long used the principles of contrast and composition to guide visual attention. We present a visual-saliency-based operator to draw visual attention to selected regions of interest, and we have observed that it is more successful at eliciting viewer attention than the traditional Gaussian enhancement operator for visualizing both volume datasets and 3D meshes. Mesh saliency can be measured in various ways. The previous model computes saliency by identifying the uniqueness of curvature. Another way to identify uniqueness is to look for non-repeating structure amid repeating structure, and we have developed a system to detect repeating patterns in 3D point datasets. We introduce the idea of creating vertex and transformation streams that represent large point datasets via their interaction; this dramatically improves arithmetic intensity and addresses the input-geometry bandwidth bottleneck for interactive 3D graphics applications. Fast previewing of time-varying datasets is important for summarization and abstraction. We compute the salient frames in molecular dynamics simulations through subspace analysis of the protein's residue orientations. We first compute an affinity matrix for each frame i of the simulation based on the similarity of the orientations of the protein's backbone residues. Eigenanalysis of the affinity matrix gives us the subspace that best represents the conformation of the current frame i. We use this subspace to represent the frames ahead of and behind frame i; the more accurately the subspace of frame i represents its neighbors, the less salient it is. Taken together, the tools and techniques developed in this dissertation are likely to provide the building blocks for the next generation of visual analysis, reasoning, and discovery environments.
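
    A rough sketch of the salient-frame idea follows, under assumed definitions (cosine similarity of residue orientation vectors for the affinity matrix, and relative residual after projecting a neighbor's affinity onto a frame's dominant eigen-subspace as the error); the dissertation's exact affinity and error measures may differ.

```python
import numpy as np

def frame_affinity(orientations):
    # orientations: (n_residues, 3) unit vectors for one frame's backbone residues.
    return orientations @ orientations.T          # pairwise cosine similarity

def frame_saliency(frames, k=5, window=2):
    # frames: list of (n_residues, 3) orientation arrays, one per simulation frame.
    # Assumes k <= number of residues.
    affinities = [frame_affinity(f) for f in frames]
    saliency = np.zeros(len(frames))
    for i, A in enumerate(affinities):
        w, V = np.linalg.eigh(A)
        U = V[:, -k:]                             # dominant k-dimensional subspace of frame i
        errs = []
        for j in range(max(0, i - window), min(len(frames), i + window + 1)):
            if j == i:
                continue
            B = affinities[j]
            B_proj = U @ (U.T @ B @ U) @ U.T      # neighbor seen through frame i's subspace
            errs.append(np.linalg.norm(B - B_proj) / np.linalg.norm(B))
        # Neighbors that the subspace represents poorly make frame i more salient.
        saliency[i] = np.mean(errs) if errs else 0.0
    return saliency
```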