
    Meshless Animation Framework

    This report details the implementation of a meshless animation framework for blending surfaces. The framework is meshless in the sense that only the control points are handled on the CPU; surface evaluation is delegated to the GPU using the tessellation shader stages. The framework handles regular grids and some forms of irregular grids. Two ways of evaluating the local surfaces are investigated: evaluating them directly on the GPU, or pre-evaluating them and only sampling the stored data on the GPU. Four methods for pre-evaluation are presented, and the surface accuracy of each is tested. The framework contains two methods for adaptively setting the level of detail on the GPU depending on the position of the camera: a view-based metric and a pixel-accurate rendering method. For both methods, pixel accuracy and triangle size are tested and compared with static tessellation. Benchmarking results from the framework are presented, with and without animation, with different local surface types, and with different resolutions of the pre-evaluated data.
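    As a rough illustration of what a view-based level-of-detail metric can look like (this is a sketch under assumed names and constants, not the report's shader code), a patch edge can be assigned a tessellation level proportional to its projected length in pixels so that triangles stay near a target on-screen size:

```python
# Hedged sketch of a view-based tessellation metric: the target pixel size,
# clamping range, and function names are illustrative assumptions.
import numpy as np

def project(point, view_proj, viewport):
    """Project a 3D point to pixel coordinates for a (width, height) viewport."""
    p = view_proj @ np.append(point, 1.0)
    ndc = p[:2] / p[3]                        # normalized device coordinates in [-1, 1]
    return (ndc * 0.5 + 0.5) * np.array(viewport)

def edge_tess_level(p0, p1, view_proj, viewport, target_px=8.0,
                    min_level=1.0, max_level=64.0):
    """Tessellation level for one patch edge from its projected screen length."""
    length_px = np.linalg.norm(project(p1, view_proj, viewport)
                               - project(p0, view_proj, viewport))
    return float(np.clip(length_px / target_px, min_level, max_level))
```

    In a real pipeline the same computation would run per patch in the tessellation control stage; the Python version only shows the metric itself.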

    VISUALIZATION OF MARINE SAND DUNE DISPLACEMENTS UTILIZING MODERN GPU TECHNIQUES


    Analysis of (iso)surface reconstructions: Quantitative metrics and methods

    Due to the sampling process, volumetric data is inherently discrete, and knowledge of the underlying continuous model is most often not available. Surface rendering techniques attempt to reconstruct the continuous model from the discrete data using isosurfaces. It is therefore natural to ask how accurate the reconstructed isosurfaces are with respect to the underlying continuous model. A reconstructed isosurface may look impressive when rendered (photorealism), but how well does it reflect reality (physical realism)?

    The users of volume visualization packages must be aware of the shortcomings of the algorithms used to produce the images so that they may properly interpret, and interact with, what they see. However, very little work has been done to quantify the accuracy of volumetric data reconstructions. Most analysis to date has been qualitative: simple visual inspection determines whether characteristics known to exist in the real-world object are present in the rendered image. Our research suggests metrics and methods for quantifying the physical realism of reconstructed isosurfaces.

    Physical realism is a many-faceted notion; in fact, a different metric could be defined for each physical property one wishes to consider. We have defined four metrics: Global Surface Area Preservation (GSAP), Volume Preservation (VP), Point Distance Preservation (PDP), and Isovalue Preservation (IVP). We present experimental results for each of these metrics and discuss their validity with respect to those results.

    We also present the Reconstruction Quantification (sub)System (RQS), which provides a flexible framework for measuring physical realism. This system can be embedded in existing visualization systems with little modification of those systems. Two types of analysis can be performed: reconstruction analysis, which allows users to determine the accuracy of individual surface reconstructions, and algorithm analysis, which allows developers of visualization systems to determine the efficacy of the visualization system based on several reconstructions.
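    To make the flavour of such metrics concrete, the sketch below compares the surface area and enclosed volume of a reconstructed, closed triangle mesh against reference values for the continuous model. The dissertation's precise definitions of GSAP and VP may differ; this only illustrates the kind of quantity being measured.

```python
# Hedged sketch of area/volume comparison for a closed triangle mesh.
import numpy as np

def mesh_area_and_volume(vertices, triangles):
    """Total surface area and enclosed volume of a closed, consistently oriented mesh."""
    v = np.asarray(vertices, dtype=float)
    area, volume = 0.0, 0.0
    for i, j, k in triangles:
        a, b, c = v[i], v[j], v[k]
        cross = np.cross(b - a, c - a)
        area += 0.5 * np.linalg.norm(cross)
        volume += np.dot(a, np.cross(b, c)) / 6.0   # signed tetrahedron volume
    return area, abs(volume)

def relative_error(measured, reference):
    """Relative deviation of a measured quantity from its reference value."""
    return abs(measured - reference) / reference
```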

    Techniques for Realtime Viewing and Manipulation of Volumetric Data

    Visualizing and manipulating volumetric data is a major component of many areas, including anatomical registration in biomedical fields, seismic data analysis in the oil industry, machine-part design in computer-aided geometric design, character animation in the movie industry, and fluid simulation. These industries must meet the demands of the times and be able to make meaningful assertions about the data they generate. The sheer size of this data presents many challenges to facilitating realtime interaction. In the recent decade, graphics hardware has become increasingly powerful and sophisticated, which has introduced a new realm of possibilities for processing volumetric data. This thesis focuses on a suite of techniques for viewing and editing volumetric data that efficiently use the processing power of central processing units (CPUs) as well as the large processing power of graphics hardware (GPUs). The work begins with an algorithm to improve the efficiency of texture-based volume rendering. We continue with a framework for performing realtime constructive solid geometry (CSG) with complex shapes and smoothing operations on watertight meshes, based on a variation of depth peeling. We then move to an intuitive technique for deforming volumetric data using a collection of control points. Finally, we apply this technique to the registration of 3-dimensional computed tomography (CT) images used for lung cancer treatment planning.
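    One simple way to deform a volume with a set of control points is to displace each sample position by an inverse-distance-weighted blend of the control-point displacements and then resample the original volume at the warped positions. The thesis's actual scheme is not spelled out in the abstract, so the sketch below is only an illustration of the general idea; the weighting function and names are assumptions.

```python
# Hedged sketch: inverse-distance-weighted blending of control-point offsets.
import numpy as np

def idw_displacement(positions, ctrl_points, ctrl_offsets, power=2.0, eps=1e-8):
    """Blend control-point offsets onto arbitrary sample positions (N x 3 array)."""
    disp = np.zeros_like(positions, dtype=float)
    weight_sum = np.zeros(len(positions))
    for cp, off in zip(ctrl_points, ctrl_offsets):
        d = np.linalg.norm(positions - cp, axis=1)
        w = 1.0 / (d ** power + eps)            # closer control points dominate
        disp += w[:, None] * np.asarray(off, dtype=float)
        weight_sum += w
    return disp / weight_sum[:, None]
```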

    Doctor of Philosophy

    While boundary representations, such as nonuniform rational B-spline (NURBS) surfaces, have traditionally served the needs of the modeling community well, they have not seen widespread adoption in the wider engineering discipline. There is a common perception that NURBS are slow to evaluate and complex to implement. Whereas computer-aided design commonly deals with surfaces, the engineering community must deal with materials that have thickness. Traditional visualization techniques have avoided NURBS, and there has been little cross-talk between the rich spline approximation community and the larger engineering field. Recently there has been a strong desire to marry the modeling and analysis phases of the iterative design cycle, be it in car design, turbulent flow simulation around an airfoil, or lighting design. Research has demonstrated that employing a single representation throughout the cycle has key advantages. Furthermore, novel manufacturing techniques employing heterogeneous materials require the introduction of volumetric modeling representations. There is little question that fields such as scientific visualization and mechanical engineering could benefit from the powerful approximation properties of splines. In this dissertation, we remove several hurdles to the application of NURBS to problems in engineering and demonstrate how their unique properties can be leveraged to solve problems of interest.
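    On the perception that NURBS are complex to implement: the core of evaluation is the Cox-de Boor recursion for B-spline basis functions, after which a NURBS curve point is just a rational combination of basis values and control points. The sketch below is standard textbook material, not code from the dissertation.

```python
# Hedged sketch of B-spline basis evaluation and a NURBS curve point.
import numpy as np

def bspline_basis(i, p, u, knots):
    """Degree-p B-spline basis function N_{i,p} evaluated at parameter u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, u, knots)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, degree, knots, ctrl_pts, weights):
    """NURBS curve point: weighted, rational combination of basis values."""
    ctrl = np.asarray(ctrl_pts, dtype=float)
    basis = np.array([bspline_basis(i, degree, u, knots) for i in range(len(ctrl))])
    w = basis * np.asarray(weights, dtype=float)
    return (w[:, None] * ctrl).sum(axis=0) / w.sum()
```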

    Locally refinable gradient meshes supporting branching and sharp colour transitions:Towards a more versatile vector graphics primitive

    We present a local refinement approach for gradient meshes, a primitive commonly used in the design of vector illustrations with complex colour propagation. Local refinement allows the artist to add more detail only in the regions where it is needed, as opposed to global refinement, which often clutters the workspace with undesired detail and can slow down the workflow. Moreover, in contrast to existing implementations of gradient mesh refinement, our approach ensures mathematically exact refinement. Additionally, we introduce a branching feature that allows for a wider range of mesh topologies, as well as a feature that enables sharp colour transitions similar to diffusion curves; together these turn the gradient mesh into a more versatile and expressive vector graphics primitive.
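    The one-dimensional idea behind mathematically exact refinement can be shown with a cubic segment: splitting it with de Casteljau's algorithm produces two sub-segments that together reproduce the original curve exactly. Gradient meshes apply the analogous construction to bicubic patches of geometry and colour; the sketch below is only an illustration, not the paper's implementation.

```python
# Hedged sketch: exact split of a cubic Bezier segment at parameter t.
import numpy as np

def split_cubic_bezier(p0, p1, p2, p3, t=0.5):
    """Split one cubic Bezier segment into two sub-segments that trace the same curve."""
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    a = (1 - t) * p0 + t * p1
    b = (1 - t) * p1 + t * p2
    c = (1 - t) * p2 + t * p3
    d = (1 - t) * a + t * b
    e = (1 - t) * b + t * c
    f = (1 - t) * d + t * e          # point on the original curve at parameter t
    return (p0, a, d, f), (f, e, c, p3)
```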

    A survey on personal computer applications in industrial design process

    Thesis (Master) -- Izmir Institute of Technology, Industrial Design, Izmir, 1999. Includes bibliographical references (leaves 157-162). Text in English; abstract in Turkish and English. xii, 194 leaves.

    In this thesis, computer-aided design systems are studied from the industrial designer's point of view. The study covers industrial design processes, computer-aided design systems, and integration aspects. The technical issues are studied first, including current hardware and software technologies, and the technical concepts are supported where possible with real-world examples and graphics. Several important design software packages are examined, either through personal practice or through literature research, depending on the availability of the software. Finally, the thesis includes a case study: a 17" LCD computer monitor designed with a set of graphics programs, including two-dimensional and three-dimensional packages. Keywords: computers, industrial design methods, design software, computer aided design

    Garment texturing using Kinect V2.0

    This thesis describes three new garment retexturing methods for FitsMe virtual fitting room applications using data from the Microsoft Kinect II RGB-D camera. The first method is an automatic technique for garment retexturing using a single RGB-D image and infrared information obtained from Kinect II. First, the garment is segmented out of the image using GrabCut or depth segmentation. Texture-domain coordinates are then computed for each pixel belonging to the garment using normalized 3D information, and shading is applied to the new colors taken from the texture image. The second method addresses 2D-to-3D garment retexturing, where a segmented garment of a mannequin or person is matched to a new source garment and retextured, resulting in augmented images in which the new source garment is transferred to the mannequin or person. The problem is divided into garment boundary matching, based on point set registration with Gaussian mixture models, and interpolation of the inner points using surface topology extracted through geodesic paths, which leads to a more realistic result than standard approaches. The final contribution of this thesis is a method for increasing the texture quality of a 3D model of a garment using the same Kinect frame sequence that was used to create the model. First, a structured mesh must be created from the 3D model, so the 3D model is wrapped to a base model with defined seams and a texture map. The frames are then matched to the newly created model, and through ray casting the color values of the Kinect frames are mapped to the UV map of the 3D model.
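    The segmentation step of the first method can be illustrated with OpenCV's GrabCut, one of the two options the abstract mentions, initialised from a bounding rectangle around the garment. The rectangle, image, and iteration count below are placeholders, not values from the thesis.

```python
# Hedged sketch of rectangle-initialised GrabCut garment segmentation.
import cv2
import numpy as np

def segment_garment(image_bgr, rect, iterations=5):
    """Return a binary foreground mask for the region inside `rect` (x, y, w, h)."""
    mask = np.zeros(image_bgr.shape[:2], dtype=np.uint8)
    bgd_model = np.zeros((1, 65), dtype=np.float64)
    fgd_model = np.zeros((1, 65), dtype=np.float64)
    cv2.grabCut(image_bgr, mask, rect, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_RECT)
    # Definite and probable foreground pixels form the garment mask.
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return fg.astype(np.uint8)
```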