26 research outputs found

    Well-Centered Triangulation

    Meshes composed of well-centered simplices have orthogonal dual meshes (the dual Voronoi diagram), which is useful for certain numerical algorithms that prefer such primal-dual mesh pairs. We prove that well-centered meshes also have optimality properties and relationships to Delaunay and minmax angle triangulations. We present an iterative algorithm that seeks to transform a given triangulation in two or three dimensions into a well-centered one by minimizing a cost function and moving the interior vertices while keeping the mesh connectivity and boundary vertices fixed. The cost function is a direct result of a new characterization of well-centeredness in arbitrary dimensions that we present. Ours is the first optimization-based heuristic for well-centeredness, and the first one that applies in both two and three dimensions. We show the results of applying our algorithm to small and large two-dimensional meshes, some with a complex boundary, and we obtain a well-centered tetrahedralization of the cube. We also show numerical evidence that our algorithm preserves gradation and that it improves the maximum and minimum angles of acute triangulations created by the best previously known method.
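
    The abstract above describes an optimization that repositions interior vertices to reduce a well-centeredness cost. The sketch below is a rough, minimal illustration of that general idea only: it runs numerical gradient descent on a penalty on angles larger than 90 degrees (in 2D, a triangle is well-centered exactly when it is acute), with boundary vertices held fixed. The penalty, the finite-difference update, the step sizes, and the example mesh are all assumptions made for this sketch and are not the cost function or algorithm of the paper.

```python
# A minimal 2D sketch, not the paper's method: nudge interior vertices by
# numerical gradient descent on a penalty that is zero exactly when every
# triangle is non-obtuse.  Penalty and step sizes are illustrative assumptions.
import numpy as np

def angles(tri):
    """Interior angles (radians) of a triangle given as a 3x2 array."""
    a, b, c = tri
    def ang(p, q, r):  # angle at vertex p
        u, v = q - p, r - p
        cosv = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.arccos(np.clip(cosv, -1.0, 1.0))
    return np.array([ang(a, b, c), ang(b, a, c), ang(c, a, b)])

def cost(verts, tris):
    """Sum of squared angle excess over 90 degrees (zero iff all non-obtuse)."""
    total = 0.0
    for t in tris:
        excess = np.maximum(0.0, angles(verts[list(t)]) - np.pi / 2)
        total += np.sum(excess ** 2)
    return total

def optimize(verts, tris, interior, steps=200, h=1e-4, lr=0.05):
    """Finite-difference gradient descent on the interior vertices only."""
    verts = verts.copy()
    for _ in range(steps):
        for i in interior:
            grad = np.zeros(2)
            for d in range(2):  # central differences
                for s, sign in ((h, 1.0), (-h, -1.0)):
                    verts[i, d] += s
                    grad[d] += sign * cost(verts, tris)
                    verts[i, d] -= s
                grad[d] /= 2 * h
            verts[i] -= lr * grad  # boundary vertices are never moved
    return verts

# Tiny example: one interior vertex surrounded by four fixed boundary vertices.
verts = np.array([[0., 0.], [2., 0.], [2., 2.], [0., 2.], [0.3, 1.7]])
tris = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]
verts = optimize(verts, tris, interior=[4])
# The cost decreases toward 0; this tiny configuration can only reach right
# angles at the interior vertex, the best possible for a single-vertex square.
print("final cost:", cost(verts, tris))
```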

    Quad Meshing

    Triangle meshes have been nearly ubiquitous in computer graphics, and a large body of data structures and geometry processing algorithms based on them has been developed in the literature. At the same time, quadrilateral meshes, especially semi-regular ones, have advantages for many applications, and significant progress has been made in quadrilateral mesh generation and processing over the last several years. In this State of the Art Report, we discuss the advantages and problems of techniques operating on quadrilateral meshes, including surface analysis and mesh quality, simplification, adaptive refinement, alignment with features, parametrization, and remeshing.

    Scales and Scale-like Structures

    Scales are a visually striking feature that grows on many animals. These small, rigid plates embedded in the skin form an integral part of our descriptions of fish, reptiles, some plants, and many extinct animals. Scales exist in many shapes and sizes, and serve as protection, camouflage, and plumage. The variety of scales and of the animals they grow on poses an interesting problem in the field of computer graphics. This dissertation presents a method for generating scales and scale-like structures on a polygonal mesh through surface replacement. A triangular mesh is covered with scales, and one or more proxy models are used as the scale shapes. A user begins scale generation by drawing a lateral line on the model to control the distribution and orientation of scales on the surface. Next, a vector field is created over the surface to control an anisotropic Voronoi tessellation, which represents the region occupied by each scale. These regions are then replaced by cutting the proxy model to match the boundary of each Voronoi region and deforming the cut model onto the surface. The final result is a fully connected 2-manifold suitable for subsequent post-processing applications, such as surface subdivision.
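
    One stage of the pipeline above is the anisotropic Voronoi tessellation that partitions the surface into per-scale regions. The toy sketch below performs such a partition on a flat 2D sampling of a surface, using a simple per-seed metric that makes distance more expensive across each scale's orientation than along it. The flat setting, the particular metric, and the parameters are illustrative assumptions and not the dissertation's implementation, which works on a triangle mesh with a vector field derived from the user-drawn lateral line.

```python
# Toy anisotropic Voronoi partition of 2D samples around scale seed points.
# The metric (cheap along a per-seed direction, 'stretch' times more expensive
# across it) and the flat setting are illustrative assumptions.
import numpy as np

def anisotropic_voronoi(samples, seeds, directions, stretch=3.0):
    """Assign each sample to the seed with the smallest anisotropic distance."""
    labels = np.empty(len(samples), dtype=int)
    for k, p in enumerate(samples):
        best, best_d = -1, np.inf
        for i, (s, d) in enumerate(zip(seeds, directions)):
            v = p - s
            along = np.dot(v, d)            # component along the scale axis
            across = v - along * d          # component across it
            dist = np.hypot(along, stretch * np.linalg.norm(across))
            if dist < best_d:
                best, best_d = i, dist
        labels[k] = best
    return labels

# Example: a grid of surface samples and three seeds with different orientations.
xs, ys = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
samples = np.column_stack([xs.ravel(), ys.ravel()])
seeds = np.array([[0.25, 0.5], [0.5, 0.5], [0.75, 0.5]])
dirs = np.array([[1.0, 0.0], [np.sqrt(0.5), np.sqrt(0.5)], [0.0, 1.0]])
labels = anisotropic_voronoi(samples, seeds, dirs)
print("samples per scale region:", np.bincount(labels))  # elongated, oriented cells
```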

    Interactive visualization tools for topological exploration

    Thesis (Ph.D.) - Indiana University, Computer Science, 1992
    This thesis concerns using computer graphics methods to visualize mathematical objects. Abstract mathematical concepts are extremely difficult to visualize, particularly when higher dimensions are involved; I therefore concentrate on subject areas such as the topology and geometry of four dimensions, which provide a very challenging domain for visualization techniques. In the first stage of this research, I applied existing three-dimensional computer graphics techniques to visualize projected four-dimensional mathematical objects in an interactive manner. I carried out experiments with direct object manipulation and constraint-based interaction and implemented tools for visualizing mathematical transformations. As an application, I applied these techniques to visualizing the conjecture known as Fermat's Last Theorem. Four-dimensional objects would best be perceived through four-dimensional eyes. Even though we do not have four-dimensional eyes, we can use computer graphics techniques to simulate the effect of a virtual four-dimensional camera viewing a scene in which four-dimensional objects are illuminated by four-dimensional light sources. I extended standard three-dimensional lighting and shading methods to work in the fourth dimension. This involved replacing the standard "z-buffer" algorithm with a "w-buffer" algorithm for handling occlusion, and replacing the standard "scan-line" conversion method with a new "scan-plane" conversion method. Furthermore, I implemented a new "thickening" technique that made it possible to illuminate surfaces correctly in four dimensions. These new techniques generate smoothly shaded, highlighted view-volume images of mathematical objects as they would appear from a four-dimensional viewpoint. The images reveal fascinating structures of mathematical objects that could not be seen with standard 3D computer graphics techniques. As applications, I generated still images and animation sequences for mathematical objects such as the Steiner surface, the four-dimensional torus, and a knotted 2-sphere. The images of surfaces embedded in 4D generated with these methods are unique in the history of mathematical visualization. Finally, I adapted these techniques to visualize volumetric data (3D scalar fields) generated by other scientific applications. Compared to other volume visualization techniques, this method provides a new approach that researchers can use to examine and manipulate certain classes of volume data.
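
    To make the occlusion step concrete, the short sketch below projects 4D point samples through a virtual 4D pinhole camera into a 3D image volume and keeps, per voxel, only the sample nearest to the camera along the fourth axis, i.e., a w-buffer test. The camera model, grid resolution, and point splatting are assumptions for illustration; the thesis rasterizes surfaces with a scan-plane method rather than rendering point samples.

```python
# Point-sampled sketch of w-buffered occlusion for a 4D scene: project with a
# virtual 4D pinhole camera into a 3D image volume and keep, per voxel, the
# sample with the smallest w.  Camera model and resolution are assumptions.
import numpy as np

def project_4d(points, focal=2.0):
    """Perspective projection of 4D points (camera at the origin, looking along +w)."""
    w = points[:, 3]
    return points[:, :3] * (focal / w)[:, None], w

def w_buffer_render(points, res=32, focal=2.0, extent=1.5):
    """Return (index volume, depth volume) after w-buffered point splatting."""
    proj, depth = project_4d(points, focal)
    vox = np.full((res, res, res), -1, dtype=int)   # id of the winning sample
    wbuf = np.full((res, res, res), np.inf)         # nearest w seen so far
    ijk = np.floor((proj + extent) / (2 * extent) * res).astype(int)
    inside = np.all((ijk >= 0) & (ijk < res), axis=1) & (depth > 0)
    for idx in np.nonzero(inside)[0]:
        i, j, k = ijk[idx]
        if depth[idx] < wbuf[i, j, k]:              # the w-buffer test
            wbuf[i, j, k] = depth[idx]
            vox[i, j, k] = idx
    return vox, wbuf

# Example: random samples of a flat torus embedded in 4D, pushed to w > 0.
rng = np.random.default_rng(0)
u, v = rng.uniform(0, 2 * np.pi, (2, 20000))
torus = np.column_stack([np.cos(u), np.sin(u), np.cos(v), np.sin(v) + 3.0])
vox, _ = w_buffer_render(torus)
print("occupied voxels in the view volume:", np.count_nonzero(vox >= 0))
```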

    Towards Predictive Rendering in Virtual Reality

    Generating predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding goal in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions involving large investments based on the simulated images. Unfortunately, the generation of predictive imagery is still an unsolved problem for several reasons, especially if real-time constraints apply. First, the scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research, and this thesis contributes to that task. It first briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations are pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling through efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations of spatially varying surface materials. The techniques proposed in this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying BTFs to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming that further problems must be solved to achieve truly predictive image generation.
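
    The BTF compression mentioned above belongs to a family of data-driven factorization methods. As a simple illustration of that family only, the sketch below arranges a synthetic BTF as a texel-by-sample matrix, compresses it with a truncated SVD, and reconstructs individual texels from a few coefficients and a shared basis. The data layout, the random low-rank test data, and the chosen rank are assumptions; the thesis develops its own, more elaborate compression and rendering schemes.

```python
# PCA/SVD-style BTF compression sketch: one row per texel, one column per
# (view, light) sample; keep a low-rank factorization and reconstruct texels
# on demand.  Synthetic data, layout, and rank are illustrative assumptions.
import numpy as np

def compress_btf(btf_matrix, rank=8):
    """Truncated SVD so that btf_matrix ~= coeffs @ basis."""
    U, S, Vt = np.linalg.svd(btf_matrix, full_matrices=False)
    return U[:, :rank] * S[:rank], Vt[:rank]        # (texel coeffs, shared basis)

def shade_texel(coeffs, basis, texel, sample):
    """Reconstruct one texel's reflectance for one (view, light) sample index."""
    return coeffs[texel] @ basis[:, sample]

# Synthetic "measured" BTF: 64x64 texels and 81 combined (view, light) samples,
# generated with a low intrinsic dimension so the factorization is meaningful.
texels, samples = 64 * 64, 81
rng = np.random.default_rng(0)
btf = rng.standard_normal((texels, 4)) @ rng.standard_normal((4, samples))

coeffs, basis = compress_btf(btf, rank=8)
err = np.linalg.norm(coeffs @ basis - btf) / np.linalg.norm(btf)
print(f"relative reconstruction error: {err:.2e}")
print("one reconstructed texel value:", shade_texel(coeffs, basis, texel=0, sample=40))
```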

    Data-driven shape analysis and processing

    Data-driven methods serve an increasingly important role in discovering geometric, structural, and semantic relationships between shapes. In contrast to traditional approaches that process shapes in isolation from each other, data-driven methods aggregate information from 3D model collections to improve the analysis, modeling, and editing of shapes. Through a review of the literature, we provide an overview of the main concepts and components of these methods, and discuss their application to classification, segmentation, matching, reconstruction, modeling, and exploration, as well as to scene analysis and synthesis. We conclude our report with ideas that can inspire future research in data-driven shape analysis and processing.

    Curve and surface framing for scientific visualization and domain dependent navigation

    Thesis (Ph.D.) - Indiana University, Computer Science, 1996
    Curves and surfaces are two of the most fundamental types of objects in computer graphics. Most existing systems use only the 3D positions of curves and surfaces, and the 3D normal directions of surfaces, in the visualization process. In this dissertation, we attach moving coordinate frames to curves and surfaces, and explore several applications of these frames in computer graphics and scientific visualization. Curves in space are difficult to perceive and analyze, especially when they are densely clustered, as is typical in computational fluid dynamics and volume deformation applications. Coordinate frames are useful for exposing the similarities and differences between curves. They are also useful for constructing ribbons, tubes, and smooth camera orientations along curves. In many 3D systems, users interactively move the camera around the objects with a mouse or other device. But all the camera control is done independently of the properties of the objects being viewed, as if the user were flying freely in space. This type of domain-independent navigation is frequently inappropriate in visualization applications and is sometimes quite difficult for the user to control. A more productive approach is to exploit domain-specific constraints and thus create a new class of navigation strategies. Based on frames attached to surfaces, we can constrain the camera gaze direction to be always parallel (or at a fixed angle) to the surface normal. Users then get the feeling of driving on the object instead of flying through space. The user's mental model of the environment being visualized can be greatly enhanced by the use of these constraints in the interactive interface. Many of our research ideas have been implemented in Mesh View, an interactive system for viewing and manipulating geometric objects. It contains a general-purpose C++ library for nD geometry and supports a winged-edge based data structure. Dozens of examples of scientifically interesting surfaces have been constructed and included with the system.
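
    One standard construction for the moving frames discussed above is parallel transport of a normal vector along the curve's unit tangents, which yields low-twist frames suitable for ribbons, tubes, and smooth camera orientations. The sketch below implements that common textbook construction as an illustration; the dissertation considers several framings, and this is not claimed to be the specific one used there.

```python
# Parallel-transport (rotation-minimizing) frames along a sampled space curve:
# a common construction shown for illustration, not the dissertation's code.
import numpy as np

def parallel_transport_frames(points):
    """Return a list of orthonormal frames (tangent, normal, binormal) per point."""
    points = np.asarray(points, dtype=float)
    tangents = np.gradient(points, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)

    # Seed normal: any unit vector orthogonal to the first tangent.
    seed = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(seed, tangents[0])) > 0.9:
        seed = np.array([0.0, 1.0, 0.0])
    normal = seed - np.dot(seed, tangents[0]) * tangents[0]
    normal /= np.linalg.norm(normal)

    frames = []
    for t in tangents:
        # Project the previous normal into the plane orthogonal to the new
        # tangent and renormalize: a simple parallel-transport update.
        normal = normal - np.dot(normal, t) * t
        normal /= np.linalg.norm(normal)
        frames.append((t, normal, np.cross(t, normal)))
    return frames

# Example: frames along one turn of a helix, then one cross-section of a tube.
s = np.linspace(0, 2 * np.pi, 200)
helix = np.column_stack([np.cos(s), np.sin(s), 0.3 * s])
frames = parallel_transport_frames(helix)
t, n, b = frames[50]
ring = [helix[50] + 0.1 * (np.cos(a) * n + np.sin(a) * b)
        for a in np.linspace(0, 2 * np.pi, 16)]   # one ring of a tube around the curve
print("orthogonality check (should be ~0):", np.dot(t, n), np.dot(t, b), np.dot(n, b))
print("tube ring has", len(ring), "vertices")
```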

    Algorithms for the reconstruction, analysis, repairing and enhancement of 3D urban models from multiple data sources

    Over the last few years, there has been remarkable growth in the digitization of 3D buildings and urban environments. Substantial improvements in both scanning hardware and reconstruction algorithms have led to representations of buildings and cities that can be transmitted remotely and inspected in real time. Among the applications that use these technologies are several GPS navigators and virtual globes such as Google Earth or the tools provided by the Institut Cartogràfic i Geològic de Catalunya. In this thesis, we conceptualize cities as collections of individual buildings, and we therefore focus on processing one structure at a time rather than on the larger-scale processing of urban environments. Nowadays, there is a wide diversity of digitization technologies, and choosing the appropriate one is key for each particular application. Roughly, these techniques can be grouped into three main families:
    - Time-of-flight (terrestrial and aerial LiDAR).
    - Photogrammetry (street-level, satellite, and aerial imagery).
    - Human-edited vector data (cadastre and other map sources).
    Each of these has its advantages in terms of covered area, data quality, economic cost, and processing effort. Aircraft- and car-mounted LiDAR devices are optimal for sweeping huge areas, but acquiring and calibrating such devices is not a trivial task. Moreover, the capture is performed in scan lines, which need to be registered using GPS and inertial data. As an alternative, terrestrial LiDAR devices are more accessible but cover smaller areas, and their sampling strategy usually produces massive point clouds with over-represented planar regions. A less expensive option is street-level imagery: a dense set of images captured with a commodity camera can be fed to state-of-the-art multi-view stereo algorithms to produce realistic-enough reconstructions. Another advantage of this approach is that it captures high-quality color data, although the resulting geometric information is usually poorer. In this thesis, we analyze in depth some of the shortcomings of these data-acquisition methods and propose new ways to overcome them. We focus mainly on the technologies that allow high-quality digitization of individual buildings: terrestrial LiDAR for geometric information and street-level imagery for color information. Our main goal is the processing and completion of detailed 3D urban representations. For this, we work with multiple data sources and combine them when possible to produce models that can be inspected in real time. Our research has focused on the following contributions (see the sketch after this list for a baseline related to one of them):
    - Effective and feature-preserving simplification of massive point clouds.
    - Normal estimation algorithms explicitly designed for LiDAR data.
    - Low-stretch panoramic representation for point clouds.
    - Semantic analysis of street-level imagery for improved multi-view stereo reconstruction.
    - Color improvement through heuristic techniques and the registration of LiDAR and imagery data.
    - Efficient and faithful visualization of massive point clouds using image-based techniques.
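
    As a point of reference for the normal-estimation contribution listed above, the sketch below implements only the classical PCA baseline: fit a plane to each point's k nearest neighbours and take the eigenvector of the smallest eigenvalue of the local covariance as the normal. The neighbourhood size, the brute-force search, and the synthetic test surface are assumptions made for the sketch; the LiDAR-specific estimators developed in the thesis are not reproduced here.

```python
# Classical PCA normal estimation for point clouds, shown as a baseline only;
# k, the brute-force neighbour search, and the test data are assumptions.
import numpy as np

def pca_normals(points, k=16):
    """Estimate a unit normal per point from its k nearest neighbours."""
    points = np.asarray(points, dtype=float)
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        d2 = np.sum((points - p) ** 2, axis=1)
        nbrs = points[np.argsort(d2)[:k]]          # k nearest (brute force)
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        eigvals, eigvecs = np.linalg.eigh(cov)
        n = eigvecs[:, 0]                          # direction of least variance
        normals[i] = n if n[2] >= 0 else -n        # crude consistent orientation
    return normals

# Example: noisy samples of a gently curved surface patch z = 0.05 * x * y.
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, (2000, 2))
z = 0.05 * xy[:, 0] * xy[:, 1] + rng.normal(0, 0.002, 2000)
cloud = np.column_stack([xy, z])
normals = pca_normals(cloud)
print("mean |n_z| (should be close to 1):", np.abs(normals[:, 2]).mean())
```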

    Doctor of Philosophy

    Volumetric parameterization is an emerging field in computer graphics, where volumetric representations with a semi-regular tensor-product structure are desired in applications such as three-dimensional (3D) texture mapping and physically based simulation. Volumetric parameterization is also needed in the Isogeometric Analysis (IA) paradigm, which uses the same parametric space for representing geometry, simulation attributes, and solutions. One of the main advantages of the IA framework is that the user gets feedback directly as attributes of the NURBS model representation, which can represent geometry exactly, avoiding both the need to generate a finite element mesh and the need to reverse engineer the simulation results from the finite element mesh back into the model. Research in this area has largely been concerned with the quality of the analysis and simulation results, assuming the existence of a high-quality volumetric NURBS model appropriate for simulation. However, there are currently no generally applicable approaches to generating such a model or to visualizing the higher-order smooth isosurfaces of the simulation attributes, either as part of current Computer Aided Design or Reverse Engineering systems and methodologies. Furthermore, even though the mesh generation pipeline is circumvented in IA, the quality of the model still significantly influences the analysis result. This work presents a pipeline to create, analyze, and visualize NURBS geometries. Based on the concept of analysis-aware modeling, it focuses in particular on methodologies for decomposing a volumetric domain into simpler pieces based on appropriate midstructures while respecting other relevant interior material attributes. The domain is decomposed such that a tensor-product style parameterization can be established on the subvolumes, where the parameterization matches along subvolume boundaries. The volumetric parameterization is optimized using gradient-based nonlinear optimization algorithms, and data-fitting methods are introduced to fit trivariate B-splines to the parameterized subvolumes with a guaranteed order of accuracy. A visualization method is then proposed that allows direct inspection of isosurfaces of attributes, such as analysis results, embedded in the NURBS geometry. Finally, the various methodologies proposed in this work are demonstrated on complex representations arising in practice and research.
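
    The tensor-product structure mentioned above can be made concrete with a short evaluation routine: a trivariate B-spline volume is a triple sum of univariate B-spline basis functions multiplied by control points. The sketch below uses the textbook Cox-de Boor recursion with clamped uniform knot vectors and a randomly perturbed control lattice, and is offered only as an illustration of the representation, not of the fitting or optimization methods developed in this work.

```python
# Textbook evaluation of a trivariate tensor-product B-spline volume using the
# Cox-de Boor recursion.  Knot vectors and the control lattice are assumptions
# chosen for illustration.
import numpy as np

def bspline_basis(i, p, u, knots):
    """Value of the basis function N_{i,p} at parameter u (Cox-de Boor)."""
    if p == 0:
        if knots[i] <= u < knots[i + 1]:
            return 1.0
        # Make the spline right-continuous at the final knot value.
        if u == knots[-1] and knots[i] < knots[i + 1] == knots[-1]:
            return 1.0
        return 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    if knots[i + p + 1] > knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right

def clamped_knots(n_ctrl, p):
    """Clamped (open uniform) knot vector on [0, 1]."""
    return np.concatenate([np.zeros(p), np.linspace(0, 1, n_ctrl - p + 1), np.ones(p)])

def eval_trivariate(ctrl, degrees, uvw):
    """Evaluate a trivariate tensor-product B-spline at the parameter (u, v, w)."""
    knots = [clamped_knots(ctrl.shape[d], degrees[d]) for d in range(3)]
    point = np.zeros(ctrl.shape[-1])
    for i in range(ctrl.shape[0]):
        Ni = bspline_basis(i, degrees[0], uvw[0], knots[0])
        if Ni == 0.0:
            continue
        for j in range(ctrl.shape[1]):
            Nj = bspline_basis(j, degrees[1], uvw[1], knots[1])
            if Nj == 0.0:
                continue
            for k in range(ctrl.shape[2]):
                Nk = bspline_basis(k, degrees[2], uvw[2], knots[2])
                point += Ni * Nj * Nk * ctrl[i, j, k]
    return point

# Example: a 5x5x5 lattice of control points perturbed from a unit cube.
rng = np.random.default_rng(2)
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 5)] * 3, indexing="ij"), axis=-1)
ctrl = grid + rng.normal(0, 0.02, grid.shape)
print("volume point at (0.5, 0.5, 0.5):", eval_trivariate(ctrl, (2, 2, 2), (0.5, 0.5, 0.5)))
```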

    Sixth Biennial Report : August 2001 - May 2003
