
    Variational blue noise sampling

    Blue noise point sampling is one of the core algorithms in computer graphics. In this paper, we present a new and versatile variational framework for generating point distributions with high-quality blue noise characteristics while precisely adapting to given density functions. Unlike previous approaches based on discrete settings of capacity-constrained Voronoi tessellation, we cast blue noise sample generation as a variational problem in a continuous setting. Based on an accurate evaluation of the gradient of an energy function, we develop an efficient optimization that delivers significantly faster performance than previous optimization-based methods. Our framework extends easily to generating blue noise point samples on manifold surfaces and to multi-class sampling. The optimization formulation also allows us to deal naturally with dynamic domains, such as deformable surfaces, and to yield blue noise samplings with temporal coherence. We present experimental results to validate the efficacy of our variational framework. Finally, we show a variety of applications of the proposed methods, including non-photorealistic image stippling, color stippling, and blue noise sampling on deformable surfaces. © 1995-2012 IEEE.
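    The abstract describes a continuous, gradient-based optimization; that exact formulation is not reproducible from the abstract alone, so the sketch below illustrates only the classic baseline it improves upon: Lloyd-style relaxation of points toward density-weighted Voronoi centroids. The function name, parameters, and the Monte Carlo centroid estimate are all our own assumptions, not the paper's method.

```python
# Generic density-adapted point relaxation in the spirit of CVT/blue-noise
# optimization. A simple Lloyd iteration, NOT the paper's continuous
# variational formulation.
import numpy as np
from scipy.spatial import cKDTree

def relax_points(sites, density, iters=50, n_probe=200_000, rng=None):
    """Relax 2D sites in [0,1]^2 toward a target density.

    sites   : (n, 2) array, the evolving point set
    density : callable mapping (m, 2) positions to non-negative weights
    """
    rng = np.random.default_rng(rng)
    for _ in range(iters):
        probes = rng.random((n_probe, 2))        # Monte Carlo domain samples
        w = density(probes)                      # importance weight per probe
        owner = cKDTree(sites).query(probes)[1]  # nearest site for each probe
        # Estimate the density-weighted centroid of each site's Voronoi region.
        num = np.zeros_like(sites)
        den = np.zeros(len(sites))
        np.add.at(num, owner, probes * w[:, None])
        np.add.at(den, owner, w)
        moved = den > 0
        sites[moved] = num[moved] / den[moved, None]
    return sites
```

    For instance, `relax_points(np.random.default_rng(0).random((1024, 2)), lambda p: 1 + 8 * np.exp(-((p - 0.5) ** 2).sum(1) / 0.02))` concentrates samples near the domain center while keeping them locally evenly spread, the density-adaptation behavior the paper targets.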

    Robust object-based algorithms for direct shadow simulation

    Direct shadow algorithms generate shadows by simulating the direct lighting interaction in a virtual environment. The main challenge of accurate direct shadow computation is its cost. In this dissertation, we develop a new robust object-based shadow framework that provides realistic shadows at interactive frame rates on dynamic scenes. Our contributions include new robust object-based soft shadow algorithms and efficient interactive implementations. We start by formalizing the direct shadow problem: following the light transport formulation, we first define what robust direct shadows are. We then study existing interactive direct shadow techniques and show that real-time direct shadow simulation remains an open problem; even the so-called physically plausible soft shadow algorithms still rely on approximations. Nevertheless, we observe that, despite their geometric constraints, object-based approaches seem well suited for accurate solutions. Starting from this analysis, we investigate the existing object-based shadow framework and discuss its robustness issues. We propose a new technique that drastically improves the resulting shadow quality by extending this framework with a penumbra blending stage, and we present a practical implementation of it. From the results obtained, we observe that, despite desirable properties, inherent theoretical and implementation limitations reduce the overall quality and performance of this algorithm. We then present a new object-based soft shadow algorithm that merges the efficiency of real-time object-based shadows with the accuracy of their offline generalization. The proposed algorithm relies on a new local evaluation of the number of occluders between two points, i.e., the depth complexity. We describe how we use this algorithm to sample the depth complexity between any visible receiver and the light source. From this information, we compute shadows by either modulating the direct lighting or numerically solving the direct illumination, with an accuracy that depends on the light sampling strategy. We then extend our algorithm to handle shadows cast by semi-opaque occluders. Finally, we present an efficient implementation of this framework that demonstrates that object-based shadows can be used efficiently on complex dynamic environments. In real-time rendering, it is common to represent highly detailed objects with few triangles and transmittance textures that encode their binary opacity. Object-based techniques do not handle such perforated triangles: by their nature, they can only evaluate shadows cast by models whose shape is explicitly defined by geometric primitives. We describe a new robust object-based algorithm that addresses this limitation, and we show that it can be efficiently combined with existing object-based frameworks to evaluate approximate shadows or simulate the direct illumination for both common meshes and perforated triangles. The proposed implementation shows that such a combination provides a robust and efficient direct lighting framework, well suited to domains ranging from quality-sensitive to performance-critical applications.
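    The central quantity above is the depth complexity: how many occluding surfaces lie between a receiver point and a sample on the light source. The dissertation evaluates it with a dedicated GPU algorithm; the sketch below is only a brute-force CPU illustration of the same quantity, built on the standard Möller-Trumbore segment/triangle test, with all names our own.

```python
# Conceptual CPU sketch: soft shadows from depth complexity, i.e. the number
# of occluders between a receiver and each sample on an area light. NOT the
# dissertation's GPU algorithm.
import numpy as np

def ray_hits_triangle(orig, dest, tri, eps=1e-9):
    """Möller-Trumbore segment/triangle intersection; tri is a (3, 3) array."""
    d = dest - orig
    e1, e2 = tri[1] - tri[0], tri[2] - tri[0]
    p = np.cross(d, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return False                      # segment parallel to the triangle
    inv = 1.0 / det
    s = orig - tri[0]
    u = np.dot(s, p) * inv
    if u < 0 or u > 1:
        return False
    q = np.cross(s, e1)
    v = np.dot(d, q) * inv
    if v < 0 or u + v > 1:
        return False
    t = np.dot(e2, q) * inv
    return eps < t < 1 - eps              # hit strictly between the endpoints

def light_visibility(receiver, light_samples, triangles):
    """Fraction of light samples with zero depth complexity (unoccluded)."""
    visible = 0
    for ls in light_samples:
        depth_complexity = sum(ray_hits_triangle(receiver, ls, tri)
                               for tri in triangles)
        visible += (depth_complexity == 0)
    return visible / len(light_samples)
```

    Averaging visibility over the light samples produces the penumbra; scaling the direct lighting by this fraction corresponds to the "modulating the direct lighting" option the abstract mentions.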

    Multisource Point Clouds, Point Simplification and Surface Reconstruction

    As data acquisition technology continues to advance, surface reconstruction algorithms must be improved and upgraded to keep pace. In this paper, we utilized multiple terrestrial Light Detection And Ranging (LiDAR) systems to acquire point clouds with different levels of complexity, namely dynamic and rigid targets, for surface reconstruction. We propose a robust and effective method to obtain simplified and uniformly resampled points for surface reconstruction. In our evaluation, the method achieved a point reduction of up to 99.371% with a standard deviation of 0.2 cm. In addition, well-known surface reconstruction methods, i.e., Alpha shapes, Screened Poisson reconstruction (SPR), the Crust, and Algebraic point set surfaces (APSS Marching Cubes), were utilized for object reconstruction. We evaluated the benefits of exploiting simplified and uniform points, as well as points of different densities, for surface reconstruction. These reconstruction methods and their capacities for handling data imperfections were analyzed and discussed. The findings are that (i) the capacity of surface reconstruction to deal with diverse objects needs to be improved; (ii) when the number of points reaches the level of millions (e.g., approximately five million points in our data), point simplification is necessary, as otherwise the reconstruction methods might fail; (iii) for some reconstruction methods, the number of output mesh elements is proportional to the number of input points, while a few methods behave in the opposite way; (iv) all reconstruction methods benefit from the reduced running time that simplification brings; and (v) a balance between geometric detail and the level of smoothing is needed: some methods produce detailed and accurate geometry but cope poorly with data imperfections, while others exhibit the opposite characteristics.
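    The abstract does not spell out its simplification algorithm, so the sketch below shows only a common baseline for producing simplified, uniformly distributed points: voxel-grid centroid resampling. The function name and parameters are assumptions, not the paper's method.

```python
# Minimal sketch of uniform point-cloud simplification by voxel-grid
# averaging: all points falling into one voxel are replaced by their centroid.
import numpy as np

def voxel_simplify(points, voxel_size):
    """points: (n, d) array; returns one centroid per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)   # voxel index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    n_vox = inverse.max() + 1
    sums = np.zeros((n_vox, points.shape[1]))
    counts = np.zeros(n_vox)
    np.add.at(sums, inverse, points)                        # accumulate per voxel
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]
```

    Choosing `voxel_size` trades point count against fidelity, which mirrors finding (v) above: coarser grids smooth away geometric detail.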

    Algorithms for the reconstruction, analysis, repairing and enhancement of 3D urban models from multiple data sources

    Over the last few years, there has been notable growth in the field of digitization of 3D buildings and urban environments. Substantial improvements in both scanning hardware and reconstruction algorithms have led to representations of buildings and cities that can be remotely transmitted and inspected in real time. Among the applications that implement these technologies are several GPS navigators and virtual globes such as Google Earth or the tools provided by the Institut Cartogràfic i Geològic de Catalunya. In this thesis, we conceptualize cities as a collection of individual buildings, and hence focus on processing one structure at a time rather than on larger-scale processing of urban environments. Nowadays, there is a wide diversity of digitization technologies, and choosing the appropriate one is key for each particular application. Roughly, these techniques can be grouped into three main families: time-of-flight (terrestrial and aerial LiDAR), photogrammetry (street-level, satellite, and aerial imagery), and human-edited vector data (cadastre and other map sources). Each has its advantages in terms of covered area, data quality, economic cost, and processing effort. Aircraft- and car-mounted LiDAR devices are optimal for sweeping huge areas, but acquiring and calibrating such devices is not a trivial task; moreover, the capture is done by scan lines, which must be registered using GPS and inertial data. As an alternative, terrestrial LiDAR devices are more accessible but cover smaller areas, and their sampling strategy usually produces massive point clouds with over-represented planar regions. A more inexpensive option is street-level imagery: a dense set of images captured with a commodity camera can be fed to state-of-the-art multi-view stereo algorithms to produce realistic-enough reconstructions. A further advantage of this approach is that it captures high-quality color data, although the geometric information is usually lacking. In this thesis, we analyze in depth some of the shortcomings of these data-acquisition methods and propose new ways to overcome them. We focus mainly on the technologies that allow high-quality digitization of individual buildings: terrestrial LiDAR for geometric information and street-level imagery for color information. Our main goal is the processing and completion of detailed 3D urban representations. For this, we work with multiple data sources and combine them when possible to produce models that can be inspected in real time. Our research has focused on the following contributions:
    - Effective and feature-preserving simplification of massive point clouds.
    - Normal estimation algorithms explicitly designed for LiDAR data (a generic baseline is sketched after this abstract).
    - Low-stretch panoramic representation for point clouds.
    - Semantic analysis of street-level imagery for improved multi-view stereo reconstruction.
    - Color improvement through heuristic techniques and the registration of LiDAR and imagery data.
    - Efficient and faithful visualization of massive point clouds using image-based techniques.
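    Of the listed contributions, normal estimation is the most self-contained to illustrate. The thesis's LiDAR-specific estimator is not described in the abstract, so the sketch below shows only the standard local-PCA baseline, with normals oriented toward a known scanner position (information terrestrial LiDAR provides); all names and parameters are hypothetical.

```python
# Baseline point-cloud normal estimation by local PCA, oriented toward the
# scanner; a generic stand-in, NOT the thesis's LiDAR-specific estimator.
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, scanner_pos, k=16):
    """points: (n, 3) array; scanner_pos: (3,) sensor location."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)          # k nearest neighbors per point
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        nbrs = points[nb] - points[nb].mean(axis=0)
        # The normal is the direction of least variance in the neighborhood.
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        n = vt[-1]
        # Terrestrial LiDAR sees each surface from the scanner side,
        # so flip every normal to face the sensor.
        if np.dot(n, scanner_pos - points[i]) < 0:
            n = -n
        normals[i] = n
    return normals
```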

    Doctor of Philosophy

    Shape analysis is a well-established tool for processing surfaces. It is often a first step in performing tasks such as segmentation, symmetry detection, and finding correspondences between shapes. Shape analysis is traditionally employed on well-sampled surfaces where the geometry and topology are precisely known. When the surface takes the form of a point cloud with nonuniform sampling, noise, and incomplete measurements, traditional shape analysis methods perform poorly. Although one may first reconstruct a surface from such a point cloud before performing shape analysis, if the reconstructed geometry and topology are far from the true surface, this can adversely affect the subsequent analysis. Furthermore, for triangulated surfaces containing noise, thin sheets, and poorly shaped triangles, existing shape analysis methods can be highly unstable. This thesis explores methods of shape analysis applied directly to such defect-laden shapes. We first study the problem of surface reconstruction in order to better understand the types of point clouds for which reconstruction methods have difficulty. To this end, we have devised a benchmark for surface reconstruction, establishing a standard for measuring reconstruction error. We then develop a new method for consistently orienting normals of such challenging point clouds using a collection of harmonic functions intrinsically defined on the point cloud. Next, we develop a new shape analysis tool that is tolerant to imperfections, by constructing distances directly on the point cloud, defined as the likelihood of two points belonging to a mutually common medial ball, and we apply it to segmentation and reconstruction. We extend this distance measure to define a diffusion process on the point cloud, tolerant to missing data, which we use to match incomplete shapes undergoing nonrigid deformation. Lastly, we have developed an intrinsic method for multiresolution remeshing of a poor-quality triangulated surface via spectral bisection.
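    The medial-ball likelihood distance and the diffusion built on it are specific to this thesis and cannot be reconstructed from the abstract. As a generic stand-in for "a diffusion process on the point cloud", the sketch below performs one implicit heat-diffusion step on a k-nearest-neighbor graph Laplacian; every name and parameter is our own assumption.

```python
# Generic diffusion on a point cloud via a k-NN graph Laplacian; a stand-in
# illustration, NOT the thesis's medial-ball-based construction.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix, identity
from scipy.sparse.linalg import spsolve

def diffuse(points, signal, k=8, t=1e-3, sigma=None):
    """One implicit heat step: solve (I + t*L) u = signal on the k-NN graph."""
    n = len(points)
    dist, idx = cKDTree(points).query(points, k=k + 1)
    dist, idx = dist[:, 1:], idx[:, 1:]              # drop the self-neighbor
    sigma = sigma or dist.mean()
    w = np.exp(-(dist / sigma) ** 2)                 # Gaussian edge weights
    rows = np.repeat(np.arange(n), k)
    W = coo_matrix((w.ravel(), (rows, idx.ravel())), shape=(n, n))
    W = 0.5 * (W + W.T)                              # symmetrize the graph
    deg = np.asarray(W.sum(axis=1)).ravel()
    D = coo_matrix((deg, (np.arange(n), np.arange(n))), shape=(n, n))
    L = D - W                                        # combinatorial Laplacian
    return spsolve((identity(n) + t * L).tocsc(), signal)
```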

    Rapid Visualization of Large Point-Based Surfaces

    Point-based surfaces can be directly generated by 3D scanners and avoid the generation and storage of an explicit topology for the sampled geometry, which saves time and storage space for very dense and large objects, such as scanned statues and other archaeological artefacts [Duguet 2004]. We propose a fast processing pipeline that turns large point-based surfaces into real-time, appearance-preserving polygonal renderings. Our goal is to reduce the time needed to go from a point set of hundreds of millions of samples to a high-resolution visualization, taking advantage of modern graphics hardware tuned for normal mapping of polygons. Our approach starts with an out-of-core generation of a coarse local triangulation of the original model. The resulting coarse mesh is enriched by applying a set of maps which capture the high-frequency features of the original data set. As an example we use the normal component of the samples for these maps, since normal maps efficiently provide accurate local illumination; but our approach also suits other point attributes such as color or position (displacement maps). These maps also come from an out-of-core process, consuming the complete input data in a streaming fashion. Sampling issues in the maps are addressed using an efficient 2D diffusion algorithm. Our main contribution is to handle such large unorganized point clouds directly through this two-pass algorithm, without the time-consuming meshing or parameterization steps required by current state-of-the-art high-resolution visualization methods. One of the main advantages is to express most of the fine features present in the original large point clouds as textures in the large texture memory usually provided by graphics devices, using only a lazy local parameterization. Our technique is a complementary tool to high-quality, but costly, out-of-core visualization systems. Direct applications include interactive preview at high screen resolution of very detailed scanned objects such as statues, inclusion of large point clouds in standard polygonal 3D engines, and browsing of 3D databases.
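    The abstract says sampling issues in the baked maps are fixed with "an efficient 2D diffusion algorithm", but the exact scheme is not given. The following is a minimal, assumed illustration of that step: iteratively filling empty texels from their already-known neighbors until the map is dense.

```python
# Minimal sketch of diffusion-style hole filling for sparsely splatted texture
# maps (e.g. normal maps baked from points); an assumed illustration, NOT the
# paper's exact diffusion scheme.
import numpy as np

def fill_holes(tex, mask, iters=200):
    """tex: (H, W, C) float map; mask: (H, W) bool, True where a sample exists.
    Empty texels are repeatedly set to the mean of their known 4-neighbors."""
    tex = tex.copy()
    known = mask.copy()
    for _ in range(iters):
        # np.roll wraps at the image border, which is adequate for a sketch.
        shifted = [np.roll(tex, s, axis=a) for a in (0, 1) for s in (1, -1)]
        kshift = [np.roll(known, s, axis=a) for a in (0, 1) for s in (1, -1)]
        num = sum(t * k[..., None] for t, k in zip(shifted, kshift))
        den = sum(kshift)[..., None]
        update = ~known & (den[..., 0] > 0)   # unknown texels with known neighbors
        tex[update] = num[update] / den[update]
        known |= update
    return tex
```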

    Creating 3D models of cultural heritage sites with terrestrial laser scanning and 3D imaging

    The advent of terrestrial laser scanners has made digital preservation of cultural heritage sites affordable, producing accurate and detailed 3D computer-model representations of all kinds of 3D objects, such as buildings, infrastructure, and even entire landscapes. However, one of the key issues with this technique is the large number of recorded points, a problem further intensified by recent advances in laser-scanning technology, which have increased data acquisition rates from 25 thousand to 1 million points per second. This research presents a workflow for processing large-volume laser-scanning data, with a special focus on the needs of the Zamani initiative. The research project, based at the University of Cape Town, spatially documents African cultural heritage sites and landscapes and produces meshed 3D models of various historically important objects, such as fortresses, mosques, churches, castles, palaces, rock art shelters, statues, stelae, and even landscapes.

    Visual Data Representation using Context-Aware Samples

    The rapid growth in the complexity of geometric models has necessitated a revision of several conventional techniques in computer graphics. At the heart of this trend is the representation of geometry with locally constant approximations using independent sample primitives. This generally leads to a high sampling rate and thus a high cost of representation, transmission, and rendering. We advocate an alternative approach involving context-aware samples that capture the local variation of the geometry. We detail two approaches: one based on differential geometry and the other based on statistics. Our differential-geometry-based approach captures the context of the local geometry using an estimate of the local Taylor series expansion. We render such samples on programmable graphics processing units (GPUs) by fast approximation of the geometry in screen space. The benefits of this representation can also be seen in other applications, such as the simulation of light transport. In our statistics-based approach we capture the context of the local geometry using Principal Component Analysis (PCA). This allows us to achieve hierarchical detail by modeling the geometry non-deterministically as a hierarchical probability distribution. We approximate the geometry and its attributes using quasi-random sampling. Our results show a significant rendering speedup and savings in geometric bandwidth compared to current approaches.
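    The statistics-based approach summarizes local geometry with PCA and treats it as a probability distribution that can be resampled at any resolution. A minimal sketch of that idea, under our own naming and not the thesis implementation, follows: fit a Gaussian to a cluster of points, then draw surrogate points from it at any desired count.

```python
# Sketch of the statistics-based idea: summarize a point cluster as a Gaussian
# (mean + principal axes) via PCA, then resample it at an arbitrary density.
import numpy as np

def fit_gaussian(points):
    """Return (mean, principal axes, per-axis stddevs) of a (m, 3) cluster."""
    mean = points.mean(axis=0)
    cov = np.cov(points - mean, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)         # PCA of the covariance
    return mean, evecs, np.sqrt(np.maximum(evals, 0.0))

def resample(mean, axes, std, n, rng=None):
    """Draw n surrogate points from the fitted Gaussian."""
    rng = np.random.default_rng(rng)
    return mean + rng.standard_normal((n, 3)) * std @ axes.T
```

    Applying `fit_gaussian` recursively to spatial subdivisions of a model would yield the kind of hierarchical, non-deterministic level of detail the abstract describes: coarse levels use few broad Gaussians, fine levels many tight ones.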

    Ambient Occlusion on Mobile: an empirical comparison

    In this thesis, we study the feasibility of screen-space ambient occlusion on a range of mobile devices. We implement several of the most popular techniques and propose two rendering pipelines, a custom algorithm, and an optimisation that can be applied to any of the algorithms to reduce computation time.
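    As a point of reference for what these techniques compute, the sketch below estimates ambient occlusion from a depth buffer alone: each pixel is darkened according to how many nearby samples are closer to the camera. Real SSAO variants run per-fragment on the GPU in view space; this CPU version, with assumed names and parameters, is only illustrative and is not one of the thesis's implementations.

```python
# Minimal CPU sketch of depth-buffer-only ambient occlusion: count nearby
# samples whose stored depth is in front of the centre pixel.
import numpy as np

def ssao(depth, radius=4, n_samples=16, bias=1e-3, rng=None):
    """depth: (H, W) float depth buffer; returns ambient visibility in [0, 1]."""
    rng = np.random.default_rng(rng)
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    occluded = np.zeros((h, w))
    for _ in range(n_samples):
        dy, dx = rng.integers(-radius, radius + 1, size=2)   # random offset
        sy = np.clip(ys + dy, 0, h - 1)
        sx = np.clip(xs + dx, 0, w - 1)
        # A neighbor strictly in front of this pixel counts as an occluder.
        occluded += depth[sy, sx] < depth - bias
    return 1.0 - occluded / n_samples
```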