
    A Comparative Study on Polygonal Mesh Simplification Algorithms

    Polygonal meshes are a common way of representing three-dimensional surface models in many areas of computer graphics and geometry processing. However, as technology evolves, polygonal models are becoming more and more complex. As the complexity of a model increases, its visual approximation of real-world objects improves, but there is a trade-off between this improved visual approximation and the cost of processing the model. To reduce this cost, the number of polygons in a model can be reduced by mesh simplification algorithms. These algorithms are so widely used that nearly all popular mesh editing libraries include at least one of them. In this work, the polygonal simplification algorithms embedded in the open-source libraries CGAL, VTK, and OpenMesh are compared using the Metro geometric error measurement tool. In this way, we aim to provide guidance for developers on implementing polygonal mesh simplification with publicly available mesh libraries.
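    Metro measures surface deviation by sampling one mesh and measuring distances to the other. The following sketch is a rough approximation of that idea rather than Metro's actual implementation: it estimates two-sided Hausdorff and mean errors by uniform surface sampling and nearest-neighbor queries. The sampling density and the SciPy KD-tree are choices made here for illustration.

```python
# A minimal sketch of a Metro-style geometric error measurement between an
# original and a simplified triangle mesh. It approximates the two-sided
# Hausdorff and mean distances by densely sampling both surfaces and querying
# nearest sampled points (Metro itself uses exact point-to-triangle
# distances; this simplification is for illustration only).
import numpy as np
from scipy.spatial import cKDTree

def sample_surface(vertices, faces, n_samples=100_000, rng=None):
    """Draw points uniformly from the surface of a triangle mesh."""
    rng = np.random.default_rng() if rng is None else rng
    tris = vertices[faces]                              # (F, 3, 3)
    # Area-weighted triangle selection.
    cross = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    idx = rng.choice(len(faces), size=n_samples, p=areas / areas.sum())
    # Uniform barycentric coordinates inside each chosen triangle.
    r1, r2 = rng.random(n_samples), rng.random(n_samples)
    s1 = np.sqrt(r1)
    a, b, c = 1.0 - s1, s1 * (1.0 - r2), s1 * r2
    t = tris[idx]
    return a[:, None] * t[:, 0] + b[:, None] * t[:, 1] + c[:, None] * t[:, 2]

def one_sided_error(src_pts, dst_pts):
    """Max and mean distance from each source sample to the nearest target sample."""
    d, _ = cKDTree(dst_pts).query(src_pts)
    return d.max(), d.mean()

def metro_like_error(mesh_a, mesh_b, n_samples=100_000):
    """Return (symmetric Hausdorff estimate, symmetric mean error estimate)."""
    pa = sample_surface(*mesh_a, n_samples)
    pb = sample_surface(*mesh_b, n_samples)
    haus_ab, mean_ab = one_sided_error(pa, pb)
    haus_ba, mean_ba = one_sided_error(pb, pa)
    return max(haus_ab, haus_ba), 0.5 * (mean_ab + mean_ba)
```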

    A survey of real-time crowd rendering

    In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
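    To make the runtime LoD selection and frustum culling discussion concrete, the sketch below shows one plausible way to batch crowd agents by distance-based LoD after a plane-based visibility test. The distance thresholds, the inward-facing plane convention, and all function names are illustrative assumptions, not an approach taken from any specific surveyed paper.

```python
# A minimal sketch of per-frame crowd processing: cull agents against the view
# frustum, then bucket the survivors by a distance-based LoD so each bucket can
# be drawn with instancing. All thresholds are assumptions for illustration.
import numpy as np

LOD_DISTANCES = [10.0, 40.0, 120.0]   # metres: full mesh, reduced mesh, impostor

def select_lod(distance):
    """Return 0 (full detail) .. len(LOD_DISTANCES) (cheapest representation)."""
    for lod, limit in enumerate(LOD_DISTANCES):
        if distance < limit:
            return lod
    return len(LOD_DISTANCES)

def visible_agents(agent_positions, frustum_planes):
    """Keep agents whose position lies inside every frustum plane.

    frustum_planes: (6, 4) array of plane equations (nx, ny, nz, d),
    normals pointing inward.
    """
    homo = np.hstack([agent_positions, np.ones((len(agent_positions), 1))])
    inside = (homo @ frustum_planes.T >= 0).all(axis=1)
    return np.nonzero(inside)[0]

def build_render_batches(agent_positions, cam_pos, frustum_planes):
    """Group visible agents by LoD; each group maps to one instanced draw call."""
    batches = {}
    for i in visible_agents(agent_positions, frustum_planes):
        d = np.linalg.norm(agent_positions[i] - cam_pos)
        batches.setdefault(select_lod(d), []).append(i)
    return batches
```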

    Saliency detection for large-scale mesh decimation

    Highly complex and dense models of 3D objects have recently become indispensable in digital industries. Mesh decimation therefore plays a crucial role in the production pipeline for efficiently obtaining visually convincing yet compact versions of complex meshes. However, the current pipeline typically does not let artists control the decimation process beyond a simplification rate. A preferred approach in production settings therefore splits the process into a first pass of saliency detection that highlights areas of greater detail, allowing artists to iterate until satisfied before the model is simplified. We propose a novel, efficient multi-scale method to compute mesh saliency at coarse and finer scales, based on fast mesh entropy of local surface measurements. Unlike previous approaches, we ensure a robust and straightforward calculation of mesh saliency even for densely tessellated models with millions of polygons. Moreover, we introduce a new adaptive subsampling and interpolation algorithm for saliency estimation. Our implementation achieves speedups of up to three orders of magnitude over prior approaches. Experimental results showcase its resilience in problematic scenarios and its ability to scale efficiently to multi-million-vertex meshes. Our evaluation with artists in the entertainment industry also demonstrates its applicability to real use-case scenarios.
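    The core idea, per-vertex entropy of local surface measurements aggregated over several scales, can be sketched as follows. The neighborhood radii, the histogram bin count, and the use of a precomputed per-vertex curvature as the measurement are assumptions for illustration; the paper's fast entropy computation and adaptive subsampling, not shown here, are what let it scale to multi-million-vertex meshes.

```python
# A minimal sketch of entropy-based multi-scale mesh saliency: per-vertex
# saliency is the Shannon entropy of a local surface measurement (here, an
# assumed precomputed per-vertex curvature) inside neighborhoods of
# increasing radius, averaged over the scales.
import numpy as np
from scipy.spatial import cKDTree

def local_entropy(values, bins=16):
    """Shannon entropy of a histogram over one neighborhood's values."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mesh_saliency(vertices, curvature, radii=(0.01, 0.02, 0.04), bins=16):
    """Average entropy over several scales; radii are fractions of the bbox diagonal."""
    diag = np.linalg.norm(vertices.max(axis=0) - vertices.min(axis=0))
    tree = cKDTree(vertices)
    saliency = np.zeros(len(vertices))
    for r in radii:
        for i, v in enumerate(vertices):   # the paper subsamples here instead
            nbrs = tree.query_ball_point(v, r * diag)
            saliency[i] += local_entropy(curvature[nbrs], bins)
    return saliency / len(radii)
```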

    Terrain guided multi-level instancing of highly complex plant populations


    Simplification of three-dimensional computer-aided models to speed up rendering

    Visualization of three-dimensional (3D) computer-aided design models is an integral part of the design process. Large assemblies such as plant or building designs contain a substantial amount of geometric data. The advent of mobile devices and virtual reality headsets sets new constraints on visualization performance and on the amount of geometric data. Our goal is to improve visualization performance and reduce memory consumption by simplifying 3D models while keeping the quality of the simplification output stable regardless of the geometric complexity of the input mesh. We survey the current state of 3D mesh simplification methods that use geometry decimation. We design and implement our own data structure for geometry decimation. Based on the existing research, we select and use an edge decimation method for model simplification. To free the user from configuring the edge decimation level per model by hand, and to retain a stable quality of the simplification output, we propose a threshold parameter, the edge decimation cost threshold. The threshold is calculated by multiplying the length of the model's bounding box diagonal by a user-defined scale parameter (see the sketch below). Our results show that the edge decimation cost threshold works as expected. The geometry decimation algorithm manages to simplify models with round surfaces at an excellent simplification rate. Based on the edge decimation cost threshold, the algorithm terminates geometry decimation for models that have a large number of planar surfaces; without the threshold, the simplification quickly leads to large geometric errors. The visualization performance improvement from the simplification scales at almost the same rate as the simplification rate.
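    A minimal sketch of this stopping rule under stated assumptions: collapses are processed cheapest-first, and decimation stops when the cheapest remaining collapse costs more than the bounding-box diagonal times the user-defined scale. The length-based edge cost and the collapse_edge callback are illustrative placeholders, not the thesis's data structure or cost function.

```python
# A minimal sketch of the edge decimation cost threshold: collapse edges in
# order of increasing cost, and terminate once the cheapest remaining collapse
# exceeds (bounding-box diagonal * user scale). The length-based cost and the
# collapse_edge callback are illustrative placeholders.
import heapq
import numpy as np

def decimation_threshold(vertices, user_scale):
    """Bounding-box diagonal length times the user-defined scale factor."""
    diag = np.linalg.norm(vertices.max(axis=0) - vertices.min(axis=0))
    return diag * user_scale

def decimate(vertices, edges, user_scale, collapse_edge):
    """Collapse edges cheapest-first until the threshold stops us.

    collapse_edge(vertices, edges, e) is assumed to perform one collapse and
    return the updated (vertices, edges); a real decimator would also
    re-queue the edges whose cost changed.
    """
    threshold = decimation_threshold(vertices, user_scale)
    heap = [(np.linalg.norm(vertices[a] - vertices[b]), (a, b))
            for a, b in edges]
    heapq.heapify(heap)
    while heap:
        cost, e = heapq.heappop(heap)
        if cost > threshold:   # stable output quality regardless of input size
            break
        vertices, edges = collapse_edge(vertices, edges, e)
    return vertices, edges
```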

    Algorithms for the reconstruction, analysis, repairing and enhancement of 3D urban models from multiple data sources

    Over the last few years, there has been a notable growth in the field of digitization of 3D buildings and urban environments. The substantial improvement of both scanning hardware and reconstruction algorithms has led to the development of representations of buildings and cities that can be remotely transmitted and inspected in real-time. Among the applications that implement these technologies are several GPS navigators and virtual globes such as Google Earth or the tools provided by the Institut Cartogràfic i Geològic de Catalunya. In particular, in this thesis, we conceptualize cities as a collection of individual buildings. Hence, we focus on the individual processing of one structure at a time, rather than on the larger-scale processing of urban environments. Nowadays, there is a wide diversity of digitization technologies, and the choice of the appropriate one is key for each particular application. Roughly, these techniques can be grouped into three main families:
    - Time-of-flight (terrestrial and aerial LiDAR).
    - Photogrammetry (street-level, satellite, and aerial imagery).
    - Human-edited vector data (cadastre and other map sources).
    Each of these has its advantages in terms of covered area, data quality, economic cost, and processing effort. Plane- and car-mounted LiDAR devices are optimal for sweeping huge areas, but acquiring and calibrating such devices is not a trivial task. Moreover, the capturing process is done by scan lines, which need to be registered using GPS and inertial data. As an alternative, terrestrial LiDAR devices are more accessible but cover smaller areas, and their sampling strategy usually produces massive point clouds with over-represented planar regions. A more inexpensive option is street-level imagery. A dense set of images captured with a commodity camera can be fed to state-of-the-art multi-view stereo algorithms to produce realistic-enough reconstructions. Another advantage of this approach is that it captures high-quality color data, whereas the geometric information is usually lacking. In this thesis, we analyze in depth some of the shortcomings of these data-acquisition methods and propose new ways to overcome them. Mainly, we focus on the technologies that allow high-quality digitization of individual buildings: terrestrial LiDAR for geometric information and street-level imagery for color information. Our main goal is the processing and completion of detailed 3D urban representations. For this, we work with multiple data sources and combine them when possible to produce models that can be inspected in real-time. Our research has focused on the following contributions:
    - Effective and feature-preserving simplification of massive point clouds.
    - Normal estimation algorithms explicitly designed for LiDAR data (see the sketch below).
    - Low-stretch panoramic representation for point clouds.
    - Semantic analysis of street-level imagery for improved multi-view stereo reconstruction.
    - Color improvement through heuristic techniques and the registration of LiDAR and imagery data.
    - Efficient and faithful visualization of massive point clouds using image-based techniques.
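    For context on the normal estimation contribution, the sketch below shows the classic PCA baseline that LiDAR-specific estimators typically improve upon: the normal at a point is the smallest-eigenvalue eigenvector of its neighborhood covariance, oriented toward the scanner when the viewpoint is known. The neighborhood size k is an assumed parameter, and this generic baseline is not the thesis's method.

```python
# A minimal sketch of PCA-based normal estimation for point clouds: fit a
# local plane to each point's k nearest neighbors and take the plane normal
# (the eigenvector of the 3x3 covariance with the smallest eigenvalue).
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=16, viewpoint=None):
    tree = cKDTree(points)
    _, nbrs = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, idx in enumerate(nbrs):
        nbhd = points[idx] - points[idx].mean(axis=0)
        # Eigenvectors come back in ascending eigenvalue order.
        _, eigvecs = np.linalg.eigh(nbhd.T @ nbhd)
        n = eigvecs[:, 0]
        # Orient toward the scanner position when it is known.
        if viewpoint is not None and np.dot(viewpoint - points[i], n) < 0:
            n = -n
        normals[i] = n
    return normals
```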

    Feature preserving decimation of urban meshes

    Commercial buildings as well as residential houses represent core structures of any modern-day urban or semi-urban area. Consequently, 3D models of urban buildings are of paramount importance to a majority of digital urban applications such as city planning, 3D mapping and navigation, and video games and movies, among others. However, current studies suggest that existing 3D modeling approaches often involve high computational cost and large storage volumes for processing the geometric details of the buildings. Therefore, it is essential to generate concise digital representations of urban buildings from 3D measurements or images, so that the acquired information can be efficiently utilized for various urban applications. Such concise representations, often referred to as “lightweight” models, strive to capture the details of the physical objects with less computational storage. Furthermore, lightweight models consume less bandwidth for online applications and facilitate accelerated visualizations. In this thesis, we provide an assessment study of state-of-the-art data structures for storing lightweight urban buildings. Then we propose a method to generate lightweight yet highly detailed 3D building models from LiDAR scans. The lightweight modeling pipeline comprises the following stages: mesh reconstruction, feature point detection, and mesh decimation through gradient structure tensors. The gradient of each vertex of the reconstructed mesh is obtained by estimating the vertex confidence through eigenanalysis and is further encoded into a 3 × 3 structure tensor. We analyze the eigenvalues of the structure tensor representing gradient variations and use them to classify vertices into various feature classes, e.g., edges and corners. While decimating the mesh, feature points are preserved through a mean cost-based edge collapse operation. Experiments on different building facade models show that our method is effective in generating simplified models with a trade-off between simplification and accuracy.
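    A minimal sketch of the eigenvalue-based classification step, under stated assumptions: a 3 × 3 structure tensor is accumulated from the unit normals of the faces around a vertex, and the number of significant eigenvalues separates planar vertices, edge vertices, and corners. The unweighted outer-product accumulation and the significance ratio are illustrative choices; the thesis builds its tensor from confidence-weighted vertex gradients.

```python
# A minimal sketch of structure-tensor feature classification: a vertex whose
# incident face normals span one direction is planar (1 significant
# eigenvalue), two directions indicate an edge (2), three a corner (3).
import numpy as np

def classify_vertex(face_normals, ratio=0.1):
    """face_normals: (k, 3) unit normals of the faces incident to one vertex."""
    T = np.zeros((3, 3))
    for n in face_normals:
        T += np.outer(n, n)              # accumulate the structure tensor
    eigvals = np.linalg.eigvalsh(T)      # ascending order
    significant = int(np.sum(eigvals > ratio * eigvals[-1]))
    return {1: "planar", 2: "edge", 3: "corner"}[significant]
```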