20 research outputs found

    Capturing improved TLS data of Maulbronn Monastery and integration of the mesh into the existing UNITY visualization

    This Master's thesis improves the existing 3D visualization of Maulbronn Monastery, in which some areas show excess brightness produced by the windows. To this end, the old scans belonging to an existing FARO SCENE project were analysed, and the areas that had to be rescanned to improve the texture were identified. Tests were then carried out to find the parameters best suited to improving the quality of the HDR images, and new scans were taken with those parameters. The new data was processed and registered against the data from the previous scans, resulting in a mesh for each zone together with its position file and HDR images. Geomagic Qualify was also used to improve the mesh geometry. The images were then edited in Photoshop to provide a better texture for the mesh, and masks were created so that image areas of poor quality are not applied. The images were reprojected onto the mesh with Agisoft Metashape, producing a tiled model; only the last level of the tiled model was used to incorporate the new meshes into UNITY. Finally, the texture and some aspects related to walkability were improved through several scripts. The project is divided into three parts. The first is the theoretical part, which explains the basic concepts of 3D visualization and data processing as well as the software used. The second explains the practical work, what it consists of and the steps into which it is divided. The last part of the document presents the results, conclusions, future lines of work and references.
    Arcón Navarro, R. (2020). Capturing improved TLS data of Maulbronn Monastery and integration of the mesh into the existing UNITY visualization. http://hdl.handle.net/10251/139512 (TFG)
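    The core texturing problem above, excluding overexposed window regions so they are not projected onto the mesh, can be pictured in a few lines. The following is only a minimal numpy illustration under an assumed luminance threshold, not the thesis's actual Photoshop masking workflow; the function name and threshold are hypothetical.

        # Minimal sketch (not the thesis's actual Photoshop workflow): build a binary
        # mask that excludes overexposed regions, e.g. bright windows, so those pixels
        # are not used when reprojecting the image onto the mesh.
        # The threshold and image source are illustrative assumptions.
        import numpy as np

        def build_exposure_mask(hdr_image, luminance_threshold=0.95):
            """Return a mask that is True where a pixel may be used for texturing."""
            # Approximate luminance from linear RGB (Rec. 709 weights).
            luminance = (0.2126 * hdr_image[..., 0]
                         + 0.7152 * hdr_image[..., 1]
                         + 0.0722 * hdr_image[..., 2])
            # Pixels close to the clipping level are treated as overexposed.
            return luminance < luminance_threshold * luminance.max()

        # Usage with a synthetic image:
        image = np.random.rand(480, 640, 3).astype(np.float32)
        mask = build_exposure_mask(image)
        print("usable pixels:", mask.mean())

    In a pipeline like the one described, such a mask would accompany each image so that the reprojection step skips the flagged pixels.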

    Error-aware construction and rendering of multi-scan panoramas from massive point clouds

    Obtaining 3D realistic models of urban scenes from accurate range data is nowadays an important research topic, with applications in a variety of fields ranging from Cultural Heritage and digital 3D archiving to monitoring of public works. Processing massive point clouds acquired from laser scanners involves a number of challenges, from data management to noise removal, model compression and interactive visualization and inspection. In this paper, we present a new methodology for the reconstruction of 3D scenes from massive point clouds coming from range lidar sensors. Our proposal includes a panorama-based compact reconstruction where colors and normals are estimated robustly through an error-aware algorithm that takes into account the variance of expected errors in depth measurements. Our representation supports efficient, GPU-based visualization with advanced lighting effects. We discuss the proposed algorithms in a practical application on urban and historical preservation, described by a massive point cloud of 3.5 billion points. We show that we can achieve compression rates higher than 97% with good visual quality during interactive inspections.
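    The abstract does not give the error-aware estimator itself; the following is a minimal sketch of the general idea of weighting overlapping depth samples by the inverse of their expected variance when several scans contribute to one panorama bin. The error model and function names are illustrative assumptions, not the paper's formulas.

        # Minimal sketch of inverse-variance weighted fusion of depth samples, in the
        # spirit of the "error-aware" estimation described above. The error model
        # (variance growing with range and incidence angle) is an assumption for
        # illustration only.
        import math

        def expected_variance(depth, incidence_angle_rad, base_sigma=0.002):
            # Assume range noise grows with distance and with grazing incidence.
            sigma = base_sigma * depth / max(math.cos(incidence_angle_rad), 0.05)
            return sigma * sigma

        def fuse_depth_samples(samples):
            """samples: list of (depth, incidence_angle_rad) falling in one panorama bin."""
            weight_sum = 0.0
            weighted_depth = 0.0
            for depth, angle in samples:
                w = 1.0 / expected_variance(depth, angle)
                weight_sum += w
                weighted_depth += w * depth
            return weighted_depth / weight_sum if weight_sum > 0 else None

        print(fuse_depth_samples([(10.0, 0.1), (10.05, 0.8), (9.98, 0.2)]))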

    Hardware Accelerated Visibility Preprocessing using Adaptive Sampling

    We present a novel aggressive visibility preprocessing technique for general 3D scenes. Our technique exploits commodity graphics hardware and is faster than most conservative solutions, while simultaneously not overestimating the set of visible polygons. The cost of this benefit is that of potential image error. In order to reduce image error, we have developed an effective error minimization heuristic. We present results showing the application of our technique to highly complex scenes, consisting of many small polygons. We give performance results, an in-depth error analysis using various metrics, and an empirical analysis showing a high degree of scalability. We show that our technique can rapidly compute from-region visibility (1hr 19min for a 5 million polygon forest), with minimal error (0.3% of image). On average 91.3% of the scene is culled.
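    As a rough illustration of aggressive from-region visibility with adaptive sampling, the sketch below subdivides a view region whenever corner samples disagree and returns the union of their visible sets. In the paper the per-sample visible set comes from a hardware item-buffer render; here visible_from() is a toy stand-in so the example runs on its own, and all names are hypothetical.

        # Minimal sketch of aggressive from-region visibility via adaptive sampling.
        def visible_from(point, scene):
            """Toy stand-in for an item-buffer render: a primitive (cx, cy, radius, id)
            counts as visible if the sample point lies within its radius."""
            x, y = point
            return {pid for (cx, cy, r, pid) in scene
                    if (x - cx) ** 2 + (y - cy) ** 2 <= r * r}

        def from_region_visibility(region, scene, depth=0, max_depth=4):
            """Adaptively sample a rectangular view region (x0, y0, x1, y1): if corner
            samples disagree, subdivide; otherwise accept the union of their sets."""
            x0, y0, x1, y1 = region
            corners = [(x0, y0), (x1, y0), (x0, y1), (x1, y1)]
            sets = [visible_from(c, scene) for c in corners]
            union = set().union(*sets)
            if depth >= max_depth or all(s == sets[0] for s in sets):
                return union
            mx, my = (x0 + x1) / 2, (y0 + y1) / 2
            for sub in [(x0, y0, mx, my), (mx, y0, x1, my),
                        (x0, my, mx, y1), (mx, my, x1, y1)]:
                union |= from_region_visibility(sub, scene, depth + 1, max_depth)
            return union

        scene = [(0.2, 0.2, 0.3, "A"), (0.8, 0.8, 0.25, "B")]
        print(from_region_visibility((0.0, 0.0, 1.0, 1.0), scene))

    Because the result is only the union of sampled visible sets, primitives visible from unsampled points can still be missed, which is exactly the "aggressive" trade-off the abstract describes.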

    Real time city visualization

    The visualization of cities in real time has many potential applications, from urban and emergency planning to driving simulators and entertainment. The massive amount of data and the computational requirements needed to render an entire city in detail are the reason why many techniques have been proposed in this field. Procedural city generation, building simplification and visibility processing are some of the approaches used to solve a small subset of the problems that these applications need to face. Our work proposes a new city rendering algorithm that is a radically different approach to what has been done before in this field. The proposed technique is based on structuring the city data in a regular grid which is traversed, at runtime, by a ray tracing algorithm that keeps track of the visible parts of the scene. As a preprocess, a set of quads defining the buildings of a city is transformed into the regular grid used by our algorithm. The rendering algorithm uses this data to generate a real-time representation of the city while minimizing overdraw, a common problem in other techniques. This is done by means of a geometry shader that generates only the minimum number of fragments needed to render the city from a given position.
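    The runtime grid traversal can be pictured as a standard voxel-walking (Amanatides-Woo style) loop. The sketch below is a CPU-side, 2D illustration with assumed grid contents; the technique described above runs on the GPU with a geometry shader, which is not shown here.

        # Minimal sketch of traversing a regular 2D grid of city blocks with a
        # DDA-style ray walk, collecting the cells a ray passes through until it
        # reaches an occupied one. Grid contents and sizes are illustrative.
        def traverse_grid(origin, direction, occupied, grid_size, max_steps=64):
            """Amanatides & Woo style cell walk over a grid_size x grid_size grid."""
            x, y = int(origin[0]), int(origin[1])
            step_x = 1 if direction[0] > 0 else -1
            step_y = 1 if direction[1] > 0 else -1
            # Ray parameter at the next vertical / horizontal cell boundary.
            t_max_x = ((x + (step_x > 0)) - origin[0]) / direction[0] if direction[0] else float("inf")
            t_max_y = ((y + (step_y > 0)) - origin[1]) / direction[1] if direction[1] else float("inf")
            t_delta_x = abs(1.0 / direction[0]) if direction[0] else float("inf")
            t_delta_y = abs(1.0 / direction[1]) if direction[1] else float("inf")
            visited = []
            for _ in range(max_steps):
                if not (0 <= x < grid_size and 0 <= y < grid_size):
                    break
                visited.append((x, y))
                if (x, y) in occupied:        # first occupied cell blocks the ray
                    break
                if t_max_x < t_max_y:
                    t_max_x += t_delta_x
                    x += step_x
                else:
                    t_max_y += t_delta_y
                    y += step_y
            return visited

        blocks = {(3, 2), (5, 5)}
        print(traverse_grid((0.5, 0.5), (1.0, 0.6), blocks, grid_size=8))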

    Fidelity optimization in distributed virtual environments

    In virtual environment systems, the ultimate goal is delivery of the highest-fidelity user experience possible. This dissertation shows that it is possible to increase the scalability of distributed virtual environments (DVEs), in a tractable fashion, through a novel application of optimization techniques. Fidelity is maximized by utilizing the given display and network capacity in an optimal fashion, individually tuned for multiple users, in a manner most appropriate to a specific DVE application. This optimization is accomplished using the QUICK framework for managing the display and request of representations for virtual objects. Ratings of representation Quality, object Importance, and representation Cost are included in model descriptions as special annotations. The QUICK optimization computes the fidelity contribution of a representation by combining these annotations with specifications of user task and platform capability. This dissertation contributes the QUICK optimization algorithms; a software framework for experimentation; and associated general-purpose formats for codifying Quality, Importance, Cost, task, and platform capability. Experimentation with the QUICK framework has shown overwhelming advantages in comparison with standard resource management techniques. http://www.archive.org/details/fidelityoptimiza00capp
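    A minimal sketch of this kind of fidelity optimization: pick one representation per object so that summed quality, weighted by importance, is as high as possible without exceeding a display/network cost budget. The greedy upgrade heuristic and data layout below are illustrative assumptions, not the QUICK algorithms themselves.

        # Minimal sketch: start every object at its cheapest representation, then
        # repeatedly apply the upgrade with the best importance-weighted quality gain
        # per unit of extra cost that still fits in the budget.
        def choose_representations(objects, budget):
            """objects: {name: {"importance": float, "reps": [(quality, cost), ...]}}"""
            chosen = {name: min(obj["reps"], key=lambda r: r[1]) for name, obj in objects.items()}
            spent = sum(rep[1] for rep in chosen.values())
            upgraded = True
            while upgraded:
                upgraded = False
                best = None  # (benefit per unit cost, name, candidate rep)
                for name, obj in objects.items():
                    q0, c0 = chosen[name]
                    for q, c in obj["reps"]:
                        if q > q0 and spent - c0 + c <= budget:
                            gain = obj["importance"] * (q - q0) / (c - c0 + 1e-9)
                            if best is None or gain > best[0]:
                                best = (gain, name, (q, c))
                if best:
                    _, name, rep = best
                    spent += rep[1] - chosen[name][1]
                    chosen[name] = rep
                    upgraded = True
            return chosen, spent

        objs = {"statue": {"importance": 0.9, "reps": [(0.2, 1), (0.6, 4), (1.0, 9)]},
                "bench":  {"importance": 0.3, "reps": [(0.2, 1), (0.8, 6)]}}
        print(choose_representations(objs, budget=10))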

    Visibility computation through image generalization

    This dissertation introduces the image generalization paradigm for computing visibility. The paradigm is based on the observation that an image is a powerful tool for computing visibility. An image can be rendered efficiently with the support of graphics hardware, and each of the millions of pixels in the image reports a visible geometric primitive. However, the visibility solution computed by a conventional image is far from complete. A conventional image has a uniform sampling rate which can miss visible geometric primitives with a small screen footprint. A conventional image can only find geometric primitives to which there is direct line of sight from the center of projection (i.e. the eye) of the image; therefore, a conventional image cannot compute the set of geometric primitives that become visible as the viewpoint translates, or as time changes in a dynamic dataset. Finally, like any sample-based representation, a conventional image can only confirm that a geometric primitive is visible, but it cannot confirm that a geometric primitive is hidden, as that would require an infinite number of samples to confirm that the primitive is hidden at all of its points.

    The image generalization paradigm overcomes the visibility computation limitations of conventional images. The paradigm has three elements. (1) Sampling pattern generalization entails adding sampling locations to the image plane where needed to find visible geometric primitives with a small footprint. (2) Visibility sample generalization entails replacing the conventional scalar visibility sample with a higher-dimensional sample that records all geometric primitives visible at a sampling location as the viewpoint translates or as time changes in a dynamic dataset; the higher-dimensional visibility sample is computed exactly, by solving visibility event equations, and not through sampling. Another form of visibility sample generalization is to enhance a sample with its trajectory as the geometric primitive it samples moves in a dynamic dataset. (3) Ray geometry generalization redefines a camera ray as the set of 3D points that project at a given image location; this generalization supports rays that are not straight lines, and enables designing cameras with non-linear rays that circumvent occluders to gather samples not visible from a reference viewpoint.

    The image generalization paradigm has been used to develop visibility algorithms for a variety of datasets, of visibility parameter domains, and of performance-accuracy tradeoff requirements. These include an aggressive from-point visibility algorithm that guarantees finding all geometric primitives with a visible fragment, no matter how small the primitive's image footprint; an efficient and robust exact from-point visibility algorithm that iterates between a sample-based and a continuous visibility analysis of the image plane to quickly converge to the exact solution; a from-rectangle visibility algorithm that uses 2D visibility samples to compute a visible set that is exact under viewpoint translation; a flexible pinhole camera that enables local modulations of the sampling rate over the image plane according to an input importance map; an animated depth image that not only stores color and depth per pixel but also a compact representation of pixel sample trajectories; and a curved ray camera that integrates seamlessly multiple viewpoints into a multiperspective image without the viewpoint transition distortion artifacts of prior art methods.
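    One element of the paradigm, sampling pattern generalization, can be illustrated by adding jittered sampling locations where an importance map requests a higher sampling rate, so small primitives are less likely to be missed. The importance map and the per-pixel budget rule below are assumptions for illustration, not the dissertation's cameras.

        # Minimal sketch of importance-driven sampling: each pixel gets a base sample
        # plus extra jittered samples in proportion to its importance value.
        import random

        def generate_samples(importance, base_samples=1, max_extra=8, seed=0):
            """importance: 2D list of values in [0, 1]; returns (x, y) sample locations
            in image-plane coordinates, denser in the important pixels."""
            rng = random.Random(seed)
            samples = []
            for py, row in enumerate(importance):
                for px, w in enumerate(row):
                    count = base_samples + int(round(w * max_extra))
                    for _ in range(count):
                        samples.append((px + rng.random(), py + rng.random()))
            return samples

        imp = [[0.0, 0.1, 0.0],
               [0.2, 1.0, 0.3],
               [0.0, 0.1, 0.0]]
        print(len(generate_samples(imp)))  # more samples near the important centre pixel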

    Achieving efficient real-time virtual reality architectural visualisation

    Master's thesis, Master of Arts (Architecture)

    SPRITE TREE: AN EFFICIENT IMAGE-BASED REPRESENTATION FOR NETWORKED VIRTUAL ENVIRONMENTS

    Ph.D. thesis, Doctor of Philosophy

    Ray Tracing Gems

    This book is a must-have for anyone serious about rendering in real time. With the announcement of new ray tracing APIs and hardware to support them, developers can easily create real-time applications with ray tracing as a core component. As ray tracing on the GPU becomes faster, it will play a more central role in real-time rendering. Ray Tracing Gems provides key building blocks for developers of games, architectural applications, visualizations, and more. Experts in rendering share their knowledge by explaining everything from nitty-gritty techniques that will improve any ray tracer to mastery of the new capabilities of current and future hardware.
    What you'll learn:
    - The latest ray tracing techniques for developing real-time applications in multiple domains
    - Guidance, advice, and best practices for rendering applications with Microsoft DirectX Raytracing (DXR)
    - How to implement high-performance graphics for interactive visualizations, games, simulations, and more
    Who this book is for:
    - Developers who are looking to leverage the latest APIs and GPU technology for real-time rendering and ray tracing
    - Students looking to learn about best practices in these areas
    - Enthusiasts who want to understand and experiment with their new GPU