
    Simplifying three-dimensional computer-aided design models to speed up rendering

    Visualization of three-dimensional (3D) computer-aided design models is an integral part of the design process. Large assemblies such as plant or building designs contain a substantial amount of geometric data. The advent of mobile devices and virtual reality headsets sets new constraints on visualization performance and on the amount of geometric data. Our goal is to improve visualization performance and reduce memory consumption by simplifying 3D models while keeping the quality of the simplification output stable regardless of the geometric complexity of the input mesh. We survey the current state of 3D mesh simplification methods that use geometry decimation. We design and implement our own data structure for geometry decimation. Based on the existing research, we select and use an edge decimation method for model simplification. To free the user from configuring the edge decimation level per model by hand, and to retain a stable quality of the simplification output, we propose a threshold parameter, the edge decimation cost threshold. The threshold is calculated by multiplying the length of the model's bounding box diagonal by a user-defined scale parameter. Our results show that the edge decimation cost threshold works as expected. The geometry decimation algorithm simplifies models with round surfaces at an excellent simplification rate. Based on the edge decimation cost threshold, the algorithm terminates the geometry decimation for models that have a large number of planar surfaces; without the threshold, the simplification quickly leads to large geometric errors. The visualization performance improvement from the simplification scales at almost the same rate as the simplification rate.
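
    A minimal sketch of the threshold rule described in the abstract: the stopping cost is the bounding-box diagonal length multiplied by the user-defined scale, and decimation halts once the cheapest edge collapse would cost more than that. The mesh interface (cheapest_edge, collapse) and the default scale value are illustrative assumptions, not the thesis' actual data structure.

```python
import numpy as np

def decimation_cost_threshold(vertices: np.ndarray, scale: float) -> float:
    """Threshold = length of the model's bounding-box diagonal * user-defined scale.

    `vertices` is an (N, 3) array of mesh vertex positions; `scale` is the
    user-chosen parameter described in the abstract.
    """
    bbox_min = vertices.min(axis=0)
    bbox_max = vertices.max(axis=0)
    diagonal = np.linalg.norm(bbox_max - bbox_min)
    return diagonal * scale

def simplify(mesh, scale=0.01):
    """Collapse the cheapest edge until its cost exceeds the threshold.

    `mesh` is assumed to expose `vertices`, `cheapest_edge()` and
    `collapse(edge)`; these are hypothetical stand-ins, not the thesis' API.
    """
    threshold = decimation_cost_threshold(mesh.vertices, scale)
    while True:
        edge, cost = mesh.cheapest_edge()      # e.g. quadric error of the collapse
        if edge is None or cost > threshold:   # stop before large geometric errors appear
            break
        mesh.collapse(edge)
    return mesh
```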

    Adaptive transfer functions: improved multiresolution visualization of medical models

    The final publication is available at Springer via http://dx.doi.org/10.1007/s00371-016-1253-9. Medical datasets are continuously increasing in size. Although larger models may be available for certain research purposes, in common clinical practice the models are usually up to 512x512x2000 voxels. These resolutions exceed the capabilities of conventional GPUs, the ones usually found in medical doctors' desktop PCs. Commercial solutions typically reduce the data by downsampling the dataset iteratively until it fits the available target specifications. The data loss reduces visualization quality, and this is not commonly compensated by other actions that might alleviate its effects. In this paper, we propose adaptive transfer functions, an algorithm that improves the transfer function in downsampled multiresolution models so that the quality of the renderings is greatly improved. The technique is simple and lightweight, and it is suitable not only for visualizing huge models that would not fit in a GPU, but also for rendering not-so-large models on mobile GPUs, which are less capable than their desktop counterparts. Moreover, it can also be used to accelerate rendering frame rates by using lower levels of the multiresolution hierarchy while still maintaining high-quality results in a focus-and-context approach. We also show an evaluation of these results based on perceptual metrics.
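
    As a rough illustration of the idea (this is a reading of the abstract, not the paper's published algorithm), the snippet below contrasts the conventional approach, applying the transfer function to averaged voxel values, with an "adapted" variant that averages the transfer-function output of the underlying high-resolution voxels. The `tf` callable and the volume shape are assumptions.

```python
import numpy as np

def downsample_with_adapted_tf(volume, tf):
    """Illustrative sketch: instead of applying the transfer function `tf` to the
    averaged voxel value, average the transfer-function output of the eight
    children of each coarse voxel, so the coarse level keeps colours/opacities
    closer to the full-resolution rendering.

    `volume` is a (2i, 2j, 2k) scalar array; `tf` maps a scalar array to an
    RGBA array of shape (..., 4).
    """
    blocks = volume.reshape(volume.shape[0] // 2, 2,
                            volume.shape[1] // 2, 2,
                            volume.shape[2] // 2, 2)
    coarse_values = blocks.mean(axis=(1, 3, 5))          # conventional downsampling
    naive_rgba = tf(coarse_values)                       # TF of the mean (baseline)
    adapted_rgba = tf(blocks).mean(axis=(1, 3, 5))       # mean of the TF (adapted)
    return coarse_values, naive_rgba, adapted_rgba
```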

    Point Cloud Framework for Rendering 3D Models Using Google Tango

    This project seeks to demonstrate the feasibility of point cloud meshing for capturing and modeling three-dimensional objects on consumer smartphones and tablets. Traditional methods of capturing objects require hundreds of images, are very slow, and consume a large amount of cellular data for the average consumer. As hardware manufacturers provide the tools to capture point cloud data, software developers need a starting point for capturing and meshing point clouds to create 3D models. The project uses Google's Tango computer vision library for Android to capture point clouds on devices with depth-sensing hardware. The point clouds are combined and meshed into models for use in 3D rendering projects. We expect our results to be embraced by the Android market because capturing point clouds is fast and does not carry a large data footprint.
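
    A hedged sketch of the combine-and-mesh stage using the generic Open3D library as a stand-in (the project itself builds on the Tango Android API, which is not reproduced here); pose alignment of the captured clouds into a common world frame is assumed to have been done already.

```python
import numpy as np
import open3d as o3d  # stand-in library, not the Tango API used by the project

def merge_and_mesh(point_clouds, voxel_size=0.01):
    """Fuse several captured point clouds and mesh them.

    `point_clouds` is a list of (N, 3) numpy arrays already expressed in a
    common world frame (Tango supplies the device poses for this alignment).
    """
    merged = o3d.geometry.PointCloud()
    for pts in point_clouds:
        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(pts)
        merged += pcd                                   # concatenate the clouds
    merged = merged.voxel_down_sample(voxel_size)       # thin out duplicated points
    merged.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel_size, max_nn=30))
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        merged, depth=8)                                # surface reconstruction
    return mesh
```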

    Ubiquitous Scalable Graphics: An End-to-End Framework using Wavelets

    Advances in ubiquitous displays and wireless communications have fueled the emergence of exciting mobile graphics applications, including 3D virtual product catalogs, 3D maps, security monitoring systems and mobile games. Current trends that use cameras to capture geometry, material reflectance and other graphics elements mean that very high resolution input is available to render extremely photorealistic scenes. However, captured graphics content can be many gigabytes in size and must be simplified before it can be used on small mobile devices, which have limited resources such as memory, screen size and battery energy. Scaling and converting graphics content to a suitable rendering format involves running several software tools, and selecting the best resolution for a target mobile device is often done by trial and error, all of which takes time. Wireless errors can also corrupt transmitted content, and aggressive compression is needed for low-bandwidth wireless networks. Most rendering algorithms are currently optimized for visual realism and speed, but are not resource or energy efficient on mobile devices. This dissertation focuses on improving rendering performance by reducing the impact of these problems with UbiWave, an end-to-end framework that enables real-time mobile access to high resolution graphics using wavelets. The framework tackles simplification, transmission, and resource-efficient rendering of graphics content on mobile devices by utilizing 1) a Perceptual Error Metric (PoI) for automatically computing the best resolution of graphics content for a given mobile display, eliminating guesswork and saving resources, 2) Unequal Error Protection (UEP) to improve resilience to wireless errors, 3) an Energy-efficient Adaptive Real-time Rendering (EARR) heuristic to balance energy consumption, rendering speed and image quality, and 4) an energy-efficient streaming technique. The results facilitate a new class of mobile graphics applications that can gracefully adapt the lowest acceptable rendering resolution to the wireless network conditions and to the availability of resources and battery energy on the mobile device.
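
    To make the wavelet-hierarchy idea concrete, here is a small, generic sketch using PyWavelets rather than UbiWave's own codec: keeping only the coarsest detail bands yields a lower-resolution reconstruction, which is how a client could stop at the resolution its display and energy budget allow. The wavelet choice, level count, and 2-D (texture-like) data are assumptions made for illustration.

```python
import numpy as np
import pywt  # generic wavelet library used as a stand-in for UbiWave's codec

def truncate_to_level(texture, keep_levels, wavelet="haar", total_levels=4):
    """Drop the finest wavelet detail bands so only `keep_levels` of detail remain.

    This mimics, in a simplified 2-D form, how a wavelet hierarchy lets a
    client decode only up to the resolution its display and battery allow.
    """
    coeffs = pywt.wavedec2(texture, wavelet, level=total_levels)
    # coeffs[0] is the coarse approximation; coeffs[1:] go from coarse to fine detail.
    for i in range(1 + keep_levels, len(coeffs)):
        coeffs[i] = tuple(np.zeros_like(band) for band in coeffs[i])
    return pywt.waverec2(coeffs, wavelet)
```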

    Proxy-guided Image-based Rendering for Mobile Devices

    VR headsets and hand-held devices are not powerful enough to render complex scenes in real time. A server can take on the rendering task, but network latency prohibits a good user experience. We present a new image-based rendering (IBR) architecture for masking the latency. It runs in real time even on very weak mobile devices, supports modern game engine graphics, and maintains high visual quality even for large view displacements. We propose a novel server-side dual-view representation that leverages an optimally placed extra view and depth peeling to provide the client with coverage for filling disocclusion holes. This representation is directly rendered in a novel wide-angle projection with favorable directional parameterization. A new client-side IBR algorithm uses a pre-transmitted level-of-detail proxy with an encaging simplification and depth-carving to maintain highly complex geometric detail. We demonstrate our approach with typical VR and mobile gaming applications running on mobile hardware. Our technique compares favorably to competing approaches according to perceptual and numerical comparisons.
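
    For readers unfamiliar with IBR warping, the sketch below shows a minimal depth-based forward warp and where disocclusion holes come from; it is a generic illustration, not the paper's client or server renderer, and the camera model and function names are assumptions.

```python
import numpy as np

def forward_warp(depth, color, K, T_src_to_dst):
    """Minimal point-splat reprojection: warp a server-rendered RGB-D frame to
    the client's current camera pose. Destination pixels that no source point
    lands on are the disocclusion holes the dual-view representation covers.

    depth: (H, W) metric depth, color: (H, W, 3), K: 3x3 intrinsics,
    T_src_to_dst: 4x4 rigid transform from source to destination camera.
    """
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]
    pix = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3).T       # 3 x HW homogeneous pixels
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)                # back-project to camera space
    pts = T_src_to_dst @ np.vstack([pts, np.ones((1, pts.shape[1]))])  # move into destination frame
    proj = K @ pts[:3]
    z = proj[2]
    valid = z > 1e-6
    uv = np.round(proj[:2, valid] / z[valid]).astype(int)
    col = color.reshape(-1, 3)[valid]
    out = np.zeros_like(color)
    zbuf = np.full((H, W), np.inf)
    inside = (uv[0] >= 0) & (uv[0] < W) & (uv[1] >= 0) & (uv[1] < H)
    for x, y, d, c in zip(uv[0][inside], uv[1][inside], z[valid][inside], col[inside]):
        if d < zbuf[y, x]:                                             # keep the nearest surface
            zbuf[y, x] = d
            out[y, x] = c
    return out
```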

    Viewfinder: final activity report

    The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that seeks to apply a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather data (visual and chemical) in order to assist rescue personnel. A base station combines the gathered information with information retrieved from off-site sources. The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces, and semi-autonomous robot navigation. The VIEW-FINDER system is semi-autonomous: the individual robot-sensors operate autonomously within the limits of the task assigned to them, that is, they autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests as well as low-level commands through the interface to any node in the system. The human interface has to ensure that the human supervisor and human interveners are provided with a reduced but relevant overview of the ground situation and of the robots and human rescue workers within it.

    3D oceanographic data compression using 3D-ODETLAP

    This paper describes a 3D environmental data compression technique for oceanographic datasets. With proper point selection, our method approximates uncompressed marine data using an over-determined system of linear equations based on, but essentially different from, the Laplacian partial differential equation. This approximation is then refined via an error metric. The two steps alternate until a predefined, satisfactory approximation is found. Using several different datasets and metrics, we demonstrate that our method has an excellent compression ratio. To further evaluate our method, we compare it with 3D-SPIHT. 3D-ODETLAP averages 20% better compression than 3D-SPIHT on our eight test datasets from the World Ocean Atlas 2005. Our method provides up to approximately six times better compression on datasets with relatively small variance. Meanwhile, with the same approximate mean error, we demonstrate a significantly smaller maximum error compared to 3D-SPIHT, and provide a feature to keep the maximum error under a user-defined limit.
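
    A small 2-D sketch of the over-determined formulation (the paper works on 3-D volumes, and its exact weighting and point-selection loop differ): every grid cell contributes a Laplacian-style smoothness equation, every selected point contributes a data equation, and the combined sparse system is solved in a least-squares sense.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def odetlap_approximate(shape, known_idx, known_val, smoothness=1.0):
    """Reconstruct a grid from a few kept points via an over-determined system.

    shape: (rows, cols); known_idx: list of (r, c); known_val: matching values.
    The `smoothness` weight trading off smoothness vs. data fidelity is an
    illustrative parameter, not the paper's exact one.
    """
    rows, cols = shape
    n = rows * cols
    idx = lambda r, c: r * cols + c
    A = lil_matrix((n + len(known_idx), n))
    b = np.zeros(n + len(known_idx))
    eq = 0
    for r in range(rows):                              # one smoothness equation per cell:
        for c in range(cols):                          # value equals the mean of its neighbours
            nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            nbrs = [(i, j) for i, j in nbrs if 0 <= i < rows and 0 <= j < cols]
            A[eq, idx(r, c)] = smoothness * len(nbrs)
            for i, j in nbrs:
                A[eq, idx(i, j)] = -smoothness
            eq += 1
    for (r, c), val in zip(known_idx, known_val):      # one data equation per kept point
        A[eq, idx(r, c)] = 1.0
        b[eq] = val
        eq += 1
    x = lsqr(A.tocsr(), b)[0]                          # least-squares solve
    return x.reshape(shape)
```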

    Plane-Based Optimization of Geometry and Texture for RGB-D Reconstruction of Indoor Scenes

    We present a novel approach to reconstructing RGB-D indoor scenes with plane primitives. Our approach takes as input an RGB-D sequence and a dense coarse mesh reconstructed from the sequence by some 3D reconstruction method, and generates a lightweight, low-polygon mesh with clear face textures and sharp features, without losing geometric detail from the original scene. To achieve this, we first partition the input mesh with plane primitives, then simplify it into a lightweight mesh, optimize plane parameters, camera poses and texture colors to maximize the photometric consistency across frames, and finally optimize the mesh geometry to maximize consistency between geometry and planes. Compared to existing planar reconstruction methods, which only cover large planar regions in the scene, our method builds the entire scene with adaptive planes without losing geometric detail, and preserves sharp features in the final mesh. We demonstrate the effectiveness of our approach by applying it to several RGB-D scans and comparing it to other state-of-the-art reconstruction methods. Comment: in International Conference on 3D Vision 2018; models and code: https://github.com/chaowang15/plane-opt-rgbd. arXiv admin note: text overlap with arXiv:1905.0885
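
    As a hedged illustration of the plane-primitive building block (not the paper's full region-growing and joint optimization pipeline), a plane can be fit to a patch of vertices by SVD and the residual used as a partition or merge criterion:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit for a patch of mesh vertices.

    points: (N, 3) vertex positions. Returns a unit normal n and offset d such
    that n . x + d is approximately 0 for points on the plane.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                        # direction of least variance
    d = -normal.dot(centroid)
    return normal, d

def plane_residual(points, normal, d):
    """Mean absolute point-to-plane distance, usable as a partition/merge criterion."""
    return np.abs(points @ normal + d).mean()
```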