22 research outputs found

    MeshLab

    MeshLab is an open source system for processing and editing 3D triangular meshes. It provides a set of tools for editing, cleaning, healing, inspecting, rendering, texturing and converting meshes. It offers features for processing raw data produced by 3D digitization tools/devices and for preparing models for 3D printing. With over 2 million downloads, MeshLab is a de facto standard tool for mesh processing.
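
    As a concrete illustration of the kind of pipeline described above, the sketch below uses pymeshlab, MeshLab's Python bindings, to clean and simplify a raw scan. The filter names vary between pymeshlab releases and the target face count is arbitrary, so treat both as assumptions rather than a prescribed workflow.

    # Sketch of a typical cleaning + simplification pipeline with pymeshlab.
    # Filter names differ across pymeshlab releases and may need adjusting.
    import pymeshlab

    ms = pymeshlab.MeshSet()
    ms.load_new_mesh("scan.ply")                 # raw mesh from a 3D digitization device

    # Basic cleaning: drop duplicate vertices and faces left by the acquisition step.
    ms.apply_filter("meshing_remove_duplicate_vertices")
    ms.apply_filter("meshing_remove_duplicate_faces")

    # Quadric edge-collapse decimation down to an arbitrary face budget.
    ms.apply_filter("meshing_decimation_quadric_edge_collapse",
                    targetfacenum=50000,
                    preservenormal=True)

    ms.save_current_mesh("scan_simplified.ply")  # e.g. a model ready for 3D printing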

    Turning a Smartphone Selfie into a Studio Portrait

    We introduce a novel algorithm that turns a flash selfie taken with a smartphone into a studio-like photograph with uniform lighting. Our method uses a convolutional neural network trained on a set of pairs of photographs acquired in a controlled environment. For each pair, we have one photograph of a subject's face taken with the camera flash enabled and another one of the same subject in the same pose illuminated using a photographic studio-lighting setup. We show how our method can amend lighting artifacts introduced by a close-up camera flash, such as specular highlights, shadows, and skin shine.
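
    The abstract does not spell out the network architecture or training objective; the PyTorch sketch below only illustrates the paired supervised setup it describes, mapping a flash photograph to its studio-lit counterpart. The small encoder-decoder, the L1 loss, and the paired_loader name are assumptions, not the authors' actual design.

    # Minimal sketch of training an image-to-image CNN on (flash, studio) pairs.
    import torch
    import torch.nn as nn

    class FlashToStudioNet(nn.Module):
        """Small encoder-decoder CNN; a stand-in for the paper's network."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.net(x)

    def train_step(model, optimizer, flash, studio):
        """One supervised step on a batch of aligned (flash, studio) image pairs."""
        optimizer.zero_grad()
        pred = model(flash)
        loss = nn.functional.l1_loss(pred, studio)   # pixel-wise reconstruction loss
        loss.backward()
        optimizer.step()
        return loss.item()

    model = FlashToStudioNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # for flash, studio in paired_loader:            # hypothetical loader of photo pairs
    #     train_step(model, optimizer, flash, studio)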

    Real-Time Single Scattering Inside Inhomogeneous Materials

    In this paper we propose a novel technique to perform real-time rendering of translucent inhomogeneous materials, one of the most well-known problems of computer graphics. The developed technique is based on an adaptive volumetric point sampling, done in a preprocessing stage, which associates to each sample the optical depth for a predefined set of directions. This information is then used by a rendering algorithm that combines the object's surface rasterization with a ray tracing algorithm, implemented on the graphics processor, to compose the final image. This approach allows us to simulate light scattering phenomena for inhomogeneous isotropic materials in real time with an arbitrary number of light sources. We tested our algorithm by comparing the produced images with the result of ray tracing and showed that the technique is effective. © Springer-Verlag 2010
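
    A minimal sketch of the preprocessing idea follows: for each volumetric sample point, accumulate the optical depth along a fixed set of directions by marching through a density grid. The cubic grid, step size, and nearest-neighbour lookup are simplifying assumptions, and the paper's adaptive point sampling is replaced here by an arbitrary, given point set.

    # Precompute per-sample optical depths along a predefined set of directions.
    import numpy as np

    def optical_depth(density, point, direction, step=0.01, max_dist=1.0):
        """Integrate extinction along `direction` from `point` through a density
        grid defined on the unit cube (nearest-neighbour lookup for brevity)."""
        n = density.shape[0]
        tau, t = 0.0, 0.0
        while t < max_dist:
            p = point + t * direction
            if np.any(p < 0.0) or np.any(p >= 1.0):
                break                                # ray left the volume
            idx = np.minimum((p * n).astype(int), n - 1)
            tau += density[tuple(idx)] * step        # extinction times step length
            t += step
        return tau

    def precompute(density, sample_points, directions):
        """Table of optical depths: one row per sample point, one column per direction."""
        return np.array([[optical_depth(density, p, d) for d in directions]
                         for p in sample_points])

    # directions = fibonacci_sphere(16)   # e.g. 16 fixed directions (hypothetical helper)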

    Interactive Out-of-core Visualisation of Very Large Landscapes on Commodity Graphics Platform

    We recently introduced an efficient technique for out-of-core rendering and management of large textured landscapes. The technique, called Batched Dynamic Adaptive Meshes (BDAM), is based on a paired tree structure: a tiled quadtree for texture data and a pair of bintrees of small triangular patches for the geometry. These small patches are TINs that are constructed and optimized off-line with high quality simplification and tristripping algorithms. Hierarchical view frustum culling and view-dependent texture/geometry refinement are performed at each frame with a stateless traversal algorithm that renders a continuous adaptive terrain surface by assembling out-of-core data. Thanks to the batched CPU/GPU communication model, the proposed technique is not processor intensive and fully harnesses the power of current graphics hardware. This paper summarizes the method and discusses the results obtained in a virtual flythrough over a textured digital landscape derived from aerial imaging.
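
    The refinement logic can be pictured as a stateless recursive traversal over a bintree of precomputed patches, as in the sketch below. The node fields and the camera's frustum and screen-space-error tests are hypothetical stand-ins, not the actual BDAM data layout or its batched GPU submission.

    # Stateless per-frame traversal: cull, test screen-space error, emit patch ids.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class PatchNode:
        bounding_sphere: tuple             # (center_xyz, radius) of the patch
        object_space_error: float          # simplification error of this patch's TIN
        patch_id: int                      # key used to fetch the patch from out-of-core storage
        left: Optional["PatchNode"] = None
        right: Optional["PatchNode"] = None

    def traverse(node, camera, pixel_tolerance, out: List[int]):
        """Collect the patch ids to render this frame; no per-node state is kept."""
        if node is None or not camera.intersects_frustum(node.bounding_sphere):
            return                                          # hierarchical frustum culling
        err = camera.projected_error(node.bounding_sphere, node.object_space_error)
        if err <= pixel_tolerance or (node.left is None and node.right is None):
            out.append(node.patch_id)                       # coarse enough: emit this TIN
            return
        traverse(node.left, camera, pixel_tolerance, out)   # otherwise keep refining
        traverse(node.right, camera, pixel_tolerance, out)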

    DeepFlash: Turning a flash selfie into a studio portrait

    We present a method for turning a flash selfie taken with a smartphone into a photograph as if it had been taken in a studio setting with uniform lighting. Our method uses a convolutional neural network trained on a set of pairs of photographs acquired in an ad-hoc acquisition campaign. Each pair consists of one photograph of a subject's face taken with the camera flash enabled and another one of the same subject in the same pose illuminated using a photographic studio-lighting setup. We show how our method can amend defects introduced by a close-up camera flash, such as specular highlights, shadows, skin shine, and flattened images.
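
    For completeness, the inference side of such a setup might look like the sketch below, which applies an already trained network, loaded here as a TorchScript checkpoint with an assumed file name, to a new flash selfie; the 256x256 input size is likewise an assumption.

    # Apply a trained flash-to-studio network to a new smartphone photo.
    import torch
    import torchvision.io as tvio
    import torchvision.transforms.functional as TF

    model = torch.jit.load("deepflash.pt", map_location="cpu")     # assumed checkpoint name
    model.eval()

    selfie = tvio.read_image("flash_selfie.jpg").float() / 255.0   # C x H x W in [0, 1]
    selfie = TF.resize(selfie, [256, 256])

    with torch.no_grad():
        relit = model(selfie.unsqueeze(0)).squeeze(0)              # add, then drop, batch dim

    tvio.write_png((relit.clamp(0, 1) * 255).to(torch.uint8), "studio_like.png")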