
    Htex: Per-Halfedge Texturing for Arbitrary Mesh Topologies

    We introduce per-halfedge texturing (Htex), a GPU-friendly method for texturing arbitrary polygon meshes without an explicit parameterization. Htex builds upon the insight that halfedges encode an intrinsic triangulation for polygon meshes, where each halfedge spans a unique triangle with direct adjacency information. Rather than storing a separate texture per face of the input mesh, as is done by previous parameterization-free texturing methods, Htex stores a square texture for each halfedge and its twin. We show that this simple change from face to halfedge induces two important properties for high-performance parameterization-free texturing. First, Htex natively supports arbitrary polygons without requiring dedicated code for, e.g., non-quad faces. Second, Htex leads to a straightforward and efficient GPU implementation that uses only three texture fetches per halfedge to produce continuous texturing across the entire mesh. We demonstrate the effectiveness of Htex by rendering production assets in real time.
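
    The data structure behind this idea can be sketched in a few lines. The snippet below is an illustrative sketch, not the paper's implementation: it assumes each polygon face is fanned into one triangle per halfedge around the face centroid, and that a halfedge and its twin share a single square texture identified by the smaller index of the pair.

```python
# Illustrative sketch only: a halfedge record, the per-halfedge triangle
# obtained by fanning each polygon face around its centroid (assumption),
# and the halfedge/twin pairing used to share one square texture.
from dataclasses import dataclass

@dataclass
class Halfedge:
    vert: int   # vertex the halfedge starts from
    face: int   # incident face
    next: int   # next halfedge around the face
    twin: int   # opposite halfedge, or -1 on a boundary

def halfedge_triangle(h_id, halfedges, positions, centroids):
    """Triangle spanned by a halfedge: its two edge endpoints plus the
    centroid of its incident face (assumed intrinsic triangulation)."""
    h = halfedges[h_id]
    n = halfedges[h.next]
    return positions[h.vert], positions[n.vert], centroids[h.face]

def texture_handle(h_id, halfedges):
    """A halfedge and its twin share one square texture; here the smaller
    index of the pair serves as the shared handle (assumption)."""
    twin = halfedges[h_id].twin
    return h_id if twin < 0 else min(h_id, twin)
```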

    Implicit Neural Representation of Tileable Material Textures

    We explore sinusoidal neural networks to represent periodic tileable textures. Our approach leverages the Fourier series by initializing the first layer of a sinusoidal neural network with integer frequencies of period P. We prove that compositions of sinusoidal layers generate only integer frequencies with period P. As a result, our network learns a continuous representation of a periodic pattern, enabling direct evaluation at any spatial coordinate without the need for interpolation. To enforce tileability of the resulting pattern, we add a regularization term, based on the Poisson equation, to the loss function. Our proposed neural implicit representation is compact and enables efficient reconstruction of high-resolution textures with high visual fidelity and sharpness across multiple levels of detail. We present applications of our approach in the domain of anti-aliased surface
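
    A minimal NumPy sketch of the key construction follows, under the assumption of a small SIREN-style MLP whose hidden and output layers would be trained in the full method; the first layer is fixed to integer frequencies 2πk/P, so every composed sinusoid, and hence the represented texture, repeats with period P. The layer sizes, the frequency count K, and the variable names are illustrative.

```python
# Sketch under stated assumptions (SIREN-style MLP, untrained weights):
# a first layer fixed to integer frequencies 2*pi*k/P keeps every composed
# sinusoid P-periodic, so the represented texture tiles by construction.
import numpy as np

rng = np.random.default_rng(0)
P = 1.0   # tile period (assumed unit square)
K = 8     # number of integer frequencies per axis (illustrative)

# First layer: rows are integer frequency vectors (kx, ky) scaled by 2*pi/P.
ks = np.array([(kx, ky) for kx in range(1, K + 1) for ky in range(1, K + 1)])
W0 = 2.0 * np.pi / P * ks                    # fixed, not trained
b0 = rng.uniform(-np.pi, np.pi, len(W0))     # random phases

# Hidden and output layers (these would be trained in the full method).
W1 = rng.normal(0.0, 1.0 / np.sqrt(len(W0)), (64, len(W0)))
b1 = np.zeros(64)
W2 = rng.normal(0.0, 1.0 / np.sqrt(64), (3, 64))   # RGB head
b2 = np.zeros(3)

def texture(xy):
    """Evaluate the periodic texture at continuous coordinates xy, shape (N, 2)."""
    h = np.sin(xy @ W0.T + b0)   # integer-frequency features: P-periodic
    h = np.sin(h @ W1.T + b1)    # composing sinusoids preserves the period
    return h @ W2.T + b2         # linear RGB output

# Points one full period apart evaluate to the same colour: the pattern tiles.
print(np.allclose(texture(np.array([[0.25, 0.75]])),
                  texture(np.array([[1.25, 1.75]]))))   # True
```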

    Adaptive image vectorisation and brushing using mesh colours

    We propose the use of curved triangles and mesh colours as a vector primitive for image vectorisation. We show that our representation has clear benefits for rendering performance, texture detail, and further editing of the resulting vector images. The proposed method focuses on efficiency, yet it still produces results that compare favourably with those from previous work. We show results over a variety of input images, ranging from photos, drawings, and paintings to designs and cartoons. We implemented several editing workflows facilitated by our representation: interactive user-guided vectorisation and novel raster-style, feature-aware brushing capabilities.
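
    To make the mesh-colour primitive concrete, the sketch below shows one plausible storage layout and lookup: colour samples placed on a regular barycentric grid of resolution R inside each (curved) triangle and fetched by nearest sample. The layout, resolution, and nearest-sample filtering are assumptions for illustration, not the paper's renderer.

```python
# Illustrative sketch of a mesh-colour lookup (assumed layout and filtering).
import numpy as np

def mesh_colour_sample(colours, R, bary):
    """Nearest-sample lookup in a per-triangle colour grid.

    colours: array of shape ((R + 1) * (R + 2) // 2, 3); sample (i, j) with
    i + j <= R sits at barycentric (i/R, j/R, 1 - i/R - j/R) and is stored
    row by row (j outer, i inner).
    bary: barycentric coordinates (u, v, w) summing to one.
    """
    u, v, _ = bary
    i = int(round(u * R))
    j = min(int(round(v * R)), R - i)            # clamp inside the triangle
    row_offset = j * (R + 1) - j * (j - 1) // 2  # samples in rows 0..j-1
    return colours[row_offset + i]

# Usage: a resolution-4 triangle holds 15 colour samples.
R = 4
colours = np.random.rand((R + 1) * (R + 2) // 2, 3)
print(mesh_colour_sample(colours, R, (0.5, 0.25, 0.25)))
```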

    On sparse voxel DAGs and memory efficient compression of surface attributes for real-time scenarios

    The general shape of a 3D object can expeditiously be represented as, e.g., triangles or voxels, while smaller-scale features are usually parameterized over the surface of the object. Such features include, but are not limited to, color details, small-scale surface-normal variations, or even view-dependent properties required for light-surface interactions. This thesis is a collection of four papers that focus on new ways to compress and efficiently utilize surface data in 3D for real-time usage. In Papers IA and IB, we extend the concept of sparse voxel DAGs, a real-time compression format for voxel grids, to allow an attribute mapping with a negligible impact on size. The main contribution, however, is a novel real-time compression format for the colors mapped over such sparse voxel surfaces. Paper II builds on the results of the previous papers to achieve UV-free texturing of surfaces, such as triangle meshes, with optimized run-time minification as well as magnification filtering. Paper III extends previous compact representations of view-dependent radiance using spherical Gaussians (SGs). By using a convolutional neural network, we are able to compress the light field by finding SGs with free directions, amplitudes, and sharpnesses, whereas previous methods were limited to free amplitudes only in similar scenarios.
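
    The sparse voxel DAG idea mentioned above reduces a sparse voxel octree to a directed acyclic graph by merging subtrees that are structurally identical. A conceptual Python sketch follows; the node encoding and merging strategy are assumptions for illustration, not the thesis code.

```python
# Conceptual sketch: collapse structurally identical octree subtrees so the
# geometry is stored as a DAG rather than a tree (assumed node encoding:
# None = empty space, True = solid leaf, tuple of 8 children = inner node).
def build_dag(node, cache=None):
    """Return a canonical version of `node` in which identical subtrees
    are represented by a single shared object."""
    if cache is None:
        cache = {}
    if node is None or node is True:
        return node
    children = tuple(build_dag(child, cache) for child in node)
    return cache.setdefault(children, children)   # reuse if already seen

# Two identical 2x2x2 bricks collapse to one shared child node.
brick = (True, None, True, None, True, None, True, None)
root = build_dag((brick, brick, None, None, None, None, None, None))
print(root[0] is root[1])   # True: the subtree is stored only once
```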

    PATTERN RELATED ISSUES IN THE MODELLING OF DEFORMED OVER SURFACE WARP KNITTED STRUCTURES WITH LONGER UNDERLAPS

    The yarn-level modelling of warp knitted structures is a complex process. For structures lying in the plane it is well investigated, and several software solutions and papers have been reported. This paper considers the simulation of warp knitted structures deformed in 3D space. In particular, the modelling of areas with high curvature is examined in detail. Underlaps of longer length lead to unrealistic visualizations in the simulation results. Patterns with different underlap lengths are modelled with an original algorithm developed by the authors. Modelling and visualization problems in areas with long underlaps are discussed and possible solutions are proposed.

    Deep scene-scale material estimation from multi-view indoor captures

    The movie and video game industries have adopted photogrammetry as a way to create digital 3D assets from multiple photographs of a real-world scene. But photogrammetry algorithms typically output an RGB texture atlas of the scene that only serves as visual guidance for skilled artists to create material maps suitable for physically-based rendering. We present a learning-based approach that automatically produces digital assets ready for physically-based rendering, by estimating approximate material maps from multi-view captures of indoor scenes that are used with retopologized geometry. We base our approach on a material estimation Convolutional Neural Network (CNN) that we execute on each input image. We leverage the view-dependent visual cues provided by the multiple observations of the scene by gathering, for each pixel of a given image, the color of the corresponding point in other images. This image-space CNN provides us with an ensemble of predictions, which we merge in texture space as the last step of our approach. Our results demonstrate that the recovered assets can be directly used for physically-based rendering and editing of real indoor scenes from any viewpoint and under novel lighting. Our method generates approximate material maps in a fraction of the time required by the closest previous solutions.
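
    The per-pixel gathering step can be sketched with standard multi-view geometry. The snippet below is a hedged illustration, assuming pinhole cameras with known intrinsics, extrinsics, and per-pixel depth (none of which are detailed in the abstract): it back-projects one pixel of a reference image and collects the colours of the corresponding 3D point in the other views.

```python
# Hedged sketch of multi-view colour gathering (assumed pinhole model and
# known per-pixel depth; occlusion checks omitted; not the paper's code).
import numpy as np

def gather_colours(pixel, depth, K_ref, cam_to_world_ref, other_views):
    """other_views: list of (image, K, world_to_cam) for the remaining photos."""
    u, v = pixel
    # Back-project the reference pixel to a 3D point in world space.
    p_cam = depth * (np.linalg.inv(K_ref) @ np.array([u, v, 1.0]))
    p_world = cam_to_world_ref @ np.append(p_cam, 1.0)   # homogeneous 4-vector
    colours = []
    for image, K, world_to_cam in other_views:
        q = K @ (world_to_cam @ p_world)[:3]             # project into this view
        if q[2] <= 0:                                    # point is behind the camera
            continue
        x, y = int(q[0] / q[2]), int(q[1] / q[2])
        h, w = image.shape[:2]
        if 0 <= x < w and 0 <= y < h:                    # inside the image bounds
            colours.append(image[y, x])
    return colours
```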

    3D visualization of satellite data

    The increase in the amount and complexity of the data currently generated sometimes exceeds our ability to extract information and knowledge from it. In this work we study different ways of visualizing multidimensional datasets obtained from satellite images. The proposed methodology covers the entire process, from the data acquisition stage to three-dimensional visualization. A multi-method visualization technique is used, combining digital elevation models with remote sensing indices derived from imagery of the Sentinel-2 and Landsat-8 missions. The results allow users to explore and establish spatio-temporal relationships in the data in an easy and intuitive way, achieving the established objective.
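
    As a toy illustration of the kind of combined visualization described above, the sketch below drapes a vegetation index over a digital elevation model as a 3D surface. The band arrays and DEM are synthetic placeholders (the work uses real Sentinel-2 and Landsat-8 rasters), and NDVI stands in as a representative remote sensing index.

```python
# Toy sketch: drape an NDVI raster over a DEM as a coloured 3D surface
# (synthetic arrays stand in for real satellite bands and elevation data).
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

# Placeholder rasters: red band, near-infrared band, digital elevation model.
x, y = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
red = 0.2 + 0.1 * np.cos(6 * x)
nir = 0.5 + 0.3 * np.sin(6 * y)
dem = 100 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.1)

ndvi = (nir - red) / (nir + red + 1e-6)      # standard NDVI formula

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(x, y, dem, facecolors=cm.viridis((ndvi + 1) / 2),
                rstride=1, cstride=1, linewidth=0)
ax.set_zlabel("elevation (m)")
plt.show()
```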