311 research outputs found

    An exact general remeshing scheme applied to physically conservative voxelization

    We present an exact general remeshing scheme to compute analytic integrals of polynomial functions over the intersections between convex polyhedral cells of old and new meshes. In physics applications this allows one to ensure global mass, momentum, and energy conservation while applying higher-order polynomial interpolation. We elaborate on applications of our algorithm arising in the analysis of cosmological N-body data, computer graphics, and continuum mechanics problems. We focus on the particular case of remeshing tetrahedral cells onto a Cartesian grid such that the volume integral of the polynomial density function given on the input mesh is guaranteed to equal the corresponding integral over the output mesh. We refer to this as "physically conservative voxelization". At the core of our method is an algorithm for intersecting two convex polyhedra by successively clipping one against the faces of the other. This algorithm is an implementation of the ideas presented abstractly by Sugihara (1994), who suggests using the planar graph representations of convex polyhedra to ensure topological consistency of the output. This makes our implementation robust to geometric degeneracy in the input. We employ a simplicial decomposition to calculate moment integrals up to quadratic order over the resulting intersection domain. We also address practical issues arising in a software implementation, including numerical stability in geometric calculations, management of cancellation errors, and extension to two dimensions. In a comparison to recent work, we show substantial performance gains. We provide a C implementation intended to be a fast, accurate, and robust tool for geometric calculations on polyhedral mesh elements. Comment: Code implementation available at https://github.com/devonmpowell/r3
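    The core clipping step described above can be illustrated in 2D: a convex polygon is clipped against one half-plane at a time, and exact area formulas make the decomposition conservative. The sketch below is a minimal analogue (Sutherland-Hodgman style, with the shoelace formula for areas), not the paper's 3D, graph-based implementation; all function names are illustrative.

```python
def clip_halfplane(poly, n, d):
    """Clip convex polygon `poly` (list of (x, y)) to the half-plane
    n . p + d >= 0, where n = (nx, ny)."""
    out = []
    for i in range(len(poly)):
        p, q = poly[i], poly[(i + 1) % len(poly)]
        sp = n[0] * p[0] + n[1] * p[1] + d
        sq = n[0] * q[0] + n[1] * q[1] + d
        if sp >= 0:
            out.append(p)                   # vertex inside: keep it
        if (sp >= 0) != (sq >= 0):          # edge crosses the clip line
            t = sp / (sp - sq)              # interpolation parameter
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out

def area(poly):
    """Shoelace formula: exact (up to floating point) polygon area."""
    return 0.5 * abs(sum(poly[i][0] * poly[(i + 1) % len(poly)][1]
                         - poly[(i + 1) % len(poly)][0] * poly[i][1]
                         for i in range(len(poly))))

# Clipping the unit square by x <= 0.5 and x >= 0.5 partitions it, and the
# piece areas sum exactly to the original area -- the conservation property
# the remeshing scheme guarantees cell by cell:
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
left = clip_halfplane(square, (-1.0, 0.0), 0.5)   # keeps x <= 0.5
right = clip_halfplane(square, (1.0, 0.0), -0.5)  # keeps x >= 0.5
assert abs(area(left) + area(right) - area(square)) < 1e-12
```

    Repeating the half-plane clip once per face of the other cell yields the intersection polytope; the paper's contribution is doing this robustly in 3D via the planar-graph representation, and integrating polynomials (not just areas) over the result.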

    Perceptual rasterization for head-mounted display image synthesis

    We suggest a rasterization pipeline tailored towards the needs of HMDs, where latency and field-of-view requirements pose new challenges beyond those of traditional desktop displays. Instead of image warping for low latency, or using multiple passes for foveation, we show how both can be produced directly in a single perceptual rasterization pass. We do this with per-fragment ray-casting. This is enabled by derivations of tight space-time-fovea pixel bounds, introducing just enough flexibility for the requisite geometric tests, but retaining most of the simplicity and efficiency of the traditional rasterization pipeline. To produce foveated images, we rasterize to an image with spatially varying pixel density. To compensate for latency, we extend the image formation model to directly produce "rolling" images where the time at each pixel depends on its display location. Our approach overcomes limitations of warping with respect to disocclusions, object motion and view-dependent shading, as well as geometric aliasing artifacts in other foveated rendering techniques. A set of perceptual user studies demonstrates the efficacy of our approach.
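    The "rolling" image formation model can be sketched as follows: the sample time varies with display row, so each scanline reflects the scene state at (approximately) its own scan-out moment rather than the frame's start. This is a toy 1D sketch under an assumed linear top-to-bottom scan-out; the names, the 90 Hz refresh, and the model itself are illustrative assumptions, not the paper's exact formulation.

```python
H = 4                  # image height in scanlines (tiny, for illustration)
FRAME_TIME = 1.0 / 90  # assumed HMD refresh interval in seconds (90 Hz)

def pixel_time(row, t_frame_start):
    """Time assigned to pixels in `row`, assuming linear top-to-bottom scan-out."""
    return t_frame_start + (row / H) * FRAME_TIME

def render_rolling(object_pos_at, t0):
    """'Rasterize' a moving 1D point: each scanline samples it at its own time."""
    return [object_pos_at(pixel_time(row, t0)) for row in range(H)]

# An object moving at constant velocity lands at a fresher position on each
# successive scanline, instead of being frozen at the frame-start time:
moving = lambda t: 10.0 * t            # position = 10 units/s * t
image = render_rolling(moving, t0=0.0)
```

    In a conventional pipeline every row would use `t0`, and latency compensation would require a post-hoc warp; evaluating geometry per-fragment at per-pixel times is what removes the warping step and its disocclusion artifacts.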

    Efficient From-Point Visibility for Global Illumination in Virtual Scenes with Participating Media

    Visibility determination is one of the fundamental building blocks of photorealistic image synthesis. However, since visibility is extremely expensive to compute, nearly all of the computation time is spent on it. In this thesis we present new methods for storing, computing, and approximating visibility in scenes with scattering media, which accelerate the computation considerably while still delivering high-quality, artifact-free results.

    Procedural Modeling and Physically Based Rendering for Synthetic Data Generation in Automotive Applications

    We present an overview and evaluation of a new, systematic approach for generating highly realistic, annotated synthetic data for training deep neural networks in computer vision tasks. The main contribution is a procedural world modeling approach that enables high variability coupled with physically accurate image synthesis, a departure from the hand-modeled virtual worlds and approximate image synthesis methods used in real-time applications. The benefits of our approach include flexible, physically accurate and scalable image synthesis, implicit wide coverage of classes and features, and complete data introspection for annotations, all of which contribute to quality and cost efficiency. To evaluate our approach and the efficacy of the resulting data, we use semantic segmentation for autonomous vehicles and robotic navigation as the main application, and we train multiple deep learning architectures using synthetic data with and without fine-tuning on organic (i.e. real-world) data. The evaluation shows that our approach improves the neural network's performance and that even modest implementation efforts produce state-of-the-art results. Comment: The project web page at http://vcl.itn.liu.se/publications/2017/TKWU17/ contains a version of the paper with high-resolution images as well as additional material

    Improving Filtering for Computer Graphics

    When drawing images onto a computer screen, the information in the scene is typically more detailed than can be displayed. Most objects, however, will not be close to the camera, so details have to be filtered out, or anti-aliased, when the objects are drawn on the screen. I describe new methods for filtering images and shapes with high fidelity while using computational resources as efficiently as possible. Vector graphics are everywhere, from drawing 3D polygons to 2D text and maps for navigation software. Because of its numerous applications, having a fast, high-quality rasterizer is important. I developed a method for analytically rasterizing shapes using wavelets. This approach allows me to produce accurate 2D rasterizations of images and 3D voxelizations of objects, which is the first step in 3D printing. I later improved my method to handle more filters. The resulting algorithm creates higher-quality images than commercial software such as Adobe Acrobat and is several times faster than the most highly optimized commercial products. The quality of texture filtering also has a dramatic impact on the quality of a rendered image. Textures are images that are applied to 3D surfaces, which typically cannot be mapped to the 2D space of an image without introducing distortions. For situations in which it is impossible to change the rendering pipeline, I developed a method for precomputing image filters over 3D surfaces. If I can also change the pipeline, I show that it is possible to improve the quality of texture sampling significantly in real-time rendering while using the same memory bandwidth as traditional methods.
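    For context on the texture-filtering part of the abstract, the standard baseline it improves on is trilinear mipmap filtering: a fractional mip level is selected from the screen-space texel footprint and the two bracketing levels are blended. This is a generic, simplified sketch (scalar per-level values, 1D chain) with assumed helper names, not the thesis's method.

```python
import math

def mip_level(footprint):
    """Fractional mip level from the texel footprint (texels per pixel, > 0)."""
    return max(0.0, math.log2(footprint))

def trilinear(levels, footprint):
    """Blend the two mip levels bracketing the fractional level.
    `levels[0]` is the finest level; coarser levels follow."""
    lvl = mip_level(footprint)
    lo = min(int(lvl), len(levels) - 1)
    hi = min(lo + 1, len(levels) - 1)
    t = lvl - int(lvl)                      # blend weight between the two levels
    return (1 - t) * levels[lo] + t * levels[hi]
```

    A footprint of one texel per pixel reads the finest level unblended; a footprint of two reads the next level; fractional footprints interpolate, which is what avoids visible pops between mip levels.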

    Efficient Many-Light Rendering of Scenes with Participating Media

    We present several approaches based on virtual lights that aim to capture light transport without compromising quality, while preserving the elegance and efficiency of many-light rendering. By reformulating the integration scheme, we obtain two numerically efficient techniques: one tailored specifically for interactive, high-quality lighting on surfaces, and one for handling scenes with participating media.
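    The many-light framework underlying this work can be sketched in a few lines: illumination at a shading point is estimated by summing contributions from virtual point lights (VPLs), with the usual inverse-square falloff and a clamping term to suppress the singularity when a VPL is very close. This is the generic VPL estimator (diffuse, no visibility test), written with assumed names; it is the starting point the abstract's reformulated integration schemes improve upon.

```python
def gather(point, vpls, clamp=10.0):
    """Sum clamped VPL contributions at `point`.
    `vpls` is a list of ((x, y, z), intensity) pairs."""
    total = 0.0
    for (x, y, z), intensity in vpls:
        d2 = (x - point[0]) ** 2 + (y - point[1]) ** 2 + (z - point[2]) ** 2
        # Clamping bounds the 1/d^2 spike near a VPL, trading a little energy
        # for the removal of bright splotches -- the classic many-light bias.
        total += min(intensity / max(d2, 1e-9), clamp)
    return total

vpls = [((0.0, 1.0, 0.0), 4.0), ((2.0, 0.0, 0.0), 4.0)]
radiance = gather((0.0, 0.0, 0.0), vpls)
```

    Generating the VPLs by tracing paths from the light sources, and choosing how (or whether) to clamp and compensate, is where many-light methods differ; handling participating media additionally requires integrating such contributions along view rays.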

    Approximating Signed Distance Field to a Mesh by Artificial Neural Networks

    Previous research has resulted in many surface representations for rendering. However, for some approaches, an accurate representation comes at the expense of large data storage. Considering that Artificial Neural Networks (ANNs) have been shown in recent years to achieve good performance in approximating non-linear functions, their potential for the problem of surface representation needs to be investigated. The goal of this research is to explore how ANNs can efficiently learn the Signed Distance Field (SDF) representation of shapes. Specifically, we investigate how well different ANN architectures can learn 2D SDFs, 3D SDFs, and SDFs approximating a complex triangle mesh. We performed three main experiments to determine which ANN architectures and configurations are suitable for learning SDFs, analyzing the errors in training and testing as well as the rendering results. In addition, three different pipelines, for rendering general SDFs, grid-based SDFs, and ANN-based SDFs, were implemented to show the resulting images on screen. The following data are measured in this research project: the errors in training different ANN architectures; the errors in rendering SDFs; and a comparison between grid-based and ANN-based SDFs. This work demonstrates the use of ANNs to approximate the SDF to a mesh by training on data sampled near the mesh surface, which could be a useful technique in 3D reconstruction and rendering. We have found that the trained neural network is also much smaller than either the triangle mesh or a grid-based SDF, which could be useful for compression applications, and in software or hardware with strict memory-size requirements.
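    A standard way to render any SDF, learned or analytic, is sphere tracing: march along the ray by the signed distance at the current point, which is safe because no surface can be closer than that distance. The sketch below (assumed names, 2D for brevity) uses an exact circle SDF as a stand-in for a trained network; a learned SDF would simply replace `sdf_circle` with the network's forward pass.

```python
def sdf_circle(p, center=(0.0, 0.0), radius=1.0):
    """Exact signed distance from 2D point `p` to a circle (negative inside)."""
    dx, dy = p[0] - center[0], p[1] - center[1]
    return (dx * dx + dy * dy) ** 0.5 - radius

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-6):
    """March from `origin` along unit `direction`; return the hit distance,
    or None if no surface is reached within `max_steps` steps."""
    t = 0.0
    for _ in range(max_steps):
        p = (origin[0] + t * direction[0], origin[1] + t * direction[1])
        d = sdf(p)
        if d < eps:
            return t          # close enough to the surface: report the hit
        t += d                # safe step: nothing is nearer than d
    return None

hit = sphere_trace((-3.0, 0.0), (1.0, 0.0), sdf_circle)  # hits circle at x = -1
```

    For an ANN-based SDF, the same loop works as long as the network's output stays close to a true distance (an overestimate can step through the surface), which is one reason training error near the surface matters for rendering quality.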