
    A halo bias function measured deeply into voids without stochasticity

We study the relationship between dark-matter haloes and matter in the MIP N-body simulation ensemble, which allows precision measurements of this relationship, even deeply into voids. What enables this is a lack of discreteness, stochasticity, and exclusion, achieved by averaging over hundreds of possible sets of initial small-scale modes, while holding fixed the large-scale modes that give the cosmic web. We find (i) that dark-matter-halo formation is greatly suppressed in voids: there is an exponential downturn at low densities in the otherwise power-law matter-to-halo density bias function. Thus, the rarity of haloes in voids is akin to the rarity of the largest clusters, and their abundance is quite sensitive to cosmological parameters. The exponential downturn appears both in an excursion-set model and in a model in which fluctuations evolve in voids as in an open universe with an effective Ω_m proportional to a large-scale density. We also find (ii) that haloes typically populate the average halo-density field in a super-Poisson way, i.e. with a variance exceeding the mean; and (iii) that the rank-order-Gaussianized halo and dark-matter fields are impressively similar in Fourier space. We compare both their power spectra and cross-correlation, supporting the conclusion that one is roughly a strictly increasing mapping of the other. The MIP ensemble especially reveals how halo abundance varies with 'environmental' quantities beyond the local matter density; (iv) we find a visual suggestion that, at fixed matter density, filaments are more populated by haloes than clusters.
Comment: Changed to version accepted by MNRAS
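As a concrete illustration of the rank-order Gaussianization in point (iii), the sketch below maps a density field to a standard normal via its ranks. This is a generic implementation of the technique, not the authors' code; the function name and usage are assumptions.

```python
# Minimal sketch of rank-order Gaussianization, the monotone transform
# applied to the halo and dark-matter fields before comparing them in
# Fourier space. Names here are illustrative.
import numpy as np
from scipy.stats import norm

def gaussianize(field):
    """Map field values to a standard normal via their rank order.

    Monotone by construction, so it preserves the rank ordering of the
    input while forcing a Gaussian one-point distribution.
    """
    flat = field.ravel()
    ranks = np.argsort(np.argsort(flat))     # 0..N-1 rank of each cell
    quantiles = (ranks + 0.5) / flat.size    # ranks -> quantiles in (0, 1)
    return norm.ppf(quantiles).reshape(field.shape)

# If one field is a strictly increasing mapping of the other, their
# Gaussianized versions should agree up to noise, e.g.:
# g_halo = gaussianize(halo_density); g_dm = gaussianize(dm_density)
```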

    SceneDreamer: Unbounded 3D Scene Generation from 2D Image Collections

In this work, we present SceneDreamer, an unconditional generative model for unbounded 3D scenes, which synthesizes large-scale 3D landscapes from random noise. Our framework is learned from in-the-wild 2D image collections only, without any 3D annotations. At the core of SceneDreamer is a principled learning paradigm comprising 1) an efficient yet expressive 3D scene representation, 2) a generative scene parameterization, and 3) an effective renderer that can leverage the knowledge from 2D images. Our approach begins with an efficient bird's-eye-view (BEV) representation generated from simplex noise, which includes a height field for surface elevation and a semantic field for detailed scene semantics. This BEV scene representation enables 1) representing a 3D scene with quadratic complexity, 2) disentangled geometry and semantics, and 3) efficient training. Moreover, we propose a novel generative neural hash grid to parameterize the latent space based on 3D positions and scene semantics, aiming to encode generalizable features across various scenes. Lastly, a neural volumetric renderer, learned from 2D image collections through adversarial training, is employed to produce photorealistic images. Extensive experiments demonstrate the effectiveness of SceneDreamer and its superiority over state-of-the-art methods in generating vivid yet diverse unbounded 3D worlds.
Comment: Project page: https://scene-dreamer.github.io/ Code: https://github.com/FrozenBurning/SceneDreamer
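The BEV representation described above can be made concrete with a toy sketch: a height field and a semantic field on a 2D ground-plane grid, so storage grows quadratically with resolution rather than cubically. Smoothed Gaussian noise stands in for the simplex noise used in the paper; all names and parameters are illustrative.

```python
# Toy BEV scene representation: height field + semantic field on a
# 2D grid. Smoothed Gaussian noise is a stand-in for simplex noise.
import numpy as np
from scipy.ndimage import gaussian_filter

def toy_bev(size=256, n_classes=4, seed=0):
    rng = np.random.default_rng(seed)
    # Height field: low-frequency noise as surface elevation.
    height = gaussian_filter(rng.standard_normal((size, size)), sigma=16)
    # Semantic field: quantize a second noise layer (correlated with
    # elevation) into class labels, e.g. water / sand / grass / rock.
    sem_noise = gaussian_filter(rng.standard_normal((size, size)), sigma=8)
    mixed = height + 0.5 * sem_noise
    bins = np.quantile(mixed, np.linspace(0, 1, n_classes + 1)[1:-1])
    semantics = np.digitize(mixed, bins)
    return height, semantics  # each (size, size): O(size^2) storage
```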

    Modelling and Visualisation of the Optical Properties of Cloth

Cloth and garment visualisations are widely used in fashion and interior design, entertainment, and the automotive and nautical industries, and are indispensable elements of visual communication. Modern appearance models attempt to offer a complete solution for the visualisation of complex cloth properties. In the review part of the chapter, advanced methods that enable visualisation at micron resolution, methods used in the three-dimensional (3D) visualisation workflow, and methods used for research purposes are presented. Within the review, methods offering a comprehensive approach, together with experiments on specific cloth attributes that exhibit particular optical phenomena, are analysed. The review of appearance models includes surface and image-based models, volumetric models, and explicit models. Each group is presented with a representative research group, along with the applications and limitations of the methods. In the final part of the chapter, the visualisation of cloth specularity and porosity with an uneven surface is studied. The study and visualisation were performed using image data obtained with photography. Acquiring structure information at a large scale enables the recording of structure irregularities that are very common on historical textiles and laces, as well as on artistic and experimental pieces of cloth. The contribution ends with the presentation of cloth visualised using specular and alpha maps, the result of the image-processing workflow.
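As a rough illustration of how specular and alpha (porosity) maps might be derived from photographic data, the sketch below thresholds a grayscale photograph: bright pixels become specular highlights on raised yarns, dark pixels become pores. The file names and thresholds are assumptions, not the chapter's actual workflow.

```python
# Hypothetical sketch: derive specular and alpha maps from a cloth
# photograph via intensity thresholds. Values are illustrative.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("cloth_photo.png").convert("L"),
                 dtype=np.float32) / 255.0

# Specular map: the brightest regions are treated as highlights.
specular = np.clip((img - 0.8) / 0.2, 0.0, 1.0)

# Alpha map: the darkest regions are treated as pores between yarns,
# so they become transparent in the rendered cloth.
alpha = np.clip(img / 0.2, 0.0, 1.0)

Image.fromarray((specular * 255).astype(np.uint8)).save("specular_map.png")
Image.fromarray((alpha * 255).astype(np.uint8)).save("alpha_map.png")
```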

    Invisible Seams

Surface materials are commonly described by attributes stored in textures (for instance, color, normal, or displacement). Interpolation during texture lookup provides a continuous value field everywhere on the surface, except at the chart boundaries, where visible discontinuities appear. We propose a solution to make these seams invisible while still outputting a standard texture atlas. Our method relies on recent advances in quad remeshing using global parameterization to produce a set of texture coordinates aligning texel grids across chart boundaries. This property makes it possible to ensure that the interpolated value fields on both sides of a chart boundary precisely match, making all seams invisible. However, this requirement on the uv coordinates needs to be complemented by a set of constraints on the colors stored in the texels. We propose an algorithm solving for all the necessary constraints between texel values, including through different magnification modes (nearest, bilinear, biquadratic, and bicubic) and across facets using different texture resolutions. In the typical case of bilinear magnification and uniform resolution, none of the texels appearing on the surface are constrained. Our approach also ensures perfect continuity across several MIP-mapping levels.
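In the simplest bilinear case, once the texel grids are aligned, the interpolated fields match along a shared chart edge whenever the corresponding boundary texels hold identical values. A minimal sketch of that stitching step follows; the function name and atlas layout are assumptions, not the paper's solver.

```python
# Hedged sketch: equalize texel values along a shared chart edge so
# that bilinear interpolation matches on both sides of the seam.
import numpy as np

def stitch_shared_edge(atlas, edge_a, edge_b):
    """Equalize texel values along a shared chart edge.

    atlas:  float image array, (H, W) or (H, W, C)
    edge_a, edge_b: (N, 2) integer texel coordinates (x, y) in the
    atlas for the same geometric edge, listed in corresponding order.
    """
    vals_a = atlas[edge_a[:, 1], edge_a[:, 0]]
    vals_b = atlas[edge_b[:, 1], edge_b[:, 0]]
    shared = 0.5 * (vals_a + vals_b)  # one value per edge texel
    atlas[edge_a[:, 1], edge_a[:, 0]] = shared
    atlas[edge_b[:, 1], edge_b[:, 0]] = shared
    return atlas
```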

360Roam: Real-Time Indoor Roaming Using Geometry-Aware 360° Radiance Fields

Virtual tours among sparse 360° images are widely used, but they hinder smooth and immersive roaming experiences. The emergence of Neural Radiance Fields (NeRF) has showcased significant progress in synthesizing novel views, unlocking the potential for immersive scene exploration. Nevertheless, previous NeRF works primarily focused on object-centric scenarios, resulting in noticeable performance degradation when applied to outward-facing and large-scale scenes due to limitations in scene parameterization. To achieve seamless and real-time indoor roaming, we propose a novel approach using geometry-aware radiance fields with adaptively assigned local radiance fields. Initially, we employ multiple 360° images of an indoor scene to progressively reconstruct explicit geometry in the form of a probabilistic occupancy map, derived from a global omnidirectional radiance field. Subsequently, we assign local radiance fields through an adaptive divide-and-conquer strategy based on the recovered geometry. By incorporating geometry-aware sampling and decomposition of the global radiance field, our system effectively utilizes positional encoding and compact neural networks to enhance rendering quality and speed. Additionally, the extracted floorplan of the scene aids in providing visual guidance, contributing to a realistic roaming experience. To demonstrate the effectiveness of our system, we curated a diverse dataset of 360° images encompassing various real-life scenes, on which we conducted extensive experiments. Quantitative and qualitative comparisons against baseline approaches illustrate the superior performance of our system in large-scale indoor scene roaming.
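A sketch of the first step, building a probabilistic occupancy map by querying a trained global radiance field on a voxel grid, is given below. Here query_density is a hypothetical stand-in for the trained model, and the opacity formula is the standard volume-rendering one, not necessarily the paper's exact choice.

```python
# Sketch: probabilistic occupancy map from a global radiance field,
# using the standard opacity alpha = 1 - exp(-sigma * dt).
import numpy as np

def occupancy_map(query_density, bounds_min, bounds_max, res=128, step=0.05):
    """Voxel grid of occupancy probabilities over the scene bounds.

    query_density: callable mapping (N, 3) points -> (N,) densities
    step: assumed ray-marching step length used to convert density
          to per-voxel opacity.
    """
    axes = [np.linspace(lo, hi, res) for lo, hi in zip(bounds_min, bounds_max)]
    xs, ys, zs = np.meshgrid(*axes, indexing="ij")
    pts = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)
    sigma = query_density(pts)               # volume density per point
    alpha = 1.0 - np.exp(-sigma * step)      # occupancy probability
    return alpha.reshape(res, res, res)
```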

    Improving Filtering for Computer Graphics

When drawing images onto a computer screen, the information in the scene is typically more detailed than can be displayed. Most objects, however, will not be close to the camera, so details have to be filtered out, or anti-aliased, when the objects are drawn on the screen. I describe new methods for filtering images and shapes with high fidelity while using computational resources as efficiently as possible. Vector graphics are everywhere, from 3D polygons to 2D text and maps for navigation software. Because of its numerous applications, having a fast, high-quality rasterizer is important. I developed a method for analytically rasterizing shapes using wavelets. This approach produces accurate 2D rasterizations of images and 3D voxelizations of objects, the first step in 3D printing. I later improved the method to handle more filters. The resulting algorithm creates higher-quality images than commercial software such as Adobe Acrobat and is several times faster than the most highly optimized commercial products. The quality of texture filtering also has a dramatic impact on the quality of a rendered image. Textures are images applied to 3D surfaces, which typically cannot be mapped to the 2D space of an image without introducing distortions. For situations in which it is impossible to change the rendering pipeline, I developed a method for precomputing image filters over 3D surfaces. When the pipeline can be changed, I show that it is possible to significantly improve the quality of texture sampling in real-time rendering while using the same memory bandwidth as traditional methods.
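To make the idea of analytic (rather than point-sampled) filtering concrete, the sketch below computes the exact box-filter coverage of one pixel by the half-plane under a line. It is a textbook illustration, not the wavelet-based method of the thesis.

```python
# Exact box-filter coverage of a pixel by the half-plane y < m*x + c.
# The covered column height at x is clamp(m*x + c - py, 0, 1); adding
# the clamp breakpoints makes the trapezoid rule exact on this
# piecewise-linear integrand.
import numpy as np

def pixel_coverage(px, py, m, c):
    """Fraction of unit pixel [px,px+1]x[py,py+1] below y = m*x + c."""
    h = lambda x: np.clip(m * x + c - py, 0.0, 1.0)
    xs = [px, px + 1.0]
    if m != 0.0:
        for k in ((py - c) / m, (py + 1.0 - c) / m):  # clamp kinks
            if px < k < px + 1.0:
                xs.append(k)
    xs = np.array(sorted(xs))
    return float(np.trapz(h(xs), xs))

# e.g. pixel_coverage(0, 0, 0.0, 0.5) == 0.5: the line y = 0.5 covers
# exactly half of the pixel at the origin.
```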

    PyNeRF: Pyramidal Neural Radiance Fields

Neural Radiance Fields (NeRFs) can be dramatically accelerated by spatial grid representations. However, they do not explicitly reason about scale and so introduce aliasing artifacts when reconstructing scenes captured at different camera distances. Mip-NeRF and its extensions propose scale-aware renderers that project volumetric frustums rather than point samples, but such approaches rely on positional encodings that are not readily compatible with grid methods. We propose a simple modification to grid-based models: training model heads at different spatial grid resolutions. At render time, we simply use coarser grids to render samples that cover larger volumes. Our method can be easily applied to existing accelerated NeRF methods and significantly improves rendering quality (reducing error rates by 20-90% across synthetic and unbounded real-world scenes) while incurring minimal performance overhead (as each model head is quick to evaluate). Compared to Mip-NeRF, we reduce error rates by 20% while training over 60x faster.
Comment: NeurIPS 2023. Project page: https://haithemturki.com/pynerf
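One possible reading of the render-time rule, selecting a coarser head for samples with larger world-space footprints, is sketched below. The level formula and all names are assumptions; the actual method may, for instance, interpolate between adjacent levels.

```python
# Sketch: pick a model head per sample from its world-space footprint.
# Assumes each coarser head doubles the grid cell size.
import numpy as np

def select_head(sample_radius, base_cell_size, num_heads):
    """Head index per sample (0 = finest grid).

    sample_radius:  per-sample cone/frustum radius (larger = farther)
    base_cell_size: cell size of the finest grid head
    """
    # Matching level is the log2 ratio of footprint to finest cell,
    # clamped to the available heads.
    level = np.floor(np.log2(np.maximum(sample_radius / base_cell_size, 1.0)))
    return np.clip(level, 0, num_heads - 1).astype(int)
```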

11th German Conference on Chemoinformatics (GCC 2015): Fulda, Germany, 8-10 November 2015.
