
    Efficient Graphics Representation with Differentiable Indirection

    We introduce differentiable indirection -- a novel learned primitive that employs differentiable multi-scale lookup tables as an effective substitute for traditional compute and data operations across the graphics pipeline. We demonstrate its flexibility on a number of graphics tasks, i.e., geometric and image representation, texture mapping, shading, and radiance field representation. In all cases, differentiable indirection seamlessly integrates into existing architectures, trains rapidly, and yields both versatile and efficient results.
    Project website: https://sayan1an.github.io/din.htm
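    The core primitive is easy to prototype: one learned table produces coordinates that index into a second learned table, and because both lookups use bilinear interpolation, gradients flow through the indirection end to end. Below is a minimal PyTorch sketch of that idea; the 2D tables, resolutions, and initialization are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

class DifferentiableIndirection(torch.nn.Module):
    def __init__(self, primary_res=64, cascaded_res=256, channels=3):
        super().__init__()
        # Primary table: maps an input UV to a pointer (a coordinate in [-1, 1]^2).
        self.primary = torch.nn.Parameter(torch.rand(1, 2, primary_res, primary_res) * 2 - 1)
        # Cascaded table: stores the actual output values (e.g., texels).
        self.cascaded = torch.nn.Parameter(torch.rand(1, channels, cascaded_res, cascaded_res))

    def forward(self, uv):
        # uv: (N, 2) coordinates in [-1, 1]; reshape to grid_sample's layout.
        grid = uv.view(1, -1, 1, 2)
        # First lookup: bilinearly fetch a pointer from the primary table.
        pointer = F.grid_sample(self.primary, grid, mode='bilinear',
                                align_corners=True)            # (1, 2, N, 1)
        pointer = pointer.permute(0, 2, 3, 1)                  # (1, N, 1, 2)
        # Second lookup: dereference the pointer into the cascaded table.
        out = F.grid_sample(self.cascaded, pointer, mode='bilinear',
                            align_corners=True)                # (1, C, N, 1)
        return out.squeeze(3).squeeze(0).t()                   # (N, C)

din = DifferentiableIndirection()
uv = torch.rand(1024, 2) * 2 - 1
loss = din(uv).pow(2).mean()
loss.backward()  # gradients reach both tables through the interpolated lookups
```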

    ReplaceAnything3D: Text-Guided 3D Scene Editing with Compositional Neural Radiance Fields

    We introduce the ReplaceAnything3D model (RAM3D), a novel text-guided 3D scene editing method that enables the replacement of specific objects within a scene. Given multi-view images of a scene, a text prompt describing the object to replace, and a text prompt describing the new object, our Erase-and-Replace approach can effectively swap objects in the scene with newly generated content while maintaining 3D consistency across multiple viewpoints. We demonstrate the versatility of ReplaceAnything3D by applying it to various realistic 3D scenes, showcasing results of modified foreground objects that are well integrated with the rest of the scene without affecting its overall integrity.
    Project page: https://replaceanything3d.github.io
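    The abstract mentions compositional neural radiance fields for blending the generated object with the reconstructed scene. A common way to composite two radiance fields along a ray is to sum their densities and density-weight their colors; the sketch below illustrates that standard convention, which is an assumption here rather than RAM3D's confirmed formulation.

```python
import torch

def composite_ray(sigma_bg, rgb_bg, sigma_fg, rgb_fg, deltas):
    """sigma_*: (S,) densities; rgb_*: (S, 3) colors; deltas: (S,) step sizes."""
    sigma = sigma_bg + sigma_fg
    # Density-weighted blend of the two fields' colors at each sample.
    rgb = (sigma_bg[:, None] * rgb_bg + sigma_fg[:, None] * rgb_fg) / (sigma[:, None] + 1e-8)
    alpha = 1.0 - torch.exp(-sigma * deltas)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-8])[:-1], dim=0)
    weights = alpha * trans                       # standard volume-rendering weights
    return (weights[:, None] * rgb).sum(dim=0)    # (3,) final pixel color

S = 64  # samples along one ray
pixel = composite_ray(torch.rand(S), torch.rand(S, 3),
                      torch.rand(S), torch.rand(S, 3),
                      torch.full((S,), 0.02))
```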

    PRPF3

    Purpose. To characterize the clinical and molecular genetic characteristics of a large, multigenerational Chinese family showing different phenotypes. Methods. A pedigree consisting of 56 individuals across 5 generations was recruited. Comprehensive ophthalmic examinations were performed on 16 affected family members. Mutation screening of CYP4V2 was performed by Sanger sequencing. Next-generation sequencing (NGS) was used to capture and sequence all exons of 47 known retinal dystrophy-associated genes in two affected family members who had no mutations in CYP4V2. Variants detected by NGS were validated by Sanger sequencing in the family members. Results. Two compound heterozygous CYP4V2 mutations (c.802-8_810del17insGC and c.992A>C) were detected in the proband, who presented typical clinical features of Bietti crystalline dystrophy (BCD). One missense mutation (c.1482C>T, p.T494M) in the PRPF3 gene was detected in 9 of 22 affected family members, who manifested classical clinical features of retinitis pigmentosa (RP). Conclusions. Our results showed that the two compound heterozygous CYP4V2 mutations caused BCD, and the missense mutation in PRPF3 was responsible for autosomal dominant RP (adRP) in this large family. This study suggests that accurate phenotypic diagnosis, molecular diagnosis, and genetic counseling are necessary for patients with hereditary retinal degeneration in such large multigenerational families.

    IRIS: Inverse Rendering of Indoor Scenes from Low Dynamic Range Images

    While numerous 3D reconstruction and novel-view synthesis methods allow for photorealistic rendering of a scene from multi-view images easily captured with consumer cameras, they bake illumination into their representations and fall short of supporting advanced applications like material editing, relighting, and virtual object insertion. Reconstructing physically based material properties and lighting via inverse rendering promises to enable such applications. However, most inverse rendering techniques require high dynamic range (HDR) images as input, a setting that is inaccessible to most users. We present a method that recovers the physically based material properties and spatially varying HDR lighting of a scene from multi-view, low dynamic range (LDR) images. We model the LDR image formation process in our inverse rendering pipeline and propose a novel optimization strategy for material, lighting, and a camera response model. We evaluate our approach on synthetic and real scenes against state-of-the-art inverse rendering methods that take either LDR or HDR input. Our method outperforms existing methods taking LDR images as input, and allows for highly realistic relighting and object insertion.
    Project website: https://irisldr.github.io
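    The key modeling choice, per the abstract, is an explicit LDR image formation model inside the inverse rendering loop. A minimal differentiable sketch of such a model, assuming a simple gamma-style camera response curve and a per-image exposure (the paper's actual response model may differ), might look like this:

```python
import torch

def ldr_formation(hdr, log_exposure, inv_gamma):
    """hdr: (..., 3) linear radiance; returns LDR pixel values in [0, 1]."""
    exposed = hdr * torch.exp(log_exposure)           # per-image exposure scaling
    response = exposed.clamp(min=1e-8) ** inv_gamma   # assumed gamma-style CRF
    return response.clamp(0.0, 1.0)                   # sensor clipping -> LDR

hdr = (torch.rand(8, 3) * 4.0).requires_grad_()       # HDR radiance (can exceed 1)
log_exposure = torch.tensor(0.0, requires_grad=True)
inv_gamma = torch.tensor(1.0 / 2.2, requires_grad=True)
loss = (ldr_formation(hdr, log_exposure, inv_gamma) - 0.5).pow(2).mean()
loss.backward()  # gradients flow to radiance, exposure, and the response curve
```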

    Neural-PBIR Reconstruction of Shape, Material, and Illumination

    Reconstructing the shape and spatially varying surface appearance of a physical-world object, as well as its surrounding illumination, from 2D images (e.g., photographs) of the object has been a long-standing problem in computer vision and graphics. In this paper, we introduce a robust object reconstruction pipeline combining neural object reconstruction with physics-based inverse rendering (PBIR). Specifically, our pipeline first leverages a neural stage to produce high-quality but potentially imperfect predictions of object shape, reflectance, and illumination. In the second stage, initialized by the neural predictions, we perform PBIR to refine the initial results and obtain the final high-quality reconstruction. Experimental results demonstrate that our pipeline significantly outperforms existing reconstruction methods in both quality and performance.
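    The second stage is, in essence, analysis-by-synthesis: starting from the neural stage's predictions, scene parameters are refined by gradient descent on a re-rendering loss. The sketch below shows that loop in PyTorch; `render` stands in for a differentiable physically based renderer and is a placeholder, not a real API.

```python
import torch

def refine_pbir(shape, reflectance, illumination, targets, render, steps=200):
    """Refine neural-stage predictions by minimizing a re-rendering loss."""
    params = [p.requires_grad_() for p in (shape, reflectance, illumination)]
    opt = torch.optim.Adam(params, lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = (render(*params) - targets).pow(2).mean()  # photometric loss
        loss.backward()
        opt.step()
    return params  # refined shape, reflectance, illumination

# Toy usage with a stand-in linear "renderer", just to exercise the loop.
render = lambda s, r, i: s + r * i
refined = refine_pbir(torch.zeros(4), torch.ones(4), torch.zeros(4),
                      torch.full((4,), 0.5), render)
```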

    Physically-Based Editing of Indoor Scene Lighting from a Single Image

    We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks. This is an extremely challenging problem that requires modeling complex light transport and disentangling HDR lighting from material and geometry with only a partial LDR observation of the scene. We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions. We use physically-based indoor light representations that allow for intuitive editing and infer both visible and invisible light sources. Our neural rendering framework combines physically-based direct illumination and shadow rendering with deep networks to approximate global illumination. It can capture challenging lighting effects such as soft shadows, directional lighting, specular materials, and interreflections. Previous single-image inverse rendering methods usually entangle scene lighting and geometry and only support applications like object insertion. Instead, by combining parametric 3D lighting estimation with neural scene rendering, we demonstrate the first automatic method to achieve full scene relighting, including light source insertion, removal, and replacement, from a single image. All source code and data will be publicly released.
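    The hybrid split described above, physically-based direct lighting plus a network that fills in global illumination, can be sketched compactly. The architecture, feature inputs, and compositing rule below are illustrative assumptions, not the paper's exact design.

```python
import torch

class HybridRenderer(torch.nn.Module):
    def __init__(self, feat_dim=16):
        super().__init__()
        # Small MLP standing in for the learned global-illumination component.
        self.gi_net = torch.nn.Sequential(
            torch.nn.Linear(feat_dim, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 3))

    def forward(self, direct, shadow, features):
        # direct: (N, 3) physically based direct lighting per pixel;
        # shadow: (N, 1) visibility in [0, 1]; features: (N, F) G-buffer features.
        indirect = self.gi_net(features)   # network approximates indirect light
        return direct * shadow + indirect  # editable direct term + learned rest

model = HybridRenderer()
rgb = model(torch.rand(100, 3), torch.rand(100, 1), torch.rand(100, 16))  # (100, 3)
```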

    Inclination Measurement Based on MEMS Accelerometer

    With its small size and low power consumption, the MEMS accelerometer is well suited to inclination (dip angle) measurement. This study describes the working principle of the MEMS accelerometer and analyzes how it can be used to measure inclination. The ADXL345 triaxial digital accelerometer chip was driven over SPI by an MSP430F149 microcontroller, and the interface circuit and driver were designed, successfully achieving inclination measurement. The measurement error is ±0.3°, the resolution reaches 0.015°, and the measuring system has low power consumption.
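    For reference, the geometry behind accelerometer-based inclination is straightforward: with the sensor static, gravity's projection onto the three axes determines the tilt angles. The sketch below shows the standard pitch/roll formulas; ADXL345 register access over SPI is omitted, and only the math is shown.

```python
import math

def inclination_deg(ax, ay, az):
    """Pitch and roll in degrees from accelerations along x, y, z (any unit)."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Example: gravity mostly on z with a small x component (sensor slightly tilted).
print(inclination_deg(0.1, 0.0, 0.99))  # pitch ~ -5.8 degrees, roll = 0
```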