
    Learning to Generate 3D Shapes from a Single Example

    Existing generative models for 3D shapes are typically trained on a large 3D dataset, often of a specific object category. In this paper, we investigate a deep generative model that learns from only a single reference 3D shape. Specifically, we present a multi-scale GAN-based model designed to capture the input shape's geometric features across a range of spatial scales. To avoid the large memory and computational cost of operating on the 3D volume, we build our generator atop the tri-plane hybrid representation, which requires only 2D convolutions. We train our generative model on a voxel pyramid of the reference shape, without the need for any external supervision or manual annotation. Once trained, our model can generate diverse and high-quality 3D shapes, possibly of different sizes and aspect ratios. The resulting shapes present variations across different scales while retaining the global structure of the reference shape. Through extensive qualitative and quantitative evaluation, we demonstrate that our model can generate 3D shapes of various types.
    Comment: SIGGRAPH Asia 2022; 19 pages (including a 6-page appendix), 17 figures. Project page: http://www.cs.columbia.edu/cg/SingleShapeGen
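    The tri-plane hybrid representation referenced above can be illustrated with a minimal sketch (all names, shapes, and the decoder MLP below are assumptions for illustration, not the authors' implementation): a 3D query point is projected onto three axis-aligned 2D feature planes, features are sampled bilinearly from each plane and aggregated, and a small MLP decodes them into an occupancy or signed-distance value. Because the feature planes are 2D, they can be produced and refined with ordinary 2D convolutions.

```python
# Minimal sketch of a tri-plane lookup (assumed layout; not the authors' code).
import torch
import torch.nn.functional as F
from torch import nn

class TriPlaneDecoder(nn.Module):
    def __init__(self, channels=32, resolution=128):
        super().__init__()
        # Three learnable axis-aligned feature planes (XY, XZ, YZ), each (C, R, R).
        self.planes = nn.Parameter(0.01 * torch.randn(3, channels, resolution, resolution))
        # Small MLP mapping aggregated plane features to an occupancy / SDF value.
        self.mlp = nn.Sequential(nn.Linear(channels, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, points):  # points: (N, 3) in [-1, 1]^3
        feats = 0.0
        # Project each 3D point onto the three planes and sample features bilinearly.
        for plane, axes in zip(self.planes, [(0, 1), (0, 2), (1, 2)]):
            grid = points[:, list(axes)].view(1, 1, -1, 2)        # (1, 1, N, 2)
            sampled = F.grid_sample(plane.unsqueeze(0), grid,     # (1, C, 1, N)
                                    mode="bilinear", align_corners=True)
            feats = feats + sampled[0, :, 0, :].T                 # (N, C)
        return self.mlp(feats)                                    # (N, 1)

# Query the (untrained) representation at 1024 random points.
occupancy = TriPlaneDecoder()(torch.rand(1024, 3) * 2 - 1)
```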

    Sin3DM: Learning a Diffusion Model from a Single 3D Textured Shape

    Synthesizing novel 3D models that resemble the input example has long been pursued by researchers and artists in computer graphics. In this paper, we present Sin3DM, a diffusion model that learns the internal patch distribution from a single 3D textured shape and generates high-quality variations with fine geometry and texture details. Training a diffusion model directly in 3D would induce large memory and computational cost. Therefore, we first compress the input into a lower-dimensional latent space and then train a diffusion model on it. Specifically, we encode the input 3D textured shape into triplane feature maps that represent the signed distance and texture fields of the input. The denoising network of our diffusion model has a limited receptive field to avoid overfitting, and uses triplane-aware 2D convolution blocks to improve the result quality. Aside from randomly generating new samples, our model also facilitates applications such as retargeting, outpainting and local editing. Through extensive qualitative and quantitative evaluation, we show that our model can generate 3D shapes of various types with better quality than prior methods.
    Comment: Project page: https://Sin3DM.github.io, Code: https://github.com/Sin3DM/Sin3D
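    As a rough picture of the latent-diffusion training described above, the sketch below shows a generic DDPM-style denoising step on a fixed triplane latent; the encoder/denoiser interfaces, tensor shapes, and noise schedule are assumptions, not the Sin3DM code.

```python
# Generic latent-diffusion training step on a fixed triplane latent (assumed shapes/APIs).
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # standard DDPM noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def training_step(denoiser, latent, optimizer):
    """latent: (1, C, H, W) triplane feature maps produced once by the encoder (assumed)."""
    t = torch.randint(0, T, (1,))
    noise = torch.randn_like(latent)
    a = alphas_bar[t].view(1, 1, 1, 1)
    noisy = a.sqrt() * latent + (1 - a).sqrt() * noise   # forward diffusion q(x_t | x_0)
    pred = denoiser(noisy, t)                            # limited-receptive-field 2D denoiser (assumed)
    loss = F.mse_loss(pred, noise)                       # train the network to predict the added noise
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```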

    ReconFusion: 3D Reconstruction with Diffusion Priors

    3D reconstruction methods such as Neural Radiance Fields (NeRFs) excel at rendering photorealistic novel views of complex scenes. However, recovering a high-quality NeRF typically requires tens to hundreds of input images, resulting in a time-consuming capture process. We present ReconFusion to reconstruct real-world scenes using only a few photos. Our approach leverages a diffusion prior for novel view synthesis, trained on synthetic and multiview datasets, which regularizes a NeRF-based 3D reconstruction pipeline at novel camera poses beyond those captured by the set of input images. Our method synthesizes realistic geometry and texture in underconstrained regions while preserving the appearance of observed regions. We perform an extensive evaluation across various real-world datasets, including forward-facing and 360-degree scenes, demonstrating significant performance improvements over previous few-view NeRF reconstruction approaches.
    Comment: Project page: https://reconfusion.github.io
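    One way to picture the diffusion-regularized reconstruction described above is the simplified training step below; the function names and the exact form of the regularization term are assumptions for illustration and differ from ReconFusion's actual losses. The NeRF is fit to the few captured photos as usual, while renders from unobserved poses are pulled toward images proposed by the view-synthesis diffusion prior.

```python
# Simplified sketch of a diffusion-regularized NeRF update (all interfaces assumed).
import torch
import torch.nn.functional as F

def train_step(nerf, diffusion_prior, observed, novel_poses, optimizer, w_prior=0.1):
    # 1) Standard photometric loss on the few captured input images.
    recon = sum(F.mse_loss(nerf.render(pose), image) for pose, image in observed)

    # 2) Regularization at novel poses: pull renders toward the diffusion prior's prediction.
    reg = 0.0
    for pose in novel_poses:
        render = nerf.render(pose)                                        # assumed interface
        with torch.no_grad():
            target = diffusion_prior.sample(condition=observed, pose=pose)  # plausible novel view (assumed)
        reg = reg + F.mse_loss(render, target)

    loss = recon + w_prior * reg
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```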

    Photothermally Activated Artificial Neuromorphic Synapses

    Biological nervous systems rely on the coordination of billions of neurons with complex, dynamic connectivity to process information and form memories. In turn, artificial intelligence and neuromorphic computing platforms have sought to mimic biological cognition through software-based neural networks and hardware demonstrations utilizing memristive circuitry with fixed dynamics. To bring the advantages of tunable, dynamic software implementations of neural networks into hardware, we develop a proof-of-concept artificial synapse with adaptable resistivity. This synapse leverages the photothermally induced local phase transition of VO2 thin films driven by temporally modulated laser pulses. The process rapidly and site-selectively modifies the conductivity of the film by a factor of 500 to "activate" these neurons; "memory" is stored by applying varying bias voltages that induce self-sustained Joule heating between the electrodes after laser activation. These synapses are demonstrated to undergo a complete heating and cooling cycle in less than 120 ns.