Learning to Generate 3D Shapes from a Single Example
Existing generative models for 3D shapes are typically trained on large 3D
datasets, often of a specific object category. In this paper, we investigate a
deep generative model that learns from only a single reference 3D shape.
Specifically, we present a multi-scale GAN-based model designed to capture the
input shape's geometric features across a range of spatial scales. To avoid
large memory and computational cost induced by operating on the 3D volume, we
build our generator atop the tri-plane hybrid representation, which requires
only 2D convolutions. We train our generative model on a voxel pyramid of the
reference shape, without the need for any external supervision or manual
annotation. Once trained, our model can generate diverse and high-quality 3D
shapes, possibly with different sizes and aspect ratios. The resulting shapes
present variations across different scales, and at the same time retain the
global structure of the reference shape. Through extensive evaluation, both
qualitative and quantitative, we demonstrate that our model can generate 3D
shapes of various types.
Comment: SIGGRAPH Asia 2022; 19 pages (including 6 pages appendix), 17 figures. Project page: http://www.cs.columbia.edu/cg/SingleShapeGen
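The tri-plane hybrid representation mentioned above stores a 3D field as three axis-aligned 2D feature maps, so the generator only needs 2D convolutions. A minimal sketch of how such a representation is queried at a 3D point (nearest-cell lookup and summed aggregation here are simplifications for illustration; the names `triplane_query` and the plane keys are hypothetical, not the paper's code):

```python
import numpy as np

def triplane_query(planes, point):
    """Query a tri-plane representation at a 3D point in [0, 1)^3.

    planes: dict of 2D feature maps for the 'xy', 'xz', 'yz' planes,
            each of shape (R, R, C). The point is projected onto each
            plane, and the three looked-up features are summed.
    """
    R = planes["xy"].shape[0]
    # Nearest-cell lookup for simplicity (real systems interpolate bilinearly).
    x, y, z = (np.clip(int(c * R), 0, R - 1) for c in point)
    return planes["xy"][x, y] + planes["xz"][x, z] + planes["yz"][y, z]

# Toy example: three feature planes of resolution 8 with 4 channels each.
rng = np.random.default_rng(0)
planes = {k: rng.standard_normal((8, 8, 4)) for k in ("xy", "xz", "yz")}
feat = triplane_query(planes, (0.5, 0.25, 0.75))
print(feat.shape)  # (4,)
```

Because the learnable parameters live in 2D maps, memory and compute scale with R^2 rather than R^3, which is the advantage the abstract points to.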
Sin3DM: Learning a Diffusion Model from a Single 3D Textured Shape
Synthesizing novel 3D models that resemble the input example has long been
pursued by researchers and artists in computer graphics. In this paper, we
present Sin3DM, a diffusion model that learns the internal patch distribution
from a single 3D textured shape and generates high-quality variations with fine
geometry and texture details. Training a diffusion model directly in 3D would
induce large memory and computational cost. Therefore, we first compress the
input into a lower-dimensional latent space and then train a diffusion model on
it. Specifically, we encode the input 3D textured shape into triplane feature
maps that represent the signed distance and texture fields of the input. The
denoising network of our diffusion model has a limited receptive field to avoid
overfitting, and uses triplane-aware 2D convolution blocks to improve the
result quality. Aside from randomly generating new samples, our model also
facilitates applications such as retargeting, outpainting and local editing.
Through extensive qualitative and quantitative evaluation, we show that our
model can generate 3D shapes of various types with better quality than prior
methods.
Comment: Project page: https://Sin3DM.github.io, Code: https://github.com/Sin3DM/Sin3D
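Sin3DM trains its diffusion model on a compressed triplane latent rather than a 3D volume. The forward (noising) process it relies on is the standard DDPM formulation q(x_t | x_0) = N(sqrt(ᾱ_t) x_0, (1 − ᾱ_t) I); a minimal sketch applied to a latent tensor (the schedule, resolution, and channel count below are illustrative placeholders, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear noise schedule over T steps.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)  # cumulative product \bar{alpha}_t

# Stand-in latent: 3 feature planes, 32x32 resolution, 8 channels.
x0 = rng.standard_normal((3, 32, 32, 8))

def noisy_latent(x0, t, eps):
    """Sample x_t ~ q(x_t | x_0): scaled signal plus scaled Gaussian noise."""
    a = alpha_bar[t]
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps

eps = rng.standard_normal(x0.shape)
xt = noisy_latent(x0, 500, eps)
print(xt.shape)  # same shape as x0
```

The denoising network is then trained to predict eps from xt; restricting its receptive field, as the abstract describes, keeps it from memorizing the single training shape.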
Recommended from our members
Near-Field Nanoimaging of Phases and Carrier Dynamics in Vanadium Dioxide Nanobeams.
The stable coexistence of insulating and metallic phases in strained vanadium dioxide (VO2) has garnered significant research interest due to the intriguing phase transition phenomena. However, the temporal behavior of charge carriers in different phases of VO2 remains elusive. Herein, we employ near-field optical nanoscopy to capture nanoscale alternating phase domains in bent VO2 nanobeams. By conducting transient measurements across the different phases, we observed a prolonged carrier recombination lifetime in the metallic phase of VO2, accompanied by an accelerated diffusion process. Our findings reveal nanoscale carrier dynamics in VO2 nanobeams, offering insights that can facilitate further investigations into phase-change materials and their potential applications in sensing and microelectromechanical devices.
ReconFusion: 3D Reconstruction with Diffusion Priors
3D reconstruction methods such as Neural Radiance Fields (NeRFs) excel at
rendering photorealistic novel views of complex scenes. However, recovering a
high-quality NeRF typically requires tens to hundreds of input images,
resulting in a time-consuming capture process. We present ReconFusion to
reconstruct real-world scenes using only a few photos. Our approach leverages a
diffusion prior for novel view synthesis, trained on synthetic and multiview
datasets, which regularizes a NeRF-based 3D reconstruction pipeline at novel
camera poses beyond those captured by the set of input images. Our method
synthesizes realistic geometry and texture in underconstrained regions while
preserving the appearance of observed regions. We perform an extensive
evaluation across various real-world datasets, including forward-facing and
360-degree scenes, demonstrating significant performance improvements over
previous few-view NeRF reconstruction approaches.
Comment: Project page: https://reconfusion.github.io
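The idea of regularizing a NeRF with a diffusion prior can be summarized as a two-term objective: fit the captured photos at observed poses, and pull renders at novel poses toward images the prior favors. A toy sketch of that structure (the function name, the squared-error form of the prior term, and the weight `w` are all illustrative assumptions, not ReconFusion's actual formulation):

```python
import numpy as np

def total_loss(rendered_obs, gt_obs, rendered_novel, prior_target, w=0.1):
    """Toy two-term objective in the spirit of diffusion-regularized reconstruction.

    rendered_obs / gt_obs: renders and photos at the captured viewpoints.
    rendered_novel: renders at unobserved poses; prior_target stands in for
    the image a diffusion prior would favor at those poses.
    """
    recon = np.mean((rendered_obs - gt_obs) ** 2)          # fit observed views
    prior = np.mean((rendered_novel - prior_target) ** 2)  # regularize novel views
    return recon + w * prior

rng = np.random.default_rng(0)
obs_render, obs_gt = rng.random((4, 8, 8, 3)), rng.random((4, 8, 8, 3))
novel_render, novel_prior = rng.random((2, 8, 8, 3)), rng.random((2, 8, 8, 3))
loss = total_loss(obs_render, obs_gt, novel_render, novel_prior)
print(float(loss))
```

The second term is what lets the method hallucinate plausible geometry and texture in regions the few input photos never constrain, while the first term preserves the observed appearance.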
Recommended from our members
Transient Nanoscopy of Exciton Dynamics in 2D Transition Metal Dichalcogenides
The electronic and optical properties of 2D transition metal dichalcogenides are dominated by strong excitonic resonances. Exciton dynamics plays a critical role in the functionality and performance of many miniaturized 2D optoelectronic devices; however, the measurement of nanoscale excitonic behaviors remains challenging. Here, a near-field transient nanoscopy is reported to probe exciton dynamics beyond the diffraction limit. Exciton recombination and exciton-exciton annihilation processes in monolayer and bilayer MoS2 are studied as the proof-of-concept demonstration. Moreover, with the capability to access local sites, intriguing exciton dynamics near the monolayer-bilayer interface and at the MoS2 nano-wrinkles are resolved. Such nanoscale resolution highlights the potential of this transient nanoscopy for fundamental investigation of exciton physics and further optimization of functional devices.
Photothermally Activated Artificial Neuromorphic Synapses
Biological nervous systems rely on the coordination of billions of neurons with complex, dynamic connectivity to process information and form memories. In turn, artificial intelligence and neuromorphic computing platforms have sought to mimic biological cognition through software-based neural networks and hardware demonstrations utilizing memristive circuitry with fixed dynamics. To incorporate the advantages of tunable dynamic software implementations of neural networks into hardware, we develop a proof-of-concept artificial synapse with adaptable resistivity. This synapse leverages the photothermally induced local phase transition of VO2 thin films by temporally modulated laser pulses. Such a process quickly modifies the conductivity of the film site-selectively by a factor of 500 to "activate" these neurons and store "memory" by applying varying bias voltages to induce self-sustained Joule heating between electrodes after activation with a laser. These synapses are demonstrated to undergo a complete heating and cooling cycle in less than 120 ns.