
    Classifying, quantifying, and witnessing qudit-qumode hybrid entanglement

    Recently, several hybrid approaches to quantum information have emerged which simultaneously utilize both continuous- and discrete-variable methods and resources. In this work, we investigate the bipartite hybrid entanglement between a finite-dimensional, discrete-variable quantum system and an infinite-dimensional, continuous-variable quantum system. A classification scheme is presented, leading to a distinction between pure hybrid entangled states, mixed hybrid entangled states (those effectively supported by an overall finite-dimensional Hilbert space), and so-called truly hybrid entangled states (those which cannot be described in an overall finite-dimensional Hilbert space). Examples of states in each regime are given, and entanglement witnessing as well as quantification are discussed. In particular, using the channel map of a thermal photon noise channel, we find that true hybrid entanglement naturally occurs in physically important settings. Finally, extensions from bipartite to multipartite hybrid entanglement are considered.
    Comment: 15 pages, 10 figures, final published version in Physical Review
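
    As a concrete illustration of a pure qubit-qumode entangled state (a standard textbook example, not drawn from the paper itself), consider a two-level system correlated with two coherent states of a single mode:

```latex
% Pure hybrid entangled state: qubit (discrete) x coherent states (continuous).
% The qubit states are orthonormal, so the normalization is exactly 1/sqrt(2),
% while the mode states are non-orthogonal with overlap <alpha|-alpha>.
\[
  |\Psi\rangle \;=\; \frac{1}{\sqrt{2}}
  \bigl( |0\rangle \otimes |\alpha\rangle \;+\; |1\rangle \otimes |{-\alpha}\rangle \bigr),
  \qquad
  \langle \alpha | {-\alpha} \rangle = e^{-2|\alpha|^{2}} .
\]
```

    For small |alpha| the two coherent states nearly coincide and the entanglement vanishes; for large |alpha| they become effectively orthogonal and the state approaches a maximally entangled pair embedded in the infinite-dimensional mode.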

    Advantages and challenges in coupling an ideal gas to atomistic models in adaptive resolution simulations

    In adaptive resolution simulations, molecular fluids are modeled employing different levels of resolution in different subregions of the system. When traveling from one region to the other, particles change their resolution on the fly. One of the main advantages of such approaches is the computational efficiency gained in the coarse-grained region. In this respect, the ideal gas would be the best coarse-grained system to employ in the low-resolution region, since it makes intermolecular force calculations in the coarse-grained subdomain redundant. In this case, however, a smooth coupling is challenging due to the high energetic imbalance between typical liquids and a system of non-interacting particles. In the present work, we investigate this approach, using as a test case the most biologically relevant fluid, water. We demonstrate that a successful coupling of water to the ideal gas can be achieved with current adaptive resolution methods, and we discuss the issues that remain to be addressed.
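
    To make the coupling idea concrete, here is a minimal sketch (with assumed region widths and function names, not the authors' code) of the standard adaptive-resolution force interpolation: each pair force is scaled by a smooth resolution function, and because the coarse-grained model is an ideal gas, its pair-force contribution is identically zero.

```python
import numpy as np

def weight(x, x_at=12.0, d_hy=6.0):
    """Resolution function w(x): 1 in the atomistic zone, 0 in the
    coarse-grained (ideal gas) zone, and a smooth cos^2 ramp in the
    hybrid region. The region widths here are illustrative."""
    s = (abs(x) - x_at) / d_hy
    if s <= 0.0:
        return 1.0   # fully atomistic
    if s >= 1.0:
        return 0.0   # fully coarse-grained: ideal gas, no pair forces
    return np.cos(0.5 * np.pi * s) ** 2

def pair_force(xi, xj, f_atomistic):
    """Interpolated pair force between particles at positions xi, xj.
    With an ideal gas as the low-resolution model, the coarse-grained
    term (1 - w_ij) * F_cg vanishes, so only the atomistic force,
    scaled by w(xi) * w(xj), survives."""
    w_ij = weight(xi) * weight(xj)
    return w_ij * f_atomistic(xi, xj)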
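```

    The difficulty the abstract refers to then appears as a large free-energy imbalance across the hybrid region, which in practice must be compensated by an external thermodynamic force; that correction is omitted from this sketch.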

    From Classical to Quantum and Back: Hamiltonian Adaptive Resolution Path Integral, Ring Polymer, and Centroid Molecular Dynamics

    Path integral-based simulation methodologies play a crucial role in the investigation of nuclear quantum effects by means of computer simulations. However, these techniques are significantly more demanding than corresponding classical simulations. To reduce this numerical effort, we recently proposed a method, based on a rigorous Hamiltonian formulation, which restricts the quantum modeling to a small but relevant spatial region within a larger reservoir where particles are treated classically. In this work, we extend this idea and show how it can be implemented along with state-of-the-art path integral simulation techniques, such as ring polymer and centroid molecular dynamics, which allow the approximate calculation of both quantum statistical and quantum dynamical properties. To this end, we derive a new integration algorithm which also makes use of multiple time-stepping. The scheme is validated via adaptive classical/path-integral simulations of liquid water. Potential applications of the proposed multiresolution method are diverse and include efficient quantum simulations of interfaces as well as complex biomolecular systems such as membranes and proteins.
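
    For background (standard path-integral formalism, not a result of this paper), ring polymer molecular dynamics evolves P classical replicas ("beads") of each particle, coupled into a ring by harmonic springs:

```latex
% Ring-polymer Hamiltonian for one particle of mass m at inverse temperature beta;
% q_{P+1} = q_1 closes the ring. Quantum statistics are recovered as P grows.
\[
  H_P(\mathbf{p}, \mathbf{q})
  = \sum_{j=1}^{P} \left[
      \frac{p_j^2}{2m}
      + \frac{1}{2}\, m\, \omega_P^2 \,(q_j - q_{j+1})^2
      + V(q_j)
    \right],
  \qquad
  \omega_P = \frac{P}{\beta \hbar}.
\]
```

    The stiff spring frequencies omega_P are a standard motivation for multiple time-stepping in path-integral simulations: the fast intra-ring harmonic forces are integrated with a small time step, while the expensive physical potential V is evaluated less frequently.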

    Latent Space Diffusion Models of Cryo-EM Structures

    Cryo-electron microscopy (cryo-EM) is unique among tools in structural biology in its ability to image large, dynamic protein complexes. Key to this ability are image-processing algorithms for heterogeneous cryo-EM reconstruction, including recent deep learning-based approaches. The state-of-the-art method cryoDRGN uses a Variational Autoencoder (VAE) framework to learn a continuous distribution of protein structures from single-particle cryo-EM imaging data. While cryoDRGN can model complex structural motions, the Gaussian prior distribution of the VAE fails to match the aggregate approximate posterior, which prevents generative sampling of structures, especially for multi-modal distributions (e.g., compositional heterogeneity). Here, we train a diffusion model as an expressive, learnable prior in the cryoDRGN framework. Our approach learns a high-quality generative model over molecular conformations directly from cryo-EM imaging data. We show the ability to sample from the model on two synthetic and two real datasets, where samples accurately follow the data distribution, unlike samples from the VAE prior distribution. We also demonstrate how the diffusion model prior can be leveraged for fast latent space traversal and interpolation between states of interest. By learning an accurate model of the data distribution, our method unlocks tools in generative modeling, sampling, and distribution analysis for heterogeneous cryo-EM ensembles.
    Comment: Machine Learning for Structural Biology Workshop, NeurIPS 2022 (Oral)
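
    The core idea of replacing the fixed Gaussian prior with a learned diffusion prior can be sketched as a small DDPM trained on precomputed VAE latent codes. Everything below (latent dimension, network, noise schedule) is an illustrative assumption, not the cryoDRGN implementation:

```python
import torch
import torch.nn as nn

T = 1000                                   # diffusion steps
betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule (assumed)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

# Tiny epsilon-prediction network over 8-dimensional latents plus a time feature.
denoiser = nn.Sequential(
    nn.Linear(8 + 1, 256), nn.SiLU(),
    nn.Linear(256, 256), nn.SiLU(),
    nn.Linear(256, 8),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

def train_step(z):
    """One DDPM training step on a batch of VAE latent codes z: (B, 8)."""
    t = torch.randint(0, T, (z.shape[0],))
    eps = torch.randn_like(z)
    a = alphas_bar[t].unsqueeze(1)
    z_t = a.sqrt() * z + (1.0 - a).sqrt() * eps        # forward noising of latents
    t_feat = (t.float() / T).unsqueeze(1)              # crude scalar time embedding
    loss = ((denoiser(torch.cat([z_t, t_feat], dim=1)) - eps) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

    Sampling then runs the learned reverse process to draw latents that follow the aggregate posterior, which the existing cryoDRGN decoder turns into structures.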

    Differentially Private Diffusion Models

    While modern machine learning models rely on increasingly large training datasets, data is often limited in privacy-sensitive domains. Generative models trained with differential privacy (DP) on sensitive data can sidestep this challenge, providing access to synthetic data instead. We build on the recent success of diffusion models (DMs) and introduce Differentially Private Diffusion Models (DPDMs), which enforce privacy using differentially private stochastic gradient descent (DP-SGD). We investigate the DM parameterization and the sampling algorithm, which turn out to be crucial ingredients in DPDMs, and propose noise multiplicity, a powerful modification of DP-SGD tailored to the training of DMs. We validate our novel DPDMs on image generation benchmarks and achieve state-of-the-art performance in all experiments. Moreover, on standard benchmarks, classifiers trained on DPDM-generated synthetic data perform on par with task-specific DP-SGD-trained classifiers, which has not been demonstrated before for DP generative models. Project page and code: https://nv-tlabs.github.io/DPDM
    Comment: Accepted at TMLR (https://openreview.net/forum?id=ZPpQk7FJXF)
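
    A rough sketch of the noise-multiplicity idea as the abstract describes it: the diffusion loss for each example is averaged over K independent noise draws before DP-SGD clips and noises the per-example gradient, reducing gradient variance without changing the privacy accounting, since each example still contributes a single clipped gradient. All names and shapes below are illustrative assumptions, not the paper's code:

```python
import torch

def noise_multiplicity_loss(denoiser, z, alphas_bar, K=8, T=1000):
    """Diffusion loss for ONE example z, averaged over K noise samples.
    The average happens inside the per-example loss, so DP-SGD still sees
    a single gradient per example to clip."""
    losses = []
    for _ in range(K):
        t = torch.randint(0, T, (1,))
        eps = torch.randn_like(z)
        a = alphas_bar[t]
        z_t = a.sqrt() * z + (1.0 - a).sqrt() * eps
        losses.append(((denoiser(z_t, t) - eps) ** 2).mean())
    return torch.stack(losses).mean()

def dp_sgd_update(param, per_example_grads, C=1.0, sigma=1.0, lr=1e-3):
    """Textbook DP-SGD step: clip each per-example gradient to norm C,
    sum, add Gaussian noise scaled by sigma * C, and descend."""
    clipped = [g * min(1.0, C / (g.norm().item() + 1e-12))
               for g in per_example_grads]
    noisy = torch.stack(clipped).sum(0) + sigma * C * torch.randn_like(clipped[0])
    return param - lr * noisy / len(per_example_grads)
```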

    TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models

    We present TexFusion (Texture Diffusion), a new method to synthesize textures for given 3D geometries using large-scale text-guided image diffusion models. In contrast to recent works that leverage 2D text-to-image diffusion models to distill 3D objects via a slow and fragile optimization process, TexFusion introduces a new 3D-consistent generation technique specifically designed for texture synthesis, which employs regular diffusion model sampling on different 2D rendered views. Specifically, we leverage latent diffusion models, apply the diffusion model's denoiser on a set of 2D renders of the 3D object, and aggregate the different denoising predictions on a shared latent texture map. Final output RGB textures are produced by optimizing an intermediate neural color field on the decodings of 2D renders of the latent texture. We thoroughly validate TexFusion and show that we can efficiently generate diverse, high-quality, and globally coherent textures. We achieve state-of-the-art text-guided texture synthesis performance using only image diffusion models, while avoiding the pitfalls of previous distillation-based methods. The text conditioning offers detailed control, and we do not rely on any ground-truth 3D textures for training. This makes our method versatile and applicable to a broad range of geometry and texture types. We hope that TexFusion will advance AI-based texturing of 3D assets for applications in virtual reality, game design, simulation, and more.
    Comment: Videos and more results at https://research.nvidia.com/labs/toronto-ai/texfusion
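
    The aggregation step described above (denoise per view, merge on a shared latent texture) can be sketched as a gather/denoise/scatter loop. The view interface, shapes, and averaging rule below are assumptions for illustration, not the actual TexFusion implementation:

```python
import torch

def denoise_on_texture(latent_texture, views, denoiser, t):
    """One diffusion step on a shared latent texture map.
    latent_texture: (C, num_texels); each view provides texel_idx, a LongTensor
    of the texels visible in that view's 2D render (hypothetical interface)."""
    accum = torch.zeros_like(latent_texture)
    count = torch.zeros(latent_texture.shape[-1])
    for view in views:
        z_view = latent_texture[:, view.texel_idx]   # gather: texels seen by this view
        z_pred = denoiser(z_view, t)                 # ordinary 2D LDM denoiser
        accum[:, view.texel_idx] += z_pred           # scatter prediction back to texels
        count[view.texel_idx] += 1.0
    seen = count > 0
    out = latent_texture.clone()
    out[:, seen] = accum[:, seen] / count[seen]      # average overlapping views
    return out
```

    Because every view's prediction is written back into the same texture map before the next diffusion step, the views stay mutually consistent without any per-asset optimization loop.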

    Align your Latents: High-Resolution Video Synthesis with Latent Diffusion Models

    Latent Diffusion Models (LDMs) enable high-quality image synthesis while avoiding excessive compute demands by training a diffusion model in a compressed, lower-dimensional latent space. Here, we apply the LDM paradigm to high-resolution video generation, a particularly resource-intensive task. We first pre-train an LDM on images only; then, we turn the image generator into a video generator by introducing a temporal dimension to the latent space diffusion model and fine-tuning on encoded image sequences, i.e., videos. Similarly, we temporally align diffusion model upsamplers, turning them into temporally consistent video super-resolution models. We focus on two relevant real-world applications: simulation of in-the-wild driving data and creative content creation with text-to-video modeling. In particular, we validate our Video LDM on real driving videos of resolution 512 x 1024, achieving state-of-the-art performance. Furthermore, our approach can easily leverage off-the-shelf pre-trained image LDMs, as we only need to train a temporal alignment model in that case. Doing so, we turn the publicly available, state-of-the-art text-to-image LDM Stable Diffusion into an efficient and expressive text-to-video model with resolution up to 1280 x 2048. We show that the temporal layers trained in this way generalize to different fine-tuned text-to-image LDMs. Utilizing this property, we show the first results for personalized text-to-video generation, opening exciting directions for future content creation.
    Comment: Conference on Computer Vision and Pattern Recognition (CVPR) 2023. Project page: https://research.nvidia.com/labs/toronto-ai/VideoLDM/
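
    The image-to-video step can be sketched as inserting zero-initialized temporal layers between the frozen spatial layers of a per-frame network, so the model starts out exactly equal to the image model and only the new layers are trained. The block below is an illustrative assumption, not the Video LDM architecture:

```python
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    """Residual temporal mixing layer inserted after a frozen spatial block.
    Zero-initialized, so at the start of fine-tuning it is the identity and
    the network reproduces the pre-trained image LDM frame by frame."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        nn.init.zeros_(self.conv.weight)
        nn.init.zeros_(self.conv.bias)

    def forward(self, x, num_frames):
        # x: (batch * frames, C, H, W); mix along the frame axis per pixel.
        bf, c, h, w = x.shape
        b = bf // num_frames
        y = x.view(b, num_frames, c, h * w).permute(0, 3, 2, 1)   # (B, HW, C, F)
        y = self.conv(y.reshape(b * h * w, c, num_frames))        # temporal conv
        y = y.reshape(b, h * w, c, num_frames).permute(0, 3, 2, 1).reshape(bf, c, h, w)
        return x + y   # residual connection keeps the image weights intact
```

    Training only such temporal layers is also what makes them transferable: since the spatial backbone is untouched, the same temporal weights can be reused with differently fine-tuned text-to-image LDMs, as the abstract's personalization results exploit.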