Inhibition of Subsets of G Protein-coupled Receptors by Empty Mutants of G Protein α Subunits in Go, G11, and G16
We previously reported that the xanthine nucleotide-binding Goα mutant, GoαX, inhibited the activation of Gi-coupled receptors. We constructed similar mutations in G11α and G16α and characterized their nucleotide binding and receptor interaction. First, we found that G11αX and G16αX expressed in COS-7 cells bound xanthine 5'-O-(thiotriphosphate) instead of guanosine 5'-O-(thiotriphosphate). Second, we found that G11αX and G16αX interacted with βγ subunits in the presence of xanthine diphosphate. These experiments demonstrated that G11αX and G16αX were xanthine nucleotide-binding proteins, similar to GoαX. Third, in COS-7 cells, both G11αX and G16αX inhibited the activation of Gq-coupled receptors, whereas only G16αX inhibited the activation of Gi-coupled receptors. Therefore, when in the nucleotide-free state, empty G11αX and G16αX appeared to retain the same receptor-binding specificity as their wild-type counterparts. Finally, we found that GoαX, G11αX, and G16αX all inhibited the endogenous thrombin receptors and lysophosphatidic acid receptors in NIH3T3 cells, whereas G11αX and G16αX, but not GoαX, inhibited the activation of the transfected m1 muscarinic receptor in these cells. We conclude that these empty G protein mutants of Goα, G11α, and G16α can act as dominant negative inhibitors against specific subsets of G protein-coupled receptors.
BOOT: Data-free Distillation of Denoising Diffusion Models with Bootstrapping
Diffusion models have demonstrated excellent potential for generating diverse
images. However, their performance often suffers from slow generation due to
iterative denoising. Knowledge distillation has been recently proposed as a
remedy that can reduce the number of inference steps to one or a few without
significant quality degradation. However, existing distillation methods either
require significant amounts of offline computation for generating synthetic
training data from the teacher model or need to perform expensive online
learning with the help of real data. In this work, we present a novel technique
called BOOT, which overcomes these limitations with an efficient data-free
distillation algorithm. The core idea is to learn a time-conditioned model that
predicts the output of a pre-trained diffusion model teacher given any time
step. Such a model can be efficiently trained based on bootstrapping from two
consecutive sampled steps. Furthermore, our method can be easily adapted to
large-scale text-to-image diffusion models, which are challenging for
conventional methods given the fact that the training sets are often large and
difficult to access. We demonstrate the effectiveness of our approach on
several benchmark datasets in the DDIM setting, achieving comparable generation
quality while being orders of magnitude faster than the diffusion teacher. The
text-to-image results show that the proposed approach is able to handle highly
complex distributions, shedding light on more efficient generative modeling.
Comment: In progress
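As a rough illustration of the bootstrapping idea described above, the sketch below trains a student to predict the teacher's DDIM output directly from noise at any time step, using the student's own output at the adjacent step, advanced by one teacher step, as the regression target. All names here (student, teacher, ddim_step, tensor shapes) are assumptions for illustration, not the paper's actual code.

    import torch
    import torch.nn.functional as F

    def boot_distill_step(student, teacher, ddim_step, batch_size, T=1000, device="cuda"):
        # Draw pure noise and a random time step t (t-1 is the adjacent, less-noisy step).
        eps = torch.randn(batch_size, 3, 64, 64, device=device)
        t = torch.randint(2, T, (batch_size,), device=device)

        with torch.no_grad():
            # Student's current estimate of the teacher's sample at step t ...
            x_t = student(eps, t)
            # ... advanced by ONE teacher DDIM step toward t-1: the bootstrap target.
            target = ddim_step(teacher, x_t, t, t - 1)

        # Train the student at step t-1 to match that target; no real or synthetic
        # training images are ever needed, which is what makes this data-free.
        pred = student(eps, t - 1)
        return F.mse_loss(pred, target)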
Learning Controllable 3D Diffusion Models from Single-view Images
Diffusion models have recently become the de-facto approach for generative
modeling in the 2D domain. However, extending diffusion models to 3D is
challenging due to the difficulties in acquiring 3D ground truth data for
training. On the other hand, 3D GANs that integrate implicit 3D representations
into GANs have shown remarkable 3D-aware generation when trained only on
single-view image datasets. However, 3D GANs do not provide straightforward
ways to precisely control image synthesis. To address these challenges, we
present Control3Diff, a 3D diffusion model that combines the strengths of
diffusion models and 3D GANs for versatile, controllable 3D-aware image
synthesis for single-view datasets. Control3Diff explicitly models the
underlying latent distribution (optionally conditioned on external inputs),
thus enabling direct control during the diffusion process. Moreover, our
approach is general and applicable to any type of controlling input, allowing
us to train it with the same diffusion objective without any auxiliary
supervision. We validate the efficacy of Control3Diff on standard image
generation benchmarks, including FFHQ, AFHQ, and ShapeNet, using various
conditioning inputs such as images, sketches, and text prompts. Please see the
project website (https://jiataogu.me/control3diff) for video comparisons.
Comment: work in progress
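A minimal sketch of the idea as stated in the abstract: instead of diffusing in pixel space, a denoiser is trained on the latent distribution of a pretrained 3D GAN, optionally conditioned on an external input. Every name below (encoder, q_sample, denoise) is hypothetical and only illustrates the shape of the training objective.

    import torch
    import torch.nn.functional as F

    def control3diff_step(diffusion, encoder, images, T=1000):
        with torch.no_grad():
            w = encoder(images)        # latent the pretrained 3D GAN renders from
        cond = images                  # controlling input (image, sketch, text embedding)

        # Standard denoising objective, applied to GAN latents rather than pixels,
        # so any conditioning signal can reuse the same loss without extra supervision.
        t = torch.randint(0, T, (w.shape[0],), device=w.device)
        noise = torch.randn_like(w)
        w_t = diffusion.q_sample(w, t, noise)      # forward-noise the latent
        pred = diffusion.denoise(w_t, t, cond)     # conditional denoiser
        return F.mse_loss(pred, noise)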
Neural Sparse Voxel Fields
Photo-realistic free-viewpoint rendering of real-world scenes using classical
computer graphics techniques is challenging, because it requires the difficult
step of capturing detailed appearance and geometry models. Recent studies have
demonstrated promising results by learning scene representations that
implicitly encode both geometry and appearance without 3D supervision. However,
existing approaches in practice often show blurry renderings caused by the
limited network capacity or the difficulty in finding accurate intersections of
camera rays with the scene geometry. Synthesizing high-resolution imagery from
these representations often requires time-consuming optical ray marching. In
this work, we introduce Neural Sparse Voxel Fields (NSVF), a new neural scene
representation for fast and high-quality free-viewpoint rendering. NSVF defines
a set of voxel-bounded implicit fields organized in a sparse voxel octree to
model local properties in each cell. We progressively learn the underlying
voxel structures with a differentiable ray-marching operation from only a set
of posed RGB images. With the sparse voxel octree structure, rendering novel
views can be accelerated by skipping the voxels containing no relevant scene
content. Our method is typically over 10 times faster than the state-of-the-art
(namely, NeRF (Mildenhall et al., 2020)) at inference time while achieving
higher quality results. Furthermore, by utilizing an explicit sparse voxel
representation, our method can easily be applied to scene editing and scene
composition. We also demonstrate several challenging tasks, including
multi-scene learning, free-viewpoint rendering of a moving human, and
large-scale scene rendering. Code and data are available at our website:
https://github.com/facebookresearch/NSVF.
Comment: 20 pages, in progress
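The speedup claimed above comes from marching only through occupied cells. Below is a toy version of that voxel-skipping loop with a hypothetical octree interface; the real implementation lives in the linked repository.

    import numpy as np

    def render_ray(origin, direction, octree, field_fn, step=0.01):
        color, transmittance = np.zeros(3), 1.0
        # The sparse octree returns only the [t_near, t_far] spans where the ray
        # crosses occupied voxels, so empty space is never sampled at all.
        for t_near, t_far in octree.intersect(origin, direction):
            t = t_near
            while t < t_far and transmittance > 1e-3:  # early stop when opaque
                sigma, rgb = field_fn(origin + t * direction)  # voxel-local field
                alpha = 1.0 - np.exp(-sigma * step)
                color += transmittance * alpha * np.asarray(rgb)
                transmittance *= 1.0 - alpha
                t += step
        return color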
Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction
3D-aware image synthesis encompasses a variety of tasks, such as scene
generation and novel view synthesis from images. Despite numerous task-specific
methods, developing a comprehensive model remains challenging. In this paper,
we present SSDNeRF, a unified approach that employs an expressive diffusion
model to learn a generalizable prior of neural radiance fields (NeRF) from
multi-view images of diverse objects. Previous studies have used two-stage
approaches that rely on pretrained NeRFs as real data to train diffusion
models. In contrast, we propose a new single-stage training paradigm with an
end-to-end objective that jointly optimizes a NeRF auto-decoder and a latent
diffusion model, enabling simultaneous 3D reconstruction and prior learning,
even from sparsely available views. At test time, we can directly sample the
diffusion prior for unconditional generation, or combine it with arbitrary
observations of unseen objects for NeRF reconstruction. SSDNeRF demonstrates
robust results comparable to or better than leading task-specific methods in
unconditional generation and single/sparse-view 3D reconstruction.
Comment: Project page: https://lakonik.github.io/ssdner
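The single-stage objective described above can be pictured as one loss with two terms, optimized jointly over the per-scene latent codes, the NeRF auto-decoder, and the diffusion prior. The sketch below is an assumption-laden paraphrase (q_sample, denoise, render are illustrative names), not the authors' code.

    import torch
    import torch.nn.functional as F

    def ssdnerf_step(codes, decoder, diffusion, rays, target_rgb, lam=1.0):
        # Rendering term: the auto-decoded NeRF must reproduce the observed views.
        loss_render = F.mse_loss(decoder.render(codes, rays), target_rgb)

        # Prior term: the same latent codes must be likely under the diffusion model.
        t = torch.randint(0, diffusion.T, (codes.shape[0],), device=codes.device)
        noise = torch.randn_like(codes)
        codes_t = diffusion.q_sample(codes, t, noise)
        loss_prior = F.mse_loss(diffusion.denoise(codes_t, t), noise)

        # Optimizing both terms end-to-end is what makes training single-stage:
        # no pretrained per-scene NeRFs are needed as pseudo ground truth.
        return loss_render + lam * loss_prior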
Evaluation of a Novel Biphasic Culture Medium for Recovery of Mycobacteria: A Multi-Center Study
… on L-J slants. Automated liquid culture systems are expensive. A low-cost culture medium capable of rapidly indicating the presence of mycobacteria is needed. The aim of this study was to develop and evaluate a novel biphasic culture medium for the recovery of mycobacteria from clinical sputum specimens from suspected pulmonary tuberculosis patients.
Quantitative Trait Locus Analysis of Plasma Lipoprotein Levels in an Autoimmune Mouse Model : Interactions Between Lipoprotein Metabolism, Autoimmune Disease, and Atherogenesis.
The autoimmune MRL/lpr mouse strain, a model for systemic lupus erythematosus, exhibited an unusual plasma lipoprotein profile, suggesting a possible interaction of autoimmune disease and lipoprotein metabolism. In an effort to examine the genetic basis of such interactions, and to study their relationship to atherogenesis, we performed a quantitative trait locus analysis using a total of 272 (MRL/lpr × BALB/cJ) second-generation (F2) intercross mice. These mice were examined for levels of total plasma cholesterol, HDL cholesterol, VLDL and LDL cholesterol, unesterified cholesterol, autoantibodies, and aortic fatty streak lesions. Using a genome scan approach, we identified 4 quantitative trait loci controlling plasma lipoprotein levels on chromosomes (Chrs) 5, 8, 15, and 19. The locus on Chr 15 exhibited lod scores of 11.1 for total cholesterol and 6.7 for VLDL and LDL cholesterol in mice fed an atherogenic diet, and it contains a candidate gene, the sterol regulatory element binding protein-2. The locus on Chr 5 exhibited lod scores of 3.8 for total cholesterol and 4.1 for unesterified cholesterol in mice fed an atherogenic diet, and this locus has been observed in 2 previous studies. The locus on Chr 8 exhibited a lod score of 3.1 for unesterified cholesterol in mice fed a chow diet. This locus contains the lecithin-cholesterol acyltransferase gene, and decreased activity of the enzyme in the MRL strain suggests that this gene underlies the quantitative trait locus. The locus on Chr 19 exhibited a lod score of 8.4 for HDL cholesterol and includes the Fas gene, which is mutated in MRL/lpr mice and is primarily responsible for the autoimmune phenotype in this cross. That the Fas gene is responsible for the HDL quantitative trait locus is supported by the finding that autoantibody levels were strongly correlated with HDL cholesterol levels (rho = -0.37, …).
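For context, the lod ("logarithm of odds") score reported throughout this abstract is the base-10 logarithm of a likelihood ratio comparing linkage at a locus against no linkage; a score of roughly 3 or more is the conventional evidence threshold, so the values of 8.4 and 11.1 above represent very strong linkage:

    $$\mathrm{lod} = \log_{10}\frac{L(\text{data}\mid\text{linkage at the locus})}{L(\text{data}\mid\text{no linkage})}$$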