Transcriptional analysis of pqqD and study of the regulation of pyrroloquinoline quinone biosynthesis in Methylobacterium extorquens AM1
Methanol dehydrogenase, the enzyme that oxidizes methanol to formaldehyde in Gram-negative methylotrophs, contains the prosthetic group pyrroloquinoline quinone (PQQ). To begin to analyze how the synthesis of PQQ is coordinated with the production of other methanol dehydrogenase components, the transcription of one of the key PQQ synthesis genes has been studied. This gene (pqqD) encodes a 29-amino-acid peptide that is thought to be the precursor for PQQ biosynthesis. A unique transcription start site was mapped to a guanine nucleotide 95 bp upstream of the pqqD initiator codon. RNA blot analysis identified two transcripts: a major one of 240 bases encoding pqqD and a minor one of 1,300 bases encoding pqqD and the gene immediately downstream, pqqG. Both transcripts are present at similar levels in cells grown on methanol and on succinate, but PQQ levels are about fivefold higher in methanol-grown cells than in succinate-grown cells. These results suggest that PQQ production is regulated at a level other than pqqD transcription. The genes mxbM, mxbD, mxcQ, mxcE, and mxaB, which are required for transcription of the genes encoding the methanol dehydrogenase subunits, were assessed for their role in PQQ production. PQQ levels were measured in mutants defective in each of these regulatory genes and compared with levels of pqqD transcription, measured with a transcriptional fusion between the pqqD promoter and xylE. The results showed that only a subset of these regulatory genes (mxbM, mxbD, and mxaB) is required for transcription of pqqD, and that only mxbM and mxbD mutants significantly affected the final PQQ levels.
Light Field Blind Motion Deblurring
We study the problem of deblurring light fields of general 3D scenes captured under 3D camera motion and present both theoretical and practical contributions. By analyzing the motion-blurred light field in the primal and Fourier domains, we develop intuition into the effects of camera motion on the light field, show the advantages of capturing a 4D light field instead of a conventional 2D image for motion deblurring, and derive simple methods of motion deblurring in certain cases. We then present an algorithm to blindly deblur light fields of general scenes without any estimation of scene geometry, and demonstrate that we can recover both the sharp light field and the 3D camera motion path of real and synthetically blurred light fields.
Comment: To be presented at CVPR 2017
NeRFs: The Search for the Best 3D Representation
Neural Radiance Fields, or NeRFs, have become the representation of choice for problems in view synthesis or image-based rendering, as well as in many other applications across computer graphics and vision, and beyond. At their core, NeRFs describe a new representation of 3D scenes or 3D geometry. Instead of meshes, disparity maps, multiplane images, or even voxel grids, they represent the scene as a continuous volume, with volumetric parameters like view-dependent radiance and volume density obtained by querying a neural network. The NeRF representation has now been widely used, with thousands of papers extending or building on it every year, multiple authors and websites providing overviews and surveys, and numerous industrial applications and startup companies. In this article, we briefly review the NeRF representation and describe the three-decade-long quest to find the best 3D representation for view synthesis and related problems, culminating in the NeRF papers. We then describe new developments in terms of NeRF representations and make some observations and insights regarding the future of 3D representations.
Comment: Updated based on feedback in person and via e-mail at SIGGRAPH 2023. In particular, I have added references and discussion of seminal SIGGRAPH image-based rendering papers, and better put the recent Kerbl et al. work in context, with more references.
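Concretely, rendering a pixel from this continuous-volume representation reduces to a standard quadrature along the camera ray: sample the volume, query the network for density and view-dependent radiance at each sample, and alpha-composite front to back. A minimal NumPy sketch of that compositing step (the `field` callable is a placeholder standing in for the trained network, and the near/far bounds are assumed values):

```python
# NeRF-style volume rendering along one camera ray.
# `field(points, view_dir)` stands in for the neural network and returns
# per-sample density sigma (n,) and view-dependent RGB radiance (n, 3).
import numpy as np

def render_ray(field, origin, direction, near=2.0, far=6.0, n_samples=64):
    t = np.linspace(near, far, n_samples)                 # depths along the ray
    points = origin + t[:, None] * direction              # (n_samples, 3)
    sigma, rgb = field(points, direction)                 # query the "network"
    delta = np.append(np.diff(t), 1e10)                   # inter-sample spacing
    alpha = 1.0 - np.exp(-sigma * delta)                  # per-sample opacity
    trans = np.cumprod(np.append(1.0, 1.0 - alpha[:-1]))  # transmittance T_i
    weights = trans * alpha                               # compositing weights
    return (weights[:, None] * rgb).sum(axis=0)           # final pixel color
```

Because every step is differentiable, the network can be trained directly from posed photographs by minimizing the error between rendered and observed pixel colors.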
Image to Image Translation for Domain Adaptation
We propose a general framework for unsupervised domain adaptation, which allows deep neural networks trained on a source domain to be tested on a different target domain without requiring any training annotations in the target domain. This is achieved by adding extra networks and losses that help regularize the features extracted by the backbone encoder network. To this end, we propose the novel use of the recently proposed unpaired image-to-image translation framework to constrain the features extracted by the encoder network. Specifically, we require that the features extracted are able to reconstruct the images in both domains. In addition, we require that the distributions of features extracted from images in the two domains be indistinguishable. Many recent works can be seen as specific cases of our general framework. We apply our method for domain adaptation between the MNIST, USPS, and SVHN datasets, and the Amazon, Webcam, and DSLR Office datasets in classification tasks, and also between the GTA5 and Cityscapes datasets for a segmentation task. We demonstrate state-of-the-art performance on each of these datasets.
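The two constraints translate directly into loss terms: a reconstruction loss that forces encoder features to retain image content in both domains, and an adversarial loss that pushes the two feature distributions toward being indistinguishable. A hypothetical PyTorch sketch of those terms (`encoder`, `decoder`, and `discriminator` are assumed placeholder modules, not the authors' released code):

```python
# Sketch of the two feature constraints described in the abstract:
# (1) features must reconstruct images from both domains;
# (2) a discriminator must fail to tell source features from target features.
import torch
import torch.nn.functional as F

def adaptation_losses(encoder, decoder, discriminator, x_src, x_tgt):
    f_src, f_tgt = encoder(x_src), encoder(x_tgt)

    # (1) Reconstruction: features keep enough content to rebuild the images.
    recon = F.mse_loss(decoder(f_src), x_src) + F.mse_loss(decoder(f_tgt), x_tgt)

    # (2) Adversarial alignment: discriminator learns to separate domains
    # (d_loss); the encoder is trained to fool it on target features (align).
    d_src, d_tgt = discriminator(f_src), discriminator(f_tgt)
    d_loss = (F.binary_cross_entropy_with_logits(d_src, torch.ones_like(d_src))
              + F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt)))
    align = F.binary_cross_entropy_with_logits(d_tgt, torch.ones_like(d_tgt))

    return recon, align, d_loss  # combined with the supervised task loss
```

In a full training loop, `d_loss` would update only the discriminator while `recon` and `align` update the encoder/decoder, alongside the supervised classification or segmentation loss on the labeled source domain.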