Testing strong line metallicity diagnostics at z~2
High-z galaxy gas-phase metallicities are usually determined through
observations of strong optical emission lines, with calibrations tied to the
local universe. Recent debate has questioned whether these calibrations remain
valid in the high-z universe. We investigate this by analysing a sample of 16
galaxies at z~2 available in the literature, for which the metallicity can be
robustly determined using oxygen auroral lines. The sample spans a redshift
range of 1.4 < z < 3.6, metallicities of 7.4-8.4 in 12+log(O/H) and stellar
masses of 10^7.5-10^11 Msun. We test commonly used strong-line diagnostics
(R23, O3, O2, O32, N2, O3N2 and Ne3O2) as prescribed by four different sets of
empirical calibrations, as well as one fully theoretical calibration. We find
that none of the strong-line diagnostics (or calibration sets) tested performs
consistently better than the others. Amongst the line ratios tested, R23 and O3
deliver the best results, with accuracies as good as 0.01-0.04 dex and
dispersions of ~0.2 dex in two of the calibrations tested. Generally, line
ratios involving nitrogen predict higher values of metallicity, while results
with O32 and Ne3O2 show large dispersions. The theoretical calibration yields
an accuracy of 0.06 dex, comparable to the best strong-line methods. We
conclude that, within the metallicity range tested in this work, the locally
calibrated diagnostics can still be reliably applied at z~2.
Comment: 12 pages, 8 figures; accepted for publication in MNRAS
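As a concrete illustration of the diagnostics named in this abstract, the sketch below computes the strong-line ratios from a set of emission-line fluxes. The ratio definitions follow common usage in the literature (e.g., R23 combining [O II] and [O III] against Hβ); the flux values and variable names are purely illustrative assumptions, not taken from the sample discussed above.

```python
import math

# Illustrative (made-up) dereddened line fluxes, normalised to Hbeta = 1.
fluxes = {
    "OII_3727": 2.1,     # [O II] 3726,3729 doublet
    "NeIII_3869": 0.35,  # [Ne III] 3869
    "Hbeta": 1.0,        # Hbeta 4861
    "OIII_4959": 1.2,    # [O III] 4959
    "OIII_5007": 3.6,    # [O III] 5007
    "Halpha": 2.86,      # Halpha 6563 (Case B ratio assumed)
    "NII_6584": 0.25,    # [N II] 6584
}

def strong_line_ratios(f):
    """Common strong-line metallicity diagnostics, in log10 form."""
    return {
        "R23":   math.log10((f["OII_3727"] + f["OIII_4959"] + f["OIII_5007"]) / f["Hbeta"]),
        "O3":    math.log10(f["OIII_5007"] / f["Hbeta"]),
        "O2":    math.log10(f["OII_3727"] / f["Hbeta"]),
        "O32":   math.log10(f["OIII_5007"] / f["OII_3727"]),
        "N2":    math.log10(f["NII_6584"] / f["Halpha"]),
        "O3N2":  math.log10((f["OIII_5007"] / f["Hbeta"]) / (f["NII_6584"] / f["Halpha"])),
        "Ne3O2": math.log10(f["NeIII_3869"] / f["OII_3727"]),
    }

ratios = strong_line_ratios(fluxes)
```

Each ratio would then be fed into an empirical or theoretical calibration to yield a metallicity in 12+log(O/H); the calibrations themselves differ between the sets compared in the paper.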
An Extensive Network of Information Flow through the B1b/c Intersubunit Bridge of the Yeast Ribosome
Yeast ribosomal proteins L11 and S18 form a dynamic intersubunit interaction called the B1b/c bridge. Recent high-resolution images of the ribosome have enabled targeting of specific residues in this bridge to address how distantly separated regions within the large and small subunits of the ribosome communicate with each other. Mutations were generated in the L11 side of the B1b/c bridge with a particular focus on disrupting the opposing charge motifs that have previously been proposed to be involved in subunit ratcheting. Mutants had wide-ranging effects on cellular viability and translational fidelity, with the most pronounced phenotypes corresponding to amino acid changes resulting in alterations of local charge properties. Chemical protection studies of selected mutants revealed rRNA structural changes in both the large and small subunits. In the large subunit rRNA, structural changes mapped to helices 39, 80, 82, 83, 84, and the peptidyltransferase center. In the small subunit rRNA, structural changes were identified in helices 30 and 42, located between S18 and the decoding center. The rRNA structural changes correlated with charge-specific alterations to the L11 side of the B1b/c bridge. These analyses underscore the importance of the opposing charge mechanism in mediating B1b/c bridge interactions and suggest an extensive network of information exchange between distinct regions of the large and small subunits.
XNect: Real-time Multi-person 3D Human Pose Estimation with a Single RGB Camera
We present a real-time approach for multi-person 3D motion capture at over 30 fps using a single RGB camera. It operates in generic scenes and is robust to difficult occlusions both by other people and objects. Our method proceeds in successive stages. The first stage is a convolutional neural network (CNN) that estimates 2D and 3D pose features along with identity assignments for all visible joints of all individuals. We contribute a new architecture for this CNN, called SelecSLS Net, that uses novel selective long- and short-range skip connections to improve the information flow, allowing for a drastically faster network without compromising accuracy. In the second stage, a fully connected neural network turns the possibly partial (on account of occlusion) 2D and 3D pose features for each subject into a complete 3D pose estimate per individual. The third stage applies space-time skeletal model fitting to the predicted 2D and 3D pose per subject to further reconcile the 2D and 3D pose and enforce temporal coherence. Our method returns the full skeletal pose in joint angles for each subject. This is a further key distinction from previous work, which neither extracted global body positions nor produced joint-angle results for a coherent skeleton in real time for multi-person scenes. The proposed system runs on consumer hardware at a previously unseen speed of more than 30 fps given 512x320 images as input while achieving state-of-the-art accuracy, which we demonstrate on a range of challenging real-world scenes.
Human Detection and Segmentation via Multi-View Consensus
Self-supervised detection and segmentation of foreground objects aims for accuracy without annotated training data. However, existing approaches predominantly rely on restrictive assumptions on appearance and motion. For scenes with dynamic activities and camera motion, we propose a multi-camera framework in which geometric constraints are embedded in the form of multi-view consistency during training, via coarse 3D localization in a voxel grid and fine-grained offset regression. In this manner, we learn a joint distribution of proposals over multiple views. At inference time, our method operates on single RGB images. We outperform state-of-the-art techniques both on images that visually depart from those of standard benchmarks and on those of the classical Human3.6M dataset.
Mo2Cap2: Real-time Mobile 3D Motion Capture with a Cap-mounted Fisheye Camera
We propose the first real-time approach for the egocentric estimation of 3D human body pose in a wide range of unconstrained everyday activities. This setting has a unique set of challenges, such as mobility of the hardware setup and robustness to long capture sessions with fast recovery from tracking failures. We tackle these challenges based on a novel lightweight setup that converts a standard baseball cap into a device for high-quality pose estimation based on a single cap-mounted fisheye camera. From the captured egocentric live stream, our CNN-based 3D pose estimation approach runs at 60 Hz on a consumer-level GPU. In addition to the novel hardware setup, our other main contributions are: 1) a large ground-truth training corpus of top-down fisheye images and 2) a novel disentangled 3D pose estimation approach that takes the unique properties of the egocentric viewpoint into account. As shown by our evaluation, we achieve lower 3D joint error as well as better 2D overlay than the existing baselines.
BodyNet: Volumetric Inference of 3D Human Body Shapes
Human shape estimation is an important task for video editing, animation and
fashion industry. Predicting 3D human body shape from natural images, however,
is highly challenging due to factors such as variation in human bodies,
clothing and viewpoint. Prior methods addressing this problem typically attempt
to fit parametric body models with certain priors on pose and shape. In this
work we argue for an alternative representation and propose BodyNet, a neural
network for direct inference of volumetric body shape from a single image.
BodyNet is an end-to-end trainable network that benefits from (i) a volumetric
3D loss, (ii) a multi-view re-projection loss, and (iii) intermediate
supervision of 2D pose, 2D body part segmentation, and 3D pose. Each of them
results in performance improvement as demonstrated by our experiments. To
evaluate the method, we fit the SMPL model to our network output and show
state-of-the-art results on the SURREAL and Unite the People datasets,
outperforming recent approaches. Besides achieving state-of-the-art
performance, our method also enables volumetric body-part segmentation.
Comment: Appears in: European Conference on Computer Vision 2018 (ECCV 2018). 27 pages
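The multi-term training objective described in this abstract can be sketched as a weighted sum of per-task losses combined into one scalar for end-to-end training. The loss values, weights, and term names below are illustrative assumptions for exposition, not the actual values or weighting used by BodyNet.

```python
def total_loss(losses, weights):
    """Weighted multi-task objective: a volumetric 3D loss, a multi-view
    re-projection loss, and intermediate 2D pose / 2D body-part segmentation /
    3D pose supervision terms, summed into a single scalar."""
    return sum(weights[name] * value for name, value in losses.items())

# Illustrative (made-up) per-term loss values for one training batch.
losses = {
    "voxel_3d": 0.8,      # volumetric 3D loss on the predicted occupancy grid
    "reprojection": 0.3,  # multi-view re-projection loss
    "pose_2d": 0.5,       # intermediate 2D pose supervision
    "segmentation": 0.4,  # intermediate 2D body-part segmentation supervision
    "pose_3d": 0.6,       # intermediate 3D pose supervision
}
weights = {name: 1.0 for name in losses}  # uniform weights, an assumption

objective = total_loss(losses, weights)
```

Because the combined objective is a plain weighted sum, each intermediate supervision term contributes gradient signal to the shared backbone, which is what lets each term improve performance independently, as the experiments in the paper report.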
Massive, Absorption-selected Galaxies at Intermediate Redshifts
The nature of absorption-selected galaxies and their connection to the
general galaxy population have been open issues for more than three decades,
with little information available on their gas properties. Here we show, using
detections of carbon monoxide (CO) emission with the Atacama Large
Millimeter/submillimeter Array (ALMA), that five of seven high-metallicity,
absorption-selected galaxies at intermediate redshifts have large molecular
gas masses and high molecular gas fractions. Their modest star formation
rates (SFRs) then imply long gas depletion timescales. The high-metallicity
absorption-selected galaxies at intermediate redshifts appear distinct from
populations of star-forming galaxies both at the peak of star formation
activity in the Universe and at lower redshifts. Their relatively low SFRs,
despite the large molecular gas reservoirs, may indicate a transition in the
nature of star formation at intermediate redshifts.
Comment: 8 pages, 3 figures; accepted for publication in Astrophysical Journal
Letters. Minor changes to match the version in press in ApJ
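The depletion-timescale argument in this abstract is straightforward to make explicit: t_dep = M_mol / SFR. The numeric values below are illustrative placeholders only, since the actual masses, SFRs, and timescales are elided in this copy of the abstract.

```python
def depletion_timescale_gyr(m_mol_msun, sfr_msun_per_yr):
    """Gas depletion timescale t_dep = M_mol / SFR, returned in Gyr.

    m_mol_msun      -- molecular gas mass in solar masses
    sfr_msun_per_yr -- star formation rate in solar masses per year
    """
    return m_mol_msun / sfr_msun_per_yr / 1e9  # yr -> Gyr

# Illustrative values only: a galaxy holding 5e10 Msun of molecular gas
# while forming stars at 5 Msun/yr would exhaust its reservoir in 10 Gyr.
t_dep = depletion_timescale_gyr(5e10, 5.0)
```

A large reservoir divided by a modest SFR gives a long timescale, which is the sense in which these galaxies look unlike typical star-forming populations at either higher or lower redshift.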
EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras (Extended Abstract)
Marker-based and marker-less optical skeletal motion-capture methods use an outside-in arrangement of cameras placed around a scene, with viewpoints converging on the center. They often cause discomfort through the marker suits they may require, and their recording volume is severely restricted, often constrained to indoor scenes with controlled backgrounds. We therefore propose a new method for real-time, marker-less and egocentric motion capture which estimates the full-body skeleton pose from a lightweight stereo pair of fisheye cameras attached to a helmet or virtual-reality headset. It combines the strength of a new generative pose estimation framework for fisheye views with a ConvNet-based body-part detector trained on a new automatically annotated and augmented dataset. Our inside-in method captures full-body motion in general indoor and outdoor scenes, and also in crowded scenes.