
    Intense Star-formation and Feedback at High Redshift: Spatially-resolved Properties of the z=2.6 Submillimeter Galaxy SMMJ14011+0252

    We present a detailed analysis of the spatially-resolved properties of the lensed submillimeter galaxy SMMJ14011+0252 at z=2.56, combining deep near-infrared integral-field data obtained with SPIFFI on the VLT with other multi-wavelength data sets. The broad characteristics of SMMJ14011+0252 are in agreement with what is expected for the early evolution of local massive spheroidal galaxies. From continuum and line flux, velocity, and dispersion maps, we measure the kinematics, star-formation rates, gas densities, and extinction for individual subcomponents. The star-formation intensity is similar to that of low-redshift "maximal starbursts", while the line fluxes and the dynamics of the emission-line gas provide direct evidence for a starburst-driven wind with physical properties very similar to local superwinds. We also find circumstantial evidence for "self-regulated" star formation within J1. The relative velocity of the bluer companion J2 yields a dynamical mass estimate for J1 within about 20 kpc, M_dyn \sim 1\times 10^{11} M_sun. The relative metallicity of J2 is 0.4 dex lower than in J1n/s, suggesting different star-formation histories. SED fitting of the continuum peak J1c confirms and substantiates previous suggestions that this component is a z=0.25 interloper. When J1c is removed, the stellar continuum and H-alpha line emission appear well aligned spatially in two individual components, J1n and J1s, and coincide with two kinematically distinct regions in the velocity map, which might well indicate a merging system. This highlights the close similarity between SMGs and ULIRGs, which are often merger-driven maximal starbursts, and suggests that the intrinsic mechanisms of star formation and related feedback are similar to those in strongly star-forming systems at low redshift.
    Comment: Some of the figures changed from b/w to colo
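    As a rough consistency check of the dynamical mass quoted in this abstract (the relative velocity of J2 is not given here, so v \sim 150 km/s is assumed purely for illustration), the usual circular-orbit scaling gives

        M_dyn \simeq v^2 R / G
              \simeq (150 km s^{-1})^2 (20 kpc) / (4.30\times10^{-6} kpc km^2 s^{-2} M_sun^{-1})
              \approx 1\times10^{11} M_sun,

    consistent with the value reported for J1 within about 20 kpc.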

    Disentangling Baryons and Dark Matter in the Spiral Gravitational Lens B1933+503

    Measuring the relative mass contributions of luminous and dark matter in spiral galaxies is important for understanding their formation and evolution. The combination of a galaxy rotation curve and strong lensing is a powerful way to break the disk-halo degeneracy that is inherent in each of the methods individually. We present an analysis of the 10-image radio spiral lens B1933+503 at z_l=0.755, incorporating (1) new global VLBI observations, (2) new adaptive-optics-assisted K-band imaging, and (3) new spectroscopic observations for the lens-galaxy rotation curve and the source redshift. We construct a three-dimensional, axisymmetric mass distribution with three components: an exponential profile for the disk, a point mass for the bulge, and an NFW profile for the halo. The mass model is simultaneously fitted to the kinematic and lensing data. The NFW halo needs to be oblate, with a flattening of a/c=0.33^{+0.07}_{-0.05}, to be consistent with the radio data. This suggests that baryons are effective at making halos oblate near the center. The lensing and kinematics analyses probe the inner ~10 kpc of the galaxy, and we obtain a lower limit on the halo scale radius of 16 kpc (95% CI). The dark-matter mass fraction inside a sphere with a radius of 2.2 disk scale lengths is f_{DM,2.2}=0.43^{+0.10}_{-0.09}. The contribution of the disk to the total circular velocity at 2.2 disk scale lengths is 0.76^{+0.05}_{-0.06}, suggesting that the disk is marginally submaximal. The stellar mass of the disk from our modeling is log_{10}(M_{*}/M_{sun}) = 11.06^{+0.09}_{-0.11}, assuming that cold gas contributes ~20% to the total disk mass. In comparison to stellar masses estimated from stellar population synthesis models, a Chabrier stellar initial mass function is preferred over a Salpeter one by a probability factor of 7.2.
    Comment: 16 pages, 13 figures, minor revisions based on referee's comments, accepted for publication in Ap
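    For context on the "marginally submaximal" statement above: the model circular velocity decomposes into the three mass components, and disk maximality is conventionally judged at 2.2 disk scale lengths (the ~0.85 threshold below is the commonly adopted maximal-disk convention, not a number taken from this paper):

        v_c^2(R) = v_disk^2(R) + v_bulge^2(R) + v_halo^2(R),
        disk "maximal" if v_disk(2.2 R_d) / v_c(2.2 R_d) \gtrsim 0.85.

    The measured ratio of 0.76^{+0.05}_{-0.06} sits just below that conventional threshold, hence "marginally submaximal".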

    Recent Progress in Image Deblurring

    This paper comprehensively reviews recent developments in image deblurring, covering non-blind/blind and spatially invariant/variant techniques. These techniques share the objective of inferring a latent sharp image from one or several blurry observations, while blind deblurring techniques must additionally estimate an accurate blur kernel. Because image restoration plays a critical role in modern imaging systems, which must deliver high-quality images under complex conditions such as motion, poor lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how they handle the ill-posedness that is a crucial issue in deblurring, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite considerable progress, image deblurring, especially in the blind case, remains limited by complex application conditions that make the blur kernel hard to estimate and often spatially variant. This review aims to provide a holistic understanding of and deep insight into image deblurring, together with an analysis of the empirical evidence for representative methods, a discussion of practical issues, and an outline of promising future directions.
    Comment: 53 pages, 17 figures
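    Since the review organizes methods by how they handle ill-posedness, a minimal non-blind example may help fix ideas. The sketch below implements classical Wiener deconvolution in Python, assuming a known, spatially invariant blur kernel; the constant noise-to-signal term is the regularizer that tames frequencies where the kernel response is nearly zero. All function and parameter names are illustrative and are not taken from the paper.

        import numpy as np

        def wiener_deblur(blurred, kernel, noise_to_signal=1e-2):
            """Non-blind Wiener deconvolution with a known, spatially invariant kernel."""
            # Pad the kernel to the image size and shift its centre to the origin so
            # that Fourier-domain multiplication corresponds to circular convolution.
            psf = np.zeros_like(blurred, dtype=float)
            kh, kw = kernel.shape
            psf[:kh, :kw] = kernel
            psf = np.roll(psf, (-(kh // 2), -(kw // 2)), axis=(0, 1))

            H = np.fft.fft2(psf)
            G = np.fft.fft2(blurred)
            # Wiener filter: the noise_to_signal constant regularizes the inversion
            # where |H| is small, which is exactly where the ill-posedness shows up.
            F_hat = np.conj(H) * G / (np.abs(H) ** 2 + noise_to_signal)
            return np.real(np.fft.ifft2(F_hat))

        # Toy usage: blur a synthetic image with a 9x9 box kernel (circular
        # convolution, matching the filter's assumptions), then restore it.
        rng = np.random.default_rng(0)
        sharp = rng.random((128, 128))
        kernel = np.full((9, 9), 1.0 / 81.0)
        psf = np.zeros_like(sharp)
        psf[:9, :9] = kernel
        psf = np.roll(psf, (-4, -4), axis=(0, 1))
        blurry = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf)))
        restored = wiener_deblur(blurry, kernel)

    Blind methods, by contrast, would have to estimate the kernel itself from the blurry input, which is where most of the categories surveyed in the review differ.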

    3D Shape Segmentation with Projective Convolutional Networks

    This paper introduces a deep architecture for segmenting 3D objects into their labeled semantic parts. Our architecture combines image-based Fully Convolutional Networks (FCNs) and surface-based Conditional Random Fields (CRFs) to yield coherent segmentations of 3D shapes. The image-based FCNs are used for efficient view-based reasoning about 3D object parts. Through a special projection layer, FCN outputs are effectively aggregated across multiple views and scales and then projected onto the 3D object surfaces. Finally, a surface-based CRF combines the projected outputs with geometric consistency cues to yield coherent segmentations. The whole architecture (multi-view FCNs and CRF) is trained end-to-end. Our approach significantly outperforms existing state-of-the-art methods on the currently largest segmentation benchmark (ShapeNet). Finally, we demonstrate promising segmentation results on noisy 3D shapes acquired from consumer-grade depth cameras.
    Comment: This is an updated version of our CVPR 2017 paper. We incorporated new experiments that demonstrate ShapePFCN performance in the case of consistent *upright* orientation, with an additional input channel in our rendered images encoding height from the ground plane (upright-axis coordinate values). Performance is improved in this setting.
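    The projection-and-aggregation step described above can be illustrated with a short sketch: per-view label probabilities are gathered at the pixels onto which each surface point projects, occluded points are masked out, and the results are pooled across views. The array shapes, the choice of max pooling, and all names below are illustrative assumptions rather than the paper's actual projection layer.

        import numpy as np

        def aggregate_views(view_probs, pixel_coords, visible):
            """Pool per-view label probabilities onto 3D surface points.

            view_probs   : (V, H, W, L) per-pixel label probabilities for V rendered views
            pixel_coords : (V, P, 2) integer (row, col) where each of P surface points projects
            visible      : (V, P) boolean mask, True where a point is unoccluded in a view
            returns      : (P, L) per-point label probabilities after pooling over views
            """
            V = view_probs.shape[0]
            P = pixel_coords.shape[1]
            L = view_probs.shape[-1]
            point_probs = np.zeros((P, L))
            for v in range(V):
                rows = pixel_coords[v, :, 0]
                cols = pixel_coords[v, :, 1]
                probs_v = view_probs[v, rows, cols, :]          # gather: (P, L)
                probs_v[~visible[v]] = 0.0                      # drop occluded points
                point_probs = np.maximum(point_probs, probs_v)  # max-pool across views
            return point_probs

        # Toy usage: 4 views, 32x32 probability maps, 5 part labels, 100 surface points.
        rng = np.random.default_rng(1)
        probs = rng.random((4, 32, 32, 5))
        coords = rng.integers(0, 32, size=(4, 100, 2))
        vis = rng.random((4, 100)) > 0.3
        per_point = aggregate_views(probs, coords, vis)         # shape (100, 5)

    In the architecture summarized above, per-point outputs of this kind are then refined by the surface-based CRF using geometric consistency cues.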