
    Structure and Color Gradients of Ultra-diffuse Galaxies in Distant Massive Galaxy Clusters

    We have measured structural parameters and radial color profiles of 108 ultra-diffuse galaxies (UDGs), carefully selected from six distant massive galaxy clusters in the Hubble Frontier Fields (HFF) in the redshift range 0.308 to 0.545. Our best-fitting GALFIT models show that the HFF UDGs have a median Sérsic index of 1.09, close to the value of 0.86 for local UDGs in the Coma cluster. The median axis ratio is 0.68 for HFF UDGs and 0.74 for Coma UDGs. The structural similarity between HFF and Coma UDGs suggests that they are the same kind of galaxies seen at different times and that the structures of UDGs do not change for at least several billion years. By checking the distribution of HFF UDGs in the rest-frame UVJ and UVI diagrams, we find that a large fraction of them are star-forming. Furthermore, a majority of HFF UDGs show small U−V color gradients within the 1 R_e,SMA region; the fluctuation of the median radial color profile of HFF UDGs is smaller than 0.1 mag, comparable to that of Coma UDGs. Our results indicate that cluster UDGs may fade or quench in a self-similar way, irrespective of the radial distance, in less than ~4 Gyr.
    Comment: 17 pages, 8 figures, accepted for publication in Ap
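
    The Sérsic index quoted above parameterizes the GALFIT light-profile fit. As a minimal illustration (not the authors' pipeline), the profile can be evaluated as below; the approximation b_n ≈ 2n − 1/3 and all parameter values are illustrative.

```python
import numpy as np

def sersic_profile(r, i_e, r_e, n):
    """Sersic surface-brightness profile I(r).

    i_e: intensity at the effective (half-light) radius r_e
    n:   Sersic index (n ~ 1.09 is the HFF UDG median quoted above)
    Uses the common approximation b_n ~ 2n - 1/3 (valid for n > ~0.36).
    """
    b_n = 2.0 * n - 1.0 / 3.0
    return i_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

# Nearly exponential profile, as for the HFF UDGs
r = np.linspace(0.1, 5.0, 50)                    # radius, e.g. in kpc
mu = sersic_profile(r, i_e=1.0, r_e=1.5, n=1.09)
```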

    ClusterFormer: Clustering As A Universal Visual Learner

    This paper presents CLUSTERFORMER, a universal vision model based on the CLUSTERing paradigm with TransFORMER. It comprises two novel designs: 1. recurrent cross-attention clustering, which reformulates the cross-attention mechanism in Transformers and enables recursive updates of cluster centers to facilitate strong representation learning; and 2. feature dispatching, which uses the updated cluster centers to redistribute image features through similarity-based metrics, resulting in a transparent pipeline. This elegant design streamlines an explainable and transferable workflow capable of tackling heterogeneous vision tasks (i.e., image classification, object detection, and image segmentation) with varying levels of clustering granularity (i.e., image-, box-, and pixel-level). Empirical results demonstrate that CLUSTERFORMER outperforms various well-known specialized architectures, achieving 83.41% top-1 accuracy on ImageNet-1K for image classification, 54.2% and 47.0% mAP on MS COCO for object detection and instance segmentation, 52.4% mIoU on ADE20K for semantic segmentation, and 55.8% PQ on COCO Panoptic for panoptic segmentation. Given its efficacy, we hope our work can catalyze a paradigm shift in universal models in computer vision.
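
    A minimal sketch of what the two designs could look like, assuming center-as-query cross-attention and softmax assignment weights; the dimensions, iteration count, and module structure are illustrative, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def recurrent_cross_attention_clustering(centers, features, n_iters=3):
    """Recursively update cluster centers by cross-attending to features
    (centers act as queries, image features as keys and values)."""
    d = centers.shape[-1]
    for _ in range(n_iters):
        attn = F.softmax(centers @ features.T / d ** 0.5, dim=-1)  # (k, n)
        centers = attn @ features                                  # new centers
    return centers

def feature_dispatching(centers, features):
    """Redistribute image features via similarity to the updated centers."""
    weights = F.softmax(features @ centers.T, dim=-1)  # (n, k) soft assignment
    return weights @ centers                           # features rebuilt from centers

centers = torch.randn(8, 64)     # k = 8 cluster centers, d = 64
features = torch.randn(196, 64)  # e.g. 14 x 14 flattened patch features
centers = recurrent_cross_attention_clustering(centers, features)
dispatched = feature_dispatching(centers, features)    # (196, 64)
```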

    CryptoMask: Privacy-preserving Face Recognition

    Face recognition is a widely used technique for identification or verification, where a verifier checks whether a face image matches anyone stored in a database. However, in scenarios where the database is held by a third party, such as a cloud server, both parties are concerned about data privacy. To address this concern, we propose CryptoMask, a privacy-preserving face recognition system that employs homomorphic encryption (HE) and secure multi-party computation (MPC). We design a new encoding strategy that leverages HE properties to reduce communication costs and enable efficient similarity checks between face images without expensive homomorphic rotation. Additionally, CryptoMask leaks less information than existing state-of-the-art approaches: it only reveals whether there is an image matching the query, whereas existing approaches additionally leak sensitive intermediate distance information. We conduct extensive experiments that demonstrate CryptoMask's superior performance in terms of computation and communication. For a database with 100 million 512-dimensional face vectors, CryptoMask offers ∼5× and ∼144× speed-ups in computation and communication, respectively.
    Comment: 18 pages, 3 figures, accepted by ICICS202
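
    The privacy claim above is about functionality: only the match/no-match bit is revealed. The following plaintext mock (pure NumPy, no HE/MPC) shows that functionality; in CryptoMask the similarities and the comparison would be computed under encryption, and the 0.6 threshold here is an arbitrary placeholder.

```python
import numpy as np

def match_exists(query, database, threshold=0.6):
    """Plaintext mock of the matching functionality. In CryptoMask the
    similarities and the threshold comparison run under HE/MPC, so the
    per-image similarities below would never be visible to either party;
    only the final boolean is revealed."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    sims = db @ q                             # cosine similarities (kept secret)
    return bool(np.any(sims >= threshold))    # the only bit that leaks

db = np.random.randn(1000, 512)               # 512-dim face embeddings
q = db[42] + 0.01 * np.random.randn(512)      # noisy copy of entry 42
print(match_exists(q, db))                    # True, and nothing more
```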

    Revisiting Galaxy Evolution in Morphology in the COSMOS field (COSMOS-ReGEM): I. Merging Galaxies

    We revisit the evolution of galaxy morphology in the COSMOS field over the redshift range 0.2 ≤ z ≤ 1, using a large and complete sample of 33,605 galaxies with stellar mass log(M_*/M_⊙) > 9.5, with significantly improved redshifts and comprehensive non-parametric morphological parameters. Our sample has 13,881 (~41.3%) galaxies with reliable spectroscopic redshifts and more accurate photometric redshifts, with σ_NMAD ~ 0.005. This paper is the first in a series that investigates merging galaxies and their properties. We identify 3,594 major merging galaxies through visual inspection and find 1,737 massive galaxy pairs with log(M_*/M_⊙) > 10.1. Among the family of non-parametric morphological parameters, including C, A, S, Gini, M_20, A_O, and D_O, we find that the outer asymmetry parameter A_O and the second-order momentum parameter M_20 trace merging features better than other combinations. Hence, we propose a criterion for selecting candidates of violently star-forming mergers: M_20 > −3A_O + 3 at 0.2 < z ≤ 0.6 and M_20 > −6A_O + 3.7 at 0.6 < z < 1.0. Furthermore, we show that both the visual merger sample and the pair sample exhibit a similar evolution in the merger rate at z < 1, with ℜ ∼ (1+z)^{1.79±0.13} for the visual merger sample and ℜ ∼ (1+z)^{2.02±0.42} for the pair sample. The visual merger sample has a specific star formation rate that is about 0.16 dex higher than that of non-merger galaxies, whereas no significant star formation excess is observed in the pair sample. This suggests that the effects of mergers on star formation differ at different merger stages.
    Comment: 21 pages, 12 figures; accepted for publication in Ap
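
    The selection cut proposed above translates directly into code. A minimal sketch, using the two redshift bins as reconstructed above; the merger-rate slope is the one quoted for the visual merger sample, and the normalization r0 is a placeholder.

```python
import numpy as np

def is_merger_candidate(m20, a_o, z):
    """Cut for violently star-forming merger candidates (M20 vs. outer
    asymmetry A_O), as proposed in the abstract above."""
    if 0.2 < z <= 0.6:
        return m20 > -3.0 * a_o + 3.0
    if 0.6 < z < 1.0:
        return m20 > -6.0 * a_o + 3.7
    return False  # outside the calibrated redshift range

def merger_rate(z, r0=1.0, m=1.79):
    """Merger-rate evolution R ~ (1 + z)^m; m = 1.79 +/- 0.13 is the fit
    quoted for the visual merger sample, r0 a placeholder normalization."""
    return r0 * (1.0 + z) ** m
```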

    HybPSF: Hybrid PSF reconstruction for the observed JWST NIRCam image

    The James Webb Space Telescope (JWST) ushers in a new era of astronomical observation and discovery, offering unprecedented precision in a variety of measurements such as photometry, astrometry, morphology, and shear measurement. Accurate point spread function (PSF) models are crucial for many of these measurements. In this paper, we introduce a hybrid PSF construction method called HybPSF for JWST NIRCam imaging data. HybPSF combines the WebbPSF software, which simulates the PSF for JWST, with observed data to produce more accurate and reliable PSF models. We apply this method to the SMACS J0723 imaging data and construct supplementary structures from the residuals obtained by subtracting the WebbPSF PSF model from the data. Our results show that HybPSF significantly reduces discrepancies between the PSF model and the data compared to WebbPSF. Specifically, comparisons of the PSF shape parameters indicate that HybPSF improves precision by a factor of approximately 10 for the size R² and by 50% for the ellipticity e. This improvement has important implications for astronomical measurements using JWST NIRCam imaging data.
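
    The abstract describes building supplementary structures from WebbPSF residuals. A schematic sketch of that idea (not the HybPSF code): subtract the simulated PSF from observed star stamps and extract the leading residual modes with an SVD; the stamp shapes and number of components are illustrative.

```python
import numpy as np

def hybrid_psf(star_stamps, webbpsf_model, n_components=5):
    """Schematic residual correction: subtract the simulated WebbPSF model
    from observed star stamps, then keep the mean residual plus the leading
    residual eigen-modes as supplementary structure.

    star_stamps:   (n_stars, h, w) centered, normalized star images
    webbpsf_model: (h, w) simulated PSF from WebbPSF
    """
    resid = (star_stamps - webbpsf_model).reshape(len(star_stamps), -1)
    mean_resid = resid.mean(axis=0)
    _, _, vt = np.linalg.svd(resid - mean_resid, full_matrices=False)
    basis = vt[:n_components]             # residual modes for per-star fits
    psf = webbpsf_model + mean_resid.reshape(webbpsf_model.shape)
    return psf / psf.sum(), basis         # renormalized hybrid PSF + modes
```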

    Divergent Syntheses of 2-Aminonicotinonitriles and Pyrazolines by Copper-Catalyzed Cyclization of Oxime Ester

    Copper-catalyzed cyclization of an oxime ester toward divergent heterocycle synthesis is reported. The oxime ester serves as an enamine precursor that cyclizes with malononitrile and aldehydes to give 2-aminonicotinonitriles in a one-pot reaction, while cyclization with N-sulfonylimines leads to the synthesis of pyrazolines.

    Color profiles of 108 UDGs identified in HFF fields

    Multi-band surface brightness profiles of the 108 UDGs identified in the HFF fields. Details of each diagram are as follows. Panels (1) to (3) show the F814W-band cutout image of the UDG, the best-fitting GALFIT model, and the residual image. The bar at the top right of panel (2) represents 1.5 kpc, assuming the cluster redshift. Panels (4) to (6) show PSF-matched images in the F606W, F814W, and F160W bands. In panels (7) to (9), we mask neighboring sources classified by NoiseChisel and overplot the elliptical annuli used in the surface brightness analysis. Panel (10) presents the three-band surface brightness profiles of each UDG; the x-axis positions of the colored points correspond to the outer radii of the elliptical annuli. In panel (11), we convert the observed color profiles into rest-frame U−V and V−I profiles. The F814W and F160W surface brightness profiles in panel (10) and the rest-frame V−I profiles in panel (11) are shifted slightly to the right for clarity. Finally, the colors of the UDG from inside to outside are shown in the UVI diagram in panel (12). Note that UDGs without F160W observations do not have panels (6), (9), and (12).
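
    A NumPy-only sketch of the elliptical-annulus measurement behind panels (7) to (10); the center, axis ratio, and position angle are assumed inputs (in practice they would come from the GALFIT fit).

```python
import numpy as np

def elliptical_annulus_profile(image, x0, y0, q, theta, radii):
    """Mean flux in elliptical annuli, as in panels (7) to (10).

    q:     axis ratio b/a of the ellipse
    theta: position angle in radians
    radii: increasing outer semi-major axes of the annuli (pixels)
    """
    yy, xx = np.indices(image.shape)
    dx, dy = xx - x0, yy - y0
    # Rotate into the ellipse frame; r_ell is the elliptical radius
    xr = dx * np.cos(theta) + dy * np.sin(theta)
    yr = -dx * np.sin(theta) + dy * np.cos(theta)
    r_ell = np.hypot(xr, yr / q)
    profile, r_in = [], 0.0
    for r_out in radii:                  # one mean value per annulus
        ring = (r_ell >= r_in) & (r_ell < r_out)
        profile.append(image[ring].mean())
        r_in = r_out
    return np.array(profile)
```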

    GL-RG: Global-Local Representation Granularity for Video Captioning

    Video captioning is a challenging task, as it needs to accurately transform visual understanding into natural language description. To date, state-of-the-art methods inadequately model the global-local representation across video frames for caption generation, leaving plenty of room for improvement. In this work, we approach the video captioning task from a new perspective and propose GL-RG, a Global-Local Representation Granularity framework for video captioning. Our GL-RG demonstrates three advantages over prior efforts: 1) we explicitly exploit extensive visual representations from different video ranges to improve linguistic expression; 2) we devise a novel global-local encoder to produce a rich semantic vocabulary and obtain a descriptive granularity of video contents across frames; 3) we develop an incremental training strategy which organizes model learning in an incremental fashion to incur optimal captioning behavior. Experimental results on the challenging MSR-VTT and MSVD datasets show that our GL-RG outperforms recent state-of-the-art methods by a significant margin. Code is available at https://github.com/ylqi/GL-RG.
    Comment: Accepted to IJCAI 202
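
    A toy sketch of the global-local idea: pool frame features over short local windows and over the whole clip, then concatenate both views. The window size, dimensions, and mean pooling are illustrative, not the paper's encoder.

```python
import torch

def global_local_features(frame_feats, window=4):
    """Pool per-frame features over short local windows (local granularity)
    and over the whole clip (global granularity), then concatenate.

    frame_feats: (t, d) per-frame features
    """
    t, d = frame_feats.shape
    local = torch.stack([frame_feats[i:i + window].mean(dim=0)
                         for i in range(0, t, window)])  # (t // window, d)
    global_feat = frame_feats.mean(dim=0, keepdim=True)  # (1, d)
    return torch.cat([local, global_feat.expand(len(local), d)], dim=-1)

feats = torch.randn(16, 512)          # 16 frames of 512-dim features
out = global_local_features(feats)    # (4, 1024): local + global views
```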