31 research outputs found

    Structure and Color Gradients of Ultra-diffuse Galaxies in Distant Massive Galaxy Clusters

    Full text link
    We have measured structural parameters and radial color profiles of 108 ultra-diffuse galaxies (UDGs), carefully selected from six distant massive galaxy clusters in the Hubble Frontier Fields (HFF) in the redshift range 0.308 to 0.545. Our best-fitting GALFIT models show that the HFF UDGs have a median Sérsic index of 1.09, close to the value of 0.86 for local UDGs in the Coma cluster. The median axis ratio is 0.68 for HFF UDGs and 0.74 for Coma UDGs. The structural similarity between HFF and Coma UDGs suggests that they are the same kind of galaxies seen at different times and that the structures of UDGs do not change for at least several billion years. By checking the distribution of HFF UDGs in the rest-frame UVJ and UVI diagrams, we find that a large fraction of them are star-forming. Furthermore, a majority of HFF UDGs show small U−V color gradients within the 1 R_{e,SMA} region; the fluctuation of the median radial color profile of HFF UDGs is smaller than 0.1 mag, comparable to Coma UDGs. Our results indicate that cluster UDGs may fade or quench in a self-similar way, irrespective of the radial distance, in less than ∼4 Gyr. Comment: 17 pages, 8 figures, accepted for publication in Ap
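The median Sérsic index of ∼1 quoted above corresponds to a near-exponential light profile. As a hedged illustration (not the authors' GALFIT pipeline), the one-dimensional Sérsic profile with the common b_n ≈ 2n − 1/3 approximation can be written as:

```python
import numpy as np

def sersic_profile(R, I_e, R_e, n):
    """Sérsic surface-brightness profile I(R) = I_e * exp(-b_n * ((R/R_e)**(1/n) - 1)).

    b_n is approximated by 2n - 1/3, adequate for n >~ 0.36; n = 1 gives an
    exponential disk, n = 4 a de Vaucouleurs profile.
    """
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * np.exp(-b_n * ((R / R_e) ** (1.0 / n) - 1.0))
```

By construction the profile equals the effective surface brightness I_e at R = R_e, which is what makes R_e and n convenient fit parameters in GALFIT-style modeling.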

    <i>HT2ML</i>: An efficient hybrid framework for privacy-preserving Machine Learning using HE and TEE

    Get PDF
    Outsourcing Machine Learning (ML) tasks to cloud servers is a cost-effective solution when dealing with distributed data. However, outsourcing these tasks to cloud servers could lead to data breaches. Secure computing methods, such as Homomorphic Encryption (HE) and Trusted Execution Environments (TEE), have been used to protect outsourced data. Nevertheless, HE remains inefficient in processing complicated functions (e.g., non-linear functions), and TEE (e.g., Intel SGX) is not ideal for directly processing ML tasks due to side-channel attacks and parallel-unfriendly computation. In this paper, we propose a hybrid framework integrating SGX and HE, called HT2ML, to protect users' data and models. In HT2ML, HE-friendly functions are protected with HE and performed outside the enclave, while the remaining operations are performed inside the enclave obliviously. HT2ML leverages optimised HE matrix multiplications to accelerate HE computations outside the enclave while using oblivious blocks inside the enclave to prevent access-pattern-based attacks. We evaluate HT2ML using Linear Regression (LR) training and Convolutional Neural Network (CNN) inference as two instantiations. The performance results show that HT2ML is up to ∼11× faster than an HE-only baseline with 6-dimensional data in LR training. For CNN inference, HT2ML is ∼196× faster than the most recent approach (Xiao et al., ICDCS'21).
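The partitioning HT2ML describes, HE-friendly linear algebra outside the enclave and non-linear steps inside, can be sketched as follows. This is a hedged toy in plain NumPy: no actual HE library or SGX enclave is involved, and the function names are illustrative only.

```python
import numpy as np

def he_friendly_linear(x, W):
    # In the framework described above, this matrix product would run under
    # homomorphic encryption outside the enclave; plain NumPy stands in here.
    return x @ W

def enclave_nonlinear(z):
    # HE-unfriendly non-linear activation: in the real system this would run
    # inside the TEE with oblivious, data-independent memory access.
    return np.maximum(z, 0.0)

x = np.array([[1.0, -2.0, 3.0]])
W = np.eye(3)
out = enclave_nonlinear(he_friendly_linear(x, W))  # one hybrid "layer"
```

The point of the split is that each side handles only the work it is efficient at: ciphertext-friendly linear algebra stays outside, and everything HE cannot express cheaply crosses into the enclave.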

    ClusterFormer: Clustering As A Universal Visual Learner

    Full text link
    This paper presents CLUSTERFORMER, a universal vision model based on the CLUSTERing paradigm with TransFORMERs. It comprises two novel designs: 1. recurrent cross-attention clustering, which reformulates the cross-attention mechanism in the Transformer and enables recursive updates of cluster centers to facilitate strong representation learning; and 2. feature dispatching, which uses the updated cluster centers to redistribute image features through similarity-based metrics, resulting in a transparent pipeline. This design streamlines an explainable and transferable workflow capable of tackling heterogeneous vision tasks (i.e., image classification, object detection, and image segmentation) with varying levels of clustering granularity (i.e., image-, box-, and pixel-level). Empirical results demonstrate that CLUSTERFORMER outperforms various well-known specialized architectures, achieving 83.41% top-1 accuracy on ImageNet-1K for image classification, 54.2% and 47.0% mAP on MS COCO for object detection and instance segmentation, 52.4% mIoU on ADE20K for semantic segmentation, and 55.8% PQ on COCO Panoptic for panoptic segmentation. Given its efficacy, we hope this work can catalyze a paradigm shift toward universal models in computer vision.
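The recurrent cross-attention clustering idea, cluster centers acting as queries that are recursively refined over image features, can be sketched in a few lines. This is a hedged sketch, not the paper's implementation; the shapes and iteration count are arbitrary choices here:

```python
import numpy as np

def softmax(a, axis):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def recurrent_cluster_update(features, centers, iters=3):
    """Cross-attention clustering sketch: centers [K, D] attend over image
    features [N, D] and are recursively replaced by attention-weighted
    feature means, analogous to a soft, learned k-means step."""
    for _ in range(iters):
        attn = softmax(centers @ features.T / np.sqrt(features.shape[1]), axis=1)  # [K, N]
        centers = attn @ features  # each row is a weighted mean of features
    return centers
```

In the full model this update would be interleaved with learned projections inside Transformer blocks; here it only illustrates the recursion over cluster centers.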

    Facing the Elephant in the Room: Visual Prompt Tuning or Full Finetuning?

    Full text link
    As the scale of vision models continues to grow, Visual Prompt Tuning (VPT) has gained attention as a parameter-efficient transfer learning technique, owing to its superior performance compared to traditional full finetuning. However, the conditions favoring VPT (the "when") and the underlying rationale (the "why") remain unclear. In this paper, we conduct a comprehensive analysis across 19 distinct datasets and tasks. To understand the "when" aspect, we identify the scenarios where VPT proves favorable along two dimensions: task objectives and data distributions. We find that VPT is preferable when there is 1) a substantial disparity between the original and the downstream task objectives (e.g., transitioning from classification to counting), or 2) a similarity in data distributions between the two tasks (e.g., both involve natural images). In exploring the "why" dimension, our results indicate that VPT's success cannot be attributed solely to overfitting and optimization considerations. The unique way VPT preserves original features and adds parameters appears to be a pivotal factor. Our study provides insights into VPT's mechanisms and offers guidance for its optimal utilization. Comment: 29 pages, 19 figures
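Mechanically, VPT's core operation is simple: learnable prompt tokens are concatenated with the (frozen) patch embeddings before the Transformer blocks, and only the prompts and task head receive gradients. A minimal sketch of that token concatenation, with illustrative shapes:

```python
import numpy as np

def prepend_prompts(patch_tokens, prompt_tokens):
    """VPT in essence: learnable prompt tokens [P, D] are prepended to the
    frozen patch embeddings [N, D] before the Transformer; during tuning,
    gradients flow only into the prompts (and the task head)."""
    return np.concatenate([prompt_tokens, patch_tokens], axis=0)

# e.g. a ViT-B/16 image: 196 patch tokens of width 768, plus 10 prompts
tokens = prepend_prompts(np.zeros((196, 768)), np.ones((10, 768)))
```

This is why VPT "preserves original features": the pretrained weights and patch embeddings are untouched, and adaptation happens entirely through the extra tokens.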

    Spectral reflectance reconstruction based on wideband multi-illuminant imaging and a modified particle swarm optimization algorithm

    Get PDF
    A method for spectral reflectance factor reconstruction based on wideband multi-illuminant imaging was proposed, using a programmable LED lighting system and modified Bare Bones Particle Swarm Optimization algorithms. From a set of 16 LEDs with different spectral power distributions, nine light sources with correlated color temperatures in the range 1924 K to 15746 K, most of them daylight simulators, were generated. Samples from three color charts (X-Rite ColorChecker Digital SG, SCOCIE ScoColor paint chart, and SCOCIE ScoColor textile chart) were captured by a color industrial camera under the nine light sources and used in sequence as training and/or testing colors. The spectral reconstruction models obtained under multi-illuminant imaging were trained and tested using the canonical Bare Bones Particle Swarm Optimization algorithm and its proposed modifications, along with six additional, commonly used algorithms. The impacts of different illuminants, illuminant combinations, algorithms, and training colors on reconstruction accuracy were studied comprehensively. The results indicated that training colors covering larger regions of color space give more accurate reconstructions of spectral reflectance factors, and that combinations of two illuminants with a large difference in correlated color temperature achieve more than twice the accuracy obtained under a single illuminant. Specifically, the average reconstruction errors of the proposed method for patches from two color charts under the A + D90 light sources were 0.94 and 1.08 CIEDE2000 color-difference units. The results of the experiment also confirmed that some reconstruction algorithms are unsuitable for predicting spectral reflectance factors from multi-illuminant images due to the complexity of the optimization problems and insufficient accuracy. The proposed reconstruction method has many advantages: it is simple to operate, requires no prior knowledge, and is easy to implement in non-contact color measurement and color reproduction devices. Funding: Ministerio de Ciencia e Innovación and Agencia Estatal de Investigación (PID2022-138031NB-I00/SRA/10.13039/501100011033); National Natural Science Foundation of China (61671329, 61775170
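For reference, the canonical Bare Bones PSO that the modified algorithms build on is velocity-free: each particle is resampled from a Gaussian centred midway between its personal best and the global best, with standard deviation equal to their separation. A minimal sketch on a toy quadratic standing in for the spectral-reconstruction error (the paper's modifications are not reproduced here):

```python
import numpy as np

def bbpso(f, dim, n_particles=20, iters=100, seed=0):
    """Canonical Bare Bones PSO (Kennedy, 2003): minimise f over R^dim
    without velocities; positions are Gaussian samples between personal
    and global bests."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
    pbest = pos.copy()
    pval = np.array([f(p) for p in pos])
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        mu = (pbest + gbest) / 2.0          # Gaussian mean per particle
        sigma = np.abs(pbest - gbest)       # spread shrinks as swarm converges
        pos = rng.normal(mu, sigma)
        val = np.array([f(p) for p in pos])
        improved = val < pval
        pbest[improved] = pos[improved]
        pval[improved] = val[improved]
        gbest = pbest[pval.argmin()].copy()
    return gbest, float(pval.min())

# Toy objective: squared error from an "ideal" reflectance vector of ones.
best, err = bbpso(lambda x: float(np.sum((x - 1.0) ** 2)), dim=3)
```

Because the sampling spread collapses as personal bests approach the global best, the canonical algorithm needs no tuning parameters, which is part of its appeal for reflectance-reconstruction fitting.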

    CryptoMask: Privacy-preserving Face Recognition

    Full text link
    Face recognition is a widely used technique for identification or verification, where a verifier checks whether a face image matches anyone stored in a database. However, in scenarios where the database is held by a third party, such as a cloud server, both parties are concerned about data privacy. To address this concern, we propose CryptoMask, a privacy-preserving face recognition system that employs homomorphic encryption (HE) and secure multi-party computation (MPC). We design a new encoding strategy that leverages HE properties to reduce communication costs and enable efficient similarity checks between face images without expensive homomorphic rotations. Additionally, CryptoMask leaks less information than existing state-of-the-art approaches: it reveals only whether there is an image matching the query, whereas existing approaches additionally leak sensitive intermediate distance information. We conduct extensive experiments that demonstrate CryptoMask's superior performance in terms of computation and communication. For a database with 100 million 512-dimensional face vectors, CryptoMask offers ∼5× and ∼144× speed-ups in computation and communication, respectively. Comment: 18 pages, 3 figures, accepted by ICICS202
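The leakage profile described above, a single match/no-match bit rather than intermediate distances, can be illustrated on plaintext vectors. This is a hedged sketch only: cosine similarity on NumPy arrays stands in for the actual HE/MPC protocol, and the threshold is arbitrary:

```python
import numpy as np

def any_match(query, database, threshold=0.8):
    """What a CryptoMask-style system reveals in the clear: one bit saying
    whether any database face matches the query. Here plaintext cosine
    similarity stands in for the encrypted similarity check."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    # The per-row similarities would stay hidden inside the protocol;
    # only the aggregated boolean is disclosed.
    return bool((db @ q >= threshold).any())
```

The design point is that the per-image similarity scores, which could be used to reconstruct information about enrolled faces, never leave the secure computation.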

    Revisiting Galaxy Evolution in Morphology in the COSMOS field (COSMOS-ReGEM): I. Merging Galaxies

    Full text link
    We revisit the evolution of galaxy morphology in the COSMOS field over the redshift range 0.2 ≤ z ≤ 1, using a large and complete sample of 33,605 galaxies with stellar masses of log(M∗/M⊙) > 9.5, with significantly improved redshifts and comprehensive non-parametric morphological parameters. Our sample has 13,881 (∼41.3%) galaxies with reliable spectroscopic redshifts, and its photometric redshifts are more accurate, with σ_NMAD ∼ 0.005. This paper is the first in a series that investigates merging galaxies and their properties. We identify 3,594 major merging galaxies through visual inspection and find 1,737 massive galaxy pairs with log(M∗/M⊙) > 10.1. Among the family of non-parametric morphological parameters, including C, A, S, Gini, M_20, A_O, and D_O, we find that the outer asymmetry parameter A_O and the second-order moment parameter M_20 trace merging features better than other combinations. Hence, we propose a criterion for selecting candidates of violently star-forming mergers: M_20 > −3A_O + 3 at 0.2 < z ≤ 0.6 and M_20 > −6A_O + 3.7 at 0.6 < z < 1.0. Furthermore, we show that both the visual merger sample and the pair sample exhibit a similar evolution in the merger rate at z < 1, with ℜ ∼ (1+z)^{1.79±0.13} for the visual merger sample and ℜ ∼ (1+z)^{2.02±0.42} for the pair sample. The visual merger sample has a specific star formation rate that is about 0.16 dex higher than that of non-merger galaxies, whereas no significant star formation excess is observed in the pair sample. This suggests that the effects of mergers on star formation differ at different merger stages. Comment: 21 pages, 12 figures; accepted for publication in Ap
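The proposed merger cut is straightforward to apply once A_O and M_20 have been measured. A small sketch of the redshift-dependent criterion (the placement of the bin boundary at z = 0.6 is our reading of the abstract):

```python
def is_merger_candidate(m20, a_outer, z):
    """Merger-candidate cut from the abstract: M_20 > -3*A_O + 3 for
    0.2 < z <= 0.6, and M_20 > -6*A_O + 3.7 for 0.6 < z < 1.0."""
    if 0.2 < z <= 0.6:
        return m20 > -3.0 * a_outer + 3.0
    if 0.6 < z < 1.0:
        return m20 > -6.0 * a_outer + 3.7
    raise ValueError("criterion defined only for 0.2 < z < 1.0")
```

The steeper slope in the higher-redshift bin compensates for the weaker observed asymmetry signal of merging features at greater distances.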

    HybPSF: Hybrid PSF reconstruction for the observed JWST NIRCam image

    Full text link
    The James Webb Space Telescope (JWST) ushers in a new era of astronomical observation and discovery, offering unprecedented precision in a variety of measurements such as photometry, astrometry, morphology, and shear measurement. Accurate point spread function (PSF) models are crucial for many of these measurements. In this paper, we introduce a hybrid PSF construction method called HybPSF for JWST NIRCam imaging data. HybPSF combines the WebbPSF software, which simulates the PSF for JWST, with observed data to produce more accurate and reliable PSF models. We apply this method to the SMACS J0723 imaging data and construct supplementary structures from the residuals obtained by subtracting the WebbPSF model from the data. Our results show that HybPSF significantly reduces discrepancies between the PSF model and the data compared to WebbPSF. Specifically, comparisons of the PSF shape parameters show that HybPSF improves precision by a factor of approximately 10 for the size R² and by 50% for the ellipticity e. This improvement has important implications for astronomical measurements using JWST NIRCam imaging data.
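The core idea, adding data-driven supplementary structure to the simulated PSF, can be caricatured in a few lines. A hedged sketch only: the real HybPSF builds its correction from principal components of many star residuals rather than the plain residual mean used here:

```python
import numpy as np

def hybrid_psf(webbpsf_model, star_stamps):
    """Toy hybrid PSF: average the residuals between observed star stamps
    [S, H, W] and the simulated model [H, W], add that structure back to
    the model, and renormalise to unit flux."""
    residuals = star_stamps - webbpsf_model[None, :, :]
    correction = residuals.mean(axis=0)  # stands in for a PCA reconstruction
    psf = webbpsf_model + correction
    return psf / psf.sum()
```

Whatever structure the simulation misses (e.g. optical-state drift not captured by WebbPSF) survives in the residuals, so the corrected model tracks the data more closely than the simulation alone.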

    ProMotion: Prototypes As Motion Learners

    Full text link
    In this work, we introduce ProMotion, a unified prototypical framework engineered to model fundamental motion tasks. ProMotion offers a range of compelling attributes that set it apart from current task-specific paradigms. We adopt a prototypical perspective, establishing a unified paradigm that harmonizes disparate motion learning approaches. This paradigm streamlines the architectural design, enabling the simultaneous assimilation of diverse motion information. We capitalize on a dual mechanism involving a feature denoiser and a prototypical learner to decipher the intricacies of motion. This approach effectively circumvents the pitfalls of ambiguity in pixel-wise feature matching, significantly bolstering the robustness of motion representation. We demonstrate a profound degree of transferability across distinct motion patterns, and this versatility carries over to a comprehensive spectrum of both 2D and 3D downstream tasks. Empirical results demonstrate that ProMotion outperforms various well-known specialized architectures, achieving 0.54 and 0.054 Abs Rel error on the Sintel and KITTI depth datasets, 1.04 and 2.01 average endpoint error on the clean and final passes of the Sintel flow benchmark, and 4.30 F1-all error on the KITTI flow benchmark. Given its efficacy, we hope our work can catalyze a paradigm shift toward universal models in computer vision. Comment: 11 pages
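For context, the average endpoint error (AEPE) quoted for the Sintel and KITTI flow benchmarks is the mean Euclidean distance between predicted and ground-truth flow vectors:

```python
import numpy as np

def average_endpoint_error(flow_pred, flow_gt):
    """AEPE: mean Euclidean distance between predicted and ground-truth
    optical-flow vectors; both arrays have shape [H, W, 2] (dx, dy)."""
    return float(np.sqrt(((flow_pred - flow_gt) ** 2).sum(axis=-1)).mean())
```

A constant offset of (3, 4) pixels at every location, for example, yields an AEPE of exactly 5.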