3,840 research outputs found

    Fast Color Quantization Using Weighted Sort-Means Clustering

    Color quantization is an important operation with numerous applications in graphics and image processing. Most quantization methods are essentially based on data clustering algorithms. However, despite its popularity as a general-purpose clustering algorithm, k-means has not received much attention in the color quantization literature because of its high computational requirements and sensitivity to initialization. In this paper, a fast color quantization method based on k-means is presented. The method involves several modifications to the conventional (batch) k-means algorithm, including data reduction, sample weighting, and the use of the triangle inequality to speed up the nearest-neighbor search. Experiments on a diverse set of images demonstrate that, with the proposed modifications, k-means becomes very competitive with state-of-the-art color quantization methods in terms of both effectiveness and efficiency.
    Comment: 30 pages, 2 figures, 4 tables
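    The three modifications named in the abstract compose naturally. Below is a minimal Python sketch, not the paper's implementation: the image is first reduced to its unique colors with pixel counts (data reduction plus sample weighting), and the assignment step skips a candidate center whenever the triangle inequality guarantees it cannot beat the current one. All names and the NumPy structure are illustrative assumptions.

```python
import numpy as np

def weighted_kmeans(pixels, k, n_iter=20, seed=0):
    """Sketch of sample-weighted k-means with triangle-inequality pruning.

    pixels: (H*W, 3) array of RGB values.
    """
    # Data reduction: collapse the image to unique colors and their counts,
    # so each distinct color is clustered once but weighted by frequency.
    points, weights = np.unique(pixels, axis=0, return_counts=True)
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)].astype(float)
    assign = np.zeros(len(points), dtype=int)

    for _ in range(n_iter):
        # Pairwise center-to-center distances, used for pruning below.
        cc = np.linalg.norm(centers[:, None] - centers[None], axis=-1)
        for i, p in enumerate(points):
            best = assign[i]
            d_best = np.linalg.norm(p - centers[best])
            for j in range(k):
                # Triangle inequality: if d(c_best, c_j) >= 2 * d(p, c_best),
                # then d(p, c_j) >= d(p, c_best), so center j cannot win.
                if j == best or cc[best, j] >= 2.0 * d_best:
                    continue
                d = np.linalg.norm(p - centers[j])
                if d < d_best:
                    best, d_best = j, d
            assign[i] = best
        # Weighted centroid update: each unique color counts as many times
        # as it occurs in the image.
        for j in range(k):
            mask = assign == j
            if mask.any():
                centers[j] = np.average(points[mask], axis=0, weights=weights[mask])
    return centers, assign
```

    A production version would add a convergence check and better seeding (e.g., k-means++), but the pruning test in the inner loop is the essential speedup: distant centers are rejected without ever computing their distances to the point.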

    The K giant stars from the LAMOST survey data I: identification, metallicity, and distance

    We present a support vector machine classifier to identify K giant stars from the LAMOST survey directly using their spectral line features. The completeness of the identification is about 75% for tests based on LAMOST stellar parameters. The contamination in the identified K giant sample is lower than 2.5%. Applying the classification method to about 2 million LAMOST spectra observed during the pilot survey and the first-year survey, we select 298,036 K giant candidates. The metallicities of the sample are also estimated, with uncertainties of 0.13∼0.29 dex, based on the equivalent widths of the Mg b and iron lines. A Bayesian method is then developed to estimate the posterior probability of the distance for the K giant stars, based on the estimated metallicity and 2MASS photometry. The synthetic isochrone-based distance estimates have been calibrated using 7 globular clusters with a wide range of metallicities. The uncertainty of the estimated distance modulus at K = 11 mag, which is the median brightness of the K giant sample, is about 0.6 mag, corresponding to ∼30% in distance. As a scientific verification case, the trailing arm of the Sagittarius stream is clearly identified with the selected K giant sample. Moreover, at about 80 kpc from the Sun, we use our K giant stars to confirm a detection of stream members near the apocenter of the trailing tail. These rediscoveries of the features of the Sagittarius stream illustrate the potential of the LAMOST survey for detecting substructures in the halo of the Milky Way.
    Comment: 24 pages, 20 figures, submitted to Ap
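    The quoted numbers are self-consistent: a distance modulus error of σ_μ ≈ 0.6 mag propagates to a fractional distance error of (ln 10 / 5) σ_μ ≈ 0.46 × 0.6 ≈ 28%, i.e. the stated ∼30%. A minimal sketch of the kind of grid-based distance posterior the abstract describes is given below; it assumes Gaussian photometric/isochrone errors and a flat prior, and all names are hypothetical rather than the authors' pipeline.

```python
import numpy as np

def distance_modulus_posterior(m_K, sigma, M_K_iso, mu_grid=None):
    """Posterior over the distance modulus mu for one K giant.

    m_K     : observed apparent 2MASS K magnitude
    sigma   : combined photometric + isochrone uncertainty (mag)
    M_K_iso : absolute K magnitudes of isochrone points consistent
              with the star's estimated metallicity (1-D array)
    """
    if mu_grid is None:
        mu_grid = np.linspace(0.0, 20.0, 2001)  # ~10 pc out to 100 kpc
    # Gaussian likelihood, marginalized over the isochrone points.
    # (Equal weights here; a real pipeline would weight by the IMF and
    # the evolutionary timescales along the isochrone.)
    resid = m_K - (M_K_iso[None, :] + mu_grid[:, None])
    post = np.exp(-0.5 * (resid / sigma) ** 2).sum(axis=1)
    post /= np.trapz(post, mu_grid)  # normalize to a density in mu
    return mu_grid, post

# Converting a distance modulus sample to a distance in kpc:
# d_kpc = 10 ** (mu / 5.0 - 2.0)
```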

    Functionally distinct and selectively phosphorylated GPCR subpopulations co-exist in a single cell

    G protein-coupled receptors (GPCRs) transduce pleiotropic intracellular signals in a broad range of physiological responses and disease states. Activated GPCRs can undergo agonist-induced phosphorylation by G protein receptor kinases (GRKs) and by second-messenger-dependent protein kinases such as protein kinase A (PKA). Here, we characterize spatially segregated subpopulations of the β2-adrenergic receptor (β2AR) undergoing selective phosphorylation by GRKs or PKA in a single cell. GRKs primarily label monomeric β2ARs that undergo endocytosis, whereas PKA modifies dimeric β2ARs that remain at the cell surface. In hippocampal neurons, PKA-phosphorylated β2ARs are enriched in dendrites, whereas GRK-phosphorylated β2ARs accumulate in the soma, being excluded from dendrites in a neuron-maturation-dependent manner. Moreover, we show that PKA-phosphorylated β2ARs are necessary to augment the activity of the L-type calcium channel. Collectively, these findings provide evidence that functionally distinct subpopulations of this prototypical GPCR exist in a single cell.

    ALIP: Adaptive Language-Image Pre-training with Synthetic Caption

    Contrastive Language-Image Pre-training (CLIP) has significantly boosted the performance of various vision-language tasks by scaling up the dataset with image-text pairs collected from the web. However, the presence of intrinsic noise and unmatched image-text pairs in web data can potentially affect the performance of representation learning. To address this issue, we first utilize the OFA model to generate synthetic captions that focus on the image content. The generated captions contain complementary information that is beneficial for pre-training. We then propose Adaptive Language-Image Pre-training (ALIP), a bi-path model that integrates supervision from both the raw text and the synthetic caption. As the core components of ALIP, the Language Consistency Gate (LCG) and the Description Consistency Gate (DCG) dynamically adjust the weights of samples and of image-text/caption pairs during training. Meanwhile, the adaptive contrastive loss can effectively reduce the impact of noisy data and enhance the efficiency of the pre-training data. We validate ALIP with experiments on models and pre-training datasets of different scales. Experimental results show that ALIP achieves state-of-the-art performance on multiple downstream tasks, including zero-shot image-text retrieval and linear probing. To facilitate future research, the code and pre-trained models are released at https://github.com/deepglint/ALIP.
    Comment: 15 pages, 10 figures, ICCV202
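    As a rough illustration of the bi-path idea, here is a short PyTorch sketch in which each image is contrasted against both its raw web text and its synthetic caption, with a per-sample weight derived from text-caption agreement. The gating below is a simplified stand-in for the LCG/DCG modules described above, not the released implementation; consult the linked repository for the actual code.

```python
import torch
import torch.nn.functional as F

def alip_style_loss(img, txt, cap, tau=0.07):
    """Simplified bi-path contrastive loss in the spirit of ALIP.

    img, txt, cap: L2-normalized embeddings of shape (B, D) for images,
    raw web texts, and synthetic captions. The consistency gate is an
    assumed form, not ALIP's LCG/DCG.
    """
    B = img.size(0)
    tgt = torch.arange(B, device=img.device)

    def per_sample_infonce(a, b):
        logits = a @ b.t() / tau
        # Symmetric InfoNCE, kept per-sample so it can be reweighted.
        return 0.5 * (F.cross_entropy(logits, tgt, reduction="none")
                      + F.cross_entropy(logits.t(), tgt, reduction="none"))

    # Consistency gate (assumption): if the raw text agrees with the
    # synthetic caption, trust the raw pair more, and vice versa.
    w = torch.sigmoid((txt * cap).sum(dim=-1) / tau).detach()

    return (w * per_sample_infonce(img, txt)
            + (1.0 - w) * per_sample_infonce(img, cap)).mean()
```

    The convex combination means a pair whose raw text disagrees with its generated caption (a likely noisy pair) contributes mostly through the synthetic-caption path, mirroring the noise-reduction goal of the adaptive contrastive loss.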