
    Exploring the Limits of Deep Image Clustering using Pretrained Models

    We present a general methodology that learns to classify images without labels by leveraging pretrained feature extractors. Our approach involves self-distillation training of clustering heads, based on the fact that nearest neighbors in the pretrained feature space are likely to share the same label. We propose a novel objective to learn associations between images by introducing a variant of pointwise mutual information together with instance weighting. We demonstrate that the proposed objective is able to attenuate the effect of false positive pairs while efficiently exploiting the structure in the pretrained feature space. As a result, we improve the clustering accuracy over k-means on 17 different pretrained models by 6.1% and 12.2% on ImageNet and CIFAR100, respectively. Finally, using self-supervised pretrained vision transformers we push the clustering accuracy on ImageNet to 61.6%. The code will be open-sourced.
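
    Two ingredients of this abstract are concrete enough to sketch: mining nearest neighbors in a frozen pretrained feature space, and scoring image pairs with a pointwise-mutual-information-style association. The paper's exact objective and instance weighting are not given here, so the PMI variant, the weighting hook, and all names below (mine_nearest_neighbors, pmi_pair_loss, k) are illustrative assumptions, not the authors' implementation.

        # Minimal sketch (assumptions noted above), PyTorch.
        import torch
        import torch.nn.functional as F

        def mine_nearest_neighbors(features, k=20):
            """For each image, return the indices of its k nearest neighbors
            in the L2-normalized pretrained feature space."""
            feats = F.normalize(features, dim=1)
            sims = feats @ feats.t()                  # cosine similarities
            sims.fill_diagonal_(float("-inf"))        # exclude self-matches
            return sims.topk(k, dim=1).indices        # (N, k) neighbor indices

        def pmi_pair_loss(logits_a, logits_b, weights=None, eps=1e-8):
            """Toy PMI-style association between the cluster posteriors of an
            image (logits_a) and one of its mined neighbors (logits_b).
            'weights' can down-weight likely false-positive pairs."""
            p_a = logits_a.softmax(dim=1)                                   # (B, C)
            p_b = logits_b.softmax(dim=1)                                   # (B, C)
            agree = (p_a * p_b).sum(dim=1)                                  # prob. the pair lands in the same cluster
            expected = (p_a * p_b.mean(dim=0, keepdim=True)).sum(dim=1)     # agreement expected for a random pairing
            pmi = torch.log(agree + eps) - torch.log(expected + eps)
            if weights is not None:
                pmi = pmi * weights
            return -pmi.mean()                                              # maximize pairwise association

        # Toy usage: frozen backbone features plus a linear clustering head.
        feats = torch.randn(256, 512)
        head = torch.nn.Linear(512, 100)
        nbrs = mine_nearest_neighbors(feats, k=20)
        loss = pmi_pair_loss(head(feats), head(feats[nbrs[:, 0]]))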

    Contrastive Language-Image Pretrained (CLIP) Models are Powerful Out-of-Distribution Detectors

    We present a comprehensive experimental study on pretrained feature extractors for visual out-of-distribution (OOD) detection. We examine several setups, based on the availability of labels or image captions and using different combinations of in- and out-distributions. Intriguingly, we find that (i) contrastive language-image pretrained models achieve state-of-the-art unsupervised out-of-distribution performance using nearest neighbors feature similarity as the OOD detection score, (ii) supervised state-of-the-art OOD detection performance can be obtained without in-distribution fine-tuning, (iii) even top-performing billion-scale vision transformers trained with natural language supervision fail at detecting adversarially manipulated OOD images. Finally, based on our experiments, we discuss whether new benchmarks for visual anomaly detection are needed. Using the largest publicly available vision transformer, we achieve state-of-the-art performance across all 18 reported OOD benchmarks, including an AUROC of 87.6% (9.2% gain, unsupervised) and 97.4% (1.2% gain, supervised) for the challenging task of CIFAR100 → CIFAR10 OOD detection. The code will be open-sourced.
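
    Finding (i), nearest-neighbor feature similarity as the OOD score, can be illustrated with a short sketch. The abstract does not specify the protocol (value of k, normalization, or score aggregation), so the choices and names below (knn_ood_scores, k=10, mean of top-k similarities) are assumptions for illustration only.

        # Minimal k-NN OOD scoring sketch on frozen image embeddings (assumptions noted above).
        import numpy as np

        def knn_ood_scores(in_dist_feats, test_feats, k=10):
            """Score each test image by its mean cosine similarity to its k nearest
            in-distribution features; lower scores flag likely OOD inputs."""
            train = in_dist_feats / np.linalg.norm(in_dist_feats, axis=1, keepdims=True)
            test = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
            sims = test @ train.T                       # (N_test, N_train) cosine similarities
            topk = np.sort(sims, axis=1)[:, -k:]        # k largest similarities per test image
            return topk.mean(axis=1)

        # Toy usage with random stand-ins for frozen CLIP-style image embeddings.
        scores = knn_ood_scores(np.random.randn(1000, 512), np.random.randn(50, 512))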

    Comparison of anterior capsule contraction between hydrophobic and hydrophilic intraocular lens models.

    To compare the incidence of anterior capsule contraction syndrome (ACCS) after hydrophobic and hydrophilic intraocular lens (IOL) implantation. In this retrospective study, 639 eyes of 639 patients (one eye from each patient) were included and divided into two groups according to the type of IOL implanted [hydrophobic (group 1: 273 eyes) or hydrophilic (group 2: 366 eyes, two different IOL models: group 2a, 267 eyes and group 2b, 99 eyes)]. ACCS incidence was compared between groups 1 and 2, as well as between the two hydrophilic IOL models. ACCS was significantly (p = 0.012) less frequent in group 1 (hydrophobic) than in group 2 (hydrophilic) (4 eyes versus 19 eyes, respectively). In the hydrophilic group, no statistically significant difference was observed between the two IOL models (ACCS was observed in 13 eyes with the Quatrix model and six eyes with the ACR6D model; p = 0.65). ACCS was significantly more frequent after hydrophilic IOL implantation than after hydrophobic IOL implantation, while there was no statistically significant difference between the two hydrophilic IOL models.
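
    For reference, the group 1 versus group 2 comparison can be set up directly from the reported counts (4 of 273 versus 19 of 366 eyes with ACCS). The abstract does not state which statistical test was used, so the chi-square test below is only an illustrative choice, not the study's method.

        # Illustrative sketch only (test choice is an assumption, counts are from the abstract).
        from scipy.stats import chi2_contingency

        # 2x2 table: rows are ACCS / no ACCS, columns are group 1 (hydrophobic) / group 2 (hydrophilic).
        table = [[4, 19],
                 [273 - 4, 366 - 19]]
        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi2 = {chi2:.2f}, two-sided p = {p:.3f}")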