Uniqueness of Lagrangians in
We present a new and simpler proof of the fact that any Lagrangian
in is Hamiltonian isotopic to the zero
section. Our proof mirrors the one given by Li and Wu for the Hamiltonian
uniqueness of Lagrangians in , using surgery to turn Lagrangian spheres
into symplectic ones. The main novel contribution is a detailed proof of the
folklore fact that the complement of a symplectic quadric in
can be identified with the unit cotangent disc bundle of .
Comment: 9 pages. Comments welcome.
Exploring the Limits of Deep Image Clustering using Pretrained Models
We present a general methodology that learns to classify images without
labels by leveraging pretrained feature extractors. Our approach involves
self-distillation training of clustering heads, based on the fact that nearest
neighbors in the pretrained feature space are likely to share the same label.
We propose a novel objective to learn associations between images by
introducing a variant of pointwise mutual information together with instance
weighting. We demonstrate that the proposed objective is able to attenuate the
effect of false positive pairs while efficiently exploiting the structure in
the pretrained feature space. As a result, we improve the clustering accuracy
over k-means on different pretrained models by % and % on
ImageNet and CIFAR100, respectively. Finally, using self-supervised pretrained
vision transformers, we push the clustering accuracy on ImageNet to %.
The code will be open-sourced.
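The premise stated above, that nearest neighbors in a pretrained feature space are likely to share the same label, can be sketched as a pair-mining step: each image is paired with its most similar samples, and those pairs serve as pseudo-positives when training a clustering head. This is a minimal illustrative sketch, not the paper's actual pipeline; the toy feature vectors and the `k` parameter are assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest_neighbor_pairs(features, k=1):
    """Mine (anchor, neighbor) index pairs: each sample is paired with
    its k most similar samples in the pretrained feature space. The
    pairs act as pseudo-positive pairs for a clustering objective."""
    pairs = []
    for i, f in enumerate(features):
        sims = [(cosine(f, g), j) for j, g in enumerate(features) if j != i]
        sims.sort(reverse=True)
        for _, j in sims[:k]:
            pairs.append((i, j))
    return pairs

# Two toy "visual groups": samples 0-1 point one way, samples 2-3 another
feats = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
pairs = nearest_neighbor_pairs(feats, k=1)
```

In a real pipeline the features would come from a frozen pretrained extractor, and false-positive pairs (neighbors with different true labels) are exactly what the instance-weighted objective described above is meant to attenuate.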
Rethinking cluster-conditioned diffusion models
We present a comprehensive experimental study on image-level conditioning for
diffusion models using cluster assignments. We elucidate how individual
components regarding image clustering impact image synthesis across three
datasets. By combining recent advancements from image clustering and diffusion
models, we show that, given the optimal cluster granularity with respect to
image synthesis (visual groups), cluster-conditioning can achieve
state-of-the-art FID (1.67 and 2.17 on CIFAR10 and CIFAR100, respectively),
while attaining strong training sample efficiency. Finally, we propose a
novel method to derive an upper cluster bound that reduces the search space of
the visual groups using solely feature-based clustering. Unlike existing
approaches, we find no significant connection between clustering and
cluster-conditional image generation. The code and cluster assignments will be
released.
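The core idea of cluster-conditioning is that cluster assignments from feature-based clustering stand in for human class labels: each image is conditioned on its cluster id wherever a class-conditional diffusion model would use a class id. The minimal k-means below is an illustrative sketch under that assumption; the toy 2-D points and cluster count are not from the paper.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means over feature vectors. The returned assignment
    plays the role a class label would play in class-conditional
    diffusion: each image is conditioned on its cluster id instead."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest center by squared Euclidean distance.
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
            )
        # Update step: recompute each center as the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign

# Toy "image features" forming two obvious visual groups
feats = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
cluster_ids = kmeans(feats, k=2)
```

Choosing `k` here corresponds to the cluster-granularity question the study investigates: too few clusters merge distinct visual groups, too many fragment them.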
Contrastive Language-Image Pretrained (CLIP) Models are Powerful Out-of-Distribution Detectors
We present a comprehensive experimental study on pretrained feature
extractors for visual out-of-distribution (OOD) detection. We examine several
setups, based on the availability of labels or image captions and using
different combinations of in- and out-distributions. Intriguingly, we find that
(i) contrastive language-image pretrained models achieve state-of-the-art
unsupervised out-of-distribution detection performance using nearest-neighbor
feature similarity as the OOD detection score, (ii) supervised state-of-the-art OOD
detection performance can be obtained without in-distribution fine-tuning,
(iii) even top-performing billion-scale vision transformers trained with
natural language supervision fail at detecting adversarially manipulated OOD
images. Finally, based on our experiments, we discuss whether new benchmarks for
visual anomaly detection are needed. Using the largest publicly available
vision transformer, we achieve state-of-the-art performance across all
reported OOD benchmarks, including an AUROC of 87.6% (9.2% gain,
unsupervised) and 97.4% (1.2% gain, supervised) for the challenging task of
CIFAR100 vs. CIFAR10 OOD detection. The code will be open-sourced.
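The nearest-neighbor OOD score described in finding (i) can be sketched in a few lines: score a query by its maximum cosine similarity to a bank of in-distribution features, and flag low-scoring queries as out-of-distribution. In the paper's setting the features would come from a CLIP image encoder; the tiny hand-made vectors below are illustrative assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def ood_score(query, id_bank):
    """Nearest-neighbor OOD score: maximum cosine similarity between the
    query feature and any in-distribution feature. Lower values suggest
    the query is out-of-distribution; thresholding this score yields a
    detector with no in-distribution fine-tuning required."""
    return max(cosine(query, f) for f in id_bank)

# Toy in-distribution feature bank and two queries
id_bank = [[1.0, 0.0], [0.9, 0.1]]
score_in = ood_score([0.95, 0.05], id_bank)   # near the bank
score_out = ood_score([0.0, 1.0], id_bank)    # far from the bank
```

Note that this score is label-free, which is why it supports the fully unsupervised setup examined above.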
Comparison of anterior capsule contraction between hydrophobic and hydrophilic intraocular lens models.
To compare the incidence of anterior capsule contraction syndrome (ACCS) after hydrophobic and hydrophilic intraocular lens (IOLs) implantation.
In this retrospective study, 639 eyes of 639 patients (one eye per patient) were included and divided into two groups according to the type of IOL implanted: hydrophobic (group 1: 273 eyes) or hydrophilic (group 2: 366 eyes, comprising two IOL models: group 2a, 267 eyes; group 2b, 99 eyes). ACCS incidence was compared between groups 1 and 2, as well as between the two hydrophilic IOL models.
ACCS was significantly (p = 0.012) less frequent in group 1 (hydrophobic) than in group 2 (hydrophilic) (four versus 19 eyes, respectively). In the hydrophilic group, no statistically significant difference was observed between the two IOL models (ACCS occurred in 13 eyes with the Quatrix and six eyes with the ACR6D IOL model; p = 0.65).
ACCS incidence was significantly greater after hydrophilic IOL implantation than with hydrophobic lenses, while there was no statistically significant difference between the two hydrophilic IOL models.
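The reported group comparison can be checked from the stated counts (4/273 hydrophobic versus 19/366 hydrophilic eyes with ACCS) using a standard chi-square test on the 2x2 contingency table. The sketch below assumes a Pearson chi-square test without continuity correction, which is an assumption about the study's methodology, not something the abstract specifies.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square test (df = 1, no continuity correction) for a
    2x2 contingency table [[a, b], [c, d]]. Returns (statistic, p_value).
    For df = 1 the chi-square survival function reduces to
    erfc(sqrt(x / 2)), so the stdlib suffices."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Reported counts: ACCS in 4 of 273 hydrophobic eyes (a=4, b=269)
# versus 19 of 366 hydrophilic eyes (c=19, d=347).
stat, p = chi2_2x2(4, 269, 19, 347)
```

With these counts the statistic exceeds the 3.84 critical value at the 0.05 level, consistent with the significant difference the study reports.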