
    Enforcing connectivity of 3D linear structures using their 2D projections

    Many biological and medical tasks require the delineation of 3D curvilinear structures, such as blood vessels and neurites, from image volumes. This is typically done using neural networks trained by minimizing voxel-wise loss functions that do not capture the topological properties of these structures. As a result, the connectivity of the recovered structures is often wrong, which lessens their usefulness. In this paper, we propose to improve the 3D connectivity of our results by minimizing a sum of topology-aware losses on their 2D projections. This suffices both to increase accuracy and to reduce the annotation effort needed to produce the training data.
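
    To make the projection step concrete, here is a minimal sketch (ours, not the authors' implementation) in PyTorch: the predicted 3D volume and its ground truth are max-projected along each axis, and a 2D loss is summed over the three views. The topo_loss_2d argument is a placeholder for any differentiable topology-aware 2D loss.

        import torch

        def projection_loss(pred_3d, gt_3d, topo_loss_2d):
            """pred_3d, gt_3d: (B, D, H, W) voxel probabilities."""
            total = 0.0
            for axis in (1, 2, 3):                       # project along D, H, W
                pred_2d = torch.amax(pred_3d, dim=axis)  # differentiable max projection
                gt_2d = torch.amax(gt_3d, dim=axis)
                total = total + topo_loss_2d(pred_2d, gt_2d)
            return total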

    Learning to Segment 3D Linear Structures Using Only 2D Annotations

    We propose a loss function for training a Deep Neural Network (DNN) to segment volumetric data that accommodates ground-truth annotations of 2D projections of the training volumes instead of annotations of the 3D volumes themselves. As a consequence, we significantly decrease the number of annotations needed for a given training set. We apply the proposed loss to train DNNs for segmentation of vascular and neural networks in microscopy images and demonstrate only a marginal accuracy loss associated with the significant reduction in annotation effort. The lower labor cost of deploying DNNs brought about by our method can contribute to wide adoption of these techniques for the analysis of 3D images of linear structures.
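
    A hedged sketch of how such a loss might look, assuming max projections and binary 2D annotations (the layout of annotations_2d, a dictionary mapping a projection axis to a label image, is our invention for illustration):

        import torch
        import torch.nn.functional as F

        def loss_from_2d_annotations(pred_3d, annotations_2d):
            """pred_3d: (B, D, H, W) probabilities; annotations_2d maps a
            projection axis (1, 2, or 3) to a binary 2D label tensor."""
            total = 0.0
            for axis, label_2d in annotations_2d.items():
                pred_2d = torch.amax(pred_3d, dim=axis)   # supervise only the projection
                total = total + F.binary_cross_entropy(pred_2d, label_2d)
            return total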

    LaFTer: Label-Free Tuning of Zero-shot Classifier using Language and Unlabeled Image Collections

    Recently, large-scale pre-trained Vision and Language (VL) models have set a new state of the art in zero-shot visual classification, enabling open-vocabulary recognition of a potentially unlimited set of categories defined as simple language prompts. However, despite these great advances, the performance of these zero-shot classifiers still falls short of that of dedicated (closed category set) classifiers trained with supervised fine-tuning. In this paper we show, for the first time, how to reduce this gap without any labels and without any paired VL data, using an unlabeled image collection and a set of texts auto-generated by a Large Language Model (LLM) that describe the categories of interest and effectively substitute for labeled visual instances of those categories. Using our label-free approach, we attain significant performance improvements over the zero-shot performance of the base VL model and other contemporary methods and baselines on a wide variety of datasets, demonstrating absolute improvements of up to 11.7% (3.8% on average) in the label-free setting. Moreover, despite our approach being label-free, we observe 1.3% average gains over leading few-shot prompting baselines that do use 5-shot supervision.
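
    The gist of the label-free recipe can be sketched as follows (a simplified reading, not the paper's actual pipeline; encode_text and encode_image stand for the frozen VL model's encoders, and class_texts for the LLM-generated descriptions): train a small classifier on text embeddings alone, then use it to pseudo-label the unlabeled images for further tuning.

        import torch
        import torch.nn.functional as F

        def fit_text_classifier(encode_text, class_texts, num_classes, steps=1000):
            """class_texts: list of (description, class_id) pairs."""
            feats = torch.stack([encode_text(t) for t, _ in class_texts]).detach()
            labels = torch.tensor([c for _, c in class_texts])
            head = torch.nn.Linear(feats.shape[1], num_classes)
            opt = torch.optim.Adam(head.parameters(), lr=1e-3)
            for _ in range(steps):       # text embeddings stand in for labeled images
                loss = F.cross_entropy(head(F.normalize(feats, dim=-1)), labels)
                opt.zero_grad(); loss.backward(); opt.step()
            return head

        def pseudo_label(encode_image, head, images):
            with torch.no_grad():
                feats = torch.stack([encode_image(x) for x in images])
                return head(F.normalize(feats, dim=-1)).argmax(dim=-1)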

    Beyond the Pixel-Wise Loss for Topology-Aware Delineation

    Delineation of curvilinear structures is an important problem in Computer Vision with multiple practical applications. With the advent of Deep Learning, many current approaches to automatic delineation have focused on finding more powerful deep architectures, but have continued using the usual pixel-wise losses such as binary cross-entropy. In this paper we claim that pixel-wise losses alone are unsuitable for this problem because of their inability to reflect the topological impact of mistakes in the final prediction. We propose a new loss term that is aware of the higher-order topological features of linear structures. We also exploit a refinement pipeline that iteratively applies the same model to the previous delineation to refine the predictions at each step, while keeping the number of parameters and the complexity of the model constant. When combined with the standard pixel-wise loss, both our new loss term and the iterative refinement boost the quality of the predicted delineations, in some cases almost doubling the accuracy compared to the same classifier trained with binary cross-entropy alone. We show that our approach outperforms state-of-the-art methods on a wide range of data, from microscopy to aerial images.
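
    The refinement loop is easy to state in code; below is a minimal sketch under our own assumptions (the model consumes the image concatenated with its previous prediction, and the initial prediction is empty), not the paper's released implementation:

        import torch

        def iterative_refinement(model, image, steps=3):
            """image: (B, C, H, W); model maps (B, C+1, H, W) -> (B, 1, H, W) logits."""
            pred = torch.zeros(image.shape[0], 1, *image.shape[2:])  # empty initial delineation
            for _ in range(steps):                  # same weights reused at every step
                pred = torch.sigmoid(model(torch.cat([image, pred], dim=1)))
            return pred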

    MAtch, eXpand and Improve: Unsupervised Finetuning for Zero-Shot Action Recognition with Language Knowledge

    Large-scale Vision-Language (VL) models have shown tremendous success in aligning representations between the visual and text modalities. This enables remarkable progress in zero-shot recognition, image generation and editing, and many other exciting tasks. However, VL models tend to over-represent objects while paying much less attention to verbs, and require additional tuning on video data for their best zero-shot action recognition performance. While previous work relied on large-scale, fully-annotated data, in this work we propose an unsupervised approach. We adapt a VL model for zero-shot and few-shot action recognition using a collection of unlabeled videos and an unpaired action dictionary. Based on that, we leverage Large Language Models and VL models to build a text bag for each unlabeled video via matching, text expansion, and captioning. We use those bags in a Multiple Instance Learning setup to adapt an image-text backbone to video data. Although finetuned on unlabeled video data, our resulting models demonstrate high transferability to numerous unseen zero-shot downstream tasks, improving the base VL model performance by up to 14%, and even comparing favorably to fully-supervised baselines in both zero-shot and few-shot video recognition transfer. The code will be released at https://github.com/wlin-at/MAXI.
    Comment: Accepted at ICCV 2023
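
    As a rough sketch of the Multiple Instance Learning step (our reading, with invented names; the released code at the URL above is authoritative), the contrastive objective only asks that at least one text in a video's bag matches that video, here via a max over the bag's similarities:

        import torch
        import torch.nn.functional as F

        def mil_max_loss(video_emb, text_bags, temperature=0.07):
            """video_emb: (B, D); text_bags: length-B list of (Ni, D) text embeddings."""
            video_emb = F.normalize(video_emb, dim=-1)
            logits = []
            for texts in text_bags:                        # one bag per video
                sims = video_emb @ F.normalize(texts, dim=-1).T / temperature
                logits.append(sims.max(dim=1).values)      # best-matching instance per bag
            logits = torch.stack(logits, dim=1)            # (B, B) video-vs-bag scores
            target = torch.arange(video_emb.shape[0])      # video i should match bag i
            return F.cross_entropy(logits, target)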

    Sit Back and Relax: Learning to Drive Incrementally in All Weather Conditions

    In autonomous driving scenarios, current object detection models show strong performance when tested in clear weather. However, their performance deteriorates significantly in degraded weather conditions. In addition, even when adapted to perform robustly across a sequence of different weather conditions, they are often unable to perform well in all of them and suffer from catastrophic forgetting. To efficiently mitigate forgetting, we propose Domain-Incremental Learning through Activation Matching (DILAM), which employs unsupervised feature alignment to adapt only the affine parameters of a clear-weather pre-trained network to different weather conditions. We store these affine parameters in a memory bank, one entry per weather condition, and plug in the weather-specific parameters during driving (i.e., at test time) when the respective weather condition is encountered. Our memory bank is extremely lightweight, since affine parameters account for less than 2% of a typical object detector. Furthermore, contrary to previous domain-incremental learning approaches, we do not require a weather label at test time and instead propose to infer the weather condition automatically with a majority-voting linear classifier.
    Comment: Intelligent Vehicle Conference (oral presentation)
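
    A hedged sketch of the memory bank mechanics (details such as the 'norm' substring test are our assumptions, and the weather classifier is abstracted away): only the affine parameters of the normalization layers are snapshotted per condition and copied back in when that condition is detected.

        import torch

        def collect_affine(model):
            """Snapshot affine (weight/bias) parameters of normalization layers."""
            return {n: p.detach().clone() for n, p in model.named_parameters()
                    if 'bn' in n or 'norm' in n}           # layer naming is an assumption

        memory_bank = {}                                   # weather condition -> affine params

        def store(model, weather):
            memory_bank[weather] = collect_affine(model)

        def plug_in(model, weather):
            params = dict(model.named_parameters())
            with torch.no_grad():
                for name, value in memory_bank[weather].items():
                    params[name].copy_(value)              # swap in weather-specific parameters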

    MATE: Masked Autoencoders are Online 3D Test-Time Learners

    We propose MATE, the first Test-Time-Training (TTT) method designed for 3D data. It makes deep networks trained on point cloud classification robust to distribution shifts occurring in test data that could not be anticipated during training. Like existing TTT methods, which focus on classifying 2D images in the presence of distribution shifts at test time, MATE also leverages test data for adaptation. Its test-time objective is that of a Masked Autoencoder: each test point cloud has a large portion of its points removed before being fed to the network, which is tasked with reconstructing the full point cloud. Once the network is updated, it is used to classify the point cloud. We test MATE on several 3D object classification datasets and show that it significantly improves the robustness of deep networks to several types of corruption commonly occurring in 3D point clouds. Further, MATE is very efficient in terms of the fraction of points it needs for adaptation: it can adapt effectively given as few as 5% of the tokens of each test sample, which reduces its memory footprint and makes it lightweight. We also highlight that MATE achieves competitive performance by adapting only sparingly on the test data, which further reduces its computational overhead and makes it well suited to real-time applications.
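
    The test-time-training cycle can be summarized with a short sketch (a simplification under our own assumptions; encoder, decoder, and classifier are placeholder modules and the reconstruction loss is a one-sided Chamfer-style distance):

        import torch

        def mate_step(encoder, decoder, classifier, points, opt, keep_ratio=0.1):
            """points: (N, 3). Adapt on one test sample, then classify it."""
            n_keep = max(1, int(keep_ratio * points.shape[0]))
            idx = torch.randperm(points.shape[0])[:n_keep]   # keep a small visible subset
            recon = decoder(encoder(points[idx]))            # reconstruct the full cloud
            loss = torch.cdist(recon, points).min(dim=1).values.mean()
            opt.zero_grad(); loss.backward(); opt.step()     # one self-supervised update
            with torch.no_grad():
                return classifier(encoder(points)).argmax()  # classify with adapted weights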

    Facade image segmentation with shape priors

    The aim of this work is to propose a framework for facade segmentation with user-defined shape priors. In this framework, the user specifies a shape prior in a rigorously defined shape prior formalism. The prior expresses a number of hard constraints and soft preferences on the spatial configuration of the segments constituting the final segmentation. Existing approaches to the problem are caught in a compromise between the types of constraints whose satisfaction the segmentation algorithm can guarantee and its ability to approximate optimal segmentations consistent with the prior. In this thesis we explore a number of approaches to facade parsing that combine a prior formalism with high expressive power, guarantees that the resulting segmentations conform to the prior, and effective inference. We evaluate the proposed algorithms on a number of datasets. Since one of our focus points is the accuracy gain resulting from more effective inference algorithms, we perform a fair comparison to existing methods using the same data term. Our contributions include a combination of graph grammars, which express the variation of facade structure, with graphical models, which encode the energy of models of a given structure for different positions of the facade elements. We also present the first linear programming formulation of facade parsing with shape priors. Finally, we propose a shape prior formalism that enables formulating the problem of optimal segmentation as inference in a Markov random field over the standard four-connected grid of pixels. This last method advances the state of the art by combining the flexibility of a user-defined grammar with a segmentation accuracy that was previously reserved for frameworks with pre-defined priors. It also handles occlusions by simultaneously recovering the structure of the occluded facade and segmenting the occluding objects. We believe that it can be extended in many directions, including the semantization of three-dimensional point clouds and the parsing of images of general urban scenes.
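
    As a hedged illustration of the last formalism (our notation, not necessarily the exact formulation from the thesis), segmentation becomes MAP inference, i.e., minimization of a standard MRF energy over the four-connected pixel grid:

        E(x) = \sum_{p \in \mathcal{P}} \theta_p(x_p) + \sum_{(p,q) \in \mathcal{N}_4} \theta_{pq}(x_p, x_q)

    where x assigns a semantic label x_p to every pixel p, the unary potentials \theta_p come from the data term, and the pairwise potentials \theta_{pq} encode the grammar-derived shape prior between four-connected neighbors (p, q).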