    ACAT: Adversarial Counterfactual Attention for Classification and Detection in Medical Imaging

    In some medical imaging tasks, and in other settings where only small parts of the image are informative for the classification task, traditional CNNs can struggle to generalise. Manually annotated Regions of Interest (ROI) are sometimes used to isolate the most informative parts of the image. However, these are expensive to collect and may vary significantly across annotators. To overcome these issues, we propose a framework that employs saliency maps to obtain soft spatial attention masks that modulate the image features at different scales. We refer to our method as Adversarial Counterfactual Attention (ACAT). ACAT increases the baseline classification accuracy of lesions in brain CT scans from 71.39% to 72.55% and of COVID-19 related findings in lung CT scans from 67.71% to 70.84%, and exceeds the performance of competing methods. We investigate the best way to generate the saliency maps employed in our architecture and propose a way to obtain them from adversarially generated counterfactual images. These maps are able to isolate the area of interest in brain and lung CT scans without using any manual annotations. In the task of localising the lesion location out of 6 possible regions, they obtain a score of 65.05% on brain CT scans, improving on the score of 61.29% obtained with the best competing method. (Comment: 17 pages, 7 figures)
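The core mechanism described above, modulating feature maps with a soft spatial attention mask derived from a saliency map, can be sketched in a few lines. This is an illustrative toy, not the ACAT implementation: the function names (`downsample`, `modulate`) and the choice of plain elementwise multiplication at each scale are assumptions; the paper's exact modulation scheme may differ.

```python
def downsample(mask, size):
    # Average-pool a 2D saliency mask (values in [0, 1]) down to size x size,
    # so it can be applied at a coarser feature-map scale.
    h, w = len(mask), len(mask[0])
    out = [[0.0] * size for _ in range(size)]
    for i in range(size):
        for j in range(size):
            r0, r1 = i * h // size, (i + 1) * h // size
            c0, c1 = j * w // size, (j + 1) * w // size
            vals = [mask[r][c] for r in range(r0, r1) for c in range(c0, c1)]
            out[i][j] = sum(vals) / len(vals)
    return out

def modulate(features, mask):
    # features: C x H x W nested lists; mask: H x W soft attention weights.
    # Hypothetical modulation: scale every channel elementwise by the mask,
    # suppressing regions the saliency map marks as uninformative.
    return [[[f * m for f, m in zip(frow, mrow)]
             for frow, mrow in zip(chan, mask)]
            for chan in features]
```

In a real network the same mask would be pooled to each scale with `downsample` and applied to the corresponding feature maps, so attention is consistent across the feature hierarchy.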

    Saliency Detection from Subitizing Processing

    Most saliency methods are evaluated on their ability to generate saliency maps, not on their usefulness in a complete vision pipeline such as image classification or salient object subitizing. In this work, we introduce saliency subitizing as a form of weak supervision. This task is inspired by the ability of people to quickly and accurately identify the number of items within the subitizing range (e.g., 1 to 4 items), so the subitizing information tells us the number of featured objects in a given image. To this end, we propose a saliency subitizing process (SSP) as a first approximation to learning saliency detection, without the need for unsupervised methods or random seeds. We conduct extensive experiments on two benchmark datasets (Toronto and SID4VAM). The experimental results show that our method outperforms other weakly supervised methods and even performs comparably to some fully supervised methods.
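The weak-supervision idea above is that the only label is a count in the subitizing range, against which a predicted saliency map can be checked. A minimal sketch of that signal, assuming a simple threshold-and-count readout (the SSP model itself is a learned network; `count_salient` and the 4-connectivity choice here are illustrative assumptions):

```python
def count_salient(mask, thresh=0.5):
    # Weak "subitizing" readout: threshold a saliency map and count
    # 4-connected blobs. Comparing this count to the ground-truth
    # subitizing label (1-4) gives a training signal without any
    # pixel-level annotation.
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] >= thresh and not seen[i][j]:
                count += 1
                stack = [(i, j)]          # flood-fill this blob
                while stack:
                    r, c = stack.pop()
                    if 0 <= r < h and 0 <= c < w and mask[r][c] >= thresh and not seen[r][c]:
                        seen[r][c] = True
                        stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return count
```

A training loop would penalise the model whenever the count recovered from its saliency map disagrees with the image's subitizing label.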

    Tensor feature hallucination for few-shot learning

    Few-shot learning addresses the challenge of learning how to solve novel tasks given not just limited supervision but limited data as well. An attractive solution is synthetic data generation. However, most such methods are overly sophisticated, focusing on high-quality, realistic data in the input space. It is unclear whether adapting them to the few-shot regime and using them for the downstream task of classification is the right approach. Previous works on synthetic data generation for few-shot classification focus on exploiting complex models, e.g. a Wasserstein GAN with multiple regularizers or a network that transfers latent diversities from known to novel classes. We follow a different approach and investigate how a simple and straightforward synthetic data generation method can be used effectively. We make two contributions, namely we show that: (1) a simple loss function is more than enough for training a feature generator in the few-shot setting; and (2) learning to generate tensor features instead of vector features is superior. Extensive experiments on miniImagenet, CUB and CIFAR-FS datasets show that our method sets a new state of the art, outperforming more sophisticated few-shot data augmentation methods. The source code can be found at https://github.com/MichalisLazarou/TFH_fewshot
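The two claims above, a simple loss and tensor-shaped (C x H x W) rather than pooled vector features, can be illustrated with a toy sketch. This is not the paper's generator (which is a learned network; see the linked repository): `hallucinate` here is a stand-in that perturbs a class prototype tensor, and `mse` shows the kind of plain reconstruction loss the abstract argues is sufficient.

```python
import random

def mse(a, b):
    # Plain mean-squared-error loss over flattened C x H x W tensor
    # features -- the "simple loss" stand-in.
    flat = lambda t: [v for ch in t for row in ch for v in row]
    fa, fb = flat(a), flat(b)
    return sum((x - y) ** 2 for x, y in zip(fa, fb)) / len(fa)

def hallucinate(prototype, n, sigma=0.1, seed=0):
    # Hypothetical stand-in for the learned generator: jitter a class
    # prototype tensor (C x H x W) with Gaussian noise to synthesize n
    # extra support features, keeping the full spatial tensor shape
    # instead of collapsing it to a vector.
    rng = random.Random(seed)
    return [[[[v + rng.gauss(0.0, sigma) for v in row]
              for row in ch]
             for ch in prototype]
            for _ in range(n)]
```

The synthesized tensors would be appended to the real support set before fitting the few-shot classifier; keeping the spatial dimensions is what distinguishes tensor hallucination from vector-feature approaches.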