
    Data efficient deep learning for medical image analysis: A survey

    The rapid evolution of deep learning has significantly advanced the field of medical image analysis. However, despite these achievements, the further enhancement of deep learning models for medical image analysis faces a significant challenge due to the scarcity of large, well-annotated datasets. To address this issue, recent years have witnessed a growing emphasis on the development of data-efficient deep learning methods. This paper conducts a thorough review of data-efficient deep learning methods for medical image analysis. To this end, we categorize these methods based on the level of supervision they rely on, encompassing categories such as no supervision, inexact supervision, incomplete supervision, inaccurate supervision, and only limited supervision. We further divide these categories into finer subcategories. For example, we categorize inexact supervision into multiple instance learning and learning with weak annotations. Similarly, we categorize incomplete supervision into semi-supervised learning, active learning, domain-adaptive learning, and so on. Furthermore, we systematically summarize commonly used datasets for data-efficient deep learning in medical image analysis and investigate future research directions to conclude this survey. Comment: Under Review
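    As a brief illustration of the incomplete-supervision setting mentioned in the abstract, the sketch below shows pseudo-labeling, one common semi-supervised strategy: a model trained on a small labeled set assigns provisional labels to unlabeled images and keeps only the confident ones. The model, optimizer, and threshold are assumptions for illustration and are not taken from the survey.

```python
# Minimal pseudo-labeling sketch (assumed setup, not from the survey).
import torch
import torch.nn.functional as F

def pseudo_label_step(model, labeled_batch, unlabeled_batch, optimizer,
                      threshold=0.95):
    """One training step combining labeled data and confident pseudo-labels."""
    x_l, y_l = labeled_batch          # labeled images and ground-truth labels
    x_u = unlabeled_batch             # unlabeled images

    with torch.no_grad():             # generate pseudo-labels without gradients
        probs = F.softmax(model(x_u), dim=1)
        conf, pseudo_y = probs.max(dim=1)
        mask = conf >= threshold      # keep only high-confidence predictions

    loss = F.cross_entropy(model(x_l), y_l)
    if mask.any():                    # add the unlabeled term if any pseudo-labels survive
        loss = loss + F.cross_entropy(model(x_u[mask]), pseudo_y[mask])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```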

    Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives

    Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily focus on supervised learning, assuming that the training and testing data are drawn from the same distribution. Unfortunately, this assumption may not always hold true in practice. To address these issues, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. In recent years, significant advancements have been made in UDA, resulting in a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the different tasks they perform. We also discuss the respective datasets used in the studies to assess the divergence between the different domains. Finally, we discuss emerging areas and provide insights and discussions on future research directions to conclude this survey. Comment: Under Review
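    One of the feature-alignment strategies mentioned above can be sketched with a gradient reversal layer in the style of domain-adversarial training: a domain classifier tries to separate source from target features, while the reversed gradient pushes the encoder to make the two domains indistinguishable. The encoder and domain head below are placeholders, not a method described in the review.

```python
# Gradient-reversal sketch for adversarial feature alignment (assumed modules).
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None   # reverse (and scale) the gradient

def domain_alignment_loss(encoder, domain_head, x_source, x_target, lambd=1.0):
    """Domain-confusion loss: features from both domains pass through the
    reversal layer before a binary source/target classifier."""
    feats = torch.cat([encoder(x_source), encoder(x_target)], dim=0)
    feats = GradReverse.apply(feats, lambd)
    logits = domain_head(feats).squeeze(1)      # domain_head outputs one logit per sample
    labels = torch.cat([torch.zeros(len(x_source)),
                        torch.ones(len(x_target))]).to(logits.device)
    return nn.functional.binary_cross_entropy_with_logits(logits, labels)
```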

    Lung Pattern Analysis using Artificial Intelligence for the Diagnosis Support of Interstitial Lung Diseases

    Interstitial lung diseases (ILDs) are a group of more than 200 chronic lung disorders characterized by inflammation and scarring of the lung tissue that leads to respiratory failure. Although ILD is a heterogeneous group of histologically distinct diseases, most of them exhibit similar clinical presentations and their diagnosis often presents a dilemma. Early diagnosis is crucial for making treatment decisions, while misdiagnosis may lead to life-threatening complications. If a final diagnosis cannot be reached with the high-resolution computed tomography scan, additional invasive procedures are required (e.g. bronchoalveolar lavage, surgical biopsy). The aim of this PhD thesis was to investigate the components of a computational system that will assist radiologists with the diagnosis of ILDs, while avoiding dangerous, expensive and time-consuming invasive biopsies. The appropriate interpretation of the available radiological data combined with clinical/biochemical information can provide a reliable diagnosis, able to improve the diagnostic accuracy of the radiologists. In this thesis, we introduce two convolutional neural networks particularly designed for ILDs and a training scheme that employs knowledge transfer from the related domain of general texture classification to enhance performance. Moreover, we investigate the clinical relevance of breathing information for disease classification. The breathing information is quantified as a deformation field between inhale-exhale lung images using a novel 3D convolutional neural network for medical image registration. Finally, we design and evaluate the final end-to-end computational system for ILD classification using lung anatomy segmentation algorithms from the literature and the proposed ILD quantification neural networks. Deep learning approaches were investigated for all of the aforementioned steps, and the results demonstrate their potential for analyzing lung images.
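    The breathing information described above is a dense deformation field between inhale and exhale scans. The sketch below is a minimal, hypothetical example of how such a field could warp one volume toward the other; it does not reproduce the registration network proposed in the thesis, and the channel ordering of the displacement field is an assumption.

```python
# Hypothetical spatial-transformer-style warp of a 3D volume with a displacement field.
import torch
import torch.nn.functional as F

def warp_volume(moving, displacement):
    """Warp a 3D volume with a dense displacement field.

    moving:       (N, 1, D, H, W) image volume (e.g. inhale CT)
    displacement: (N, 3, D, H, W) voxel displacements, channels assumed in (z, y, x) order
    """
    n, _, d, h, w = moving.shape
    # identity sampling grid in normalized [-1, 1] coordinates, ordered (x, y, z)
    zs, ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, d), torch.linspace(-1, 1, h),
        torch.linspace(-1, 1, w), indexing="ij")
    identity = torch.stack((xs, ys, zs), dim=-1).unsqueeze(0).expand(n, -1, -1, -1, -1)

    # convert voxel displacements to normalized coordinates and add them to the grid
    scale = torch.tensor([2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1), 2.0 / max(d - 1, 1)])
    flow = displacement.permute(0, 2, 3, 4, 1)[..., [2, 1, 0]] * scale
    grid = identity + flow

    return F.grid_sample(moving, grid, align_corners=True)
```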

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. The Brunswick model originally addresses face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals, in addition to the spoken messages. Social signals have to be interpreted through a recognition phase that considers visual and audio information. The Brunswick model makes it possible to quantitatively evaluate the quality of the interaction using statistical tools that measure how effective the recognition phase is. In this paper we cast this theory in the setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, where gaze is the social signal considered.
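    A rough, hypothetical sketch of the kind of statistic such a lens-model evaluation relies on is shown below: correlating an internal state with the expressed gaze cue (externalization) and the expressed cue with the robot's estimate of it (recognition). The variable names and numbers are invented for illustration and are not taken from the paper.

```python
# Toy lens-model style correlations (illustrative data, not from the paper).
import numpy as np

def pearson(a, b):
    """Pearson correlation between two 1-D signals."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.corrcoef(a, b)[0, 1])

# hypothetical per-frame measurements during an interaction
internal_state = np.array([0.2, 0.5, 0.9, 0.7, 0.3])  # e.g. annotated engagement
expressed_gaze = np.array([0.1, 0.6, 0.8, 0.6, 0.2])  # gaze cue actually produced
perceived_gaze = np.array([0.2, 0.5, 0.7, 0.7, 0.3])  # robot's estimate of the gaze cue

externalization = pearson(internal_state, expressed_gaze)  # how well the cue reflects the state
recognition = pearson(expressed_gaze, perceived_gaze)      # how well the robot reads the cue
print(f"externalization={externalization:.2f}, recognition={recognition:.2f}")
```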