
    A Survey on Deep Learning in Medical Image Registration: New Technologies, Uncertainty, Evaluation Metrics, and Beyond

    Over the past decade, deep learning technologies have greatly advanced the field of medical image registration. The initial developments, such as ResNet-based and U-Net-based networks, laid the groundwork for deep learning-driven image registration. Subsequent progress has been made in various aspects of deep learning-based registration, including similarity measures, deformation regularizations, and uncertainty estimation. These advancements have not only enriched the field of deformable image registration but have also facilitated its application in a wide range of tasks, including atlas construction, multi-atlas segmentation, motion estimation, and 2D-3D registration. In this paper, we present a comprehensive overview of the most recent advancements in deep learning-based image registration. We begin with a concise introduction to the core concepts of deep learning-based image registration. Then, we delve into innovative network architectures, loss functions specific to registration, and methods for estimating registration uncertainty. Additionally, this paper explores appropriate evaluation metrics for assessing the performance of deep learning models in registration tasks. Finally, we highlight the practical applications of these novel techniques in medical imaging and discuss the future prospects of deep learning-based image registration.
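    To make the core recipe concrete, the unsupervised loss that most of these methods optimize can be sketched in a few lines: warp the moving image with a predicted displacement field, score its similarity to the fixed image, and regularize the deformation. The PyTorch sketch below is illustrative only, not code from the survey; the MSE similarity, the diffusion regularizer, and the normalized-coordinate (x, y) displacement convention are assumptions (practical systems often substitute NCC or mutual information as the similarity measure).

        import torch
        import torch.nn.functional as F

        def warp(moving, flow):
            """Warp a 2D moving image (B, C, H, W) with a displacement field
            flow (B, 2, H, W), assumed given in normalized [-1, 1] coordinates
            with channel order (x, y)."""
            B, _, H, W = moving.shape
            ys, xs = torch.meshgrid(
                torch.linspace(-1, 1, H, device=moving.device),
                torch.linspace(-1, 1, W, device=moving.device),
                indexing="ij",
            )
            grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(B, -1, -1, -1)
            return F.grid_sample(moving, grid + flow.permute(0, 2, 3, 1),
                                 align_corners=True)

        def registration_loss(fixed, moving, flow, lam=0.01):
            """Image similarity plus a diffusion (smoothness) regularizer."""
            similarity = F.mse_loss(warp(moving, flow), fixed)
            dx = flow[:, :, :, 1:] - flow[:, :, :, :-1]   # gradients along width
            dy = flow[:, :, 1:, :] - flow[:, :, :-1, :]   # gradients along height
            smoothness = dx.pow(2).mean() + dy.pow(2).mean()
            return similarity + lam * smoothness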

    Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries

    This two-volume set LNCS 12962 and 12963 constitutes the thoroughly refereed proceedings of the 7th International MICCAI Brainlesion Workshop, BrainLes 2021, as well as the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge, the Federated Tumor Segmentation (FeTS) Challenge, the Cross-Modality Domain Adaptation (CrossMoDA) Challenge, and the challenge on Quantification of Uncertainties in Biomedical Image Quantification (QUBIQ). These were held jointly at the 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021, in September 2021. The 91 revised papers presented in these volumes were selected from 151 submissions. Due to the COVID-19 pandemic, the conference was held virtually. This is an open access book.

    Multiple Object Tracking in Light Microscopy Images Using Graph-based and Deep Learning Methods

    Multi-object tracking (MOT) is an image analysis problem that involves localizing and linking objects in an image sequence over time, with numerous applications in areas such as autonomous driving, robotics, and surveillance. Beyond technical application areas, there is also a great need for MOT in biomedical applications. For example, experiments recorded with light microscopy over several hours or days can contain hundreds or even thousands of similar-looking objects, making manual analysis impossible. To draw reliable conclusions from the tracked objects, however, the predicted trajectories must be of high quality. Domain-specific MOT approaches are therefore needed that can account for the particularities of light microscopy data. This thesis develops two novel methods for the MOT problem in light microscopy images and presents approaches for comparing tracking methods. To separate the performance of a tracking method from the quality of the segmentation, an approach is proposed that allows the tracking method to be analyzed independently of the segmentation, which also enables studying the robustness of tracking methods under degraded segmentation data. Furthermore, a graph-based tracking method is proposed that bridges the gap between easy-to-use but less performant tracking methods and high-performing tracking methods with many hard-to-tune parameters. The proposed tracking method has only a few manually adjustable parameters and is easily applicable to 2D and 3D datasets. By modeling prior knowledge about the shape of the tracking graph, it is also able to automatically correct certain types of segmentation errors. In addition, a deep-learning-based approach is proposed that learns instance segmentation and object tracking simultaneously in a single neural network, and that learns to predict representations that are understandable to humans. To demonstrate the performance of the two proposed tracking methods relative to other state-of-the-art, domain-specific tracking approaches, they are applied to a domain-specific benchmark. Moreover, further evaluation criteria for tracking methods are introduced and used to compare the two proposed methods.
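    To illustrate the graph-based idea (detections as nodes, frame-to-frame edges weighted by a distance cost, links chosen by a global optimum rather than greedily), here is a toy Python sketch. The Euclidean cost and the max_dist gate are illustrative assumptions, not the thesis's formulation, which additionally models cell divisions and corrects segmentation errors.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def link_frames(dets_t, dets_t1, max_dist=20.0):
            """Link detections (N, 2) in frame t to detections (M, 2) in
            frame t+1. Edge costs are Euclidean distances; the globally
            optimal one-to-one matching is found with the Hungarian
            algorithm. Pairs farther apart than max_dist stay unlinked,
            starting or ending a track instead of forcing a bad edge."""
            cost = np.linalg.norm(dets_t[:, None, :] - dets_t1[None, :, :], axis=-1)
            rows, cols = linear_sum_assignment(cost)
            return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]

        # Example: three cells drifting slightly between two frames.
        frame_t = np.array([[10.0, 10.0], [40.0, 12.0], [80.0, 50.0]])
        frame_t1 = np.array([[12.0, 11.0], [43.0, 14.0], [79.0, 52.0]])
        print(link_frames(frame_t, frame_t1))  # [(0, 0), (1, 1), (2, 2)]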

    Automatic Diagnosis of Melanoma Using Modern Machine Learning Techniques

    The incidence and mortality rates of skin cancer remain a huge concern in many countries. According to the latest statistics on melanoma skin cancer, in the United States alone, 7,650 deaths are expected in 2022, which represents 800 and 470 more deaths than in 2020 and 2021, respectively. In 2022, melanoma ranks fifth among causes of new cancer cases, with a total of 99,780 people affected. This illness is mainly diagnosed by visual inspection of the skin; if doubts remain, a dermoscopic analysis is then performed. The development of effective non-invasive diagnostic tools for the early stages of the illness should increase quality of life and decrease the economic resources required. The early diagnosis of skin lesions remains a tough task even for expert dermatologists because of the complexity, variability, and ambiguity of the symptoms, and the similarities between different categories of skin lesions. Previous works have shown that early diagnosis from skin images can benefit greatly from computational methods.

    Several studies have applied handcrafted-feature methods to high-quality dermoscopic and histological images and, on top of those features, machine learning techniques such as k-nearest neighbors, support vector machines, and random forests. However, one must bear in mind that although the prior extraction of handcrafted features incorporates an important knowledge base into the analysis, the quality of the extracted descriptors relies heavily on the contribution of experts. Lesion segmentation is also performed manually. These procedures share a common issue: they are time-consuming manual processes prone to errors. Furthermore, an explicit definition of an intuitive and interpretable feature is hardly achievable, since such features depend on the pixel intensity space and are therefore not invariant to differences in the input images.
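    For contrast with the deep models discussed below, the classical handcrafted pipeline described above can be sketched as follows with scikit-learn. The color-statistics descriptor, the SVM settings, and the random stand-in data are all illustrative assumptions, not the pipeline of any specific study.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        def color_stats(image):
            """Toy handcrafted descriptor: per-channel mean and std of an RGB crop."""
            return np.concatenate([image.mean(axis=(0, 1)), image.std(axis=(0, 1))])

        # Stand-in data; in practice these would be manually segmented lesion
        # crops with expert labels (0 = benign, 1 = melanoma).
        rng = np.random.default_rng(0)
        images = [rng.random((64, 64, 3)) for _ in range(40)]
        labels = rng.integers(0, 2, size=40)

        X = np.stack([color_stats(im) for im in images])
        clf = SVC(kernel="rbf", C=1.0)
        print(cross_val_score(clf, X, labels, cv=5).mean())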
    On the other hand, the use of mobile devices has sharply increased, offering an almost unlimited source of data. In the past few years, more and more attention has been paid to designing deep learning models for diagnosing melanoma, more specifically Convolutional Neural Networks. This type of model is able to extract and learn high-level features from raw images and/or other data without the intervention of experts. Several studies have shown that deep learning models can outperform handcrafted-feature methods and even match the predictive performance of dermatologists. The International Skin Imaging Collaboration encourages the development of methods for digital skin imaging; every year from 2016 to 2019, a challenge and a conference were organized, in which more than 185 teams participated. However, convolutional models present several issues for skin diagnosis. These models can fit a wide diversity of non-linear data points, but they are prone to overfitting on datasets with small numbers of training examples per class, and therefore attain poor generalization capacity. They are also sensitive to certain characteristics of the data, such as large inter-class similarities and intra-class variances, variations in viewpoint, changes in lighting conditions, occlusions, and background clutter, which are mostly found in non-dermoscopic images. These issues represent challenges for the application of automatic diagnosis techniques in the early phases of the illness.

    As a consequence of the above, the aim of this Ph.D. thesis is to make significant contributions to the automatic diagnosis of melanoma. The proposals aim to avoid overfitting and improve the generalization capacity of deep models, as well as to achieve more stable learning and better convergence. Bear in mind that research into deep learning commonly requires overwhelming processing power to train complex architectures. For example, when developing the NASNet architecture, researchers used 500 NVIDIA P100s; each graphics unit cost from $5,899 to $7,374, which represents a total of $2,949,500 to $3,687,000. Unfortunately, the majority of research groups, including ours, do not have access to such resources.

    In this Ph.D. thesis, the use of several techniques has been explored. First, an extensive experimental study was carried out, which included state-of-the-art models and methods to further increase performance. Well-known techniques were applied, such as data augmentation and transfer learning. Data augmentation is performed in order to balance out the number of instances per category and to act as a regularizer that prevents overfitting in neural networks, while transfer learning uses the weights of a model pre-trained on another task as the initial condition for learning the target network. Results demonstrate that the automatic diagnosis of melanoma is a complex task, but different techniques are able to mitigate these issues to some degree. Finally, suggestions are given on how to train convolutional models for melanoma diagnosis, and interesting future research lines are presented.
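    A minimal sketch of the augmentation-plus-transfer-learning recipe just described, using PyTorch/torchvision; the ResNet-50 backbone, the particular augmentations, and the frozen-backbone warm-up are illustrative choices, not necessarily those used in the thesis.

        import torch.nn as nn
        from torchvision import models, transforms

        # Data augmentation: acts as a regularizer and helps balance classes.
        train_tf = transforms.Compose([
            transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),
            transforms.RandomHorizontalFlip(),
            transforms.ColorJitter(brightness=0.2, contrast=0.2),
            transforms.ToTensor(),
        ])

        # Transfer learning: ImageNet weights as the initial condition, with
        # the classifier head replaced for benign-vs-melanoma.
        model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
        model.fc = nn.Linear(model.fc.in_features, 2)

        # Optional warm-up: freeze the backbone and train only the new head.
        for p in model.parameters():
            p.requires_grad = False
        for p in model.fc.parameters():
            p.requires_grad = True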
    Next, the discovery of ensemble-based architectures is tackled using genetic algorithms. The proposal is able to stabilize the training process by finding sub-optimal combinations of abstract features from the ensemble, which are used to train a convolutional block. Several predictive blocks are then trained at the same time, and the final diagnosis is obtained by combining all individual predictions. We empirically investigate the benefits of the proposal, which shows better convergence, mitigates overfitting, and improves generalization performance. On top of that, the proposed model is available online and can be consulted by experts.

    The next proposal focuses on designing an advanced architecture capable of fusing classical convolutional blocks with a novel model known as Dynamic Routing Between Capsules. This approach addresses the limitations of convolutional blocks by using a set of neurons, instead of an individual neuron, to represent objects. Each capsule learns an implicit description of an object, such as its position, size, texture, deformation, and orientation. In addition, the main parameters are hyper-tuned to ensure effective learning under limited training data. An extensive experimental study was conducted in which the fusion of both methods outperformed six state-of-the-art models.

    Furthermore, a robust method for melanoma diagnosis, inspired by residual connections and Generative Adversarial Networks, is proposed. The architecture is able to produce plausible, photorealistic synthetic 512 x 512 skin images, even with small dermoscopic and non-dermoscopic skin image datasets as problem domains. In this manner, the lack of data, the imbalance problems, and the overfitting issues are tackled. Several convolutional models are then extensively trained and evaluated using the synthetic images, illustrating their effectiveness in the diagnosis of melanoma.

    In addition, a framework inspired by Active Learning is proposed. The batch-based query strategy introduced in this work enables a faster training process by learning about the complexity of the data. These complexity estimates allow the training process to be adjusted after each epoch, which leads the model to better performance in fewer iterations compared with random mini-batch sampling. The training method is assessed by analyzing both the informativeness value of each image and the predictive performance of the models. An extensive experimental study is conducted, in which models trained with the proposal attain significantly better results than the baseline models.

    The findings suggest that there is still room for improvement in the diagnosis of skin lesions. Structured laboratory data, unstructured narrative data, and, in some cases, audio or observational data are given by radiologists as key points during the interpretation of a prediction. This is particularly true in the diagnosis of melanoma, where substantial clinical context is often essential; for example, symptoms such as itching, or several shots of a skin lesion over a period of time showing that the lesion is growing, are very likely to suggest cancer. The use of different types of input data could therefore help to improve the performance of medical predictive models. In this regard, a first evolutionary algorithm aimed at exploring multimodal multiclass data has been proposed, which surpassed a single-input model. Furthermore, the predictive features extracted by primary capsules could be used to train other models, such as a Support Vector Machine.
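    As a rough illustration of the batch-based query strategy described above: score each training image's informativeness and bias the next epoch's mini-batches toward hard examples. Prediction entropy serves as the informativeness measure here purely as an assumption; the complexity measure used in the thesis may differ.

        import torch

        @torch.no_grad()
        def informativeness(model, loader, device="cpu"):
            """Entropy of the predictive distribution, one score per sample.
            Assumes the loader iterates in a fixed order (shuffle=False) so
            scores align with dataset indices."""
            model.eval()
            scores = []
            for x, _ in loader:
                probs = torch.softmax(model(x.to(device)), dim=1)
                entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
                scores.append(entropy.cpu())
            return torch.cat(scores)

        def hard_first_order(scores):
            """Indices sorted hardest-first; slice the top-k, or pass them to a
            SubsetRandomSampler, to build the next epoch's mini-batches."""
            return torch.argsort(scores, descending=True)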

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model deals with face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals, in addition to the spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model makes it possible to quantitatively evaluate the quality of the interaction using statistical tools that measure how effective the recognition phase is. In this paper we recast this theory for the case in which one of the interactants is a robot; here, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, where gaze is the social signal to be considered.
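    The quantitative side of the Brunswick (lens) model is essentially correlational: how faithfully a latent social state is externalized into observable cues, and how well the perceiver recovers that state from the cues. The toy sketch below follows that reading; the engagement example, the noise levels, and all variable names are assumptions rather than the paper's actual formulation.

        import numpy as np

        def pearson(a, b):
            """Pearson correlation between two 1-D arrays."""
            return np.corrcoef(np.asarray(a, float), np.asarray(b, float))[0, 1]

        # Toy data: a latent social state per interaction episode, the gaze cue
        # through which it is externalized, and the robot's recognized value.
        rng = np.random.default_rng(1)
        state = rng.normal(size=100)                        # e.g., engagement
        gaze_cue = state + 0.5 * rng.normal(size=100)       # noisy externalization
        recognized = gaze_cue + 0.5 * rng.normal(size=100)  # robot's recognition step

        externalization = pearson(state, gaze_cue)  # how well the cue encodes the state
        achievement = pearson(state, recognized)    # end-to-end interaction quality
        print(f"externalization={externalization:.2f}, achievement={achievement:.2f}")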