
    Pattern classification approaches for breast cancer identification via MRI: state‐of‐the‐art and vision for the future

    Mining algorithms for Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) of breast tissue are discussed. The algorithms are based on recent advances in multidimensional signal processing and aim to advance the current state of the art in computer-aided detection and analysis of breast tumours observed at various stages of development. The topics discussed include image feature extraction, information fusion using radiomics, multi-parametric computer-aided classification and diagnosis using information fusion of tensorial datasets, as well as Clifford algebra based classification approaches and convolutional neural network deep learning methodologies. The discussion also extends to semi-supervised and self-supervised deep learning strategies, as well as generative adversarial networks and related adversarial learning approaches. To address the problem of weakly labelled tumour images, generative adversarial deep learning strategies are considered for the classification of different tumour types. The proposed data fusion approaches provide a novel Artificial Intelligence (AI) based framework for more robust image registration that can potentially advance the early identification of heterogeneous tumour types, even when the associated imaged organs are registered as separate entities embedded in more complex geometric spaces. Finally, the general structure of a high-dimensional medical imaging analysis platform based on multi-task detection and learning is proposed as a way forward. The proposed algorithm makes use of novel loss functions that form the building blocks of an adversarial learning methodology applicable to tensorial DCE-MRI. Since some of the approaches discussed are also based on time-lapse imaging, conclusions can be drawn about the rate at which the disease proliferates. The proposed framework can potentially reduce the costs associated with the interpretation of medical images by providing automated, faster and more consistent diagnosis.
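As a rough illustration of the radiomics-based information fusion this abstract mentions (a sketch under stated assumptions, not the authors' method), the snippet below extracts simple first-order features from a tumour ROI at each DCE time point and concatenates them into one feature vector; the feature set and the `(T, Z, Y, X)` tensor layout are illustrative assumptions.

```python
# Hypothetical sketch: first-order radiomic features from a DCE-MRI ROI,
# fused across time points into a single feature vector (early fusion).
import numpy as np

def first_order_features(roi: np.ndarray) -> np.ndarray:
    """Basic first-order statistics over the voxels of one ROI volume."""
    v = roi[np.isfinite(roi)]
    return np.array([
        v.mean(), v.std(), np.median(v),
        v.min(), v.max(),
        np.percentile(v, 10), np.percentile(v, 90),
    ])

def fuse_timepoints(dce_roi: np.ndarray) -> np.ndarray:
    """dce_roi: (T, Z, Y, X) tensor of the same ROI over T acquisitions.
    Concatenating per-time-point features is one simple fusion scheme."""
    return np.concatenate([first_order_features(t) for t in dce_roi])

# Toy usage with synthetic data (real inputs would come from segmented ROIs).
rng = np.random.default_rng(0)
dce_roi = rng.normal(size=(5, 8, 16, 16))  # 5 time points
features = fuse_timepoints(dce_roi)
print(features.shape)  # (35,) -> input to any downstream classifier
```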

    A Pipelined Tracer-Aware Approach for Lesion Segmentation in Breast DCE-MRI

    The recent spread of Deep Learning (DL) in medical imaging is pushing researchers to explore its suitability for lesion segmentation in Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI), a complementary imaging procedure increasingly used in breast-cancer analysis. Despite some promising proposed solutions, we argue that a “naive” use of DL may have limited effectiveness, as the presence of a contrast agent results in the acquisition of multimodal 4D images that require thorough processing before a DL model can be trained. We thus propose a pipelined approach in which each stage deals with, or leverages, a peculiar characteristic of breast DCE-MRI data: a breast-masking pre-processing step to remove non-breast tissues; Three-Time-Points (3TP) slices to effectively highlight the contrast-agent time course; a motion-correction technique to deal with involuntary patient movements; a modified U-Net architecture tailored to the problem; and a new “Eras/Epochs” training strategy to handle the imbalanced dataset while performing strong data augmentation. We compared our pipelined solution against several works from the literature. The results show that our approach outperforms the competitors by a large margin (+9.13% over our previous solution) while also showing higher generalization ability.
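A minimal sketch of the 3TP idea described above, assuming the series is already motion-corrected and stored as a `(T, Z, Y, X)` NumPy array; the specific time-point indices are illustrative assumptions, since the real choices depend on the acquisition protocol.

```python
# Hypothetical sketch: build Three-Time-Points (3TP) input channels from a
# DCE-MRI series so the contrast-agent time course is encoded per slice.
import numpy as np

def make_3tp_slices(series: np.ndarray, t_pre: int = 0,
                    t_early: int = 2, t_late: int = 5) -> np.ndarray:
    """series: (T, Z, Y, X) motion-corrected DCE-MRI volume series.
    Returns (Z, 3, Y, X): per-slice 3-channel images stacking the
    pre-contrast, early post-contrast and late post-contrast volumes."""
    three = series[[t_pre, t_early, t_late]]   # (3, Z, Y, X)
    return np.transpose(three, (1, 0, 2, 3))   # (Z, 3, Y, X)

series = np.random.rand(6, 10, 64, 64)
x = make_3tp_slices(series)
print(x.shape)  # (10, 3, 64, 64) -> ready for a 2D segmentation network
```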

    Deep learning for fast and robust medical image reconstruction and analysis

    Medical imaging is an indispensable component of modern medical research as well as clinical practice. Nevertheless, imaging techniques such as magnetic resonance imaging (MRI) and computed tomography (CT) are costly and less accessible to the majority of the world's population. To make medical devices more accessible, affordable and efficient, it is crucial to re-calibrate our current imaging paradigm for smarter imaging. In particular, since medical imaging techniques acquire data in highly structured ways, they provide an opportunity to optimise imaging holistically by leveraging data. The central theme of this thesis is to explore different opportunities where we can exploit data and deep learning to improve the way we extract information for better, faster and smarter imaging. This thesis explores three distinct problems. The first is the time-consuming nature of dynamic MR data acquisition and reconstruction. We propose deep learning methods for accelerated dynamic MR image reconstruction, resulting in up to a 10-fold reduction in imaging time. The second is the redundancy in the current imaging pipeline. Traditionally, the imaging pipeline has treated acquisition, reconstruction and analysis as separate steps. However, we argue that one can approach them holistically and optimise the entire pipeline jointly for a specific target goal. To this end, we propose deep learning approaches for obtaining high-fidelity cardiac MR segmentation directly from significantly undersampled data, greatly exceeding the undersampling limit for image reconstruction. The final part of this thesis tackles the problem of the interpretability of deep learning algorithms. We propose attention models that can implicitly focus on salient regions in an image to improve accuracy for ultrasound scan-plane detection and CT segmentation. More crucially, these models provide explainability, which is a crucial stepping stone for the harmonisation of smart imaging and current clinical practice.
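One building block that accelerated MR reconstruction networks of this kind commonly interleave with learned denoising is a k-space data-consistency step; the sketch below is an assumption-laden illustration, not the thesis code, and simply re-imposes the measured k-space samples on the current image estimate.

```python
# Hypothetical sketch of a k-space data-consistency step: wherever k-space
# was actually sampled, trust the measurement; elsewhere, keep the network's
# current estimate.
import numpy as np

def data_consistency(img_est, kspace_meas, mask):
    """img_est: complex image estimate; kspace_meas: undersampled k-space;
    mask: boolean array, True where k-space was actually sampled."""
    k_est = np.fft.fft2(img_est)
    k_est[mask] = kspace_meas[mask]   # re-impose measured samples exactly
    return np.fft.ifft2(k_est)

# Toy usage: retrospectively undersample a synthetic image ~4x.
img = np.random.rand(64, 64).astype(complex)
mask = np.random.rand(64, 64) < 0.25
kspace = np.fft.fft2(img) * mask
recon = data_consistency(np.zeros_like(img), kspace, mask)
print(recon.shape)  # (64, 64) zero-filled reconstruction to refine further
```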

    Irish Machine Vision and Image Processing Conference Proceedings 2017


    Going Deep in Medical Image Analysis: Concepts, Methods, Challenges and Future Directions

    Medical Image Analysis is currently experiencing a paradigm shift due to Deep Learning. This technology has recently attracted so much interest from the Medical Imaging community that it led to a specialized conference, `Medical Imaging with Deep Learning', in 2018. This article surveys the recent developments in this direction and provides a critical review of the related major aspects. We organize the reviewed literature according to the underlying Pattern Recognition tasks and further sub-categorize it following a taxonomy based on human anatomy. This article does not assume prior knowledge of Deep Learning and makes a significant contribution by explaining the core Deep Learning concepts to non-experts in the Medical community. Unique to this study is the Computer Vision/Machine Learning perspective taken on the advances of Deep Learning in Medical Imaging. This enables us to single out the `lack of appropriately annotated large-scale datasets' as the core challenge (among others) in this research direction. We draw on insights from the sister research fields of Computer Vision, Pattern Recognition and Machine Learning, where techniques for dealing with such challenges have already matured, to provide promising directions for the Medical Imaging community to fully harness Deep Learning in the future.
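To make the annotation-scarcity point concrete, here is a hedged sketch of one technique that matured in Computer Vision and is routinely transferred to medical imaging: fine-tuning an ImageNet-pretrained backbone on a small labelled dataset. The two-class head, frozen backbone and hyperparameters are illustrative assumptions, not prescriptions from the survey.

```python
# Hypothetical sketch: transfer learning with a pretrained backbone to cope
# with a small annotated medical dataset.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")  # ImageNet-pretrained
for p in model.parameters():
    p.requires_grad = False                       # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 2)     # new 2-class task head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
x, y = torch.rand(4, 3, 224, 224), torch.randint(0, 2, (4,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```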

    Atlas Construction for Measuring the Variability of Complex Anatomical Structures

    Research on human anatomy, in particular on the heart and the brain, is of primary concern for society, since diseases of these organs are among the leading causes of death worldwide and carry rapidly growing costs. Fortunately, recent advances in medical imaging offer new possibilities for diagnosis and treatment. On the other hand, the growth in data produced by these relatively new technologies necessitates the development of efficient processing tools. The focus of this thesis is to provide a set of tools for normalizing measurements across individuals in order to study complex anatomical characteristics. The normalization of measurements consists of bringing a collection of images into a common reference, a process also known as atlas construction, in order to combine measurements made on different individuals. Constructing an atlas involves segmentation, which finds regions of interest in the data (e.g., an organ or a structure), and registration, which finds correspondences between regions of interest. Current frameworks may require tedious and hardly reproducible user interactions, and are additionally limited by their computational schemes, which rely on slow iterative deformations of images that are prone to local minima. Image registration is therefore not optimal under large deformations. These limitations indicate the need to research new approaches for atlas construction. The research questions consequently address the problems of automating current frameworks and capturing global and complex deformations between anatomical structures, in particular between human hearts and brains. More precisely, the methodology adopted in the thesis led to three specific research objectives. Briefly, the first aims at developing a new automated framework for atlas construction in order to build the first human atlas of the cardiac fiber architecture. The second explores a new approach based on spectral correspondence, named FOCUSR, in order to precisely capture large shape variability. The third leads, finally, to a fundamentally new approach for image registration with large deformations, named the Spectral Demons algorithm. The first objective aims more specifically at constructing a statistical atlas of the cardiac fiber architecture from a unique human dataset of 10 ex vivo hearts. The developed framework made two technical contributions and one medical contribution: improving the segmentation of cardiac structures, automating the shape-averaging process, and, most importantly, delivering the first human study on the variability of the cardiac fiber architecture.
To summarize the main findings, fiber orientations in the human heart vary by about ±12 degrees; the helix angle spans from −41 degrees (±26 degrees) on the epicardium to +66 degrees (±15 degrees) on the endocardium, while the transverse angle spans from +9 degrees (±12 degrees) to +34 degrees (±29 degrees) across the myocardial wall. These findings are significant in cardiology, since the fiber architecture plays a key role in cardiac mechanical function and in electrophysiology. The second objective intends to capture large shape variability between complex anatomical structures, in particular between cerebral cortices, chosen for their highly convoluted surfaces and their high anatomical and functional variability across individuals. The new method for surface correspondence, named FOCUSR, exploits spectral representations, since matching is easier in the spectral domain than in the conventional Euclidean space. In its simplest form, FOCUSR improves current spectral approaches by refining spectral representations with a nonrigid alignment; however, its full power is demonstrated when additional features are used during matching. For instance, the results showed that sulcal depth and cortical curvature significantly improve the accuracy of cortical surface matching. Finally, the third objective is to improve image registration for organs with high inter-subject variability or undergoing very large deformations, such as the heart. The new approach brought by the spectral matching technique improves on conventional image registration methods. Indeed, spectral representations, which capture global geometric similarities and large deformations between different shapes, can be used to overcome a major limitation of current registration methods, which are guided by local forces and restricted to small deformations. The new algorithm, named Spectral Demons, can capture very large and complex deformations between images, and can additionally be adapted to other settings, such as a groupwise configuration. This results in a complete framework for atlas construction, named Groupwise Spectral Demons, in which the average shape is computed during the registration process rather than in sequential steps. Achieving these three specific objectives advanced the state of the art in spectral matching methods and atlas construction, enabling the registration of organs with significant shape variability. Overall, the investigation of these different strategies provides new contributions on how to find and exploit global descriptions of images and surfaces. From a global perspective, these objectives establish a link between: a) the first set of tools, which highlights the challenges in registering images with very large deformations; b) the second set of tools, which captures very large deformations between surfaces but is not applicable to images; and c) the third set of tools, which returns to image processing and allows the natural construction of atlases from images with very large deformations. Several general limitations remain: for instance, partial data (truncated or occluded) is currently not supported by the new tools, and the strategy for computing and using spectral representations still leaves room for improvement.
This thesis offers new perspectives in cardiac imaging and neuroimaging, yet the new tools remain general enough for virtually any application that uses surface or image registration. It is recommended to research additional links with graph-based segmentation methods, which may lead to a complete framework for atlas construction in which segmentation, registration and shape averaging are all interlinked. It is also recommended to pursue research on building better cardiac electromechanical models from the findings of this thesis. In any case, the new tools provide new grounds for research on, and application of, shape normalization, which may potentially impact diagnosis as well as the planning and performance of medical interventions.
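To give a concrete feel for the spectral representations underlying FOCUSR and Spectral Demons, here is a minimal sketch with assumed details, not the thesis implementation: a mesh or image graph is embedded via the low-frequency eigenvectors of its graph Laplacian, so geometrically similar shapes receive similar spectral coordinates and correspondence reduces to matching points in that low-dimensional space.

```python
# Hypothetical sketch: spectral embedding of a mesh/graph via the
# low-frequency eigenvectors of its graph Laplacian.
import numpy as np

def spectral_embedding(adjacency: np.ndarray, k: int = 3) -> np.ndarray:
    """adjacency: (N, N) symmetric mesh/graph adjacency matrix.
    Returns (N, k) spectral coordinates (first k non-trivial modes)."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    _, eigvecs = np.linalg.eigh(laplacian)   # ascending eigenvalues
    return eigvecs[:, 1:k + 1]               # skip the constant eigenvector

# Toy usage on a ring graph of 6 vertices.
A = np.zeros((6, 6))
for i in range(6):
    A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1
coords = spectral_embedding(A, k=2)
print(coords.shape)  # (6, 2) coordinates to match across shapes
```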

    Real-time GPU-accelerated Out-of-Core Rendering and Light-field Display Visualization for Improved Massive Volume Understanding

    Huge digital models are becoming increasingly available for applications ranging from CAD and industrial design to medicine and the natural sciences. In the field of medicine in particular, data acquisition devices such as MRI or CT scanners routinely produce huge volumetric datasets. These datasets can easily reach 1024^3 voxels, and larger datasets are not uncommon. This thesis focuses on efficient methods for the interactive exploration of such large volumes using direct volume visualization techniques on commodity platforms. To reach this goal, specialized multi-resolution structures and algorithms that can directly render volumes of potentially unlimited size are introduced. The developed techniques are output-sensitive: their rendering costs depend only on the complexity of the generated images, not on that of the input datasets. The advanced characteristics of modern GPGPU architectures are exploited and combined with an out-of-core framework in order to provide a more flexible, scalable and efficient implementation of these algorithms and data structures on single GPUs and GPU clusters. To improve visual perception and understanding, the use of novel 3D display technology based on a light-field approach is introduced. This kind of device allows multiple naked-eye users to perceive virtual objects floating inside the display workspace, exploiting stereo and horizontal parallax. A set of specialized interactive illustrative techniques, capable of providing different contextual information in different areas of the display, is reported, together with an out-of-core CUDA-based ray-casting engine that improves on current GPU volume ray-casters in several respects. The possibilities of the system are demonstrated by the multi-user interactive exploration of 64-GVoxel datasets on a 35-MPixel light-field display driven by a cluster of PCs.
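The output-sensitive claim can be made concrete with the usual level-of-detail rule such out-of-core renderers apply (a sketch under assumed conventions, not the thesis code): pick the coarsest octree level whose voxels, projected to the screen, still stay within about one pixel, so the work done tracks the image resolution rather than the dataset size.

```python
# Hypothetical sketch: output-sensitive octree level selection for
# out-of-core volume rendering. Finest level = 0; each coarser level
# doubles the voxel size.
import math

def choose_octree_level(distance: float, fov_y: float, screen_h: int,
                        voxel_size: float, max_level: int) -> int:
    """Return the coarsest level whose projected voxel stays under a pixel."""
    pixel_angle = fov_y / screen_h
    pixel_world = 2.0 * distance * math.tan(pixel_angle / 2.0)
    level = 0
    while level < max_level and voxel_size * 2 ** (level + 1) <= pixel_world:
        level += 1
    return level

print(choose_octree_level(distance=2.0, fov_y=math.radians(45),
                          screen_h=1080, voxel_size=0.0005, max_level=10))
```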

    Video anomaly detection using deep generative models

    Video anomaly detection faces three challenges: a) no explicit definition of abnormality; b) scarce labelled data; and c) dependence on hand-crafted features. This thesis introduces novel detection systems based on unsupervised generative models, which address the first two challenges; by working directly on raw pixels, they also bypass the third.
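A minimal sketch of the core idea behind such unsupervised generative detectors (an illustrative assumption, not the thesis architecture): train an autoencoder on normal frames only, then flag frames whose reconstruction error is high.

```python
# Hypothetical sketch: reconstruction-error anomaly scoring with a small
# convolutional autoencoder over raw pixels.
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

def anomaly_score(model, frames):
    """frames: (B, 1, H, W) raw-pixel batch; higher score = more anomalous."""
    with torch.no_grad():
        recon = model(frames)
    return ((frames - recon) ** 2).mean(dim=(1, 2, 3))

model = FrameAutoencoder()  # would be trained on normal frames only
scores = anomaly_score(model, torch.rand(4, 1, 64, 64))
print(scores.shape)  # torch.Size([4])
```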