Skeletonization methods for image and volume inpainting
Image and shape restoration techniques are increasingly important in computer graphics. Many restoration techniques have been proposed for 2D image processing but, to our knowledge, only one for volumetric data. Well-known examples of such techniques include digital inpainting, denoising, and morphological gap filling. However efficient and effective, such methods have several limitations with respect to the shape, size, distribution, and nature of the defects they can find and eliminate. We start by studying the use of 2D skeletons for the restoration of two-dimensional images, and show that skeletons are useful and efficient tools for this task; we then hypothesize that the same holds for volumetric data reconstruction. To explore our hypothesis in the 3D case, we first survey the existing state of the art in 3D skeletonization methods, and conclude that no such method provides the features required for efficient and effective practical usage. We next propose a novel method for 3D skeletonization, and show that it complies with our desired quality requirements, thereby making it suitable for the volumetric data reconstruction context. The joint results of our study show that skeletons are indeed effective tools for designing a variety of shape restoration methods. Separately, our results show that suitable algorithms and implementations can be conceived to yield high end-to-end performance and quality for skeleton-based restoration methods. Finally, our practical applications generate competitive results in application areas such as digital hair removal and wire artifact removal.
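To illustrate the kind of object this abstract is concerned with, the classical Lantuéjoul morphological skeleton of a binary image can be computed in a few lines of NumPy/SciPy. This is a generic textbook construction, not the 2D or 3D skeletonization method proposed in the work above:

```python
import numpy as np
from scipy import ndimage

def morphological_skeleton(img):
    """Lantuejoul morphological skeleton of a binary image:
    the union over n of erode^n(A) minus its morphological opening.
    A simple stand-in for the more advanced skeletonization
    methods discussed in the abstract."""
    skel = np.zeros_like(img, dtype=bool)
    eroded = img.astype(bool)
    while eroded.any():
        opened = ndimage.binary_opening(eroded)
        skel |= eroded & ~opened          # points lost by the opening
        eroded = ndimage.binary_erosion(eroded)
    return skel

# A filled rectangle: its skeleton is a thin medial subset of the shape.
shape = np.zeros((21, 21), dtype=bool)
shape[5:16, 3:18] = True
skel = morphological_skeleton(shape)
```

The skeleton is always a subset of the input shape and far sparser, which is what makes it a compact descriptor for restoration tasks.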
Stadiums x COVID-19: a new way to twist
Amid the global pandemic, some football stadiums have been recruited to fight COVID-19 while others have been used behind closed doors. This research therefore aims to examine how football stadiums have been utilized in the face of COVID-19. The results indicate their use as field hospitals, shelters, testing sites, storage for donations and materials, and isolation centers, in a single cheer for life.
Information Fusion of Magnetic Resonance Images and Mammographic Scans for Improved Diagnostic Management of Breast Cancer
Medical imaging is critical to non-invasive diagnosis and treatment of a wide spectrum
of medical conditions. However, different modalities of medical imaging employ
different contrast mechanisms and, consequently, provide different depictions of bodily
anatomy. As a result, there is a frequent problem where the same pathology can be
detected by one type of medical imaging while being missed by others. This problem brings
forward the importance of the development of image processing tools for integrating the
information provided by different imaging modalities via the process of information fusion.
One particularly important example of clinical application of such tools is in the diagnostic
management of breast cancer, which is a prevailing cause of cancer-related mortality in
women. Currently, the diagnosis of breast cancer relies mainly on X-ray mammography and
Magnetic Resonance Imaging (MRI), which are both important throughout different stages
of detection, localization, and treatment of the disease. The sensitivity of mammography,
however, is known to be limited in the case of relatively dense breasts, while contrast-enhanced
MRI tends to yield frequent 'false alarms' due to its high sensitivity. Given this
situation, it is critical to find reliable ways of fusing the mammography and MRI scans in
order to improve the sensitivity of the former while boosting the specificity of the latter.
Unfortunately, fusing the above types of medical images is known to be a difficult computational
problem. Indeed, while MRI scans are usually volumetric (i.e., 3-D), digital
mammograms are always planar (2-D). Moreover, mammograms are invariably acquired
under the force of compression paddles, thus making the breast anatomy undergo sizeable
deformations. In the case of MRI, on the other hand, the breast is rarely constrained and is
imaged in a pendulous state. Finally, X-ray mammography and MRI exploit two completely
different physical mechanisms, which produce distinct diagnostic contrasts that
are related in a non-trivial way. Under such conditions, the success of information fusion
depends on one's ability to establish spatial correspondences between mammograms
and their related MRI volumes in a cross-modal cross-dimensional (CMCD) setting in the
presence of spatial deformations (+SD). Solving the problem of information fusion in the
CMCD+SD setting is a very challenging analytical/computational problem, still in need
of efficient solutions.
In the literature, there is a lack of a generic and consistent solution to the problem of
fusing mammograms and breast MRIs and using their complementary information. Most
of the existing MRI to mammogram registration techniques are based on a biomechanical
approach which builds a specific model for each patient to simulate the effect of mammographic
compression. The biomechanical model is not optimal as it ignores the common
characteristics of breast deformation across different cases. Breast deformation is essentially the planarization of a 3-D volume between two paddles, which is common in all
patients. Regardless of the size, shape, or internal configuration of the breast tissue, one
can predict the major part of the deformation by considering only the geometry of the
breast tissue. In contrast with complex standard methods relying on patient-specific biomechanical
modeling, we developed a new and relatively simple approach to estimate the
deformation and find the correspondences. We consider the total deformation to consist of
two components: a large-magnitude global deformation due to mammographic compression
and a residual deformation of relatively smaller amplitude. We propose a much simpler
way of predicting the global deformation, which compares favorably to finite element modeling (FEM) in terms of
its accuracy. The residual deformation, on the other hand, is recovered in a variational
framework using an elastic transformation model.
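The global component described above is essentially a geometric flattening between two paddles. The sketch below models it as a volume-preserving anisotropic scaling of the MRI coordinates; the uniform-scaling rule, the thickness parameters, and the function name are illustrative assumptions, not the authors' actual predictor, and the variational residual recovery is not reproduced here:

```python
import numpy as np

def global_compression(points, thickness, target_thickness):
    """Hypothetical global deformation: volume-preserving flattening
    of breast coordinates between two compression paddles.
    `points` is an (N, 3) array with z along the compression axis."""
    s = target_thickness / thickness      # compression factor along z
    lateral = 1.0 / np.sqrt(s)            # expand x and y equally so that
                                          # lateral**2 * s == 1 (volume kept)
    scale = np.array([lateral, lateral, s])
    return points * scale

pts = np.array([[10.0, 20.0, 30.0],
                [12.0, 18.0, 60.0]])
# Compress a 60 mm thick breast to 45 mm between the paddles.
compressed = global_compression(pts, thickness=60.0, target_thickness=45.0)
```

A smaller residual, elastic deformation would then be estimated on top of this global alignment, mirroring the two-component decomposition in the abstract.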
The proposed algorithm provides us with a computational pipeline that takes breast
MRIs and mammograms as inputs and returns the spatial transformation which establishes
the correspondences between them. This spatial transformation can be applied in different
applications, e.g., producing 'MRI-enhanced' mammograms (which can improve
the quality of surgical care) and correlating different types of mammograms.
We investigate the performance of our proposed pipeline on the application of enhancing
mammograms by means of MRIs and show improvements over the state of the
art.
Automatic registration of 3D models to laparoscopic video images for guidance during liver surgery
Laparoscopic liver interventions offer significant advantages over open surgery, such as less pain and trauma and a shorter recovery time for the patient. However, they also bring challenges for surgeons, such as the lack of tactile feedback, a limited field of view, and occluded anatomy. Augmented reality (AR) can potentially help during laparoscopic liver interventions by displaying sub-surface structures (such as tumours or vasculature). The initial registration between the 3D model extracted from the CT scan and the laparoscopic video feed is essential for an AR system, which should be efficient, robust, intuitive to use, and minimally disruptive to the surgical procedure. Challenges for registration methods in laparoscopic interventions include the deformation of the liver due to gas insufflation in the abdomen, partial visibility of the organ, and the lack of prominent geometrical or texture-wise landmarks. These challenges are discussed in detail and an overview of the state of the art is provided. This research project aims to provide the tools to move towards a completely automatic registration. Firstly, the importance of pre-operative planning is discussed along with the characteristics of the liver that can be used to constrain a registration method. Secondly, to maximise the amount of information obtained before the surgery, a semi-automatic surface-based method is proposed to recover the initial rigid registration irrespective of the position of the shapes. Finally, a fully automatic 3D-2D rigid global registration is proposed which estimates a global alignment of the pre-operative 3D model using a single intra-operative image. Incorporating the different liver contours can help constrain the registration further, especially for partial surfaces. Having a robust, efficient AR system which requires no manual interaction from the surgeon will aid the translation of such approaches to the clinic.
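The initial rigid registration step can be illustrated with the classical Kabsch least-squares alignment of two corresponded 3D point sets. This is a generic algorithm, not the semi-automatic surface-based or automatic 3D-2D methods proposed in the work above, and the synthetic points stand in for liver surface samples:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid alignment (Kabsch algorithm) of two
    corresponded 3-D point sets: returns R, t with dst ~ src @ R.T + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic test: rotate and translate a random surface sample.
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_align(src, dst)
```

In practice correspondences between the pre-operative model and intra-operative data are unknown, which is precisely what makes the registration problem in the abstract hard; Kabsch only solves the corresponded case.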
Deep learning applications in the prostate cancer diagnostic pathway
Prostate cancer (PCa) is the second most frequently diagnosed cancer in men worldwide and the fifth leading cause of cancer death in men, with an estimated 1.4 million new cases in 2020 and 375,000 deaths. The risk factors most strongly associated with PCa are advancing age, family history, race, and mutations of the BRCA genes. Since the aforementioned risk factors are not preventable, early and accurate diagnosis is a key objective of the PCa diagnostic pathway.
In the UK, clinical guidelines recommend multiparametric magnetic resonance imaging (mpMRI) of the prostate for use by radiologists to detect, score, and stage lesions that may correspond to clinically significant PCa (CSPCa), prior to confirmatory biopsy and histopathological grading. Computer-aided diagnosis (CAD) of PCa using artificial intelligence algorithms holds a currently unrealized potential to improve upon the diagnostic accuracy achievable by radiologist assessment of mpMRI, improve the reporting consistency between radiologists, and reduce reporting time.
In this thesis, we build and evaluate deep learning-based CAD systems for the PCa diagnostic pathway, which address gaps identified in the literature. First, we introduce a novel patient-level classification framework, PCF, which uses a stacked ensemble of convolutional neural networks (CNNs) and support vector machines (SVMs) to assign a probability of having CSPCa to patients, using mpMRI and clinical features. Second, we introduce AutoProstate, a deep learning-powered framework for automated PCa assessment and reporting; AutoProstate utilizes biparametric MRI and clinical data to populate an automatic diagnostic report containing segmentations of the whole prostate, prostatic zones, and candidate CSPCa lesions, as well as several derived characteristics that are clinically valuable. Finally, as automatic segmentation algorithms have not yet reached the desired robustness for clinical use, we introduce interactive click-based segmentation applications for the whole prostate and prostatic lesions, with potential uses in diagnosis, active surveillance progression monitoring, and treatment planning.
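The click-based interaction idea can be illustrated with a toy flood-fill segmentation seeded at a clicked pixel: keep the connected region of intensities close to the clicked value. This is a minimal sketch of click-driven segmentation in general, not the deep learning applications described in the thesis:

```python
import numpy as np
from scipy import ndimage

def click_segment(image, click, tol):
    """Toy click-based segmentation: select the connected component of
    pixels whose intensity lies within `tol` of the clicked pixel's value.
    Function name and the thresholding rule are illustrative assumptions."""
    seed_val = image[click]
    mask = np.abs(image - seed_val) <= tol       # intensity similarity
    labels, _ = ndimage.label(mask)              # connected components
    return labels == labels[click]               # keep the clicked one

img = np.zeros((10, 10))
img[2:6, 2:6] = 1.0      # a bright "lesion"
img[7:9, 7:9] = 1.0      # a second, disconnected bright region
seg = click_segment(img, (3, 3), tol=0.1)
```

Only the clicked component survives; the disconnected bright region is excluded, which is the behaviour that makes click seeding useful for disambiguating nearby structures.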
Left Ventricular Viability Maps : Fusion of Multimodal Images of Coronary Morphology and Functional Information
RÉSUMÉ
Coronary artery disease remains the leading cause of death in the United States, with a mortality rate recorded in 2005 of one person in five. Stenoses (obstructions of the coronary arteries) manifest as a narrowing of the coronary diameter, producing ischemia, i.e., a reduction of blood flow to the myocardium (the heart muscle). In the most severe cases, the cells composing the myocardium die permanently and lose their contractile function. In the presence of this disease, clinicians turn to medical imaging to study the state of the myocardium, both to determine whether its cells are dead and to diagnose stenoses in the coronaries. Currently, the clinician uses nuclear imaging to study myocardial perfusion in order to assess its state. Projecting this information onto a segmented model of the myocardium, the 17-segment model, establishes the link between the affected zones and the coronaries chiefly responsible for irrigating them. Only afterwards, during an angiography, can the clinician identify the stenoses and possibly intervene by revascularization. Another method for visualizing the coronary structure and the presence of stenoses is the Green Lane method, in which the clinician reproduces the coronary structure on a circular map based on the angiography. The objective of our research project is to create a patient-specific model on which the coronary territories can be seen on the myocardial surface, fused with myocardial viability. This model would adapt to the patient and would allow the study of other groups of coronaries, which is not possible with the 17-segment model, which is fixed and presents only the three main coronary groups (right coronary, left coronary, and circumflex). 
Moreover, this model divides the epicardial surface into segments based on statistical data that are limited by the nature and representativeness of the population sample considered, and it does not allow visualization of the distribution of viability loss over the epicardial surface.---------- ABSTRACT
Coronary heart disease (CHD) can be attributed to the build-up of plaque in the coronary arteries (atherosclerosis), which leads to ischemia, an insufficient supply of blood to the heart wall, resulting in myocardial dysfunction. When ischemia remains untreated, an infarction may appear (areas of necrosis in cardiac tissue) and consequently the heart's contractility is affected, which may lead to death. This disease accounted for one of every five deaths in the United States in 2005, making it the largest cause of death in the country. In standard clinical practice, perfusion and viability studies allow clinicians to examine the extent and severity of CHD over the myocardium. Then, by consulting a population-based coronary territory model, such as the 17-segment model, the clinician mentally links affected areas of myocardium, found in nuclear or magnetic resonance imaging, to the coronaries that typically irrigate those regions with blood. However, population-based models do not fit every patient: there are individuals whose coronary tree structure deviates from that of the majority of the population. In addition, the 17-segment model limits the number of coronary groups to three: the left anterior descending artery (LAD), the right coronary artery (RCA), and the left circumflex artery (LCX). Moreover, this map is not continuous; it divides the myocardial surface into segments. Our objective is therefore to create a patient-specific map explicitly combining coronary territories and myocardial viability. This continuous model would adapt to the patient and allow the study of coronary groups unavailable with standard models. After having identified loss of viability, the clinician would use this model to infer the most likely obstructed coronary artery responsible for the myocardial damage. Visualization of the loss of viability along with the coronary structure would replace the physician's task of mentally integrating information from various sources.
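The patient-specific territory idea can be sketched as a Voronoi-style partition: each epicardial surface point is assigned to the coronary group of its nearest coronary centerline point, after which viability values can be overlaid. The nearest-point rule, the 2-D coordinates, and the function name are illustrative assumptions, not the method developed in the work above:

```python
import numpy as np

def coronary_territories(surface_pts, coronary_pts, coronary_group):
    """Toy patient-specific territory map: label each surface point with
    the group of its nearest coronary centerline point (Voronoi partition).
    surface_pts: (N, D), coronary_pts: (M, D), coronary_group: (M,) ints."""
    # Pairwise distances between surface points and centerline points.
    d = np.linalg.norm(surface_pts[:, None, :] - coronary_pts[None, :, :],
                       axis=2)
    return coronary_group[np.argmin(d, axis=1)]

surface = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]])
centerlines = np.array([[0.0, 1.0], [10.0, 1.0]])
groups = np.array([0, 1])   # e.g., 0 = LAD territory, 1 = RCA territory
terr = coronary_territories(surface, centerlines, groups)
```

Unlike the fixed 17-segment model, such a partition follows the patient's own coronary geometry, which is the core motivation stated in the abstract.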