244 research outputs found
Multimodal wavelet embedding representation for data combination (MaWERiC): integrating magnetic resonance imaging and spectroscopy for prostate cancer detection
Recently, both Magnetic Resonance (MR) Imaging (MRI) and Spectroscopy (MRS) have emerged as promising tools for detection of prostate cancer (CaP). However, due to the inherent dimensionality differences between MR imaging and spectral information, quantitative integration of T2-weighted MRI (T2w MRI) and MRS for improved CaP detection has been a major challenge. In this paper, we present a novel computerized decision support system called multimodal wavelet embedding representation for data combination (MaWERiC) that employs (i) wavelet theory to extract 171 Haar wavelet features from MRS and 54 Gabor features from T2w MRI, (ii) dimensionality reduction to individually project the wavelet features from MRS and T2w MRI into a common reduced eigenvector space, and (iii) a random forest classifier for automated prostate cancer detection on a per-voxel basis from combined 1.5 T in vivo MRI and MRS. A total of 36 1.5 T endorectal in vivo T2w MRI and MRS patient studies were evaluated per voxel by MaWERiC using a three-fold cross-validation approach over 25 iterations. Ground truth for evaluation was obtained from expert radiologist annotations of prostate cancer on a per-voxel basis; the radiologist compared each MRI section with the corresponding ex vivo whole-mount histology section, on which the disease extent had been mapped out. Results suggest that the MaWERiC-based MRS-T2w meta-classifier (mean AUC, μ = 0.89 ± 0.02) significantly outperformed (i) a T2w MRI classifier using wavelet texture features (μ = 0.55 ± 0.02), (ii) an MRS classifier using metabolite ratios (μ = 0.77 ± 0.03), (iii) a decision-fusion classifier obtained by combining individual T2w MRI and MRS classifier outputs (μ = 0.85 ± 0.03), and (iv) a data-combination method involving metabolic MRS and MR signal intensity features (μ = 0.66 ± 0.02).
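The combination scheme described in the abstract can be sketched as follows. This is a minimal, hedged illustration only: feature values are synthetic, the embedding dimensionality is assumed, and PCA stands in for whatever dimensionality-reduction method the authors used; only the overall shape of the pipeline (per-modality features → common reduced space → random forest) follows the abstract.

```python
# Sketch of a MaWERiC-style pipeline on synthetic per-voxel features.
# The 171/54 feature counts follow the abstract; everything else is illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_voxels = 200
mrs_feats = rng.normal(size=(n_voxels, 171))   # stand-in for Haar wavelet features from MRS
t2w_feats = rng.normal(size=(n_voxels, 54))    # stand-in for Gabor features from T2w MRI
labels = rng.integers(0, 2, size=n_voxels)     # per-voxel cancer / non-cancer labels

k = 10  # shared embedding dimensionality (assumed, not from the paper)
mrs_emb = PCA(n_components=k).fit_transform(mrs_feats)   # project each modality
t2w_emb = PCA(n_components=k).fit_transform(t2w_feats)   # into a common reduced space

combined = np.hstack([mrs_emb, t2w_emb])       # joint reduced-space representation
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(combined, labels)
print(combined.shape)  # (200, 20)
```

The key design point the abstract emphasizes is that reduction happens per modality *before* concatenation, so the 171-dimensional MRS features cannot numerically swamp the 54-dimensional MRI features.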
Deep learning for an improved diagnostic pathway of prostate cancer in a small multi-parametric magnetic resonance data regime
Prostate Cancer (PCa) is the second most commonly diagnosed cancer among men, with an estimated incidence of 1.3 million new cases worldwide in 2018. The current diagnostic pathway of PCa relies on prostate-specific antigen (PSA) levels in serum. Nevertheless, PSA testing comes at the cost of under-detection of malignant lesions and a substantial over-diagnosis of indolent ones, leading to unnecessary invasive testing, such as biopsies, and to treatment of indolent PCa lesions.
Magnetic Resonance Imaging (MRI) is a non-invasive technique that has emerged as a valuable tool for PCa detection, staging, early screening, treatment planning and intervention. However, analysis of MRI relies on expertise, can be time-consuming, requires specialized training, and in its absence suffers from inter- and intra-reader variability and sub-optimal interpretations.
Deep Learning (DL) techniques can recognize complex patterns in imaging data and automatize certain assessments or tasks with a lesser degree of subjectiveness, providing a tool that can help clinicians in their daily tasks. Despite this, DL success has traditionally relied on the availability of large amounts of labelled data, which are rarely available in the medical field and are costly and hard to obtain due to patient privacy regulations and the specialized training required for annotation, among other factors.
This work investigates DL algorithms specially tailored to work in a limited data regime, with the final objective of improving the current prostate cancer diagnostic pathway by improving the performance of DL algorithms for PCa MRI applications under such data constraints.
In particular, this thesis starts by exploring Generative Adversarial Networks (GANs) to generate synthetic samples and assessing their effect on tasks such as prostate capsule segmentation and PCa lesion significance classification (triage). Next, we explore the use of Auto-encoders (AEs) to exploit the data imbalance that is usually present in medical imaging datasets. Specifically, we propose an AE-based framework to detect the presence of prostate lesions (tumours) by learning solely from control (healthy) data, in an outlier-detection fashion. This thesis also explores more recent DL paradigms that have shown promising results on natural images: generative and contrastive self-supervised learning (SSL). In both cases, we propose specific prostate MRI image manipulations for a PCa lesion classification downstream task and show the improvements offered by these techniques when compared with other initialization methods such as ImageNet pre-training. Finally, we explore data fusion techniques to leverage different data sources in the form of MRI sequences (orthogonal views) that are acquired by default during patient examinations yet commonly ignored in DL systems. We show improvements in PCa lesion significance classification compared to a single-input system (axial view).
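The outlier-detection idea in the abstract — train a reconstruction model on healthy data only, then flag samples that reconstruct poorly — can be illustrated with a small sketch. Note the hedge: the thesis uses autoencoders, whereas this toy uses PCA as a *linear* stand-in for an AE, and the "healthy"/"lesion" data are synthetic distributions, not MRI.

```python
# Reconstruction-error outlier detection in the spirit of the AE framework above.
# PCA serves as a linear stand-in for an autoencoder; data are synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
healthy = rng.normal(0, 1, size=(500, 64))   # "control" feature vectors (illustrative)
lesion = rng.normal(3, 1, size=(20, 64))     # shifted distribution plays the anomaly

model = PCA(n_components=8).fit(healthy)     # trained ONLY on healthy samples

def recon_error(x):
    # Error between each input and its reconstruction from the learned subspace.
    return np.linalg.norm(x - model.inverse_transform(model.transform(x)), axis=1)

# Threshold set from the healthy distribution; unseen lesions should exceed it.
threshold = np.percentile(recon_error(healthy), 95)
flagged = recon_error(lesion) > threshold
print(flagged.mean())  # fraction of lesion samples flagged as outliers
```

The design point mirrors the thesis framing: no lesion labels are needed at training time, which is exactly what makes the approach attractive for the imbalanced datasets typical of medical imaging.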
Visual analytics in histopathology diagnostics: a protocol-based approach
Computer-Aided Diagnosis (CAD) systems supporting the diagnostic process are widespread in radiology. Digital pathology is still behind in the introduction of such solutions. Several studies have investigated pathologists' behavior, but only a few have aimed to improve the diagnostic and reporting process with novel applications. In this work we designed and implemented a first protocol-based CAD viewer supported by visual analytics. The system targets the optimization of the diagnostic workflow in breast cancer diagnosis by means of three image analysis features that belong to the standard grading system (Nottingham Histologic Grade). A pathologist's routine was tracked during the examination of breast cancer tissue slides and the diagnostic traces were analyzed from a qualitative perspective. Accordingly, a set of generic requirements was elicited to guide the design and implementation of the CAD viewer. A first qualitative evaluation conducted with five pathologists shows that the interface supports the diagnostic workflow and diminishes manual effort. We present promising evidence of the usefulness of our CAD viewer and opportunities for its extension and integration in clinical practice. In conclusion, the findings demonstrate that it is feasible to optimize the Nottingham grading workflow and, more generally, histological diagnosis by integrating computational pathology data with visual analytics techniques.
The Human Connectome Project's neuroimaging approach
Noninvasive human neuroimaging has yielded many discoveries about the brain. Numerous methodological advances have also occurred, though inertia has slowed their adoption. This paper presents an integrated approach to data acquisition, analysis and sharing that builds upon recent advances, particularly from the Human Connectome Project (HCP). The 'HCP-style' paradigm has seven core tenets: (i) collect multimodal imaging data from many subjects; (ii) acquire data at high spatial and temporal resolution; (iii) preprocess data to minimize distortions, blurring and temporal artifacts; (iv) represent data using the natural geometry of cortical and subcortical structures; (v) accurately align corresponding brain areas across subjects and studies; (vi) analyze data using neurobiologically accurate brain parcellations; and (vii) share published data via user-friendly databases. We illustrate the HCP-style paradigm using existing HCP data sets and provide guidance for future research. Widespread adoption of this paradigm should accelerate progress in understanding the brain in health and disease
Information Fusion of Magnetic Resonance Images and Mammographic Scans for Improved Diagnostic Management of Breast Cancer
Medical imaging is critical to non-invasive diagnosis and treatment of a wide spectrum
of medical conditions. However, different modalities of medical imaging employ
different contrast mechanisms and, consequently, provide different depictions of bodily
anatomy. As a result, there is a frequent problem where the same pathology can be
detected by one type of medical imaging while being missed by others. This problem brings
forward the importance of the development of image processing tools for integrating the
information provided by different imaging modalities via the process of information fusion.
One particularly important example of clinical application of such tools is in the diagnostic
management of breast cancer, which is a prevailing cause of cancer-related mortality in
women. Currently, the diagnosis of breast cancer relies mainly on X-ray mammography and
Magnetic Resonance Imaging (MRI), which are both important throughout different stages
of detection, localization, and treatment of the disease. The sensitivity of mammography,
however, is known to be limited in the case of relatively dense breasts, while contrast-enhanced
MRI tends to yield frequent 'false alarms' due to its high sensitivity. Given this
situation, it is critical to find reliable ways of fusing the mammography and MRI scans in
order to improve the sensitivity of the former while boosting the specificity of the latter.
Unfortunately, fusing the above types of medical images is known to be a difficult computational
problem. Indeed, while MRI scans are usually volumetric (i.e., 3-D), digital
mammograms are always planar (2-D). Moreover, mammograms are invariably acquired
under the force of compression paddles, thus making the breast anatomy undergo sizeable
deformations. In the case of MRI, on the other hand, the breast is unconstrained and
imaged in a pendulous state. Finally, X-ray mammography and MRI exploit two completely
different physical mechanisms, producing distinct diagnostic contrasts that
are related in a non-trivial way. Under such conditions, the success of information fusion
depends on one's ability to establish spatial correspondences between mammograms
and their related MRI volumes in a cross-modal cross-dimensional (CMCD) setting in the
presence of spatial deformations (+SD). Solving the problem of information fusion in the
CMCD+SD setting is a very challenging analytical/computational problem, still in need
of efficient solutions.
In the literature, there is a lack of a generic and consistent solution to the problem of
fusing mammograms and breast MRIs and using their complementary information. Most
of the existing MRI to mammogram registration techniques are based on a biomechanical
approach which builds a specific model for each patient to simulate the effect of mammographic
compression. The biomechanical model is not optimal as it ignores the common
characteristics of breast deformation across different cases. Breast deformation is essentially the planarization of a 3-D volume between two paddles, which is common in all
patients. Regardless of the size, shape, or internal configuration of the breast tissue, one
can predict the major part of the deformation only by considering the geometry of the
breast tissue. In contrast with complex standard methods relying on patient-specific biomechanical
modeling, we developed a new and relatively simple approach to estimate the
deformation and find the correspondences. We consider the total deformation to consist of
two components: a large-magnitude global deformation due to mammographic compression
and a residual deformation of relatively smaller amplitude. We propose a much simpler
way of predicting the global deformation which compares favorably to FEM in terms of
its accuracy. The residual deformation, on the other hand, is recovered in a variational
framework using an elastic transformation model.
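The two-component decomposition described above can be sketched numerically. This is a toy illustration, not the thesis's method: the global planarization is modeled as a volume-preserving anisotropic scaling, and the residual as a small smooth displacement standing in for the variational/elastic refinement; all magnitudes are made up.

```python
# Toy sketch of total deformation = global compression ∘ small residual.
import numpy as np

rng = np.random.default_rng(2)
points = rng.uniform(-1, 1, size=(100, 3))  # points in the uncompressed (MRI) breast

def global_compression(p, squeeze=0.5):
    # Volume-preserving planarization between paddles: compress along z,
    # spread in the x-y plane (scale factors multiply to 1).
    s = np.array([1 / np.sqrt(squeeze), 1 / np.sqrt(squeeze), squeeze])
    return p * s

def residual_deformation(p, amplitude=0.05):
    # Small-amplitude smooth displacement; a placeholder for the elastic
    # refinement recovered in the variational framework.
    return p + amplitude * np.sin(np.pi * p)

compressed = residual_deformation(global_compression(points))

# The geometry-driven global term dominates; the residual is much smaller.
global_mag = np.abs(global_compression(points) - points).mean()
resid_mag = np.abs(compressed - global_compression(points)).mean()
print(global_mag > resid_mag)  # True
```

This mirrors the argument in the text: because the large-magnitude component is essentially geometric (planarization of a volume between two paddles), it can be predicted without patient-specific biomechanical simulation, leaving only a small residual to be estimated per case.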
The proposed algorithm provides us with a computational pipeline that takes breast
MRIs and mammograms as inputs and returns the spatial transformation which establishes
the correspondences between them. This spatial transformation can be applied in different
applications, e.g., producing 'MRI-enhanced' mammograms (which can improve
the quality of surgical care) and correlating different types of mammograms.
We investigate the performance of our proposed pipeline on the application of enhancing
mammograms by means of MRIs and show improvements over the state of the
art.
Image-based registration methods for quantification and compensation of prostate motion during trans-rectal ultrasound (TRUS)-guided biopsy
Prostate biopsy is the clinical standard for cancer diagnosis and is typically performed under two-dimensional (2D) transrectal ultrasound (TRUS) for needle guidance. Unfortunately, most early stage prostate cancers are not visible on ultrasound and the procedure suffers from high false negative rates due to the lack of visible targets. Fusion of pre-biopsy MRI to 3D TRUS for targeted biopsy could improve cancer detection rates and volume of tumor sampled. In MRI-TRUS fusion biopsy systems, patient or prostate motion during the procedure causes misalignments in the MR targets mapped to the live 2D TRUS images, limiting the targeting accuracy of the biopsy system.
In order to sample the smallest clinically significant tumours of 0.5 cm³ with 95% confidence, the root mean square (RMS) error of the biopsy system needs to be
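As a rough geometric aside (the specific RMS requirement is truncated in the text above and is not reconstructed here), the quoted tumour volume corresponds to a sphere of radius just under 5 mm:

```python
# Radius of a 0.5 cm^3 (= 500 mm^3) sphere, from V = (4/3)*pi*r^3.
import math

volume_mm3 = 500.0
radius_mm = (3 * volume_mm3 / (4 * math.pi)) ** (1 / 3)
print(round(radius_mm, 1))  # 4.9
```

This gives a sense of the length scale against which the system's targeting error must be small.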
The target misalignments due to intermittent prostate motion during the procedure can be compensated by registering the live 2D TRUS images acquired during the biopsy procedure to the pre-acquired baseline 3D TRUS image. The registration must be performed both accurately and quickly in order to be useful during the clinical procedure. We developed an intensity-based 2D-3D rigid registration algorithm and validated it by calculating the target registration error (TRE) using manually identified fiducials within the prostate. We discuss two different approaches that can be used to improve the robustness of this registration to meet the clinical requirements. Firstly, we evaluated the impact of intra-procedural 3D TRUS imaging on motion compensation accuracy, since the limited anatomical context available in live 2D TRUS images could limit the robustness of the 2D-3D registration. The results indicated that TRE improved when intra-procedural 3D TRUS images were used in registration, with larger improvements in the base and apex regions as compared with the mid-gland region. Secondly, we developed and evaluated a registration algorithm whose optimization is based on learned prostate motion characteristics. Compared to our initial approach, the updated optimization improved the robustness of 2D-3D registration by reducing the number of registrations with a TRE > 5 mm from 9.2% to 1.2%, with an overall RMS TRE of 2.3 mm.
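The TRE metric used throughout the paragraph above is the RMS distance between corresponding fiducials after registration. A minimal sketch of that computation (the fiducial coordinates here are invented for illustration):

```python
# RMS target registration error (TRE) from paired fiducial locations.
import numpy as np

def rms_tre(fixed_fiducials, mapped_fiducials):
    # Per-fiducial Euclidean error, then root mean square across fiducials.
    d = np.linalg.norm(fixed_fiducials - mapped_fiducials, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# Three hypothetical intra-prostatic fiducials (mm) and their positions after
# mapping through a recovered rigid transform.
fixed = np.array([[10.0, 20.0, 5.0], [12.0, 18.0, 7.0], [15.0, 22.0, 6.0]])
mapped = fixed + np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 2.0]])
print(round(rms_tre(fixed, mapped), 2))  # 1.73
```

Because the RMS aggregates squared errors, a few badly misregistered cases dominate the statistic — which is why the paper reports the fraction of registrations with TRE > 5 mm separately from the overall RMS.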
The methods developed in this work were intended to improve the needle targeting accuracy of 3D TRUS-guided biopsy systems. The successful integration of the techniques into current 3D TRUS-guided systems could improve the overall cancer detection rate during the biopsy and help to achieve earlier diagnosis and fewer repeat biopsy procedures in prostate cancer diagnosis.