
    Discriminative Representations for Heterogeneous Images and Multimodal Data

    Histology images of tumor tissue are an important diagnostic and prognostic tool for pathologists. Recently developed molecular methods group tumors into subtypes to further guide treatment decisions, but they are not routinely performed on all patients. A lower-cost, repeatable method to predict tumor subtypes from histology could bring benefits to more cancer patients. Further, combining imaging and genomic data types provides a more complete view of the tumor and may improve prognostication and treatment decisions. While molecular and genomic methods capture the state of only a small sample of the tumor, histological image analysis provides a spatial view and can identify multiple subtypes in a single tumor. This intra-tumor heterogeneity has yet to be fully understood, and its quantification may lead to future insights into tumor progression. In this work, I develop methods to learn appropriate features directly from images using dictionary learning or deep learning. I use multiple instance learning to account for intra-tumor variations in subtype during training, improving subtype predictions and providing insights into tumor heterogeneity. I also integrate image and genomic features to learn a projection to a shared space that is also discriminative. This method can be used for cross-modal classification or to improve predictions from images by also learning from genomic data during training, even if only image data is available at test time.
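    A minimal sketch of the cross-modal idea described above, with plain CCA standing in for the thesis's discriminative shared-space projection; the feature dimensions, data, and classifier choice are hypothetical. The genomic modality shapes the projection during training, while prediction at test time needs image features alone.

```python
# Illustrative only: plain CCA stands in for the discriminative projection;
# features, labels, and dimensions are synthetic.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_img = rng.normal(size=(200, 64))    # image features (e.g., dictionary/deep features)
X_gen = rng.normal(size=(200, 128))   # genomic features for the same tumors
y = rng.integers(0, 2, size=200)      # tumor subtype labels

# Learn a projection of both modalities into a shared low-dimensional space.
cca = CCA(n_components=10)
Z_img, Z_gen = cca.fit_transform(X_img, X_gen)

# Train a subtype classifier in the shared space. Genomic data influenced the
# projection during training, but prediction needs only image features.
clf = LogisticRegression(max_iter=1000).fit(Z_img, y)
Z_test = cca.transform(rng.normal(size=(20, 64)))
print(clf.predict(Z_test))
```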

    Magnetic resonance fingerprinting review part 2: Technique and directions

    Peer reviewed. Full text:
    https://deepblue.lib.umich.edu/bitstream/2027.42/154317/1/jmri26877.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/154317/2/jmri26877_am.pd

    Solving inverse problems for medical applications

    It is essential to have an accurate feedback system to improve the navigation of surgical tools. This thesis investigates how to solve inverse problems using the example of two medical prototypes. The first aims to detect the Sentinel Lymph Node (SLN) during biopsy, allowing the surgeon to remove the SLN through a small incision and reducing trauma to the patient. The second investigates how to extract depth and tissue-characteristic information during bone ablation from the emitted acoustic wave. We solved inverse problems to find the desired solution, investigating three approaches. In Chapter 3, we had a good simulation of the forward problem and used a fingerprinting algorithm: we compared the measurement with simulations of the forward problem, and the simulation most similar to the measurement was taken as a good approximation. To do so, we used a dictionary of solutions, which makes the lookup computationally fast. However, depending on how fine the grid is, it takes a long time to simulate all solutions of the forward problem, and a lot of memory is needed to store the dictionary. In Chapter 4, we examined the Adaptive Eigenspace method for solving the Helmholtz equation (the Fourier-transformed wave equation), using a Conjugate quasi-Newton (CqN) algorithm. We solved the Helmholtz equation and reconstructed the source shape and the medium velocity from the acoustic wave at the boundary of the area of interest. We accomplished this in a 2D model; the computation for the 3D model was very long and expensive. In addition, we simplified some conditions and could not confirm the results of our simulations in an ex-vivo experiment. In Chapter 5, we developed a different approach: we conducted multiple experiments, acquired many acoustic measurements during the ablation process, and trained a Neural Network (NN) to predict the ablation depth in an end-to-end model. The computational cost of predicting the depth is relatively low once training is complete, and an end-to-end network requires almost no pre-processing. However, there were drawbacks, e.g., obtaining the ground truth is cumbersome. This thesis has investigated several approaches to solving inverse problems in medical applications. From Chapter 3 we conclude that if the forward problem is well known, the fingerprinting algorithm drastically improves the speed of the reconstruction; this is ideal for reconstructing a position or for providing a first guess for more complex reconstructions. The conclusion of Chapter 4 is that the Adaptive Eigenspace method drastically reduces the number of unknown parameters, and we were able to reconstruct the medium velocity and the acoustic wave generator; however, the model is expensive for 3D simulations, and the number of transducers required was not applicable to our intended setup. In Chapter 5 we found a correlation between the depth of the laser cut and the acoustic wave using only a single air-coupled transducer, which encourages further investigation into characterizing the tissue during the ablation process.
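    The fingerprinting idea of Chapter 3 can be sketched schematically: precompute a dictionary of forward-problem simulations on a parameter grid, then return the grid point whose signal best matches the measurement. The forward model below is a made-up placeholder, not the prototype's physics; it only illustrates the lookup and its speed/memory trade-off.

```python
# Illustrative dictionary matching: the forward model and parameter grid
# are hypothetical placeholders.
import numpy as np

def simulate(param, t):
    # Placeholder forward model: a damped oscillation whose decay rate
    # is the parameter to recover.
    return np.exp(-param * t) * np.cos(5.0 * t)

t = np.linspace(0.0, 1.0, 256)
grid = np.linspace(0.1, 3.0, 500)                # parameter grid (finer = more memory)
D = np.stack([simulate(p, t) for p in grid])     # dictionary: one simulated signal per row
D /= np.linalg.norm(D, axis=1, keepdims=True)    # normalize for correlation matching

# A noisy "measurement" with unknown parameter 1.37.
measurement = simulate(1.37, t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
measurement /= np.linalg.norm(measurement)

best = np.argmax(D @ measurement)                # most similar dictionary entry
print(f"estimated parameter: {grid[best]:.3f}")  # close to 1.37
```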

    Real-time Ultrasound Signals Processing: Denoising and Super-resolution

    Ultrasound (US) acquisition is widespread in the biomedical field, due to its low cost, portability, and non-invasiveness for the patient. The processing and analysis of US signals, such as images, 2D videos, and volumetric images, allows the physician to monitor the evolution of the patient's disease and supports diagnosis and treatment (e.g., surgery). US images are affected by speckle noise, generated by the overlap of US waves. Furthermore, low-resolution images are acquired when a high acquisition frequency is applied to accurately characterise the behaviour of anatomical features that change quickly over time. Denoising and super-resolution of US signals are relevant to improve the visual evaluation of the physician and the performance and accuracy of processing methods, such as segmentation and classification. The main requirements for the processing and analysis of US signals are real-time execution, preservation of anatomical features, and reduction of artefacts. In this context, we present a novel framework for the real-time denoising of 2D US images based on deep learning and high-performance computing, which reduces noise while preserving anatomical features in real-time execution. We extend our framework to the denoising of arbitrary US signals, such as 2D videos and 3D images, and we incorporate denoising algorithms that account for spatio-temporal signal properties into an image-to-image deep learning model. As a building block of this framework, we propose a novel denoising method belonging to the class of low-rank approximations, which learns and predicts the optimal thresholds of the Singular Value Decomposition. While previous denoising work trades off computational cost against effectiveness, the proposed framework matches the best denoising algorithms in terms of noise removal, anatomical feature preservation, and conservation of geometric and texture properties, in a real-time execution that respects industrial constraints. The framework reduces artefacts (e.g., blurring) and preserves the spatio-temporal consistency among frames/slices; it is also general with respect to the denoising algorithm, anatomical district, and noise intensity. Then, we introduce a novel framework for the real-time reconstruction of non-acquired scan lines through an interpolating method, where a deep learning model improves the results of the interpolation to match the target (i.e., high-resolution) image; the design of the network architecture and the loss function improves the accuracy of the predicted reconstructed lines. In the context of signal approximation, we introduce our kernel-based sampling method for the reconstruction of 2D and 3D signals defined on regular and irregular grids, with an application to 2D and 3D US images. Our method improves on previous work in terms of sampling quality, approximation accuracy, and geometry reconstruction, at a slightly higher computational cost. For both denoising and super-resolution, we evaluate compliance with the real-time requirement of US applications in the medical domain and provide a quantitative evaluation of denoising and super-resolution methods on US and synthetic images. Finally, we discuss the role of denoising and super-resolution as pre-processing steps for segmentation and predictive analysis of breast pathologies.
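    A minimal sketch of denoising by singular-value thresholding, the class of low-rank approximation this framework builds on; here the threshold is a fixed hypothetical cut-off, whereas the proposed method learns and predicts it, and the synthetic additive noise merely stands in for speckle.

```python
# Illustrative singular-value thresholding; the threshold is a fixed
# hypothetical value rather than a learned one.
import numpy as np

def svd_denoise(image, threshold):
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    s[s < threshold] = 0.0          # discard small singular values (mostly noise)
    return (U * s) @ Vt             # low-rank reconstruction

rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0, 3, 128)), np.cos(np.linspace(0, 3, 128)))
noisy = clean + 0.1 * rng.normal(size=clean.shape)   # additive noise as a speckle stand-in
denoised = svd_denoise(noisy, threshold=2.5)
# The denoised image is closer to the clean one than the noisy input.
print(np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))  # True
```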

    High Dimensional Data Set Analysis Using a Large-Scale Manifold Learning Approach

    Because of technological advances, data sets are increasing in both size and dimensionality. Processing these large-scale data sets is challenging for conventional computers due to computational limitations. A framework for nonlinear dimensionality reduction on large databases is presented that alleviates the issue of large data sets through sampling, graph construction, manifold learning, and embedding. Neighborhood selection is a key step in this framework and a potential area of improvement. The standard approach to neighborhood selection is setting a fixed neighborhood: either a fixed number of neighbors or a fixed neighborhood size. Each has its limitations due to variations in data density. A novel adaptive neighbor-selection algorithm is presented to enhance performance by incorporating sparse ℓ1-norm-based optimization. These enhancements are applied to the graph construction and embedding modules of the original framework. As validation of the proposed ℓ1-based enhancement, experiments are conducted on these modules using publicly available benchmark data sets. The two approaches are then applied to a large-scale magnetic resonance imaging (MRI) data set for brain tumor progression prediction. Results showed that the proposed approach outperformed linear methods and other traditional manifold learning algorithms.
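    A minimal sketch of ℓ1-based adaptive neighbor selection in the spirit of the proposed enhancement: each point is coded as a sparse combination of the remaining points, and the nonzero coefficients define its neighbors, so the neighborhood size adapts to local density rather than being fixed. The data and the regularization weight are hypothetical.

```python
# Illustrative l1-based neighbor selection; data and alpha are hypothetical.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))            # data set, one sample per row

def sparse_neighbors(X, i, alpha=0.05):
    others = np.delete(X, i, axis=0)
    # Solve min_w ||x_i - sum_j w_j x_j||^2 + alpha * ||w||_1 over the other points.
    model = Lasso(alpha=alpha, fit_intercept=False).fit(others.T, X[i])
    idx = np.flatnonzero(model.coef_)   # points with nonzero coding weight
    return idx + (idx >= i)             # map back to original indices

print(sparse_neighbors(X, 10))          # neighborhood size adapts to the data
```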

    Structured Sparse Methods for Imaging Genetics

    Imaging genetics is an emerging and promising technique that investigates how genetic variations affect brain development, structure, and function. By exploiting disorder-related neuroimaging phenotypes, this class of studies provides a novel direction to reveal and understand complex genetic mechanisms. Imaging genetics studies are often challenging due to the relatively small number of subjects but extremely high dimensionality of both the imaging and the genomic data. In this dissertation, I carry out my research on imaging genetics with a particular focus on two tasks: building predictive models between neuroimaging data and genomic data, and identifying disorder-related genetic risk factors through image-based biomarkers. To this end, I consider a suite of structured sparse methods, which produce interpretable models and are robust to overfitting, for imaging genetics. With carefully designed sparsity-inducing regularizers, different biological priors are incorporated into the learning models. More specifically, in the Allen brain image--gene expression study, I adopt an advanced sparse coding approach for image feature extraction and employ a multi-task learning approach for multi-class annotation. Moreover, I propose a label-structure-based two-stage learning framework, which utilizes the hierarchical structure among labels, for multi-label annotation. In the Alzheimer's Disease Neuroimaging Initiative (ADNI) imaging genetics study, I employ Lasso together with EDPP (enhanced dual polytope projections) screening rules to quickly identify Alzheimer's disease risk SNPs. I also adopt the tree-structured group Lasso with MLFre (multi-layer feature reduction) screening rules to incorporate linkage disequilibrium information into the modeling. Moreover, I propose a novel absolute fused Lasso model for ADNI imaging genetics; this method utilizes SNP spatial structure and is robust to the choice of reference alleles in genotype coding. In addition, I propose a two-level structured sparse model that incorporates gene-level networks through a graph penalty into SNP-level model construction. Lastly, I explore a convolutional neural network approach for accurately predicting Alzheimer's disease-related imaging phenotypes. Experimental results on real-world imaging genetics applications demonstrate the efficiency and effectiveness of the proposed structured sparse methods.
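    A minimal sketch of Lasso-based SNP selection for a simulated imaging phenotype; the EDPP and MLFre screening rules are omitted, with scikit-learn's solver standing in, and all dimensions, data, and the regularization weight are synthetic assumptions.

```python
# Illustrative Lasso SNP selection on synthetic data; screening rules omitted.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_subjects, n_snps = 300, 5000
G = rng.integers(0, 3, size=(n_subjects, n_snps)).astype(float)  # genotypes coded 0/1/2

# Simulate an imaging phenotype driven by a few causal SNPs plus noise.
causal = rng.choice(n_snps, size=10, replace=False)
y = G[:, causal] @ rng.normal(size=10) + rng.normal(size=n_subjects)

model = Lasso(alpha=0.1).fit(G, y)
selected = np.flatnonzero(model.coef_)   # sparse set of candidate risk SNPs
print(f"{selected.size} SNPs selected, "
      f"{len(set(selected) & set(causal))} of 10 causal SNPs recovered")
```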

    Algorithms for enhanced artifact reduction and material recognition in computed tomography

    Computed tomography (CT) imaging provides a non-destructive means to examine the interior of an object, a valuable tool in medical and security applications. The variety of materials seen in security applications is higher than in medical applications. Factors such as clutter, the presence of dense objects, and closely placed items in a bag or parcel add to the difficulty of material recognition in security applications. Metal and dense objects create image artifacts that degrade image quality and deteriorate recognition accuracy. Conventional CT machines scan the object using single-source or dual-source spectra and reconstruct the effective linear attenuation coefficient of each voxel in the image, which may not provide sufficient information to identify the occupying materials. In this dissertation, we provide algorithmic solutions to enhance CT material recognition, with a set of algorithms accommodating different classes of CT machines. First, we provide a metal artifact reduction algorithm for conventional CT machines that perform measurements using a single X-ray source spectrum. Compared to previous methods, our algorithm is robust to severe metal artifacts and accurately reconstructs the regions in proximity to metal. Second, we propose a novel joint segmentation and classification algorithm for dual-energy CT machines that extends prior work to capture spatial correlation in material X-ray attenuation properties. We show that the classification performance of our method surpasses that of prior work. Third, we propose a new framework for reconstruction and classification using a recently developed class of CT machines known as spectral CT, which uses multiple energy windows to scan the object and thus captures data across more energy dimensions per detector. Our reconstruction algorithm extracts essential features from the measured data by spectral decomposition. We explore the effect of using different transforms in performing the measurement decomposition, and we develop a new basis transform that encapsulates the sufficient information in the data and provides high classification accuracy. Furthermore, we extend our framework to the task of explosive detection, showing that it achieves high detection accuracy and is robust to noise and variations. Lastly, we propose a combined algorithm for spectral CT that jointly reconstructs images and labels each region in the image, and we offer a tractable optimization method to solve the proposed discrete tomography problem. We show that our method outperforms prior work in terms of both reconstruction quality and classification accuracy.
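    A minimal sketch of the per-voxel material decomposition underlying dual-energy material recognition: attenuation measured in two energy bins is expressed in a two-material basis, and the coefficients are recovered by solving a small linear system. The attenuation values below are made-up placeholders, not tabulated physics data, and this is not the dissertation's joint segmentation-classification algorithm.

```python
# Illustrative two-material decomposition; attenuation values are made up.
import numpy as np

# Basis attenuation matrix (rows: energy bin, columns: material).
A = np.array([[0.30, 0.80],    # low-energy attenuation of (water-like, bone-like)
              [0.20, 0.40]])   # high-energy attenuation of (water-like, bone-like)

# A voxel that is 70% water-like and 30% bone-like, with measurement noise.
true_coeffs = np.array([0.7, 0.3])
measured = A @ true_coeffs + 0.001 * np.random.default_rng(0).normal(size=2)

coeffs = np.linalg.solve(A, measured)   # per-voxel material coefficients
print(coeffs)                           # approximately [0.7, 0.3]
```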

    Mathematics and Algorithms in Tomography

    This is the eighth Oberwolfach conference on the mathematics of tomography. Modalities represented at the workshop included X-ray tomography, sonar, radar, seismic imaging, ultrasound, electron microscopy, impedance imaging, photoacoustic tomography, elastography, vector tomography, and texture analysis.