
    Generative Models for Preprocessing of Hospital Brain Scans

    In this thesis, I present novel computational methods for processing routine clinical brain scans. Such scans were originally acquired for qualitative assessment by trained radiologists, and they pose a number of difficulties for computational models, such as those within common neuroimaging analysis software. The overarching objective of this work is to enable efficient and fully automated analysis of large neuroimaging datasets, of the type currently present in many hospitals worldwide. The methods presented are based on probabilistic, generative models of the observed imaging data, and therefore rely on informative priors and realistic forward models. The first part of the thesis presents a model for image quality improvement, whose key component is a novel prior for multimodal datasets. I demonstrate its effectiveness for super-resolving thick-sliced clinical MR scans and for denoising CT images and MR-based, multi-parametric mapping acquisitions. I then show how the same prior can be used for within-subject, intermodal image registration, for more robustly registering large numbers of clinical scans. The second part of the thesis focusses on improved, automatic segmentation and spatial normalisation of routine clinical brain scans. I propose two extensions to a widely used segmentation technique. First, a method for this model to handle missing data, which allows me to predict entirely missing modalities from one, or a few, MR contrasts. Second, a principled way of combining the strengths of probabilistic, generative models with the unprecedented discriminative capability of deep learning. By introducing a convolutional neural network as a Markov random field prior, I can model nonlinear class interactions and learn these using backpropagation. I show that this model is robust to sequence and scanner variability.
Finally, I show examples of fitting a population-level generative model to various neuroimaging data, which can model, e.g., CT scans with haemorrhagic lesions.
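The widely used segmentation technique extended in the second part is, at its core, a probabilistic generative model of voxel intensities. As a rough illustration of that class of model (not the thesis's actual implementation, which adds informative priors, missing-data handling, and a CNN-based MRF), here is a minimal one-dimensional Gaussian-mixture segmentation fitted by expectation-maximisation:

```python
import numpy as np

def gmm_segment(intensities, n_classes=3, n_iter=50):
    """Fit a 1D Gaussian mixture to voxel intensities via EM and
    return per-voxel class posteriors (responsibilities)."""
    x = np.asarray(intensities, dtype=float)
    # Initialise means from quantiles, shared variance, uniform weights
    mu = np.quantile(x, np.linspace(0.1, 0.9, n_classes))
    var = np.full(n_classes, x.var())
    pi = np.full(n_classes, 1.0 / n_classes)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] proportional to pi_k * N(x_i | mu_k, var_k)
        d = x[:, None] - mu[None, :]
        log_p = -0.5 * (d**2 / var + np.log(2 * np.pi * var)) + np.log(pi)
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, means, and variances
        nk = r.sum(axis=0)
        pi = nk / nk.sum()
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return r, mu

# A hard segmentation is the argmax over the posteriors r
```

A real implementation operates on multimodal images and regularises the posteriors with spatial priors; this sketch only shows the intensity-model backbone.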

    NON-RIGID BODY MECHANICAL PROPERTY RECOVERY FROM IMAGES AND VIDEOS

    Material properties are of great importance in surgical simulation and virtual reality. The mechanical properties of human soft tissue are critical for characterizing the tissue deformation of each patient. Studies have shown that the tissue stiffness described by these properties may indicate an abnormal pathological process. The (recovered) elasticity parameters can assist surgeons in better pre-operative surgical planning and enable medical robots to carry out personalized surgical procedures. Traditional elasticity parameter estimation methods rely largely on known external forces measured by special devices and on strain fields estimated from landmarks on the deformable bodies, or they are limited to mechanical property estimation for quasi-static deformation. For virtual reality applications such as virtual try-on, garment material capture is of equal significance to geometry reconstruction. In this thesis, I present novel approaches for automatically estimating the material properties of soft bodies from images or from a video capturing the motion of the deformable body. I use a coupled simulation-optimization-identification framework to deform one soft body from its original, non-deformed state to match the deformed geometry of the same object in its deformed state. The optimal set of material parameters is thereby determined by minimizing an error metric function. This method can simultaneously recover the elasticity parameters of multiple regions of soft bodies using Finite Element Method-based simulation (of either linear or nonlinear materials undergoing large deformation) and particle-swarm optimization methods. I demonstrate the effectiveness of this approach on real-time interaction with virtual organs in patient-specific surgical simulation, using parameters acquired from low-resolution medical images. With the recovered elasticity parameters and the age of the prostate cancer patients as features, I build a cancer grading and staging classifier.
The classifier achieves up to 91% accuracy in predicting cancer T-stage and 88% in predicting Gleason score. To recover the mechanical properties of soft bodies from a video, I propose a method that couples a statistical graphical model with FEM simulation. Using this method, I can recover the material properties of a soft ball from a high-speed camera video that captures the motion of the ball. Furthermore, I extend the material recovery framework to fabric material identification. I propose a novel method for garment material extraction from a single-view image and a learning-based cloth material recovery method from a video recording the motion of the cloth. Most recent garment capturing techniques rely on acquiring multiple views of clothing, which may not always be readily available, especially in the case of pre-existing photographs from the web. As an alternative, I propose a method that can compute a 3D model of a human body and its outfit from a single photograph with little human interaction. My proposed learning-based cloth material type recovery method exploits a simulated dataset and a deep neural network. I demonstrate the effectiveness of my algorithms by re-purposing the reconstructed garments for virtual try-on, garment transfer, and cloth animation on digital characters. With the recovered mechanical properties, one can construct a virtual world with soft objects exhibiting real-world behaviors.
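The simulation-optimization-identification loop above can be sketched with a generic particle-swarm optimiser. In this sketch, the FEM simulation and geometric error metric are replaced by a toy quadratic mismatch around hypothetical "true" parameters (a Young's modulus of 5.0 and a Poisson ratio of 0.45 are placeholder values, not figures from the thesis):

```python
import numpy as np

def pso_minimize(error_fn, bounds, n_particles=30, n_iter=100, seed=0):
    """Minimal particle-swarm optimiser: each particle is a candidate
    material-parameter vector; error_fn scores the mismatch between
    simulated and observed deformed geometry."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()
    pbest_err = np.array([error_fn(p) for p in x])
    gbest = pbest[pbest_err.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia / cognitive / social weights
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        err = np.array([error_fn(p) for p in x])
        improved = err < pbest_err
        pbest[improved], pbest_err[improved] = x[improved], err[improved]
        gbest = pbest[pbest_err.argmin()].copy()
    return gbest, pbest_err.min()

# Toy stand-in for the FEM-vs-observation error metric
target = np.array([5.0, 0.45])  # hypothetical Young's modulus, Poisson ratio
err = lambda p: float(((p - target) ** 2).sum())
params, best = pso_minimize(err, bounds=[(0.1, 10.0), (0.0, 0.5)])
```

In the actual framework, evaluating `error_fn` means running a full FEM forward simulation per particle, which is why the optimisation is the dominant cost.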

    A Sliced Inverse Regression (SIR) Decoding the Forelimb Movement from Neuronal Spikes in the Rat Motor Cortex

    Several neural decoding algorithms have successfully converted brain signals into commands to control a computer cursor and prosthetic devices. A majority of decoding methods, such as population vector algorithms (PVA), optimal linear estimators (OLE), and neural networks (NN), are effective in predicting movement kinematics, including movement direction, speed, and trajectory, but usually require a large number of neurons to achieve desirable performance. This study proposes a novel decoding algorithm that remains effective even with signals obtained from a smaller number of neurons. We adopted sliced inverse regression (SIR) to predict forelimb movement from single-unit activities recorded in the rat primary motor (M1) cortex in a water-reward lever-pressing task. SIR performed weighted principal component analysis (PCA) to achieve effective dimension reduction for nonlinear regression. To demonstrate the decoding performance, SIR was compared to PVA, OLE, and NN. Furthermore, PCA and sequential feature selection (SFS), which are popular feature selection techniques, were implemented for comparison of feature selection effectiveness. Among the SIR, PVA, OLE, PCA, SFS, and NN decoding methods, the trajectories predicted by SIR (with a root mean square error, RMSE, of 8.47 ± 1.32 mm) were closer to the actual trajectories than those predicted by PVA (30.41 ± 11.73 mm), OLE (20.17 ± 6.43 mm), PCA (19.13 ± 0.75 mm), SFS (22.75 ± 2.01 mm), and NN (16.75 ± 2.02 mm). The superiority of SIR was most obvious when the sample size of neurons was small. We conclude that SIR sorted the input data to obtain effective transform matrices for movement prediction, making it a robust decoding method for conditions with sparse neuronal information.
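A minimal sketch of the SIR step described above, under the standard formulation (slice the observations by the response, average the whitened predictors within slices, and eigen-decompose the covariance of the slice means); the data shapes and slice count are illustrative, not taken from the study:

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=2):
    """Sliced inverse regression: estimate the effective dimension-
    reduction directions linking features X to a response y."""
    n, p = X.shape
    # Whiten X by the inverse square root of its sample covariance
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / n
    w_vals, w_vecs = np.linalg.eigh(cov)
    inv_sqrt = w_vecs @ np.diag(w_vals**-0.5) @ w_vecs.T
    Z = Xc @ inv_sqrt
    # Slice observations by the order of y; average Z within each slice
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original scale
    vals, vecs = np.linalg.eigh(M)
    top = vecs[:, ::-1][:, :n_dirs]
    return inv_sqrt @ top  # columns span the e.d.r. subspace
```

The study pairs this dimension reduction with a regression stage to predict the lever trajectory; this sketch covers only the direction-estimation step.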

    Advanced Computational Methods for Oncological Image Analysis

    Cancer is the second most common cause of death worldwide and encompasses highly variable clinical and biological scenarios. Some of the current clinical challenges are (i) early diagnosis of the disease and (ii) precision medicine, which allows for treatments targeted to specific clinical cases. The ultimate goal is to optimize the clinical workflow by combining accurate diagnosis with the most suitable therapies. Toward this, large-scale machine learning research can define associations among clinical, imaging, and multi-omics studies, making it possible to provide reliable diagnostic and prognostic biomarkers for precision oncology. Such reliable computer-assisted methods (i.e., artificial intelligence) together with clinicians’ unique knowledge can be used to properly handle typical issues in evaluation/quantification procedures (i.e., operator dependence and time-consuming tasks). These technical advances can significantly improve result repeatability in disease diagnosis and guide toward appropriate cancer care. Indeed, the need to apply machine learning and computational intelligence techniques has steadily increased to effectively perform image processing operations—such as segmentation, co-registration, classification, and dimensionality reduction—and multi-omics data integration.

    Real-time Ultrasound Signals Processing: Denoising and Super-resolution

    Ultrasound acquisition is widespread in the biomedical field, due to its properties of low cost, portability, and non-invasiveness for the patient. The processing and analysis of US signals, such as images, 2D videos, and volumetric images, allows the physician to monitor the evolution of the patient's disease and supports diagnosis and treatment (e.g., surgery). US images are affected by speckle noise, generated by the overlap of US waves. Furthermore, low-resolution images are acquired when a high acquisition frequency is applied to accurately characterise the behaviour of anatomical features that quickly change over time. Denoising and super-resolution of US signals are relevant to improve the visual evaluation of the physician and the performance and accuracy of processing methods, such as segmentation and classification. The main requirements for the processing and analysis of US signals are real-time execution, preservation of anatomical features, and reduction of artefacts. In this context, we present a novel framework for the real-time denoising of US 2D images based on deep learning and high-performance computing, which reduces noise while preserving anatomical features in real-time execution. We extend our framework to the denoising of arbitrary US signals, such as 2D videos and 3D images, and we apply denoising algorithms that account for spatio-temporal signal properties into an image-to-image deep learning model. As a building block of this framework, we propose a novel denoising method belonging to the class of low-rank approximations, which learns and predicts the optimal thresholds of the Singular Value Decomposition.
While previous denoising work trades off computational cost against effectiveness, the proposed framework matches the results of the best denoising algorithms in terms of noise removal, anatomical feature preservation, and conservation of geometric and texture properties, in a real-time execution that respects industrial constraints. The framework reduces artefacts (e.g., blurring) and preserves the spatio-temporal consistency among frames/slices; it is also general with respect to the denoising algorithm, anatomical district, and noise intensity. Then, we introduce a novel framework for the real-time reconstruction of non-acquired scan lines through an interpolating method; a deep learning model improves the results of the interpolation to match the target image (i.e., the high-resolution image). We improve the accuracy of the prediction of the reconstructed lines through the design of the network architecture and the loss function. In the context of signal approximation, we introduce our kernel-based sampling method for the reconstruction of 2D and 3D signals defined on regular and irregular grids, with an application to US 2D and 3D images. Our method improves on previous work in terms of sampling quality, approximation accuracy, and geometry reconstruction, with a slightly higher computational cost. For both denoising and super-resolution, we evaluate compliance with the real-time requirement of US applications in the medical domain and provide a quantitative evaluation of denoising and super-resolution methods on US and synthetic images. Finally, we discuss the role of denoising and super-resolution as pre-processing steps for segmentation and predictive analysis of breast pathologies.
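The low-rank building block can be illustrated as follows; in the thesis the threshold is predicted by a learned model, whereas this sketch takes it as a fixed input:

```python
import numpy as np

def svd_denoise(img, threshold):
    """Low-rank denoising: zero out singular values below `threshold`.
    The thesis *learns* this threshold; here it is a fixed input
    standing in for the predicted value."""
    U, s, Vt = np.linalg.svd(np.asarray(img, dtype=float), full_matrices=False)
    s_kept = np.where(s >= threshold, s, 0.0)   # hard-threshold the spectrum
    return (U * s_kept) @ Vt                    # low-rank reconstruction
```

Speckle in US images is multiplicative rather than additive, so practical pipelines typically apply such a step in a transformed domain; the sketch only demonstrates the thresholded-SVD mechanism itself.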

    A Fully Automatic Segmentation Method for Breast Ultrasound Images

    Breast cancer is the second leading cause of death among women worldwide. Accurate lesion boundary detection is important for breast cancer diagnosis. Since many crucial features for discriminating benign and malignant lesions are based on the contour, shape, and texture of the lesion, an accurate segmentation method is essential for a successful diagnosis. Ultrasound is an effective screening tool and primarily useful for differentiating benign and malignant lesions. However, due to the inherent speckle noise and low contrast of breast ultrasound imaging, automatic lesion segmentation is still a challenging task. This research focuses on developing a novel, effective, and fully automatic lesion segmentation method for breast ultrasound images. By incorporating empirical domain knowledge of breast structure, a region of interest is generated. Then, a novel enhancement algorithm (using a novel phase feature) and a new neutrosophic clustering method are developed to detect the precise lesion boundary. Neutrosophy is a recently introduced branch of philosophy that deals with paradoxes, contradictions, antitheses, and antinomies. When neutrosophy is used to segment images with vague boundaries, its unique ability to deal with uncertainty is brought to bear. In this work, we apply neutrosophy to breast ultrasound image segmentation and propose a new clustering method named neutrosophic l-means. We compare the proposed method with traditional fuzzy c-means clustering and three other well-developed segmentation methods for breast ultrasound images, using the same database. Both accuracy and time complexity are analyzed. The proposed method achieves the best accuracy (TP rate is 94.36%, FP rate is 8.08%, and similarity rate is 87.39%) with a fairly rapid processing speed (about 20 seconds). Sensitivity analysis shows the robustness of the proposed method as well.
Cases with multiple lesions and severe shadowing effects (shadow areas with intensity values similar to the lesion and tightly connected to it) are not included in this study.
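The fuzzy c-means baseline against which neutrosophic l-means is compared can be sketched as follows (a standard FCM implementation, not the authors' code):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Fuzzy c-means: soft memberships u[i, k] with fuzzifier m,
    alternating membership and centroid updates."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    u = rng.random((n, c))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(n_iter):
        um = u ** m
        # Centroids: membership-weighted means of the data
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        # Memberships: inverse-distance weighting between all centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return u, centers
```

Neutrosophic clustering augments this scheme with indeterminacy handling for pixels near vague boundaries, which is where the proposed method gains its accuracy.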

    New Mechatronic Systems for the Diagnosis and Treatment of Cancer

    Both two-dimensional (2D) and three-dimensional (3D) imaging modalities are useful tools for viewing the internal anatomy. Three-dimensional imaging techniques are required for accurate targeting of needles. This improves the efficiency of, and control over, the intervention, as the high temporal resolution of medical images can be used to validate the location of needle and target in real time. Relying on imaging alone, however, means the intervention is still operator-dependent because of the difficulty of controlling the location of the needle within the image. The objective of this thesis is to improve the accuracy and repeatability of needle-based interventions over conventional manual and automated techniques, in order to minimize the invasiveness of the procedure. In this thesis, I propose that by combining the remote-center-of-motion concept with spherical linkage components in a passive or semi-automated device, the physician will have a useful tracking and guidance system at their disposal in a package that is less threatening than a robot to both the patient and the physician. This design concept offers both the manipulative transparency of a freehand system and the tremor reduction through scaling currently offered in automated systems. In addressing each objective of this thesis, a number of novel mechanical designs incorporating a remote-center-of-motion architecture with varying degrees of freedom are presented. Each of these designs can be deployed in a variety of imaging modalities and clinical applications, ranging from preclinical to human interventions, with an accuracy of control in the millimeter to sub-millimeter range.

    Advancements and Breakthroughs in Ultrasound Imaging

    Ultrasonic imaging is a powerful diagnostic tool available to medical practitioners, engineers, and researchers today. Due to its relative safety and non-invasive nature, ultrasonic imaging has become one of the most rapidly advancing technologies. These rapid advances are directly related to parallel advancements in electronics, computing, and transducer technology, together with sophisticated signal processing techniques. This book focuses on state-of-the-art developments in ultrasonic imaging applications and underlying technologies, presented by leading practitioners and researchers from many parts of the world.

    Bi-temporal 3D active appearance models with applications to unsupervised ejection fraction estimation

    Rapid and unsupervised quantitative analysis is of utmost importance to ensure clinical acceptance of many examinations using cardiac magnetic resonance imaging (MRI). We present a framework that aims at fulfilling these goals for the application of left ventricular ejection fraction estimation in four-dimensional MRI. The theoretical foundation of our work is the generative two-dimensional Active Appearance Models by Cootes et al., here extended to bi-temporal, three-dimensional models. Further issues treated include correction of respiratory-induced slice displacements, systole detection, and a texture model pruning strategy. Cross-validation carried out on clinical-quality scans of twelve volunteers indicates that ejection fraction and cardiac blood pool volumes can be estimated automatically and rapidly with accuracy on par with typical inter-observer variability.
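Given the segmented blood-pool volumes, the ejection fraction itself is a simple ratio of the end-diastolic (EDV) and end-systolic (ESV) volumes, which is why the bi-temporal model fits both cardiac phases:

```python
def ejection_fraction(edv_ml, esv_ml):
    """Left-ventricular ejection fraction (percent) from the
    end-diastolic (EDV) and end-systolic (ESV) blood-pool volumes
    delivered by the segmentation model."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# e.g. EDV 120 ml, ESV 50 ml -> EF of roughly 58.3 %
```

The hard part, and the contribution of the framework, is estimating EDV and ESV automatically; the formula itself is standard.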

    Data-driven quantitative photoacoustic tomography

    Spatial information about the 3D distribution of blood oxygen saturation (sO2) in vivo is of clinical interest as it encodes important physiological information about tissue health/pathology. Photoacoustic tomography (PAT) is a biomedical imaging modality that, in principle, can be used to acquire this information. Images are formed by illuminating the sample with a laser pulse where, after multiple scattering events, the optical energy is absorbed. A subsequent rise in temperature induces an increase in pressure (the photoacoustic initial pressure p0) that propagates to the sample surface as an acoustic wave. These acoustic waves are detected as pressure time series by sensor arrays and used to reconstruct images of the sample's p0 distribution. This encodes information about the sample's absorption distribution and can be used to estimate sO2. However, an ill-posed nonlinear inverse problem stands in the way of acquiring estimates in vivo. Current approaches to solving this problem fall short of being widely and successfully applied to in vivo tissues due to their reliance on simplifying assumptions about the tissue, prior knowledge of its optical properties, or the formulation of a forward model accurately describing image acquisition with a specific imaging system. Here, we investigate the use of data-driven approaches (deep convolutional networks) to solve this problem. Networks only require a dataset of examples to learn a mapping from PAT data to images of the sO2 distribution. We show the results of training a 3D convolutional network to estimate the 3D sO2 distribution within model tissues from 3D multiwavelength simulated images. However, acquiring a realistic training set to enable successful in vivo application is non-trivial, given the challenges associated with estimating ground truth sO2 distributions and the current limitations of simulating training data.
We suggest and test several methods to 1) acquire more realistic training data or 2) improve network performance in the absence of adequate quantities of realistic training data. For 1), we describe how training data may be acquired from an organ perfusion system and outline a possible design. Separately, we describe how training data may be generated synthetically using a variant of generative adversarial networks called ambientGANs. For 2), we show how the accuracy of networks trained with limited training data can be improved with self-training. We also demonstrate how the domain gap between training and test sets can be minimised with unsupervised domain adaptation to improve quantification accuracy. Overall, this thesis clarifies the advantages of data-driven approaches and suggests concrete steps towards overcoming the challenges of in vivo application.
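The quantity the networks estimate is sO2 = C_HbO2 / (C_HbO2 + C_Hb), which in idealised conditions follows from linear spectral unmixing of the absorption coefficient at two wavelengths. The extinction-coefficient matrix below uses placeholder values, not tabulated haemoglobin spectra; real in vivo data violate the linearity this sketch assumes, which is the motivation for the data-driven approach:

```python
import numpy as np

def so2_from_absorption(mu_a, eps):
    """Estimate sO2 by linearly unmixing absorption coefficients at two
    wavelengths into oxy- (HbO2) and deoxy-haemoglobin (Hb) concentrations:
        mu_a(lambda) = eps_HbO2(lambda) * C_HbO2 + eps_Hb(lambda) * C_Hb
    `eps` is the 2x2 matrix
        [[eps_HbO2(l1), eps_Hb(l1)],
         [eps_HbO2(l2), eps_Hb(l2)]]
    (placeholder values here, not tabulated coefficients)."""
    c_hbo2, c_hb = np.linalg.solve(np.asarray(eps, float), np.asarray(mu_a, float))
    return c_hbo2 / (c_hbo2 + c_hb)
```

In PAT, the reconstructed p0 is the product of absorption and an unknown, wavelength-dependent light fluence, so this two-wavelength inversion breaks down in vivo; the networks described above learn to compensate for exactly that.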