
    Sparse Reconstruction of Compressive Sensing Magnetic Resonance Imagery using a Cross Domain Stochastic Fully Connected Conditional Random Field Framework

    Prostate cancer is a major health care concern in our society, and early detection is crucial to successful treatment of the disease. Many current methods for detecting prostate cancer are either inconsistent or invasive and discomforting to the patient. Magnetic resonance imaging (MRI) is a non-invasive, non-ionizing medical imaging modality suitable for the early diagnosis of cancer, but it suffers from lengthy acquisition times. Speeding up the MRI acquisition process could therefore greatly increase the number of early detections in prostate cancer diagnosis. Compressive sensing can reduce MRI imaging time by sampling a sparse yet sufficient set of measurements, and compressive sensing strategies must be paired with strong reconstruction algorithms. This work presents a comprehensive cross-domain stochastic fully connected conditional random field (CD-SFCRF) reconstruction framework to facilitate compressive sensing MRI. The approach combines the original k-space measurements made by the MRI machine with neighborhood and spatial consistencies of the image in the spatial domain, thereby accounting for the difference in domain between the k-space measurements and the spatial-domain reconstruction. An adaptive extension of the CD-SFCRF that identifies regions of interest in the image and adjusts the CD-SFCRF neighborhood connectivity based on importance is also presented and tested. Finally, a compensated CD-SFCRF that models the properties of the MRI imaging apparatus to correct for degradations and aberrations introduced during acquisition is presented and tested. Clinical MRI data were collected from twenty patients, with ground truth examined and confirmed by an expert radiologist with many years of experience in prostate cancer diagnosis.
Compressive sensing simulations show that the CD-SFCRF framework and its extensions offer noticeable improvements over state-of-the-art methods: tissue structure and image details are well preserved while sparse sampling artifacts are reduced or eliminated. Future work on this framework includes extending it in multiple directions, such as integration into computer-aided diagnosis applications and improvements to the compressive sensing strategy itself.
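To make the reconstruction problem concrete, the following sketch shows why compressive sensing MRI needs a strong reconstruction algorithm at all: retaining only a sparse subset of k-space lines and naively zero-filling the rest produces an aliased image. The image, mask pattern, and 30% sampling ratio are illustrative choices, not those of the CD-SFCRF work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 64x64 "anatomy": a bright rectangle on a dark background.
image = np.zeros((64, 64))
image[20:44, 24:40] = 1.0

k_space = np.fft.fft2(image)  # fully sampled k-space measurements

# Retain ~30% of k-space lines at random (practical schemes use
# variable-density sampling; uniform random is the simplest stand-in).
mask = rng.random(64) < 0.3
mask[0] = True  # always keep the DC line
undersampled = k_space * mask[:, None]  # zero out unsampled lines

# Naive reconstruction: inverse FFT of the zero-filled k-space.
zero_filled = np.real(np.fft.ifft2(undersampled))
aliasing_error = np.linalg.norm(zero_filled - image) / np.linalg.norm(image)
print(f"relative error of zero-filled reconstruction: {aliasing_error:.2f}")
```

The substantial residual error of this naive inverse is exactly what model-based reconstruction approaches such as the CD-SFCRF are designed to remove, by exploiting spatial-domain consistency alongside the sampled k-space data.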

    Reasoning with Uncertainty in Deep Learning for Safer Medical Image Computing

    Deep learning is now ubiquitous in medical image computing. As these technologies progress towards clinical translation, the question of safety becomes critical. Once deployed, machine learning systems unavoidably face situations where the correct decision or prediction is ambiguous. However, current methods rely overwhelmingly on deterministic algorithms, lacking a mechanism to represent and manipulate uncertainty. In safety-critical applications such as medical imaging, reasoning under uncertainty is crucial for developing a reliable decision-making system. Probabilistic machine learning provides a natural framework to quantify the degree of uncertainty over different variables of interest, be it the prediction, the model parameters and structures, or the underlying data (images and labels). Probability distributions represent all the uncertain unobserved quantities in a model and how they relate to the data, and probability theory serves as the language for computing and manipulating these distributions. In this thesis, we explore probabilistic modelling as a framework for integrating uncertainty information into deep learning models, and demonstrate its utility in various high-dimensional medical imaging applications. In the process, we make several fundamental enhancements to current methods, which we categorise into three groups according to the type of uncertainty being modelled: (i) predictive, (ii) structural, and (iii) human uncertainty. Firstly, we discuss the importance of quantifying predictive uncertainty and understanding its sources when developing a risk-averse and transparent medical image enhancement application. We demonstrate how a measure of predictive uncertainty can serve as a proxy for predictive accuracy in the absence of ground truths.
Furthermore, assuming the structure of the model is flexible enough for the task, we introduce a way to decompose the predictive uncertainty into its orthogonal sources, i.e., aleatoric and parameter uncertainty, and show the potential utility of such decoupling in providing a quantitative "explanation" of model performance. Secondly, we present our recent attempts at learning model structures directly from data. One work proposes a variational-inference method for learning a posterior distribution over connectivity structures within a neural network architecture for multi-task learning, with preliminary results on an MR-only radiotherapy planning application. Another work explores how the training algorithm of decision trees can be extended to grow the architecture of a neural network, adapting to the available data and the complexity of the task. Lastly, we develop methods to model the "measurement noise" (e.g., biases and skill levels) of human annotators and integrate this information into the learning process of a neural network classifier. In particular, we show that explicitly modelling the uncertainty involved in the annotation process not only improves robustness to label noise, but also yields useful insights into the patterns of errors that characterise individual experts.
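The aleatoric/parameter split described above rests on the law of total variance: averaging over posterior samples, total predictive variance equals the mean of the per-sample variances (aleatoric) plus the variance of the per-sample means (parameter). A minimal sketch, using synthetic per-sample Gaussian predictions in place of real MC-dropout or ensemble outputs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic posterior samples: for each of 10 outputs, 50 sampled
# predictive Gaussians with mean mu_t and variance sigma_t^2.
n_samples, n_outputs = 50, 10
means = rng.normal(0.0, 0.2, size=(n_samples, n_outputs))      # mu_t
variances = rng.uniform(0.5, 1.0, size=(n_samples, n_outputs))  # sigma_t^2

aleatoric = variances.mean(axis=0)  # E_t[sigma_t^2]: data noise
parameter = means.var(axis=0)       # Var_t[mu_t]: model uncertainty

# Direct total variance of the predictive mixture:
# Var[y] = E_t[sigma_t^2 + mu_t^2] - (E_t[mu_t])^2
total_direct = (variances + means**2).mean(axis=0) - means.mean(axis=0)**2

print(np.max(np.abs(total_direct - (aleatoric + parameter))))
```

Because the identity is exact, the two routes to the total variance agree to floating-point precision; in practice the two terms can then be reported separately as the decomposed uncertainty maps.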

    Sensor Signal and Information Processing II

    In the current age of information explosion, newly invented technological sensors and software are tightly integrated with our everyday lives. Many sensor processing algorithms incorporate some form of computational intelligence as part of their core framework for problem solving. These algorithms can generalize, discover knowledge for themselves, and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on the information contained in the data. The interest of this book is in developing intelligent signal processing to pave the way for smart sensors. This involves mathematical advancement of nonlinear signal processing theory and its applications, extending far beyond traditional techniques. It bridges the boundary between theory and application, developing novel theoretically inspired methodologies that target both longstanding and emergent signal processing applications. The topics range from phishing detection to the integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.

    Theoretical and Experimental Investigations into Causality, its Measures and Applications

    A major part of human scientific endeavour aims at making causal inferences about observed phenomena. While some studies are experimental, others are observational, the latter often making use of recorded data. Since temporal data can be easily acquired and stored in today’s world, time-series causality estimation measures have come into wide use across a range of disciplines such as neuroscience, earth science, and econometrics. In this context, model-free, data-driven methods for causality estimation are extremely useful, as the underlying model generating the data is often unknown. However, existing data-driven measures such as Granger Causality and Transfer Entropy impose strong statistical assumptions on the data and can only estimate causality by associational means. Associational causality, being the most rudimentary level of causality, has several limitations. In this thesis, we propose a novel Interventional Complexity Causality scheme for time-series measurements, capturing a higher, intervention-based level of causality that until now could be inferred only through model-based measures. Based on this interventional scheme, we formulate a Compression-Complexity Causality (CCC) measure that is rigorously tested on simulations of stochastic and deterministic systems and shown to overcome the limitations of existing measures. CCC is then applied to infer causal relations from real data, mainly in the domain of neuroscience: a study of brain connectivity in human subjects performing a motor task, and a study distinguishing awake and anaesthesia states in monkeys using electrophysiological brain recordings. Through theoretical and empirical advances in causality testing, the thesis also contributes to a number of allied disciplines. A causal perspective is given for the ubiquitous phenomenon of chaotic synchronization.
A major contribution in this regard is the notion of Causal Stability and the formulation (with proof) of a novel Causal Stability Synchronization Theorem, which gives a condition for complete synchronization of coupled chaotic systems. Further, we propose and test techniques to analyse causality between sparse signals using compressed sensing, with a real application demonstrated on sparse neuronal spike trains recorded from rat prefrontal cortex. The area of temporal-reversibility detection in time series is also closely linked to causality testing; we develop and test a new method for checking the time-reversibility of processes and explore the behaviour of causality measures on coupled time-reversed processes.
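To illustrate the baseline the thesis argues against, here is a minimal Granger-causality check in its classic linear form: X is said to Granger-cause Y if adding X's past to an autoregressive prediction of Y reduces the residual variance. The lag order of 1, the coupling strength, and the simulated system are illustrative choices, unrelated to the thesis's CCC measure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Coupled AR(1) system where X drives Y but not vice versa.
n = 2000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

def residual_var(target, regressors):
    """Least-squares linear fit; returns the residual variance."""
    coeffs, *_ = np.linalg.lstsq(regressors, target, rcond=None)
    return np.var(target - regressors @ coeffs)

# X -> Y: does X's past help predict Y beyond Y's own past?
restricted = residual_var(y[1:], y[:-1, None])
unrestricted = residual_var(y[1:], np.column_stack([y[:-1], x[:-1]]))
gc_xy = np.log(restricted / unrestricted)

# Y -> X: the reverse direction, expected to be near zero.
restricted_x = residual_var(x[1:], x[:-1, None])
unrestricted_x = residual_var(x[1:], np.column_stack([x[:-1], y[:-1]]))
gc_yx = np.log(restricted_x / unrestricted_x)

print(f"Granger index X->Y: {gc_xy:.2f}, Y->X: {gc_yx:.2f}")
```

The index is clearly positive in the driving direction and near zero in the other, but it remains a purely associational, linear-model-based quantity; this is the kind of limitation the interventional CCC scheme is designed to overcome.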

    ROBUST DEEP LEARNING METHODS FOR SOLVING INVERSE PROBLEMS IN MEDICAL IMAGING

    The medical imaging field has a long history of incorporating machine learning algorithms to address inverse problems in image acquisition and analysis. With the impressive successes of deep neural networks on natural images, we seek to answer the obvious question: do these successes also transfer to the medical image domain? The answer may seem straightforward on the surface. Tasks like image-to-image transformation, segmentation, and detection have direct applications for medical images. For example, metal artifact reduction for Computed Tomography (CT) and reconstruction from undersampled k-space signals for Magnetic Resonance (MR) imaging can be formulated as image-to-image transformations, while lesion/tumor detection and segmentation are obvious applications for higher-level vision tasks. While these tasks may be similar in formulation, many practical constraints and requirements arise when solving them for medical images. Patient data is highly sensitive and usually accessible only within individual institutions, which constrains the available ground truth, dataset size, and computational resources for training performant models. Due to the mission-critical nature of healthcare applications, requirements such as robustness and speed are also stringent. As such, the big-data, dense-computation, supervised-learning paradigm of mainstream deep learning is often insufficient in these situations. In this dissertation, we investigate ways to benefit from the powerful representational capacity of deep neural networks while still satisfying the above constraints and requirements. The first part of this dissertation focuses on adapting supervised learning to account for variations such as medical image modality, image quality, architecture design, and task.
The second part focuses on improving model robustness on unseen data through domain adaptation, which ameliorates performance degradation due to distribution shifts. The last part focuses on self-supervised learning and learning from synthetic data, with an emphasis on tomographic imaging; this is essential in the many situations where the desired ground truth is not accessible.

    Design of large polyphase filters in the Quadratic Residue Number System


    Temperature aware power optimization for multicore floating-point units


    Use of prior information and probabilistic image reconstruction for optical tomographic imaging

    Preclinical bioluminescence tomographic reconstruction is underdetermined. This work addresses the use of prior information in bioluminescence tomography to improve image acquisition, reconstruction, and analysis. A structured-light surface metrology method was developed to measure surface geometry and enable robust, automatic integration of mirrors into the measurement process. A mouse phantom was imaged and accuracy was measured at 0.2 mm with excellent surface coverage. A sparsity-regularised reconstruction algorithm was developed that uses instrument noise statistics to automatically determine the stopping point of reconstruction. Applied to in silico and in simulacra data, it successfully reconstructed and resolved two separate luminescent sources within a plastic mouse phantom. A Bayesian framework was constructed that incorporates bioluminescence and instrument properties. Distribution expectations and standard deviations were estimated, providing reconstructions together with measures of reconstruction uncertainty. These reconstructions outperformed the sparsity-based algorithm on in simulacra data. The information content of measurements using different sets of wavelengths was quantified within the Bayesian framework via mutual information and applied to an in silico problem. Significant differences in information content were observed, and comparison against a condition-number-based approach indicated subtly different results.
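The flavour of sparsity-regularised reconstruction for an underdetermined problem can be sketched with plain iterative soft-thresholding (ISTA) on a toy forward model: far fewer measurements than source voxels, with two point-like "luminescent sources". The problem sizes, the fixed regularisation weight, and the fixed iteration count are illustrative stand-ins for the thesis's noise-adaptive stopping rule and physical light-transport model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Underdetermined toy forward model: 40 detector readings, 200 voxels.
n_meas, n_vox = 40, 200
A = rng.normal(size=(n_meas, n_vox)) / np.sqrt(n_meas)

# Ground truth: two sparse "sources" inside the volume.
x_true = np.zeros(n_vox)
x_true[15], x_true[120] = 2.0, 1.5
b = A @ x_true + 0.01 * rng.normal(size=n_meas)

# ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1:
# a gradient step on the data term, then soft-thresholding.
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/Lipschitz constant
lam = 0.1
x = np.zeros(n_vox)
for _ in range(1000):
    x = x - step * A.T @ (A @ x - b)
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

recovered = np.argsort(np.abs(x))[-2:]
print("estimated support:", sorted(int(i) for i in recovered))
```

With only two active sources and a well-conditioned random forward operator, the l1 penalty resolves both sources despite the 5x undersampling; the Bayesian framework in the thesis goes further by also attaching uncertainty estimates to such reconstructions.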