
    Directional edge and texture representations for image processing

    An efficient representation for natural images is of fundamental importance in image processing and analysis. Commonly used separable transforms such as wavelets are not best suited for images, due to their inability to exploit directional regularities such as edges and oriented textural patterns, while most recently proposed directional schemes cannot represent these two types of feature in a unified transform. This thesis focuses on the development of directional representations for images which can capture both edges and textures in a multiresolution manner. The thesis first considers the problem of extracting linear features with the multiresolution Fourier transform (MFT). Building on a previous MFT-based linear feature model, the work extends the extraction method to images corrupted by noise. The problem is tackled by combining a "Signal+Noise" frequency model, a refinement stage and a robust classification scheme. As a result, the MFT is able to perform linear feature analysis on noisy images on which previous methods failed. A new set of transforms, the multiscale polar cosine transforms (MPCT), is also proposed in order to represent textures. The MPCT can be regarded as a real-valued MFT with similar basis functions of oriented sinusoids. It is shown that the transform can represent textural patches more efficiently than the conventional Fourier basis. With a directional best cosine basis, the MPCT packet (MPCPT) is shown to be an efficient representation for edges and textures, despite its high computational burden. The problem of representing edges and textures in a fixed transform with lower complexity is then considered. This is achieved by applying to the local MFT coefficients a Gaussian frequency filter matched to the dispersion of the magnitude spectrum. This is particularly effective in denoising natural images, due to its ability to preserve both types of feature. Further improvements are obtained by employing the information given by the linear feature extraction process in the filter's configuration. The denoising results compare favourably against other state-of-the-art directional representations.
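    The abstract does not spell out the filter configuration (it is matched to the local magnitude-spectrum dispersion and steered by the feature extraction), but the core idea of Gaussian frequency-domain filtering can be sketched minimally. Everything below, including the hand-picked `sigma`, is an illustrative assumption rather than the thesis's actual filter:

```python
import numpy as np

def gaussian_frequency_filter(patch, sigma):
    """Attenuate a patch's FFT coefficients with an isotropic Gaussian
    centred on DC; a simplified stand-in for a dispersion-matched filter."""
    n = patch.shape[0]
    F = np.fft.fftshift(np.fft.fft2(patch))
    u = np.arange(n) - n // 2
    uu, vv = np.meshgrid(u, u, indexing="ij")
    H = np.exp(-(uu**2 + vv**2) / (2.0 * sigma**2))  # Gaussian transfer function
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

rng = np.random.default_rng(0)
# Low-frequency oriented pattern plus white noise.
clean = np.outer(np.sin(np.linspace(0, 4 * np.pi, 64)), np.ones(64))
noisy = clean + 0.5 * rng.standard_normal((64, 64))
denoised = gaussian_frequency_filter(noisy, sigma=8.0)
err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
```

    Because the pattern's energy sits near DC while the noise is spread across all frequencies, the filtered patch lies closer to the clean signal than the noisy input does.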

    Intelligent Imaging of Perfusion Using Arterial Spin Labelling

    Arterial spin labelling (ASL) is a powerful magnetic resonance imaging technique, which can be used to noninvasively measure perfusion in the brain and other organs of the body. Promising research results show how ASL might be used in stroke, tumours, dementia and paediatric medicine, in addition to many other areas. However, significant obstacles remain to prevent widespread use: ASL images have an inherently low signal-to-noise ratio, and are susceptible to corrupting artifacts from motion and other sources. The objective of the work in this thesis is to move towards an "intelligent imaging" paradigm: one in which the image acquisition, reconstruction and processing are mutually coupled, and tailored to the individual patient. This thesis explores how ASL images may be improved at several stages of the imaging pipeline. We review the relevant ASL literature, exploring details of ASL acquisitions, parameter inference and artifact post-processing. We subsequently present original work: we use the framework of Bayesian experimental design to generate optimised ASL acquisitions, we present original methods to improve parameter inference through anatomically-driven modelling of spatial correlation, and we describe a novel deep learning approach for simultaneous denoising and artifact filtering. Using a mixture of theoretical derivation, simulation results and imaging experiments, the work in this thesis presents several new approaches for ASL, and will hopefully shape future research and future ASL usage.
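    The thesis's experimental design operates on full ASL kinetic models; the toy sketch below only illustrates the underlying principle, choosing the sampling time that maximises the Fisher information about a decay rate R in a simplified model s(t) = M exp(-R t) with unit Gaussian noise. M and R are made-up values, not ASL parameters:

```python
import numpy as np

# Sensitivity of s(t) = M * exp(-R * t) to the rate R, for unit noise:
# Fisher information I(t) = (ds/dR)^2 = (M * t * exp(-R * t))**2.
M, R = 1.0, 0.5                       # hypothetical signal scale and rate
t = np.linspace(0.01, 10.0, 1000)     # candidate sampling times
info = (M * t * np.exp(-R * t)) ** 2
t_opt = t[np.argmax(info)]            # analytically, the optimum is 1/R
```

    Maximising a design criterion of this kind over a whole sampling schedule, rather than a single time point, is the flavour of optimisation the Bayesian design framework performs.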

    Structure-aware image denoising, super-resolution, and enhancement methods

    Denoising, super-resolution and structure enhancement are classical image processing applications, whose purpose is to aid the visual analysis of raw digital images. Despite tremendous progress in these fields, certain difficult problems remain open to research. For example, denoising and super-resolution techniques that possess all of the following properties are very scarce: they must preserve critical structures like corners, be robust to the type of noise distribution, avoid undesirable artefacts, and also be fast. The area of structure enhancement also has an unresolved issue: very little effort has been put into designing models that can tackle anisotropic deformations in the image acquisition process. In this thesis, we design novel methods in the form of partial differential equations, patch-based approaches and variational models to overcome the aforementioned obstacles. In most cases, our methods outperform the existing approaches in both quality and speed, despite being applicable to a broader range of practical situations.
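    The thesis's specific PDE models are not reproduced in the abstract; the classic Perona-Malik diffusion below is only a generic illustration of what structure-preserving, PDE-based denoising looks like. The parameters and the synthetic test image are invented for the demo:

```python
import numpy as np

def perona_malik_step(img, kappa=0.1, dt=0.2):
    """One explicit time step of Perona-Malik diffusion: smooth flat
    regions while the conductivity g suppresses diffusion across
    strong gradients, so edges survive. np.roll gives periodic
    boundaries, which is adequate for this demo."""
    dn = np.roll(img, -1, axis=0) - img
    ds = np.roll(img, 1, axis=0) - img
    de = np.roll(img, -1, axis=1) - img
    dw = np.roll(img, 1, axis=1) - img
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping conductivity
    return img + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

rng = np.random.default_rng(1)
step = np.zeros((32, 32)); step[:, 16:] = 1.0        # a sharp vertical edge
noisy = step + 0.05 * rng.standard_normal((32, 32))
smoothed = noisy.copy()
for _ in range(20):
    smoothed = perona_malik_step(smoothed)
err_noisy = np.mean((noisy - step) ** 2)
err_smoothed = np.mean((smoothed - step) ** 2)
```

    The jump across the edge is large relative to `kappa`, so the conductivity there is essentially zero and the corner-and-edge structure is preserved while the small noise differences are diffused away.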

    Speckle Noise Reduction in Medical Ultrasound Images Using Modelling of Shearlet Coefficients as a Nakagami Prior

    The diagnosis of UltraSound (US) medical images is hampered by the presence of speckle noise, which degrades their diagnostic quality by obscuring the small details and edges present in the image. This paper presents a novel method based on modelling the shearlet coefficients of log-transformed US images. Noise-free log-transformed coefficients are modelled with a Nakagami distribution, and speckle noise coefficients are modelled with a Gaussian distribution. The Method of Log Cumulants (MoLC) and the Method of Moments (MoM) are used for parameter estimation of the Nakagami distribution and of the noise-free shearlet coefficients, respectively. Noise-free shearlet coefficients are then obtained by Maximum a Posteriori (MAP) estimation from the noisy coefficients. Experimental results are presented for various experiments on synthetic and real US images. A subjective and objective quality assessment of the proposed method is presented and compared with six other existing methods; the effectiveness of the proposed method over the others can be seen from the obtained results.
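    The Nakagami/Gaussian modelling and the MAP estimator themselves are not reproduced here, but the log-transform the method relies on, which turns multiplicative speckle into additive noise, can be shown in a few lines. The Gamma speckle model and its parameters below are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
signal = np.full(10_000, 5.0)                             # constant "clean" signal
speckle = rng.gamma(shape=4.0, scale=0.25, size=10_000)   # unit-mean multiplicative speckle
observed = signal * speckle                               # multiplicative noise model

# The log transform makes the corruption additive:
#   log(observed) = log(signal) + log(speckle)
log_obs = np.log(observed)
additive_noise = log_obs - np.log(signal)
```

    After this step, additive-noise estimators (such as the shearlet-domain MAP shrinkage in the paper) apply, and exponentiating the result maps it back to the intensity domain.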

    Generative Models for Preprocessing of Hospital Brain Scans

    I will in this thesis present novel computational methods for processing routine clinical brain scans. Such scans were originally acquired for qualitative assessment by trained radiologists, and present a number of difficulties for computational models, such as those within common neuroimaging analysis software. The overarching objective of this work is to enable efficient and fully automated analysis of large neuroimaging datasets, of the type currently present in many hospitals worldwide. The methods presented are based on probabilistic, generative models of the observed imaging data, and therefore rely on informative priors and realistic forward models. The first part of the thesis will present a model for image quality improvement, whose key component is a novel prior for multimodal datasets. I will demonstrate its effectiveness for super-resolving thick-sliced clinical MR scans and for denoising CT images and MR-based, multi-parametric mapping acquisitions. I will then show how the same prior can be used for within-subject, intermodal image registration, for more robustly registering large numbers of clinical scans. The second part of the thesis focusses on improved, automatic segmentation and spatial normalisation of routine clinical brain scans. I propose two extensions to a widely used segmentation technique. First, a method for this model to handle missing data, which allows me to predict entirely missing modalities from one, or a few, MR contrasts. Second, a principled way of combining the strengths of probabilistic, generative models with the unprecedented discriminative capability of deep learning. By introducing a convolutional neural network as a Markov random field prior, I can model nonlinear class interactions and learn these using backpropagation. I show that this model is robust to sequence and scanner variability. 
    Finally, I show examples of fitting a population-level generative model to various neuroimaging data, modelling, for example, CT scans with haemorrhagic lesions.

    (An overview of) Synergistic reconstruction for multimodality/multichannel imaging methods

    Imaging is omnipresent in modern society, with imaging devices based on a zoo of physical principles, probing a specimen across different wavelengths, energies and time. Recent years have seen a change in the imaging landscape, with more and more imaging devices combining modalities that were previously used separately. Motivated by these hardware developments, an ever-increasing set of mathematical ideas is appearing regarding how data from different imaging modalities or channels can be synergistically combined in the image reconstruction process, exploiting structural and/or functional correlations between the multiple images. Here we review these developments, give pointers to important challenges and provide an outlook as to how the field may develop in the forthcoming years. This article is part of the theme issue 'Synergistic tomographic image reconstruction: part 1'.

    Real-time Ultrasound Signals Processing: Denoising and Super-resolution

    Ultrasound acquisition is widespread in the biomedical field, due to its low cost, portability, and non-invasiveness for the patient. The processing and analysis of US signals, such as images, 2D videos, and volumetric images, allow the physician to monitor the evolution of the patient's disease and support diagnosis and treatment (e.g., surgery). US images are affected by speckle noise, generated by the overlap of US waves. Furthermore, low-resolution images are acquired when a high acquisition frequency is applied to accurately characterise the behaviour of anatomical features that change quickly over time. Denoising and super-resolution of US signals are therefore relevant to improve both the physician's visual evaluation and the performance and accuracy of processing methods such as segmentation and classification. The main requirements for the processing and analysis of US signals are real-time execution, preservation of anatomical features, and reduction of artefacts. In this context, we present a novel framework for the real-time denoising of 2D US images based on deep learning and high-performance computing, which reduces noise while preserving anatomical features in real-time execution. We extend our framework to the denoising of arbitrary US signals, such as 2D videos and 3D images, and we incorporate denoising algorithms that account for spatio-temporal signal properties into an image-to-image deep learning model. As a building block of this framework, we propose a novel denoising method belonging to the class of low-rank approximations, which learns and predicts the optimal thresholds of the Singular Value Decomposition.
    While previous denoising work trades off computational cost against effectiveness, the proposed framework matches the results of the best denoising algorithms in terms of noise removal, anatomical feature preservation, and conservation of geometric and texture properties, in a real-time execution that respects industrial constraints. The framework reduces artefacts (e.g., blurring) and preserves the spatio-temporal consistency among frames/slices; it is also general with respect to the denoising algorithm, anatomical district, and noise intensity. We then introduce a novel framework for the real-time reconstruction of non-acquired scan lines through an interpolating method; a deep learning model improves the results of the interpolation to match the target (i.e., high-resolution) image. We improve the accuracy of the predicted reconstructed lines through the design of the network architecture and the loss function. In the context of signal approximation, we introduce a kernel-based sampling method for the reconstruction of 2D and 3D signals defined on regular and irregular grids, with an application to 2D and 3D US images. Our method improves on previous work in terms of sampling quality, approximation accuracy, and geometry reconstruction, at a slightly higher computational cost. For both denoising and super-resolution, we evaluate compliance with the real-time requirements of US applications in the medical domain and provide a quantitative evaluation of denoising and super-resolution methods on US and synthetic images. Finally, we discuss the role of denoising and super-resolution as pre-processing steps for segmentation and predictive analysis of breast pathologies.
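    The framework learns where to truncate the Singular Value Decomposition; the sketch below only illustrates the underlying low-rank approximation step with a fixed, hand-chosen rank. Patch size, rank and noise level are arbitrary demo values:

```python
import numpy as np

def svd_threshold_denoise(patch, k):
    """Rank-k approximation of a patch via truncated SVD; the framework
    in the abstract learns the truncation point, here k is fixed."""
    U, s, Vt = np.linalg.svd(patch, full_matrices=False)
    s[k:] = 0.0                      # zero out small singular values
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(3)
# A rank-2 "clean" patch corrupted by additive white noise.
a, b = rng.standard_normal((64, 2)), rng.standard_normal((2, 64))
clean = a @ b
noisy = clean + 0.1 * rng.standard_normal((64, 64))
denoised = svd_threshold_denoise(noisy, k=2)
err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
```

    Because the clean patch's energy is concentrated in the leading singular values while the noise spreads over all of them, truncation removes most of the noise energy; learning the threshold per patch is what the proposed method adds.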

    A retinal vasculature tracking system guided by a deep architecture

    Many diseases, such as diabetic retinopathy (DR) and cardiovascular diseases, show their early signs on the retinal vasculature. Analysing the vasculature in fundus images may provide ophthalmologists with a tool to diagnose eye-related diseases and monitor their progression. Such analyses may also facilitate the discovery of new relations between changes in the retinal vasculature and the existence or progression of related diseases, or validate known relations. In this thesis, a data-driven method, namely a Translational Deep Belief Net (TDBN), is adapted to vasculature segmentation. The segmentation performance of the TDBN on low-resolution images was found to be comparable to that of the best-performing methods. This network is then used to implement super-resolution for the segmentation of high-resolution images, providing an acceleration during segmentation that scales with the down-sampling ratio of the input fundus image. Finally, the TDBN is extended to generate probability maps for the existence of vessel parts, namely vessel interior, centreline, boundary and crossing/bifurcation patterns in centrelines. These probability maps are used to guide a probabilistic vasculature tracking system. Although segmentation can establish the existence of vasculature in a fundus image, it does not give quantifiable measures of the vasculature, which have more practical value in medical clinics. In the second half of the thesis, a retinal vasculature tracking system is presented. This system uses Particle Filters to describe vessel morphology and topology. Unlike previous studies, the guidance for tracking is provided by the combination of probability maps generated by the TDBN. Experiments on a publicly available dataset, REVIEW, showed that the consistency of vessel widths predicted by the proposed method was better than that obtained from observers.
    Moreover, very noisy and low-contrast vessel boundaries, which were hardly identifiable to the naked eye, were accurately estimated by the proposed tracking system. Bifurcation/crossing locations were also detected almost completely during the course of tracking. Considering these promising initial results, future work involves analysing the performance of the tracking system on automatic detection of complete vessel networks in fundus images.
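    The actual tracker couples Particle Filters with TDBN probability maps over full vessel networks; the toy below runs a bootstrap particle filter on a synthetic single-ridge map, standing in for a TDBN centreline map, to show the propagate/weight/resample loop. The map construction, motion noise and particle count are all invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic map: a bright one-pixel ridge whose row drifts across columns.
n_rows, n_cols = 64, 100
true_path = (32 + 8 * np.sin(np.linspace(0, 2 * np.pi, n_cols))).astype(int)
prob_map = np.full((n_rows, n_cols), 0.05)
for c, r in enumerate(true_path):
    prob_map[r, c] = 1.0

def track_centreline(prob_map, n_particles=200):
    """Bootstrap particle filter: propagate row-position particles column
    by column, weight them by the probability map, then resample."""
    n_rows, n_cols = prob_map.shape
    particles = rng.uniform(0, n_rows, n_particles)
    path = []
    for c in range(n_cols):
        particles += rng.normal(0.0, 2.0, n_particles)       # motion model
        particles = np.clip(particles, 0, n_rows - 1)
        w = prob_map[particles.astype(int), c] + 1e-9        # likelihood
        w /= w.sum()
        path.append(float(np.sum(w * particles)))            # posterior mean
        idx = rng.choice(n_particles, n_particles, p=w)      # resample
        particles = particles[idx]
    return np.array(path)

est = track_centreline(prob_map)
mean_err = np.mean(np.abs(est - true_path))
```

    Once a few particles land on the ridge, resampling concentrates the cloud there and the posterior mean follows the drifting centreline; the thesis's system replaces this toy map with TDBN-derived probability maps and extends the state to vessel width and topology.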