Image Deblurring Using Derivative Compressed Sensing for Optical Imaging Application
Reconstruction of multidimensional signals from samples of their partial derivatives is a standard problem in inverse theory. Such problems routinely arise in numerous areas of applied science, including optical imaging, laser interferometry, computer vision, remote sensing and control. Though ill-posed in nature, the above problem can be solved in a unique and stable manner, provided proper regularization and relevant boundary conditions are applied. In this paper, however, a more challenging setup is addressed, in which one has to recover an image of interest from its noisy and blurry version, while the only information available about the imaging system at hand is the amplitude of the generalized pupil function (GPF) along with partial observations of the gradient of the GPF's phase. In this case, the phase-related information is collected using a simplified version of the Shack-Hartmann interferometer, after which the entire phase is recovered by means of derivative compressed sensing. Subsequently, the estimated phase can be combined with the amplitude of the GPF to produce an estimate of the point spread function (PSF), whose knowledge is essential for subsequent image deconvolution. In summary, the principal contribution of this work is twofold. First, we demonstrate how to simplify the construction of the Shack-Hartmann interferometer so as to make it less expensive and hence more accessible. Second, it is shown by means of numerical experiments that the above simplification and its associated solution scheme produce image reconstructions of a quality comparable to those obtained using dense sampling of the GPF phase.
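The core recovery step, estimating a signal from noisy, partial samples of its gradient under regularization, can be sketched in a few lines. This is a generic regularized least-squares illustration, not the paper's derivative-compressed-sensing algorithm; the signal, sampling pattern, and regularization weight below are all invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
t = np.linspace(0, 1, n)
x_true = np.sin(2 * np.pi * t) + 0.3 * t      # stand-in for a phase profile

# Forward-difference operator D: x (length n) -> gradient (length n-1)
D = np.diff(np.eye(n), axis=0)

# Observe only a random subset of the gradient samples, with noise
m = 40
idx = rng.choice(n - 1, size=m, replace=False)
y = (D @ x_true)[idx] + 0.01 * rng.standard_normal(m)

# Regularised least squares: minimise ||S D x - y||^2 + lam ||D x||^2,
# with a soft constraint pinning the mean (gradient data cannot fix the offset)
SD = D[idx]
lam = 1e-3
A = SD.T @ SD + lam * (D.T @ D) + np.ones((n, n)) / n
x_hat = np.linalg.solve(A, SD.T @ y)
x_hat += x_true.mean()            # restore the unobservable offset (demo only)

err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```

The mean-pinning term is needed because derivatives determine a signal only up to an additive constant, which is exactly why boundary conditions matter in the full problem.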
Probability-guaranteed set-membership state estimation for polynomially uncertain linear time-invariant systems
© 2018 IEEE. Conventional deterministic set-membership (SM) estimation is limited to unknown-but-bounded uncertainties. In order to exploit distributional information of probabilistic uncertainties, a probability-guaranteed SM state estimation approach is proposed for uncertain linear time-invariant systems. This approach takes into account polynomial dependence on probabilistic uncertain parameters as well as additive stochastic noises. The purpose is to compute, at each time instant, a bounded set that contains the actual state with a guaranteed probability. The proposed approach relies on the extended form of an observer representation over a sliding window. For the offline observer synthesis, a polynomial-chaos-based method is proposed to minimize the averaged H2 estimation performance with respect to the probabilistic uncertain parameters. It explicitly accounts for the polynomial uncertainty structure, whereas most of the literature relies on conservative affine or polytopic overbounding. Online state estimation restructures the extended observer form and constructs a Gaussian mixture model to approximate the state distribution. This enables computationally efficient ellipsoidal calculus to derive SM estimates with a predefined confidence level. The proposed approach preserves time invariance of the uncertain parameters and fully exploits the polynomial uncertainty structure to achieve tighter SM bounds. This improvement is illustrated by a numerical example with a comparison to a deterministic zonotopic method. Peer reviewed. Postprint (author's final draft).
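The final step described above, extracting an ellipsoid that contains the state with a predefined confidence level, can be illustrated for a single Gaussian component (the paper works with a Gaussian mixture; the mean and covariance below are made-up numbers, not values from the paper).

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical 2-D state estimate: mean and covariance from some observer
mean = np.array([1.0, -0.5])
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])

# Confidence ellipsoid {x : (x - mean)^T cov^{-1} (x - mean) <= r2}
# that contains the state with probability p under the Gaussian assumption
p = 0.95
r2 = chi2.ppf(p, df=mean.size)

# Monte-Carlo check of the guaranteed coverage
rng = np.random.default_rng(1)
samples = rng.multivariate_normal(mean, cov, size=20000)
d = samples - mean
m2 = np.einsum("ij,ij->i", d @ np.linalg.inv(cov), d)  # squared Mahalanobis distance
coverage = float(np.mean(m2 <= r2))
print(f"empirical coverage: {coverage:.3f}")
```

The chi-square quantile gives the ellipsoid radius exactly for a Gaussian; for a mixture, one such ellipsoid per component (or a suitable union bound) would be needed.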
Ultra-fast Imaging of Two-Phase Flow in Structured Monolith Reactors; Techniques and Data Analysis
This thesis will address the use of nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI) techniques to probe the “monolith reactor”, which consists of a structured catalyst over which reactions may occur. This reactor has emerged as a potential alternative to more traditional chemical engineering systems such as trickle bed and slurry reactors. However, being a relatively new design, its associated flow phenomena and design procedures are not rigorously understood, which is retarding its acceptance in industry. Traditional observations are unable to provide the necessary information for design since the systems are opaque and dynamic. Therefore, NMR is proposed as an ideal tool to probe these systems in detail.
The theory of NMR is summarised and the development of novel NMR techniques is presented. Novel techniques are validated in simple systems, and tested in more complex systems to ascertain their quantitative nature, and to find their limitations. These techniques are improvements over existing techniques in that they either decrease the acquisition time (allowing the observation of dynamically-changing systems) or allow us to probe systems in different ways to extract useful information. The goal of this research is to better understand the flow phenomena present in such systems, and to use this information to design better, more efficient, more controllable industrial reactors.
The analysis of the NMR data acquired is discussed in detail, and several novel image-processing techniques have been developed to aid in the quantification of features within the images, and also to measure quantities such as holdup and velocity. These novel techniques are validated, and then applied to the systems of interest.
Various configurations of monolith reactor, ranging from low-flow-rate systems to more challenging (and more industrially relevant) turbulent systems, are probed using these methods, and the contrasting flow phenomena and performance of these systems are discussed, with a view to optimising the choice of design parameters.
Improving Maternal and Fetal Cardiac Monitoring Using Artificial Intelligence
Early diagnosis of possible risks in the physiological status of the fetus and mother during pregnancy and delivery is critical and can reduce mortality and morbidity. For example, early detection of life-threatening congenital heart disease may increase the survival rate and reduce morbidity while allowing parents to make informed decisions. To study cardiac function, a variety of signals need to be collected. In practice, several heart monitoring methods, such as electrocardiography (ECG) and photoplethysmography (PPG), are commonly performed. Although there are several methods for monitoring fetal and maternal health, research is currently underway to enhance the mobility, accuracy, automation, and noise resistance of these methods so that they can be used extensively, even at home. Artificial Intelligence (AI) can help to design a precise and convenient monitoring system. To achieve these goals, the following objectives are defined in this research:
The first step for a signal acquisition system is to obtain high-quality signals. As the first objective, a signal processing scheme is explored to improve the signal-to-noise ratio (SNR) of signals and extract the desired signal from a noisy one with negative SNR (i.e., the power of the noise is greater than that of the signal). It is worth mentioning that ECG and PPG signals are sensitive to noise from a variety of sources, increasing the risk of misinterpretation and interfering with the diagnostic process. The noise typically arises from power-line interference, white noise, electrode contact noise, muscle contraction, baseline wander, instrument noise, motion artifacts, and electrosurgical noise. Even a slight variation in the obtained ECG waveform can impair the understanding of the patient's heart condition and affect the treatment procedure. Recent solutions, such as adaptive and blind source separation (BSS) algorithms, still have drawbacks, such as the need for a model of the noise or the desired signal, tuning and calibration, and inefficiency when dealing with excessively noisy signals. Therefore, the final goal of this step is to develop a robust algorithm that can estimate the noise, even when the SNR is negative, using the BSS method, and remove it with an adaptive filter.
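As a simplified illustration of adaptive noise removal (a textbook LMS noise canceller, not the thesis's BSS-based algorithm), suppose a reference noise source is available and the unknown path from that reference to the sensor must be learned; every signal and parameter here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
t = np.arange(n)
clean = np.sin(2 * np.pi * t / 100)              # stand-in for an ECG-like signal
noise_src = rng.standard_normal(n)               # reference noise source
# The noise reaches the sensor through an unknown FIR channel
channel = np.array([0.6, 0.3, -0.2])
noise_at_sensor = np.convolve(noise_src, channel, mode="full")[:n]
primary = clean + noise_at_sensor                # observed noisy signal

# LMS adaptive noise canceller: learn the channel from the reference input
L, mu = 8, 0.02
w = np.zeros(L)
out = np.zeros(n)
for i in range(L, n):
    x = noise_src[i - L + 1:i + 1][::-1]         # most recent reference samples
    e = primary[i] - w @ x                       # error = cleaned sample
    w += mu * e * x
    out[i] = e

# SNR before and after (skipping the adaptation transient)
def snr_db(sig, ref):
    err = sig - ref
    return 10 * np.log10(np.sum(ref ** 2) / np.sum(err ** 2))

before = snr_db(primary[1000:], clean[1000:])
after = snr_db(out[1000:], clean[1000:])
print(f"SNR before: {before:.1f} dB, after: {after:.1f} dB")
```

The canceller works because the desired signal is uncorrelated with the reference noise, which is also the key assumption exploited by the more powerful BSS approach pursued in the thesis.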
The second objective concerns monitoring maternal and fetal ECG. Previous non-invasive methods used the maternal abdominal ECG (MECG) to extract the fetal ECG (FECG). These methods need to be calibrated to generalize well; in other words, for each new subject, calibration against a trusted device is required, which is difficult and time-consuming. The calibration is also susceptible to errors. We explore deep learning (DL) models for domain mapping, such as Cycle-Consistent Adversarial Networks, to map MECG to FECG and vice versa. The advantage of the proposed DL method over state-of-the-art approaches, such as adaptive filters or blind source separation, is that it generalizes well to unseen subjects. Moreover, it does not need calibration, is not sensitive to the heart rate variability of the mother and fetus, and can handle low signal-to-noise ratio (SNR) conditions.
Third, an AI-based system that can measure continuous systolic blood pressure (SBP) and diastolic blood pressure (DBP) with minimal electrode requirements is explored. The most common method of measuring blood pressure uses cuff-based equipment, which cannot monitor blood pressure continuously, requires calibration, and is difficult to use. Other solutions use a synchronized ECG and PPG combination, which is still inconvenient and challenging to synchronize. The proposed method overcomes these issues and, unlike other solutions, uses only the PPG signal. Using only PPG for blood pressure is more convenient since it requires a single sensor on the finger, where acquisition is more resilient to movement-induced error.
The fourth objective is to detect anomalies in FECG data. The requirement of thousands of manually annotated samples is a concern for state-of-the-art detection systems, especially for FECG, where few publicly available datasets are annotated for each FECG beat. Therefore, we utilize active learning and transfer learning to train an FECG anomaly detection system with the fewest training samples and high accuracy. In this part, a model is first trained to detect ECG anomalies in adults; it is then fine-tuned to detect anomalies in FECG. Only the most influential samples from the training set are selected for training, which minimizes the labeling effort.
Because of physician shortages and rural geography, access to prenatal care can be limited, and remote monitoring might improve pregnant women's ability to receive it. Increased compliance with prenatal treatment and linked care amongst various providers are two possible benefits of remote monitoring. Maternal and fetal remote monitoring can be effective only if the recorded signals are transmitted correctly. Therefore, the last objective is to design a compression algorithm that can compress signals (such as ECG) with a higher ratio than the state of the art and perform decompression quickly without distortion. The proposed compression is fast thanks to a time-domain B-spline approach, and the compressed data can be used for visualization and monitoring without decompression owing to the properties of B-splines. Moreover, a stochastic optimization is designed to retain signal quality, achieving a high compression ratio without distorting the signal for diagnostic purposes.
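The B-spline idea can be sketched with SciPy's smoothing splines: the signal is stored as knots and coefficients rather than raw samples, and "decompression" is just spline evaluation. The waveform, smoothing factor, and sizes below are invented for illustration; this is not the thesis's optimized codec.

```python
import numpy as np
from scipy.interpolate import splrep, splev

n = 2000
t = np.linspace(0, 2, n)
# Stand-in for an ECG-like trace (a real recording would be loaded here)
sig = np.sin(2 * np.pi * 3 * t) + 0.2 * np.sin(2 * np.pi * 11 * t)

# B-spline "compression": keep only knots + coefficients instead of n samples
tck = splrep(t, sig, s=0.001 * n)     # smoothing factor trades size vs fidelity
knots, coeffs, degree = tck
compressed_size = len(knots) + len(coeffs)
ratio = n / compressed_size

# "Decompression" is a single spline evaluation
rec = splev(t, tck)
prd = 100 * np.linalg.norm(rec - sig) / np.linalg.norm(sig)  # percent RMS difference
print(f"compression ratio: {ratio:.1f}, PRD: {prd:.2f}%")
```

Because the compressed representation is itself a smooth function, it can be plotted or inspected directly, which mirrors the visualization-without-decompression property claimed above.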
In summary, the components for creating an end-to-end system for day-to-day maternal and fetal cardiac monitoring can be envisioned as a combination of all the tasks listed above. PPG and ECG recorded from the mother can be denoised using the proposed deconvolution strategy. Compression can then be employed to transmit the signals. The trained CycleGAN model can be used to extract the FECG from the MECG, and the model trained with active transfer learning can detect anomalies in both MECG and FECG. Simultaneously, maternal blood pressure is retrieved from the PPG signal. This information can be used to monitor the cardiac status of mother and fetus, and also to fill in reports such as the partogram.
Dictionary optimization for representing sparse signals using Rank-One Atom Decomposition (ROAD)
Dictionary learning has attracted growing research interest in recent years. As it is a bilinear inverse problem, one typical way to address it is to iteratively alternate between two stages: sparse coding and dictionary update. The general principle of the alternating approach is to fix one variable and optimize the other. Unfortunately, for the alternating method, an ill-conditioned dictionary in the training process may not only introduce numerical instability but also trap the overall training process at a singular point. Moreover, it makes convergence difficult to analyze, and few dictionary learning algorithms have been proved to converge globally. For other bilinear inverse problems, such as short-and-sparse deconvolution (SaSD) and convolutional dictionary learning (CDL), the alternating method is still a popular choice. As these bilinear inverse problems are also ill-posed and complicated, they are tricky to handle. Additional inner iterative methods are usually required for both updating stages, which aggravates the difficulty of analyzing the convergence of the whole learning process. It is also challenging to determine the number of iterations for each stage, as over-tuning either stage can trap the whole process in a local minimum far from the ground truth.
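For reference, the two-stage alternating scheme criticized here can be written in a few lines: a generic ISTA-plus-MOD baseline on synthetic data, with invented sizes and parameters. This is the approach the thesis argues against, not ROAD itself.

```python
import numpy as np

rng = np.random.default_rng(4)
m, k, N, s = 16, 32, 500, 3          # signal dim, atoms, training samples, sparsity

# Synthetic training set: sparse combinations of a ground-truth dictionary
D_true = rng.standard_normal((m, k))
D_true /= np.linalg.norm(D_true, axis=0)
X = np.zeros((k, N))
for j in range(N):
    X[rng.choice(k, s, replace=False), j] = rng.standard_normal(s)
Y = D_true @ X

D = rng.standard_normal((m, k))
D /= np.linalg.norm(D, axis=0)
lam = 0.1
for _ in range(30):
    # Stage 1: sparse coding by ISTA, dictionary fixed
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    Z = np.zeros((k, N))
    for _ in range(50):
        G = Z - step * (D.T @ (D @ Z - Y))
        Z = np.sign(G) * np.maximum(np.abs(G) - step * lam, 0.0)
    # Stage 2: dictionary update by least squares (MOD), coefficients fixed
    D = Y @ np.linalg.pinv(Z)
    dead = np.linalg.norm(D, axis=0) < 1e-8      # re-seed unused atoms
    D[:, dead] = rng.standard_normal((m, int(dead.sum())))
    norms = np.linalg.norm(D, axis=0)
    D /= norms
    Z *= norms[:, None]                          # keep the product D @ Z unchanged

err = np.linalg.norm(Y - D @ Z) / np.linalg.norm(Y)
print(f"relative representation error after 30 alternations: {err:.3f}")
```

The dead-atom re-seeding line hints at exactly the kind of ad hoc safeguard (against ill-conditioned or degenerate dictionaries) that a single-variable formulation like ROAD is designed to avoid.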
To mitigate the issues resulting from the alternating method, this thesis proposes a novel algorithm termed rank-one atom decomposition (ROAD), which recasts a bilinear inverse problem as an optimization problem with respect to a single variable, namely a set of rank-one matrices. The resulting algorithm is therefore single-stage, minimizing the sparsity of the coefficients while maintaining the data consistency constraint throughout the whole learning process. Inspired by recent advances in applying the alternating direction method of multipliers (ADMM) to nonconvex nonsmooth problems, an ADMM solver is adopted to address the ROAD problem, and a lower bound on the penalty parameter is derived to guarantee convergence of the augmented Lagrangian despite the nonconvexity of the optimization formulation. Compared to two-stage dictionary learning methods, ROAD simplifies the learning process, eases the difficulty of analyzing convergence, and avoids the singular point issue. From a practical point of view, ROAD reduces the number of tuning parameters required by other benchmark algorithms. Numerical tests reveal that ROAD outperforms other benchmark algorithms in both synthetic data tests and single image super-resolution applications. In addition to dictionary learning, the ROAD formulation can also be extended to solve the SaSD and CDL problems, recasting each into a one-variable optimization problem. Numerical tests illustrate that ROAD estimates convolutional kernels better than the latest SaSD and CDL algorithms.
Compressed Sensing in the Presence of Side Information
Reconstruction of continuous signals from a number of their discrete samples is central to digital signal processing. Digital devices can only process discrete data, and thus processing continuous signals requires discretization. After discretization, the possibility of unique reconstruction of the source signal from its samples is crucial. Classical sampling theory provides bounds on the sampling rate for unique source reconstruction, known as the Nyquist sampling rate. Recently, a new sampling scheme, Compressive Sensing (CS), has been formulated for sparse signals.
CS is an active area of research in signal processing. It has revolutionized the classical sampling theorems and has provided a new scheme to sample and reconstruct sparse signals uniquely, below Nyquist sampling rates. A signal is called (approximately) sparse when a relatively large number of its elements are (approximately) equal to zero. For the class of sparse signals, sparsity can be viewed as prior information about the source signal. CS has found numerous applications and has improved some image acquisition devices.
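The sub-Nyquist idea can be demonstrated in a few lines: an s-sparse signal of length n is recovered from m far fewer than n random linear samples. The greedy solver below is Orthogonal Matching Pursuit, one of several standard CS decoders; all sizes are illustrative, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, s = 256, 128, 4                # ambient dimension, measurements, sparsity

x = np.zeros(n)                      # s-sparse source signal
support = rng.choice(n, s, replace=False)
x[support] = (1 + rng.random(s)) * rng.choice([-1.0, 1.0], size=s)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x                                      # m << n linear samples

# Orthogonal Matching Pursuit: greedily build the support, refit by least squares
residual, idx = y.copy(), []
for _ in range(s):
    idx.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
    residual = y - A[:, idx] @ coef
x_hat = np.zeros(n)
x_hat[idx] = coef

err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(f"relative recovery error: {err:.2e}")
```

Here m = n/2 measurements suffice for a signal with only 4 active components, whereas classical sampling theory would tie the required rate to the signal's bandwidth rather than its sparsity.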
Interesting instances of CS arise when, apart from sparsity, side information is available about the source signals. The side information can concern the source structure, distribution, etc. Such cases can be viewed as extensions of classical CS, and we are interested in incorporating the side information either to improve the quality of the source reconstruction or to decrease the number of required samples for accurate reconstruction.
A general CS problem can be transformed into an equivalent optimization problem. In this thesis, a special case of CS with side information about the feasible region of the equivalent optimization problem is studied. It is shown that in such cases the uniqueness and stability of the equivalent optimization problem still hold. An efficient reconstruction method is then proposed. To demonstrate the practical value of the proposed scheme, the algorithm is applied to two real-world applications: image deblurring in optical imaging and surface reconstruction in the gradient field. Experimental results are provided to further investigate and confirm the effectiveness and usefulness of the proposed scheme.
An automated image processing system for the detection of photoreceptor cells in adaptive optics retinal images
The rapid progress of Adaptive Optics (AO) imaging in recent decades has had a transformative impact on the entire approach underpinning investigations of retinal tissue. Capable of imaging the retina in vivo at the cellular level, AO systems have revealed new insights into retinal structures, function, and the origins of various retinal pathologies. This has expanded the field of clinical research and opened a wide range of applications for AO imaging. Advances in image processing techniques contribute to better observation of retinal microstructures and therefore more accurate detection of pathological conditions. The development of automated tools for processing images obtained with AO allows for objective examination of a larger number of images with time and cost savings, and thus facilitates the use of AO imaging as a practical and efficient tool by making it widely accessible to the clinical ophthalmic community.
In this work, an image processing framework is developed that allows for enhancement of AO high-resolution retinal images and accurate detection of photoreceptor cells. The proposed framework consists of several stages: image quality assessment, illumination compensation, noise suppression, image registration, image restoration, enhancement and detection of photoreceptor cells. The visibility of retinal features is improved by tackling specific components of the AO imaging system affecting the quality of acquired retinal data. Therefore, we attempt to fully recover AO retinal images, free from any induced degradation effects. A comparative study of different methods and evaluation of their efficiency on retinal datasets is performed by assessing image quality. In order to verify the achieved results, the cone packing density distribution was calculated and correlated with statistical histological data. From the performed experiments, it can be concluded that the proposed image processing framework can effectively improve photoreceptor cell image quality and thus can serve as a platform for further investigation of retinal tissues. Quantitative analysis of the retinal images obtained with the proposed image processing framework can be used for comparison with data related to pathological retinas, as well as for understanding the effect of age and retinal pathology on cone packing density and other microstructures.
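A minimal sketch of automated cone detection (far simpler than the framework described above): synthetic blobs stand in for photoreceptors, and detection is smoothing followed by thresholded local maxima. Every parameter here is invented for the demo.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

# Synthetic "cone mosaic": bright spots on a grid stand in for photoreceptors
rng = np.random.default_rng(6)
img = np.zeros((64, 64))
true_centres = [(r, c) for r in range(8, 64, 16) for c in range(8, 64, 16)]
for r, c in true_centres:
    img[r, c] = 1.0
img = gaussian_filter(img, sigma=2)            # blur spots into cell-like blobs
img += 0.005 * rng.standard_normal(img.shape)  # imaging noise

# Detection: smooth, then keep local maxima above a relative threshold
smoothed = gaussian_filter(img, sigma=1)
local_max = maximum_filter(smoothed, size=7) == smoothed
detected = np.argwhere(local_max & (smoothed > 0.5 * smoothed.max()))

print(f"detected {len(detected)} cells (expected {len(true_centres)})")
```

Once cell centres are available, quantities such as the cone packing density follow directly by counting detections per unit retinal area.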
Blind image deconvolution: nonstationary Bayesian approaches to restoring blurred photos
High-quality digital images have become pervasive in modern scientific and everyday life, in areas from photography to astronomy, CCTV, microscopy, and medical imaging. However, there are always limits to the quality of these images due to uncertainty and imprecision in the measurement systems. Modern signal processing methods offer the promise of overcoming some of these problems by post-processing these blurred and noisy images. In this thesis, novel methods using nonstationary statistical models are developed for the removal of blurs from out-of-focus and other types of degraded photographic images.
The work tackles the fundamental problem of blind image deconvolution (BID); its goal is to restore a sharp image from a blurred observation when the blur itself is completely unknown. This is a "doubly ill-posed" problem: the extreme lack of information must be countered by strong prior constraints about sensible types of solution. In this work, the hierarchical Bayesian methodology is used as a robust and versatile framework to impart the required prior knowledge.
The thesis is arranged in two parts. In the first part, the BID problem is reviewed, along with techniques and models for its solution. Observation models are developed, with an emphasis on photographic restoration, concluding with a discussion of how these are reduced to the common linear spatially-invariant (LSI) convolutional model. Classical methods for the solution of ill-posed problems are summarised to provide a foundation for the main theoretical ideas that will be used under the Bayesian framework. This is followed by an in-depth review and discussion of the various prior image and blur models appearing in the literature, and then their applications to solving the problem with both Bayesian and non-Bayesian techniques.
The second part covers novel restoration methods, making use of the theory presented in Part I. Firstly, two new nonstationary image models are presented. The first models local variance in the image, and the second extends this with locally adaptive non-causal autoregressive (AR) texture estimation and local mean components. These models allow for recovery of image details including edges and texture, whilst preserving smooth regions. Most existing methods do not model the boundary conditions correctly for deblurring of natural photographs, and a chapter is devoted to exploring Bayesian solutions to this topic.
Due to the complexity of the models used and the problem itself, there are many challenges which must be overcome for tractable inference. Using the new models, three different inference strategies are investigated: first, the Bayesian maximum marginalised a posteriori (MMAP) method with deterministic optimisation; then variational Bayesian (VB) distribution approximation; and finally simulation of the posterior distribution using the Gibbs sampler. Of these, we find the Gibbs sampler to be the most effective way to deal with a variety of different types of unknown blurs. Along the way, details are given of the numerical strategies developed to give accurate results and to accelerate performance. Finally, the thesis demonstrates state-of-the-art results in blind restoration of synthetic and real degraded images, such as recovering details in out-of-focus photographs.
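The Gibbs sampler at the heart of the preferred inference strategy can be illustrated on a toy target: a bivariate Gaussian, unrelated to any image model, where each variable is drawn in turn from its conditional distribution given the other.

```python
import numpy as np

# Gibbs sampling for a bivariate Gaussian with correlation rho:
# alternately draw each coordinate from its conditional given the other.
rho = 0.8
rng = np.random.default_rng(7)
n_samples, burn_in = 20000, 1000
x, y = 0.0, 0.0
samples = np.empty((n_samples, 2))
for i in range(n_samples):
    x = rng.normal(rho * y, np.sqrt(1 - rho ** 2))   # x | y ~ N(rho*y, 1 - rho^2)
    y = rng.normal(rho * x, np.sqrt(1 - rho ** 2))   # y | x ~ N(rho*x, 1 - rho^2)
    samples[i] = x, y

kept = samples[burn_in:]
emp_rho = float(np.corrcoef(kept.T)[0, 1])
print(f"empirical correlation: {emp_rho:.3f}")
```

In the thesis's setting the conditionals are over image pixels, blur coefficients, and hyperparameters rather than two scalars, but the alternating-draw mechanism is the same.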
Bayesian Inference for Genomic Data Analysis
High-throughput genomic data contain vast amounts of information influenced by the complex biological processes in the cell. As such, appropriate mathematical modeling frameworks are required to understand the data and the data-generating processes. This dissertation focuses on the formulation of mathematical models and the description of appropriate computational algorithms to obtain insights from genomic data.
Specifically, the characterization of intra-tumor heterogeneity is studied. Based on the total number of allele copies at the genomic locations in the tumor subclones, the problem is viewed from two perspectives: the presence or absence of the copy-neutrality assumption. Under copy-neutrality, it is assumed that the genome contains mutational variability and that three possible genotypes may be present at each genomic location; the genotypes of all the genomic locations in the tumor subclones are therefore modeled by a ternary matrix. In the second case, in addition to mutational variability, it is assumed that the genomic locations may be affected by structural variabilities such as copy number variation (CNV). Thus, the genotypes are modeled with a pair of (Q + 1)-ary matrices. Using the categorical Indian buffet process (cIBP), a state-space modeling framework is employed to describe the two processes, and sequential Monte Carlo (SMC) methods for dynamic models are applied to perform inference on important model parameters.
Moreover, the problem of estimating a gene regulatory network (GRN) from measurements with missing values is presented. Specifically, gene expression time series data may be missing entire expression values at a single time point or at a set of consecutive time points, yet complete data are often needed to make inference on the underlying GRN. With the missing measurements, a dynamic stochastic model is used to describe the evolution of gene expression, and point-based Gaussian approximation (PBGA) filters with one-step or two-step missing measurements are applied for the inference. Finally, the problem of deconvolving gene expression data from complex heterogeneous biological samples is examined, where the observed data are a mixture of different cell types. A statistical description of the problem is used, and the SMC method for static models is applied to estimate the cell-type-specific expressions and the cell-type proportions in the heterogeneous samples.
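The sequential Monte Carlo machinery used throughout can be sketched as a bootstrap particle filter on a toy one-dimensional linear-Gaussian state-space model; the model and all parameters are invented here, and the dissertation's models are far richer.

```python
import numpy as np

rng = np.random.default_rng(8)
T, P = 100, 1000
a, q, r = 0.9, 0.5, 0.3            # state transition, process noise, obs noise

# Simulate a state trajectory and noisy observations:
#   x_t = a * x_{t-1} + q * w_t,   y_t = x_t + r * v_t
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + q * rng.standard_normal()
y = x + r * rng.standard_normal(T)

# Bootstrap particle filter: propagate, weight by likelihood, resample
particles = rng.standard_normal(P)
est = np.zeros(T)
for t in range(T):
    particles = a * particles + q * rng.standard_normal(P)
    logw = -0.5 * ((y[t] - particles) / r) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = w @ particles
    particles = particles[rng.choice(P, size=P, p=w)]   # multinomial resampling

rmse = np.sqrt(np.mean((est - x) ** 2))
print(f"filter RMSE: {rmse:.3f} (observation noise std was {r})")
```

For this linear-Gaussian toy the Kalman filter would be exact; the value of SMC is that the same propagate-weight-resample loop still applies to the nonlinear, non-Gaussian models arising in genomics.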
Image Restoration
This book presents a sample of recent contributions from researchers around the world in the field of image restoration. It consists of 15 chapters organized in three main sections (Theory, Applications, Interdisciplinarity). The topics cover different aspects of the theory of image restoration, but the book is also an occasion to highlight new research topics related to the emergence of original imaging devices. From these arise some truly challenging image reconstruction/restoration problems that open the way to new fundamental scientific questions, closely related to the world we interact with.