101 research outputs found

    Blind deconvolution techniques and applications


    Bayesian Estimation of Smooth Altimetric Parameters: Application to Conventional and Delay/Doppler Altimetry

    This paper proposes a new Bayesian strategy for the smooth estimation of altimetric parameters. The altimetric signal is assumed to be corrupted by thermal and speckle noise distributed according to an independent and non-identically distributed Gaussian distribution. We introduce a prior enforcing a smooth temporal evolution of the altimetric parameters, which improves their physical interpretation. The posterior distribution of the resulting model is optimized using a gradient descent algorithm which allows us to compute the maximum a posteriori estimator of the unknown model parameters. This algorithm has a low computational cost that is suitable for real-time applications. The proposed Bayesian strategy and the corresponding estimation algorithm are evaluated using both synthetic and real data associated with conventional and delay/Doppler altimetry. The analysis of real Jason-2 and CryoSat-2 waveforms shows an improvement in parameter estimation when compared to state-of-the-art estimation algorithms.
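    The estimation scheme sketched above (heteroscedastic Gaussian likelihood, temporal smoothness prior, gradient-descent MAP) can be illustrated with a minimal NumPy toy example. The sketch below assumes the parameter itself is observed in noise rather than through a full altimetric waveform model, so the function name, the quadratic first-difference prior and all constants are illustrative simplifications, not the paper's algorithm.

```python
import numpy as np

def map_smooth_estimate(y, sigma2, lam=10.0, step=0.01, n_iter=3000):
    """Gradient descent on the negative log-posterior
    0.5*sum((theta - y)**2 / sigma2) + 0.5*lam*sum(diff(theta)**2)."""
    theta = y.copy()
    for _ in range(n_iter):
        grad = (theta - y) / sigma2          # heteroscedastic Gaussian data term
        d = np.diff(theta)                   # temporal differences
        grad[1:] += lam * d                  # gradient of the smoothness prior
        grad[:-1] -= lam * d
        theta = theta - step * grad
    return theta

# Example: a slowly varying parameter observed with time-varying noise power.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
truth = 2.0 + 0.5 * np.sin(2 * np.pi * t)
sigma2 = 0.05 + 0.10 * t                     # independent, non-identical variances
y = truth + np.sqrt(sigma2) * rng.standard_normal(t.size)
theta_map = map_smooth_estimate(y, sigma2)   # smooth MAP estimate of the parameter
```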

    Inverse problems in astronomical and general imaging

    The resolution and the quality of an imaged object are limited by four contributing factors. Firstly, the primary resolution limit of a system is imposed by the aperture of an instrument due to the effects of diffraction. Secondly, the finite sampling frequency, the finite measurement time and the mechanical limitations of the equipment also affect the resolution of the images captured. Thirdly, the images are corrupted by noise, a process inherent to all imaging systems. Finally, a turbulent imaging medium introduces random degradations to the signals before they are measured. In astronomical imaging, it is the atmosphere which distorts the wavefronts of the objects, severely limiting the resolution of the images captured by ground-based telescopes. These four factors affect all real imaging systems to varying degrees. All the limitations imposed on an imaging system result in the need to deduce or reconstruct the underlying object distribution from the distorted measured data. This class of problems is called inverse problems. The key to the success of solving an inverse problem is the correct modelling of the physical processes which give rise to the corresponding forward problem. The physical processes, however, carry an infinite amount of information, while only a finite number of parameters can be used in the model, so information loss is inevitable. As a result, the solution to many inverse problems requires additional information or prior knowledge. The application of prior information to inverse problems is a recurrent theme throughout this thesis.

    An inverse problem that has been an active research area for many years is interpolation, and there exist numerous techniques for solving this problem. However, many of these techniques neither account for the sampling process of the instrument nor include prior information in the reconstruction. These factors are taken into account in the proposed optimal Bayesian interpolator. The process of interpolation is also examined from the point of view of superresolution, as these processes can be viewed as being complementary.

    Since the principal effect of atmospheric turbulence on an incoming wavefront is a phase distortion, most of the inverse problem techniques devised for this seek either to estimate or to compensate for this phase component. These techniques are classified into computer post-processing methods, adaptive optics (AO) and hybrid techniques. Blind deconvolution is a post-processing technique which uses the speckle images to estimate both the object distribution and the point spread function (PSF), the latter of which is directly related to the phase. The most successful approaches are based on characterising the PSF as the aberrations over the aperture. Since the PSF is also dependent on the atmosphere, it is possible to constrain the solution using the statistics of the atmosphere; an investigation shows the feasibility of this approach. The bispectrum method is also a post-processing technique which reconstructs the spectrum of the object. The key component for phase preservation is the property of phase closure, and its application as prior information for blind deconvolution is examined.

    Blind deconvolution techniques utilise only the information in the image channel to estimate the phase, which is difficult. An alternative method for phase estimation is from a Shack-Hartmann (SH) wavefront sensing channel. However, since phase information is present in both the wavefront sensing and the image channels simultaneously, both of these approaches suffer from the problem that phase information from only one channel is used. An improved estimate of the phase is achieved by a combination of these methods, ensuring that the phase estimation is made jointly from the data in both the image and the wavefront sensing measurements. This formulation, posed as a blind deconvolution framework, is investigated in this thesis. An additional advantage of this approach is that, since speckle images are imaged in a narrowband while wavefront sensing images are captured by a charge-coupled device (CCD) camera at all wavelengths, the splitting of the light does not compromise the light level for either channel. This provides a further incentive for using simultaneous data sets.

    The effectiveness of using Shack-Hartmann wavefront sensing data for phase estimation relies on the accuracy of locating the data spots. The commonly used method, which calculates the centre of gravity of the image, is in fact prone to noise and is suboptimal. An improved method for spot location based on blind deconvolution is demonstrated.

    Ground-based adaptive optics (AO) technologies aim to correct for atmospheric turbulence in real time. Although much success has been achieved, the space- and time-varying nature of the atmosphere renders the accurate measurement of atmospheric properties difficult. It is therefore usual to perform additional post-processing on the AO data, and some of the techniques developed in this thesis are applicable to adaptive optics. One of the methods which utilises elements of both adaptive optics and post-processing is the hybrid technique of deconvolution from wavefront sensing (DWFS). Here, both the speckle images and the SH wavefront sensing data are used. The original proposal of DWFS is simple to implement but suffers from the problem that the magnitude of the object spectrum cannot be reconstructed accurately. The solution proposed for overcoming this is to use an additional set of reference star measurements. This, however, does not completely remove the original problem; in addition, it introduces other difficulties associated with reference star measurements, such as anisoplanatism and the reduction of valuable observing time. In this thesis a parameterised solution is examined which removes the need for a reference star, as well as offering the potential to overcome the problem of estimating the magnitude of the object.
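    One concrete point in the abstract is that centre-of-gravity spot location in Shack-Hartmann data is noise-prone. The sketch below implements that baseline first-moment estimator for a single sub-aperture so the bias under a noisy background is easy to reproduce; the sub-aperture size, spot parameters and function name are illustrative, and the thesis's improved deconvolution-based estimator is not shown.

```python
import numpy as np

def centre_of_gravity(subimage):
    """Spot position (row, col) from the first image moment of one
    Shack-Hartmann sub-aperture: the commonly used baseline estimator."""
    img = np.asarray(subimage, dtype=float)
    total = img.sum()
    rows, cols = np.indices(img.shape)
    return np.sum(rows * img) / total, np.sum(cols * img) / total

# Noiseless Gaussian spot centred at (7.5, 10.2) on a 16x16 sub-aperture.
r, c = np.indices((16, 16))
spot = np.exp(-((r - 7.5) ** 2 + (c - 10.2) ** 2) / (2 * 1.5 ** 2))
print(centre_of_gravity(spot))                     # recovers roughly (7.5, 10.2)

# A uniform noisy background drags the centroid toward the window centre,
# illustrating why the plain centre of gravity is a suboptimal spot locator.
rng = np.random.default_rng(1)
print(centre_of_gravity(spot + 0.05 * rng.random((16, 16))))
```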

    Back to Basics: Fast Denoising Iterative Algorithm

    We introduce Back to Basics (BTB), a fast iterative algorithm for noise reduction. Our method is computationally efficient, does not require training or ground truth data, and can be applied in the presence of independent noise as well as correlated (coherent) noise, where the noise level is unknown. We examine three study cases: natural image denoising in the presence of additive white Gaussian noise, Poisson-distributed image denoising, and speckle suppression in optical coherence tomography (OCT). Experimental results demonstrate that the proposed approach can effectively improve image quality in challenging noise settings. Theoretical guarantees for convergence stability are provided.
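    The abstract does not spell out the BTB update rule, so the sketch below only illustrates the general family it belongs to: a training-free iterative denoising loop that repeatedly feeds the residual back through a simple denoiser (classical "twicing"/boosting, here with a Gaussian filter standing in for a real denoiser). All names and parameters are illustrative assumptions, not the authors' method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def iterative_denoise(y, sigma=1.5, n_iter=5):
    """Training-free iterative denoising: start from a crude denoised image
    and repeatedly add back the denoised part of the current residual."""
    x = gaussian_filter(y, sigma)                  # initial estimate
    for _ in range(n_iter):
        residual = y - x                           # detail still hidden in the residual
        x = x + gaussian_filter(residual, sigma)   # recover some of it each pass
    return x

# Example on a synthetic image with additive white Gaussian noise.
rng = np.random.default_rng(2)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
denoised = iterative_denoise(noisy)
```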

    Algorithms for Blind Equalization Based on Relative Gradient and Toeplitz Constraints

    Blind Equalization (BE) refers to the problem of recovering the source symbol sequence from a signal received through a channel in the presence of additive noise and channel distortion, when the channel response is unknown and a training sequence is not accessible. To achieve BE, statistical or constellation properties of the source symbols are exploited. In BE algorithms, two main concerns are convergence speed and computational complexity. In this dissertation, we explore the application of the relative gradient for equalizer adaptation with a structure constraint on the equalizer matrix, for fast convergence without excessive computational complexity. We model blind equalization with symbol-rate sampling as a blind source separation (BSS) problem and study two single-carrier transmission schemes, specifically block transmission with guard intervals and continuous transmission. Under either scheme, blind equalization can be achieved using independent component analysis (ICA) algorithms with a Toeplitz or circulant constraint on the structure of the separating matrix. We also develop relative gradient versions of the widely used Bussgang-type algorithms. Processing the equalizer outputs in sliding blocks, we are able to use the relative gradient for adaptation of the Toeplitz-constrained equalizer matrix. The use of the relative gradient makes the Bussgang condition appear explicitly in the matrix adaptation and speeds up convergence. For the ICA-based and Bussgang-type algorithms with relative gradient and matrix structure constraints, we simplify the matrix adaptations to obtain equivalent equalizer vector adaptations for reduced computational cost. Efficient implementations based on the fast Fourier transform, together with approximation schemes for the cross-correlation terms used in the adaptation, are shown to further reduce the computational cost. We also consider the use of a relative gradient algorithm for channel shortening in orthogonal frequency division multiplexing (OFDM) systems. The redundancy of the cyclic prefix symbols is used to shorten a channel with a long impulse response. We show interesting preliminary results for a shortening algorithm based on the relative gradient.
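    As a concrete member of the Bussgang family referenced above, the sketch below implements the standard constant modulus algorithm (CMA) for a linear FIR equalizer with a plain stochastic gradient. It only illustrates the baseline, not the dissertation's relative-gradient, Toeplitz/circulant-constrained adaptations, and the tap length and step size are arbitrary choices.

```python
import numpy as np

def cma_equalize(x, n_taps=11, mu=1e-3, r2=1.0):
    """x: received complex baseband samples; returns equalized output and taps."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                          # centre-spike initialization
    y_out = np.zeros(len(x) - n_taps + 1, dtype=complex)
    for n in range(len(y_out)):
        u = x[n:n + n_taps][::-1]                 # regressor, most recent sample first
        y = np.vdot(w, u)                         # equalizer output, y = w^H u
        e = y * (np.abs(y) ** 2 - r2)             # constant-modulus (Bussgang) error
        w = w - mu * np.conj(e) * u               # stochastic gradient tap update
        y_out[n] = y
    return y_out, w
```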

    Joint Communication and Positioning based on Channel Estimation

    Mobile wireless communication systems have rapidly and globally become an integral part of everyday life and have brought forth the internet of things. With the evolution of mobile wireless communication systems, joint communication and positioning becomes increasingly important and enables a growing range of new applications. Humanity has already grown used to having access to multimedia data everywhere and at all times, and to employing all sorts of location-based services. Global navigation satellite systems can provide highly accurate positioning results whenever a line-of-sight path is available. Unfortunately, harsh physical environments are known to degrade the performance of existing systems. Therefore, ground-based systems can assist the existing position estimation gained by satellite systems. Determining positioning-relevant information from a unified signal structure designed for a ground-based joint communication and positioning system can either complement existing systems or substitute for them. Such a system framework promises to enhance the existing systems by enabling a highly accurate and reliable positioning performance and increased coverage. Furthermore, the unified signal structure yields synergetic effects.

    In this thesis, I propose a channel estimation-based joint communication and positioning system that employs a virtual training matrix. This matrix consists of a relatively small training percentage, plus the detected communication data itself. A core semi-blind estimation approach iteratively includes the already detected data to accurately determine the positioning-relevant parameters, mutually exchanging information between the communication part and the positioning part of the receiver and thereby creating synergy. I propose a generalized system framework, suitable to be used in conjunction with various communication system techniques. The most critical positioning-relevant parameter, the time-of-arrival, is part of a physical multipath parameter vector. Estimating the time-of-arrival, therefore, means solving a global, non-linear, multi-dimensional optimization problem. More precisely, it means solving the so-called inverse problem. I thoroughly assess various problem formulations and variations thereof, including several different measurements and estimation algorithms.

    A significant challenge, when it comes to solving the inverse problem to determine the positioning-relevant path parameters, is imposed by realistic multipath channels. Most parameter estimation algorithms have proven to perform well in moderate multipath environments. It is mathematically straightforward to optimize this performance in the sense that the number of observations has to exceed the number of parameters to be estimated. The typical parameter estimation problem, on the other hand, is based on channel estimates, and it assumes that so-called snapshot measurements are available. In the case of realistic channel models, however, the number of observations does not necessarily exceed the number of unknowns. In this thesis, I overcome this problem, proposing a method to reduce the problem dimensionality via joint model order selection and parameter estimation. Employing the approximated and estimated parameter covariance matrix inherently constrains the estimation problem’s model order selection to result in optimal parameter estimation performance and hence optimal positioning performance.

    To compare these results with the optimally achievable solution, I introduce a focused order-related lower bound in this thesis. Additionally, I use soft information as a weighting matrix to enhance the positioning performance of the algorithm. To demonstrate the feasibility and the interplay of the proposed system components, I utilize a prototype system based on multi-layer interleave division multiple access. This proposed system framework and the investigated techniques can be employed for multiple existing systems or form the basis for future joint communication and positioning systems. The assessed estimation algorithms are transferable to all kinds of joint communication and positioning system designs. This thesis demonstrates their capability to, in principle, successfully cope with challenging estimation problems stemming from harsh physical environments.
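    The inverse problem described above can be written down for a generic specular multipath channel. The notation below (pulse shape g, amplitudes α_k, delays τ_k, channel estimate ĥ, delay matrix B(τ)) is illustrative and not the thesis's exact signal model, but it shows why time-of-arrival estimation becomes a non-linear, multi-dimensional optimization with a model order K to be selected jointly.

```latex
% Specular multipath model and joint amplitude/delay/model-order estimation
% (illustrative notation, not the thesis's exact formulation).
h(t) = \sum_{k=1}^{K} \alpha_k \, g(t - \tau_k) + n(t), \qquad
(\hat{\boldsymbol{\alpha}}, \hat{\boldsymbol{\tau}}, \hat{K})
  = \arg\min_{\boldsymbol{\alpha}, \boldsymbol{\tau}, K}
    \bigl\lVert \hat{\mathbf{h}} - \mathbf{B}(\boldsymbol{\tau})\,\boldsymbol{\alpha} \bigr\rVert^2
    + \mathrm{pen}(K), \qquad
\widehat{\mathrm{ToA}} = \min_k \hat{\tau}_k
```

    Here pen(K) stands for whichever model order selection criterion is used; the abstract indicates that this selection is performed jointly with the parameter estimation, guided by the approximated and estimated parameter covariance matrix.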

    Joint methods in imaging based on diffuse image representations

    This thesis deals with the application and the analysis of different variants of the Mumford-Shah model in the context of image processing. In this kind of model, a given function is approximated in a piecewise smooth or piecewise constant manner. The numerical treatment of the discontinuities, in particular, requires additional models that are also outlined in this work. The main part of this thesis is concerned with four different topics.

    Simultaneous edge detection and registration of two images: the image edges are detected with the Ambrosio-Tortorelli model, an approximation of the Mumford-Shah model that approximates the discontinuity set with a phase field, and the registration is based on these edges. The registration obtained by this model is fully symmetric in the sense that the same matching is obtained if the roles of the two input images are swapped.

    Detection of grain boundaries from atomic-scale images of metals or metal alloys: this is an image processing problem from materials science where atomic-scale images are obtained either experimentally, for instance by transmission electron microscopy, or by numerical simulation tools. Grains are homogeneous material regions whose atomic lattice orientation differs from their surroundings. Based on a Mumford-Shah type functional, the grain boundaries are modeled as the discontinuity set of the lattice orientation. In addition to the grain boundaries, the model incorporates the extraction of a global elastic deformation of the atomic lattice. Numerically, the discontinuity set is modeled by a level set function following the approach by Chan and Vese.

    Joint motion estimation and restoration of motion-blurred video: a variational model for joint object detection, motion estimation and deblurring of consecutive video frames is proposed. For this purpose, a new motion blur model is developed that accurately describes the blur also close to the boundary of a moving object. Here, the video is assumed to consist of an object moving in front of a static background. The segmentation into object and background is handled by a Mumford-Shah type aspect of the proposed model.

    Convexification of the binary Mumford-Shah segmentation model: after considering the application of Mumford-Shah type models to specific image processing problems in the previous topics, the Mumford-Shah model itself is studied more closely. Inspired by the work of Nikolova, Esedoglu and Chan, a method is developed that allows global minimization of the binary Mumford-Shah segmentation model by solving a convex, unconstrained optimization problem. In an outlook, the segmentation of flow fields into piecewise affine regions using this convexification method is briefly discussed.
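    For reference, the standard piecewise-smooth Mumford-Shah functional and its Ambrosio-Tortorelli phase-field approximation used in the first topic take the following form (standard textbook versions; weights and notation may differ from the thesis).

```latex
% Mumford-Shah functional (u: piecewise-smooth approximation of the image g,
% K: discontinuity set) and its Ambrosio-Tortorelli approximation, in which
% the phase field v is close to 0 near edges and close to 1 elsewhere.
E_{\mathrm{MS}}(u, K) = \mu \int_{\Omega} (u - g)^2 \,\mathrm{d}x
  + \int_{\Omega \setminus K} |\nabla u|^2 \,\mathrm{d}x
  + \nu \, \mathcal{H}^{d-1}(K)

E_{\varepsilon}(u, v) = \mu \int_{\Omega} (u - g)^2 \,\mathrm{d}x
  + \int_{\Omega} v^2 \, |\nabla u|^2 \,\mathrm{d}x
  + \nu \int_{\Omega} \Bigl( \varepsilon \, |\nabla v|^2
      + \frac{(1 - v)^2}{4 \varepsilon} \Bigr) \mathrm{d}x
```

    As ε tends to 0 the Ambrosio-Tortorelli energy Γ-converges to the Mumford-Shah energy, which is what justifies replacing the explicit discontinuity set K with the smooth phase field v in the numerics.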
