    Sparse representation-based synthetic aperture radar imaging

    There is increasing interest in using synthetic aperture radar (SAR) images in automated target recognition and decision-making tasks. The success of such tasks depends on how well the reconstructed SAR images exhibit certain features of the underlying scene. Based on the observation that typical underlying scenes usually exhibit sparsity in terms of such features, we develop an image formation method which formulates the SAR imaging problem as a sparse signal representation problem. Sparse signal representation, which has mostly been exploited in real-valued problems, has many capabilities such as superresolution and feature enhancement for various reconstruction and recognition tasks. However, for problems of a complex-valued nature, such as SAR, a key challenge is how to choose the dictionary and the representation scheme for effective sparse representation. Since we are usually interested in features of the magnitude of the SAR reflectivity field, our new approach is designed to sparsely represent the magnitude of the complex-valued scattered field. This turns the image reconstruction problem into a joint optimization problem over the representation of the magnitude and phase of the underlying field reflectivities. We develop the mathematical framework for this method and propose an iterative solution for the corresponding joint optimization problem. Our experimental results demonstrate the superiority of this method over previous approaches in terms of both producing high-quality SAR images and exhibiting robustness to uncertain or limited data.
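    The abstract describes enforcing sparsity on the magnitude of a complex-valued reflectivity field through a joint magnitude/phase optimization. The sketch below is not the authors' algorithm; it is a minimal illustration of the same idea using a complex-valued iterative soft-thresholding scheme in which the threshold acts only on the magnitude of each pixel while its phase is kept free. The forward operator A, the regularization weight lam, and the toy data are assumptions made for the example.

```python
# Minimal sketch: magnitude-sparse reconstruction of a complex reflectivity field
# via complex ISTA (illustrative only, not the authors' exact method).
import numpy as np

def magnitude_sparse_sar(y, A, lam=0.1, n_iter=200):
    """Reconstruct a complex reflectivity f from data y = A f + noise, promoting sparse |f|."""
    f = np.zeros(A.shape[1], dtype=complex)
    step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / Lipschitz constant of the data-fit gradient
    for _ in range(n_iter):
        grad = A.conj().T @ (A @ f - y)              # gradient of 0.5 * ||A f - y||^2
        z = f - step * grad
        mag = np.maximum(np.abs(z) - step * lam, 0)  # soft-threshold the magnitude only
        f = mag * np.exp(1j * np.angle(z))           # keep each pixel's phase
    return f

# Toy usage: a random 64x256 SAR-like projection of a 10-pixel-sparse scene.
rng = np.random.default_rng(0)
A = (rng.standard_normal((64, 256)) + 1j * rng.standard_normal((64, 256))) / np.sqrt(64)
f_true = np.zeros(256, dtype=complex)
f_true[rng.choice(256, 10, replace=False)] = np.exp(1j * rng.uniform(0, 2 * np.pi, 10))
f_hat = magnitude_sparse_sar(A @ f_true, A)
```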

    Bayesian super-resolution with application to radar target recognition

    This thesis is concerned with methods to facilitate automatic target recognition using images generated from a group of associated radar systems. Target recognition algorithms require access to a database of previously recorded or synthesized radar images for the targets of interest, or a database of features based on those images. However, the resolution of a new image acquired under non-ideal conditions may not be as good as that of the images used to generate the database. Therefore it is proposed to use super-resolution techniques to match the resolution of new images with the resolution of database images. A comprehensive review of the literature is given for super-resolution when used either on its own, or in conjunction with target recognition. A new super-resolution algorithm is developed, based on Bayesian statistics implemented numerically via Markov chain Monte Carlo. This algorithm allows uncertainty in the super-resolved image to be taken into account in the target recognition process. It is shown that the Bayesian approach improves the probability of correct target classification over standard super-resolution techniques. The new super-resolution algorithm is demonstrated using a simple synthetically generated data set and is compared to other similar algorithms. A variety of effects that degrade super-resolution performance, such as defocus, are analyzed and techniques to compensate for these are presented. Performance of the super-resolution algorithm is then tested as part of a Bayesian target recognition framework using measured radar data.
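    As a rough illustration of the kind of MCMC-based Bayesian super-resolution the thesis describes (not its actual algorithm), the sketch below samples a 1-D high-resolution signal under a Gaussian likelihood and smoothness prior with random-walk Metropolis-Hastings; the spread of the posterior samples is the uncertainty that could be propagated into a recognition step. The down-sampling operator, noise level, prior strength, and proposal width are all illustrative assumptions.

```python
# Minimal sketch: Metropolis-Hastings sampling of a super-resolved 1-D signal.
import numpy as np

rng = np.random.default_rng(1)
n_hi, factor, sigma, alpha = 32, 2, 0.05, 5.0

# Low-resolution observation y = D x + noise, with D a 2x averaging operator.
D = np.zeros((n_hi // factor, n_hi))
for i in range(n_hi // factor):
    D[i, factor * i:factor * (i + 1)] = 1.0 / factor
x_true = np.sin(np.linspace(0, 2 * np.pi, n_hi))
y = D @ x_true + sigma * rng.standard_normal(n_hi // factor)

def log_post(x):
    data = -0.5 * np.sum((y - D @ x) ** 2) / sigma ** 2   # Gaussian likelihood
    prior = -0.5 * alpha * np.sum(np.diff(x) ** 2)        # smoothness prior
    return data + prior

x, samples = np.zeros(n_hi), []
for it in range(20000):
    prop = x + 0.05 * rng.standard_normal(n_hi)           # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(x):
        x = prop
    if it > 5000 and it % 10 == 0:
        samples.append(x.copy())

x_mean = np.mean(samples, axis=0)   # posterior-mean super-resolved signal
x_std = np.std(samples, axis=0)     # per-sample uncertainty, usable downstream
```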

    Microwave Sensing and Imaging

    In recent years, microwave sensing and imaging have acquired an ever-growing importance in several applicative fields, such as non-destructive evaluation in industry and civil engineering, subsurface prospection, security, and biomedical imaging. Indeed, microwave techniques allow, in principle, information to be obtained directly about the physical parameters of the inspected targets (dielectric properties, shape, etc.) by using safe electromagnetic radiation and cost-effective systems. Consequently, a great deal of research activity has recently been devoted to the development of efficient and reliable measurement systems, of effective data-processing algorithms for solving the underlying electromagnetic inverse scattering problem, and of efficient forward solvers to model electromagnetic interactions. Within this framework, this Special Issue aims to provide some insights into recent microwave sensing and imaging systems and techniques.

    Digital Image Processing

    Newspapers and the popular scientific press today publish many examples of highly impressive images. These images range, for example, from those showing regions of star birth in the distant Universe to the extent of the stratospheric ozone depletion over Antarctica in springtime, and to those regions of the human brain affected by Alzheimer’s disease. Processed digitally to generate spectacular images, often in false colour, they all make an immediate and deep impact on the viewer’s imagination and understanding. Professor Jonathan Blackledge’s erudite but very useful new treatise Digital Image Processing: Mathematical and Computational Methods explains both the underlying theory and the techniques used to produce such images in considerable detail. It also provides many valuable example problems - and their solutions - so that the reader can test his/her grasp of the physical, mathematical and numerical aspects of the particular topics and methods discussed. As such, this magnum opus complements the author’s earlier work Digital Signal Processing. Both books are a wonderful resource for students who wish to make their careers in this fascinating and rapidly developing field, which has an ever-increasing number of areas of application. The strengths of this large book lie in:
    • excellent explanatory introduction to the subject;
    • thorough treatment of the theoretical foundations, dealing with both electromagnetic and acoustic wave scattering and allied techniques;
    • comprehensive discussion of all the basic principles, the mathematical transforms (e.g. the Fourier and Radon transforms), their interrelationships and, in particular, Born scattering theory and its application to imaging systems modelling;
    • discussion in detail - including the assumptions and limitations - of optical imaging, seismic imaging, medical imaging (using ultrasound), X-ray computer aided tomography, tomography when the wavelength of the probing radiation is of the same order as the dimensions of the scatterer, Synthetic Aperture Radar (airborne or spaceborne), digital watermarking and holography;
    • detail devoted to the methods of implementation of the analytical schemes in various case studies and also as numerical packages (especially in C/C++);
    • coverage of deconvolution, de-blurring (or sharpening) an image, maximum entropy techniques, Bayesian estimators, techniques for enhancing the dynamic range of an image, methods of filtering images and techniques for noise reduction;
    • discussion of thresholding, techniques for detecting edges in an image and for contrast stretching, stochastic scattering (random walk models) and models for characterizing an image statistically;
    • investigation of fractal images, fractal dimension segmentation, image texture, the coding and storing of large quantities of data, and image compression such as JPEG;
    • a valuable summary of the important results obtained in each Chapter, given at its end;
    • suggestions for further reading at the end of each Chapter.
    I warmly commend this text to all readers, and trust that they will find it to be invaluable. Professor Michael J Rycroft, Visiting Professor at the International Space University, Strasbourg, France, and at Cranfield University, England.
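    As a small taste of one topic listed above (deconvolution and de-blurring), the following hedged sketch applies a frequency-domain Wiener filter to a blurred 1-D signal; the blur kernel and the noise-to-signal ratio nsr are illustrative assumptions, not an example taken from the book.

```python
# Minimal sketch: Wiener deconvolution of a blurred, noisy 1-D signal.
import numpy as np

def wiener_deconvolve(blurred, kernel, nsr=1e-2):
    """Restore a signal blurred by `kernel` with the Wiener filter H* / (|H|^2 + NSR)."""
    n = len(blurred)
    H = np.fft.fft(kernel, n)
    G = np.fft.fft(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft(W * G))

# Toy usage: circularly blur a step edge with a 5-sample moving average, add noise, restore.
rng = np.random.default_rng(2)
signal = np.concatenate([np.zeros(64), np.ones(64)])
kernel = np.ones(5) / 5
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel, len(signal))))
blurred += 0.01 * rng.standard_normal(len(signal))
restored = wiener_deconvolve(blurred, kernel)
```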

    Assessing deviations to the ΛCDM model: the importance of model-independent approaches

    The remarkable improvement in the accuracy of cosmological data in recent years has provided tight constraints on the parameters of the ΛCDM model. For example, gravitational-wave events have confirmed that the speed of gravity is very close to the speed of light. This result has ruled out several modified gravity models. The remaining allowed models are nearly indistinguishable from the standard ΛCDM in data comparison. One approach to discriminating between models is to use estimators built for that purpose, such as a model-independent determination of the anisotropic-stress parameter. From this estimator, one can infer whether the perfect-fluid approximation made in General Relativity is valid, testing any theory that includes this approximation. In this dissertation, we use the latest available data from several cosmological probes and three different methods to reconstruct the anisotropic-stress parameter in a model-independent way. Our conclusions depend mildly on the data reconstruction method but agree at the 2σ level. The resulting anisotropic stress may rule out standard gravity at the 1-2σ level, depending on the method or redshift. An important question is how the amount of information in the data can be measured. Ideally, we would like to quantify the degree of belief in ΛCDM. In this dissertation, we tackle these questions using information theory. We compute the entropy of model parameters for specific cosmological probes. We compare this approach with the widely used Fisher matrix, typically computed when forecasting future large-scale structure surveys; the uncertainties on each parameter are obtained from it, and the quality of the data is thus usually associated with certain properties of the Fisher matrix. Information entropies can also measure how different combinations of cosmological probes constrain the parameters of a model. The same procedure is applied to the recently found data tensions, and it can be used for model comparison. Information entropies can be extremely useful due to their analytical expressions if a Gaussian distribution is assumed, but a generalization to any distribution is possible. The main message of this dissertation is that new ways of testing gravity are needed, especially given the decreasing uncertainty in cosmological datasets and the appearance of discrepancies between datasets. We need to better discriminate between competing theories. This can be done through estimators that do not rely on a specific scenario. Another possibility is to find a different perspective on statistical inference, which is particularly useful for re-evaluating the assumptions made in data reduction.
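    For a Gaussian posterior, the information entropy mentioned in the abstract has a closed form in terms of the parameter covariance, i.e. the inverse Fisher matrix: H = 0.5 * ln((2*pi*e)^d * det C). The sketch below, with made-up Fisher matrices for a toy two-parameter model, illustrates how combining independent probes adds their Fisher matrices and how the resulting entropy decrease quantifies the information gained; it is not the dissertation's pipeline.

```python
# Minimal sketch: Gaussian information entropy from Fisher matrices of two toy "probes".
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy of a multivariate Gaussian with covariance `cov`, in nats."""
    d = cov.shape[0]
    return 0.5 * (d * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

# Toy 2-parameter example (e.g. Omega_m and sigma_8); the Fisher matrices are made up.
F_probe_a = np.array([[400.0, -120.0], [-120.0, 250.0]])
F_probe_b = np.array([[150.0, 60.0], [60.0, 300.0]])

H_a = gaussian_entropy(np.linalg.inv(F_probe_a))
H_ab = gaussian_entropy(np.linalg.inv(F_probe_a + F_probe_b))   # independent probes: Fishers add
info_gain_nats = H_a - H_ab   # entropy reduction from adding probe B
```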

    Signal processing based method for solving inverse scattering problems

    The problem of reconstructing an image of the permittivity distribution inside a penetrable and strongly scattering object from a finite number of noisy scattered-field measurements has always been very challenging because it is ill-posed in nature. Several techniques have been developed which are either computationally very expensive or typically require the object to be weakly scattering. Here I have developed a non-linear signal processing method which recovers images for both strong and weak scatterers. This non-linear, or cepstral, filtering method requires that the scattered-field data are first preprocessed to generate a minimum-phase function in the object domain. In 2-D or higher-dimensional problems, I describe the conditions for minimum phase and demonstrate how an artificial reference wave can be numerically combined with measured complex scattering data in order to enforce this condition by satisfying Rouché's theorem. In the cepstral domain one can filter the frequencies associated with the object from those of the scattered field. After filtering, the next step is to inverse Fourier transform these data and exponentiate to recover the image of the object under test. In addition, I investigate the scattered-field sampling requirements for the inverse scattering problem. The proposed inversion technique is applied to measured experimental data to recover both the shape and the relative permittivity of unknown objects. The obtained results confirm the effectiveness of this algorithm and show that one can identify optimal parameters for the reference wave and an optimal procedure that results in good reconstructions of a penetrable, strongly scattering permittivity distribution.
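    The abstract walks through a pipeline: add a reference wave so the combined field is minimum phase, take the logarithm to move to the cepstral domain, filter, inverse transform, and exponentiate. The sketch below illustrates that generic homomorphic/cepstral filtering sequence on synthetic 1-D spectral data; the reference amplitude, window length, and toy spectrum are assumptions, and the physics of the actual inverse-scattering setup is not modelled.

```python
# Minimal sketch of a generic cepstral (homomorphic) filtering pipeline.
import numpy as np

def cepstral_filter(scattered_spectrum, ref_amplitude=10.0, keep=8):
    """Separate slowly varying (object-like) content from rapid oscillations in a spectrum."""
    # A dominant reference keeps the combined spectrum away from zero (Rouche-type condition),
    # so the complex logarithm is well behaved.
    total = ref_amplitude + scattered_spectrum
    cepstrum = np.fft.ifft(np.log(total))     # complex cepstrum of the combined field
    window = np.zeros_like(cepstrum)
    window[:keep] = 1.0                       # keep only low-quefrency components
    window[-keep + 1:] = 1.0
    filtered = np.fft.fft(cepstrum * window)  # back from the cepstral domain
    return np.exp(filtered) - ref_amplitude   # undo the log and remove the reference

# Toy usage: a smooth spectrum corrupted by a rapid oscillation, which the filter suppresses.
k = np.linspace(-np.pi, np.pi, 256)
spectrum = np.exp(-k ** 2) * np.exp(1j * 0.5 * k) + 0.2 * np.exp(1j * 40 * k)
recovered = cepstral_filter(spectrum)
```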

    Metric Gaussian variational inference

    One main result of this dissertation is the development of Metric Gaussian Variational Inference (MGVI), a method to perform approximate inference in extremely high dimensions and for complex probabilistic models. The problem with high-dimensional and complex models is twofold. First, to capture the true posterior distribution accurately, a sufficiently rich approximation to it is required. Second, the number of parameters needed to express this richness scales dramatically with the number of model parameters. For example, explicitly expressing the correlation between all model parameters requires their squared number of correlation coefficients. In settings with millions of model parameters, this is infeasible. MGVI overcomes this limitation by replacing the explicit covariance with an implicit approximation, which does not have to be stored and is accessed via samples. This procedure scales linearly with the problem size and makes it possible to account for the full correlations in even extremely large problems. This also makes it applicable to significantly more complex setups. MGVI enabled a series of ambitious signal reconstructions by me and others, which are showcased here. These include a time- and frequency-resolved reconstruction of the shadow around the black hole M87* using data provided by the Event Horizon Telescope Collaboration, a three-dimensional tomographic reconstruction of interstellar dust within 300 pc of the Sun from Gaia starlight-absorption and parallax data, novel medical imaging methods for computed tomography, an all-sky Faraday rotation map combining distinct data sources, and simultaneous calibration and imaging with a radio interferometer. The second main result is an approach that uses several independently trained deep neural networks to reason about complex tasks. Deep learning can capture abstract concepts by extracting them from large amounts of training data, which alleviates the need for an explicit mathematical formulation. Here a generative neural network is used as a prior distribution, and certain properties are imposed via classification and regression networks. The inference is then performed in terms of the latent variables of the generator, using MGVI and other methods. This makes it possible to answer novel questions flexibly, without retraining any neural network, and to arrive at novel answers through Bayesian reasoning. This novel approach to Bayesian reasoning with neural networks can also be combined with conventional measurement data.
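    The key MGVI ingredient highlighted above is that the covariance is never stored explicitly but only accessed through samples. The sketch below (plain numpy/scipy, not the NIFTy implementation of MGVI) shows the analogous trick for a linear Gaussian model: the posterior covariance D = (R^T N^-1 R + S^-1)^-1 is only ever applied implicitly via conjugate gradients, and samples with covariance D are drawn by a perturb-and-solve step, so memory stays roughly linear in the number of parameters. The operators R, N, S and their sizes are illustrative assumptions.

```python
# Minimal sketch: implicit posterior covariance accessed only through samples.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(3)
n_par, n_data = 1000, 400
R = rng.standard_normal((n_data, n_par)) / np.sqrt(n_par)   # stand-in forward model
noise_var, prior_var = 0.1, 1.0

def apply_D_inverse(x):
    """Apply D^-1 = R^T N^-1 R + S^-1 without ever forming the matrix."""
    return R.T @ (R @ x) / noise_var + x / prior_var

D_inv = LinearOperator((n_par, n_par), matvec=apply_D_inverse)

def draw_posterior_residual_sample():
    """Sample xi ~ N(0, D) by solving D^-1 xi = R^T N^-1 n' + S^-1 s'."""
    n_prime = np.sqrt(noise_var) * rng.standard_normal(n_data)
    s_prime = np.sqrt(prior_var) * rng.standard_normal(n_par)
    rhs = R.T @ n_prime / noise_var + s_prime / prior_var
    xi, _ = cg(D_inv, rhs)                                   # CG touches D^-1 only via matvecs
    return xi

samples = np.stack([draw_posterior_residual_sample() for _ in range(10)])
marginal_std = samples.std(axis=0)   # uncertainty estimate from samples, no dense covariance stored
```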