
    Parameters Estimation For Image Restoration

    Image degradation generally occurs due to transmission channel errors, camera mis-focus, atmospheric turbulence, relative object-camera motion, etc. Such degradations are unavoidable when a scene is captured through a camera. Because degraded images have reduced scientific value, restoring them is essential in many practical applications. In this thesis, attempts have been made to recover images from their degraded observations. Various degradations, including out-of-focus blur, motion blur and atmospheric turbulence blur, along with Gaussian noise, are considered. Image restoration schemes are broadly based on classical methods, regularisation parameter estimation and PSF estimation. In this thesis, five different contributions have been made, based on various aspects of restoration. Four of them deal with spatially invariant degradation, and one approach attempts the removal of spatially variant degradation. Two different schemes are proposed to estimate the motion blur parameters. A two-dimensional Gabor filter is used to estimate the direction of the blur, and a radial basis function neural network (RBFNN) is utilised to find its length. Subsequently, a Wiener filter is used to restore the images. Noise robustness of the proposed scheme is tested with different noise strengths. The blur parameter estimation problem is also modelled as a pattern classification problem and solved using a support vector machine (SVM): the length parameter of motion blur and the sigma (σ) parameter of Gaussian blur are identified through a multi-class SVM. Support vector regression (SVR) is utilised to obtain a true mapping of the image from the observed noisy blurred image. The parameters of the SVR play a key role in its performance and are optimised through the particle swarm optimisation (PSO) technique. The optimised SVR model is used to restore the noisy blurred images. Blur in the presence of noise makes the restoration problem ill-conditioned. The regularisation parameter required for restoration of a noisy blurred image is discussed, and for this purpose a global optimisation scheme, namely PSO, is utilised to minimise the cost function of the generalised cross validation (GCV) measure, which depends on the regularisation parameter. This avoids the problem of falling into a local minimum. The scheme adapts to degradations due to motion and out-of-focus blur, associated with noise of varying strengths. In another contribution, an attempt has been made to restore images degraded due to rotational motion. Such a situation is considered as spatially variant blur and handled as a combination of a number of spatially invariant blurs. The proposed scheme divides the blurred image into a number of images using elliptical path modelling. Each image is deblurred separately using a Wiener filter and the results are finally integrated to construct the whole image. Each model is studied separately, and experiments are conducted to evaluate their performance. The visual quality as well as the peak signal-to-noise ratio (PSNR, in dB) of the restored images are compared with recent competing schemes.
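    Since Wiener filtering is the workhorse restoration step in several of these contributions, a minimal frequency-domain sketch is given below; it assumes a known, already-padded PSF and a scalar noise-to-signal ratio (hypothetical inputs chosen for illustration, not values from the thesis).

        import numpy as np

        def wiener_deconvolve(blurred, psf, nsr):
            """Frequency-domain Wiener deconvolution with a known PSF.

            blurred : 2-D degraded image
            psf     : 2-D point spread function, zero-padded to blurred.shape
            nsr     : scalar noise-to-signal power ratio (regularises the inverse)
            """
            G = np.fft.fft2(blurred)
            H = np.fft.fft2(psf)
            # Wiener filter: conj(H) / (|H|^2 + NSR), applied per frequency bin
            W = np.conj(H) / (np.abs(H) ** 2 + nsr)
            return np.real(np.fft.ifft2(W * G))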

    Blind image deconvolution: nonstationary Bayesian approaches to restoring blurred photos

    High quality digital images have become pervasive in modern scientific and everyday life, in areas ranging from photography to astronomy, CCTV, microscopy and medical imaging. However, there are always limits to the quality of these images due to uncertainty and imprecision in the measurement systems. Modern signal processing methods offer the promise of overcoming some of these problems by postprocessing these blurred and noisy images. In this thesis, novel methods using nonstationary statistical models are developed for the removal of blurs from out-of-focus and other types of degraded photographic images. The work tackles the fundamental problem of blind image deconvolution (BID); its goal is to restore a sharp image from a blurred observation when the blur itself is completely unknown. This is a “doubly ill-posed” problem: the extreme lack of information must be countered by strong prior constraints about sensible types of solution. In this work, the hierarchical Bayesian methodology is used as a robust and versatile framework to impart the required prior knowledge. The thesis is arranged in two parts. In the first part, the BID problem is reviewed, along with techniques and models for its solution. Observation models are developed, with an emphasis on photographic restoration, concluding with a discussion of how these are reduced to the common linear spatially-invariant (LSI) convolutional model. Classical methods for the solution of ill-posed problems are summarised to provide a foundation for the main theoretical ideas that will be used under the Bayesian framework. This is followed by an in-depth review and discussion of the various prior image and blur models appearing in the literature, and then their applications to solving the problem with both Bayesian and non-Bayesian techniques. The second part covers novel restoration methods, making use of the theory presented in Part I. Firstly, two new nonstationary image models are presented. The first models local variance in the image, and the second extends this with locally adaptive noncausal autoregressive (AR) texture estimation and local mean components. These models allow for the recovery of image details, including edges and texture, whilst preserving smooth regions. Most existing methods do not model the boundary conditions correctly for deblurring of natural photographs, and a chapter is devoted to exploring Bayesian solutions to this topic. Due to the complexity of the models used and of the problem itself, there are many challenges which must be overcome for tractable inference. Using the new models, three different inference strategies are investigated: first, the Bayesian maximum marginalised a posteriori (MMAP) method with deterministic optimisation; then the stochastic methods of variational Bayesian (VB) distribution approximation and simulation of the posterior distribution using the Gibbs sampler. Of these, we find the Gibbs sampler to be the most effective way to deal with a variety of different types of unknown blurs. Along the way, details are given of the numerical strategies developed to give accurate results and to accelerate performance. Finally, the thesis demonstrates state-of-the-art results in blind restoration of synthetic and real degraded images, such as recovering details in out-of-focus photographs.
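    For reference, the common linear spatially-invariant (LSI) convolutional observation model referred to above is conventionally written as follows (standard textbook notation, not necessarily the thesis's own symbols):

        g(x, y) = (h * f)(x, y) + n(x, y) = \sum_{s,t} h(s, t)\, f(x - s, y - t) + n(x, y)

    where f is the unknown sharp image, h the unknown blur kernel (PSF), n additive noise and g the observed photograph; blind deconvolution seeks both f and h from g alone.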

    Digital Image Processing

    Newspapers and the popular scientific press today publish many examples of highly impressive images. These images range, for example, from those showing regions of star birth in the distant Universe to the extent of the stratospheric ozone depletion over Antarctica in springtime, and to those regions of the human brain affected by Alzheimer’s disease. Processed digitally to generate spectacular images, often in false colour, they all make an immediate and deep impact on the viewer’s imagination and understanding. Professor Jonathan Blackledge’s erudite but very useful new treatise Digital Image Processing: Mathematical and Computational Methods explains both the underlying theory and the techniques used to produce such images in considerable detail. It also provides many valuable example problems - and their solutions - so that the reader can test his/her grasp of the physical, mathematical and numerical aspects of the particular topics and methods discussed. As such, this magnum opus complements the author’s earlier work Digital Signal Processing. Both books are a wonderful resource for students who wish to make their careers in this fascinating and rapidly developing field which has an ever increasing number of areas of application. The strengths of this large book lie in:
    • excellent explanatory introduction to the subject;
    • thorough treatment of the theoretical foundations, dealing with both electromagnetic and acoustic wave scattering and allied techniques;
    • comprehensive discussion of all the basic principles, the mathematical transforms (e.g. the Fourier and Radon transforms), their interrelationships and, in particular, Born scattering theory and its application to imaging systems modelling;
    • discussion in detail - including the assumptions and limitations - of optical imaging, seismic imaging, medical imaging (using ultrasound), X-ray computer aided tomography, tomography when the wavelength of the probing radiation is of the same order as the dimensions of the scatterer, Synthetic Aperture Radar (airborne or spaceborne), digital watermarking and holography;
    • detail devoted to the methods of implementation of the analytical schemes in various case studies and also as numerical packages (especially in C/C++);
    • coverage of deconvolution, de-blurring (or sharpening) an image, maximum entropy techniques, Bayesian estimators, techniques for enhancing the dynamic range of an image, methods of filtering images and techniques for noise reduction;
    • discussion of thresholding, techniques for detecting edges in an image and for contrast stretching, stochastic scattering (random walk models) and models for characterizing an image statistically;
    • investigation of fractal images, fractal dimension segmentation, image texture, the coding and storing of large quantities of data, and image compression such as JPEG;
    • valuable summary of the important results obtained in each Chapter given at its end;
    • suggestions for further reading at the end of each Chapter.
    I warmly commend this text to all readers, and trust that they will find it to be invaluable.
    Professor Michael J Rycroft, Visiting Professor at the International Space University, Strasbourg, France, and at Cranfield University, England.

    Modeling and applications of the focus cue in conventional digital cameras

    The focus of digital cameras plays a fundamental role in both the quality of the acquired images and the perception of the imaged scene. This thesis studies the focus cue in conventional cameras with focus control, such as cellphone cameras, photography cameras, webcams and the like. A deep review of the theoretical concepts behind focus in conventional cameras reveals that, despite its usefulness, the widely known thin lens model has several limitations for solving different focus-related problems in computer vision. In order to overcome these limitations, the focus profile model is introduced as an alternative to classic concepts, such as the near and far limits of the depth-of-field. The new concepts introduced in this dissertation are exploited for solving diverse focus-related problems, such as efficient image capture, depth estimation, visual cue integration and image fusion. The results obtained through an exhaustive experimental validation demonstrate the applicability of the proposed models.
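    As a point of reference for the classic concepts the dissertation moves beyond, the thin lens model it critiques is usually summarised as follows (standard textbook notation, not the dissertation's own symbols):

        \frac{1}{f} = \frac{1}{s} + \frac{1}{v}, \qquad c = A \, \frac{|d - s|}{d} \, \frac{f}{s - f}

    where f is the focal length, s the focused object distance, v the image distance, A the aperture diameter, d the distance of an out-of-focus point and c the diameter of its blur circle (circle of confusion); the near and far depth-of-field limits are the distances d at which c reaches the largest acceptable blur.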

    Learning from High-Dimensional Multivariate Signals.

    Modern measurement systems monitor a growing number of variables at low cost. In the problem of characterizing the observed measurements, budget limitations usually constrain the number n of samples that one can acquire, leading to situations where the number p of variables is much larger than n. In this situation, classical statistical methods, founded on the assumption that n is large and p is fixed, fail both in theory and in practice. A successful approach to overcome this problem is to assume a parsimonious generative model characterized by a number k of parameters, where k is much smaller than p. In this dissertation we develop algorithms to fit low-dimensional generative models and extract relevant information from high-dimensional, multivariate signals. First, we define extensions of the well-known Scalar Shrinkage-Thresholding Operator, which we name Multidimensional and Generalized Shrinkage-Thresholding Operators, and show that these extensions arise in numerous algorithms for structured-sparse linear and non-linear regression. Using convex optimization techniques, we show that these operators, defined as the solutions to a class of convex, non-differentiable optimization problems, have an equivalent convex, low-dimensional reformulation. Our equivalence results shed light on the behavior of a general class of penalties that includes classical sparsity-inducing penalties such as the LASSO and the Group LASSO. In addition, our reformulation leads in some cases to new efficient algorithms for a variety of high-dimensional penalized estimation problems. Second, we introduce two new classes of low-dimensional factor models that account for temporal shifts commonly occurring in multivariate signals. Our first contribution, called Order Preserving Factor Analysis, can be seen as an extension of the non-negative, sparse matrix factorization model to allow for order-preserving temporal translations in the data. We develop an efficient descent algorithm to fit this model using techniques from convex and non-convex optimization. Our second contribution extends Principal Component Analysis to the analysis of observations suffering from circular shifts, and we call it Misaligned Principal Component Analysis. We quantify the effect of the misalignments on the spectrum of the sample covariance matrix in the high-dimensional regime and develop simple algorithms to jointly estimate the principal components and the misalignment parameters. Ph.D., Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/91544/1/atibaup_1.pd
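    The scalar and block shrinkage-thresholding operators mentioned above have simple closed forms in the standard LASSO / Group LASSO setting; the sketch below illustrates those textbook operators only, not the generalized variants developed in the dissertation.

        import numpy as np

        def soft_threshold(x, lam):
            # Scalar shrinkage-thresholding: proximal operator of lam * |x|;
            # shrinks x toward zero by lam and sets it to zero inside [-lam, lam].
            return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

        def group_soft_threshold(x, lam):
            # Block (multidimensional) shrinkage-thresholding: proximal operator of
            # lam * ||x||_2, which shrinks the whole group toward the origin
            # (the update underlying the Group LASSO penalty).
            norm = np.linalg.norm(x)
            if norm <= lam:
                return np.zeros_like(x)
            return (1.0 - lam / norm) * x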

    Abstracts on Radio Direction Finding (1899 - 1995)

    The files on this record represent the various databases that originally composed the CD-ROM issue of the "Abstracts on Radio Direction Finding" database, which is now part of the Dudley Knox Library's Abstracts and Selected Full Text Documents on Radio Direction Finding (1899 - 1995) Collection. (See Calhoun record https://calhoun.nps.edu/handle/10945/57364 for further information on this collection and the bibliography). Due to issues of technological obsolescence preventing current and future audiences from accessing the bibliography, DKL exported and converted into the three files on this record the various databases contained in the CD-ROM. The contents of these files are: 1) RDFA_CompleteBibliography_xls.zip [RDFA_CompleteBibliography.xls: Metadata for the complete bibliography, in Excel 97-2003 Workbook format; RDFA_Glossary.xls: Glossary of terms, in Excel 97-2003 Workbook format; RDFA_Biographies.xls: Biographies of leading figures, in Excel 97-2003 Workbook format]; 2) RDFA_CompleteBibliography_csv.zip [RDFA_CompleteBibliography.TXT: Metadata for the complete bibliography, in CSV format; RDFA_Glossary.TXT: Glossary of terms, in CSV format; RDFA_Biographies.TXT: Biographies of leading figures, in CSV format]; 3) RDFA_CompleteBibliography.pdf: A human readable display of the bibliographic data, as a means of double-checking any possible deviations due to conversion.

    From the sun to the Galactic Center

    The centers of galaxies are their own ultimate gravitational sinks. Massive black holes and star clusters, as well as gas, are especially likely to fall into the centers of galaxies by dynamical friction or dissipation. Many galactic centers harbor supermassive black holes (SMBHs) and dense nuclear (star) clusters, which possibly arrived there by these processes. Nuclear clusters can be formed in situ from gas, or from smaller star clusters which fall to the center. Since the Milky Way harbors both an SMBH and a nuclear cluster, both can be studied best in the Galactic Center (GC), which is the closest galactic nucleus to us. In Chapter 1, I introduce the different components of the Milky Way and put these into the context of the GC. I then give an overview of relevant properties (e.g. star content and distribution) of the GC. Afterwards, I report the results of four different studies of the GC. In Chapter 2, I analyze the limitations of astrometry, one of the most useful methods for the study of the GC. Thanks to the high density of stars and its relatively small distance from us, it is possible to measure the motions of thousands of stars in the GC with images separated by only a few years. I find two main limitations to this method: (1) for bright stars, the not perfectly correctable distortion of the camera limits the accuracy, and (2) for the majority of the fainter stars, the main limitation is crowding from the other stars in the GC. The position uncertainty of faint stars is mainly caused by the seeing halos of bright stars; in the very center, faint unresolvable stars are also important for the position uncertainty. In Chapter 3, I evaluate the evidence for an intermediate mass black hole in the small candidate cluster IRS13E within the GC. Intermediate mass black holes (IMBHs) have masses between those of the two types of confirmed black hole: the stellar remnants and the supermassive black holes in the centers of galaxies. One possibility for their formation is the collision of stars in a dense young star cluster. Such a cluster could sink to the GC by dynamical friction; there it would consist of only a few bright stars, like IRS13E. Firstly, I analyze the SEDs of the objects in IRS13E. The SEDs of most objects can be explained by pure dust emission; thus, most objects in IRS13E are pure dust clumps, and only three are young stars. This reduces the significance of the 'cluster' IRS13E compared to the stellar background. Secondly, I obtain acceleration limits for these three stars. The non-detection of accelerations makes an IMBH an unlikely scenario in IRS13E. However, since its three stars form a comoving association, which is unlikely to form by chance, the nature of IRS13E is not yet settled. In the third study (Chapter 4), I measure and analyze the extinction curve toward the GC. Extinction is a contaminant for GC observations, and it is therefore necessary to know the extinction toward the GC to determine the luminosity properties of its stars. I obtain the extinction curve by measuring the flux of the HII region in the GC in several infrared HII lines and in the unextincted radio continuum. I compare the resulting flux ratios with the ratios expected from recombination physics and obtain extinctions at 22 different lines between 1 and 19 micron. For the K-band I derive A_Ks = 2.62 +/- 0.11. The extinction curve follows a power law with a steep slope of -2.11 +/- 0.06 shortward of 2.8 micron. At longer wavelengths the extinction is grayer and there are absorption features from ices. The extinction curve is a tool to constrain the properties of cosmic dust between the sun and the GC. The extinction curve cannot be explained by dust consisting of carbonaceous and silicate grains only; in addition, composite particles which also contain ices are necessary to fit it. In the final part of this thesis (Chapter 5), I look at the properties of most of the stars in the GC: the old stars that form the nuclear cluster of the Milky Way. I obtain the mass distribution and the light distribution of these stars. I find that the flattening of the stellar distribution increases outside 70''. This indicates that inside this radius a nearly spherical nuclear cluster dominates, and that the surrounding light belongs mostly to the nuclear disk. I dissect the light into two components and obtain for the nuclear cluster L_Ks = 2.7*10^7 L_sun. I obtain proper motions for more than 10000 stars and radial velocities for more than 2400 stars. Using Jeans modeling, I combine the velocities and the radial profile to obtain a mass of 6.02*10^6 M_sun within 100'' (4 pc) and a total nuclear cluster mass of 12.88*10^6 M_sun. The Jeans modeling and various other evidence weakly favor a core in the extended mass distribution compared to a cusp. The old star light shows a similar core. The mass-to-light ratio of the old stars of the nuclear cluster is consistent with the usual initial mass function of the Galaxy. This suggests that most stars in the GC formed in the usual way, in a mode different from the origin of the youngest stars there.
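    The near-infrared part of the measured extinction curve can be written compactly; the relation below is a sketch implied by the quoted slope and K-band normalisation, assuming the power law is anchored at the Ks band (lambda_Ks ≈ 2.2 micron):

        A_\lambda \approx A_{Ks} \left( \frac{\lambda}{\lambda_{Ks}} \right)^{-2.11}, \qquad A_{Ks} = 2.62 \pm 0.11, \quad \lambda < 2.8\ \mu\mathrm{m}

    at longer wavelengths the curve flattens (grayer extinction) and shows the ice absorption features noted above.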

    Advanced receivers for distributed cooperation in mobile ad hoc networks

    Mobile ad hoc networks (MANETs) are rapidly deployable wireless communications systems, operating with minimal coordination in order to avoid the spectral efficiency losses caused by overhead. Cooperative transmission schemes are attractive for MANETs, but the distributed nature of such protocols comes with an increased level of interference, whose impact is further amplified by the need to push the limits of energy and spectral efficiency. Hence, the impact of interference has to be mitigated through the use of PHY-layer signal processing algorithms with reasonable computational complexity. Recent advances in iterative digital receiver design exploit approximate Bayesian inference and derived message-passing techniques to improve the capabilities of well-established turbo detectors. In particular, expectation propagation (EP) is a flexible technique which offers attractive complexity-performance trade-offs in situations where conventional belief propagation is limited by computational complexity. Moreover, thanks to emerging techniques in deep learning, such iterative structures can be cast into deep detection networks, where learning the algorithmic hyper-parameters further improves receiver performance. In this thesis, EP-based finite-impulse-response decision feedback equalizers are designed; they achieve significant improvements over more conventional turbo-equalization techniques, especially in high spectral efficiency applications, while having the advantage of being asymptotically predictable. A framework for designing frequency-domain EP-based receivers is proposed, in order to obtain detection architectures with low computational complexity. This framework is analysed theoretically and numerically with a focus on channel equalization, and it is then extended to handle detection for time-varying channels and multiple-antenna systems. The design of multiple-user detectors and the impact of channel estimation are also explored to understand the capabilities and limits of this framework. Finally, a finite-length performance prediction method is presented for carrying out link abstraction for the EP-based frequency-domain equalizer. The impact of accurate physical layer modelling is evaluated in the context of cooperative broadcasting in tactical MANETs, thanks to a flexible MAC-level simulator.
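    As a simple point of reference for the frequency-domain receivers discussed above, the sketch below shows a conventional one-tap MMSE frequency-domain equalizer for a cyclic-prefixed single-carrier block; it is a baseline illustration only, not the EP-based design of the thesis, and all names are hypothetical.

        import numpy as np

        def mmse_fde(received_block, channel_impulse_response, noise_var, symbol_energy=1.0):
            """One-tap MMSE frequency-domain equalization of a cyclic-prefixed block.

            Assumes the cyclic prefix has already been removed, so the channel acts as a
            circular convolution and is diagonalised by the DFT.
            """
            n = received_block.size
            Y = np.fft.fft(received_block)
            H = np.fft.fft(channel_impulse_response, n)
            # One-tap MMSE coefficients: conj(H) / (|H|^2 + sigma^2 / Es)
            W = np.conj(H) / (np.abs(H) ** 2 + noise_var / symbol_energy)
            return np.fft.ifft(W * Y)  # soft symbol estimates, passed on to the demapper/decoder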