
    Image Restoration

    This book presents a sample of recent contributions from researchers around the world in the field of image restoration. It consists of 15 chapters organized in three main sections (Theory, Applications, Interdisciplinarity). The topics cover different aspects of the theory of image restoration, and the book also highlights new research directions prompted by the emergence of original imaging devices. These devices raise genuinely challenging image reconstruction/restoration problems that open the way to new fundamental scientific questions closely tied to the world we interact with.

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur, Belgium, from Wednesday August 27th to Friday August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": sparsity-driven data sensing and processing; union of low-dimensional subspaces; beyond linear and convex inverse problems; matrix/manifold/graph sensing and processing; blind inverse problems and dictionary learning; sparsity and computational neuroscience; information theory, geometry and randomness; complexity/accuracy tradeoffs in numerical methods; sparsity: what's next?; sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts; iTWIST'14 website: http://sites.google.com/site/itwist1

    Landsat D Thematic Mapper image dimensionality reduction and geometric correction accuracy

    To characterize and quantify the performance of the Landsat Thematic Mapper (TM), techniques for dimensionality reduction by linear transformation were studied and evaluated, and the accuracy of geometric error correction in TM images was analyzed. Theoretical evaluations and comparisons of existing methods for designing linear transformations for dimensionality reduction are presented. These methods include the discrete Karhunen-Loève (KL) expansion, Multiple Discriminant Analysis (MDA), the TM Tasseled Cap linear transformation, and Singular Value Decomposition (SVD). A unified approach to these design problems is presented, in which each method involves optimizing an objective function with respect to the linear transformation matrix. From these studies, four modified methods are proposed: the Space Variant Linear Transformation, the KL Transform-MDA hybrid method, and the First and Second Versions of the Weighted MDA method. The modifications involve assigning weights to classes to improve the class-conditional probability of error for classes with high weights. Experimental evaluations of the existing and proposed methods were performed using the six reflective bands of the TM data. It is shown that, in terms of probability of classification error and the percentage of cumulative eigenvalues, the six reflective bands of the TM data require only a three-dimensional feature space. It is also shown experimentally that, for the proposed methods, classes with high weights show the expected improvements in class-conditional probability-of-error estimates.
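
    A minimal sketch of the discrete Karhunen-Loève (principal component) reduction described above, assuming pixels are stacked as rows of an (n_samples, 6) array of the six TM reflective bands; the variance threshold is an illustrative assumption, not a value from the study:

    import numpy as np

    def kl_reduce(pixels, var_fraction=0.95):
        """Project six-band TM pixels onto the leading KL (principal)
        components, keeping enough components to retain `var_fraction`
        of the cumulative eigenvalues."""
        centered = pixels - pixels.mean(axis=0)
        cov = np.cov(centered, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
        order = np.argsort(eigvals)[::-1]             # re-sort descending
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]
        cum = np.cumsum(eigvals) / eigvals.sum()
        k = int(np.searchsorted(cum, var_fraction)) + 1
        return centered @ eigvecs[:, :k], k           # features and dimension

    Under this cumulative-eigenvalue criterion, the abstract's finding that the six reflective bands need only a three-dimensional feature space corresponds to k = 3.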

    Multiresolution image models and estimation techniques


    Probabilistic methods for high dimensional signal processing

    This thesis investigates the use of probabilistic and Bayesian methods for analysing high-dimensional signals. The work proceeds in three main parts sharing similar objectives. Throughout, we focus on building data-efficient inference mechanisms geared toward high-dimensional signal processing. This is achieved by using probabilistic models on top of informative data representation operators. We also improve on the fitting objective to make it better suited to our requirements.

    Variational inference: We introduce a variational approximation framework using direct optimisation of what is known as the scale-invariant Alpha-Beta divergence (sAB-divergence). This new objective encompasses most variational objectives that use the Kullback-Leibler, Rényi or gamma divergences. It also gives access to objective functions never before exploited in the context of variational inference. This is achieved via two easy-to-interpret control parameters, which allow a smooth interpolation over the divergence space while trading off properties such as mass-covering of a target distribution and robustness to outliers in the data. Furthermore, the sAB variational objective can be optimised directly by re-purposing existing methods for Monte Carlo computation of complex variational objectives, leading to estimates of the divergence instead of variational lower bounds. We show the advantages of this objective on Bayesian models for regression problems (a sketch of the underlying divergence family follows this abstract).

    Roof-edge hidden Markov random field: We propose a method for semi-local Hurst estimation that incorporates a Markov random field model to constrain a wavelet-based pointwise Hurst estimator. This yields an estimator able to exploit the spatial regularities of a piecewise, parametrically varying Hurst parameter. The pointwise estimates are jointly inferred along with the parametric form of the underlying Hurst function, which characterises how the Hurst parameter varies deterministically over the spatial support of the data. Unlike recent Hurst regularisation methods, the proposed approach is flexible in that arbitrary parametric forms can be considered, and extensible inasmuch as the associated gradient descent algorithm can accommodate a broad class of distributional assumptions without significant modification. The potential benefits of the approach are illustrated with simulations of various first-order polynomial forms.

    Scattering hidden Markov tree: We combine the rich, over-complete signal representation afforded by the scattering transform with a probabilistic graphical model that captures hierarchical dependencies between coefficients at different layers. The wavelet scattering network results in a high-dimensional representation that is translation-invariant and stable to deformations while preserving informative content. These properties are achieved by cascading wavelet transform convolutions with non-linear modulus and averaging operators. The network structure and its distributions are described using a hidden Markov tree. This yields a generative model for high-dimensional inference and offers a means to perform inference tasks such as prediction. Our proposed scattering convolutional hidden Markov tree displays promising results on classification tasks for complex images in the challenging case where the number of training examples is extremely small. We also apply variational methods to this model, leveraging the sAB variational objective defined earlier to improve the quality of the approximation.
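
    A minimal sketch of the Alpha-Beta divergence family underlying the sAB objective above, for discrete distributions (in the form popularised by Cichocki et al.); the scale-invariant normalisation used in the thesis is not reproduced here, and the function name and parameter values are illustrative:

    import numpy as np

    def ab_divergence(p, q, alpha=0.5, beta=0.5):
        """Alpha-Beta divergence between discrete distributions p and q,
        valid for alpha, beta and alpha + beta all non-zero (the KL, Renyi
        and gamma divergences arise as limit cases of this family).
        alpha and beta trade off mass-covering against robustness to
        outliers, as described in the abstract above."""
        p, q = np.asarray(p, float), np.asarray(q, float)
        ab = alpha + beta
        term = (p**alpha * q**beta
                - alpha / ab * p**ab
                - beta / ab * q**ab)
        return -term.sum() / (alpha * beta)

    # Sanity check: the divergence is zero when p == q,
    # e.g. ab_divergence([0.5, 0.5], [0.5, 0.5]) == 0.0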

    Bayesian super-resolution with application to radar target recognition

    This thesis is concerned with methods to facilitate automatic target recognition using images generated from a group of associated radar systems. Target recognition algorithms require access to a database of previously recorded or synthesized radar images for the targets of interest, or a database of features based on those images. However, the resolution of a new image acquired under non-ideal conditions may not be as good as that of the images used to generate the database. It is therefore proposed to use super-resolution techniques to match the resolution of new images with that of the database images. A comprehensive review of the literature is given for super-resolution used either on its own or in conjunction with target recognition. A new super-resolution algorithm is developed, based on numerical Bayesian statistics and Markov chain Monte Carlo (MCMC) sampling. This algorithm allows uncertainty in the super-resolved image to be taken into account in the target recognition process. It is shown that the Bayesian approach improves the probability of correct target classification over standard super-resolution techniques. The new super-resolution algorithm is demonstrated using a simple synthetically generated data set and is compared to other similar algorithms. A variety of effects that degrade super-resolution performance, such as defocus, are analyzed, and techniques to compensate for them are presented. Performance of the super-resolution algorithm is then tested as part of a Bayesian target recognition framework using measured radar data.
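
    To make the Bayesian MCMC idea concrete, here is a toy Metropolis-Hastings sampler for a linear observation model y = Hx + noise with Gaussian likelihood and prior; it sketches the general approach only, not the thesis's algorithm, and all names and parameter values are illustrative:

    import numpy as np

    def mh_super_resolution(y, H, sigma=0.1, tau=1.0, n_iter=5000, step=0.02):
        """Draw posterior samples of a high-resolution image x (flattened)
        given a low-resolution observation y = H x + noise. Returning
        samples, rather than a single estimate, is what lets uncertainty
        in the super-resolved image feed into the recognition stage."""
        rng = np.random.default_rng(0)
        x = H.T @ y                                   # crude initialisation

        def log_post(x):                              # Gaussian likelihood + prior
            resid = y - H @ x
            return -0.5 * resid @ resid / sigma**2 - 0.5 * x @ x / tau**2

        lp, samples = log_post(x), []
        for _ in range(n_iter):
            prop = x + step * rng.standard_normal(x.shape)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
                x, lp = prop, lp_prop
            samples.append(x.copy())
        return np.array(samples)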

    Visual image processing in various representation spaces for documentary preservation

    This thesis establishes an advanced image processing framework for the enhancement and restoration of historical document images (HDI) in both intensity (gray-scale or color) and multispectral (MS) representation spaces. It provides three major contributions: 1) the binarization of gray-scale HDI; 2) the visual quality restoration of MS HDI; and 3) automatic reference data (RD) estimation for HDI binarization. HDI binarization is an enhancement technique that produces bi-level information, which is easy to handle with analysis methods (OCR, for instance) and less computationally costly to process than 256 levels of grey or color images. Restoring the visual quality of HDI in an MS representation space enhances their legibility, which is not possible with conventional intensity-based restoration methods, and HDI legibility is the main concern of historians and librarians wishing to transfer knowledge and revive ancient cultural heritage. The use of MS imaging systems is a new and attractive research trend in the field of numerical processing of cultural heritage documents. In this thesis, these systems are also used to automatically estimate more accurate RD for evaluating HDI binarization algorithms, in order to track the level of human performance. Our first contribution, a new adaptive method of intensity-based binarization, is defined at the outset. Since degradation is present across document images, binarization methods must handle degradation phenomena locally. Unfortunately, existing methods are not effective, as they cannot capture weak text strokes, which degrades the performance of character recognition engines. The proposed approach first detects a subset of the most probable text pixels, which are used to locally estimate the parameters of the two classes of pixels (text and background), and then performs a simple maximum likelihood (ML) classification of the remaining pixels based on their class membership (a toy sketch follows this abstract). To the best of our knowledge, this is the first time local parameter estimation and classification in an ML framework have been introduced for HDI binarization, with promising results. A limitation of this method, as with intensity-based enhancement methods in general, is that it is not effective in dealing with severely degraded HDI; developing more advanced methods based on MS information is a promising alternative avenue of research. In the second contribution, a novel approach to the visual restoration of HDI is defined. The approach aims to provide end users (historians, librarians, etc.) with better HDI visualization; specifically, it aims to restore HDI from degradations while keeping their original appearance intact. In practice, this problem cannot be solved by conventional intensity-based restoration methods. To cope with these limitations, MS imaging is used to produce additional spectral images in the invisible light range (infrared and ultraviolet), which gives greater contrast to objects in the documents. The inpainting-based variational framework proposed here for HDI restoration involves isolating the degradation phenomena in the infrared spectral images and then inpainting them in the visible spectral images. The final color image for visualization is then reconstructed from the restored visible spectral images. To the best of our knowledge, this is the first time the inpainting technique has been introduced for MS HDI.

    The experimental results are promising, and our objective, in collaboration with BAnQ (Bibliothèque et Archives nationales du Québec), is to push heritage documents into the public domain and build an intelligent engine for accessing them. It is useful to note that the proposed model can be extended to other MS-based image processing tasks. Our third contribution considers a new problem of RD estimation, in order to show the importance of working with MS images rather than gray-scale or color images. RD are mandatory for comparing different binarization algorithms, and they are usually generated by an expert. However, an expert's RD is always subject to mislabeling and judgment errors, especially for degraded data in restricted representation spaces (gray-scale or color images). In the proposed method, multiple RD generated by several experts are used in combination with MS HDI to estimate new, more accurate RD. The idea is to combine the experts' agreement about labels and the multivariate data fidelity in a single Bayesian classification framework, estimating the a posteriori probability of the new labels that form the final estimated RD. Our experiments show that the estimated RD are more accurate than an expert's RD. To the best of our knowledge, no similar work combining binary data and multivariate data for RD estimation has been conducted.
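
    A toy sketch of the local ML classification step described in the first contribution: seed the text class with the darkest pixels of a window, fit one Gaussian per class, and assign each pixel to the class with the higher likelihood. The seeding rule and quantile are illustrative assumptions, not the thesis's estimator:

    import numpy as np

    def local_ml_binarize(window, seed_quantile=0.2):
        """Binarize one grey-level patch (values in [0, 1]): True = text.
        The darkest `seed_quantile` fraction of pixels seeds the text
        class; the remaining pixels seed the background class."""
        flat = window.ravel()
        cut = np.quantile(flat, seed_quantile)
        text, bg = flat[flat <= cut], flat[flat > cut]

        def log_lik(x, cls):                       # Gaussian log-likelihood
            mu, sd = cls.mean(), cls.std() + 1e-6
            return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd)

        is_text = log_lik(flat, text) > log_lik(flat, bg)
        return is_text.reshape(window.shape)

    Applying such a routine per window, rather than globally, is what lets the classification adapt to degradation that varies across the document.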

    Urban air pollution modelling with machine learning using fixed and mobile sensors

    Detailed air quality (AQ) information is crucial for sustainable urban management, and many regions in the world have built static AQ monitoring networks to provide such information. However, these networks only monitor region-level AQ conditions or provide sparse point-based pollutant measurements; they cannot capture urban dynamics with high-resolution spatio-temporal variation across a region. Without pollution details, citizens cannot make fully informed decisions when choosing everyday outdoor routes or activities, and policy-makers can only make macroscopic regulatory decisions on controlling pollution-triggering factors and emission sources. Increasing research effort has been devoted to mobile and ubiquitous sampling campaigns, as they are deemed more economically and operationally feasible methods to collect urban AQ data with high spatio-temporal resolution. This research proposes a machine-learning-based AQ inference framework (Deep AQ) from a data-driven perspective, consisting of data pre-processing, feature extraction and transformation, and pixelwise (grid-level) AQ inference. The Deep AQ framework is adaptable to integrate AQ measurements from fixed monitoring sites (temporally dense but spatially sparse) and mobile low-cost sensors (temporally sparse but spatially dense). While instantaneous pollutant concentration varies in the micro-environment, this research samples representative values in each grid-cell unit and achieves AQ inference at a 1 km × 1 km pixelwise scale. The research explores the predictive power of the Deep AQ framework based on samples from only 40 fixed monitoring sites in Chengdu, China (4,900 km², 26 April - 12 June 2019) and collaborative sampling from 28 fixed monitoring sites and 15 low-cost sensors mounted on taxis in Beijing, China (3,025 km², 19 June - 16 July 2018). The proposed Deep AQ framework is capable of producing high-resolution (1 km × 1 km, hourly) pixelwise AQ inference based on multi-source AQ samples (fixed or mobile) and urban features (land use, population, traffic, meteorological information, etc.). The proposed methods achieve this high-resolution inference from very sparse samples (Chengdu: less than 1% spatio-temporal coverage; Beijing: less than 5% spatio-temporal coverage) with satisfactory accuracy in both urban cases (Chengdu: SMAPE < 20%; Beijing: SMAPE < 15%). Detailed outcomes and main conclusions are provided in this thesis on the aspects of fixed and mobile sensing, spatio-temporal coverage and density, and the relative importance of urban features. Outcomes from this research help provide a scientific and detailed health-impact assessment framework for exposure analysis and give policy-makers data-driven evidence for sustainable urban management.
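
    For reference, a minimal implementation of the SMAPE accuracy metric quoted above; this is one common definition, and the thesis may normalise slightly differently:

    import numpy as np

    def smape(y_true, y_pred):
        """Symmetric mean absolute percentage error, in percent."""
        y_true = np.asarray(y_true, float)
        y_pred = np.asarray(y_pred, float)
        denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
        return 100.0 * np.mean(np.abs(y_pred - y_true) / denom)

    # e.g. smape([40, 55, 80], [38, 60, 85]) -> about 6.6 (percent)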

    Analysis of time series of medium spatial resolution images: reconstruction of LAI profiles and unmixing, with application to vegetation monitoring on MODIS images

    This PhD dissertation is concerned with time series analysis for medium spatial resolution (MSR) remote sensing images. The main advantage of MSR data is their high temporal rate, which allows land use to be monitored. However, two main problems arise with such data. First, because of cloud coverage and bad acquisition conditions, the resulting time series are often corrupted and not directly exploitable. Second, pixels in medium spatial resolution images are often "mixed", in the sense that their spectral response is a combination of the responses of "pure" elements. These two problems are addressed in this PhD. First, we propose a data assimilation technique able to recover consistent time series of Leaf Area Index (LAI) from corrupted MODIS sequences. To this end, a plant growth model, namely GreenLab, is used as a dynamical constraint. Second, we propose a new and efficient unmixing technique for time series. It is based in particular on the use of "elastic" kernels able to properly compare time series shifted in time or of various lengths. Experimental results on both synthetic and real data demonstrate the efficiency of the proposed methodologies.
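
    As an illustration of the "elastic" comparison the unmixing method relies on, here is a minimal dynamic-time-warping (DTW) distance between two series of possibly different lengths; it is a stand-in for, not a reproduction of, the thesis's elastic kernels:

    import numpy as np

    def dtw_distance(a, b):
        """DTW distance between 1-D series a and b, which may be shifted
        in time or of different lengths."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # An elastic similarity can then be built as, e.g., k(a, b) = exp(-dtw_distance(a, b) / h),
    # though kernels defined this way are not guaranteed to be positive definite.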