
    Image Restoration

    This book presents a sample of recent contributions from researchers around the world in the field of image restoration. The book consists of 15 chapters organized in three main sections (Theory, Applications, Interdisciplinarity). Topics cover different aspects of the theory of image restoration, and the book is also an occasion to highlight new research topics arising from the emergence of original imaging devices. These devices pose challenging image reconstruction/restoration problems that open the way to new fundamental scientific questions, closely related to the world we interact with.

    Deep learning for inverse problems in remote sensing: super-resolution and SAR despeckling

    The abstract is provided in the attachment.

    Image Motion Analysis using Inertial Sensors


    Camera-independent learning and image quality assessment for super-resolution

    An increasing number of applications require high-resolution images in situations where access to the sensor and knowledge of its specifications are limited. In this thesis, the problem of blind super-resolution is addressed, here defined as the estimation of a high-resolution image from one or more low-resolution inputs, under the condition that the degradation model parameters are unknown. The assessment of super-resolved results, using objective measures of image quality, is also addressed.

    Learning-based methods have been successfully applied to the single-frame super-resolution problem in the past. However, sensor characteristics such as the Point Spread Function (PSF) must often be known. In this thesis, a learning-based approach is adapted to work without knowledge of the PSF, thus making the framework camera-independent. However, the goal is not only to super-resolve an image under this limitation, but also to provide an estimate of the best PSF, consisting of a theoretical model with one unknown parameter.

    In particular, two extensions of a method performing belief propagation on a Markov Random Field are presented. The first method finds the best PSF parameter by searching for the minimum mean distance between training examples and patches from the input image. In the second method, the best PSF parameter and the super-resolution result are found simultaneously by providing a range of possible PSF parameters from which the super-resolution algorithm chooses. For both methods, a first estimate is obtained through blind deconvolution, and an uncertainty is calculated in order to restrict the search.

    Both camera-independent adaptations are compared and analyzed in various experiments, and a set of key parameters is varied to determine their effect on both the super-resolution and the PSF parameter recovery results. The use of quality measures is thus essential to quantify the improvements obtained from the algorithms. A set of measures is chosen that represents different aspects of image quality: signal fidelity, perceptual quality, and the localization and scale of edges.

    Results indicate that both methods improve similarity to the ground truth and can in general refine the initial PSF parameter estimate towards the true value. Furthermore, the similarity measure results show that the chosen learning-based framework consistently improves a measure designed for perceptual quality.
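    The first extension described above lends itself to a simple illustration: choose the PSF parameter whose simulated low-resolution training patches lie closest, on average, to patches of the observed input. The sketch below is a minimal, hypothetical rendition of that idea, assuming a Gaussian PSF parameterized by its width and a plain grid search; the function names, patch sizes, and degradation model are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_patches(img, size=5, step=3):
    """Collect flattened square patches from a 2-D image."""
    patches = []
    for i in range(0, img.shape[0] - size + 1, step):
        for j in range(0, img.shape[1] - size + 1, step):
            patches.append(img[i:i + size, j:j + size].ravel())
    return np.asarray(patches)

def mean_nn_distance(query, reference):
    """Mean distance from each query patch to its nearest reference patch."""
    return float(np.mean([np.min(np.linalg.norm(reference - q, axis=1)) for q in query]))

def estimate_psf_width(lr_image, hr_examples, candidate_sigmas, zoom=2):
    """Grid search: pick the Gaussian PSF width whose simulated low-resolution
    training patches lie closest, on average, to patches of the observed input."""
    lr_patches = extract_patches(lr_image)
    best_sigma, best_score = None, np.inf
    for sigma in candidate_sigmas:
        # Simulate the assumed degradation: blur each HR example, then decimate.
        simulated = [gaussian_filter(hr, sigma)[::zoom, ::zoom] for hr in hr_examples]
        ref_patches = np.vstack([extract_patches(s) for s in simulated])
        score = mean_nn_distance(lr_patches, ref_patches)
        if score < best_score:
            best_sigma, best_score = sigma, score
    return best_sigma
```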

    Nonrigid Surface Tracking, Analysis and Evaluation


    Relating Spontaneous Activity and Cognitive States via NeuroDynamic Modeling

    Stimulus-free brain dynamics form the basis of current knowledge concerning functional integration and segregation within the human brain. These relationships are typically described in terms of resting-state brain networks: regions which spontaneously coactivate. However, despite the interest in the anatomical mechanisms and biobehavioral correlates of stimulus-free brain dynamics, little is known regarding the relation between spontaneous brain dynamics and task-evoked activity. In particular, no computational framework has been previously proposed to unite spontaneous and task dynamics under a single, data-driven model. Model development in this domain will provide new insight regarding the mechanisms by which exogenous stimuli and intrinsic neural circuitry interact to shape human cognition. The current work bridges this gap by deriving and validating a new technique, termed Mesoscale Individualized NeuroDynamic (MINDy) modeling, to estimate large-scale neural population models for individual human subjects using resting-state fMRI. A combination of ground-truth simulations and test-retest data is used to demonstrate that the approach is robust to various forms of noise, motion, and data processing choices. The MINDy formalism is then extended to simultaneously estimate neural population models and the neurovascular coupling which gives rise to BOLD fMRI. In doing so, I develop and validate a new optimization framework for simultaneously estimating system states and parameters. Lastly, MINDy models derived from resting-state data are used to predict task-based activity and remove the effects of intrinsic dynamics. Removing the MINDy model predictions from task fMRI enables separation of exogenously-driven components of activity from their indirect consequences (the model predictions). Results demonstrate that removing the predicted intrinsic dynamics improves detection of event-triggered and sustained responses across four cognitive tasks. Together, these findings validate the MINDy framework and demonstrate that MINDy models predict brain dynamics across contexts. These dynamics contribute to the variance of task-evoked brain activity between subjects, and removing the influence of intrinsic dynamics improves the estimation of task effects.
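    The final step, subtracting model-predicted intrinsic dynamics from task fMRI before estimating task effects, can be illustrated with a toy sketch. A generic recurrent population model stands in for the full MINDy parameterization; the Euler integration, tanh transfer function, and all variable names are illustrative assumptions rather than the thesis implementation.

```python
import numpy as np

def simulate_population_model(x0, W, D, n_steps, dt=0.72, transfer=np.tanh):
    """Forward-simulate a generic recurrent population model
    dx/dt = W @ f(x) - D * x  (a stand-in for a MINDy-style model)."""
    x = np.asarray(x0, dtype=float)
    trajectory = [x.copy()]
    for _ in range(n_steps - 1):
        x = x + dt * (W @ transfer(x) - D * x)
        trajectory.append(x.copy())
    return np.stack(trajectory)                      # (timepoints, regions)

def task_betas_after_removal(task_bold, predicted_intrinsic, design_matrix):
    """Subtract the predicted intrinsic component, then fit task regressors
    to the residual signal by ordinary least squares (betas per region)."""
    residual = task_bold - predicted_intrinsic       # (timepoints, regions)
    betas, *_ = np.linalg.lstsq(design_matrix, residual, rcond=None)
    return betas
```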

    Precise measurements of time delays in gravitationally lensed quasars for competitive and independent determination of the Hubble constant

    During the last decades, by virtue of observations, the Standard Cosmological Model has emerged, providing a description of the Universe's evolution using a minimal set of independent constraints: the cosmological parameters. Among them is the expansion rate of the Universe, the so-called Hubble constant or H0, first measured by Lemaître in 1927. The century that followed this cornerstone measurement saw numerous attempts to refine the initial value, and for good reason: a precise and independent measurement of H0 brings strong constraints on cosmological models. It could notably help astronomers to better understand the nature of dark energy, making it one of the most sought-after prizes in modern cosmology. My work at the Laboratory of Astrophysics of EPFL is embedded in this context. I am part of the COSMOGRAIL and H0LiCOW collaborations, which aim to measure the Hubble constant with the highest level of precision using time-delay cosmography, a method based on the theory of strong gravitational lensing. This effect occurs when an observer looks at a light source located behind a massive foreground galaxy. The mass of the galaxy acts similarly to an optical lens and focuses the light rays emitted by the source. As a consequence, multiple lensed images of the source appear around the lens galaxy. If the luminosity of the source changes over time, the variations are seen in all the lensed images, but with a temporal delay due to the different travel paths of the light rays. By carefully monitoring the luminosity variations of each lensed image, one can precisely measure the temporal delays between them. Combined with high-resolution observations of the foreground galaxy and its surroundings, this makes it possible to directly measure the Hubble constant under the sole assumption that General Relativity is correct. For more than 13 years, COSMOGRAIL has monitored dozens of lensed quasars to produce high-quality light curves and time-delay measurements. During the last four years, I took care of the monitoring schedule, continuous data reduction and time-delay measurements through the development of curve-shifting techniques. I produced light curves and measured time delays on a variety of lenses. After more than a decade of endeavours, COSMOGRAIL and H0LiCOW finally revealed their measurement of the expansion rate of the Universe from a blind analysis of three lensed sources. I had the privilege to be the lead author of the publication presenting our measurement of the Hubble constant, H0 = 71.9 +2.4/-3.0 km/s/Mpc, a 3.8% precision in the Standard Cosmological Model. Such a precision allows a direct comparison with the results of the distance ladder technique in the local Universe and the Planck satellite Cosmic Microwave Background observations in the distant Universe, both of which are currently in a significant tension of unknown origin.
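    The curve-shifting idea can be illustrated with a minimal sketch: shift one image's light curve in time, correct for a constant flux offset, and keep the delay that best superimposes it on the other image's curve. The actual COSMOGRAIL curve-shifting techniques also account for microlensing and measurement noise; the function below, its names, and the simple least-squares statistic are illustrative assumptions.

```python
import numpy as np

def estimate_time_delay(t_a, flux_a, t_b, flux_b, candidate_delays, min_overlap=10):
    """Grid search over candidate delays (in days): shift curve B, interpolate it
    onto the epochs of curve A, remove a constant offset, and score the residuals."""
    best_delay, best_score = None, np.inf
    for delay in candidate_delays:
        shifted_t_b = t_b + delay
        # Use only epochs of A that fall inside the shifted coverage of B.
        mask = (t_a >= shifted_t_b.min()) & (t_a <= shifted_t_b.max())
        if mask.sum() < min_overlap:
            continue
        interp_b = np.interp(t_a[mask], shifted_t_b, flux_b)
        offset = np.mean(flux_a[mask] - interp_b)        # constant flux offset
        score = np.mean((flux_a[mask] - interp_b - offset) ** 2)
        if score < best_score:
            best_delay, best_score = delay, score
    return best_delay

# Example usage: scan delays from -100 to +100 days in 0.5-day steps.
# delay_ab = estimate_time_delay(t_a, f_a, t_b, f_b, np.arange(-100.0, 100.0, 0.5))
```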

    Untangling hotel industry’s inefficiency: An SFA approach applied to a renowned Portuguese hotel chain

    The present paper explores the technical efficiency of four hotels from the Teixeira Duarte Group, a renowned Portuguese hotel chain. An efficiency ranking of these four hotel units located in Portugal is established using Stochastic Frontier Analysis. This methodology makes it possible to discriminate between measurement error and systematic inefficiencies in the estimation process, enabling investigation of the main causes of inefficiency. Several suggestions concerning efficiency improvement are made for each hotel studied.
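    For reference, a minimal sketch of the normal / half-normal stochastic production frontier that underlies SFA is given below: output is modelled as a frontier plus symmetric noise minus a one-sided inefficiency term, and the parameters are estimated by maximum likelihood. This is a generic illustration, not the paper's implementation, and the hotel-specific inputs and outputs are assumed.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(params, y, X):
    """Negative log-likelihood of the normal / half-normal stochastic frontier
    y = X @ beta + v - u, with v ~ N(0, sigma_v^2) and u ~ |N(0, sigma_u^2)|."""
    k = X.shape[1]
    beta = params[:k]
    sigma_v, sigma_u = np.exp(params[k]), np.exp(params[k + 1])   # keep positive
    sigma = np.sqrt(sigma_v ** 2 + sigma_u ** 2)
    lam = sigma_u / sigma_v
    eps = y - X @ beta
    ll = (np.log(2.0 / sigma)
          + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -np.sum(ll)

def fit_frontier(y, X):
    """Maximum-likelihood fit; starting values taken from an OLS regression."""
    beta0, *_ = np.linalg.lstsq(X, y, rcond=None)
    x0 = np.concatenate([beta0, [0.0, 0.0]])
    result = minimize(neg_loglik, x0, args=(y, X), method="Nelder-Mead")
    return result.x
```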

    Statistical learning for the personalization of cardiac models from imaging data

    This thesis focuses on the calibration of an electromechanical model of the heart from patient-specific, image-based data, and on the related task of extracting the cardiac motion from 4D images. Long-term perspectives for personalized computer simulation of the cardiac function include aid to diagnosis, aid to therapy planning, and prevention of risks. To this end, we explore tools and possibilities offered by statistical learning. To personalize cardiac mechanics, we introduce an efficient framework coupling machine learning with an original statistical representation of shape and motion based on 3D+t currents. The method relies on a reduced mapping between the space of mechanical parameters and the space of cardiac motion. The second focus of the thesis is cardiac motion tracking, a key processing step in the calibration pipeline, with an emphasis on the quantification of uncertainty. We develop a generic sparse Bayesian model of image registration with three main contributions: an extended image similarity term, the automated tuning of registration parameters, and uncertainty quantification. We propose an approximate inference scheme that is tractable on 4D clinical data. Finally, we evaluate the quality of the uncertainty estimates returned by the approximate inference scheme, comparing its predictions with those of an inference scheme developed on the grounds of reversible jump MCMC. We provide more insight into the theoretical properties of the sparse structured Bayesian model and into the empirical behaviour of both inference schemes.
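    The reduced mapping between mechanical parameters and cardiac motion can be illustrated with a generic sketch: compress a vectorized motion descriptor with PCA and regress mechanical parameters from the reduced coordinates. This stand-in uses plain PCA and ridge regression rather than the thesis's currents-based construction; the function names, dimensions, and choice of regressor are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

def build_reduced_mapping(motion_descriptors, mech_parameters, n_components=10):
    """Learn a reduced mapping from a motion representation (one vectorized
    descriptor per simulation) to mechanical parameters.
    motion_descriptors: (n_simulations, n_features)
    mech_parameters:    (n_simulations, n_params)"""
    pca = PCA(n_components=n_components).fit(motion_descriptors)
    reduced = pca.transform(motion_descriptors)
    regressor = Ridge(alpha=1.0).fit(reduced, mech_parameters)
    return pca, regressor

def personalize(pca, regressor, observed_motion_descriptor):
    """Estimate patient-specific mechanical parameters from an observed motion descriptor."""
    z = pca.transform(observed_motion_descriptor.reshape(1, -1))
    return regressor.predict(z)[0]
```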

    Multiresolution image models and estimation techniques
