
    Statistical performance analysis of a fast super-resolution technique using noisy translations

    It is well known that the registration process is a key step in super-resolution reconstruction. In this work, we propose to use a piezoelectric system, easily adaptable to most microscopes and telescopes, to control their motion accurately (down to nanometers) and thus acquire multiple images of the same scene at different controlled positions. A fast super-resolution algorithm [eh01] can then be used for efficient reconstruction. In this setting, the nominal set of $r^2$ images for a resolution enhancement factor $r$ is generally not enough to obtain satisfying results, because of the random inaccuracy of the positioning system. We therefore propose to take several images around each reference position. We study the error produced by the super-resolution algorithm due to spatial uncertainty as a function of the number of images per position, and obtain a lower bound on the number of images necessary to ensure a given error upper bound with probability higher than a desired confidence level.
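    As a rough illustration of the acquisition strategy (not the algorithm of [eh01]; the scene, the number of frames per position and the positioning-error level are all assumed), the Python sketch below takes several frames around each reference sub-pixel shift of a toy 1-D scene and averages them, showing that the averaged frame is less affected by the random positioning error than a single frame:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def acquire(shift, n=256):
        # Toy 1-D "scene" sampled with a sub-pixel shift (in pixel units)
        x = np.arange(n)
        return np.sin(2 * np.pi * (x + shift) / 37.0) + 0.5 * np.sin(2 * np.pi * (x + shift) / 11.0)

    r = 2             # resolution enhancement factor; r**2 reference shifts in 2-D, r in this 1-D toy
    m = 16            # number of images acquired around each reference position (assumed)
    sigma_pos = 0.05  # std of the random positioning error, in pixels (assumed)

    for s in [i / r for i in range(r)]:   # 1-D analogue of the grid of reference shifts
        ideal = acquire(s)
        frames = np.stack([acquire(s + sigma_pos * rng.standard_normal()) for _ in range(m)])
        averaged = frames.mean(axis=0)
        single = acquire(s + sigma_pos * rng.standard_normal())
        print(f"shift {s:.2f}: single-frame error {np.abs(single - ideal).max():.3f}, "
              f"{m}-frame average error {np.abs(averaged - ideal).max():.3f}")
    ```

    The averaged frame still carries a small blur bias inherited from the shift distribution, so averaging alone cannot drive the error to zero; hence the interest of quantifying the error as a function of the number of images per position.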

    Quantifying and containing the curse of high resolution coronal imaging

    Future missions such as Solar Orbiter (SO), InterHelioprobe, or Solar Probe aim at approaching the Sun closer than ever before, carrying on board high resolution imagers (HRI) with a subsecond cadence and a pixel area of about $(80~\mathrm{km})^2$ at the Sun during perihelion. In order to guarantee their scientific success, it is necessary to evaluate whether the photon counts available at this resolution and cadence will provide a sufficient signal-to-noise ratio (SNR). We perform a first step in this direction by analyzing and characterizing the spatial intermittency of Quiet Sun images through a multifractal analysis. We identify the parameters that specify the scale-invariance behavior. This identification then allows us to select a family of multifractal processes, namely Compound Poisson Cascades, that can synthesize artificial images having some of the scale-invariance properties observed in the recorded images. The prevalence of self-similarity in Quiet Sun coronal images makes it relevant to study the ratio between the SNR present in SoHO/EIT images and in coarsened images. SoHO/EIT images thus play the role of 'high resolution' images, whereas the 'low resolution' coarsened images are rebinned so as to simulate a smaller angular resolution and/or a larger distance to the Sun. For a fixed difference in angular resolution and in spacecraft-Sun distance, we determine the proportion of pixels whose SNR is preserved at high resolution given a particular increase in effective area. If scale invariance continues to prevail at smaller scales, the conclusions reached with SoHO/EIT images can be transposed to the situation where the resolution is increased from SoHO/EIT to SO/HRI resolution at perihelion.
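    As a back-of-the-envelope companion to this kind of analysis (purely illustrative: the image is synthetic, and the rebinning factor, effective-area gain and SNR threshold are assumed rather than taken from the paper), the sketch below counts the fraction of pixels whose photon-noise SNR stays above a threshold when the pixel area shrinks and the effective area grows:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic photon-count image standing in for a reference (EIT-like) frame
    counts_ref = rng.gamma(shape=2.0, scale=50.0, size=(128, 128))   # ~100 counts/pixel on average

    rebin = 4            # linear factor between the coarse and fine pixel grids (assumed)
    area_gain = 8.0      # assumed increase in effective area of the future imager
    snr_threshold = 5.0  # SNR level considered sufficient (assumed)

    # At the finer sampling each pixel collects ~1/rebin**2 of the photons,
    # while the larger effective area multiplies the counts by area_gain
    counts_fine = counts_ref / rebin**2 * area_gain

    # For Poisson photon noise the per-pixel SNR is sqrt(counts)
    preserved = np.mean(np.sqrt(counts_fine) >= snr_threshold)
    print(f"fraction of pixels with SNR >= {snr_threshold} at the finer resolution: {preserved:.2%}")
    ```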

    Spectral analysis of stationary random bivariate signals

    A novel approach to the spectral analysis of stationary random bivariate signals is proposed. Using the Quaternion Fourier Transform, we introduce a quaternion-valued spectral representation of random bivariate signals seen as complex-valued sequences. This makes it possible to define a scalar quaternion-valued spectral density for bivariate signals. This spectral density can be meaningfully interpreted in terms of frequency-dependent polarization attributes. A natural decomposition of any random bivariate signal into unpolarized and polarized components is introduced. Nonparametric spectral density estimation is investigated, and we introduce the polarization periodogram of a random bivariate signal. Numerical experiments support our theoretical analysis, illustrating the relevance of the approach on synthetic data.
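    A much simpler relative of this quaternion-valued spectral density is the rotary periodogram of the complex embedding x + iy of the bivariate signal, where the asymmetry between positive and negative frequencies already reveals circular polarization. The sketch below uses this standard construction on synthetic data (all parameters assumed); it is not the quaternion formalism of the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, fs = 4096, 1.0
    t = np.arange(n) / fs

    # Synthetic bivariate signal: a circularly polarized tone buried in unpolarized noise
    f0 = 0.1
    x = np.cos(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(n)
    y = np.sin(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(n)

    u = x + 1j * y                      # complex embedding of the bivariate signal
    U = np.fft.fft(u) / np.sqrt(n)
    freqs = np.fft.fftfreq(n, d=1 / fs)
    rotary = np.abs(U) ** 2             # rotary periodogram (counter-clockwise vs clockwise power)

    # Power near +f0 versus -f0: a strong asymmetry indicates circular polarization at f0
    band = np.abs(np.abs(freqs) - f0) < 0.005
    pos = rotary[band & (freqs > 0)].sum()
    neg = rotary[band & (freqs < 0)].sum()
    print(f"power near +f0: {pos:.1f}, near -f0: {neg:.1f}")
    ```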

    Quantitative control of the error bounds of a fast super-resolution technique for microscopy and astronomy

    While the registration step is often problematic for super-resolution, many microscopes and telescopes are now equipped with a piezoelectric mechanical system which permits accurate control of their motion (down to nanometers). One can therefore use such devices to acquire multiple images of the same scene at various controlled positions. A fast super-resolution algorithm [1] can then be used for efficient super-resolution. However, the minimal set of $r^2$ images for a resolution enhancement factor $r$ is generally not sufficient to obtain good results. We propose to take several images at positions randomly distributed close to each reference position. We study the number of images necessary to control the error resulting from the super-resolution algorithm of [1] due to the uncertainty on positions. The main result is a lower bound on the number of images required to meet a given error upper bound with probability higher than a desired confidence level.
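    For intuition only, a generic Chebyshev-type count shows what such a lower bound can look like if each frame contributed an i.i.d. positioning-induced error; this is not the bound derived in the paper, and all numbers are hypothetical:

    ```python
    import math

    def min_images_chebyshev(sigma, eps, confidence):
        # Generic Chebyshev-style count (illustrative only, not the paper's bound):
        # if each frame contributes an i.i.d. error of standard deviation `sigma`,
        # the average of m frames satisfies P(|error| > eps) <= sigma**2 / (m * eps**2),
        # so requiring this probability to be at most 1 - confidence bounds m from below.
        return math.ceil(sigma**2 / (eps**2 * (1.0 - confidence)))

    # Hypothetical numbers: per-frame error std 0.05 pixel, target error 0.02 pixel, 95% confidence
    print(min_images_chebyshev(sigma=0.05, eps=0.02, confidence=0.95))   # 125 images per position
    ```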

    A Bayesian fusion model for space-time reconstruction of finely resolved velocities in turbulent flows from low resolution measurements

    The study of turbulent flows calls for measurements with high resolution both in space and in time. We propose a new approach to reconstruct High-Temporal-High-Spatial resolution velocity fields by combining two sources of information that are well resolved either in space or in time: the Low-Temporal-High-Spatial (LTHS) and the High-Temporal-Low-Spatial (HTLS) resolution measurements. In the framework of joint design of sensing and data post-processing, this work extensively investigates a Bayesian reconstruction approach using a simulated database. A Bayesian fusion model is developed to solve the inverse problem of data reconstruction. The model uses a Maximum A Posteriori estimate, which yields the most probable field given the measurements. The direct numerical simulation (DNS) of a wall-bounded turbulent flow at moderate Reynolds number is used to validate and assess the performance of the present approach. Low resolution measurements are subsampled in time and space from the fully resolved data. Reconstructed velocities are compared to the reference DNS to estimate the reconstruction errors. The model is compared to other conventional methods such as Linear Stochastic Estimation and cubic spline interpolation. Results show the superior accuracy of the proposed method in all configurations. Further investigation of the model performance over various ranges of scales demonstrates its robustness. Numerical experiments also make it possible to estimate the maximum information level that can be expected given the limitations of experimental instruments.
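    The core fusion step can be pictured with a minimal linear-Gaussian MAP estimate for a single snapshot: a coarse (HTLS-like) measurement corrects a prior mean standing in for the time interpolation of the LTHS snapshots. Everything below (grid size, observation operator, noise levels) is assumed for illustration; this is not the paper's full space-time model:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    n_fine = 64                                   # fine spatial grid for one snapshot (assumed)
    H = np.zeros((n_fine // 4, n_fine))           # coarse sensor averages blocks of 4 fine points
    for i in range(H.shape[0]):
        H[i, 4 * i:4 * i + 4] = 0.25

    true = 0.2 * np.cumsum(rng.standard_normal(n_fine))       # toy "true" velocity profile
    prior_mean = true + 0.3 * rng.standard_normal(n_fine)     # stand-in for the time-interpolated LTHS estimate
    y = H @ true + 0.05 * rng.standard_normal(H.shape[0])     # noisy coarse (HTLS-like) measurement

    sigma_prior, sigma_noise = 0.3, 0.05
    # Linear-Gaussian MAP estimate (also the posterior mean): correct the prior in the range of H
    S = sigma_prior**2 * H @ H.T + sigma_noise**2 * np.eye(H.shape[0])
    z_map = prior_mean + sigma_prior**2 * H.T @ np.linalg.solve(S, y - H @ prior_mean)

    print("prior RMSE:", np.sqrt(np.mean((prior_mean - true) ** 2)))
    print("MAP   RMSE:", np.sqrt(np.mean((z_map - true) ** 2)))
    ```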

    A diffusion strategy for distributed dictionary learning

    We consider the problem of a set of nodes which is required to collectively learn a common dictionary from noisy measurements. This distributed dictionary learning approach may be useful in several contexts, including sensor networks. Diffusion cooperation schemes have been proposed to estimate a consensus solution to distributed linear regression. This work proposes a diffusion-based adaptive dictionary learning strategy. Each node receives measurements which may or may not be shared with its neighbors. All nodes cooperate with their neighbors by sharing their local dictionary to estimate a common representation. In a diffusion approach, the resulting algorithm corresponds to a distributed alternate optimization. Beyond dictionary learning, this strategy could be adapted to many matrix factorization problems in various settings. We illustrate its efficiency on some numerical experiments, including the difficult problem of blind hyperspectral image unmixing.
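    A toy version of such a diffusion strategy, with an adapt step (local sparse coding and a gradient step on the dictionary) followed by a combine step (averaging the neighbors' dictionaries), can be sketched as follows. The network, combination weights and sparsification rule are simplifying assumptions, not the exact algorithm of the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    n_nodes, n_atoms, dim, n_samples = 4, 8, 16, 100
    A = np.full((n_nodes, n_nodes), 1.0 / n_nodes)   # combination weights (fully connected network assumed)

    # Shared ground-truth dictionary; each node observes its own noisy sparse mixtures of its atoms
    D_true = rng.standard_normal((dim, n_atoms))
    D_true /= np.linalg.norm(D_true, axis=0)
    Y = [D_true @ (rng.standard_normal((n_atoms, n_samples)) * (rng.random((n_atoms, n_samples)) < 0.3))
         + 0.01 * rng.standard_normal((dim, n_samples)) for _ in range(n_nodes)]

    D = [rng.standard_normal((dim, n_atoms)) for _ in range(n_nodes)]   # local dictionaries
    mu = 0.05                                                           # dictionary step size (assumed)

    for _ in range(200):
        # Adapt: local sparse coding (least squares + crude hard threshold), then a gradient step on D_k
        psi = []
        for k in range(n_nodes):
            X = np.linalg.lstsq(D[k], Y[k], rcond=None)[0]
            X[np.abs(X) < 0.1] = 0.0
            grad = (D[k] @ X - Y[k]) @ X.T / n_samples
            psi.append(D[k] - mu * grad)
        # Combine (diffusion): each node averages the intermediate dictionaries of its neighbors
        D = [sum(A[l, k] * psi[l] for l in range(n_nodes)) for k in range(n_nodes)]
        D = [Dk / np.maximum(np.linalg.norm(Dk, axis=0), 1e-12) for Dk in D]

    X0 = np.linalg.lstsq(D[0], Y[0], rcond=None)[0]
    print("relative residual at node 0:", np.linalg.norm(D[0] @ X0 - Y[0]) / np.linalg.norm(Y[0]))
    ```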

    Simulation of wall pressure fields using random processes

    The detailed investigation of the aerodynamic noise inside a vehicle calls for a good knowledge of the various components of the wall pressure field produced by the external flow. Experimental measurements give access to the superposition of the acoustic and advected turbulent fields. We propose a method to simulate random fields that reproduce the essential statistical properties (distribution, spatio-temporal correlations) of the acoustic and advected pressure fields separately. The simulations rely on a spectral synthesis based on the usual empirical spatio-temporal correlation models from the literature. The resulting random fields are numerically cheap to generate and reproduce the properties of measured signals quite well. We simulate the result of a measurement by an array of sensors. These simulations will make it possible to study the measurement device with respect to spectral aliasing and component separation.
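    The spectral synthesis idea can be illustrated in its simplest 1-D form: colour complex white noise with the square root of a target power spectral density and transform back to the time domain. The target shape, sampling frequency and signal length below are assumed placeholders rather than an actual wall-pressure model:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    n, fs = 8192, 1000.0                     # length and sampling frequency in Hz (assumed)
    freqs = np.fft.rfftfreq(n, d=1 / fs)

    # Target one-sided power spectral density: an assumed broadband shape with a roll-off,
    # standing in for an empirical wall-pressure spectrum model from the literature
    psd = 1.0 / (1.0 + (freqs / 50.0) ** 2)

    # Spectral synthesis: colour complex white noise by the square root of the target PSD
    white = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
    spectrum = np.sqrt(psd * fs * n / 2.0) * white
    signal = np.fft.irfft(spectrum, n=n)     # real-valued synthetic pressure signal

    # Sanity check: the periodogram of the synthetic signal should follow the target PSD on average
    periodogram = np.abs(np.fft.rfft(signal)) ** 2 / (fs * n)
    print("mean empirical/target PSD ratio (mid-band):", np.mean(periodogram[10:1000] / psd[10:1000]))
    ```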

    Statistical characterization of a nanotube assembly in microscopy imaging

    In the field of nanotechnology, many devices incorporating nano-objects are being developed. The various fabrication processes are generally validated using microscopy images that are often interpreted in a rather qualitative way. A systematic and automatic characterization of these observations would allow quantitative and objective comparisons. We are particularly interested in carbon nanotube nanomembranes intended for instrumentation applications. We propose a complete processing chain for scanning electron microscopy images. The main steps are anisotropic denoising, contrast enhancement, a segmentation based on mathematical morphology, and an estimation of the local orientation of the filaments using the structure tensor. We also address the extraction of individual filaments. This approach requires no user intervention and can be run automatically on sets of several hundred images. In particular, we obtain for the first time the orientation histograms of the nanotubes forming the nanomembranes, and thus an objective measure of the quality of the nanotube alignment, which is essential for the physics of the sensor.
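    The orientation-estimation step can be sketched with the structure tensor as implemented in scikit-image; the image below is a random placeholder for a denoised SEM frame, and the smoothing scale, gradient mask and bin count are assumptions:

    ```python
    import numpy as np
    from skimage import feature, filters

    rng = np.random.default_rng(6)
    img = rng.random((256, 256))     # placeholder for a denoised, contrast-enhanced SEM frame

    # Structure tensor and local orientation (angle of the dominant local gradient;
    # the filament direction is perpendicular to it)
    Arr, Arc, Acc = feature.structure_tensor(img, sigma=2.0, order="rc")
    orientation = 0.5 * np.arctan2(2.0 * Arc, Acc - Arr)      # radians, in (-pi/2, pi/2]

    # Orientation histogram restricted to pixels with a significant gradient magnitude
    grad = filters.sobel(img)
    mask = grad > np.percentile(grad, 75)
    hist, edges = np.histogram(np.degrees(orientation[mask]), bins=36, range=(-90, 90))
    print(hist)
    ```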

    Convergence of a Finite Volume Scheme for a Corrosion Model

    In this paper, we study the numerical approximation of a system of partial differential equations describing the corrosion of an iron-based alloy in a nuclear waste repository. In particular, we are interested in the convergence of a numerical scheme consisting of an implicit Euler scheme in time and a Scharfetter-Gummel finite volume scheme in space.
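    The space discretization can be illustrated by the classical Scharfetter-Gummel two-point flux, which blends diffusion and drift through the Bernoulli function. The sign convention and the test values below are choices made for the sketch, not details of the corrosion model itself:

    ```python
    import numpy as np

    def bernoulli(x):
        # Bernoulli function B(x) = x / (exp(x) - 1), with the removable singularity at 0 handled
        x = np.asarray(x, dtype=float)
        small = np.abs(x) < 1e-8
        safe = np.where(small, 1.0, x)
        return np.where(small, 1.0 - x / 2.0, safe / np.expm1(safe))

    def sg_flux(c_left, c_right, dpsi, h):
        # Scharfetter-Gummel two-point flux for d_t c + d_x(-d_x c + c d_x psi) = 0,
        # where dpsi = psi_right - psi_left is the potential drop across the cell edge
        return (bernoulli(-dpsi) * c_left - bernoulli(dpsi) * c_right) / h

    # With no potential drop the flux reduces to plain two-point diffusion:
    print(sg_flux(2.0, 1.0, dpsi=0.0, h=0.1))   # (c_left - c_right) / h = 10.0
    # A potential drop biases the flux through the Bernoulli weights:
    print(sg_flux(2.0, 1.0, dpsi=0.5, h=0.1))
    ```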