
    Fast and easy blind deblurring using an inverse filter and PROBE

    PROBE (Progressive Removal of Blur Residual) is a recursive framework for blind deblurring. With an elementary modified inverse filter at its core, PROBE's experimental performance meets or exceeds the state of the art, both visually and quantitatively. Remarkably, PROBE lends itself to analysis that reveals its convergence properties. PROBE is motivated by recent ideas on progressive blind deblurring, but breaks away from previous research in its simplicity, speed, performance and potential for analysis. PROBE is neither a functional-minimization approach nor an open-loop sequential method (blur-kernel estimation followed by non-blind deblurring). It is a feedback scheme, deriving its unique strength from its closed-loop architecture rather than from the accuracy of its algorithmic components.
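    The abstract does not spell out the algorithm, so the following NumPy sketch is only a rough illustration of the closed-loop idea, not PROBE itself: it assumes, purely for illustration, that the residual blur at each pass can be modelled as an isotropic Gaussian of shrinking width, and partially removes it with a regularized ("modified") inverse filter. The Gaussian model, the shrink schedule and all parameter values are assumptions, not the authors' method.

    ```python
    import numpy as np

    def gaussian_psf(shape, sigma):
        """Centered isotropic Gaussian PSF, normalized to unit sum."""
        y, x = np.indices(shape)
        cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
        psf = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
        return psf / psf.sum()

    def modified_inverse_filter(img, psf, eps=1e-2):
        """Regularized ("modified") inverse filter applied in the Fourier domain."""
        H = np.fft.fft2(np.fft.ifftshift(psf))
        G = np.fft.fft2(img)
        return np.real(np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + eps)))

    def probe_like_deblur(blurred, n_iter=5, sigma0=3.0, shrink=0.6, eps=1e-2):
        """Closed-loop sketch: repeatedly peel off an assumed Gaussian blur residual."""
        est, sigma = blurred.astype(float), sigma0
        for _ in range(n_iter):
            psf = gaussian_psf(est.shape, sigma)   # assumed residual-blur model
            est = modified_inverse_filter(est, psf, eps)
            sigma *= shrink                        # assume the residual narrows each pass
        return est
    ```

    The appeal of the feedback structure, as the abstract frames it, is that each pass only needs to remove part of the remaining blur, so even a crude per-pass component can drive the loop toward a sharp estimate.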

    Determination of tip transfer function for quantitative MFM using frequency domain filtering and least squares method

    Magnetic force microscopy has unsurpassed capabilities in the analysis of nanoscale and microscale magnetic samples and devices. As with other Scanning Probe Microscopy techniques, however, quantitative analysis remains a challenge. Despite substantial theoretical and practical progress in this area, present methods are seldom used, owing to their complexity and to the lack of a systematic understanding of the related uncertainties and of recommended best practice. Use of the Tip Transfer Function (TTF) is a key concept in making Magnetic Force Microscopy measurements quantitative. We present a numerical study of several aspects of TTF reconstruction using multilayer samples with perpendicular magnetisation. We address the choice of numerical approach, the impact of non-periodicity and windowing, suitable conventions for data normalisation and units, criteria for the choice of regularisation parameter, and experimental effects observed in real measurements. We present a simple regularisation parameter selection method based on TTF width and verify this approach via numerical experiments. Examples of TTF estimation are shown on both 2D and 3D experimental datasets. We give recommendations on best practices for robust TTF estimation, including the choice of windowing function, measurement strategy and the handling of experimental error sources. A method for synthetic MFM data generation, suitable for large-scale numerical experiments, is also presented.
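    As a sketch of the frequency-domain filtering and least-squares step described above: assuming the measured MFM map is (approximately) the TTF convolved with a known sample stray-field quantity, the regularized least-squares TTF has a per-frequency closed form. The convolution model, the Hann window and the default regularisation value below are illustrative assumptions; the paper's own conventions and units are not reproduced.

    ```python
    import numpy as np

    def estimate_ttf(measured, sample_field, alpha=1e-3):
        """Tikhonov-regularized least-squares TTF estimate in the Fourier domain.

        measured:     2D MFM signal map (e.g., frequency-shift image)
        sample_field: 2D map of the known/simulated sample stray-field quantity
        alpha:        regularization parameter (trades noise against TTF width)
        """
        # A separable Hann window suppresses spectral leakage from non-periodic data.
        w = np.hanning(measured.shape[0])[:, None] * np.hanning(measured.shape[1])[None, :]
        M = np.fft.fft2(measured * w)
        S = np.fft.fft2(sample_field * w)
        # Per-frequency closed-form minimizer of ||S*T - M||^2 + alpha*||T||^2.
        T = np.conj(S) * M / (np.abs(S) ** 2 + alpha)
        return np.fft.fftshift(np.real(np.fft.ifft2(T)))  # center the recovered TTF
    ```

    One way to mirror the width-based selection the abstract mentions would be to sweep alpha over several decades and keep the smallest value at which the width of the recovered TTF stops changing appreciably.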

    aski: full-sky lensing map-making algorithms

    Within the context of upcoming full-sky lensing surveys, the edge-preserving non-linear algorithm aski (All-Sky κ Inversion) is presented. Using the framework of Maximum A Posteriori inversion, it aims at recovering the optimal full-sky convergence map from noisy surveys with masks. aski contributes two steps: (i) CCD images of possibly crowded galactic fields are deblurred using automated edge-preserving deconvolution; (ii) once the reduced shear is estimated using standard techniques, the partially masked convergence map is also inverted via an edge-preserving method. The efficiency of the deblurring of the image is quantified by the relative gain in the quality factor of the reduced shear, as estimated by SExtractor. Cross-validation as a function of the number of stars removed yields an automatic estimate of the optimal level of regularization for the deconvolution of the galaxies. It is found that when the observed field is crowded, this gain can be quite significant for realistic ground-based 8-m class surveys. The most significant improvement occurs when both positivity and edge-preserving ℓ1−ℓ2 penalties are imposed during the iterative deconvolution. The quality of the convergence inversion is investigated on noisy maps derived from the horizon-4π N-body simulation, with a signal-to-noise ratio (S/N) within the range ℓcut = 500–2500, with and without Galactic cuts, and quantified using one-point statistics (S3 and S4), power spectra, cluster counts, peak patches and the skeleton. It is found that (i) the reconstruction is able to interpolate and extrapolate within the Galactic cuts/non-uniform noise; (ii) its sharpness-preserving penalization avoids strong biasing near the clusters of the map; (iii) it reconstructs well the shape of the PDF as traced by its skewness and kurtosis; (iv) the geometry and topology of the reconstructed map are close to those of the initial map, as traced by the peak patch distribution and the skeleton's differential length; (v) the two-point statistics of the recovered map are consistent with the corresponding smoothed version of the initial map; (vi) the distribution of point sources is also consistent with the corresponding smoothing, with a significant improvement when the ℓ1−ℓ2 prior is applied. The contamination of B modes when realistic Galactic cuts are present is also investigated. Leakage mainly occurs on large scales. The non-linearities implemented in the model are significant on small scales near the peaks in the field.
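    The full spherical shear-to-convergence machinery is beyond an abstract, but the edge-preserving MAP idea in step (ii) can be caricatured on a flat-sky patch. In the toy sketch below the forward operator is reduced to a plain mask (pure inpainting within the cuts), the ℓ1−ℓ2 penalty is taken as φ(t) = √(1 + (t/δ)²) − 1 (ℓ2-like on small gradients, ℓ1-like on large ones, so edges survive), and all parameter values are illustrative assumptions, not the paper's.

    ```python
    import numpy as np

    def phi_prime(t, delta):
        """Derivative of the l1-l2 penalty phi(t) = sqrt(1 + (t/delta)^2) - 1."""
        return (t / delta**2) / np.sqrt(1.0 + (t / delta) ** 2)

    def map_inpaint(data, mask, lam=0.05, delta=0.1, step=0.2, n_iter=200):
        """Gradient descent on 0.5*||mask*(x - data)||^2 + lam * sum phi(grad x).

        mask is 1 where the map is observed and 0 inside the cuts, so the
        prior alone fills the masked region while edges survive elsewhere.
        """
        x = data * mask
        for _ in range(n_iter):
            fidelity = mask * (x - data)
            # Forward differences of the current estimate
            dx = np.diff(x, axis=1, append=x[:, -1:])
            dy = np.diff(x, axis=0, append=x[-1:, :])
            px, py = phi_prime(dx, delta), phi_prime(dy, delta)
            # Negative divergence (adjoint of the forward difference)
            div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
            x = x - step * (fidelity - lam * div)
        return x
    ```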

    Line detection as an inverse problem: application to lung ultrasound imaging


    Enhancing Real-time Embedded Image Processing Robustness on Reconfigurable Devices for Critical Applications

    Nowadays, image processing is increasingly used in several application fields, such as biomedical, aerospace, or automotive. Within these fields, image processing serves both non-critical and critical tasks. In automotive, for example, cameras are becoming key sensors for increasing car safety, driving assistance and driving comfort. They have been employed for infotainment (non-critical), as well as for driver-assistance tasks (critical) such as Forward Collision Avoidance, Intelligent Speed Control, or Pedestrian Detection. The complexity of these algorithms challenges real-time image processing systems, requiring high computing capacity that is usually not available in processors for embedded systems. Hardware acceleration is therefore crucial, and devices such as Field Programmable Gate Arrays (FPGAs) best fit the growing demand for computational capability. These devices can assist embedded processors by significantly speeding up computationally intensive software algorithms. Moreover, critical applications impose strict requirements not only on real-time behavior, but also on device reliability and algorithm robustness. Technology scaling is highlighting reliability problems related to aging phenomena and to the increasing sensitivity of digital devices to external radiation events, which can cause transient or even permanent faults. These faults can lead to wrong information being processed or, in the worst case, to a dangerous system failure. In this context, the reconfigurable nature of FPGA devices can be exploited to increase system reliability and robustness by leveraging Dynamic Partial Reconfiguration features.

    The research work presented in this thesis focuses on the development of techniques for implementing efficient and robust real-time embedded image processing hardware accelerators and systems for mission-critical applications. Three main challenges have been faced and will be discussed, along with the proposed solutions, throughout the thesis: (i) achieving real-time performance, (ii) enhancing algorithm robustness, and (iii) increasing overall system dependability. To ensure real-time performance, efficient FPGA-based hardware accelerators implementing selected image processing algorithms have been developed. The functionality offered by the target technology and the characteristics of each algorithm have been taken into account throughout the design of these accelerators, in order to efficiently tailor the algorithms' operations to the available hardware resources. The key idea for increasing the robustness of image processing algorithms, in turn, is to introduce self-adaptivity at the algorithm level, so as to maintain, or even improve, the quality of results over a wide range of input conditions that are not always fully predictable at design time (e.g., noise-level variations). This has been accomplished by measuring some characteristics of the input images at run-time and then tuning the algorithm parameters based on such estimates (a minimal sketch of this measure-then-tune loop follows this abstract). The dynamic reconfiguration features of modern FPGAs have been extensively exploited to integrate run-time adaptivity into the designed hardware accelerators. Tools and methodologies have also been developed to increase overall system dependability during reconfiguration processes, thus providing safe run-time adaptation mechanisms.

    In addition, taking into account the target technology and the environments in which the developed hardware accelerators and systems may be employed, dependability issues have been analyzed, leading to the development of a platform for quickly assessing the reliability, and characterizing the behavior, of hardware accelerators implemented on reconfigurable FPGAs when they are affected by such faults.
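    The measure-then-tune loop referenced above can be illustrated in a few lines of Python. The noise estimator, the thresholds and the configuration names are illustrative assumptions; in the thesis, the decision would select a partial bitstream and trigger Dynamic Partial Reconfiguration rather than return a string.

    ```python
    import numpy as np

    def estimate_noise_sigma(img):
        """Rough noise estimate from horizontal first differences (assumes
        additive i.i.d. Gaussian noise; MAD keeps real edges from inflating it)."""
        d = np.diff(img.astype(float), axis=1)
        mad = np.median(np.abs(d - np.median(d)))
        return 1.4826 * mad / np.sqrt(2.0)  # differencing doubles the noise variance

    def select_filter_config(sigma, thresholds=(2.0, 8.0)):
        """Map the measured noise level to one of three filter configurations,
        standing in for the partial bitstreams a reconfigurable FPGA would load."""
        if sigma < thresholds[0]:
            return "bypass"       # clean input: skip denoising entirely
        if sigma < thresholds[1]:
            return "gauss_3x3"    # moderate noise: light linear smoothing
        return "median_5x5"       # heavy noise: stronger nonlinear filtering

    def process_frame(img):
        """Self-adaptive loop: measure the input, then pick the parameters."""
        sigma = estimate_noise_sigma(img)
        return select_filter_config(sigma), sigma
    ```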

    High-Throughput Image Analysis of Zebrafish Models of Parkinson’s Disease


    Bayesian image restoration and bacteria detection in optical endomicroscopy

    Optical microscopy systems can be used to obtain high-resolution microscopic images of tissue cultures and ex vivo tissue samples. This imaging technique can be translated to in vivo, in situ applications by using optical fibres and miniature optics. Fibred optical endomicroscopy (OEM) can enable optical biopsy in organs inaccessible to any other imaging system, and hence can provide rapid and accurate diagnosis in a short time. The raw data the system produces are difficult to interpret, as they are modulated by the fibre-bundle pattern, producing what is called the "honeycomb effect". Moreover, the data are further degraded by the fibre-core cross-coupling problem. At the same time, there is an unmet clinical need for automatic tools that can help clinicians detect fluorescently labelled bacteria in distal lung images. The aim of this thesis is to develop advanced image processing algorithms that address the above-mentioned problems. First, we provide a statistical model for the fibre-core cross-coupling problem and for the sparse sampling by imaging fibre bundles (the honeycomb artefact), which are formulated here as a restoration problem for the first time in the literature. We then introduce a non-linear interpolation method, based on Gaussian process regression, in order to recover an interpretable scene from the deconvolved data. Second, we develop two bacteria detection algorithms, each with different characteristics. The first approach considers a joint formulation of the sparse coding and anomaly detection problems. The anomalies here are treated as candidate bacteria, which are annotated with the help of a trained clinician. Although this approach provides good detection performance and outperforms existing methods in the literature, the user has to carefully tune some crucial model parameters. Hence, we propose a more adaptive approach, for which a Bayesian framework is adopted. This approach not only outperforms the proposed supervised approach and existing methods in the literature, but also offers a computation time that competes with optimization-based methods.
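    As a sketch of the Gaussian-process interpolation step, fluorescence values known only at the fibre-core centers (after cross-coupling removal) can be regressed onto a regular pixel grid. scikit-learn stands in here for whatever implementation the thesis uses; the RBF kernel, length scale and noise level are assumptions.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def interpolate_cores(core_xy, core_vals, grid_shape, length_scale=3.0):
        """Reconstruct a regular image from per-core intensities via GP regression.

        core_xy:   (N, 2) fibre-core center coordinates, in pixel units
        core_vals: (N,) intensity recovered at each core (after cross-coupling removal)
        """
        kernel = RBF(length_scale=length_scale) + WhiteKernel(noise_level=1e-2)
        gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gpr.fit(core_xy, core_vals)
        # Evaluate the posterior mean on a regular pixel grid
        yy, xx = np.mgrid[0:grid_shape[0], 0:grid_shape[1]]
        grid = np.column_stack([xx.ravel(), yy.ravel()])
        return gpr.predict(grid).reshape(grid_shape)
    ```

    Note that an exact GP scales cubically with the number of cores, so a real bundle with tens of thousands of cores would call for local or sparse approximations; none of that is reproduced here.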