
    Dictionary Learning for Deblurring and Digital Zoom

    This paper proposes a novel approach to image deblurring and digital zooming using sparse local models of image appearance. These models, in which small image patches are represented as linear combinations of a few elements drawn from some large set (dictionary) of candidates, have proven well adapted to several image restoration tasks. A key to their success has been learning dictionaries adapted to the reconstruction of small image patches. Recent works have instead proposed to learn dictionaries that are not only adapted to data reconstruction, but also tuned for a specific task. We introduce here such an approach for deblurring and digital zoom, using pairs of blurry/sharp (or low-/high-resolution) images for training, as well as an effective stochastic gradient algorithm for solving the corresponding optimization task. Although this learning problem is not convex, once the dictionaries have been learned, the sharp/high-resolution image can be recovered via convex optimization at test time. Experiments with synthetic and real data demonstrate the effectiveness of the proposed approach, leading to state-of-the-art performance for non-blind image deblurring and digital zoom.
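    As a rough illustration of the sparse-coding step described above (not the paper's task-tuned training procedure), the sketch below codes a blurry patch over one dictionary with an ISTA solver for the Lasso and reconstructs a sharp estimate with a coupled dictionary; the dictionaries and the patch are random stand-ins.

```python
import numpy as np

def ista(D, y, lam=0.1, step=None, n_iter=200):
    """Solve min_a 0.5*||y - D a||^2 + lam*||a||_1 with ISTA (illustrative)."""
    if step is None:
        step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)
        a = a - step * grad
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft threshold
    return a

# Toy coupled dictionaries: D_blur codes blurry patches, D_sharp reconstructs sharp ones.
rng = np.random.default_rng(0)
D_blur = rng.standard_normal((64, 256))           # 8x8 blurry-patch dictionary (hypothetical)
D_blur /= np.linalg.norm(D_blur, axis=0)
D_sharp = rng.standard_normal((64, 256))          # coupled sharp-patch dictionary (hypothetical)
y = rng.standard_normal(64)                       # vectorised blurry patch
alpha = ista(D_blur, y, lam=0.05)                 # sparse code of the blurry patch
x_sharp = D_sharp @ alpha                         # sharp-patch estimate from the shared code
```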

    PRAT: PRofiling Adversarial aTtacks

    The intrinsic susceptibility of deep learning to adversarial examples has led to a plethora of attack techniques with the broad common objective of fooling deep models. However, we find slight compositional differences between the algorithms that achieve this objective. These differences leave traces that provide important clues for attacker profiling in real-life scenarios. Inspired by this, we introduce the novel problem of PRofiling Adversarial aTtacks (PRAT). Given an adversarial example, the objective of PRAT is to identify the attack used to generate it. Under this perspective, we can systematically group existing attacks into different families, leading to the sub-problem of attack family identification, which we also study. To enable PRAT analysis, we introduce a large Adversarial Identification Dataset (AID), comprising over 180k adversarial samples generated with 13 popular attacks in image-specific/agnostic, white-/black-box setups. We use AID to devise a novel framework for the PRAT objective. Our framework utilizes a Transformer-based Global-LOcal Feature (GLOF) module to extract an approximate signature of the adversarial attack, which in turn is used to identify the attack. Using AID and our framework, we provide multiple interesting benchmark results for the PRAT problem.
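    The GLOF architecture itself is not reproduced here; as a loose sketch of the PRAT setup, the toy PyTorch classifier below maps an adversarial image to one of the 13 attack labels, using a blur-residual as a crude stand-in for the attack signature. The layer sizes and the residual heuristic are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

N_ATTACKS = 13  # number of attacks in AID, per the abstract

class AttackProfiler(nn.Module):
    """Toy classifier mapping an adversarial image to an attack label.
    Stand-in for the paper's Transformer-based GLOF module."""
    def __init__(self, n_classes=N_ATTACKS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x_adv):
        # High-frequency residual as a crude attack "signature":
        # subtract a blurred copy so the perturbation pattern dominates.
        blur = nn.functional.avg_pool2d(x_adv, 3, stride=1, padding=1)
        sig = x_adv - blur
        return self.head(self.features(sig).flatten(1))

model = AttackProfiler()
logits = model(torch.randn(4, 3, 224, 224))   # batch of adversarial images
pred_attack = logits.argmax(dim=1)            # predicted attack index per image
```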

    A Parallel Inertial Proximal Optimization Method

    The Douglas-Rachford algorithm is a popular iterative method for finding a zero of a sum of two maximal monotone operators defined on a Hilbert space. In this paper, we propose an extension of this algorithm that includes inertia parameters, and we develop parallel versions to handle the sum of an arbitrary number of maximal monotone operators. Based on this algorithm, parallel proximal algorithms are proposed to minimize, over a linear subspace of a Hilbert space, the sum of a finite number of proper, lower semicontinuous convex functions composed with linear operators. It is shown that particular cases of these methods are the simultaneous direction method of multipliers proposed by Setzer et al., the parallel proximal algorithm developed by Combettes and Pesquet, and a parallelized version of an algorithm proposed by Attouch and Soueycatt.
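    For orientation, the sketch below runs a plain Douglas-Rachford splitting on a two-term toy problem (a quadratic data term plus an l1 penalty, both with closed-form proximity operators) and adds a simple inertial extrapolation on the governing sequence; the paper's parallel, product-space formulation for an arbitrary number of operators is not reproduced, and the inertia parameter here is an illustrative choice.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: prox of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_douglas_rachford(b, lam=0.5, gamma=1.0, alpha=0.2, n_iter=300):
    """Minimise 0.5*||x - b||^2 + lam*||x||_1 with Douglas-Rachford splitting
    plus a simple inertial (momentum) term on the governing sequence z.
    Illustrative only; not the parallel scheme of the paper."""
    z = np.zeros_like(b)
    z_prev = z.copy()
    for _ in range(n_iter):
        z_ext = z + alpha * (z - z_prev)            # inertial extrapolation
        x = (z_ext + gamma * b) / (1.0 + gamma)     # prox of 0.5*||. - b||^2
        y = soft(2 * x - z_ext, gamma * lam)        # prox of lam*||.||_1
        z_prev, z = z, z_ext + (y - x)              # Douglas-Rachford update
    return x

x_hat = inertial_douglas_rachford(np.array([3.0, -0.2, 0.8, -2.5]))
```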

    A Variational Approach to Image Restoration Problems (영상 복원 문제의 변분법적 접근)

    Doctoral dissertation (Ph.D.), Department of Mathematical Sciences, Graduate School of Seoul National University, February 2013; advisor: Myungjoo Kang. Image restoration has been an active research area in image processing and computer vision during the past several decades. We explore variational partial differential equation (PDE) models for the image restoration problem. We start our discussion by reviewing classical models, which strongly motivate the work in this dissertation. The content of the dissertation is divided into two main subjects. The first topic is image denoising, where we propose a non-convex hybrid total variation model and apply an iteratively reweighted algorithm to solve it. The second topic is image decomposition, in which we separate an image into a structural component and an oscillatory component using a local gradient constraint. Contents: 1 Introduction; 2 Previous works (image denoising: fundamental, higher-order, hybrid, and non-convex models; image decomposition: Meyer's model, nonlinear filters); 3 Non-convex hybrid TV for image denoising (proposed model, iteratively reweighted hybrid total variation algorithm, numerical experiments and comparisons, including with Krishnan et al. and the state of the art); 4 Image decomposition (local gradient constraint, texture estimator, proposed model with anisotropic TV-L2, isotropic TV-L2, and isotropic TV-L1 algorithms, numerical experiments); 5 Conclusion and future works.
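    As a minimal illustration of the iteratively reweighted idea (not the dissertation's hybrid first- plus second-order model), the sketch below denoises a 1D signal with a non-convex TV penalty sum |Du_i|^p, p < 1, by repeatedly solving a reweighted quadratic subproblem; the exponent, weights, and smoothing epsilon are illustrative choices.

```python
import numpy as np

def irls_nonconvex_tv_1d(f, lam=1.0, p=0.5, eps=1e-3, n_iter=30):
    """Denoise a 1D signal with a non-convex TV penalty sum |Du_i|^p (p < 1),
    solved by iteratively reweighted least squares. Illustrative sketch only."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)                # forward-difference operator (n-1, n)
    u = f.copy()
    for _ in range(n_iter):
        g = D @ u
        w = p * (np.abs(g) + eps) ** (p - 2)      # weights of the quadratic surrogate
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        u = np.linalg.solve(A, f)                 # solve the reweighted subproblem
    return u

rng = np.random.default_rng(1)
noisy = np.sign(np.sin(np.linspace(0, 6, 200))) + 0.2 * rng.standard_normal(200)
denoised = irls_nonconvex_tv_1d(noisy, lam=2.0)
```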

    Sparse and Redundant Representations for Inverse Problems and Recognition

    Sparse and redundant representation of data enables the description of signals as linear combinations of a few atoms from a dictionary. In this dissertation, we study applications of sparse and redundant representations in inverse problems and object recognition. Furthermore, we propose two novel imaging modalities based on the recently introduced theory of Compressed Sensing (CS). This dissertation consists of four major parts. In the first part of the dissertation, we study a new type of deconvolution algorithm that is based on estimating the image from a shearlet decomposition. Shearlets provide a multi-directional and multi-scale decomposition that has been mathematically shown to represent distributed discontinuities such as edges better than traditional wavelets. We develop a deconvolution algorithm that allows the approximate inversion operator to be controlled on a multi-scale and multi-directional basis. Furthermore, we develop a method for the automatic determination of the threshold values for the noise shrinkage at each scale and direction, without explicit knowledge of the noise variance, using a generalized cross-validation method. In the second part of the dissertation, we study a reconstruction method that recovers highly undersampled images, assumed to have a sparse representation in a gradient domain, from partial measurement samples collected in the Fourier domain. Our method makes use of a robust generalized Poisson solver that greatly aids in achieving significantly improved performance over similar methods. We demonstrate by experiments that this new technique is more flexible than its competitors in handling both random and restricted sampling scenarios. In the third part of the dissertation, we introduce a novel Synthetic Aperture Radar (SAR) imaging modality which can provide a high-resolution map of the spatial distribution of targets and terrain using a significantly reduced number of transmitted and/or received electromagnetic waveforms. We demonstrate that this new imaging scheme requires no new hardware components and allows the aperture to be compressed. It also offers many new applications and advantages, including strong resistance to countermeasures and interception, imaging of much wider swaths, and reduced on-board storage requirements. The last part of the dissertation deals with object recognition based on learning dictionaries for simultaneous sparse signal approximation and feature extraction. A dictionary is learned for each object class from the given training examples by minimizing the representation error under a sparseness constraint. A novel test image is then projected onto the span of the atoms in each learned dictionary. The residual vectors, along with the coefficients, are then used for recognition. Applications to illumination-robust face recognition and automatic target recognition are presented.
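    The recognition scheme in the last part can be pictured with the sketch below: given per-class dictionaries (random stand-ins here, where the real ones would be learned with a sparseness constraint), a test sample is projected onto the span of each class's atoms and assigned to the class with the smallest reconstruction residual. The dictionary names and sizes are hypothetical.

```python
import numpy as np

def classify_by_residual(x, class_dicts):
    """Project a test sample onto the span of each class dictionary (least squares)
    and return the class with the smallest reconstruction residual.
    Minimal sketch of residual-based recognition; sparsity constraints omitted."""
    best_class, best_res, coeffs = None, np.inf, {}
    for label, D in class_dicts.items():
        a, *_ = np.linalg.lstsq(D, x, rcond=None)   # coefficients over the class atoms
        res = np.linalg.norm(x - D @ a)             # reconstruction residual
        coeffs[label] = a
        if res < best_res:
            best_class, best_res = label, res
    return best_class, coeffs

rng = np.random.default_rng(0)
dicts = {c: rng.standard_normal((100, 15)) for c in ("face_A", "face_B")}  # hypothetical learned dictionaries
test = dicts["face_B"] @ rng.standard_normal(15) + 0.05 * rng.standard_normal(100)
label, _ = classify_by_residual(test, dicts)        # expected: "face_B"
```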

    Real-time Ultrasound Signals Processing: Denoising and Super-resolution

    Ultrasound acquisition is widespread in the biomedical field, due to its low cost, portability, and non-invasiveness for the patient. The processing and analysis of US signals, such as images, 2D videos, and volumetric images, allow the physician to monitor the evolution of the patient's disease and support diagnosis and treatment (e.g., surgery). US images are affected by speckle noise, generated by the overlap of US waves. Furthermore, low-resolution images are acquired when a high acquisition frequency is applied to accurately characterise the behaviour of anatomical features that change quickly over time. Denoising and super-resolution of US signals are relevant to improve the visual evaluation of the physician and the performance and accuracy of processing methods, such as segmentation and classification. The main requirements for the processing and analysis of US signals are real-time execution, preservation of anatomical features, and reduction of artefacts. In this context, we present a novel framework for the real-time denoising of US 2D images based on deep learning and high-performance computing, which reduces noise while preserving anatomical features under real-time execution. We extend our framework to the denoising of arbitrary US signals, such as 2D videos and 3D images, and we incorporate denoising algorithms that account for spatio-temporal signal properties into an image-to-image deep learning model. As a building block of this framework, we propose a novel denoising method belonging to the class of low-rank approximations, which learns and predicts the optimal thresholds of the Singular Value Decomposition. While previous denoising work trades off computational cost against effectiveness, the proposed framework matches the results of the best denoising algorithms in terms of noise removal, anatomical feature preservation, and conservation of geometric and texture properties, in a real-time execution that respects industrial constraints. The framework reduces artefacts (e.g., blurring) and preserves the spatio-temporal consistency among frames/slices; it is also general with respect to the denoising algorithm, the anatomical district, and the noise intensity. We then introduce a novel framework for the real-time reconstruction of non-acquired scan lines through an interpolating method; a deep learning model improves the results of the interpolation to match the target (i.e., high-resolution) image. We improve the accuracy of the prediction of the reconstructed lines through the design of the network architecture and the loss function. In the context of signal approximation, we introduce our kernel-based sampling method for the reconstruction of 2D and 3D signals defined on regular and irregular grids, with an application to US 2D and 3D images. Our method improves on previous work in terms of sampling quality, approximation accuracy, and geometry reconstruction, at a slightly higher computational cost. For both denoising and super-resolution, we evaluate compliance with the real-time requirement of US applications in the medical domain and provide a quantitative evaluation of denoising and super-resolution methods on US and synthetic images. Finally, we discuss the role of denoising and super-resolution as pre-processing steps for segmentation and predictive analysis of breast pathologies.
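    The SVD-thresholding building block can be pictured with the toy sketch below: a stack of similar patches is denoised by soft-thresholding its singular values. In the proposed framework the thresholds would be predicted by a learned model; here the threshold is a hand-picked constant, and the data are synthetic stand-ins.

```python
import numpy as np

def svd_threshold_denoise(patches, tau):
    """Low-rank denoising of a matrix of similar patches by soft-thresholding
    its singular values. Illustrative: tau is fixed by hand rather than learned."""
    U, s, Vt = np.linalg.svd(patches, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)           # soft threshold on the singular values
    return (U * s_thr) @ Vt                    # low-rank reconstruction

rng = np.random.default_rng(0)
clean = np.outer(rng.standard_normal(64), rng.standard_normal(32))  # rank-1 "patch stack"
noisy = clean + 0.3 * rng.standard_normal(clean.shape)              # Gaussian stand-in for speckle
denoised = svd_threshold_denoise(noisy, tau=2.0)
```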