
    Dimensionality reduction and sparse representations in computer vision

    The proliferation of camera-equipped devices, such as netbooks, smartphones and game stations, has led to a significant increase in the production of visual content. This visual information could be used for understanding the environment and offering a natural interface between users and their surroundings. However, the massive amounts of data and the high computational cost associated with them encumber the transfer of sophisticated vision algorithms to real-life systems, especially ones with resource limitations such as restricted memory, processing power and bandwidth. One approach for tackling these issues is to generate compact and descriptive representations of image data by exploiting inherent redundancies. We propose the investigation of dimensionality reduction and sparse representations in order to accomplish this task. In dimensionality reduction, the aim is to reduce the dimensions of the space in which image data reside, in order to allow resource-constrained systems to handle them and, ideally, to provide a more insightful description. This goal is achieved by exploiting the inherent redundancies that many classes of images, such as faces under different illumination conditions and objects from different viewpoints, exhibit. We explore the description of natural images by low-dimensional non-linear models called image manifolds and investigate the performance of computer vision tasks such as recognition and classification using these low-dimensional models. In addition to dimensionality reduction, we study a novel approach that represents images as a sparse linear combination of dictionary examples. We investigate how sparse image representations can be used for a variety of tasks, including low-level image modeling and higher-level semantic information extraction. Using tools from dimensionality reduction and sparse representation, we propose the application of these methods in three hierarchical image layers, namely low-level features, mid-level structures and high-level attributes. Low-level features are image descriptors that can be extracted directly from the raw image pixels and include pixel intensities, histograms, and gradients. In the first part of this work, we explore how various techniques in dimensionality reduction, ranging from traditional image compression to the recently proposed Random Projections method, affect the performance of computer vision algorithms such as face detection and face recognition. In addition, we discuss a method that is able to increase the spatial resolution of a single image, without using any training examples, according to the sparse representations framework. In the second part, we explore mid-level structures, including image manifolds and sparse models, which are produced by abstracting information from low-level features and offer compact modeling of high-dimensional data. We propose novel techniques for generating more descriptive image representations and investigate their application in face recognition and object tracking. In the third part of this work, we propose the investigation of a novel framework for representing the semantic contents of images. This framework employs high-level semantic attributes that aim to bridge the gap between the visual information of an image and its textual description by utilizing low-level features and mid-level structures. This paradigm offers novel possibilities, including recognizing the category of an object from purely textual information, without any explicit visual example.
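
As a concrete illustration of the dimensionality reduction ideas above, the sketch below applies a Gaussian Random Projections step to vectorized face images and performs nearest-neighbour recognition in the reduced space. This is a minimal, generic example of the technique named in the abstract, not code from the thesis; the dimensions, the 1-NN classifier and all function names are illustrative assumptions.

```python
import numpy as np

def random_projection_matrix(d_orig, d_low, seed=0):
    """Gaussian random projection (Johnson-Lindenstrauss style).

    Entries are drawn i.i.d. from N(0, 1/d_low) so that pairwise
    distances are approximately preserved in expectation.
    """
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, 1.0 / np.sqrt(d_low), size=(d_low, d_orig))

def project(images, R):
    """Project vectorized images (n_samples x d_orig) down to d_low dimensions."""
    return images @ R.T

def nearest_neighbor_classify(train_low, train_labels, query_low):
    """1-NN recognition in the reduced space."""
    dists = np.linalg.norm(train_low - query_low, axis=1)
    return train_labels[np.argmin(dists)]

# Toy usage: 100 "face" vectors of 64x64 pixels reduced to 128 dimensions.
X = np.random.rand(100, 64 * 64)
y = np.arange(100)
R = random_projection_matrix(64 * 64, 128)
X_low = project(X, R)
print(nearest_neighbor_classify(X_low, y, project(X[:1], R)[0]))
```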

    Recent Progress in Image Deblurring

    This paper comprehensively reviews the recent development of image deblurring, including non-blind/blind and spatially invariant/variant deblurring techniques. These techniques share the same objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques are additionally required to estimate an accurate blur kernel. Considering the critical role of image restoration in modern imaging systems, which must provide high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how to handle the ill-posedness, which is a crucial issue in deblurring tasks, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. In spite of a certain level of progress, image deblurring, especially the blind case, is limited in its success by complex application conditions that make the blur kernel hard to obtain and often spatially variant. We provide a holistic understanding and deep insight into image deblurring in this review. An analysis of the empirical evidence for representative methods, practical issues, and a discussion of promising future directions are also presented.
    Comment: 53 pages, 17 figures
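
As an illustration of the non-blind, spatially invariant case discussed in the review, the sketch below implements the classic Richardson-Lucy iteration, one representative of the Bayesian inference framework, for a known blur kernel. It is a generic example rather than any specific method surveyed in the paper; the motion-blur kernel, iteration count and image sizes are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, num_iter=30, eps=1e-12):
    """Non-blind deblurring with a known point-spread function (PSF).

    Multiplicative update derived from a Poisson noise model:
        x_{k+1} = x_k * ( K^T conv ( y / (K conv x_k) ) )
    """
    estimate = np.full_like(blurred, 0.5)
    psf_flipped = psf[::-1, ::-1]
    for _ in range(num_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate

# Toy usage: horizontal motion blur of length 9 on a random image in [0, 1].
psf = np.zeros((9, 9))
psf[4, :] = 1.0 / 9.0
sharp = np.random.rand(128, 128)
blurry = fftconvolve(sharp, psf, mode="same")
restored = richardson_lucy(blurry, psf, num_iter=50)
```

A blind variant would additionally alternate between updating the latent image and re-estimating the PSF, which is exactly where the ill-posedness the review discusses becomes severe.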

    Compressed Sensing in Resource-Constrained Environments: From Sensing Mechanism Design to Recovery Algorithms

    Compressed Sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for reconstruction. CS is therefore promising for environments where the signal acquisition process is extremely difficult or costly, e.g., a resource-constrained environment like the smartphone platform, or a band-limited environment like visual sensor networks (VSNs). Several challenges arise when performing sensing on these platforms, including the need for active user involvement, computational and storage limitations, and limited transmission capabilities. This dissertation focuses on the study of CS in resource-constrained environments. First, we address the problem of how to design sensing mechanisms that better adapt to the resource-limited smartphone platform. We propose the compressed phone sensing (CPS) framework, in which two challenging issues are studied: the energy drainage caused by continuous sensing, which may impede the normal functionality of the smartphone, and the requirement for active user input during data collection, which may place a high burden on the user. Second, we propose a CS reconstruction algorithm to be used in VSNs for the recovery of frames/images. An efficient algorithm, NonLocal Douglas-Rachford (NLDR), is developed. NLDR takes advantage of self-similarity in images using nonlocal means (NL) filtering. We further formulate the nonlocal estimation as a low-rank matrix approximation problem and solve the constrained optimization problem using the Douglas-Rachford splitting method. Third, we extend the NLDR algorithm to surveillance video processing in VSNs and propose the recursive Low-rank and Sparse estimation through Douglas-Rachford splitting (rLSDR) method, which recovers each video frame as a low-rank background component and a sparse component corresponding to the moving objects. The spatial and temporal low-rank features of the video frames, e.g., the nonlocal similar patches within a single video frame and the low-rank background component residing in multiple frames, are successfully exploited.
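
The sketch below shows the generic CS recovery step that the dissertation builds on: reconstructing a sparse signal from a small set of linear projections via iterative soft-thresholding (ISTA) on the l1-regularized least-squares problem. It is not the proposed NLDR or rLSDR algorithm; the measurement matrix, sparsity level and regularization weight are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(y, Phi, lam=0.05, num_iter=500):
    """Recover a sparse x from compressed measurements y = Phi @ x.

    Solves min_x 0.5 * ||y - Phi x||_2^2 + lam * ||x||_1 by iterative
    soft-thresholding with step size 1/L, where L is the largest
    eigenvalue of Phi^T Phi (Lipschitz constant of the gradient).
    """
    L = np.linalg.norm(Phi, 2) ** 2
    x = np.zeros(Phi.shape[1])
    for _ in range(num_iter):
        grad = Phi.T @ (Phi @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy usage: a 20-sparse signal of length 512 sensed with 128 Gaussian projections.
rng = np.random.default_rng(0)
x_true = np.zeros(512)
x_true[rng.choice(512, 20, replace=False)] = rng.normal(size=20)
Phi = rng.normal(size=(128, 512)) / np.sqrt(128)
y = Phi @ x_true
x_hat = ista(y, Phi)
print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```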

    Patch-based Denoising Algorithms for Single and Multi-view Images

    Single and multi-view digital images are captured using sensors and are often contaminated with noise, an undesired random signal. Such noise can also be produced during transmission or by lossy image compression. Reducing the noise and enhancing these images is among the fundamental digital image processing tasks, and improving the performance of image denoising methods would greatly benefit single- and multi-view image processing techniques such as segmentation and the computation of disparity maps. Patch-based denoising methods have recently emerged as the state-of-the-art denoising approaches for various additive noise levels. This thesis proposes two patch-based denoising methods, for single and multi-view images respectively. For single image denoising, a modification to the block matching 3D (BM3D) algorithm is proposed: an adaptive collaborative thresholding filter consisting of a classification map and a set of thresholding levels and operators, which are exploited when the collaborative hard-thresholding step is applied. Moreover, the collaborative Wiener filtering is improved by assigning greater weight when dealing with similar patches. For the denoising of multi-view images, this thesis proposes algorithms that take a pair of noisy images captured from two different directions at the same time (stereoscopic images). The structural similarity, maximum difference, or singular value decomposition-based similarity metric is utilized for identifying the locations of similar search windows in the input images, and the non-local means algorithm is adapted for filtering these noisy multi-view images. The performance of both methods has been evaluated quantitatively and qualitatively through a number of experiments using the peak signal-to-noise ratio and the mean structural similarity measure. Experimental results show that the proposed algorithm for single image denoising outperforms the original block matching 3D algorithm at various noise levels, and that the proposed algorithm for multi-view image denoising can effectively reduce noise and help estimate more accurate disparity maps at various noise levels.
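
For context, the sketch below implements the baseline collaborative hard-thresholding step of BM3D-style patch-based denoising that the thesis modifies: similar patches are grouped, jointly transformed, hard-thresholded and inverse-transformed. It is not the proposed adaptive filter with its classification map; the patch size, search window and threshold factor are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def group_similar_patches(image, ref_yx, patch=8, search=16, n_similar=16):
    """Collect the n_similar patches closest (in L2 distance) to the reference patch."""
    y0, x0 = ref_yx
    ref = image[y0:y0 + patch, x0:x0 + patch]
    candidates = []
    for y in range(max(0, y0 - search), min(image.shape[0] - patch, y0 + search)):
        for x in range(max(0, x0 - search), min(image.shape[1] - patch, x0 + search)):
            blk = image[y:y + patch, x:x + patch]
            candidates.append((np.sum((blk - ref) ** 2), blk))
    candidates.sort(key=lambda c: c[0])
    return np.stack([blk for _, blk in candidates[:n_similar]])

def collaborative_hard_threshold(group, sigma, lam=2.7):
    """3D transform of the patch stack, hard thresholding, inverse transform."""
    coeffs = dctn(group, norm="ortho")
    coeffs[np.abs(coeffs) < lam * sigma] = 0.0   # discard coefficients buried in noise
    return idctn(coeffs, norm="ortho")

# Toy usage on a noisy random image (noise level sigma assumed known).
sigma = 0.1
noisy = np.random.rand(64, 64) + sigma * np.random.randn(64, 64)
group = group_similar_patches(noisy, ref_yx=(20, 20))
filtered = collaborative_hard_threshold(group, sigma)
```

In BM3D proper, the filtered patches are then aggregated back into the image with per-group weights, followed by the collaborative Wiener filtering pass mentioned in the abstract; both steps are omitted here for brevity.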

    Super-resolution: A comprehensive survey
