
    Single-Image Deraining via Recurrent Residual Multiscale Networks.

    Existing deraining approaches represent rain streaks with different rain layers and then separate the layers from the background image. However, because of the complexity of real-world rain, such as the various densities, shapes, and directions of rain streaks, it is very difficult to decompose a rain image into clean background and rain layers. In this article, we develop a novel single-image deraining method based on a residual multiscale pyramid to mitigate the difficulty of rain image decomposition. To be specific, we progressively remove rain streaks in a coarse-to-fine fashion, where heavy rain is first removed at coarse-resolution levels and then light rain is eliminated at fine-resolution levels. Furthermore, based on the observation that the residuals between a restored image and its corresponding rain image give critical clues about rain streaks, we regard the residuals as an attention map to remove rain in the subsequent finer-level image. To achieve a powerful yet compact deraining framework, we construct our network with recurrent layers and remove rain with the same network at different pyramid levels. In addition, we design a multiscale kernel selection network (MSKSN) that enables our single network to remove rain streaks at different levels. In this manner, we reduce the model parameters by 81% without decreasing deraining performance compared with our prior work. Extensive experimental results on widely used benchmarks show that our approach achieves superior deraining performance compared with the state of the art.
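    A minimal sketch of the coarse-to-fine, recurrent idea described in this abstract: one shared network is applied at every pyramid level, and the residual from the previous level is reused as an attention map at the next, finer level. The module names and layer choices below are illustrative placeholders, not the authors' released architecture.

```python
# Sketch only: shared recurrent deraining block applied coarse-to-fine,
# with the previous level's residual reused as an attention map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedDerainNet(nn.Module):
    """Placeholder block reused across pyramid levels (not the paper's net)."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, rainy, attention):
        # Concatenate the rainy image with the residual-based attention map,
        # predict the rain layer, and subtract it from the input.
        x = torch.cat([rainy, attention], dim=1)
        return rainy - self.body(x)

def coarse_to_fine_derain(rainy, net, levels=3):
    """Apply the same network from the coarsest to the finest resolution."""
    pyramid = [F.interpolate(rainy, scale_factor=1 / 2 ** i,
                             mode="bilinear", align_corners=False)
               for i in reversed(range(levels))]       # coarse -> fine
    attention = torch.zeros_like(pyramid[0][:, :1])    # no rain clue yet
    restored = pyramid[0]
    for level in pyramid:
        attention = F.interpolate(attention, size=level.shape[-2:],
                                  mode="bilinear", align_corners=False)
        restored = net(level, attention)
        # Residual between restored and rainy image hints where rain was.
        attention = (level - restored).abs().mean(dim=1, keepdim=True)
    return restored

if __name__ == "__main__":
    out = coarse_to_fine_derain(torch.rand(1, 3, 64, 64), SharedDerainNet())
    print(out.shape)  # torch.Size([1, 3, 64, 64])
```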

    Robust Subspace Learning: Robust PCA, Robust Subspace Tracking, and Robust Subspace Recovery

    PCA is one of the most widely used dimension reduction techniques. A related, easier problem is "subspace learning" or "subspace estimation". Given relatively clean data, both are easily solved via singular value decomposition (SVD). The problem of subspace learning or PCA in the presence of outliers is called robust subspace learning or robust PCA (RPCA). For long data sequences, if one tries to use a single lower-dimensional subspace to represent the data, the required subspace dimension may end up being quite large. For such data, a better model is to assume that it lies in a low-dimensional subspace that can change over time, albeit gradually. The problem of tracking such data (and the subspaces) while being robust to outliers is called robust subspace tracking (RST). This article provides a magazine-style overview of the entire field of robust subspace learning and tracking. In particular, solutions for three problems are discussed in detail: RPCA via sparse+low-rank matrix decomposition (S+LR), RST via S+LR, and "robust subspace recovery" (RSR). RSR assumes that an entire data vector is either an outlier or an inlier. The S+LR formulation instead assumes that outliers occur on only a few data vector indices and hence are well modeled as sparse corruptions. Comment: To appear, IEEE Signal Processing Magazine, July 201
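    For concreteness, the S+LR formulation mentioned above is commonly posed as principal component pursuit; the convex program below is the standard textbook form rather than any particular algorithm from the article.

```latex
% Principal component pursuit (S+LR): split the data matrix M into a
% low-rank component L (the subspace part) and a sparse component S
% (the outlier corruptions). \|L\|_* is the nuclear norm, \|S\|_1 the
% entrywise l1 norm, and \lambda a weighting parameter; a common choice
% is \lambda = 1/\sqrt{\max(m, n)} for an m x n matrix M.
\min_{L,\,S}\ \|L\|_{*} + \lambda \|S\|_{1}
\quad \text{subject to} \quad L + S = M .
```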

    On Improving Generalization of CNN-Based Image Classification with Delineation Maps Using the CORF Push-Pull Inhibition Operator

    Deployed image classification pipelines typically operate on images captured in real-world environments. This means that the images may be affected by different sources of perturbation (e.g. sensor noise in low-light environments). The main challenge arises from the fact that image quality directly impacts the reliability and consistency of classification tasks. This challenge has therefore attracted wide interest within the computer vision community. We propose a transformation step that attempts to enhance the generalization ability of CNN models in the presence of unseen noise in the test set. Concretely, the delineation maps of given images are determined using the CORF push-pull inhibition operator. This operation transforms an input image into a space that is more robust to noise before it is processed by a CNN. We evaluated our approach on the Fashion MNIST data set with an AlexNet model. It turned out that the proposed CORF-augmented pipeline achieved results on noise-free images comparable to those of a conventional AlexNet classification model without CORF delineation maps, but consistently achieved significantly superior performance on test images perturbed with different levels of Gaussian and uniform noise.
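    A minimal sketch of the pipeline shape this abstract describes: transform every image into a contour-like delineation map before handing it to the CNN. The CORF push-pull inhibition operator itself is not reimplemented here; a plain Sobel edge map is used purely as a stand-in, and the helper names are illustrative.

```python
# Sketch only: preprocess images into delineation-style maps before the CNN.
# `delineation_map` uses a Sobel edge map as a stand-in for the CORF operator.
import numpy as np
from skimage.filters import sobel

def delineation_map(image: np.ndarray) -> np.ndarray:
    """Stand-in for the CORF delineation step: image -> contour-like map."""
    return sobel(image.astype(np.float32))

def preprocess_batch(images: np.ndarray) -> np.ndarray:
    """Transform every image before it is passed to the classifier."""
    return np.stack([delineation_map(im) for im in images])

# Usage idea: train and evaluate the CNN on preprocess_batch(x) instead of x,
# so that test-time noise is attenuated before classification.
```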

    Video content analysis for intelligent forensics

    The networks of surveillance cameras installed in public places and on private property continuously record video data with the aim of detecting and preventing unlawful activities. This enhances the importance of video content analysis applications, either for real-time (i.e. analytic) or post-event (i.e. forensic) analysis. In this thesis, the primary focus is on four key aspects of video content analysis, namely: (1) moving object detection and recognition; (2) correction of colours in video frames and recognition of the colours of moving objects; (3) make and model recognition of vehicles and identification of their type; and (4) detection and recognition of text information in outdoor scenes. To address the first issue, a framework is presented in the first part of the thesis that efficiently detects and recognizes moving objects in videos. The framework targets the problem of object detection in the presence of a complex background. The object detection part of the framework relies on a background modelling technique and a novel post-processing step in which the contours of the foreground regions (i.e. moving objects) are refined by classifying edge segments as belonging either to the background or to the foreground region. Further, a novel feature descriptor is devised for the classification of moving objects into humans, vehicles and background. The proposed feature descriptor captures the texture information present in the silhouettes of foreground objects. To address the second issue, a framework for the correction and recognition of the true colours of objects in videos is presented, with novel noise reduction, colour enhancement and colour recognition stages. The colour recognition stage makes use of temporal information to reliably recognize the true colours of moving objects across multiple frames. The proposed framework is specifically designed to perform robustly on videos of poor quality caused by surrounding illumination, camera sensor imperfections and artefacts due to high compression. In the third part of the thesis, a framework for vehicle make and model recognition and type identification is presented. As part of this work, a novel feature representation technique for the distinctive representation of vehicle images has emerged. The technique uses dense feature description and a mid-level feature encoding scheme to capture the texture in the frontal view of vehicles. The proposed method is insensitive to minor in-plane rotation and skew within the image. The framework can also be extended to any number of vehicle classes without re-training. Another important contribution of this work is the publication of a comprehensive, up-to-date dataset of vehicle images to support future research in this domain. The problem of text detection and recognition in images is addressed in the last part of the thesis. A novel technique is proposed that exploits the colour information in the image to identify text regions. Apart from detection, the colour information is also used to segment characters within words. The recognition of the identified characters is performed using shape features and supervised learning. Finally, a lexicon-based alignment procedure is adopted to finalize the recognition of the strings present in word images. Extensive experiments have been conducted on benchmark datasets to analyse the performance of the proposed algorithms.
The results show that the proposed moving object detection and recognition technique outperformed well-known baseline techniques. The proposed framework for the correction and recognition of object colours in video frames achieved all the aforementioned goals. The performance analysis of the vehicle make and model recognition framework on multiple datasets has shown the strength and reliability of the technique when used in various scenarios. Finally, the experimental results for the text detection and recognition framework on benchmark datasets have revealed the potential of the proposed scheme for accurate detection and recognition of text in the wild.
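    A generic illustration of the first stage described in this abstract: background modelling to obtain a foreground mask, followed by contour extraction of the moving objects. This is not the thesis's exact algorithm (the edge-segment classification used to refine contours is omitted), and it assumes OpenCV 4.

```python
# Sketch only: MOG2 background subtraction plus contour extraction,
# standing in for the thesis's detection stage (OpenCV 4 API assumed).
import cv2

def moving_object_contours(video_path: str):
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                     # foreground mask
        mask = cv2.medianBlur(mask, 5)                     # suppress speckle
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadows
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # Keep only contours large enough to be plausible moving objects.
        yield frame, [c for c in contours if cv2.contourArea(c) > 100]
    cap.release()
```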

    Adaptive Methods for Robust Document Image Understanding

    A vast amount of digital document material is continuously being produced as part of major digitization efforts around the world. In this context, generic and efficient automatic solutions for document image understanding are a pressing necessity. We propose a generic framework for document image understanding systems, usable for practically any document type available in digital form. Following the introduced workflow, we shift our attention to each of the following processing stages in turn: quality assurance, image enhancement, color reduction and binarization, skew and orientation detection, page segmentation, and logical layout analysis. We review the state of the art in each area, identify current deficiencies, point out promising directions and give specific guidelines for future investigation. We address some of the identified issues by means of novel algorithmic solutions, putting special focus on generality, computational efficiency and the exploitation of all available sources of information. More specifically, we introduce the following original methods: fully automatic detection of color reference targets in digitized material, accurate foreground extraction from color historical documents, font enhancement for hot-metal typeset prints, a theoretically optimal solution to the document binarization problem from both the computational complexity and threshold selection points of view, layout-independent skew and orientation detection, a robust and versatile page segmentation method, a semi-automatic front page detection algorithm, and a complete framework for article segmentation in periodical publications. The proposed methods are experimentally evaluated on large datasets consisting of real-life heterogeneous document scans. The obtained results show that a document understanding system combining these modules is able to robustly process a wide variety of documents with good overall accuracy.
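    Two of the stages listed above, sketched with standard building blocks (Otsu thresholding and a Hough-transform skew estimate) rather than the thesis's own binarization and skew-detection algorithms; the function names are illustrative.

```python
# Sketch only: generic binarization and skew estimation for a document scan,
# not the thesis's optimal binarization or layout-independent skew method.
import numpy as np
import cv2

def binarize(gray: np.ndarray) -> np.ndarray:
    """Global Otsu binarization of an 8-bit grayscale document image."""
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

def estimate_skew_degrees(binary: np.ndarray) -> float:
    """Estimate page skew from the dominant line orientation."""
    edges = cv2.Canny(binary, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=200)
    if lines is None:
        return 0.0
    # Text lines are roughly horizontal, i.e. theta near 90 degrees.
    angles = [np.degrees(theta) - 90.0 for _, theta in lines[:, 0]]
    return float(np.median(angles))
```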

    Medical image synthesis using generative adversarial networks: towards photo-realistic image synthesis

    This work addresses photo-realism in synthetic images. We introduce a modified generative adversarial network, StencilGAN: a perceptually-aware generative adversarial network that synthesizes images based on overlaid labelled masks. This technique can be a promising solution to the scarcity of data resources in the healthcare sector.
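    A generic mask-conditioned generator, shown only to illustrate the idea of synthesizing an image from an overlaid labelled mask; it is not the StencilGAN architecture, whose details are not given in the abstract.

```python
# Sketch only: a toy generator that maps a one-hot label mask to an image.
import torch
import torch.nn as nn

class MaskConditionedGenerator(nn.Module):
    def __init__(self, mask_channels: int, out_channels: int = 3, ch: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(mask_channels, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, out_channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, label_mask: torch.Tensor) -> torch.Tensor:
        # One-hot label mask in, synthetic image scaled to [-1, 1] out.
        return self.net(label_mask)

# Example: a 5-class mask of size 128x128 -> a 3-channel synthetic image.
fake = MaskConditionedGenerator(mask_channels=5)(torch.rand(1, 5, 128, 128))
```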

    The development of generative Bayesian models for classification of cell images

    A generative model for shape recognition of biological cells in images is developed. The model is designed for analysing high-throughput screens, and is tested on a genome-wide morphology screen. The genome-wide morphology screen contains on the order of 10^4 images of fluorescently stained cells, with on the order of 10^2 cells per image. It was generated using automated techniques through knockdown of almost all putative genes in Drosophila melanogaster. A major step in the analysis of such a dataset is to classify cells into distinct classes: both phenotypic classes and cell cycle classes. However, the quantity of data produced presents a major time bottleneck for human analysis. Human analysis is also known to be subjective and variable. The development of a generalisable computational analysis tool is an important challenge for the field. Previously, cell morphology has been characterized by automated measurement of user-defined biological features, often specific to one dataset. These methods are surveyed and discussed. Here a more ambitious approach is pursued. A novel generalisable classification method, applicable to our images, is developed and implemented. The algorithm decomposes training images into constituent patches to build Bayesian models of cell classes. The model contains probability distributions which are learnt via the Expectation-Maximization (EM) algorithm. This provides a mechanism for comparing the similarity of the appearance of cell phenotypes. The method is evaluated by comparison with the results of Support Vector Machines on the task of performing binary classification. This work provides the basis for clustering large sets of cell images into biologically meaningful classes.
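    A simplified analogue of the approach above (not the thesis's exact model): decompose the training images of one class into patches and fit a probabilistic model of patch appearance with EM. Here a Gaussian mixture stands in for the class-conditional patch distribution, and a test image can then be scored under each class's model.

```python
# Sketch only: class-conditional patch models fitted by EM via a Gaussian
# mixture, standing in for the thesis's Bayesian patch-based cell models.
import numpy as np
from sklearn.mixture import GaussianMixture

def extract_patches(image: np.ndarray, size: int = 8) -> np.ndarray:
    """Cut a grayscale image into non-overlapping flattened patches."""
    h, w = image.shape
    patches = [image[r:r + size, c:c + size].ravel()
               for r in range(0, h - size + 1, size)
               for c in range(0, w - size + 1, size)]
    return np.asarray(patches, dtype=np.float64)

def fit_class_model(images, n_components: int = 10) -> GaussianMixture:
    """Fit one patch-appearance model per cell class; EM runs inside fit()."""
    data = np.vstack([extract_patches(im) for im in images])
    return GaussianMixture(n_components=n_components, covariance_type="diag",
                           random_state=0).fit(data)

def class_log_likelihood(model: GaussianMixture, image: np.ndarray) -> float:
    """Score a test image under a class model by summing patch log-likelihoods."""
    return float(model.score_samples(extract_patches(image)).sum())
```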

    The impact of pre- and post-image processing techniques on deep learning frameworks: A comprehensive review for digital pathology image analysis.

    Recently, deep learning frameworks have rapidly become the main methodology for analyzing medical images. Due to their powerful learning ability and advantages in dealing with complex patterns, deep learning algorithms are ideal for image analysis challenges, particularly in the field of digital pathology. The variety of image analysis tasks in the context of deep learning includes classification (e.g., healthy vs. cancerous tissue), detection (e.g., lymphocyte and mitosis counting), and segmentation (e.g., nucleus and gland segmentation). The majority of recent machine learning methods in digital pathology have a pre- and/or post-processing stage that is integrated with a deep neural network. These stages, based on traditional image processing methods, are employed to make the subsequent classification, detection, or segmentation problem easier to solve. Several studies have shown how the integration of pre- and post-processing methods within a deep learning pipeline can further increase the model's performance compared with the network by itself. The aim of this review is to provide an overview of the types of methods that are used within deep learning frameworks either to optimally prepare the input (pre-processing) or to improve the results of the network output (post-processing), focusing on digital pathology image analysis. Many of the techniques presented here, especially the post-processing methods, are not limited to digital pathology but can be extended to almost any image analysis field.
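    One common post-processing example of the kind such reviews survey (illustrative only, not taken from the review): turning a network's nucleus-probability map into labelled instances by thresholding, removing small objects, filling holes, and connected-component labelling.

```python
# Sketch only: typical post-processing of a segmentation network's output
# in digital pathology, using standard scipy/scikit-image operations.
import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage.morphology import remove_small_objects
from skimage.measure import label

def postprocess_probability_map(prob: np.ndarray,
                                threshold: float = 0.5,
                                min_size: int = 30) -> np.ndarray:
    mask = prob > threshold                               # binarize output
    mask = remove_small_objects(mask, min_size=min_size)  # drop spurious blobs
    mask = binary_fill_holes(mask)                        # close holes in nuclei
    return label(mask)                                    # one id per nucleus
```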