7,238 research outputs found
On the application of reservoir computing networks for noisy image recognition
Reservoir Computing Networks (RCNs) are a special type of single-layer recurrent neural network in which the input and recurrent connections are randomly generated and only the output weights are trained. Besides the ability to process temporal information, the key advantages of RCNs are easy training and robustness against noise. Recently, we introduced a simple strategy to tune the parameters of RCNs, and evaluation in the domain of noise-robust speech recognition proved that this method was effective. The aim of this work is to extend that study to the field of image processing, showing that the proposed parameter tuning procedure is equally valid there and confirming that RCNs are adept at temporal modeling and robust with respect to noise. In particular, we investigate the potential of RCNs to achieve competitive performance on the well-known MNIST dataset by following the aforementioned parameter optimization strategy. Moreover, we achieve good noise-robust recognition by using such a network to denoise images and supplying them to a recognizer trained solely on clean images. The experiments demonstrate that the proposed RCN-based handwritten digit recognizer achieves an error rate of 0.81 percent on the clean test data of the MNIST benchmark and that the proposed RCN-based denoiser can effectively reduce the error rate under various types of noise.
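To make the core mechanism concrete, here is a minimal reservoir-computing sketch in Python; the shapes, scaling constants, and ridge parameter are illustrative assumptions, not the authors' tuned setup. The fixed random input and recurrent weights drive the reservoir, and only the linear readout is trained, here by ridge regression.

```python
# Minimal reservoir-computing sketch (hypothetical sizes and parameters).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_out = 28, 500, 10   # e.g. one image row per time step for MNIST
W_in  = rng.uniform(-0.1, 0.1, (n_res, n_in))      # fixed random input weights
W_res = rng.normal(0, 1, (n_res, n_res))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))  # scale spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W_res @ x)        # leakless update, for brevity
        states.append(x.copy())
    return np.stack(states)

def train_readout(X, Y, ridge=1e-6):
    """Ridge-regression readout: X (N, n_res) states, Y (N, n_out) targets."""
    A = X.T @ X + ridge * np.eye(n_res)
    return np.linalg.solve(A, X.T @ Y)             # W_out, shape (n_res, n_out)
```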
Medical image denoising using convolutional denoising autoencoders
Image denoising is an important pre-processing step in medical image analysis. Different algorithms have been proposed over the past three decades with varying denoising performance. More recently, deep learning based models have shown great promise, outperforming all conventional methods. These methods are, however, limited by their requirement of large training sample sizes and high computational cost. In this paper we show that, using small sample sizes, denoising autoencoders constructed from convolutional layers can be used for efficient denoising of medical images. Heterogeneous images can be combined to boost the sample size for increased denoising performance. The simplest of networks can reconstruct images with corruption levels so high that noise and signal are indistinguishable to the human eye.
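A minimal convolutional denoising autoencoder of the kind described can be sketched as follows; this is an illustrative PyTorch example with assumed layer sizes, not the paper's exact architecture. The network is trained on (noisy, clean) pairs and learns to reconstruct the clean image.

```python
# Sketch of a convolutional denoising autoencoder (assumed architecture).
import torch
import torch.nn as nn

class ConvDAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                         # downsample 2x
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvDAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clean = torch.rand(8, 1, 64, 64)                     # stand-in image batch
noisy = (clean + 0.3 * torch.randn_like(clean)).clamp(0, 1)
loss = loss_fn(model(noisy), clean)                  # reconstruct the clean image
loss.backward()
opt.step()
```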
Learning how to be robust: Deep polynomial regression
Polynomial regression is a recurring problem with a large number of applications. In computer vision it often appears in motion analysis. Whatever the application, standard methods for regression of polynomial models tend to deliver biased results when the input data is heavily contaminated by outliers. Moreover, the problem is even harder when the outliers have strong structure. Departing from problem-tailored heuristics for robust estimation of parametric models, we explore deep convolutional neural networks. Our work aims to find a generic approach for training deep regression models without the explicit need for supervised annotation. We bypass the need for a tailored loss function on the regression parameters by attaching to our model a differentiable hard-wired decoder corresponding to the polynomial operation at hand. We demonstrate the value of our findings by comparing with standard robust regression methods. Furthermore, we demonstrate how to use such models for a real computer vision problem, i.e., video stabilization. The qualitative and quantitative experiments show that neural networks are able to learn robustness for general polynomial regression, with results that clearly surpass those of traditional robust estimation methods.
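The "hard-wired decoder" idea can be illustrated with a short sketch; the network, polynomial degree, and loss below are assumptions for illustration, not the paper's architecture. A network predicts polynomial coefficients, a fixed differentiable decoder evaluates the polynomial, and the loss is taken in data space, so no ground-truth coefficients are needed.

```python
# Sketch: regression network + hard-wired polynomial decoder (assumed sizes).
import torch
import torch.nn as nn

degree, n_pts = 2, 128

net = nn.Sequential(                                 # maps (x, y) samples to coefficients
    nn.Flatten(),
    nn.Linear(2 * n_pts, 256), nn.ReLU(),
    nn.Linear(256, degree + 1),
)

def decode(coeffs, x):
    """Hard-wired decoder: evaluate sum_k c_k * x^k, differentiably."""
    powers = torch.stack([x ** k for k in range(degree + 1)], dim=-1)  # (B, N, d+1)
    return (powers * coeffs.unsqueeze(1)).sum(-1)                      # (B, N)

x = torch.linspace(-1, 1, n_pts).expand(16, -1)
y_obs = 0.5 * x**2 - x + 0.1 * torch.randn_like(x)   # noisy observations
coeffs = net(torch.stack([x, y_obs], dim=1))         # predicted coefficients
loss = (decode(coeffs, x) - y_obs).abs().mean()      # loss in data space, not on coeffs
loss.backward()
```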
Recombinator Networks: Learning Coarse-to-Fine Feature Aggregation
Deep neural networks with alternating convolutional, max-pooling and decimation layers are widely used in state-of-the-art architectures for computer vision. Max-pooling purposefully discards precise spatial information in order to create features that are more robust, typically organized as lower-resolution spatial feature maps. On some tasks, such as whole-image classification, max-pooling derived features are well suited; however, for tasks requiring precise localization, such as pixel-level prediction and segmentation, max-pooling destroys exactly the information required to perform well. Precise localization may be preserved by shallow convnets without pooling, but at the expense of robustness. Can we have our max-pooled multi-layered cake and eat it too? Several papers have proposed summation- and concatenation-based methods for combining upsampled coarse, abstract features with finer features to produce robust pixel-level predictions. Here we introduce another model, dubbed Recombinator Networks, in which coarse features inform finer features early in their formation, such that the finer features can make use of several layers of computation in deciding how to use the coarse features. The model is trained once, end-to-end, and performs better than summation-based architectures, reducing the error from the previous state of the art on two facial keypoint datasets, AFW and AFLW, by 30%, and beating the current state of the art on 300W without using extra data. We improve performance even further by adding a denoising prediction model based on a novel convnet formulation.
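One reading of the coarse-to-fine recombination idea is sketched below; the block structure and channel counts are assumptions, not the paper's exact architecture. Upsampled coarse features are concatenated with finer features before the fine branch's convolutions, so the fine branch can process the coarse information through several layers rather than merely summing it in at the end.

```python
# Sketch of a coarse-to-fine recombination block (assumed structure).
import torch
import torch.nn as nn

class RecombinatorBlock(nn.Module):
    def __init__(self, c_coarse, c_fine, c_out):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode='nearest')
        self.conv = nn.Sequential(                   # fine branch sees coarse early
            nn.Conv2d(c_coarse + c_fine, c_out, 3, padding=1), nn.ReLU(),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
        )

    def forward(self, coarse, fine):
        return self.conv(torch.cat([self.up(coarse), fine], dim=1))

coarse = torch.randn(1, 64, 16, 16)                  # low-resolution, abstract features
fine = torch.randn(1, 32, 32, 32)                    # high-resolution, detailed features
out = RecombinatorBlock(64, 32, 32)(coarse, fine)    # -> (1, 32, 32, 32)
```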
Deep Burst Denoising
Noise is an inherent issue of low-light image capture, one which is exacerbated on mobile devices due to their narrow apertures and small sensors. One strategy for mitigating noise in a low-light situation is to increase the shutter time of the camera, thus allowing each photosite to integrate more light and decreasing the noise variance. However, there are two downsides to long exposures: (a) bright regions can exceed the sensor range, and (b) camera and scene motion will result in blurred images. Another way of gathering more light is to capture multiple short (thus noisy) frames in a "burst" and intelligently integrate the content, thus avoiding the above downsides. In this paper, we use the burst-capture strategy and implement the intelligent integration via a recurrent fully convolutional deep neural net (CNN). We build our novel multiframe architecture to be a simple addition to any single-frame denoising model, and design it to handle an arbitrary number of noisy input frames. We show that it achieves state-of-the-art denoising results on our burst dataset, improving on the best published multi-frame techniques, such as VBM4D and FlexISP. Finally, we explore other applications of image enhancement by integrating content from multiple frames and demonstrate that our DNN architecture generalizes well to image super-resolution.
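A minimal sketch of such a recurrent multiframe extension follows; the layer sizes and fusion scheme are assumptions for illustration, and the actual architecture is more elaborate. A hidden state carries information across an arbitrary number of burst frames, so the model wraps a single-frame denoiser rather than replacing it.

```python
# Sketch of a recurrent burst denoiser (assumed layer sizes and fusion).
import torch
import torch.nn as nn

class BurstDenoiser(nn.Module):
    def __init__(self, feat=32):
        super().__init__()
        self.feat = feat
        self.encode = nn.Conv2d(1, feat, 3, padding=1)          # per-frame features
        self.recur = nn.Conv2d(2 * feat, feat, 3, padding=1)    # fuse frame + state
        self.decode = nn.Conv2d(feat, 1, 3, padding=1)

    def forward(self, burst):                        # burst: (B, T, 1, H, W)
        state = torch.zeros(burst.size(0), self.feat, *burst.shape[-2:])
        outs = []
        for t in range(burst.size(1)):               # arbitrary number of frames
            f = torch.relu(self.encode(burst[:, t]))
            state = torch.relu(self.recur(torch.cat([f, state], dim=1)))
            outs.append(self.decode(state))          # denoised estimate per frame
        return torch.stack(outs, dim=1)

burst = torch.randn(2, 5, 1, 64, 64)                 # five noisy frames
denoised = BurstDenoiser()(burst)                    # -> (2, 5, 1, 64, 64)
```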
Wavelet Integrated CNNs for Noise-Robust Image Classification
Convolutional Neural Networks (CNNs) are generally prone to noise interference, i.e., small image noise can cause drastic changes in the output. To suppress the effect of noise on the final prediction, we enhance CNNs by replacing max-pooling, strided convolution, and average-pooling with the Discrete Wavelet Transform (DWT). We present general DWT and Inverse DWT (IDWT) layers applicable to various wavelets, such as Haar, Daubechies, and Cohen wavelets, and design wavelet-integrated CNNs (WaveCNets) using these layers for image classification. In WaveCNets, feature maps are decomposed into low-frequency and high-frequency components during downsampling. The low-frequency component retains the main information, including the basic object structures, and is transmitted into the subsequent layers to extract robust high-level features. The high-frequency components, which contain most of the data noise, are dropped during inference to improve the noise robustness of the WaveCNets. Our experimental results on ImageNet and ImageNet-C (the noisy version of ImageNet) show that WaveCNets, the wavelet-integrated versions of VGG, ResNets, and DenseNet, achieve higher accuracy and better noise robustness than their vanilla versions.
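A minimal Haar DWT downsampling layer illustrates the idea; this is a sketch of the mechanism rather than the released WaveCNets code. The 2x2 Haar transform splits a feature map into low- and high-frequency sub-bands, and only the low-frequency sub-band is kept in place of max-pooling, discarding the noise-dominated high frequencies.

```python
# Sketch: Haar low-pass downsampling in place of pooling (illustrative only).
import torch
import torch.nn.functional as F

def haar_dwt_lowpass(x):
    """x: (B, C, H, W) with even H, W -> low-frequency sub-band (B, C, H/2, W/2)."""
    ll = 0.5 * torch.ones(1, 1, 2, 2)                # 2x2 Haar low-pass (LL) filter
    c = x.shape[1]
    weight = ll.repeat(c, 1, 1, 1)                   # one filter per channel
    return F.conv2d(x, weight, stride=2, groups=c)   # stride-2 conv = downsample

x = torch.randn(1, 8, 32, 32)
low = haar_dwt_lowpass(x)                            # -> (1, 8, 16, 16)
```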