Rejection-Cascade of Gaussians: Real-time adaptive background subtraction framework
Background-Foreground classification is a well-studied problem in computer
vision. Due to the pixel-wise nature of the modeling and processing, such
algorithms usually struggle to satisfy real-time constraints: there is a
trade-off between speed (driven by model complexity) and accuracy. Inspired by
the rejection cascade of the Viola-Jones classifier, we decompose the Gaussian
Mixture Model (GMM) into an adaptive Cascade of Gaussians (CoG). We achieve a
substantial improvement in speed without compromising accuracy with respect to
the baseline GMM model. We demonstrate a speed-up factor of 4-5x and a 17
percent average improvement in accuracy over the Wallflower surveillance
datasets. The CoG is then demonstrated over the latent-space representation of
images from a convolutional variational autoencoder (VAE). We provide initial
results over the CDW-2014 dataset, which could speed up background subtraction
for deep architectures.
Comment: Accepted for the National Conference on Computer Vision, Pattern
Recognition, Image Processing and Graphics (NCVPRIPG 2019).
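The cascade idea described above can be illustrated with a minimal toy sketch (hypothetical, not the authors' implementation): the per-pixel Gaussians are kept ordered by significance (weight over standard deviation), and a pixel is accepted as background at the first matching stage, so most pixels never reach the later, rarely-needed Gaussians:

```python
import numpy as np

# Toy sketch of a per-pixel cascade of Gaussians (CoG). Components are
# evaluated in order of significance (weight / sigma), so typical background
# pixels are accepted by the first test and the rest of the cascade is skipped.

def classify_pixel(value, means, sigmas, weights, match_thresh=2.5):
    """Return True (background) at the first matching Gaussian; fall
    through to foreground if no stage accepts the pixel."""
    order = np.argsort(-(weights / sigmas))          # most significant first
    for k in order:
        if abs(value - means[k]) < match_thresh * sigmas[k]:
            return True                              # early accept: background
    return False                                     # rejected by all stages

# Toy example: one pixel modelled by two Gaussian modes.
means   = np.array([100.0, 200.0])
sigmas  = np.array([5.0, 10.0])
weights = np.array([0.7, 0.3])

assert classify_pixel(103.0, means, sigmas, weights) is True   # near mode 1
assert classify_pixel(150.0, means, sigmas, weights) is False  # matches neither mode
```

The early-exit ordering is what yields the speed-up: in a real scene the dominant background Gaussian decides the vast majority of pixels in a single test.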
Unmasking Clever Hans Predictors and Assessing What Machines Really Learn
Current learning machines have successfully solved hard application problems,
reaching high accuracy and displaying seemingly "intelligent" behavior. Here we
apply recent techniques for explaining decisions of state-of-the-art learning
machines and analyze various tasks from computer vision and arcade games. This
showcases a spectrum of problem-solving behaviors ranging from naive and
short-sighted, to well-informed and strategic. We observe that standard
performance evaluation metrics can be oblivious to distinguishing these diverse
problem-solving behaviors. Furthermore, we propose our semi-automated Spectral
Relevance Analysis that provides a practically effective way of characterizing
and validating the behavior of nonlinear learning machines. This helps to
assess whether a learned model indeed delivers reliably for the problem that it
was conceived for. Furthermore, our work intends to add a voice of caution to
the ongoing excitement about machine intelligence and pledges to evaluate and
judge some of these recent successes in a more nuanced manner.
Comment: Accepted for publication in Nature Communications.
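The clustering step behind an analysis like this can be sketched in a few lines (a hypothetical numpy-only illustration, not the authors' Spectral Relevance Analysis pipeline): synthetic relevance heatmaps stand in for real explanation maps, and a two-way spectral bipartition via the Fiedler vector of the graph Laplacian separates samples that share the same decision strategy:

```python
import numpy as np

# Hypothetical sketch of the idea: cluster per-sample relevance heatmaps so
# that groups of samples sharing one decision strategy (e.g. a spurious
# watermark) stand out. Real SRA uses LRP heatmaps; here two synthetic
# "strategies" put relevance in the top-left vs the bottom-right corner.

rng = np.random.default_rng(0)

def heatmap(corner):
    h = rng.random((8, 8)) * 0.1          # low background relevance
    if corner == "tl":
        h[:3, :3] += 1.0                  # strategy A: top-left focus
    else:
        h[5:, 5:] += 1.0                  # strategy B: bottom-right focus
    return h.ravel()

maps = np.stack([heatmap("tl") for _ in range(10)] +
                [heatmap("br") for _ in range(10)])

# Pairwise affinity between heatmaps, then unnormalised Laplacian L = D - W.
d2 = ((maps[:, None, :] - maps[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / d2.mean())
L = np.diag(W.sum(1)) - W

# The Fiedler vector (eigenvector of the 2nd-smallest eigenvalue) splits the
# samples into the two strategy clusters; thresholding its sign gives labels.
vals, vecs = np.linalg.eigh(L)
labels = (vecs[:, 1] > 0).astype(int)
```

A practitioner would then inspect a representative heatmap from each cluster to judge whether a strategy is legitimate or a "Clever Hans" artifact.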
Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks
Spiking Neural Networks (SNNs) claim to present many advantages in terms of
biological plausibility and energy efficiency compared to standard Deep Neural
Networks (DNNs). Recent works have shown that DNNs are vulnerable to
adversarial attacks, i.e., small perturbations added to the input data can lead
to targeted or random misclassifications. In this paper, we aim to
investigate the key research question: "Are SNNs secure?" To this end, we
perform a comparative study of the security vulnerabilities in SNNs and DNNs
w.r.t. the adversarial noise. Afterwards, we propose a novel black-box attack
methodology, i.e., without the knowledge of the internal structure of the SNN,
which employs a greedy heuristic to automatically generate imperceptible and
robust adversarial examples (i.e., attack images) for the given SNN. We perform
an in-depth evaluation for a Spiking Deep Belief Network (SDBN) and a DNN
having the same number of layers and neurons (to obtain a fair comparison), in
order to study the efficiency of our methodology and to understand the
differences between SNNs and DNNs w.r.t. the adversarial examples. Our work
opens new avenues of research towards the robustness of the SNNs, considering
their similarities to the human brain's functionality.
Comment: Accepted for publication at the 2020 International Joint Conference
on Neural Networks (IJCNN).
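The black-box, greedy flavour of such an attack can be sketched as follows (a hypothetical toy, not the paper's methodology): the attacker only queries the model's output scores, and greedily perturbs, one pixel at a time, whichever bounded change most lowers the true class's score, keeping the perturbation imperceptible by limiting it to ±eps per pixel:

```python
import numpy as np

# Hypothetical greedy black-box attack sketch: no gradients, no knowledge of
# the model's internal structure, only score queries. Each step tries every
# pixel at +/- eps around the ORIGINAL input (so the total perturbation stays
# bounded) and keeps the single change that most lowers the true-class score.

def greedy_attack(model, x, true_class, eps=0.1, steps=5):
    x_adv = x.copy()
    for _ in range(steps):
        base = model(x_adv)[true_class]
        best_gain, best_idx, best_delta = 0.0, None, 0.0
        for i in range(x.size):
            for delta in (eps, -eps):
                trial = x_adv.copy()
                trial[i] = np.clip(x[i] + delta, 0.0, 1.0)
                gain = base - model(trial)[true_class]
                if gain > best_gain:
                    best_gain, best_idx, best_delta = gain, i, delta
        if best_idx is None:        # no pixel change helps any more
            break
        x_adv[best_idx] = np.clip(x[best_idx] + best_delta, 0.0, 1.0)
    return x_adv

# Toy "victim": a fixed linear softmax over 4 inputs, queried as a black box.
Wm = np.array([[ 2.0, -1.0, 0.5, 0.0],
               [-1.0,  2.0, 0.0, 0.5]])

def model(x):
    z = Wm @ x
    e = np.exp(z - z.max())
    return e / e.sum()

x = np.array([0.8, 0.2, 0.5, 0.5])
adv = greedy_attack(model, x, true_class=0)
```

The perturbation stays within eps of the original image in every coordinate, while the confidence in the true class strictly decreases with each accepted step.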
Neural network image reconstruction for magnetic particle imaging
We investigate neural network image reconstruction for magnetic particle
imaging. The network's performance depends strongly on the convolution effects
in the spectral input data: the larger convolution effect appearing at
relatively smaller nanoparticle sizes obstructs network training. The trained
single-layer network reveals a weighting matrix composed of basis vectors in
the form of Chebyshev polynomials of the second kind. This weighting matrix
corresponds to an inverse system matrix, where the incoherency of the basis
vectors due to low convolution effects, as well as the nonlinear activation
function, plays a crucial role in retrieving the matrix elements. Test images
are well reconstructed through trained networks having an inverse kernel
matrix. We also confirm that a multi-layer network with one hidden layer
improves the performance. A neural network architecture that overcomes the low
incoherence of the inverse kernel through its classification property will
become a better tool for image reconstruction.
Comment: 9 pages, 11 figures.
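The claim that a trained single-layer network's weighting matrix acts as an inverse system matrix can be illustrated with a hypothetical linear toy (not the paper's setup, which involves real MPI spectra and a nonlinear activation): for a forward model s = A @ c mapping a particle image c to its spectrum s, a single linear layer W fitted on (s, c) pairs by gradient descent converges toward the inverse of A:

```python
import numpy as np

# Hypothetical sketch: a well-conditioned toy "system matrix" A maps images
# to spectra. Training a single linear layer W to map spectra back to images
# by gradient descent on the mean squared error recovers W ~= inv(A), i.e.
# the weighting matrix behaves as an inverse system matrix.

rng = np.random.default_rng(1)
n = 8
A = 4.0 * np.eye(n) + 0.3 * rng.standard_normal((n, n))  # toy system matrix

C = rng.standard_normal((n, 500))   # random training "images"
S = A @ C                           # their simulated spectra

W = np.zeros((n, n))
lr = 0.02
for _ in range(2000):
    err = W @ S - C                       # batch reconstruction error
    W -= lr * (err @ S.T) / S.shape[1]    # gradient step on the MSE loss

x_true = rng.standard_normal(n)
x_rec = W @ (A @ x_true)                  # reconstruct a held-out image
assert np.allclose(x_rec, x_true, atol=1e-4)
```

In the paper's actual setting the conditioning is governed by the convolution effect of the particle spectrum, which is why smaller particles (stronger convolution, more coherent basis vectors) make this inversion harder to learn.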