SINVAD: Search-based Image Space Navigation for DNN Image Classifier Test Input Generation
The testing of Deep Neural Networks (DNNs) has become increasingly important
as DNNs are widely adopted by safety-critical systems. While many test adequacy
criteria have been suggested, automated test input generation for many types of
DNNs remains a challenge because the raw input space is too large to randomly
sample or to navigate and search for plausible inputs. Consequently, current
testing techniques for DNNs depend on small local perturbations to existing
inputs, based on the metamorphic testing principle. We propose new ways to
search not over the entire image space, but rather over a plausible input space
that resembles the true training distribution. This space is constructed using
Variational Autoencoders (VAEs), and navigated through their latent vector
space. We show that this space helps efficiently produce test inputs that can
reveal information about the robustness of DNNs when dealing with realistic
tests, opening the field to meaningful exploration through the space of highly
structured images.
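
To make the idea concrete, here is a minimal sketch in Python/PyTorch of searching a VAE's latent space for a plausible input that the classifier mislabels. The vae and classifier interfaces are assumptions for illustration, and a plain random walk stands in for the paper's search-based optimization:

    import torch

    def latent_search(vae, classifier, seed_image, true_label,
                      steps=1000, sigma=0.1):
        # Start from the latent code of a real image so candidates stay
        # close to the training distribution.
        with torch.no_grad():
            z = vae.encode(seed_image)  # assumed: returns a latent vector
            for _ in range(steps):
                z_candidate = z + sigma * torch.randn_like(z)  # small latent move
                image = vae.decode(z_candidate)  # decoded images stay plausible
                pred = classifier(image).argmax(dim=1)
                if pred.item() != true_label:
                    return image  # plausible input exposing a misprediction
                z = z_candidate  # random-walk step; the paper's actual
                                 # search strategy may differ
        return None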
On the Challenges of Physical Implementations of RBMs
Restricted Boltzmann machines (RBMs) are powerful machine learning models,
but learning and some kinds of inference in the model require sampling-based
approximations, which, in classical digital computers, are implemented using
expensive MCMC. Physical computation offers the opportunity to reduce the cost
of sampling by building physical systems whose natural dynamics correspond to
drawing samples from the desired RBM distribution. Such a system avoids the
burn-in and mixing cost of a Markov chain. However, hardware implementations of
this variety usually entail limitations such as low precision and a limited
range of the parameters, and restrictions on the size and topology of the RBM. We
conduct software simulations to determine how harmful each of these
restrictions is. Our simulations are designed to reproduce aspects of the
D-Wave quantum computer, but the issues we investigate arise in most forms of
physical computation.
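
A rough illustration of such a simulation in Python/NumPy: block Gibbs sampling in a Bernoulli RBM, with weights clipped and rounded to mimic a low-precision, limited-range device. The quantization scheme below is a stand-in assumption, not the actual constraint set of the D-Wave hardware:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def quantize(w, bits=4, w_max=1.0):
        # Assumed hardware limits: clip to [-w_max, w_max], then round
        # onto a (2**bits - 1)-level grid to mimic low precision.
        levels = 2 ** bits - 1
        w = np.clip(w, -w_max, w_max)
        return np.round((w + w_max) / (2 * w_max) * levels) / levels * (2 * w_max) - w_max

    def gibbs_sample(W, b, c, v, n_steps=100, seed=0):
        # Block Gibbs sampling: alternately resample hidden units given
        # visibles and visibles given hiddens.
        rng = np.random.default_rng(seed)
        for _ in range(n_steps):
            h = (rng.random(c.shape) < sigmoid(v @ W + c)).astype(float)
            v = (rng.random(b.shape) < sigmoid(h @ W.T + b)).astype(float)
        return v

    # Compare samples drawn with full-precision vs. quantized weights.
    rng = np.random.default_rng(0)
    W = rng.normal(0.0, 0.1, size=(16, 8))          # 16 visible, 8 hidden units
    b, c = np.zeros(16), np.zeros(8)
    v0 = (rng.random(16) < 0.5).astype(float)
    v_full = gibbs_sample(W, b, c, v0.copy())
    v_quant = gibbs_sample(quantize(W), b, c, v0.copy())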
Growth, reproduction, and toxin production of Phomopsis sojae Lehman in culture
The role of Phomopsis sojae in the soybean field and in seed decay has been well investigated. The production of toxins by this fungus, not previously reported, was examined in this study. Dilutions of P. sojae culture filtrates, derived from 14- to 29-day-old cultures, significantly affected soybean seed germination; 1/10 and 1/100 dilutions inhibited soybean seedling root elongation. A dilutable phytotoxin was produced by P. sojae in culture.
The fungus grew well on six of the seven media tested, at temperatures ranging from 10 to 30°C. Pycnidial formation in culture occurred infrequently and depended on incubation periods of 35 days or longer.
An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks
Catastrophic forgetting is a problem faced by many machine learning models
and algorithms. When trained on one task, then trained on a second task, many
machine learning models "forget" how to perform the first task. This is widely
believed to be a serious problem for neural networks. Here, we investigate the
extent to which the catastrophic forgetting problem occurs for modern neural
networks, comparing both established and recent gradient-based training
algorithms and activation functions. We also examine the effect of the
relationship between the first task and the second task on catastrophic
forgetting. We find that training with the dropout algorithm is consistently
best: it adapts best to the new task, remembers the old task best, and has the
best tradeoff curve between these two extremes. We also find that different
tasks and relationships between tasks
result in very different rankings of activation function performance. This
suggests that the choice of activation function should always be cross-validated.
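
The experimental protocol reduces to a simple loop, sketched here in Python/PyTorch with placeholder data loaders: train on the first task, continue training on the second, and track first-task accuracy to quantify forgetting. The dropout-equipped placeholder model at the end reflects the training choice the study finds consistently best:

    import torch
    import torch.nn as nn

    def accuracy(model, loader):
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for x, y in loader:
                correct += (model(x).argmax(dim=1) == y).sum().item()
                total += y.numel()
        return correct / total

    def train_sequentially(model, task_a_train, task_b_train, task_a_test,
                           epochs=5, lr=1e-2):
        # Train on task A, then task B; measure task-A accuracy after each
        # phase. The drop between measurements is the catastrophic forgetting.
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for loader in (task_a_train, task_b_train):
            model.train()
            for _ in range(epochs):
                for x, y in loader:
                    opt.zero_grad()
                    loss_fn(model(x), y).backward()
                    opt.step()
            print("task-A accuracy:", accuracy(model, task_a_test))

    # Placeholder architecture; nn.Dropout is the mechanism being compared.
    model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),
                          nn.Dropout(0.5), nn.Linear(256, 10))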
- …