Spectrum-Diverse Neuroevolution with Unified Neural Models
Learning algorithms are being increasingly adopted in various applications.
However, further expansion will require methods that work more automatically.
To enable this level of automation, a more powerful solution representation is
needed. However, increasing the representation complexity raises a second
problem: the search space becomes huge, and therefore a scalable and
efficient search algorithm is also required. To solve both problems, first a
powerful representation is proposed that unifies most of the neural networks
features from the literature into one representation. Secondly, a new
diversity-preserving method called Spectrum Diversity is created, based on the
new concept of a chromosome spectrum: a spectrum built from the characteristics
and frequency of alleles in a chromosome. The combination of Spectrum Diversity
with a unified neuron representation enables the algorithm to either surpass or
equal NeuroEvolution of Augmenting Topologies (NEAT) on all of the five classes
of problems tested. Ablation tests justifies the good results, showing the
importance of added new features in the unified neuron representation. Part of
the success is attributed to the novelty-focused evolution and good scalability
with chromosome size provided by Spectrum Diversity. Thus, this study sheds
light on a new representation and diversity preserving mechanism that should
impact algorithms and applications to come.
The code is available at https://github.com/zweifel/Physis-Shard
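The abstract does not reproduce the spectrum computation; a minimal sketch of the chromosome-spectrum idea, with the allele-feature function left as a hypothetical parameter, could look like this:

```python
from collections import Counter

def chromosome_spectrum(chromosome, feature_of=lambda allele: allele):
    # Spectrum: frequency histogram of allele characteristics
    # (whatever feature_of extracts from each allele).
    return Counter(feature_of(a) for a in chromosome)

def spectrum_distance(spec_a, spec_b):
    # L1 distance between two spectra; diversity can be preserved by
    # favouring individuals whose spectra differ from the population's.
    keys = set(spec_a) | set(spec_b)
    return sum(abs(spec_a[k] - spec_b[k]) for k in keys)
```

Two chromosomes of equal length but different allele mixes then receive a nonzero distance, which is the signal a diversity-preserving selection scheme needs.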
Universal Rules for Fooling Deep Neural Networks based Text Classification
Recently, deep learning based natural language processing techniques have been
used extensively to deal with spam mail, censorship evaluation in social
networks, and similar tasks. However, only a couple of works evaluate the
vulnerabilities of such deep neural networks. Here, we go beyond attacks to
investigate, for the first time, universal rules, i.e., rules that are sample
agnostic and therefore could turn any text sample into an adversarial one. In
fact, the universal rules do not use any information from the method itself:
no gradient information or training dataset information is used, making them
black-box universal attacks. In other words, the universal rules are sample
and method agnostic. By proposing a
coevolutionary optimization algorithm we show that it is possible to create
universal rules that can automatically craft imperceptible adversarial samples
(fewer than five perturbations, each close to a misspelling, are inserted
in the text sample). A comparison with a random search algorithm further
justifies the strength of the method. Thus, universal rules for fooling
networks are here shown to exist. Hopefully, the results from this work will
impact the development of yet more sample and model agnostic attacks as well as
their defenses, culminating in perhaps a new age for artificial intelligence.
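The paper's rule encoding is not reproduced in the abstract; as an illustrative sketch, a sample-agnostic rule could be a fixed list of relative positions and replacement characters, applied unchanged to any input (both the encoding and the cap at four edits are assumptions for illustration):

```python
def apply_rule(text, rule):
    # A "universal rule" here: a fixed list of (position_fraction, char)
    # edits applied identically to any text sample (sample-agnostic).
    chars = list(text)
    for frac, ch in rule[:4]:  # fewer than five perturbations
        i = min(int(frac * len(chars)), len(chars) - 1)
        chars[i] = ch          # misspelling-like substitution
    return "".join(chars)
```

A coevolutionary search would then evolve a population of such rules against a population of text samples, scoring each rule by how often the perturbed texts flip the classifier's label.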
Araguaia Medical Vision Lab at ISIC 2017 Skin Lesion Classification Challenge
This paper describes the participation of Araguaia Medical Vision Lab at the
International Skin Imaging Collaboration 2017 Skin Lesion Challenge. We
describe the use of deep convolutional neural networks in an attempt to
classify images of Melanoma and Seborrheic Keratosis lesions. Using fine-tuned
GoogleNet and AlexNet, we attained AUC results of 0.950 and 0.846 on Seborrheic
Keratosis and Melanoma, respectively.
Comment: Abstract submitted as a requirement to the ISIC 2017 challenge
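AUC, the challenge metric quoted above, is the probability that a random positive case is scored above a random negative one; a minimal pure-Python sketch (not the challenge's official evaluation code):

```python
def auc(labels, scores):
    # Rank-based AUC: fraction of positive/negative pairs where the
    # positive case outscores the negative one (ties count as half).
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```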
Contingency Training
When applied to high-dimensional datasets, feature selection algorithms might
still leave dozens of irrelevant variables in the dataset. Therefore, even
after feature selection has been applied, classifiers must be prepared for the
presence of irrelevant variables. This paper investigates a new training method
called Contingency Training which increases the accuracy as well as the
robustness against irrelevant attributes. Contingency training is classifier
independent. By subsampling and removing information from each sample, it
creates a set of constraints. These constraints help the method automatically
find proper importance weights of the dataset's features. Experiments are
conducted with the contingency training applied to neural networks over
traditional datasets as well as datasets with additional irrelevant variables.
For all of the tests, contingency training surpassed the unmodified training on
datasets with irrelevant variables and even outperformed it slightly when only
a few or no irrelevant variables were present.
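The exact subsampling scheme is not given in the abstract; a minimal sketch of the "remove information from each sample" step, assuming random feature zeroing at a hypothetical drop rate:

```python
import random

def contingency_mask(sample, drop_rate=0.3, rng=random):
    # Remove information by zeroing a random subset of features; the
    # constraint is that the classifier should behave consistently
    # across such degraded views of the same sample.
    return [0.0 if rng.random() < drop_rate else x for x in sample]
```

Training would then pair each masked view with the original label, pushing the model to rely on features that survive masking, i.e. the genuinely relevant ones.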
Self Training Autonomous Driving Agent
Intrinsically, driving is a Markov Decision Process, which suits the
reinforcement learning paradigm well. In this paper, we propose a novel agent which
learns to drive a vehicle without any human assistance. We use the concept of
reinforcement learning and evolutionary strategies to train our agent in a 2D
simulation environment. Our model's architecture goes beyond the World Model's
by introducing difference images in the autoencoder. This novel use of
difference images in the autoencoder gives a better representation of the latent
space with respect to the motion of the vehicle and helps an autonomous agent to
learn more efficiently how to drive a vehicle. Results show that our method
requires 96% fewer total agents, 87.5% fewer agents per generation, 70% fewer
generations, and 90% fewer rollouts than the original architecture while
achieving the same accuracy as the original.
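The difference-image idea can be sketched independently of the full World Model pipeline: subtract consecutive frames so the autoencoder sees motion explicitly (a simplified grayscale version; the paper's exact preprocessing is not specified here):

```python
def difference_image(frame_t, frame_prev):
    # Per-pixel difference between consecutive (grayscale) frames;
    # static background cancels out, moving edges remain.
    return [[a - b for a, b in zip(row_t, row_p)]
            for row_t, row_p in zip(frame_t, frame_prev)]
```

The autoencoder input would then stack the current frame with its difference image, giving the latent space direct access to motion cues.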
Imaginary Verma modules for the extended Affine Lie algebra
We consider one of the most natural extended affine Lie algebras, the algebra
and begin a theory of its representations. In particular,
we study a class of imaginary Verma modules, obtain a criterion of
irreducibility, and describe their submodule structure in "general position".
Novelty-organizing team of classifiers in noisy and dynamic environments
In the real world, the environment is constantly changing with the input
variables under the effect of noise. However, few algorithms have been shown
to work under those circumstances. Here, Novelty-Organizing Team of
Classifiers (NOTC) is applied to the continuous action mountain car as well as
two variations of it: a noisy mountain car and an unstable weather mountain
car. These problems take respectively noise and change of problem dynamics into
account. Moreover, NOTC is compared with NeuroEvolution of Augmenting
Topologies (NEAT) in these problems, revealing a trade-off between the
approaches. While NOTC achieves the best performance in all of the problems,
NEAT needs fewer trials to converge. It is demonstrated that NOTC achieves
better performance because of its division of the input space (creating easier
problems). Unfortunately, this division of input space also requires a bit of
time to bootstrap.
Tackling Unit Commitment and Load Dispatch Problems Considering All Constraints with Evolutionary Computation
Unit commitment and load dispatch problems are important and complex problems
in power system operations that have traditionally been solved separately. In
this paper, both problems are solved together without approximations or
simplifications. In fact, the problem solved has a massive amount of
grid-connected photovoltaic units, four pump-storage hydro plants as energy
storage units and ten thermal power plants, each with its own set of operation
requirements that need to be satisfied. To face such a complex constrained
optimization problem, an adaptive repair method is proposed. By including the
repair method itself as a parameter to be optimized, the proposed adaptive
repair method avoids any bias in repair choices. Moreover, this results in a
repair method that adapts to the problem and improves together with the
solution during optimization. Experiments are conducted revealing that the
proposed method is capable of surpassing exact method solutions on a simplified
version of the problem with approximations, as well as solving the otherwise
intractable complete problem without simplifications. Moreover, since the
proposed approach can be applied to other problems in general, and it may not
be obvious how to choose the constraint handling for a certain constraint, a
guideline is provided explaining the reasoning behind the choices. Thus, this
paper opens further possibilities to deal with the ever-changing types of
generation units and other similarly complex operation/schedule optimization
problems with many difficult constraints.
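The core of the adaptive repair idea, letting the optimizer choose its own repair operator, can be sketched as follows (the two repair operators and the genome layout are hypothetical illustrations; the paper's constraints are far richer than simple variable bounds):

```python
def repair_clip(x, lo, hi):
    # Project each violated variable back onto its nearest bound.
    return [min(max(v, l), h) for v, l, h in zip(x, lo, hi)]

def repair_reflect(x, lo, hi):
    # Reflect violations back inside the bounds instead of clipping.
    out = []
    for v, l, h in zip(x, lo, hi):
        if v < l:
            v = l + (l - v)
        elif v > h:
            v = h - (v - h)
        out.append(min(max(v, l), h))
    return out

REPAIRS = [repair_clip, repair_reflect]

def decode(genome, lo, hi):
    # The repair choice is itself an evolved gene (genome[0]); the rest
    # of the genome is the candidate solution, repaired into feasibility.
    method = REPAIRS[int(genome[0]) % len(REPAIRS)]
    return method(genome[1:], lo, hi)
```

Because the repair gene evolves with the solution, selection keeps whichever repair strategy works best for the problem at hand, removing the designer's bias in that choice.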
One pixel attack for fooling deep neural networks
Recent research has revealed that the output of Deep Neural Networks (DNN)
can be easily altered by adding relatively small perturbations to the input
vector. In this paper, we analyze an attack in an extremely limited scenario
where only one pixel can be modified. For that we propose a novel method for
generating one-pixel adversarial perturbations based on differential evolution
(DE). It requires less adversarial information (a black-box attack) and can
fool more types of networks due to the inherent features of DE. The results
show that 67.97% of the natural images in the Kaggle CIFAR-10 test dataset and
16.04% of the ImageNet (ILSVRC 2012) test images can be perturbed to at least
one target class by modifying just one pixel with 74.03% and 22.91% confidence
on average. We also show the same vulnerability on the original CIFAR-10
dataset. Thus, the proposed attack explores a different take on adversarial
machine learning in an extremely limited scenario, showing that current DNNs are
also vulnerable to such low dimension attacks. Besides, we also illustrate an
important application of DE (or broadly speaking, evolutionary computation) in
the domain of adversarial machine learning: creating tools that can effectively
generate low-cost adversarial attacks against neural networks for evaluating
robustness.
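The DE encoding can be illustrated with a toy, dependency-free sketch: each candidate holds one pixel's coordinates and RGB values, and the classic DE mutation a + F·(b − c) searches for the pixel that maximizes the target-class confidence. Model access goes only through `predict` (black-box); the simplified DE loop and all names are illustrative, not the paper's implementation:

```python
import random

def one_pixel_attack(image, predict, target, w, h,
                     iters=30, pop_size=10, rng=random):
    # Each candidate encodes exactly one pixel: (x, y, r, g, b).
    def clip(c):
        return (int(c[0]) % w, int(c[1]) % h,
                *(min(max(int(v), 0), 255) for v in c[2:]))

    def fitness(c):
        # Black-box: only the model's output probabilities are used.
        x, y, r, g, b = clip(c)
        perturbed = [row[:] for row in image]
        perturbed[y][x] = (r, g, b)
        return predict(perturbed)[target]

    popn = [[rng.uniform(0, w), rng.uniform(0, h),
             rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255)]
            for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample(popn, 3)
            # Classic DE mutation: a + F * (b - c), with F = 0.5.
            trial = [a[j] + 0.5 * (b[j] - c[j]) for j in range(5)]
            if fitness(trial) > fitness(popn[i]):
                popn[i] = trial
    x, y, r, g, b = clip(max(popn, key=fitness))
    adv = [row[:] for row in image]
    adv[y][x] = (r, g, b)
    return adv
```

On a toy model this returns an image that differs from the input in at most one pixel while not decreasing the target-class confidence, mirroring the extremely limited perturbation budget studied in the paper.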
A Nonequilibrium Statistical Ensemble Formalism. Maxent-Nesom: Concepts, Construction, Application, Open Questions and Criticisms
We describe a particular approach for the construction of a nonequilibrium
statistical ensemble formalism for the treatment of dissipative many-body
systems. This is the so-called Nonequilibrium Statistical Operator Method,
based on the seminal and fundamental ideas set forward by Boltzmann and Gibbs.
The existing approaches can be unified under a unique variational principle,
namely, MaxEnt, which we consider here. The six basic steps that are at
the foundations of the formalism are presented and the fundamental concepts are
discussed. The associated nonlinear quantum kinetic theory and the accompanying
Statistical Thermodynamics (the Informational Statistical Thermodynamics) are
very briefly described. The corresponding response function theory for systems
away from equilibrium allows one to connect the theory with experiments, and
some examples are summarized; good agreement between theory and experimental
data is found in the cases in which the latter are presently available. We
also present an overview of some conceptual questions and associated
criticisms.
Comment: 145 pages