Learning from distributed data sources using random vector functional-link networks
One of the main characteristics of many real-world big data scenarios is their distributed nature. In a machine learning context, distributed data, together with the requirements of preserving privacy and scaling up to large networks, brings the challenge of designing fully decentralized training protocols. In this paper, we explore the problem of distributed learning when the features of every pattern are available across multiple agents (as happens, for example, in a distributed database scenario). We propose an algorithm for a particular class of neural networks, known as Random Vector Functional-Link (RVFL) networks, based on the Alternating Direction Method of Multipliers optimization algorithm. The proposed algorithm learns an RVFL network from multiple distributed data sources while restricting communication to a single operation: computing a distributed average. Our experimental simulations show that the algorithm achieves a generalization accuracy comparable to a fully centralized solution, while at the same time being extremely efficient.
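As an illustration of the communication model described above, the sketch below shows the simplest consensus-by-averaging variant in Python/NumPy: every agent fits the RVFL readout on its own data, and the only quantities exchanged are averaged over the network through a doubly stochastic mixing step. The mixing matrix, the regularization value, and the assumption that each agent holds complete examples are illustrative choices for the sketch, not the paper's exact ADMM-based protocol.

```python
import numpy as np

def rvfl_hidden(X, W, b):
    """RVFL expansion: fixed random weights W, b; only the readout is trained."""
    return np.tanh(X @ W + b)

def local_readout(X, y, W, b, lam=1e-2):
    """Each agent fits output weights on its own data via ridge regression."""
    H = rvfl_hidden(X, W, b)
    return np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ y)

def distributed_average(values, mixing, steps=50):
    """Decentralized averaging: agents repeatedly mix with their neighbours
    (doubly stochastic `mixing` matrix) until all copies converge to the mean."""
    V = np.stack(values)                       # one row per agent
    for _ in range(steps):
        V = mixing @ V
    return V                                   # every row ~ the network average

# Toy run: 3 agents, shared random expansion, simple symmetric mixing matrix.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(5, 50)), rng.normal(size=50)
agents = [(rng.normal(size=(40, 5)), rng.normal(size=40)) for _ in range(3)]
mixing = np.array([[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]])

betas = [local_readout(X, y, W, b) for X, y in agents]
beta_consensus = distributed_average(betas, mixing)[0]   # shared output weights
```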
Widely Linear Kernels for Complex-Valued Kernel Activation Functions
Complex-valued neural networks (CVNNs) have been shown to be powerful
nonlinear approximators when the input data can be properly modeled in the
complex domain. One of the major challenges in scaling up CVNNs in practice is
the design of complex activation functions. Recently, we proposed a novel
framework for learning these activation functions neuron-wise in a
data-dependent fashion, based on a cheap one-dimensional kernel expansion and
the idea of kernel activation functions (KAFs). In this paper we argue that,
despite its flexibility, this framework is still limited in the class of
functions that can be modeled in the complex domain. We leverage the idea of
widely linear complex kernels to extend the formulation, allowing for richer
expressiveness without increasing the number of adaptable parameters. We
test the resulting model on a set of complex-valued image classification
benchmarks. Experimental results show that the resulting CVNNs can achieve
higher accuracy while at the same time converging faster.
Comment: Accepted at ICASSP 2019
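As a rough illustration of the kernel-activation-function idea mentioned above, the sketch below implements a complex-valued KAF in PyTorch whose output mixes kernel expansions of both the input and its conjugate, i.e. a widely linear combination. The dictionary placement, the Gaussian kernel choice, and the parameter shapes are assumptions made for the example, not the exact formulation of the paper.

```python
import torch

class WidelyLinearKAF(torch.nn.Module):
    """Sketch of a complex kernel activation function over a fixed dictionary.

    The widely linear part mixes kernel evaluations on both z and conj(z),
    each with its own complex mixing coefficients (illustrative assumption)."""
    def __init__(self, num_features, dict_size=20, gamma=1.0):
        super().__init__()
        d = torch.linspace(-2.0, 2.0, dict_size)
        # Fixed dictionary on a diagonal grid in the complex plane (illustrative).
        self.register_buffer("dictionary", torch.complex(d, d))
        self.alpha = torch.nn.Parameter(
            0.1 * torch.randn(num_features, dict_size, dtype=torch.cfloat))
        self.beta = torch.nn.Parameter(
            0.1 * torch.randn(num_features, dict_size, dtype=torch.cfloat))
        self.gamma = gamma

    def kernel(self, z):
        # Gaussian kernel on the complex plane: exp(-gamma * |z - d|^2).
        diff = z.unsqueeze(-1) - self.dictionary        # (..., features, D)
        return torch.exp(-self.gamma * diff.abs() ** 2).to(torch.cfloat)

    def forward(self, z):
        # Widely linear mixture: one expansion on z, one on its conjugate.
        return (self.kernel(z) * self.alpha).sum(-1) + \
               (self.kernel(z.conj()) * self.beta).sum(-1)
```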
Pixle: a fast and effective black-box attack based on rearranging pixels
Recent research has found that neural networks are vulnerable to several types of adversarial attacks, in which input samples are modified so that the model misclassifies the resulting adversarial sample. In this paper we focus on black-box adversarial attacks, which can be performed without knowing the inner structure of the attacked model or its training procedure, and we propose a novel attack that successfully attacks a high percentage of samples by rearranging a small number of pixels within the attacked image. We demonstrate that our attack works on a large number of datasets and models, that it requires a small number of iterations, and that the distance between the original sample and the adversarial one is negligible to the human eye.
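To make the black-box setting concrete, the following toy sketch attacks a model through its prediction scores only, repeatedly swapping small patches of existing pixels and keeping a swap when it lowers the confidence in the true class. The greedy random search, the patch size, and the stopping rule are simplifying assumptions, not the actual Pixle algorithm.

```python
import numpy as np

def pixle_like_attack(predict, image, true_label, iters=200, patch=1, rng=None):
    """Toy black-box attack in the spirit of Pixle: it only queries `predict`
    (no gradients) and perturbs the image by moving existing pixels around."""
    rng = rng or np.random.default_rng(0)
    adv = image.copy()
    h, w = adv.shape[:2]
    best = predict(adv)[true_label]           # confidence in the correct class
    for _ in range(iters):
        ys, xs = rng.integers(0, h - patch, 2), rng.integers(0, w - patch, 2)
        cand = adv.copy()
        # Swap two small patches, so only existing pixel values are reused.
        tmp = cand[ys[0]:ys[0]+patch, xs[0]:xs[0]+patch].copy()
        cand[ys[0]:ys[0]+patch, xs[0]:xs[0]+patch] = \
            cand[ys[1]:ys[1]+patch, xs[1]:xs[1]+patch]
        cand[ys[1]:ys[1]+patch, xs[1]:xs[1]+patch] = tmp
        score = predict(cand)[true_label]
        if score < best:                      # keep the swap if it hurts the model
            adv, best = cand, score
            if np.argmax(predict(adv)) != true_label:
                break                         # misclassification achieved
    return adv
```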
Bayesian Neural Networks With Maximum Mean Discrepancy Regularization
Bayesian Neural Networks (BNNs) are trained to optimize an entire
distribution over their weights instead of a single set, having significant
advantages in terms of, e.g., interpretability, multi-task learning, and
calibration. Because of the intractability of the resulting optimization
problem, most BNNs are either sampled through Monte Carlo methods, or trained
by minimizing a suitable Evidence Lower BOund (ELBO) on a variational
approximation. In this paper, we propose a variant of the latter, wherein we
replace the Kullback-Leibler divergence in the ELBO term with a Maximum Mean
Discrepancy (MMD) estimator, inspired by recent work in variational inference.
After motivating our proposal based on the properties of the MMD term, we
proceed to show a number of empirical advantages of the proposed formulation
over the state-of-the-art. In particular, our BNNs achieve higher accuracy on
multiple benchmarks, including several image classification tasks. In addition,
they are more robust to the selection of a prior over the weights, and they are
better calibrated. As a second contribution, we provide a new formulation for
estimating the uncertainty of a given prediction, and show that it is more
robust against adversarial attacks and the injection of noise on the inputs
than more classical criteria such as the differential entropy.
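A minimal sketch of the regularization idea, assuming that weight samples drawn from the variational posterior and from the prior are available as plain tensors: the usual KL term of the ELBO is replaced by a kernel-based MMD estimate between the two sample sets. The RBF kernel, bandwidth, and scaling factor below are illustrative choices rather than the paper's exact configuration.

```python
import torch

def gaussian_kernel(x, y, bandwidth=1.0):
    """RBF kernel matrix between two sets of weight samples (rows = samples)."""
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2 * bandwidth ** 2))

def mmd2(posterior_samples, prior_samples, bandwidth=1.0):
    """Biased estimator of the squared Maximum Mean Discrepancy."""
    k_pp = gaussian_kernel(posterior_samples, posterior_samples, bandwidth)
    k_qq = gaussian_kernel(prior_samples, prior_samples, bandwidth)
    k_pq = gaussian_kernel(posterior_samples, prior_samples, bandwidth)
    return k_pp.mean() + k_qq.mean() - 2 * k_pq.mean()

def mmd_elbo_loss(nll, posterior_samples, prior_samples, scale=1.0):
    """ELBO-style objective with the KL term swapped for an MMD penalty."""
    return nll + scale * mmd2(posterior_samples, prior_samples)
```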
Continual Learning with Invertible Generative Models
Catastrophic forgetting (CF) happens whenever a neural network overwrites
past knowledge while being trained on new tasks. Common techniques to handle CF
include regularization of the weights (using, e.g., their importance on past
tasks), and rehearsal strategies, where the network is constantly re-trained on
past data. Generative models have also been applied for the latter, in order to
provide an endless source of data. In this paper, we propose a novel method that
combines the strengths of regularization and generative rehearsal
approaches. Our generative model consists of a normalizing flow (NF), a
probabilistic and invertible neural network, trained on the internal embeddings
of the network. By keeping a single NF throughout the training process, we show
that our memory overhead remains constant. In addition, exploiting the
invertibility of the NF, we propose a simple approach to regularize the
network's embeddings with respect to past tasks. We show that our method
performs favorably with respect to state-of-the-art approaches in the
literature, with bounded computational power and memory overheads.
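The sketch below illustrates the two ingredients described above under some assumptions: a single normalizing flow (assumed to expose sample() and log_prob(), as in common NF libraries) is used both to generate pseudo-embeddings for rehearsal and to regularize the embeddings of the current task. The loss weighting and the pseudo-labelling step are illustrative, not the paper's exact method.

```python
import torch

def rehearsal_batch(flow, classifier_head, batch_size=64):
    """Pseudo-rehearsal: sample embeddings (not raw data) from the single
    normalizing flow kept across tasks, and relabel them with the frozen head."""
    with torch.no_grad():
        z = flow.sample(batch_size)                  # generated past-task embeddings
        pseudo_targets = classifier_head(z).argmax(dim=-1)
    return z, pseudo_targets

def continual_loss(task_loss, flow, current_embeddings, reg_weight=0.1):
    """Task loss plus a penalty keeping new embeddings likely under the flow
    fitted to previous tasks (a simplified stand-in for the paper's term)."""
    nf_penalty = -flow.log_prob(current_embeddings).mean()
    return task_loss + reg_weight * nf_penalty
```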