Predicting the expected behavior of agents that learn about agents: the CLRI framework
We describe a framework and equations used to model and predict the behavior
of multi-agent systems (MASs) with learning agents. A difference equation is
used for calculating the progression of an agent's error in its decision
function, thereby telling us how the agent is expected to fare in the MAS. The
equation relies on parameters which capture the agent's learning abilities,
such as its change rate, learning rate and retention rate, as well as relevant
aspects of the MAS such as the impact that agents have on each other. We
validate the framework with experimental results using reinforcement learning
agents in a market system, as well as with other experimental results gathered
from the AI literature. Finally, we use PAC-theory to show how to calculate
bounds on the values of the learning parameters.
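The role of the change, learning, and retention rates in such a difference equation can be illustrated with a toy recurrence. This is a simplified, hypothetical stand-in for the paper's equation, not the exact CLRI formulation:

```python
# Toy sketch of iterating a difference equation for an agent's expected
# decision-function error. NOTE: this recurrence is a simplified,
# hypothetical stand-in, not the paper's exact CLRI equation.

def error_progression(e0, change_rate, learning_rate, retention_rate,
                      volatility, steps):
    """Iterate the toy error recurrence for `steps` time steps."""
    errors = [e0]
    e = e0
    for _ in range(steps):
        # With probability `change_rate` the agent revises its decision
        # function and learning shrinks its error; otherwise the error
        # carries over. Imperfect retention mixes in `volatility`, the
        # error an agent would incur if it retained nothing.
        revised = change_rate * (1.0 - learning_rate) * e
        carried = (1.0 - change_rate) * e
        e = retention_rate * (revised + carried) \
            + (1.0 - retention_rate) * volatility
        e = min(1.0, max(0.0, e))
        errors.append(e)
    return errors

print(error_progression(e0=0.9, change_rate=0.5, learning_rate=0.6,
                        retention_rate=0.95, volatility=0.2, steps=5))
```

Iterating the recurrence shows the error decaying toward a fixed point set by the agent's learning abilities and the environment's volatility, which is the kind of prediction the framework is built to make.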
Ensemble learning with GSGP
Dissertation presented as a partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics.
The purpose of this thesis is to conduct comparative research between Genetic Programming
(GP) and Geometric Semantic Genetic Programming (GSGP), with different
initialization (RHH and EDDA) and selection (Tournament and Epsilon-Lexicase)
strategies, in the context of a model-ensemble in order to solve regression optimization
problems.
A model-ensemble is a combination of base learners used in different ways to solve
a problem. The most common ensemble is the mean, where the base learners are combined
in a linear fashion, all having the same weights. However, more sophisticated
ensembles can be inferred, providing higher generalization ability.
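The equal-weight mean ensemble described above can be sketched in a few lines. The lambda "learners" below are placeholders for trained base models, not models from the thesis:

```python
import numpy as np

# Minimal sketch of the "mean" ensemble: every base learner gets the
# same weight and their predictions are combined linearly (averaged).
# The lambdas are stand-ins for trained regressors.
base_learners = [
    lambda x: 2.0 * x,        # stand-in for e.g. a linear model
    lambda x: 2.0 * x + 0.3,  # stand-in for e.g. a random forest
    lambda x: 2.0 * x - 0.3,  # stand-in for e.g. an SVM
]

def mean_ensemble(learners, x):
    """Equal-weight linear combination of the base learners' predictions."""
    preds = np.array([f(x) for f in learners])
    return preds.mean(axis=0)

x = np.array([1.0, 2.0])
print(mean_ensemble(base_learners, x))  # the offsets cancel: exactly 2*x
```

A more sophisticated ensemble replaces the fixed equal weights with a learned combination, which is the role GP and GSGP play in the thesis.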
GSGP is a variant of GP using different genetic operators. No previous research has
been conducted to see if GSGP can perform better than GP in model-ensemble learning.
The evolutionary process of GP and GSGP should allow us to learn about the strength
of each of those base models to provide a more accurate and robust solution. The
base-models used for this analysis were Linear Regression, Random Forest, Support
Vector Machine and Multi-Layer Perceptron. This analysis has been conducted using 7
different optimization problems and 4 real-world datasets. The results obtained with
GSGP are statistically significantly better than GP for most cases.
Data and Computation Efficient Meta-Learning
In order to make predictions with high accuracy, conventional deep learning systems require large training datasets consisting of thousands or millions of examples and long training times measured in hours or days, consuming high levels of electricity with a negative impact on our environment. It is desirable to have machine learning systems that can emulate human behavior in that they can quickly learn new concepts from only a few examples. This is especially true when we need to quickly customize or personalize machine learning models to specific scenarios where it would be impractical to acquire a large amount of training data and where a mobile device is the means of computation. We define a data-efficient machine learning system as one that can learn a new concept from only a few examples (or shots), and a computation-efficient machine learning system as one that can learn a new concept rapidly, without retraining, on an everyday computing device such as a smartphone.
In this work, we design, develop, analyze, and extend the theory of machine learning systems that are both data efficient and computation efficient. We present systems that are trained on multiple tasks such that they "learn how to learn" to solve new tasks from only a few examples. These systems can efficiently solve new, unseen tasks drawn from a broad range of data distributions, in both the low- and high-data regimes, without the need for costly retraining. Adapting to a new task requires only a forward pass of the example task data through the trained network, making the learning of new tasks possible on mobile devices. In particular, we focus on few-shot image classification systems, i.e., machine learning systems that can distinguish between numerous classes of objects depicted in digital images given only a few examples of each class of object to learn from.
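The few-shot setting referred to above is usually organized into N-way, k-shot episodes; a small illustrative sampler makes the structure concrete. The synthetic data and function names here are assumptions for illustration, not from the dissertation:

```python
import numpy as np

# Illustrative N-way, k-shot episode sampler: pick N classes, take k
# labelled "support" examples per class to adapt from, and a few
# "query" examples per class to evaluate on.
rng = np.random.default_rng(0)

def sample_episode(data_by_class, n_way=5, k_shot=1, n_query=3):
    classes = rng.choice(len(data_by_class), size=n_way, replace=False)
    support, query = [], []
    for label, c in enumerate(classes):
        idx = rng.permutation(len(data_by_class[c]))
        support += [(data_by_class[c][i], label) for i in idx[:k_shot]]
        query += [(data_by_class[c][i], label)
                  for i in idx[k_shot:k_shot + n_query]]
    return support, query

# 20 synthetic "classes", 10 examples each, 8-dimensional features.
data = [rng.normal(size=(10, 8)) for _ in range(20)]
support, query = sample_episode(data)
print(len(support), len(query))  # 5-way 1-shot: 5 support, 15 query
```

A data- and computation-efficient system must adapt from the tiny support set and classify the query set without any retraining.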
To accomplish this, we first develop ML-PIP, a general framework for Meta-Learning approximate Probabilistic Inference for Prediction. ML-PIP extends existing probabilistic interpretations of meta-learning to cover a broad class of methods. We then introduce Versa, an instance of the framework employing a fast, flexible, and versatile amortization network that takes few-shot learning datasets as inputs, with arbitrary numbers of training examples, and outputs a distribution over task-specific parameters in a single forward pass of the network. We evaluate Versa on benchmark datasets, where, at the time, the method achieved state-of-the-art results compared to meta-learning approaches using similar training regimes and feature extractor capacity.
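The amortization idea (a single forward pass from a few-shot dataset to a distribution over task-specific parameters) can be sketched as follows. The mean-pooling and the two linear maps below are untrained random stand-ins, not Versa's actual network:

```python
import numpy as np

# Hedged sketch of amortized inference in the spirit of Versa: one
# forward pass maps each class's support features to a *distribution*
# (mean and log-variance) over that class's classifier weights.
rng = np.random.default_rng(1)
D = 16  # feature dimension (assumed)

W_mu = rng.normal(size=(D, D))      # random stand-in for trained weights
W_logvar = rng.normal(size=(D, D))  # random stand-in for trained weights

def amortize(class_features):
    """Pool a class's support features, emit weight mean and log-variance."""
    pooled = class_features.mean(axis=0)      # permutation-invariant pooling
    return W_mu @ pooled, W_logvar @ pooled   # (mu, log sigma^2)

# A 3-way, 5-shot task: 3 classes, 5 support examples each.
task = [rng.normal(size=(5, D)) for _ in range(3)]
params = [amortize(c) for c in task]  # one weight distribution per class
print(len(params), params[0][0].shape)
```

Because pooling handles any number of support examples, the same pass works for arbitrary shot counts, which is the property the abstract highlights.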
Next, we build on Versa and add a second amortized network that adapts key parameters in the feature extractor to the current task. To accomplish this, we introduce CNAPs, an approach to multi-task classification based on conditional neural processes. We demonstrate that, at the time, CNAPs achieved state-of-the-art results on the challenging Meta-Dataset benchmark, indicating high-quality transfer learning. Timing experiments reveal that CNAPs is computationally efficient when adapting to an unseen task, as it involves no gradient backpropagation. We show that trained models are immediately deployable to continual learning and active learning, where they can outperform existing approaches that do not leverage transfer learning.
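Feed-forward adaptation of feature-extractor parameters can be illustrated with FiLM-style per-channel conditioning. All weights below are random stand-ins, and the exact CNAPs architecture is not reproduced here:

```python
import numpy as np

# Hedged sketch of backprop-free task adaptation: a small adaptation
# network maps a pooled task embedding to per-channel scale/shift
# (FiLM) parameters for the feature extractor, so adapting to a new
# task is just a forward pass.
rng = np.random.default_rng(0)
C = 8  # feature channels (assumed)

W_gamma = rng.normal(size=(C, C))  # random stand-in for trained weights
W_beta = rng.normal(size=(C, C))   # random stand-in for trained weights

def task_embedding(support_features):
    # Mean-pool the support set into a fixed-size task representation.
    return support_features.mean(axis=0)

def adapt(features, support_features):
    z = task_embedding(support_features)
    gamma = 1.0 + 0.1 * (W_gamma @ z)  # scales near 1 keep features stable
    beta = 0.1 * (W_beta @ z)
    return gamma * features + beta     # channel-wise scale and shift

support = rng.normal(size=(5, C))   # 5 support examples
query = rng.normal(size=(3, C))     # 3 query examples
print(adapt(query, support).shape)
```

No gradient step appears anywhere in `adapt`, which is what makes this style of adaptation fast enough for on-device use.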
Finally, we investigate the effects of different methods of batch normalization on meta-learning systems. Batch normalization has become an essential component of deep learning systems, as it significantly accelerates the training of neural networks by allowing the use of higher learning rates and decreasing the sensitivity to network initialization. We show that the hierarchical nature of the meta-learning setting presents several challenges that can render conventional batch normalization ineffective. We evaluate a range of approaches to batch normalization for few-shot learning scenarios and develop a novel approach that we call TaskNorm. Experiments demonstrate that the choice of batch normalization has a dramatic effect on both classification accuracy and training time for both gradient-based and gradient-free meta-learning approaches, and that TaskNorm consistently improves performance.
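The idea of computing normalization statistics at the task level can be illustrated with a small sketch. The blending schedule and the `alpha_scale` parameter below are assumptions for illustration, not the exact TaskNorm formulation:

```python
import numpy as np

# Hedged sketch of task-level normalization: instead of relying only on
# the current (possibly tiny) batch's statistics, blend moments computed
# from the task's support set with the query batch's own moments, with
# the support set's weight growing with its size.
def task_norm(query, support, alpha_scale=0.05, eps=1e-5):
    # Sigmoid-style schedule on support-set size; `alpha_scale` is an
    # assumed hyperparameter, not a value from the dissertation.
    alpha = 1.0 / (1.0 + np.exp(-alpha_scale * len(support)))
    s_mean, s_var = support.mean(axis=0), support.var(axis=0)
    q_mean, q_var = query.mean(axis=0), query.var(axis=0)
    mean = alpha * s_mean + (1.0 - alpha) * q_mean
    var = alpha * s_var + (1.0 - alpha) * q_var
    return (query - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
support = rng.normal(loc=2.0, size=(25, 4))
query = rng.normal(loc=2.0, size=(10, 4))
print(task_norm(query, support).shape)
```

Conventional batch normalization would pool statistics across the episode in ways that leak information between support and query sets; computing them per task, as above, is the kind of alternative the abstract evaluates.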