Evolving Parametrized Loss for Image Classification Learning on Small Datasets
This paper proposes a meta-learning approach that evolves a parametrized loss
function, called the Meta-Loss Network (MLN), for training image classifiers on
small datasets. In our approach, the MLN is embedded in the classification
learning framework as a differentiable objective function. The MLN is evolved
with an Evolution Strategy (ES) algorithm into an optimized loss function, such
that a classifier trained to minimize this loss achieves good generalization. A
classifier learns on a small training dataset by minimizing the MLN with
Stochastic Gradient Descent (SGD), and the MLN is then evolved according to the
precision of the updated classifier on a large validation dataset. To evaluate
our approach, the MLN is trained on a large number of small-sample learning
tasks sampled from FashionMNIST and tested on validation tasks sampled from
FashionMNIST and CIFAR10. Experimental results demonstrate that the MLN
effectively improves generalization compared to the classical cross-entropy and
mean squared error losses.
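The outer ES loop and inner SGD loop described above can be sketched on a toy problem. The parametrized loss below (a weighted blend of cross-entropy and squared error) is a hypothetical stand-in for the MLN, and all names, sizes, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task: features in R^2, label = sign of the first coordinate.
def make_task(n):
    X = rng.normal(size=(n, 2))
    return X, (X[:, 0] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical parametrized loss: a phi-weighted blend of cross-entropy
# and squared error (a toy stand-in for the paper's MLN).
def meta_loss_grad(p, y, phi):
    ce_grad = (p - y) / np.clip(p * (1 - p), 1e-6, None)
    mse_grad = 2 * (p - y)
    return phi[0] * ce_grad + phi[1] * mse_grad

def inner_train(phi, X, y, steps=50, lr=0.5):
    # Inner loop: SGD on a logistic model, minimizing the parametrized loss.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        # chain rule through the sigmoid: dL/dw = X^T (dL/dp * p(1-p)) / n
        w -= lr * X.T @ (meta_loss_grad(p, y, phi) * p * (1 - p)) / len(y)
    return w

def val_accuracy(phi):
    Xtr, ytr = make_task(32)    # small training set
    Xva, yva = make_task(500)   # large validation set
    w = inner_train(phi, Xtr, ytr)
    return np.mean((sigmoid(Xva @ w) > 0.5) == yva)

# Outer loop: ES perturbs phi and moves it toward higher validation accuracy.
phi = np.array([1.0, 0.0])
sigma, outer_lr, pop = 0.1, 0.05, 8
for _ in range(10):
    eps = rng.normal(size=(pop, phi.size))
    scores = np.array([val_accuracy(phi + sigma * e) for e in eps])
    phi = phi + outer_lr / (pop * sigma) * (eps.T @ (scores - scores.mean()))

print("evolved phi:", phi)
```

The key structural point is that the validation accuracy, not the loss value itself, drives the outer update, so the evolved loss is selected for generalization.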
Resource-efficient boosting method for IoT security monitoring
Machine learning (ML) methods are widely proposed for security monitoring of the Internet of Things (IoT). However, these methods can be computationally expensive for resource-constrained IoT devices. This paper proposes an optimized, resource-efficient ML method that can detect various attacks on IoT devices. It utilizes the Light Gradient Boosting Machine (LGBM). The performance of this approach was evaluated against four realistic IoT benchmark datasets. Experimental results show that the proposed method can effectively detect attacks on IoT devices with limited resources, and that it outperforms state-of-the-art techniques.
Learning Symbolic Model-Agnostic Loss Functions via Meta-Learning
In this paper, we build upon the emerging topic of loss function learning,
which aims to learn loss functions that significantly improve the performance
of the models trained under them. Specifically, we propose a new meta-learning
framework for learning model-agnostic loss functions via a hybrid
neuro-symbolic search approach. The framework first uses evolution-based
methods to search the space of primitive mathematical operations for a set of
symbolic loss functions. The learned loss functions are then parameterized and
optimized via an end-to-end gradient-based training procedure. The versatility
of the proposed framework is empirically validated on a diverse set of
supervised learning tasks. Results show that the meta-learned loss functions
discovered by the proposed method outperform both the cross-entropy loss and
state-of-the-art loss function learning methods across a diverse range of
neural network architectures and datasets.
Towards Run-time Efficient Hierarchical Reinforcement Learning
This paper investigates a novel method combining Scalable Evolution
Strategies (S-ES) and Hierarchical Reinforcement Learning (HRL). S-ES, so
named for its excellent scalability, was popularised by demonstrations of
performance comparable to state-of-the-art policy gradient methods. However,
S-ES has not been tested in conjunction with HRL methods, which provide
temporal abstraction and thus allow agents to tackle more challenging
problems. We introduce a novel method merging S-ES and HRL that yields a
highly scalable and compute-time-efficient algorithm. We demonstrate that the
proposed method benefits from S-ES's scalability and indifference to delayed
rewards. This results in our main contribution: significantly higher learning
speed and competitive performance compared to gradient-based HRL methods
across a range of tasks.
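The core mechanics of scalable ES applied to a hierarchical policy can be sketched as follows. The two-level policy (a gate selecting between two option sub-policies), the sparse terminal reward, and every hyperparameter are illustrative assumptions, not the paper's algorithm or tasks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hierarchical policy: theta = [gate | option_0 | option_1]. The gate
# picks an option; the chosen option's parameters steer the agent. Reward
# arrives only at the end of the episode, the delayed-reward setting
# where ES's indifference to reward timing helps.
GOAL = np.array([1.0, -1.0])

def episode_return(theta):
    gate, opts = theta[0], theta[1:].reshape(2, 2)
    option = opts[0] if gate < 0 else opts[1]   # high-level choice
    pos = np.zeros(2)
    for _ in range(10):                          # low-level steps
        pos += 0.1 * option                      # option acts as a skill
    return -np.linalg.norm(pos - GOAL)           # sparse terminal reward

# Scalable ES: mirrored sampling + rank-based fitness shaping. Each
# perturbation's rollout is independent, so they parallelise trivially
# across workers -- the source of S-ES's scalability.
theta = np.zeros(5)
sigma, lr, half_pop = 0.1, 0.05, 16
for _ in range(200):
    eps = rng.normal(size=(half_pop, theta.size))
    eps = np.concatenate([eps, -eps])            # antithetic pairs
    returns = np.array([episode_return(theta + sigma * e) for e in eps])
    ranks = returns.argsort().argsort() / (len(returns) - 1) - 0.5
    theta = theta + lr / (len(eps) * sigma) * (eps.T @ ranks)

print("final return:", episode_return(theta))
```

Because the update uses only episode returns, the same loop trains the gate and the options jointly without backpropagating through time, which is what makes pairing ES with temporally abstract policies straightforward.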