Spatio-temporal learning with the online finite and infinite echo-state Gaussian processes
Successful biological systems adapt to change. In this paper, we are principally concerned with adaptive systems that operate in environments where data arrives sequentially and is multivariate in nature, for example, sensory streams in robotic systems. We contribute two reservoir-inspired methods: 1) the online echo-state Gaussian process (OESGP) and 2) its infinite variant, the online infinite echo-state Gaussian process (OIESGP). Both algorithms are iterative fixed-budget methods that learn from noisy time series. In particular, the OESGP combines the echo-state network with Bayesian online learning for Gaussian processes. Extending this to infinite reservoirs yields the OIESGP, which uses a novel recursive kernel with automatic relevance determination that enables spatial and temporal feature weighting. When fused with stochastic natural gradient descent, the kernel hyperparameters are iteratively adapted to better model the target system. Furthermore, insights into the underlying system can be gleaned from inspection of the resulting hyperparameters. Experiments on noisy benchmark problems (one-step prediction and system identification) demonstrate that our methods yield high accuracies relative to state-of-the-art methods and standard kernels with sliding windows, particularly on problems with irrelevant dimensions. In addition, we describe two case studies in robotic learning-by-demonstration involving the Nao humanoid robot and the Assistive Robot Transport for Youngsters (ARTY) smart wheelchair.
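The echo-state construction at the heart of the OESGP can be sketched as follows. This is an illustrative finite-reservoir state update only, with random weights and hypothetical sizes; it does not implement the paper's Bayesian online GP layer or the recursive kernel of the OIESGP:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration
n_in, n_res = 1, 50

# Input and recurrent weights; the recurrent matrix is rescaled so its
# spectral radius is below 1 (the usual echo-state property heuristic)
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def reservoir_step(x, u):
    """One echo-state update: x' = tanh(W x + W_in u)."""
    return np.tanh(W @ x + W_in @ u)

# Drive the reservoir with a noisy sine; the resulting state vectors are
# the (finite) features a downstream GP regressor would consume
x = np.zeros(n_res)
states = []
for t in range(200):
    u = np.array([np.sin(0.1 * t)]) + 0.01 * rng.standard_normal(1)
    x = reservoir_step(x, u)
    states.append(x.copy())

states = np.asarray(states)
print(states.shape)  # (200, 50)
```

Because the reservoir state at time t depends on the whole input history, regressing on it gives temporal context without an explicit sliding window, which is the contrast the abstract draws against standard kernels with sliding windows.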
Combined Reinforcement Learning via Abstract Representations
In the quest for efficient and robust reinforcement learning methods, both
model-free and model-based approaches offer advantages. In this paper we
propose a new way of explicitly bridging both approaches via a shared
low-dimensional learned encoding of the environment, meant to capture
summarizing abstractions. We show that the modularity brought by this approach
leads to good generalization while being computationally efficient, with
planning happening in a smaller latent state space. In addition, this approach
recovers a sufficient low-dimensional representation of the environment, which
opens up new strategies for interpretable AI, exploration and transfer
learning.
Comment: Accepted to the Thirty-Third AAAI Conference on Artificial Intelligence, 2019.
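The core idea of planning in a learned low-dimensional abstraction can be illustrated as below. Every component here (a linear encoder, per-action latent transition and reward models, all random rather than learned, and all sizes and names) is a hypothetical stand-in, not the architecture from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: high-dimensional observations, small latent space
obs_dim, latent_dim, n_actions = 64, 3, 4

# Stand-ins for learned components; in the approach described above these
# would be trained jointly from environment transitions
W_enc = rng.standard_normal((latent_dim, obs_dim)) / np.sqrt(obs_dim)
W_trans = rng.standard_normal((n_actions, latent_dim, latent_dim)) * 0.1
w_reward = rng.standard_normal((n_actions, latent_dim)) * 0.1

def encode(obs):
    """Map a raw observation to the shared low-dimensional abstraction."""
    return np.tanh(W_enc @ obs)

def plan_one_step(obs):
    """Greedy one-step planning carried out entirely in latent space:
    score each action by the reward predicted at its successor latent."""
    z = encode(obs)
    scores = [w_reward[a] @ (z + W_trans[a] @ z) for a in range(n_actions)]
    return int(np.argmax(scores))

obs = rng.standard_normal(obs_dim)
action = plan_one_step(obs)
print(action)
```

The computational point made in the abstract is visible even in this toy: the planning loop touches only latent_dim-sized vectors, never the raw observation.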
EC3: Combining Clustering and Classification for Ensemble Learning
Classification and clustering algorithms have been proved to be successful
individually in different contexts. Both of them have their own advantages and
limitations. For instance, although classification algorithms are more powerful
than clustering methods in predicting class labels of objects, they do not
perform well when there is a lack of sufficient manually labeled reliable data.
On the other hand, although clustering algorithms do not produce label
information for objects, they provide supplementary constraints (e.g., if two
objects are clustered together, it is more likely that the same label is
assigned to both of them) that one can leverage for label prediction of a set
of unknown objects. Therefore, systematic utilization of both these types of
algorithms together can lead to better prediction performance. In this paper,
we propose a novel algorithm, called EC3, which merges classification and
clustering together in order to support both binary and multi-class
classification. EC3 is based on a principled combination of multiple
classification and multiple clustering methods using an optimization function.
We theoretically show the convexity and optimality of the problem and solve it
by block coordinate descent method. We additionally propose iEC3, a variant of
EC3 that handles imbalanced training data. We perform an extensive experimental
analysis by comparing EC3 and iEC3 with 14 baseline methods (7 well-known
standalone classifiers, 5 ensemble classifiers, and 2 existing methods that
merge classification and clustering) on 13 standard benchmark datasets. We show
that our methods outperform the other baselines on every single dataset, achieving
up to 10% higher AUC. Moreover, our methods are faster (1.21 times faster than
the best baseline) and more resilient to noise and class imbalance than the best
baseline method.
Comment: 14 pages, 7 figures, 11 tables.
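The co-clustering constraint described above — objects frequently clustered together should tend to receive the same label — can be illustrated with a simple label-propagation-style smoothing. This is a hedged sketch of the general idea only, not EC3's convex objective or its block coordinate descent solver, and the probability and co-association matrices are made-up toy values:

```python
import numpy as np

# Toy example: 5 objects, 2 classes.
# Averaged class-probability estimates from an ensemble of classifiers.
P = np.array([[0.9, 0.1],
              [0.6, 0.4],
              [0.4, 0.6],
              [0.2, 0.8],
              [0.55, 0.45]])

# Co-association matrix from an ensemble of clusterings: S[i, j] is the
# fraction of clusterings that place objects i and j in the same cluster.
S = np.array([[1.0, 0.8, 0.1, 0.0, 0.7],
              [0.8, 1.0, 0.2, 0.1, 0.9],
              [0.1, 0.2, 1.0, 0.9, 0.2],
              [0.0, 0.1, 0.9, 1.0, 0.1],
              [0.7, 0.9, 0.2, 0.1, 1.0]])

def combine(P, S, alpha=0.5, n_iter=20):
    """Blend classifier evidence with cluster structure: repeatedly pull
    each object's label distribution toward those of its co-cluster peers."""
    W = S / S.sum(axis=1, keepdims=True)   # row-normalized similarity
    F = P.copy()
    for _ in range(n_iter):
        F = alpha * P + (1 - alpha) * (W @ F)
    return F.argmax(axis=1)

labels = combine(P, S)
print(labels)
```

In this toy, object 4's weak classifier evidence (0.55 vs 0.45) is reinforced by its strong co-association with the confidently class-0 objects 0 and 1, which is exactly the kind of supplementary constraint the abstract attributes to clustering.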
Machine learning to analyze single-case data: a proof of concept
Visual analysis is the most commonly used method for interpreting data from single-case designs, but levels of interrater agreement remain a concern. Although structured
aids to visual analysis such as the dual-criteria (DC) method may increase interrater
agreement, the accuracy of the analyses may still benefit from improvements. Thus, the
purpose of our study was to (a) examine correspondence between visual analysis and
models derived from different machine learning algorithms, and (b) compare the
accuracy, Type I error rate, and power of each of our models with those produced by
the DC method. We trained our models on a previously published dataset and then
conducted analyses on both nonsimulated and simulated graphs. All our models
derived from machine learning algorithms matched the interpretation of the visual
analysts more frequently than the DC method. Furthermore, the machine learning
algorithms outperformed the DC method on accuracy, Type I error rate, and power.
Our results support the somewhat unorthodox proposition that behavior analysts may
use machine learning algorithms to supplement their visual analysis of single-case data,
but more research is needed to examine the potential benefits and drawbacks of such an
approach.
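For context, the dual-criteria (DC) method compared against above fits a mean line and an OLS trend line to the baseline phase, projects both into the treatment phase, and asks whether the number of treatment points exceeding both lines passes a binomial criterion. A minimal sketch follows; computing the criterion directly from a Binomial(n, 0.5) tail at alpha = .05, rather than from the published table, is an assumption of this sketch, as are the toy data:

```python
import math
import numpy as np

def dual_criteria(baseline, treatment, alpha=0.05):
    """Sketch of the dual-criteria (DC) structured aid to visual analysis."""
    baseline = np.asarray(baseline, dtype=float)
    treatment = np.asarray(treatment, dtype=float)

    # Mean line and OLS trend line fitted to the baseline phase
    t_base = np.arange(len(baseline))
    mean_line = baseline.mean()
    slope, intercept = np.polyfit(t_base, baseline, 1)

    # Project the trend line into the treatment phase
    t_treat = np.arange(len(baseline), len(baseline) + len(treatment))
    trend_line = slope * t_treat + intercept

    # Count treatment points falling above BOTH projected lines
    above_both = int(np.sum((treatment > mean_line) & (treatment > trend_line)))

    # Smallest count k with P(X >= k) < alpha under Binomial(n, 0.5)
    n = len(treatment)
    tail = lambda k: sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n
    criterion = next(k for k in range(n + 1) if tail(k) < alpha)

    return above_both, criterion, above_both >= criterion

# Hypothetical single-case data: 5 baseline and 8 treatment observations
print(dual_criteria([3, 4, 3, 5, 4], [7, 8, 9, 8, 10, 9, 11, 10]))
# -> (8, 7, True): all 8 treatment points exceed both lines; 7 were required
```

A machine-learning model trained on graphs labeled by expert visual analysts, as described in the study above, would replace this fixed rule with a learned decision function over features of the two phases.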