Relational Representations in Reinforcement Learning: Review and Open Problems
This paper is about representation in reinforcement learning (RL). We discuss some of the concepts in representation and generalization in RL and argue for higher-order representations instead of the commonly used propositional representations. The paper contains a small review of current reinforcement learning systems using higher-order representations, followed by a brief discussion. The paper ends with research directions and open problems.
Interpretable Clustering using Unsupervised Binary Trees
We herein introduce a new method of interpretable clustering that uses
unsupervised binary trees. It is a three-stage procedure, the first stage of
which entails a series of recursive binary splits to reduce the heterogeneity
of the data within the new subsamples. During the second stage (pruning),
consideration is given to whether adjacent nodes can be aggregated. Finally,
during the third stage (joining), similar clusters are joined together, even if
they do not descend from the same node originally. Consistency results are
obtained, and the procedure is applied to simulated and real data sets.
Comment: 25 pages, 6 figures
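The three-stage idea can be illustrated with a toy sketch. This is not the authors' algorithm: the split criterion (median of the highest-variance feature), the merge rule (centroid distance), and every threshold below are illustrative assumptions, and the pruning stage is omitted for brevity.

```python
import numpy as np

def split_node(X, idx, min_size=10):
    """Stage 1 (toy): recursively split a node at the median of its
    highest-variance feature, a simple stand-in for reducing
    within-node heterogeneity."""
    if len(idx) < 2 * min_size:
        return [idx]
    sub = X[idx]
    j = np.argmax(sub.var(axis=0))          # most heterogeneous feature
    thr = np.median(sub[:, j])
    left, right = idx[sub[:, j] <= thr], idx[sub[:, j] > thr]
    if len(left) < min_size or len(right) < min_size:
        return [idx]
    return split_node(X, left, min_size) + split_node(X, right, min_size)

def join_clusters(X, clusters, tol=2.0):
    """Stage 3 (toy): greedily merge clusters whose centroids are closer
    than tol, even when they descend from different branches."""
    clusters = [np.asarray(c) for c in clusters]
    merged = True
    while merged:
        merged = False
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                ca = X[clusters[a]].mean(axis=0)
                cb = X[clusters[b]].mean(axis=0)
                if np.linalg.norm(ca - cb) < tol:
                    clusters[a] = np.concatenate([clusters[a], clusters[b]])
                    del clusters[b]
                    merged = True
                    break
            if merged:
                break
    return clusters

# Two well-separated Gaussian blobs as toy data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
leaves = split_node(X, np.arange(len(X)))
clusters = join_clusters(X, leaves)
```

On this toy data the tree over-splits each blob, and the joining stage recovers the two underlying groups; each leaf remains an axis-aligned box, which is what makes the result human-readable.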
Narrowing the Gap: Random Forests In Theory and In Practice
Despite widespread interest and practical use, the theoretical properties of
random forests are still not well understood. In this paper we contribute to
this understanding in two ways. We present a new theoretically tractable
variant of random regression forests and prove that our algorithm is
consistent. We also provide an empirical evaluation, comparing our algorithm
and other theoretically tractable random forest models to the random forest
algorithm used in practice. Our experiments provide insight into the relative
importance of different simplifications that theoreticians have made to obtain
tractable models for analysis.
Comment: Under review by the International Conference on Machine Learning (ICML) 201
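One simplification theoreticians often make, choosing split thresholds at random rather than optimizing them, can be compared empirically with scikit-learn. This is a hedged sketch, not the paper's variant or its experimental setup: ExtraTreesRegressor stands in for a randomized-split forest, and the dataset and hyperparameters are my illustrative choices.

```python
from sklearn.datasets import make_friedman1
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic regression benchmark (illustrative choice).
X, y = make_friedman1(n_samples=1000, noise=0.5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The forest used in practice: optimized splits on bootstrap samples.
standard = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
# A more tractable relative: split thresholds drawn at random.
randomized = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

mse_std = mean_squared_error(y_te, standard.predict(X_te))
mse_rnd = mean_squared_error(y_te, randomized.predict(X_te))
```

Comparing the two test errors gives a feel for how much (or little) predictive accuracy the randomized simplification costs on a given dataset.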
On The Stability of Interpretable Models
Interpretable classification models are built with the purpose of providing a
comprehensible description of the decision logic to an external oversight
agent. When considered in isolation, a decision tree, a set of classification
rules, or a linear model, are widely recognized as human-interpretable.
However, such models are generated as part of a larger analytical process. Bias in
data collection and preparation, or in model construction, may severely
affect the accountability of the design process. We conduct an experimental
study of the stability of interpretable models with respect to feature
selection, instance selection, and model selection. Our conclusions should
raise the scientific community's awareness of the need for a stability
impact assessment of interpretable models.
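A minimal sketch of one such measurement, feature-selection stability under instance resampling, scored with pairwise Jaccard similarity. The correlation-based selector and all constants are illustrative assumptions, not the protocol of the study.

```python
import numpy as np

def top_k_features(X, y, k):
    """Toy selector: rank features by absolute correlation with the
    target and keep the top k."""
    corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    return set(np.argsort(corr)[-k:])

def jaccard(a, b):
    """Overlap between two selected feature sets."""
    return len(a & b) / len(a | b)

rng = np.random.default_rng(0)
n, p, k = 200, 10, 3
X = rng.normal(size=(n, p))
# Three informative features, seven pure-noise features.
y = 2 * X[:, 0] + X[:, 1] + 0.5 * X[:, 2] + rng.normal(size=n)

# Re-run selection on bootstrap resamples and compare the chosen sets:
# an unstable pipeline would pick different features each time.
sets = []
for _ in range(20):
    idx = rng.integers(0, n, n)
    sets.append(top_k_features(X[idx], y[idx], k))

scores = [jaccard(sets[i], sets[j])
          for i in range(len(sets)) for j in range(i + 1, len(sets))]
stability = float(np.mean(scores))
```

A stability score near 1 means the same features are selected regardless of which instances happened to be sampled; scores well below 1 signal that the "interpretable" description depends on sampling noise.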
Implementation of the Random Forest Method for the Imaging Atmospheric Cherenkov Telescope MAGIC
The paper describes an application of the tree classification method Random
Forest (RF), as used in the analysis of data from the ground-based gamma
telescope MAGIC. In such telescopes, cosmic gamma-rays are observed and have to
be discriminated against a dominating background of hadronic cosmic-ray
particles. We describe the application of RF for this gamma/hadron separation.
The RF method often shows superior performance in comparison with traditional
semi-empirical techniques. Critical issues of the method and its implementation
are discussed. An application of the RF method for estimation of a continuous
parameter from related variables, rather than discrete classes, is also
discussed.
Comment: 16 pages, 8 figures
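As a hedged sketch of the classification use described above (not the MAGIC analysis chain or its actual image parameters), here is a toy gamma/hadron-style separation on synthetic, class-imbalanced data with scikit-learn; the generated features, class weights, and hyperparameters are all illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-event image parameters; class 1 plays the
# role of the rare gamma signal against a dominant hadronic background.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Per-event signal probability ("gammaness"-style score): events can be
# kept or rejected by cutting on this continuous value.
gamma_score = rf.predict_proba(X_te)[:, 1]
accuracy = rf.score(X_te, y_te)
```

The same ensemble machinery extends to regression, which parallels the abstract's point about estimating a continuous parameter from related variables rather than a discrete class.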