Robust Feature Selection by Mutual Information Distributions
Mutual information is widely used in artificial intelligence, in a
descriptive way, to measure the stochastic dependence of discrete random
variables. In order to address questions such as the reliability of the
empirical value, one must consider sample-to-population inferential approaches.
This paper deals with the distribution of mutual information, as obtained in a
Bayesian framework by a second-order Dirichlet prior distribution. The exact
analytical expression for the mean and an analytical approximation of the
variance are reported. Asymptotic approximations of the distribution are
proposed. The results are applied to the problem of selecting features for
incremental learning and classification of the naive Bayes classifier. A fast,
newly defined method is shown to outperform the traditional approach based on
empirical mutual information on a number of real data sets. Finally, a
theoretical development is reported that allows one to efficiently extend the
above methods to incomplete samples in an easy and effective way.
Comment: 8 two-column pages
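As a point of reference for the empirical quantity the paper builds on, here is a minimal sketch of descriptive (plug-in) mutual information computed from a contingency table; the function name and the use of nats are illustrative choices, not from the paper.

```python
import numpy as np

def empirical_mutual_information(counts):
    """Descriptive (plug-in) mutual information from a contingency table.

    `counts` is an r x s array of co-occurrence counts for two discrete
    random variables; the estimate is returned in nats.
    """
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    p_xy = counts / n                       # joint relative frequencies
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal of the row variable
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal of the column variable
    mask = p_xy > 0                         # 0 * log 0 = 0 by convention
    return float((p_xy[mask] * np.log(p_xy[mask] / (p_x * p_y)[mask])).sum())

# A perfectly dependent table attains MI = H(X); an independent one gives 0.
print(empirical_mutual_information([[5, 0], [0, 5]]))      # ln 2 ≈ 0.693
print(empirical_mutual_information([[25, 25], [25, 25]]))  # 0.0
```

The paper's point is that this single number comes with sampling uncertainty, which the Bayesian distribution of mutual information quantifies.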
Distribution of Mutual Information from Complete and Incomplete Data
Mutual information is widely used, in a descriptive way, to measure the
stochastic dependence of categorical random variables. In order to address
questions such as the reliability of the descriptive value, one must consider
sample-to-population inferential approaches. This paper deals with the
posterior distribution of mutual information, as obtained in a Bayesian
framework by a second-order Dirichlet prior distribution. The exact analytical
expression for the mean, and analytical approximations for the variance,
skewness and kurtosis are derived. These approximations have a guaranteed
accuracy level of the order O(1/n^3), where n is the sample size. Leading order
approximations for the mean and the variance are derived in the case of
incomplete samples. The derived analytical expressions allow the distribution
of mutual information to be approximated reliably and quickly. In fact, the
derived expressions can be computed with the same order of complexity needed
for descriptive mutual information. This makes the distribution of mutual
information become a concrete alternative to descriptive mutual information in
many applications which would benefit from moving to the inductive side. Some
of these prospective applications are discussed, and one of them, namely
feature selection, is shown to perform significantly better when inductive
mutual information is used.
Comment: 26 pages, LaTeX, 5 figures, 4 tables
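The exact posterior mean has a compact digamma form; the sketch below is a reading of the kind of expression derived in this line of work, assuming a Dirichlet posterior whose cell parameters already include the prior counts, and should be checked against the paper's own formulas.

```python
import numpy as np
from scipy.special import digamma

def posterior_mean_mi(alpha):
    """Posterior mean of mutual information under a Dirichlet posterior
    with cell parameters `alpha` (an r x s array of posterior counts).

    Digamma-based expression of the kind derived in the Bayesian
    treatment of mutual information; for large samples it approaches
    the descriptive (plug-in) estimate, and it costs no more to compute.
    """
    alpha = np.asarray(alpha, dtype=float)
    n = alpha.sum()
    a_i = alpha.sum(axis=1, keepdims=True)   # row totals
    a_j = alpha.sum(axis=0, keepdims=True)   # column totals
    terms = (digamma(alpha + 1) - digamma(a_i + 1)
             - digamma(a_j + 1) + digamma(n + 1))
    return float((alpha / n * terms).sum())
```

Note the small positive value this returns even for a perfectly uniform table: the posterior mean of MI is biased upward at finite sample sizes, which is exactly why the higher moments derived in the paper matter.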
Feedback MPC for Torque-Controlled Legged Robots
The computational power of mobile robots is currently insufficient to achieve
torque level whole-body Model Predictive Control (MPC) at the update rates
required for complex dynamic systems such as legged robots. This problem is
commonly circumvented by using a fast tracking controller to compensate for
model errors between updates. In this work, we show that the feedback policy
from a Differential Dynamic Programming (DDP) based MPC algorithm is a viable
alternative to bridge the gap between the low MPC update rate and the actuation
command rate. We propose to augment the DDP approach with a relaxed barrier
function to address inequality constraints arising from the friction cone. A
frequency-dependent cost function is used to reduce the sensitivity to
high-frequency model errors and actuator bandwidth limits. We demonstrate that
our approach can find stable locomotion policies for the torque-controlled
quadruped, ANYmal, both in simulation and on hardware.Comment: Paper accepted to IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS 2019
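A relaxed barrier of the kind mentioned can be sketched as a log barrier with a quadratic extension below a relaxation threshold, so the cost stays finite even when a friction-cone constraint (written here as z >= 0) is momentarily violated; the parameter values are illustrative, not from the paper.

```python
import math

def relaxed_log_barrier(z, mu=0.1, delta=0.01):
    """C1-continuous relaxed log barrier for an inequality constraint z >= 0.

    Behaves like the classical -mu*ln(z) barrier in the interior, and
    switches to a quadratic penalty below `delta`, matched in value and
    slope, so the cost remains finite and differentiable under
    constraint violation. mu and delta are illustrative values.
    """
    if z > delta:
        return -mu * math.log(z)
    # quadratic extension, matched in value and slope at z = delta
    return mu / 2.0 * (((z - 2.0 * delta) / delta) ** 2 - 1.0) - mu * math.log(delta)
```

Because the relaxed barrier is twice differentiable almost everywhere and finite for all z, it can be folded directly into the DDP cost, which is what makes inequality constraints tractable inside this kind of MPC loop.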
Cable-Driven Actuation for Highly Dynamic Robotic Systems
This paper presents design and experimental evaluations of an articulated
robotic limb called Capler-Leg. The key element of Capler-Leg is its
single-stage cable-pulley transmission combined with a high gap-radius motor.
Our cable-pulley system is designed to be as lightweight as possible and to
additionally serve as the primary cooling element, thus significantly
increasing the power density and efficiency of the overall system. The active
elements on the leg, i.e. the stators and the rotors, contribute more than 60%
of the total leg weight, which is an order of magnitude higher than in most
existing robots. The resulting robotic leg has low
inertia, high torque transparency, low manufacturing cost, no backlash, and a
low number of parts. The Capler-Leg system itself serves as an experimental
setup for evaluating the proposed cable-pulley design in terms of robustness
and efficiency. A continuous jump experiment shows a remarkable 96.5%
recuperation rate, measured at the battery output. This means that almost all
of the mechanical energy expended during push-off returns to the battery
during touch-down.
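The reported recuperation rate is simply the ratio of energy returned to energy expended, both measured at the battery; the numbers below are hypothetical, chosen only to illustrate the computation of the reported 96.5% figure.

```python
def recuperation_rate(energy_out_J, energy_returned_J):
    """Fraction of the energy drawn from the battery during push-off
    that flows back into the battery during touch-down (both in joules).
    """
    return energy_returned_J / energy_out_J

# Hypothetical measurements, for illustration only.
print(f"{recuperation_rate(20.0, 19.3):.1%}")  # 96.5%
```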
Control of a Quadrotor with Reinforcement Learning
In this paper, we present a method to control a quadrotor with a neural
network trained using reinforcement learning techniques. With reinforcement
learning, a common network can be trained to directly map states to actuator
commands, making any predefined control structure obsolete for training.
Moreover, we present a new learning algorithm that differs from existing
ones in several respects. Our algorithm is conservative but remains stable on
complicated tasks, and we found it better suited to controlling a
quadrotor than existing algorithms. We demonstrate the performance of the
trained policy both in simulation and with a real quadrotor. Experiments show
that our policy network can respond to step inputs relatively accurately. With
the same policy, we also demonstrate that we can stabilize the quadrotor in the
air even under very harsh initialization (manually throwing it upside-down in
the air with an initial velocity of 5 m/s). Evaluating the policy takes only
7 µs per time step, which is two orders of magnitude less than common
trajectory optimization algorithms with an approximated model.
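A state-to-command policy of the kind described is just a small feed-forward network, which is why evaluation takes microseconds; the layer sizes below are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes for illustration: an 18-D state (orientation, position,
# linear and angular velocities) mapped to 4 rotor thrust commands
# through two small tanh hidden layers, as is typical for such policies.
sizes = [18, 64, 64, 4]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def policy(state):
    """Forward pass of the policy network: state -> actuator command."""
    x = state
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(x @ W + b)
    return x @ weights[-1] + biases[-1]   # linear output layer

command = policy(rng.standard_normal(18))
print(command.shape)  # (4,)
```

Evaluating such a network is a few thousand multiply-adds, consistent with microsecond-scale per-step evaluation times, whereas trajectory optimization must solve an optimization problem at every update.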
Limits of Learning about a Categorical Latent Variable under Prior Near-Ignorance
In this paper, we consider the coherent theory of (epistemic) uncertainty of
Walley, in which beliefs are represented through sets of probability
distributions, and we focus on the problem of modeling prior ignorance about a
categorical random variable. In this setting, it is a known result that a state
of prior ignorance is not compatible with learning. To overcome this problem,
another state of beliefs, called \emph{near-ignorance}, has been proposed.
Near-ignorance resembles ignorance very closely, by satisfying some principles
that can arguably be regarded as necessary in a state of ignorance, and allows
learning to take place. This paper provides new and substantial evidence that
near-ignorance, too, cannot really be regarded as a way out of the problem of
starting statistical inference in conditions of very weak beliefs.
The key to this result is focusing on a setting characterized by a variable of
interest that is \emph{latent}. We argue that such a setting is by far the most
common case in practice, and we provide, for the case of categorical latent
variables (and general \emph{manifest} variables) a condition that, if
satisfied, prevents learning from taking place under prior near-ignorance. This
condition is shown to be easily satisfied even in the most common statistical
problems. We regard these results as a strong form of evidence against the
possibility of adopting a condition of prior near-ignorance in real statistical
problems.
Comment: 27 LaTeX pages
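For contrast with the paper's negative result, here is how learning does take place under prior near-ignorance when the variable is manifest (directly observed): a sketch of Walley's imprecise Dirichlet model in the Bernoulli case, with the conventional hyperparameter s = 2 as an illustrative choice.

```python
def idm_probability_bounds(successes, n, s=2.0):
    """Lower/upper posterior expectations of a Bernoulli chance under
    Walley's imprecise Dirichlet (here: imprecise beta) model.

    The near-ignorance prior set {Beta(s*t, s*(1-t)) : 0 < t < 1} yields
    the posterior interval [x/(n+s), (x+s)/(n+s)] after x successes in
    n trials; s = 2 is one conventional choice of the hyperparameter.
    """
    return successes / (n + s), (successes + s) / (n + s)

# With no data the interval is vacuous (0, 1): prior near-ignorance.
print(idm_probability_bounds(0, 0))    # (0.0, 1.0)
# After 10 successes in 10 trials the interval has narrowed considerably.
print(idm_probability_bounds(10, 10))  # (0.833..., 1.0)
```

The interval narrows as n grows because the observations bear directly on the variable of interest; the paper's point is that when that variable is latent and only a manifest variable is observed, this narrowing can fail to occur at all.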
Robust inference of trees
This paper is concerned with the reliable inference of optimal tree-approximations to the dependency structure of an unknown distribution generating data. The traditional approach to the problem measures the dependency strength between random variables by the index called mutual information. In this paper, reliability is achieved by Walley's imprecise Dirichlet model, which generalizes Bayesian learning with Dirichlet priors. Adopting the imprecise Dirichlet model results in a posterior interval expectation for mutual information, and in a set of plausible trees consistent with the data. Reliable inference about the actual tree is achieved by focusing on the substructure common to all the plausible trees. We develop an exact algorithm that infers the substructure in time O(m^4), m being the number of random variables. The new algorithm is applied to a set of data sampled from a known distribution. The method is shown to reliably infer edges of the actual tree even when the data are very scarce, unlike the traditional approach. Finally, we provide lower and upper credibility limits for mutual information under the imprecise Dirichlet model. These enable the previous developments to be extended to a full inferential method for trees.
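The traditional approach the paper makes robust is a maximum-weight spanning tree over pairwise empirical mutual information (a Chow-Liu-style procedure); a minimal sketch follows, with the robust variant understood to replace the point estimates by the interval expectations the abstract describes. Names and structure here are illustrative, not from the paper.

```python
import numpy as np
from itertools import combinations

def empirical_mi(x, y):
    """Plug-in mutual information between two discrete samples (nats)."""
    n = len(x)
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            p_ab = np.mean((x == a) & (y == b))
            p_a, p_b = np.mean(x == a), np.mean(y == b)
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

def chow_liu_tree(data):
    """Traditional (precise) tree inference: maximum-weight spanning tree
    over pairwise empirical mutual information, via Kruskal's algorithm.
    `data` is an n x m array, one column per discrete variable."""
    m = data.shape[1]
    edges = sorted(
        ((empirical_mi(data[:, i], data[:, j]), i, j)
         for i, j in combinations(range(m), 2)),
        reverse=True)
    parent = list(range(m))       # union-find over the m variables
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:              # greedily add the heaviest safe edge
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

The robust method would instead carry an MI interval per pair and keep only the edges present in every spanning tree consistent with those intervals, which is the common substructure the abstract refers to.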