Learning policies for Markov decision processes from data
We consider the problem of learning a policy for a Markov decision process consistent with data captured on the state-action pairs followed by the policy. We assume that the policy belongs to a class of parameterized policies which are defined using features associated with the state-action pairs. The features are known a priori, but only an unknown subset of them may be relevant. The policy parameters that correspond to an observed target policy are recovered using ℓ1-regularized logistic regression that best fits the observed state-action samples. We establish bounds on the difference between the average reward of the estimated and the original policy (regret) in terms of the generalization error and the ergodic coefficient of the underlying Markov chain. To that end, we combine sample complexity theory and sensitivity analysis of the stationary distribution of Markov chains. Our analysis suggests that to achieve regret within order O(√ε), it suffices to use a training sample size on the order of Ω(log n · poly(1/ε)), where n is the number of features. We demonstrate the effectiveness of our method on a synthetic robot navigation example.
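As a rough illustration of the recovery step described above, the sketch below fits an ℓ1-regularized (softmax) logistic regression to state-action samples drawn from a synthetic policy with sparse parameters. All dimensions, the weight matrix, and the regularization strength are invented for illustration and are not taken from the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical setup: states described by 20 features, 3 actions;
# the target policy is softmax in a sparse weight matrix
# (only the first 3 features are actually relevant).
n, d, k = 2000, 20, 3
W_true = np.zeros((k, d))
W_true[:, :3] = rng.normal(size=(k, 3))

X = rng.normal(size=(n, d))                       # observed states
logits = X @ W_true.T
P = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
a = np.array([rng.choice(k, p=p) for p in P])     # actions drawn from the policy

# l1-regularized fit recovers a sparse estimate of the policy parameters.
clf = LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=5000)
clf.fit(X, a)

# Indices with nonzero estimated weight (ideally concentrated in the
# first 3, the truly relevant subset).
relevant = np.where(np.abs(clf.coef_).max(axis=0) > 1e-6)[0]
print(relevant)
```

The ℓ1 penalty is what makes the unknown relevant subset identifiable from a sample size scaling with log n rather than n.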
SUBSTANTIATING THE OPTIMAL DISTRIBUTION POLICY USING MARKOV DECISION PROCESSES
The paper presents a means of substantiating the optimal policy for the transport and groupage distribution of goods using Markov decision processes, specifically through R. Howard's policy-space (policy iteration) method. The method is based on an iterative optimization algorithm whose structure ensures that each successive pass limits the number of subsequent iterations. The resulting optimal policy, consisting of a finite or infinite number of decisions, aims to optimize all subsequent decisions in close connection with the consequences of the first decision, whatever that may be. Concretely, the managers of a logistics centre are responsible for identifying the optimal transport policy for unit loads prepared for shipment in three forms: palletized, containerized, and packaged, according to the transport modes to be used: road, rail, and air, and the costs associated with each shipping means. Keywords: distribution centre, optimal policy, decision, average cost, unit loads.
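Howard's method alternates exact policy evaluation with greedy policy improvement until the policy stops changing. The toy sketch below uses a discounted-cost variant for simplicity (the paper's setting uses average cost), and all transition probabilities and costs are randomly generated stand-ins, not data from the paper:

```python
import numpy as np

# Toy Howard-style policy iteration: 3 hypothetical "shipment states",
# 3 transport actions, random transition kernel and costs.
n_states, n_actions, gamma = 3, 3, 0.9
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] = next-state dist
C = rng.uniform(1.0, 10.0, size=(n_states, n_actions))            # expected one-step costs

policy = np.zeros(n_states, dtype=int)
while True:
    # Policy evaluation: solve (I - gamma * P_pi) v = c_pi exactly.
    P_pi = P[np.arange(n_states), policy]
    c_pi = C[np.arange(n_states), policy]
    v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, c_pi)
    # Policy improvement: greedy one-step lookahead on the Q-values.
    Q = C + gamma * np.einsum("sat,t->sa", P, v)
    new_policy = Q.argmin(axis=1)
    if np.array_equal(new_policy, policy):
        break   # policy is stable, hence optimal
    policy = new_policy
print(policy, v)
```

Because each improvement step strictly decreases the value of a suboptimal policy and there are finitely many policies, the loop terminates after at most |A|^|S| iterations, usually far fewer.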
Monte Carlo Implementation of Gaussian Process Models for Bayesian Regression and Classification
Gaussian processes are a natural way of defining prior distributions over
functions of one or more input variables. In a simple nonparametric regression
problem, where such a function gives the mean of a Gaussian distribution for an
observed response, a Gaussian process model can easily be implemented using
matrix computations that are feasible for datasets of up to about a thousand
cases. Hyperparameters that define the covariance function of the Gaussian
process can be sampled using Markov chain methods. Regression models where the
noise has a t distribution, and logistic or probit models for classification
applications, can be implemented by also sampling the latent values
underlying the observations. Software is now available that implements these
methods using covariance functions with hierarchical parameterizations. Models
defined in this way can discover high-level properties of the data, such as
which inputs are relevant to predicting the response.
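The "matrix computations" the abstract refers to are the standard GP posterior formulas, which for modest dataset sizes reduce to a Cholesky-style linear solve. A minimal sketch with a squared-exponential covariance and fixed hyperparameters (in the full approach those would themselves be sampled by MCMC); the toy data and kernel settings are assumptions for illustration:

```python
import numpy as np

# Squared-exponential (RBF) covariance with fixed, illustrative hyperparameters.
def se_cov(x1, x2, scale=1.0, length=0.2):
    return scale**2 * np.exp(-0.5 * (x1[:, None] - x2[None, :])**2 / length**2)

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 30))
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=30)   # noisy toy observations
x_star = np.linspace(0, 1, 100)                          # prediction grid

noise_var = 0.1**2
K = se_cov(x, x) + noise_var * np.eye(len(x))            # train covariance + noise
K_s = se_cov(x_star, x)                                  # test/train cross-covariance

# Posterior mean and covariance via the standard GP regression formulas:
# mean = K_s K^{-1} y,  cov = K_ss - K_s K^{-1} K_s^T.
mean = K_s @ np.linalg.solve(K, y)
cov = se_cov(x_star, x_star) - K_s @ np.linalg.solve(K, K_s.T)
```

The O(n^3) solve against K is what limits plain implementations to roughly a thousand cases, as the abstract notes.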
Accelerating delayed-acceptance Markov chain Monte Carlo algorithms
Delayed-acceptance Markov chain Monte Carlo (DA-MCMC) samples from a
probability distribution via a two-stage version of the Metropolis-Hastings
algorithm, combining the target distribution with a "surrogate" (i.e. an
approximate and computationally cheaper version) of that distribution. DA-MCMC
accelerates MCMC sampling in complex applications, while still targeting the
exact distribution. We design a computationally faster, albeit approximate,
DA-MCMC algorithm. We consider parameter inference in a Bayesian setting where
a surrogate likelihood function is introduced in the delayed-acceptance scheme.
When the evaluation of the likelihood function is computationally intensive,
our scheme produces a 2-4 times speed-up, compared to standard DA-MCMC.
However, the acceleration is highly problem dependent. Inference results for
the standard delayed-acceptance algorithm and our approximated version are
similar, indicating that our algorithm can return reliable Bayesian inference.
As a computationally intensive case study, we introduce a novel stochastic
differential equation model for protein folding data.
Comment: 40 pages, 21 figures, 10 tables
Massively-Parallel Feature Selection for Big Data
We present the Parallel, Forward-Backward with Pruning (PFBP) algorithm for
feature selection (FS) in Big Data settings (high dimensionality and/or sample
size). To tackle the challenges of Big Data FS, PFBP partitions the data matrix
both in terms of rows (samples, training examples) and columns
(features). By employing the concepts of p-values of conditional independence
tests and meta-analysis techniques, PFBP manages to rely only on computations
local to a partition while minimizing communication costs. Then, it employs
powerful and safe (asymptotically sound) heuristics to make early, approximate
decisions, such as Early Dropping of features from consideration in subsequent
iterations, Early Stopping of consideration of features within the same
iteration, or Early Return of the winner in each iteration. PFBP provides
asymptotic guarantees of optimality for data distributions faithfully
representable by a causal network (Bayesian network or maximal ancestral
graph). Our empirical analysis confirms a super-linear speedup of the algorithm
with increasing sample size, linear scalability with respect to the number of
features and processing cores, while dominating other competitive algorithms in
its class.
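The Early Dropping heuristic can be sketched on a single partition: features whose p-value against the current residual exceeds a threshold are removed from all subsequent iterations. The sketch below uses plain marginal correlation tests as a simplified stand-in for the conditional independence tests PFBP actually uses, and the data and threshold are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, d = 500, 50
X = rng.normal(size=(n, d))
y = 2 * X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=n)   # only features 0, 1 relevant

selected, alive = [], set(range(d))
while alive:
    # Residual of y after regressing on the features selected so far.
    resid = (y - X[:, selected] @ np.linalg.lstsq(X[:, selected], y, rcond=None)[0]
             if selected else y)
    pvals = {j: stats.pearsonr(X[:, j], resid)[1] for j in alive}
    # Early Dropping: features that look independent of the residual are
    # removed from consideration in all later iterations.
    alive = {j for j in alive if pvals[j] < 0.05}
    if not alive:
        break
    best = min(alive, key=pvals.get)   # Early Return: take the iteration's winner
    selected.append(best)
    alive.discard(best)
print(selected)
```

Because the alive set only ever shrinks, most noise features are tested just once, which is what makes the forward search cheap at high dimensionality.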