757 research outputs found

    Empirical Evaluation of Contextual Policy Search with a Comparison-based Surrogate Model and Active Covariance Matrix Adaptation

    Contextual policy search (CPS) is a class of multi-task reinforcement learning algorithms that is particularly useful for robotic applications. A recent state-of-the-art method is Contextual Covariance Matrix Adaptation Evolution Strategies (C-CMA-ES), which is based on the standard black-box optimization algorithm CMA-ES. There are two useful extensions of CMA-ES that we transfer to C-CMA-ES and evaluate empirically: ACM-ES, which uses a comparison-based surrogate model, and aCMA-ES, which uses an active update of the covariance matrix. We show that the improvements these methods bring can be impressive in terms of sample efficiency, although this is no longer as relevant for the robotic domain.
    Comment: Supplementary material for poster paper accepted at GECCO 2019; https://doi.org/10.1145/3319619.332193
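
    As a rough illustration of the "active" covariance update borrowed from aCMA-ES (a toy numpy sketch, not the exact rule, whose weights and learning rates depend on the problem dimension), the update below adds a positive rank-mu term from the best samples and subtracts a term built from the worst samples:

    import numpy as np

    def active_covariance_update(cov, steps, fitnesses, c_pos=0.3, c_neg=0.1):
        """Rank-mu covariance update with an additional "active" (negative)
        contribution from the worst samples, in the spirit of aCMA-ES.
        `steps` holds the sampled search steps (x_i - mean) / sigma, row-wise."""
        order = np.argsort(fitnesses)            # ascending: best first (minimization)
        mu = len(fitnesses) // 2
        best, worst = steps[order[:mu]], steps[order[-mu:]]
        pos = best.T @ best / mu                 # positive rank-mu term
        neg = worst.T @ worst / mu               # negative ("active") term
        return (1 - c_pos + c_neg) * cov + c_pos * pos - c_neg * neg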

    Information theoretic stochastic search

    The MAP-i Doctoral Programme in Informatics of the Universities of Minho, Aveiro and Porto. Optimization is the research field that studies the design of algorithms for finding the best solutions to the problems we may throw at them. While the whole domain is practically important, the present thesis focuses on the subfield of continuous black-box optimization, presenting a collection of novel, state-of-the-art algorithms for solving problems in that class. In this thesis, we introduce two novel general-purpose stochastic search algorithms for black-box optimization. Stochastic search algorithms aim at repeating the type of mutations that led to the fittest search points in a population. We can model those mutations by a stochastic distribution, typically a multivariate Gaussian distribution. The key idea is to iteratively change the parameters of the distribution towards higher expected fitness. However, we leverage information-theoretic trust regions and limit the change of the new distribution. We show how plain maximization of the fitness expectation, without bounding the change of the distribution, is destined to fail because of overfitting and the resulting premature convergence. Being derived from first principles, the proposed methods can be elegantly extended to the contextual learning setting, which allows for learning context-dependent stochastic distributions that generate optimal individuals for a given context, i.e., instead of learning one task at a time, we can learn multiple related tasks at once. However, the search distribution typically uses a parametric model with hand-defined context features. Finding good context features is a challenging task, and hence non-parametric methods are often preferred over their parametric counterparts. Therefore, we further propose a non-parametric contextual stochastic search algorithm that can learn a non-parametric search distribution for multiple tasks simultaneously.
    Funded by FCT - Fundação para a Ciência e a Tecnologia, by the European Union's FP7 under EuRoC grant agreement CP-IP 608849, and by LIACC (UID/CEC/00027/2015) and IEETA (UID/CEC/00127/2015).
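
    A minimal numpy/scipy sketch of the kind of KL-bounded update described above, in the style of episodic REPS: the dual of the KL-constrained problem is minimized over the temperature, and the resulting exponential weights drive a weighted refit of the Gaussian search distribution. The function names and the KL bound value are illustrative, not taken from the thesis.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def reps_weights(returns, kl_bound=0.5):
        """Sample weights under a KL trust region (episodic-REPS style).

        Minimizes the dual g(eta) = eta*eps + eta*log(mean(exp(R/eta)))
        and returns exponential weights for a weighted Gaussian refit."""
        R = returns - returns.max()              # shift for numerical stability
        def dual(log_eta):
            eta = np.exp(log_eta)
            return eta * kl_bound + eta * np.log(np.mean(np.exp(R / eta)))
        res = minimize_scalar(dual, bounds=(-5.0, 5.0), method="bounded")
        eta = np.exp(res.x)
        w = np.exp(R / eta)
        return w / w.sum()

    def weighted_gaussian_update(samples, weights):
        """Weighted MLE of the mean and covariance of the search distribution."""
        mean = weights @ samples
        diff = samples - mean
        cov = diff.T @ (weights[:, None] * diff)
        return mean, cov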

    An Online Actor Critic Algorithm and a Statistical Decision Procedure for Personalizing Intervention.

    Increasing technological sophistication and widespread use of smartphones and wearable devices provide opportunities for innovative health interventions. An Adaptive Intervention (AI) personalizes the type, mode and dose of intervention based on users' ongoing performance and changing needs. A Just-In-Time Adaptive Intervention (JITAI) employs the real-time data collection and communication capabilities that modern mobile devices provide to adapt and deliver interventions in real time. The lack of methodological guidance in constructing data-based, high-quality JITAIs remains a hurdle in advancing JITAI research despite its increasing popularity. In the first part of the dissertation, we make a first attempt to bridge this methodological gap by formulating the task of tailoring interventions in real time as a contextual bandit problem. Under the linear reward assumption, we choose the reward function (the "critic") parameterization separately from a lower-dimensional parameterization of stochastic JITAIs (the "actor"). We provide an online actor-critic algorithm that guides the construction and refinement of a JITAI. Asymptotic properties of the actor-critic algorithm, including consistency, the asymptotic distribution, and a regret bound for the optimal JITAI parameters, are developed and tested in numerical experiments. We also present numerical experiments to test the performance of the algorithm when the contextual bandit assumptions are violated. In the second part of the dissertation, we propose a statistical decision procedure that identifies whether a patient characteristic is useful for AI. We define a discrete-valued characteristic as useful for adaptive intervention if, for some values of the characteristic, there is sufficient evidence to recommend a particular intervention, while for other values there is either sufficient evidence to recommend a different intervention or insufficient evidence to recommend any particular intervention.
    PhD, Statistics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133223/1/ehlei_1.pd
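
    The numpy sketch below illustrates the actor-critic structure under the linear reward assumption: an online ridge-regression critic for the reward and a low-dimensional stochastic policy (the probability of delivering a binary intervention) updated by gradient ascent on the critic's estimate. The feature map, simulated reward, and step size are illustrative assumptions, not the dissertation's exact algorithm.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 3                                    # context dimension (illustrative)
    w_true = rng.normal(size=2 * d)          # simulated linear reward weights

    def phi(s, a):
        """Feature map for the linear reward critic."""
        return np.concatenate([s, a * s])

    A = np.eye(2 * d)                        # ridge-regularized critic statistics
    b = np.zeros(2 * d)
    theta = np.zeros(d)                      # low-dimensional actor parameters
    lr = 0.05

    for t in range(2000):
        s = rng.normal(size=d)
        p = 1.0 / (1.0 + np.exp(-s @ theta))          # stochastic policy (actor)
        a = float(rng.random() < p)
        r = phi(s, a) @ w_true + 0.1 * rng.normal()   # simulated observed reward

        A += np.outer(phi(s, a), phi(s, a))           # critic: online ridge regression
        b += r * phi(s, a)
        w_hat = np.linalg.solve(A, b)

        q1, q0 = phi(s, 1.0) @ w_hat, phi(s, 0.0) @ w_hat
        theta += lr * (q1 - q0) * p * (1 - p) * s     # actor: ascend estimated reward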

    Regularized covariance estimation for weighted maximum likelihood policy search methods

    Many episode-based (or direct) policy search algorithms maintain a multivariate Gaussian distribution as the search distribution over the parameter space of some objective function. One class of algorithms, including episodic REPS, PoWER, and PI2, uses a weighted maximum likelihood estimate (WMLE) to update the mean and covariance matrix of this distribution in each iteration. However, due to the high dimensionality of the covariance matrix and the limited number of samples, the WMLE is an unreliable estimator. Its use leads to over-fitted covariance estimates, and hence the variance/entropy of the search distribution decreases too quickly, which may cause premature convergence. To alleviate this problem, the estimated covariance matrix can be regularized in different ways, for example by using a convex combination of the diagonal covariance estimate and the sample covariance estimate. In this paper, we propose a new covariance matrix regularization technique for policy search methods that uses a convex combination of the sample covariance matrix and the old covariance matrix used in the last iteration. The combination weight is determined by specifying the desired entropy of the new search distribution. With this mechanism, the entropy of the search distribution can be decreased gradually, without the damage caused by the unregularized maximum likelihood estimate.
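
    A small numpy/scipy sketch of the regularization idea described above: the new covariance is a convex combination of the weighted-MLE estimate and the previous covariance, with the combination weight chosen so that the resulting Gaussian attains a specified entropy (assumed here to lie between the entropies of the two endpoints).

    import numpy as np
    from scipy.optimize import brentq

    def gaussian_entropy(cov):
        """Differential entropy of a multivariate Gaussian with covariance `cov`."""
        d = cov.shape[0]
        return 0.5 * (d * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

    def entropy_regularized_cov(cov_wmle, cov_old, target_entropy):
        """Convex combination lam*cov_wmle + (1-lam)*cov_old whose entropy
        matches target_entropy (assumed to lie between the two endpoint entropies)."""
        def gap(lam):
            return gaussian_entropy(lam * cov_wmle + (1 - lam) * cov_old) - target_entropy
        lam = brentq(gap, 0.0, 1.0)
        return lam * cov_wmle + (1 - lam) * cov_old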

    Data-driven integration of norm-penalized mean-variance portfolios

    Mean-variance optimization (MVO) is known to be sensitive to estimation error in its inputs. Norm penalization of MVO programs is a regularization technique that can mitigate the adverse effects of estimation error. We augment the standard MVO program with a convex combination of parameterized $L_1$- and $L_2$-norm penalty functions. The resulting program is a parameterized quadratic program (QP) whose dual is a box-constrained QP. We make use of recent advances in neural network architectures for differentiable QPs and present a data-driven framework for optimizing the parameterized norm penalties to minimize the downstream MVO objective. We present a novel technique for computing the derivative of the optimal primal solution with respect to the parameterized $L_1$-norm penalty by implicit differentiation of the dual program. The primal solution is then recovered from the optimal dual variables. Historical simulations using US stocks and global futures data demonstrate the benefit of the data-driven optimization approach.
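
    A minimal cvxpy sketch of the penalized mean-variance program described above. The budget constraint, risk-aversion parameter, and hand-set penalty weights gamma1/gamma2 are illustrative; the paper learns the penalty parameters end-to-end through a differentiable QP layer rather than fixing them by hand.

    import cvxpy as cp
    import numpy as np

    def norm_penalized_mvo(mu, Sigma, risk_aversion=1.0, gamma1=0.01, gamma2=0.01):
        """Mean-variance weights with L1- and L2-norm penalties (illustrative)."""
        n = len(mu)
        w = cp.Variable(n)
        objective = (risk_aversion * cp.quad_form(w, Sigma) - mu @ w
                     + gamma1 * cp.norm1(w) + gamma2 * cp.sum_squares(w))
        prob = cp.Problem(cp.Minimize(objective), [cp.sum(w) == 1])
        prob.solve()
        return w.value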

    Policy search with high-dimensional context variables

    Direct contextual policy search methods learn to improve policy parameters and simultaneously generalize these parameters to different context or task variables. However, learning from high-dimensional context variables, such as camera images, is still a prominent problem in many real-world tasks. A naive application of unsupervised dimensionality reduction methods to the context variables, such as principal component analysis, is insufficient, as task-relevant input may be ignored. In this paper, we propose a contextual policy search method in the model-based relative entropy stochastic search framework with integrated dimensionality reduction. We learn a model of the reward that is locally quadratic in both the policy parameters and the context variables. Furthermore, we perform supervised linear dimensionality reduction on the context variables by nuclear norm regularization. The experimental results show that the proposed method outperforms naive dimensionality reduction via principal component analysis and a state-of-the-art contextual policy search method.
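
    As a rough stand-in for the supervised, nuclear-norm-regularized linear dimensionality reduction mentioned above (not the paper's exact formulation), the numpy sketch below fits a low-rank linear map from contexts to a supervised target by proximal gradient descent, where the proximal operator of the nuclear norm is singular-value soft-thresholding:

    import numpy as np

    def svt(W, tau):
        """Singular-value soft-thresholding: proximal operator of tau*||W||_*,
        which drives W towards low rank."""
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def fit_low_rank_context_map(contexts, targets, tau=0.1, lr=1e-2, iters=500):
        """Proximal gradient for a linear map M (targets ~ contexts @ M) with a
        nuclear-norm penalty; a stand-in for the supervised dimensionality
        reduction on the context variables."""
        n = contexts.shape[0]
        M = np.zeros((contexts.shape[1], targets.shape[1]))
        for _ in range(iters):
            grad = contexts.T @ (contexts @ M - targets) / n   # squared-loss gradient
            M = svt(M - lr * grad, lr * tau)                   # proximal step
        return M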