    Towards Efficient MPPI Trajectory Generation with Unscented Guidance: U-MPPI Control Strategy

    The classical Model Predictive Path Integral (MPPI) control framework lacks reliable safety guarantees since it relies on a risk-neutral trajectory evaluation technique, which can present challenges for safety-critical applications such as autonomous driving. Additionally, if the majority of sampled trajectories concentrate in high-cost regions, MPPI may generate an infeasible control sequence. To address these challenges, we propose the U-MPPI control strategy, a novel methodology that can effectively manage system uncertainties while integrating a more efficient trajectory sampling strategy. The core concept is to leverage the Unscented Transform (UT) to propagate not only the mean but also the covariance of the system dynamics, going beyond the traditional MPPI method. This yields a more efficient trajectory sampling strategy that significantly enhances state-space exploration and reduces the risk of becoming trapped in local minima. Furthermore, by leveraging the uncertainty information provided by the UT, we incorporate a risk-sensitive cost function that explicitly accounts for risk or uncertainty throughout the trajectory evaluation process, resulting in a more resilient control system capable of handling uncertain conditions. Through extensive simulations of 2D aggressive autonomous navigation in both known and unknown cluttered environments, we verify the efficiency and robustness of the proposed U-MPPI control strategy compared to the baseline MPPI. We further validate the practicality of U-MPPI through real-world demonstrations in unknown cluttered environments, showcasing its superior ability to incorporate both the UT and the local costmap into the optimization problem without introducing additional complexity.
    Comment: 15 pages, 10 figures, 4 tables.
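    As an illustration of the sigma-point propagation the abstract refers to, here is a minimal sketch of a single unscented-transform step through nonlinear dynamics. The dynamics function f, the scaling parameter kappa, and all dimensions are illustrative assumptions, not the authors' implementation; in U-MPPI a step of this kind would replace the single-point rollout used by vanilla MPPI.

        import numpy as np

        def unscented_propagate(f, mu, Sigma, u, kappa=1.0):
            """One UT step: push mean mu and covariance Sigma through x' = f(x, u)."""
            n = mu.shape[0]
            L = np.linalg.cholesky((n + kappa) * Sigma)
            # 2n + 1 sigma points: the mean plus symmetric perturbations.
            pts = [mu] + [mu + L[:, i] for i in range(n)] + [mu - L[:, i] for i in range(n)]
            w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
            w[0] = kappa / (n + kappa)
            prop = np.array([f(p, u) for p in pts])       # propagated sigma points
            mu_next = w @ prop                            # weighted mean
            diff = prop - mu_next
            Sigma_next = (w[:, None] * diff).T @ diff     # weighted covariance
            return mu_next, Sigma_next

    Applied recursively along a sampled control sequence, this produces a mean trajectory together with a covariance at each step, which is the uncertainty information a risk-sensitive cost of the kind described above can then penalize.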

    Universal Convexification via Risk-Aversion

    We develop a framework for convexifying a fairly general class of optimization problems. Under additional assumptions, we analyze the suboptimality of the solution to the convexified problem relative to the original nonconvex problem and prove additive approximation guarantees. We then develop algorithms based on stochastic gradient methods to solve the resulting optimization problems and show bounds on convergence rates. We show a simple application of this framework to supervised learning, where one can perform the integration explicitly and can use standard (non-stochastic) optimization algorithms with better convergence guarantees. We then extend this framework to a general class of discrete-time dynamical systems; in this context, our convexification approach falls under the well-studied paradigm of risk-sensitive Markov Decision Processes. We derive the first known model-based and model-free policy gradient optimization algorithms with guaranteed convergence to the optimal solution. Finally, we present numerical results validating our formulation in different applications.
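    To make the construction concrete, here is a hedged sketch of one exponential-utility smoothing in the spirit of the abstract: a nonconvex objective f is replaced by the surrogate F(x) = (1/gamma) log E[exp(gamma f(x + w))] with Gaussian exploration noise w, a surrogate that flattens out (and, under suitable conditions, convexifies) when the noise level is large enough. The particular surrogate form, the toy objective, and the constants below are assumptions for illustration, not the paper's formulation.

        import numpy as np

        rng = np.random.default_rng(0)

        def risk_averse_grad(f, x, gamma=1.0, sigma=0.5, n_samples=256):
            """Monte Carlo gradient of F(x) = (1/gamma) log E[exp(gamma f(x + w))]."""
            w = sigma * rng.standard_normal((n_samples, x.shape[0]))
            vals = np.array([f(x + wi) for wi in w])
            # Softmax weights: the exponential utility emphasizes high-cost samples.
            weights = np.exp(gamma * (vals - vals.max()))
            weights /= weights.sum()
            # Score-function estimator for the Gaussian-smoothed expectation.
            return (weights[:, None] * w).sum(axis=0) / (gamma * sigma**2)

        # Plain stochastic gradient descent on the risk-averse surrogate.
        f = lambda z: np.sin(3.0 * z).sum() + 0.5 * z @ z   # toy nonconvex objective
        x = np.array([2.0, -1.5])
        for _ in range(500):
            x -= 0.05 * risk_averse_grad(f, x)

    The gradient estimator needs only zeroth-order evaluations of f, which is what makes stochastic gradient methods with convergence-rate bounds, as mentioned in the abstract, a natural fit for this kind of surrogate.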

    Game-theoretic approach to risk-sensitive benchmarked asset management

    In this article we consider a game-theoretic approach to the Risk-Sensitive Benchmarked Asset Management (RSBAM) problem of Davis and Lleo. In particular, we consider a stochastic differential game between two players: the investor, who has a power utility, and the market, which tries to minimize the investor's expected payoff. The market does this by modulating a stochastic benchmark that the investor needs to outperform. We obtain an explicit expression for the optimal pair of strategies for both players.
    Comment: Forthcoming in Risk and Decision Analysis. arXiv admin note: text overlap with arXiv:0905.4740 by another author.
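    As a sketch of the criterion underlying this class of problems (notation assumed here, not taken from the article itself): in risk-sensitive benchmarked asset management of the Davis-Lleo type, the investor chooses a strategy h to maximize an exponential-of-growth criterion measured relative to the benchmark,

        J(h; \theta) = -\frac{1}{\theta} \ln \mathbb{E}\!\left[ \exp\!\left( -\theta \ln \frac{V_T}{Y_T} \right) \right], \qquad \theta > 0,

    where V_T is the investor's terminal wealth and Y_T the benchmark level. In the game described above, the market acts as the minimizing player by modulating the benchmark Y, so the optimal pair of strategies solves a max-min problem for J.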