
    Adaptive and Safe Bayesian Optimization in High Dimensions via One-Dimensional Subspaces

    Bayesian optimization is known to be difficult to scale to high dimensions, because the acquisition step requires solving a non-convex optimization problem in the same search space. In order to scale the method and keep its benefits, we propose an algorithm (LineBO) that restricts the problem to a sequence of iteratively chosen one-dimensional sub-problems that can be solved efficiently. We show that our algorithm converges globally and obtains a fast local rate when the function is strongly convex. Further, if the objective has an invariant subspace, our method automatically adapts to the effective dimension without changing the algorithm. When combined with the SafeOpt algorithm to solve the sub-problems, we obtain the first safe Bayesian optimization algorithm with theoretical guarantees applicable in high-dimensional settings. We evaluate our method on multiple synthetic benchmarks, where we obtain competitive performance. Further, we deploy our algorithm to optimize the beam intensity of the Swiss Free Electron Laser with up to 40 parameters while satisfying safe operation constraints.
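
    A minimal sketch of the line-restriction idea described in this abstract: each iteration fits a GP to all observations, draws a random direction through the incumbent, and maximizes a UCB acquisition only along that one-dimensional line. The toy objective, kernel choice, direction rule, and UCB coefficient are illustrative assumptions, not the authors' exact LineBO or its SafeOpt-based safe variant.

```python
# Sketch of one-dimensional-subspace Bayesian optimization (assumptions noted above).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                      # toy high-dimensional objective (assumption)
    return -np.sum((x - 0.3) ** 2)

def line_bo(dim=20, n_iters=30, n_line=200, seed=0):
    rng = np.random.default_rng(seed)
    X = [rng.uniform(0, 1, dim)]       # random initial point in the unit box
    y = [objective(X[0])]
    for _ in range(n_iters):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(np.array(X), np.array(y))
        x_best = X[int(np.argmax(y))]
        d = rng.normal(size=dim)       # random search direction (one of several rules one could use)
        d /= np.linalg.norm(d)
        ts = np.linspace(-1.0, 1.0, n_line)
        cand = np.clip(x_best + ts[:, None] * d, 0.0, 1.0)   # 1-D line clipped to the box
        mu, sigma = gp.predict(cand, return_std=True)
        x_next = cand[int(np.argmax(mu + 2.0 * sigma))]      # UCB maximized along the line only
        X.append(x_next)
        y.append(objective(x_next))
    return X[int(np.argmax(y))], max(y)

if __name__ == "__main__":
    x_star, f_star = line_bo()
    print("best value found:", round(f_star, 4))
```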

    Human Preference-Based Learning for High-dimensional Optimization of Exoskeleton Walking Gaits

    Optimizing lower-body exoskeleton walking gaits for user comfort requires understanding users’ preferences over a high-dimensional gait parameter space. However, existing preference-based learning methods have only explored low-dimensional domains due to computational limitations. To learn user preferences in high dimensions, this work presents LINECOSPAR, a human-in-the-loop preference-based framework that enables optimization over many parameters by iteratively exploring one-dimensional subspaces. Additionally, this work identifies gait attributes that characterize broader preferences across users. In simulations and human trials, we empirically verify that LINECOSPAR is a sample-efficient approach for high-dimensional preference optimization. Our analysis of the experimental data reveals a correspondence between human preferences and objective measures of dynamicity, while also highlighting differences in the utility functions underlying individual users’ gait preferences. This result has implications for exoskeleton gait synthesis, an active field with applications to clinical use and patient rehabilitation.
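
    A rough sketch of the one-dimensional-subspace querying idea described above, under strong simplifying assumptions: candidate gaits are drawn along a random line through the current best and compared pairwise by a simulated user. The hidden utility function, query rule, and noise model are hypothetical stand-ins, not the LINECOSPAR posterior model.

```python
# Sketch of preference queries restricted to a random 1-D line (assumptions noted above).
import numpy as np

def simulated_user_prefers(a, b, utility, rng, noise=0.1):
    """Return True if the simulated user prefers gait a over gait b."""
    return utility(a) + noise * rng.normal() > utility(b) + noise * rng.normal()

def line_preference_search(dim=8, n_rounds=25, n_candidates=5, seed=0):
    rng = np.random.default_rng(seed)
    utility = lambda x: -np.sum((x - 0.6) ** 2)              # hidden user utility (assumption)
    best = rng.uniform(0, 1, dim)                            # initial gait parameters
    for _ in range(n_rounds):
        d = rng.normal(size=dim)
        d /= np.linalg.norm(d)
        ts = np.linspace(-0.3, 0.3, n_candidates)
        candidates = np.clip(best + ts[:, None] * d, 0.0, 1.0)   # points on the 1-D line
        for cand in candidates:                              # pairwise queries against the incumbent
            if simulated_user_prefers(cand, best, utility, rng):
                best = cand
    return best

if __name__ == "__main__":
    print("preferred gait parameters:", np.round(line_preference_search(), 3))
```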

    High-Dimensional Bayesian Optimization via Tree-Structured Additive Models

    Bayesian Optimization (BO) has shown significant success in tackling expensive low-dimensional black-box optimization problems. Many optimization problems of interest are high-dimensional, and scaling BO to such settings remains an important challenge. In this paper, we consider generalized additive models in which low-dimensional functions with overlapping subsets of variables are composed to model a high-dimensional target function. Our goal is to lower the computational resources required and facilitate faster model learning by reducing the model complexity while retaining the sample-efficiency of existing methods. Specifically, we constrain the underlying dependency graphs to tree structures in order to facilitate both the structure learning and optimization of the acquisition function. For the former, we propose a hybrid graph learning algorithm based on Gibbs sampling and mutation. In addition, we propose a novel zooming-based algorithm that permits generalized additive models to be employed more efficiently in the case of continuous domains. We demonstrate and discuss the efficacy of our approach via a range of experiments on synthetic functions and real-world datasets. Comment: To appear in AAAI 202
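
    A small illustration of the generalized additive modelling idea: the high-dimensional kernel is a sum of low-dimensional RBF kernels over overlapping variable groups whose dependency graph forms a tree (here a simple chain). The groups, lengthscale, and test function are assumptions chosen for illustration; the paper's structure learning and zooming-based acquisition are omitted.

```python
# Sketch of a tree-structured additive GP kernel and its posterior mean (assumptions noted above).
import numpy as np

GROUPS = [[0, 1], [1, 2], [2, 3]]          # overlapping variable groups forming a tree (chain)

def additive_rbf(X1, X2, lengthscale=0.5):
    """Sum of RBF kernels, each acting only on one low-dimensional variable group."""
    K = np.zeros((X1.shape[0], X2.shape[0]))
    for g in GROUPS:
        d = X1[:, g][:, None, :] - X2[:, g][None, :, :]
        K += np.exp(-0.5 * np.sum(d ** 2, axis=-1) / lengthscale ** 2)
    return K

def gp_posterior_mean(X_train, y_train, X_test, noise=1e-4):
    """Standard GP regression mean, using the additive kernel above."""
    K = additive_rbf(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = additive_rbf(X_test, X_train)
    return Ks @ np.linalg.solve(K, y_train)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Test function that is itself additive over the assumed groups.
    f = lambda X: np.sin(3 * X[:, 0] * X[:, 1]) + X[:, 1] * X[:, 2] - X[:, 3] ** 2
    X_train = rng.uniform(0, 1, (40, 4))
    X_test = rng.uniform(0, 1, (5, 4))
    mu = gp_posterior_mean(X_train, f(X_train), X_test)
    print("predicted:", np.round(mu, 3), "true:", np.round(f(X_test), 3))
```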

    Human Preference-Based Learning for High-dimensional Optimization of Exoskeleton Walking Gaits

    Understanding users' gait preferences for a lower-body exoskeleton requires optimizing over the high-dimensional gait parameter space. However, existing preference-based learning methods have only explored low-dimensional domains due to computational limitations. To learn user preferences in high dimensions, this work presents LineCoSpar, a human-in-the-loop preference-based framework that enables optimization over many parameters by iteratively exploring one-dimensional subspaces. Additionally, this work identifies gait attributes that characterize broader preferences across users. In simulations and human trials, we empirically verify that LineCoSpar is a sample-efficient approach for high-dimensional preference optimization. Our analysis of the experimental data reveals a correspondence between human preferences and objective measures of dynamic stability, while also highlighting inconsistencies in the utility functions underlying different users' gait preferences. This has implications for exoskeleton gait synthesis, an active field with applications to clinical use and patient rehabilitation.

    Preference-Based Learning for Exoskeleton Gait Optimization

    This paper presents a personalized gait optimization framework for lower-body exoskeletons. Rather than optimizing numerical objectives such as the mechanical cost of transport, our approach directly learns from user preferences, e.g., for comfort. Building upon work in preference-based interactive learning, we present the CoSpar algorithm. CoSpar prompts the user to give pairwise preferences between trials and suggest improvements; as exoskeleton walking is a non-intuitive behavior, users can provide preferences more easily and reliably than numerical feedback. We show that CoSpar performs competitively in simulation and demonstrate a prototype implementation of CoSpar on a lower-body exoskeleton to optimize human walking trajectory features. In the experiments, CoSpar consistently found user-preferred parameters of the exoskeleton’s walking gait, which suggests that it is a promising starting point for adapting and personalizing exoskeletons (or other assistive devices) to individual users.
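
    A minimal sketch of learning from pairwise preferences in the spirit of this abstract: the user only answers which of two trials felt better, and a Bradley-Terry-style utility score per candidate gait is updated from those answers. The candidate grid, learning rate, and simulated user are assumptions for illustration and do not reproduce the authors' CoSpar algorithm.

```python
# Sketch of Bradley-Terry-style learning from pairwise preference queries (assumptions noted above).
import numpy as np

def learn_from_preferences(n_candidates=30, n_queries=60, lr=0.5, seed=0):
    rng = np.random.default_rng(seed)
    gaits = np.linspace(0.0, 1.0, n_candidates)        # 1-D grid of gait parameters (assumption)
    true_utility = -(gaits - 0.7) ** 2                 # hidden user preference (assumption)
    scores = np.zeros(n_candidates)                    # learned Bradley-Terry utilities
    for _ in range(n_queries):
        i = int(np.argmax(scores))                     # exploit: current best estimate
        j = rng.integers(n_candidates)                 # explore: random challenger
        if i == j:
            continue
        # Simulated noisy user answer: prefers the trial with higher true utility.
        user_prefers_i = (true_utility[i] + 0.05 * rng.normal()
                          > true_utility[j] + 0.05 * rng.normal())
        # Gradient step on the Bradley-Terry log-likelihood of the observed answer.
        p_i = 1.0 / (1.0 + np.exp(scores[j] - scores[i]))
        grad = (1.0 if user_prefers_i else 0.0) - p_i
        scores[i] += lr * grad
        scores[j] -= lr * grad
    return gaits[int(np.argmax(scores))]

if __name__ == "__main__":
    print("gait parameter the user seems to prefer:", round(learn_from_preferences(), 3))
```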

    Tuning Particle Accelerators with Safety Constraints using Bayesian Optimization

    Tuning machine parameters of particle accelerators is a repetitive and time-consuming task that is challenging to automate. While many off-the-shelf optimization algorithms are available, in practice their use is limited because most methods do not account for safety-critical constraints that apply to each iteration, including loss signals or step-size limitations. One notable exception is safe Bayesian optimization, which is a data-driven tuning approach for global optimization with noisy feedback. We propose and evaluate a step-size-limited variant of safe Bayesian optimization on two research facilities of the Paul Scherrer Institut (PSI): a) the Swiss Free Electron Laser (SwissFEL) and b) the High-Intensity Proton Accelerator (HIPA). We report promising experimental results on both machines, tuning up to 16 parameters subject to more than 200 constraints.
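
    A toy sketch of the step-size-limited, safety-constrained tuning loop described above: candidates are confined to a small step around the current machine setting, and only points whose constraint GP is confidently non-negative are eligible for the acquisition. The objective, loss signal, thresholds, and dimensionality are invented for illustration and do not reflect the SwissFEL or HIPA setups.

```python
# Sketch of step-size-limited, safety-constrained Bayesian optimization (assumptions noted above).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def intensity(x):                          # toy objective to maximize (assumption)
    return -np.sum((x - 0.5) ** 2)

def loss_signal(x):                        # toy safety signal that must stay >= 0 (assumption)
    return 0.4 - np.max(np.abs(x - 0.5))

def safe_step_limited_bo(dim=16, n_iters=40, step=0.05, beta=2.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.full(dim, 0.45)                 # known-safe starting setting (assumption)
    X, y, c = [x], [intensity(x)], [loss_signal(x)]
    for _ in range(n_iters):
        gp_f = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(np.array(X), np.array(y))
        gp_c = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(np.array(X), np.array(c))
        # Candidates limited to a small step around the current setting (the step-size limit).
        cand = np.clip(X[-1] + rng.uniform(-step, step, (256, dim)), 0.0, 1.0)
        mu_f, sd_f = gp_f.predict(cand, return_std=True)
        mu_c, sd_c = gp_c.predict(cand, return_std=True)
        safe = mu_c - beta * sd_c >= 0.0   # pessimistic safety check on the constraint GP
        if not np.any(safe):
            break                          # no confidently safe move available
        idx = int(np.argmax(np.where(safe, mu_f + beta * sd_f, -np.inf)))
        x = cand[idx]
        X.append(x); y.append(intensity(x)); c.append(loss_signal(x))
    return X[int(np.argmax(y))], max(y)

if __name__ == "__main__":
    x_star, f_star = safe_step_limited_bo()
    print("best safe intensity found:", round(f_star, 4))
```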

    Benchmark of Bayesian Optimization and Metaheuristics for Control Engineering Tuning Problems with Crash Constraints

    Controller tuning based on black-box optimization makes it possible to automatically tune performance-critical parameters with respect to largely arbitrary high-level closed-loop control objectives. However, a comprehensive benchmark of different black-box optimizers for control engineering problems has not yet been conducted. Therefore, in this contribution, 11 different versions of Bayesian optimization (BO) are compared with seven metaheuristics and other baselines on a set of ten deterministic, simulation-based, single-objective tuning problems in control. Results indicate that deterministic noise, low multimodality, and substantial areas with infeasible parametrizations (crash constraints) characterize control engineering tuning problems. Therefore, a flexible method to handle crash constraints with BO is presented. A resulting increase in sample efficiency is shown in comparison to standard BO. Furthermore, benchmark results indicate that pattern search (PS) performs best on a budget of 25·d objective function evaluations at problem dimensionality d = 2. Bayesian adaptive direct search, a combination of BO and PS, is shown to be most sample efficient for 3 <= d <= 5. Using these optimizers instead of random search increases controller performance by 6.6% on average and by up to 16.1%. Comment: 13 pages, 9 figures
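
    A brief sketch of one way to handle crash constraints, loosely following the description above: evaluations that crash return no objective value, a classifier is fitted on crashed/valid labels, and the acquisition is weighted by the predicted feasibility probability. The toy cost function, crash region, and acquisition are assumptions, not the paper's benchmark problems or its exact BO variant.

```python
# Sketch of crash-constraint handling via feasibility-weighted acquisition (assumptions noted above).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor, GaussianProcessClassifier
from sklearn.gaussian_process.kernels import Matern

def evaluate(x):
    """Toy closed-loop cost; returns None if the parametrization 'crashes' (assumption)."""
    if x[0] + x[1] > 1.5:                  # infeasible region (assumption)
        return None
    return np.sin(3 * x[0]) + (x[1] - 0.4) ** 2

def crash_aware_bo(dim=2, n_iters=30, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(0, 0.7, (5, dim))      # initial design in a region assumed not to crash
    results = [evaluate(x) for x in X]
    for _ in range(n_iters):
        ok = np.array([r is not None for r in results])
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X[ok], np.array([r for r in results if r is not None]))
        cand = rng.uniform(0, 1, (512, dim))
        mu, sd = gp.predict(cand, return_std=True)
        acq = -(mu - 2.0 * sd)             # minimize cost: negate the lower confidence bound
        acq -= acq.min()                   # shift to non-negative before feasibility weighting
        if ok.all() or (~ok).all():
            p_ok = np.ones(len(cand))      # classifier needs both classes to be informative
        else:
            clf = GaussianProcessClassifier().fit(X, ok)
            p_ok = clf.predict_proba(cand)[:, list(clf.classes_).index(True)]
        x_next = cand[int(np.argmax(acq * p_ok))]
        X = np.vstack([X, x_next])
        results.append(evaluate(x_next))
    return min(r for r in results if r is not None)

if __name__ == "__main__":
    print("best (lowest) cost found without crashing:", round(crash_aware_bo(), 4))
```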