93 research outputs found

    A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning

    We present a tutorial on Bayesian optimization, a method of finding the maximum of expensive cost functions. Bayesian optimization employs the Bayesian technique of setting a prior over the objective function and combining it with evidence to get a posterior function. This permits a utility-based selection of the next observation to make on the objective function, which must take into account both exploration (sampling from areas of high uncertainty) and exploitation (sampling areas likely to offer improvement over the current best observation). We also present two detailed extensions of Bayesian optimization, with experiments (active user modelling with preferences, and hierarchical reinforcement learning), and a discussion of the pros and cons of Bayesian optimization based on our experiences.
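    A minimal sketch, in Python, of the loop the abstract describes, assuming a one-dimensional toy objective, a squared-exponential Gaussian-process prior, and the expected improvement acquisition function; the objective, kernel settings, and function names below are illustrative and are not taken from the tutorial itself.

        # Minimal Bayesian optimization loop: GP surrogate + expected improvement.
        # The toy objective, kernel, and noise level are illustrative assumptions.
        import numpy as np
        from scipy.stats import norm

        def rbf_kernel(A, B, length_scale=0.2, variance=1.0):
            """Squared-exponential covariance between two sets of 1-D points."""
            d = A[:, None] - B[None, :]
            return variance * np.exp(-0.5 * (d / length_scale) ** 2)

        def gp_posterior(X, y, Xs, noise=1e-6):
            """Gaussian-process posterior mean and standard deviation at Xs."""
            K = rbf_kernel(X, X) + noise * np.eye(len(X))
            Ks = rbf_kernel(X, Xs)
            L = np.linalg.cholesky(K)
            alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
            mu = Ks.T @ alpha
            v = np.linalg.solve(L, Ks)
            var = rbf_kernel(Xs, Xs).diagonal() - np.sum(v ** 2, axis=0)
            return mu, np.sqrt(np.maximum(var, 1e-12))

        def expected_improvement(mu, sigma, best):
            """Expected gain over the incumbent best value (maximization)."""
            z = (mu - best) / sigma
            return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

        def objective(x):            # stand-in for the expensive black box
            return -(x - 0.3) ** 2 + 0.05 * np.sin(20 * x)

        X = np.array([0.1, 0.9])     # initial observations
        y = objective(X)
        grid = np.linspace(0.0, 1.0, 500)
        for _ in range(10):
            mu, sigma = gp_posterior(X, y, grid)
            x_next = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]
            X, y = np.append(X, x_next), np.append(y, objective(x_next))
        print("best x = %.3f, best value = %.3f" % (X[np.argmax(y)], y.max()))

    Each iteration refits the posterior to all observations gathered so far and spends the next expensive evaluation where expected improvement is highest, which is how the exploration/exploitation trade-off described above is resolved.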

    Experimental Designs, Meta-Modeling, and Meta-learning for Mixed-Factor Systems with Large Decision Spaces

    Many Air Force studies require a design and analysis process that can accommodate the computational challenges associated with complex systems, simulations, and real-world decisions. For systems with large decision spaces and a mixture of continuous, discrete, and categorical factors, nearly orthogonal-and-balanced (NOAB) designs can be used as efficient, representative subsets of all possible design points for system evaluation, with meta-models then fitted to act as surrogates for system outputs. The mixed-integer linear programming (MILP) formulations used to construct first-order NOAB designs are extended to solve for low correlation between second-order model terms (i.e., two-way interactions and quadratics). The resulting second-order approaches are shown to improve design performance measures for second-order model parameter estimation and prediction variance, as well as protection from bias due to model misspecification with respect to second-order terms. Further extensions are developed to construct batch sequential NOAB designs, giving experimenters more flexibility by creating multiple stages of design points using different NOAB approaches, where simultaneous construction of stages is shown to outperform design augmentation overall. To reduce cost and add analytical rigor, meta-learning frameworks are developed for accurate and efficient selection of first-order NOAB designs as well as of meta-models that approximate mixed-factor systems.
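    As a rough illustration of the design-quality criterion involved (not the MILP construction itself), the sketch below expands a candidate mixed-factor design into second-order model terms and reports the largest absolute pairwise correlation among them, the quantity the second-order NOAB formulations are built to keep small; the toy design, its factor coding, and the function names are invented for illustration.

        # Score a candidate mixed-factor design by the maximum absolute pairwise
        # correlation among second-order model terms (two-way interactions and
        # quadratics). The toy design and its coding are illustrative assumptions.
        import itertools
        import numpy as np

        def second_order_terms(design):
            """Main-effect, two-way-interaction, and quadratic columns."""
            k = design.shape[1]
            cols = [design[:, j] for j in range(k)]                   # main effects
            cols += [design[:, i] * design[:, j]                      # interactions
                     for i, j in itertools.combinations(range(k), 2)]
            cols += [design[:, j] ** 2 for j in range(k)]             # quadratics
            return np.column_stack(cols)

        def max_abs_correlation(design):
            """Largest |correlation| between distinct model-term columns."""
            M = second_order_terms(design)
            M = M[:, M.std(axis=0) > 1e-12]        # drop constant columns
            C = np.corrcoef(M, rowvar=False)
            np.fill_diagonal(C, 0.0)
            return np.abs(C).max()

        # Toy 24-run design: two continuous factors scaled to [-1, 1], one
        # three-level discrete factor coded -1/0/1, and one two-level categorical
        # factor coded -1/+1.
        rng = np.random.default_rng(0)
        design = np.column_stack([
            rng.uniform(-1, 1, 24),
            rng.uniform(-1, 1, 24),
            rng.choice([-1, 0, 1], 24),
            rng.choice([-1, 1], 24),
        ])
        print("max |correlation| among second-order terms: %.3f"
              % max_abs_correlation(design))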

    Statistical Machine Learning Based Modeling Framework for Design Space Exploration and Run-Time Cross-Stack Energy Optimization for Many-Core Processors

    The complexity of many-core processors continues to grow as a larger number of heterogeneous cores are integrated on a single chip. Such systems-on-chip contain computing structures ranging from complex out-of-order cores, simple in-order cores, digital signal processors (DSPs), graphics processing units (GPUs), application-specific processors, hardware accelerators, I/O subsystems, and network-on-chip interconnects to large caches arranged in complex hierarchies. While the industry focus is on putting a higher number of cores on a single chip, the key challenge is to optimally architect these many-core processors so that performance, energy, and area constraints are satisfied. The traditional approach to processor design through extensive cycle-accurate simulation is ill-suited for designing many-core processors because of the large microarchitecture design space that must be explored. Additionally, it is hard to statically optimize such complex processors, and the applications that run on them, at design time so that performance and energy constraints are met under dynamically changing operating conditions. This dissertation establishes a statistical machine learning based modeling framework that enables the efficient design and operation of many-core processors that meet performance, energy, and area constraints. We apply the proposed framework to rapidly design the microarchitecture of a many-core processor for multimedia, computer graphics rendering, finance, and data mining applications derived from the PARSEC benchmark suite. We further demonstrate the application of the framework in the joint run-time adaptation of both the application and the microarchitecture such that energy availability constraints are met.
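    The sketch below illustrates the general surrogate-modeling idea behind such a framework, assuming an invented four-parameter design space, a synthetic stand-in for a cycle-accurate simulator, and a random-forest meta-model; none of these choices come from the dissertation itself.

        # Surrogate-driven design space exploration: "simulate" a small sample of
        # configurations, fit a meta-model, and rank the rest by predicted
        # energy-delay product (EDP). Parameter ranges, the synthetic simulator,
        # and the model choice are illustrative assumptions.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(1)

        # Candidate space: (core count, issue width, L2 size in MB, frequency in GHz).
        candidates = np.column_stack([
            rng.choice([4, 8, 16, 32, 64], 2000),
            rng.choice([1, 2, 4], 2000),
            rng.choice([1, 2, 4, 8], 2000),
            rng.uniform(1.0, 3.0, 2000),
        ])

        def simulate_edp(cfgs):
            """Stand-in for a cycle-accurate simulation returning a noisy EDP."""
            cores, width, l2, freq = cfgs.T
            delay = 1.0 / (cores ** 0.5 * width ** 0.3 * freq)
            energy = cores * freq ** 2 + 0.2 * l2
            return energy * delay * rng.normal(1.0, 0.02, len(cfgs))

        # Run the expensive "simulator" on a small training sample only.
        train_idx = rng.choice(len(candidates), 100, replace=False)
        surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
        surrogate.fit(candidates[train_idx], simulate_edp(candidates[train_idx]))

        # Rank every unsimulated configuration by predicted EDP.
        predicted = surrogate.predict(candidates)
        best = candidates[np.argsort(predicted)[:5]]
        print("predicted-best configurations (cores, width, L2 MB, GHz):")
        print(best)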

    Acta Cybernetica : Volume 17. Number 3.


    Multipartite entanglement and quantum algorithms

    Quantum information science has grown from a very small subfield in the 70s into one of the most dynamic fields in physics, both in its fundamentals and in its applications. On the theoretical side, perhaps the feature that has attracted most interest is the notion of entanglement, the ghostly relation between particles that dazzled Einstein and has posed formidable challenges to building a coherent interpretation of quantum mechanics. While the problem is not completely solved, we have learned enough to feel less uneasy with it, and the focus has shifted towards its potentially powerful applications. Entanglement is now studied from different perspectives as a resource for performing information-processing tasks. Bipartite entanglement is largely understood, but many questions remain unanswered in the multipartite case. The first part of this thesis deals with multipartite entanglement in different contexts. In the first chapters it is studied within the full corresponding Hilbert space: we investigate several entanglement measures and search for states that maximize them, studying the hyperdeterminant as an entanglement measure for four qubits, analysing the existence and mathematical properties of absolutely maximally entangled states, and finding new Bell inequalities and states that violate them. The focus then shifts towards Hamiltonians with entangled ground states: we investigate the entanglement spectrum as a way to establish a distance between theories, and we study frustration and tensor-network methods for efficiently solving Hamiltonians that exhibit it.
    On the practical side, the most promising upcoming technological advance is the advent of quantum computers. In the 90s, quantum algorithms outperforming all known classical algorithms for certain problems started to appear, and in the 2000s the first universal computers of a few atoms began to be built, allowing those algorithms to be implemented at small scales. The D-Wave machine already performs quantum annealing on thousands of qubits, although some controversy surrounds the true quantumness of its internal workings. Many countries are devoting large amounts of money to this field, and the recent European flagship and the involvement of the largest US technology companies give reasons for optimism. The second part of this thesis deals with some aspects of quantum computation, starting with the creation of the field of cloud quantum computation through the appearance of the first quantum computer available to the general public over the internet, which we have used and analysed extensively. Brief incursions into adiabatic quantum computation and quantum thermodynamics are also present in this second part.
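    As a small, self-contained illustration of what a multipartite Bell-inequality check looks like, the sketch below evaluates the three-qubit Mermin operator on a GHZ state; the state and operator are textbook choices rather than results from the thesis.

        # Evaluate the three-qubit Mermin operator on the GHZ state. Local
        # hidden-variable models bound the expectation value by 2; the GHZ state
        # attains the quantum maximum of 4.
        import numpy as np

        X = np.array([[0, 1], [1, 0]], dtype=complex)     # Pauli X
        Y = np.array([[0, -1j], [1j, 0]], dtype=complex)  # Pauli Y

        def kron(*ops):
            """Tensor product of single-qubit operators."""
            out = np.array([[1.0 + 0j]])
            for op in ops:
                out = np.kron(out, op)
            return out

        # Mermin operator M = XXX - XYY - YXY - YYX.
        M = kron(X, X, X) - kron(X, Y, Y) - kron(Y, X, Y) - kron(Y, Y, X)

        # GHZ state (|000> + |111>) / sqrt(2).
        ghz = np.zeros(8, dtype=complex)
        ghz[0] = ghz[7] = 1 / np.sqrt(2)

        value = np.real(ghz.conj() @ M @ ghz)
        print("<GHZ|M|GHZ> = %.1f (classical bound: 2)" % value)   # prints 4.0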

    Doctor of Philosophy

    Scale-bridging models are created to capture desired characteristics of high-fidelity models within low-fidelity model forms, allowing models to function at the required spatial and/or temporal scales. The development, analysis, and application of scale-bridging models are the focus of this dissertation. The applications dictating those scales are large-scale computational fluid dynamics codes. Three distinct scale-bridging models are presented. First, the development and validation of a multiple-polymorph particle-precipitation modeling framework for highly supersaturated CaCO3 systems is presented. This precipitation framework is validated against literature data and explored for additional avenues of validation and potential future applications. This is followed by an introduction to the concepts of validation and uncertainty quantification and an approach to credible simulation development based on those concepts, demonstrated through a spring-mass-damper pedagogical example. Bayesian statistical methods are commonly applied to validation and uncertainty quantification, and the well-known Kennedy-O'Hagan approach to model-form uncertainty is explored thoroughly using a chemical kinetics pedagogical example. Additional issues and ideas surrounding model-form uncertainty, such as the identification problem, are also considered. Bayesian methods are then applied to the creation of a scale-bridging model for coal particle heat capacity and enthalpy. Lastly, an alternative validation and uncertainty quantification technique, known as consistency testing, is used to create a scale-bridging model for coal particle devolatilization. The credibility of the devolatilization scale-bridging model, as conferred by the model development process, is assessed and found to have benefited from the use of validation and uncertainty quantification practices.
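    The sketch below is a highly simplified stand-in for this kind of calibration under model-form uncertainty: a first-order kinetics rate constant and a constant additive discrepancy term are inferred from synthetic data on a grid, whereas the Kennedy-O'Hagan formulation places a Gaussian-process prior on the discrepancy; the data, priors, and names here are invented for illustration.

        # Grid-based Bayesian calibration of a rate constant k for a first-order
        # kinetics model, with a constant additive discrepancy delta standing in
        # for the Gaussian-process discrepancy of the full Kennedy-O'Hagan
        # approach. All data and priors are synthetic.
        import numpy as np

        def model(t, k):
            """Low-fidelity model: first-order decay of a normalized concentration."""
            return np.exp(-k * t)

        # Synthetic "experimental" data from a slightly different truth plus noise.
        rng = np.random.default_rng(2)
        t_obs = np.linspace(0.1, 2.0, 15)
        y_obs = np.exp(-1.3 * t_obs) + 0.05 + rng.normal(0.0, 0.02, t_obs.size)

        sigma = 0.02                             # assumed observation noise
        k_grid = np.linspace(0.5, 2.5, 201)      # uniform prior over k
        d_grid = np.linspace(-0.2, 0.2, 81)      # uniform prior over discrepancy

        # Unnormalized log-posterior over the (k, delta) grid.
        log_post = np.zeros((k_grid.size, d_grid.size))
        for i, k in enumerate(k_grid):
            resid = y_obs - model(t_obs, k)[None, :] - d_grid[:, None]
            log_post[i] = -0.5 * np.sum((resid / sigma) ** 2, axis=1)

        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        k_mean = np.sum(post.sum(axis=1) * k_grid)
        d_mean = np.sum(post.sum(axis=0) * d_grid)
        print("posterior mean k = %.2f, posterior mean discrepancy = %.3f"
              % (k_mean, d_mean))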