36 research outputs found

    Trieste: Efficiently Exploring The Depths of Black-box Functions with TensorFlow

    Full text link
    We present Trieste, an open-source Python package for Bayesian optimization and active learning that benefits from the scalability and efficiency of TensorFlow. Our library enables the plug-and-play of popular TensorFlow-based models within sequential decision-making loops, e.g. Gaussian processes from GPflow or GPflux, or neural networks from Keras. This modular mindset is central to the package and extends to our acquisition functions and the internal dynamics of the decision-making loop, both of which can be tailored and extended by researchers or engineers when tackling custom use cases. Trieste is a research-friendly and production-ready toolkit backed by a comprehensive test suite and extensive documentation, and is available at https://github.com/secondmind-labs/trieste.
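
    As an illustration of the plug-and-play design described above, the sketch below follows the pattern of Trieste's documented quickstart: a GPflow surrogate is built from initial observations, wrapped for Trieste, and handed to the Bayesian optimization loop. The toy objective and all hyperparameters are illustrative rather than from the paper, and exact module paths may differ slightly across Trieste versions.

    ```python
    import tensorflow as tf
    from trieste.bayesian_optimizer import BayesianOptimizer
    from trieste.models.gpflow import GaussianProcessRegression, build_gpr
    from trieste.objectives.utils import mk_observer
    from trieste.space import Box

    # A toy objective to minimise; any TensorFlow-compatible function works.
    def objective(x: tf.Tensor) -> tf.Tensor:
        return tf.reduce_sum(tf.sin(3.0 * x) + x ** 2, axis=-1, keepdims=True)

    search_space = Box([-1.0, -1.0], [1.0, 1.0])
    observer = mk_observer(objective)

    # A few random initial observations to seed the GP surrogate.
    initial_data = observer(search_space.sample(5))

    # Wrap a GPflow regression model; build_gpr picks sensible defaults.
    model = GaussianProcessRegression(build_gpr(initial_data, search_space))

    # Run the sequential decision-making loop.
    bo = BayesianOptimizer(observer, search_space)
    result = bo.optimize(num_steps=15, datasets=initial_data, models=model)
    print(result.try_get_final_dataset())
    ```

    By default the loop uses an Expected Improvement acquisition rule; the modularity the abstract mentions means a custom acquisition rule, or a different model wrapper, can be passed to the same optimize call.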

    Project files accompanying the paper "An empirical evaluation of active inference in multi-armed bandits"

    No full text
    The preprint can be found at: https://arxiv.org/abs/2101.0869

    The collective dynamics of sequential search in markets for cultural products

    No full text

    Not everything looks like a nail: Learning to select appropriate decision strategies in multiple environments

    No full text
    How do people choose which decision strategy to use? When facing single tasks, research shows that people can learn to select appropriate strategies. However, what happens when, as is typical outside the psychological laboratory, they face multiple tasks? Participants were presented with two interleaved decision tasks, one from a nonlinear environment and the other from a linear environment. The environments were initially unknown, and participants had to learn their properties. Through cognitive modeling, we examined the types of strategies adopted in both tasks. Based on out-of-sample predictions, most participants adopted a cue-based strategy in the linear environment and an exemplar-based strategy in the nonlinear environment. A context-sensitive reinforcement learning model accounts for this process. Thus, people associated different strategies with different types of environments through a trial-and-error process and learned to flexibly switch between strategies as needed. This evidence further supports the strategy selection approach to decision making, which assumes that people pick and apply the strategies available to them according to task demands.
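
    The abstract does not spell out the model's equations, so the following is a hypothetical sketch of a context-sensitive reinforcement learning rule of the kind it describes: strategy values are tracked separately per environment, updated with a delta rule, and selected via softmax. All names, rewards, and parameter values here are illustrative, not the authors'.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    STRATEGIES = ["cue_based", "exemplar_based"]
    CONTEXTS = ["linear", "nonlinear"]

    # Q[context] holds the learned value of each strategy in that environment.
    Q = {c: np.zeros(len(STRATEGIES)) for c in CONTEXTS}
    ALPHA = 0.1  # learning rate (hypothetical)
    BETA = 3.0   # softmax inverse temperature (hypothetical)

    def choose_strategy(context: str) -> int:
        """Softmax choice over strategy values for the current environment."""
        prefs = np.exp(BETA * Q[context])
        return rng.choice(len(STRATEGIES), p=prefs / prefs.sum())

    def update(context: str, strategy: int, reward: float) -> None:
        """Delta-rule update: credit the chosen strategy in this context only."""
        Q[context][strategy] += ALPHA * (reward - Q[context][strategy])

    # Toy interleaved trials: cue-based pays off in the linear task,
    # exemplar-based in the nonlinear task.
    for trial in range(200):
        context = CONTEXTS[trial % 2]
        s = choose_strategy(context)
        good = (context == "linear") == (STRATEGIES[s] == "cue_based")
        update(context, s, reward=1.0 if good else 0.0)

    print({c: Q[c].round(2) for c in CONTEXTS})
    ```

    After enough interleaved trials, the value table diverges by context, so the softmax choice switches strategies as the environment alternates, which is the trial-and-error association process the abstract describes.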

    No adaptive strategy selection without outcome feedback

    No full text
    This work presents results from a project that aimed to replicate the findings of the Dieckmann and Rieskamp (2007) article on strategy selection in decision making, and to point out some crucial flaws in their experimental design. It was presented at CogSci 2013 and SPUDM 2013.