218 research outputs found

    Variational Bayesian inference for linear and logistic regression

    The article describes the model, derivation, and implementation of variational Bayesian inference for linear and logistic regression, both with and without automatic relevance determination. It has the dual function of acting as a tutorial on the derivation of variational Bayesian inference for simple models, and of documenting, and providing brief examples for, the MATLAB/Octave functions that implement this inference. These functions are freely available online. Comment: 28 pages, 6 figures
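
As a rough illustration of the kind of inference the article derives, the sketch below runs mean-field variational Bayes for Bayesian linear regression with Gamma hyperpriors on a shared weight precision and on the noise precision. The parameterisation, the hyperparameter values, and the use of Python/NumPy (rather than the article's MATLAB/Octave functions) are all assumptions made for illustration; automatic relevance determination would replace the single weight precision with one precision per weight, and logistic regression additionally requires a local bound on the likelihood.

        import numpy as np

        def vb_linear_regression(X, y, a0=1e-2, b0=1e-4, c0=1e-2, d0=1e-4, iters=50):
            # Mean-field VB for y ~ N(X w, 1/beta), w ~ N(0, I/alpha),
            # alpha ~ Gamma(a0, b0), beta ~ Gamma(c0, d0); the hyperparameter
            # values and this particular parameterisation are illustrative assumptions.
            N, D = X.shape
            XtX, Xty = X.T @ X, X.T @ y
            E_alpha, E_beta = a0 / b0, c0 / d0
            for _ in range(iters):
                S = np.linalg.inv(E_alpha * np.eye(D) + E_beta * XtX)   # q(w) covariance
                m = E_beta * S @ Xty                                     # q(w) mean
                aN, bN = a0 + 0.5 * D, b0 + 0.5 * (m @ m + np.trace(S))  # q(alpha) update
                E_alpha = aN / bN
                resid = y - X @ m
                cN, dN = c0 + 0.5 * N, d0 + 0.5 * (resid @ resid + np.trace(XtX @ S))  # q(beta) update
                E_beta = cN / dN
            return m, S, E_alpha, E_beta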

    Optimal decision bounds for probabilistic population codes and time varying evidence

    Decision making under time constraints requires the decision maker to trade off between making quick, inaccurate decisions and gathering more evidence for more accurate, but slower, decisions. We have previously shown that, under rather general settings, optimal behavior can be described by a time-dependent decision bound on the decision maker’s belief of being correct (Drugowitsch, Moreno-Bote, Pouget, 2009). In cases where the reliability of sensory information remains constant over time, we have shown how to design diffusion models (DMs) with time-changing boundaries that feature such behavior. Such theories can be easily mapped onto simple neural models of decision making with two perfectly anti-correlated neurons, where they predict the existence of a stopping bound on the most active neuron. It is, however, unclear how such a stopping bound would be implemented with more realistic neural population codes, particularly when the reliability of the evidence changes over time.
Here we show that, under certain realistic conditions, we can apply the theory of optimal decision making to the biologically more plausible probabilistic population codes (PPCs; Ma et al. 2006). Our analysis shows that, with population codes, the optimal decision bounds are a function of the activity of all neurons in the population, rather than, as previously postulated, a bound on its maximum activity alone. This theory predicts that the bound on the most active neurons would appear to shift depending on the firing rate of other neurons in the population, a puzzling behavior under the drift diffusion model, as it would wrongly suggest that subjects change their stopping rule across conditions. The theory also applies to the case of time-varying evidence, a case that cannot be handled by drift diffusion models without unrealistic assumptions.
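
To make the population-level bound concrete, the toy sketch below (an illustration under assumed tuning curves, gain, and a hypothetical collapsing bound, not the derivation summarised above) computes the belief of being correct from a whole vector of Poisson spike counts for two alternatives. Because the log posterior is linear in the full activity vector, a bound placed on this belief depends on the activity of every neuron, not just on the most active one.

        import numpy as np

        rng = np.random.default_rng(0)
        prefs = np.linspace(-3.0, 3.0, 20)            # hypothetical preferred stimuli

        def tuning(s, prefs, gain=10.0, width=1.0):
            # Illustrative Gaussian tuning curves with a small baseline rate.
            return gain * np.exp(-0.5 * ((s - prefs) / width) ** 2) + 0.5

        def belief_correct(r, prefs):
            # Log likelihood of each alternative under independent Poisson spiking;
            # for these symmetric alternatives the sum-of-rates term is identical
            # for both and cancels, leaving a term linear in the spike counts r.
            logL = np.array([np.sum(r * np.log(tuning(s, prefs))) for s in (-1.0, 1.0)])
            p = np.exp(logL - logL.max())
            return (p / p.sum()).max()                 # belief in the favoured alternative

        r = np.zeros_like(prefs)
        for t in range(50):                            # accumulate spikes; true stimulus s = +1
            r += rng.poisson(0.1 * tuning(1.0, prefs))
            if belief_correct(r, prefs) >= 0.99 - 0.02 * t:   # hypothetical collapsing bound
                print(f"decide at step {t}, belief = {belief_correct(r, prefs):.3f}")
                break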

    Maximizing decision rate in multisensory integration

    Effective decision-making in an uncertain world requires making use of all available information, even if distributed across different sensory modalities, as well as trading off the speed of a decision with its accuracy. In tasks with a fixed stimulus presentation time, animal and human subjects have previously been shown to combine information from several modalities in a statistically optimal manner. Furthermore, for easily discriminable stimuli and under the assumption that reaction times result from a race-to-threshold mechanism, multimodal reaction times are typically faster than predicted from unimodal conditions when assuming independent (parallel) races for each modality. However, due to a lack of adequate ideal observer models, it has remained unclear whether subjects perform optimal cue combination when they are allowed to choose their response times freely.
Based on data collected from human subjects performing a visual/vestibular heading discrimination task, we show that the subjects exhibit worse discrimination performance in the multimodal condition than predicted by standard cue combination criteria, which relate multimodal discrimination performance to sensitivity in the unimodal conditions. Furthermore, multimodal reaction times are slower than those predicted by a parallel race model, opposite to what is commonly observed for easily discriminable stimuli.
Despite this violation of the standard criteria for optimal cue combination, we show that subjects still accumulate evidence optimally across time and cues, even when the strength of the evidence varies with time. Additionally, subjects adjust their decision bounds, which control the trade-off between the speed and accuracy of a decision, such that they achieve correct decision rates close to the maximum achievable value.
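
For reference, the standard cue combination criteria mentioned above predict that, under statistically optimal integration, the unimodal variances combine harmonically, so the bimodal discrimination threshold should fall below both unimodal thresholds. The sketch below only illustrates that arithmetic with made-up numbers; it does not use the study's data.

        import numpy as np

        # Hypothetical unimodal heading-discrimination thresholds (degrees).
        sigma_vis, sigma_vest = 2.0, 3.0

        # Optimal-integration prediction:
        #   sigma_comb^2 = sigma_vis^2 * sigma_vest^2 / (sigma_vis^2 + sigma_vest^2)
        sigma_comb = np.sqrt(sigma_vis**2 * sigma_vest**2 /
                             (sigma_vis**2 + sigma_vest**2))
        print(f"predicted bimodal threshold: {sigma_comb:.2f} deg")   # about 1.66 deg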

    Learning classifier systems from first principles: A probabilistic reformulation of learning classifier systems from the perspective of machine learning

    Learning Classifier Systems (LCS) are a family of rule-based machine learning methods. They aim at the autonomous production of potentially human-readable results that form the most compact generalised representation while maintaining high predictive accuracy, and they have a wide range of application areas, such as autonomous robotics, economics, and multi-agent systems. Their design is mainly approached heuristically and, even though their performance is competitive in regression and classification tasks, they do not meet their expected performance in sequential decision tasks, despite having been initially designed for such tasks. It is our contention that improvement is hindered by a lack of theoretical understanding of their underlying mechanisms and dynamics.

    Learning classifier systems from first principles

    Generalised mixtures of experts, independent expert training, and learning classifier systems
