Use of computer modeling to investigate a dynamic interaction problem in the Skylab TACS quad-valve package
A valve opening-response problem encountered during development of a control valve for the Skylab thruster attitude control system (TACS) is described. The problem involved effects of dynamic interaction among valves in the quad-redundant valve package. Also described is a detailed computer simulation of the quad-valve package which was helpful in resolving the problem.
A Tutorial on Bayesian Nonparametric Models
A key problem in statistical modeling is model selection, how to choose a
model at an appropriate level of complexity. This problem appears in many
settings, most prominently in choosing the number of clusters in mixture models
or the number of factors in factor analysis. In this tutorial we describe
Bayesian nonparametric methods, a class of methods that side-steps this issue
by allowing the data to determine the complexity of the model. This tutorial is
a high-level introduction to Bayesian nonparametric methods and contains
several examples of their application. Comment: 28 pages, 8 figures
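The core idea — letting the data determine model complexity — can be illustrated with the Chinese restaurant process, a standard Bayesian nonparametric prior over partitions in which the number of clusters grows with the data rather than being fixed in advance. The sketch below is illustrative only and is not taken from the tutorial; the concentration parameter `alpha` is a conventional name.

```python
import random

def crp_partition(n, alpha=1.0, seed=0):
    """Sample a partition of n items from the Chinese restaurant process.

    Item i joins existing cluster k with probability counts[k] / (i + alpha)
    and opens a new cluster with probability alpha / (i + alpha), so the
    number of clusters is not fixed in advance but grows with the data.
    """
    rng = random.Random(seed)
    assignments = []   # cluster index of each item
    counts = []        # current size of each cluster
    for i in range(n):
        weights = counts + [alpha]          # last slot = "new cluster"
        r = rng.uniform(0, sum(weights))
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(counts):
            counts.append(1)                # open a new cluster
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments

# More data typically yields more clusters, but the count is inferred, not chosen.
partition = crp_partition(20, alpha=2.0)
```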
Updated, expanded, fluid properties handbook
Revised handbook presents quantitative data, in the form of graphs and charts, pertaining to thermodynamic properties of specific cryogenic fluids and several metals. References to sources of data are cited.
What does the free energy principle tell us about the brain?
The free energy principle has been proposed as a unifying account of brain
function. It is closely related, and in some cases subsumes, earlier unifying
ideas such as Bayesian inference, predictive coding, and active learning. This
article clarifies these connections, teasing apart distinctive and shared
predictions. Comment: Accepted for publication in Neurons, Behavior, Data Analysis, and
Theory.
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in
building systems that learn and think like people. Many advances have come from
using deep neural networks trained end-to-end in tasks such as object
recognition, video games, and board games, achieving performance that equals or
even beats humans in some respects. Despite their biological inspiration and
performance achievements, these systems differ from human intelligence in
crucial ways. We review progress in cognitive science suggesting that truly
human-like learning and thinking machines will have to reach beyond current
engineering trends in both what they learn, and how they learn it.
Specifically, we argue that these machines should (a) build causal models of
the world that support explanation and understanding, rather than merely
solving pattern recognition problems; (b) ground learning in intuitive theories
of physics and psychology, to support and enrich the knowledge that is learned;
and (c) harness compositionality and learning-to-learn to rapidly acquire and
generalize knowledge to new tasks and situations. We suggest concrete
challenges and promising routes towards these goals that can combine the
strengths of recent neural network advances with more structured cognitive
models. Comment: In press at Behavioral and Brain Sciences. Open call for commentary
proposals (until Nov. 22, 2016).
https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar
Representation learning with reward prediction errors
The Reward Prediction Error hypothesis proposes that phasic activity in the
midbrain dopaminergic system reflects prediction errors needed for learning in
reinforcement learning. Besides the well-documented association between
dopamine and reward processing, dopamine is implicated in a variety of
functions without a clear relationship to reward prediction error. Fluctuations
in dopamine levels influence the subjective perception of time, dopamine bursts
precede the generation of motor responses, and the dopaminergic system
innervates regions of the brain, including hippocampus and areas in prefrontal
cortex, whose function is not uniquely tied to reward. In this manuscript, we
propose that a common theme linking these functions is representation, and that
prediction errors signaled by the dopamine system, in addition to driving
associative learning, can also support the acquisition of adaptive state
representations. In a series of simulations, we show how this extension can
account for the role of dopamine in temporal and spatial representation, motor
response, and abstract categorization tasks. By extending the role of dopamine
signals to learning state representations, we resolve a critical challenge to
the Reward Prediction Error hypothesis of dopamine function.
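The reward prediction error at the heart of this hypothesis is the temporal-difference (TD) error of reinforcement learning: the mismatch between received and predicted value, which the dopamine signal is proposed to carry. The following is a minimal TD(0) sketch under conventional assumptions (learning rate `alpha`, discount `gamma`, and the two-state cue/reward setup are illustrative choices, not code from the paper).

```python
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.9, terminal=False):
    """One TD(0) update; delta is the reward prediction error."""
    target = r if terminal else r + gamma * V[s_next]
    delta = target - V[s]      # positive: better than expected
    V[s] += alpha * delta
    return delta

# A cue state that reliably precedes a rewarded state: with learning, value
# propagates back to the cue and the prediction error at reward time vanishes,
# mirroring the classic shift of phasic dopamine from reward to cue.
V = {"cue": 0.0, "reward": 0.0}
for _ in range(500):
    td_update(V, "cue", 0.0, "reward")
    td_update(V, "reward", 1.0, None, terminal=True)
```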
Where do hypotheses come from?
Why are human inferences sometimes remarkably close to the Bayesian ideal and other times systematically biased? One notable instance of this discrepancy is that tasks in which the candidate hypotheses are explicitly available yield close-to-rational inference over the hypothesis space, whereas tasks requiring the self-generation of hypotheses produce systematic deviations from rational inference. We propose that these deviations arise from algorithmic processes that approximate Bayes' rule. Specifically, in our account, hypotheses are generated stochastically from a sampling process, such that the sampled hypotheses form a Monte Carlo approximation of the posterior. While this approximation converges to the true posterior in the limit of infinite samples, we assume a small number of samples, since the number of samples humans take is limited by time pressure and cognitive resource constraints. We show that this model recreates several well-documented experimental findings, including anchoring and adjustment, subadditivity, superadditivity, the crowd within, the self-generation effect, the weak-evidence effect, and the dud-alternative effect. Additionally, in two experiments, we confirm the model's prediction that superadditivity and subadditivity can be induced within the same paradigm by manipulating the unpacking and typicality of hypotheses. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.
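The sampling account described above can be sketched in a few lines: a reasoner who sees only a handful of posterior samples judges a hypothesis's probability by its frequency among those samples, which converges to the true posterior as samples grow but is noisy and biased when they are few. The toy posterior, hypothesis names, and sample counts below are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import Counter

# A known toy posterior over four hypotheses; the "reasoner" never sees these
# probabilities directly, only samples drawn from them.
posterior = {"h1": 0.5, "h2": 0.3, "h3": 0.15, "h4": 0.05}

def sampled_estimate(n_samples, seed=0):
    """Monte Carlo estimate of the posterior from n_samples draws."""
    rng = random.Random(seed)
    hyps, probs = zip(*posterior.items())
    samples = rng.choices(hyps, weights=probs, k=n_samples)
    freq = Counter(samples)
    return {h: freq[h] / n_samples for h in hyps}

few = sampled_estimate(5)       # resource-limited reasoner: noisy, often
                                # assigns zero to rarely sampled hypotheses
many = sampled_estimate(10000)  # approaches the true posterior
```

With only a few samples, low-probability hypotheses are frequently never generated at all, which is the flavor of distortion the sampling account uses to explain effects such as subadditivity.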