The Fine-Tuning Argument
Our laws of nature and our cosmos appear to be delicately fine-tuned for life
to emerge, in a way that seems hard to attribute to chance. In view of this,
some have taken the opportunity to revive the scholastic Argument from Design,
whereas others have felt the need to explain this apparent fine-tuning of the
clockwork of the Universe by proposing the existence of a `Multiverse'. We
analyze this issue from a sober perspective. Having reviewed the literature and
having added several observations of our own, we conclude that cosmic
fine-tuning supports neither Design nor a Multiverse, since both of these fail
at an explanatory level as well as in a more quantitative context of Bayesian
confirmation theory (although there might be other reasons to believe in these
ideas, to be found in religion and in inflation and/or string theory,
respectively). In fact, fine-tuning and Design even seem to be at odds with
each other, whereas the inference from fine-tuning to a Multiverse only works
if the latter is underwritten by an additional metaphysical hypothesis we
consider unwarranted. Instead, we suggest that fine-tuning requires no special
explanation at all, since it is not the Universe that is fine-tuned for life,
but life that has been fine-tuned to the Universe.
Comment: 16 pages, written for a general audience
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in
building systems that learn and think like people. Many advances have come from
using deep neural networks trained end-to-end in tasks such as object
recognition, video games, and board games, achieving performance that equals or
even beats humans in some respects. Despite their biological inspiration and
performance achievements, these systems differ from human intelligence in
crucial ways. We review progress in cognitive science suggesting that truly
human-like learning and thinking machines will have to reach beyond current
engineering trends in both what they learn, and how they learn it.
Specifically, we argue that these machines should (a) build causal models of
the world that support explanation and understanding, rather than merely
solving pattern recognition problems; (b) ground learning in intuitive theories
of physics and psychology, to support and enrich the knowledge that is learned;
and (c) harness compositionality and learning-to-learn to rapidly acquire and
generalize knowledge to new tasks and situations. We suggest concrete
challenges and promising routes towards these goals that can combine the
strengths of recent neural network advances with more structured cognitive
models.
Comment: In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar
The propositional nature of human associative learning
The past 50 years have seen an accumulation of evidence suggesting that associative learning depends on high-level cognitive processes that give rise to propositional knowledge. Yet, many learning theorists maintain a belief in a learning mechanism in which links between mental representations are formed automatically. We characterize and highlight the differences between the propositional and link approaches, and review the relevant empirical evidence. We conclude that learning is the consequence of propositional reasoning processes that cooperate with the unconscious processes involved in memory retrieval and perception. We argue that this new conceptual framework allows many of the important recent advances in associative learning research to be retained, but recast in a model that provides a firmer foundation for both immediate application and future research.
Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future
Clark offers a powerful description of the brain as a prediction machine, one that promises progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).
Simple trees in complex forests: Growing Take The Best by Approximate Bayesian Computation
How can heuristic strategies emerge from smaller building blocks? We propose
Approximate Bayesian Computation as a computational solution to this problem.
As a first proof of concept, we demonstrate how a heuristic decision strategy
such as Take The Best (TTB) can be learned from smaller, probabilistically
updated building blocks. Based on a self-reinforcing sampling scheme, different
building blocks are combined and, over time, tree-like non-compensatory
heuristics emerge. This new algorithm, coined Approximately Bayesian Computed
Take The Best (ABC-TTB), is able to recover a data set that was generated by
TTB, leads to sensible inferences about cue importance and cue directions, can
outperform traditional TTB, and allows performance and computational effort to
be traded off explicitly.
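The Take The Best heuristic the abstract builds on has a simple lexicographic form: cues are checked in order of validity, and the first cue that discriminates between the two options decides the choice. The following is a minimal sketch of plain TTB (not the paper's ABC-TTB learning algorithm); all cue names, values, and directions below are invented for illustration.

```python
def take_the_best(option_a, option_b, cue_order, cue_directions):
    """Return 'A', 'B', or 'guess' for a paired comparison.

    option_a / option_b : dicts mapping cue name -> 0/1 cue value
    cue_order           : cue names sorted by descending validity
    cue_directions      : +1 if the cue points to the larger criterion,
                          -1 if it points to the smaller one
    """
    for cue in cue_order:
        diff = option_a[cue] - option_b[cue]
        if diff != 0:                        # first discriminating cue decides
            return "A" if diff * cue_directions[cue] > 0 else "B"
    return "guess"                           # no cue discriminates

# Hypothetical example: which of two cities is larger?
cues = ["capital", "airport", "university"]           # validity order
dirs = {"capital": +1, "airport": +1, "university": +1}
city_a = {"capital": 1, "airport": 1, "university": 0}
city_b = {"capital": 0, "airport": 1, "university": 1}
print(take_the_best(city_a, city_b, cues, dirs))      # -> A
```

Note the non-compensatory character the abstract mentions: once "capital" discriminates, the remaining cues are never consulted, so no combination of lower-validity cues can overturn the decision.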
Usage-based and emergentist approaches to language acquisition
It was long considered to be impossible to learn grammar based on linguistic experience alone. In the past decade, however, advances in usage-based linguistic theory, computational linguistics, and developmental psychology have changed the view on this matter. So-called usage-based and emergentist approaches to language acquisition state that language can be learned from language use itself, by means of social skills like joint attention, and by means of powerful generalization mechanisms. This paper first summarizes the assumptions regarding the nature of linguistic representations and processing. Usage-based theories are nonmodular and nonreductionist, i.e., they emphasize form-function relationships and deal with all of language, not just selected levels of representation. Furthermore, storage and processing are considered to be analytic as well as holistic, such that there is a continuum between children's unanalyzed chunks and the abstract units found in adult language. In the second part, the empirical evidence is reviewed. Children's linguistic competence is shown to be limited initially, and it is demonstrated how children can generalize knowledge based on direct and indirect positive evidence. It is argued that with these general learning mechanisms, the usage-based paradigm can be extended to multilingual language situations and to language acquisition under special circumstances.
A Goal-Directed Bayesian Framework for Categorization
Categorization is a fundamental ability for efficient behavioral control. It allows organisms to remember the correct responses to categorical cues rather than to every stimulus encountered (hence eluding computational cost or complexity), and to generalize appropriate responses to novel stimuli dependent on category assignment. Assuming the brain performs Bayesian inference, based on a generative model of the external world and future goals, we propose a computational model of categorization in which important properties emerge. These properties comprise the ability to infer latent causes of sensory experience, a hierarchical organization of latent causes, and an explicit inclusion of context and action representations. Crucially, these aspects derive from considering the environmental statistics that are relevant to achieve goals, and from the fundamental Bayesian principle that any generative model should be preferred over alternative models based on an accuracy-complexity trade-off. Our account is a step toward elucidating computational principles of categorization and its role within the Bayesian brain hypothesis.
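The core inference step the abstract assumes — a posterior over latent categories given an observation — reduces to Bayes' rule. The sketch below illustrates only that step, not the paper's full goal-directed model; the category names, priors, and likelihoods are invented numbers.

```python
def category_posterior(priors, likelihoods):
    """Posterior P(category | observation) via Bayes' rule.

    priors      : dict mapping category -> P(category)
    likelihoods : dict mapping category -> P(observation | category)
    """
    unnorm = {c: priors[c] * likelihoods[c] for c in priors}
    z = sum(unnorm.values())                 # evidence P(observation)
    return {c: v / z for c, v in unnorm.items()}

# Hypothetical example: classifying a noise as predator vs. prey.
priors = {"predator": 0.1, "prey": 0.9}
lik = {"predator": 0.8, "prey": 0.2}        # P(loud rustling | category)
post = category_posterior(priors, lik)
# post["predator"] = 0.08 / (0.08 + 0.18) ~= 0.308
```

Even a weak prior on "predator" is substantially revised by a likelihood that favors it, which is the sense in which category assignment here tracks environmental statistics rather than raw stimulus identity.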