22,310 research outputs found
Building machines that adapt and compute like brains
Building machines that learn and think like humans is essential not only for
cognitive science, but also for computational neuroscience, whose ultimate goal
is to understand how cognition is implemented in biological brains. A new
cognitive computational neuroscience should build cognitive-level and neural-
level models, understand their relationships, and test both types of models
with both brain and behavioral data.
Comment: Commentary on: Lake BM, Ullman TD, Tenenbaum JB, Gershman SJ. (2017)
Building machines that learn and think like people. Behavioral and Brain
Sciences, 4
Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future
Clark offers a powerful description of the brain as a prediction machine, one that advances the field on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, on a concrete descriptive level, hierarchical prediction offers a way to test and constrain the conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).
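The prediction-error loop this abstract invokes can be pictured with a minimal sketch: a single unit holds an internal estimate, predicts its input, and updates the estimate in proportion to the prediction error. This is an illustrative toy, not the model from the commentary; the function name and learning rate are hypothetical.

```python
def predictive_coding_step(mu, x, lr=0.1):
    """One update of a minimal predictive coding unit.

    The unit predicts the input x with its internal estimate mu,
    computes the prediction error, and nudges mu to reduce it.
    """
    error = x - mu          # prediction error (input minus prediction)
    mu = mu + lr * error    # update the internal model toward the input
    return mu, error

# Repeated exposure to a constant input drives the error toward zero.
mu = 0.0
for _ in range(100):
    mu, err = predictive_coding_step(mu, 1.0)
print(round(mu, 3), round(err, 3))  # → 1.0 0.0
```

In a hierarchical version of this sketch, each layer would play the role of the input for the layer above it, with feedback connections carrying the predictions downward.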
Take another little piece of my heart: a note on bridging cognition and emotions
Science urges philosophy to be more empirical, and philosophy urges science to be more reflective. This exchange was especially marked during the “discovery of the artificial” (CORDESCHI 2002): in the early days of Cybernetics and Artificial Intelligence (AI), researchers aimed to make machines more cognizant while setting up a framework to better understand human intelligence.
By and large, those goals still hold today, although AI has become more concerned with specific aspects of intelligence, such as (machine) learning, reasoning, vision, and action. As a result, the field suffers from a chasm between two formerly integrated aspects. One is the engineering endeavour of developing tools, e.g., autonomous systems for driving cars and software for semantic information retrieval. The other is the philosophical debate over the nature of intelligence. Bridging these two levels could be crucial to developing a deeper understanding of minds.
An opportunity might be offered by the cogent theme of emotions. Traditionally, research in computer science, psychology, and philosophy has tended to investigate mental processes that do not involve mood, emotions, and feelings, in spite of Simon’s early caveat (SIMON 1967) that a general theory of cognition must incorporate the influences of emotion.
Given recent neurobiological findings and technological advances, the time is ripe to seriously weigh this promising, albeit controversial, opportunity.
Masking: A New Perspective of Noisy Supervision
It is important to learn classifiers of various types from training data
with noisy labels. In the most popular noise model to date, noisy labels are
corrupted from ground-truth labels by an unknown noise transition matrix. Thus,
by estimating this matrix, classifiers can escape from overfitting those noisy
labels. However, such estimation is practically difficult, due either to the
indirect nature of two-step approaches, or to data too scarce to afford
end-to-end approaches. In this paper, we propose a human-assisted approach
called Masking that conveys human cognition of invalid class transitions and
naturally speculates the structure of the noise transition matrix. To this end,
we derive a structure-aware probabilistic model that incorporates a structure
prior, and address the challenges of structure extraction and structure
alignment. Thanks to Masking, we only estimate the unmasked noise transition
probabilities and the burden of estimation is tremendously reduced. We conduct
extensive experiments on CIFAR-10 and CIFAR-100 with three noise structures as
well as the industrial-level Clothing1M with agnostic noise structure, and the
results show that Masking can improve the robustness of classifiers
significantly.
Comment: NIPS 2018 camera-ready version
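The class-conditional noise model in this abstract can be illustrated with a small sketch: labels are flipped according to the rows of a transition matrix T, and a known two-step remedy (forward loss correction, not the paper's Masking method) scores a classifier's outputs after passing them through T. The 3-class matrix and all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-class noise transition matrix T, where
# T[i, j] = P(noisy label = j | true label = i).
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

def corrupt(y_true, T, rng):
    """Draw noisy labels row-wise from T, as in class-conditional noise."""
    return np.array([rng.choice(len(T), p=T[y]) for y in y_true])

def forward_corrected_nll(probs, y_noisy, T):
    """Forward correction: score the predicted *noisy*-label distribution.

    Multiplying clean-label probabilities by T turns them into noisy-label
    probabilities, so the loss is computed against the labels we actually
    observe. Masking's structure prior would additionally fix the entries
    of T that humans deem invalid at zero, so they need not be estimated.
    """
    noisy_probs = probs @ T  # predicted distribution over noisy labels
    return -np.mean(np.log(noisy_probs[np.arange(len(y_noisy)), y_noisy]))

y_true = rng.integers(0, 3, size=1000)
y_noisy = corrupt(y_true, T, rng)
print(np.mean(y_noisy != y_true))  # roughly 20% of labels are flipped

# Even an oracle classifier incurs a positive corrected loss on noisy labels.
probs = np.eye(3)[y_true]
print(forward_corrected_nll(probs, y_noisy, T))
```

The point of the sketch is the role of T: once its structure is known (or masked out), only the remaining entries must be estimated from data.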
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in
building systems that learn and think like people. Many advances have come from
using deep neural networks trained end-to-end in tasks such as object
recognition, video games, and board games, achieving performance that equals or
even beats humans in some respects. Despite their biological inspiration and
performance achievements, these systems differ from human intelligence in
crucial ways. We review progress in cognitive science suggesting that truly
human-like learning and thinking machines will have to reach beyond current
engineering trends in both what they learn, and how they learn it.
Specifically, we argue that these machines should (a) build causal models of
the world that support explanation and understanding, rather than merely
solving pattern recognition problems; (b) ground learning in intuitive theories
of physics and psychology, to support and enrich the knowledge that is learned;
and (c) harness compositionality and learning-to-learn to rapidly acquire and
generalize knowledge to new tasks and situations. We suggest concrete
challenges and promising routes towards these goals that can combine the
strengths of recent neural network advances with more structured cognitive
models.
Comment: In press at Behavioral and Brain Sciences. Open call for commentary
proposals (until Nov. 22, 2016).
https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar
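Point (c) of this abstract, compositionality and learning-to-learn, can be illustrated in miniature: a learner that already knows a few primitive operations can handle a novel task by composing them rather than learning the composite from scratch. This toy is my own illustration, not from the paper; all names are hypothetical.

```python
# Compositionality in miniature: combine known primitives to solve a new task.
def compose(*fs):
    """Return a function applying fs left to right."""
    def composed(x):
        for f in fs:
            x = f(x)
        return x
    return composed

primitives = {
    "double": lambda x: 2 * x,
    "increment": lambda x: x + 1,
}

# A "new task" described compositionally: increment, then double.
new_task = compose(primitives["increment"], primitives["double"])
print(new_task(3))  # → 8, i.e. (3 + 1) * 2
```

The generalization comes for free: any sequence over the primitive vocabulary yields a valid new task, which is the rapid-acquisition behavior the abstract attributes to compositional learners.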