Solving Bongard Problems with a Visual Language and Pragmatic Reasoning
More than 50 years ago, Bongard introduced 100 visual concept learning
problems as a testbed for intelligent vision systems. These problems are now
known as Bongard problems. Although they are well known in the cognitive
science and AI communities, only moderate progress has been made towards
building systems that can solve a substantial subset of them. In the system
presented here, visual features are extracted through image processing and then
translated into a symbolic visual vocabulary. We introduce a formal language
that allows representing complex visual concepts based on this vocabulary.
Using this language and Bayesian inference, complex visual concepts can be
induced from the examples that are provided in each Bongard problem. Contrary
to other concept learning problems, the examples from which concepts are induced
in Bongard problems are not random; instead, they are carefully chosen to
communicate the concept, hence requiring pragmatic reasoning. Taking pragmatic
reasoning into account, we find good agreement between the concepts with high
posterior probability and the solutions formulated by Bongard himself. While
this approach is far from solving all Bongard problems, it solves the largest
fraction yet.
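To make the idea of Bayesian concept induction over a symbolic visual vocabulary concrete, here is a minimal, hypothetical sketch in Python. The feature dictionaries, concept names, and the simple separation-based likelihood are illustrative assumptions, not the authors' system, and the pragmatic component is only indicated in a comment.

```python
# Toy Bongard-style problem: panels on the left share a concept that no
# right-hand panel satisfies. A real problem has six panels per side.
left  = [{"shape": "triangle", "filled": True},
         {"shape": "triangle", "filled": False}]
right = [{"shape": "circle", "filled": True},
         {"shape": "circle", "filled": False}]

# Hypothesis space: candidate concepts expressed over the symbolic vocabulary.
concepts = {
    "is_triangle": lambda x: x["shape"] == "triangle",
    "is_circle":   lambda x: x["shape"] == "circle",
    "is_filled":   lambda x: x["filled"],
}

def likelihood(concept):
    # P(examples | concept): high if the concept holds for every left panel
    # and for no right panel. A pragmatic model would additionally reward
    # concepts whose examples look deliberately chosen to rule out rivals.
    separates = all(concept(x) for x in left) and not any(concept(x) for x in right)
    return 1.0 if separates else 1e-6

prior = {name: 1.0 / len(concepts) for name in concepts}  # uniform prior
scores = {name: prior[name] * likelihood(fn) for name, fn in concepts.items()}
total = sum(scores.values())
posterior = {name: s / total for name, s in scores.items()}

print(max(posterior, key=posterior.get))  # -> is_triangle
```

In this toy setting only one concept separates the two sides, so it receives essentially all of the posterior mass; the paper's formal visual language plays the role of the hand-written lambda hypotheses above.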
Normalized Information Distance
The normalized information distance is a universal distance measure for
objects of all kinds. It is based on Kolmogorov complexity and thus
uncomputable, but there are ways to utilize it. First, compression algorithms
can be used to approximate the Kolmogorov complexity if the objects have a
string representation. Second, for names and abstract concepts, page count
statistics from the World Wide Web can be used. These practical realizations of
the normalized information distance can then be applied to machine learning
tasks, especially clustering, to perform feature-free and parameter-free data
mining. This chapter discusses the theoretical foundations of the normalized
information distance and both practical realizations. It presents numerous
examples of successful real-world applications based on these distance
measures, ranging from bioinformatics to music clustering to machine
translation.
Comment: 33 pages, 12 figures, pdf. Chapter "Normalized Information Distance",
in: Information Theory and Statistical Learning, Eds. M. Dehmer, F.
Emmert-Streib, Springer-Verlag, New York. To appear.
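As a rough illustration of the compression-based realization described above (the normalized compression distance), the following Python sketch uses the standard-library zlib compressor as a stand-in for a stronger real-world compressor; the example strings are invented for demonstration.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: approximates the normalized
    information distance by replacing Kolmogorov complexity with the
    compressed length C(.) of a real compressor (zlib here):
        NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
    """
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy usage: strings sharing structure compress well together and therefore
# come out "closer" than unrelated ones.
s1 = b"the quick brown fox jumps over the lazy dog" * 20
s2 = b"the quick brown fox jumps over the lazy cat" * 20
s3 = b"0123456789abcdefghijklmnopqrstuvwxyz" * 20
print(ncd(s1, s2))  # relatively small distance
print(ncd(s1, s3))  # larger distance
```

The web-based realization mentioned in the abstract follows the same pattern but estimates the relevant quantities from search-engine page counts for names and abstract concepts instead of compressed lengths.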
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in
building systems that learn and think like people. Many advances have come from
using deep neural networks trained end-to-end in tasks such as object
recognition, video games, and board games, achieving performance that equals or
even beats humans in some respects. Despite their biological inspiration and
performance achievements, these systems differ from human intelligence in
crucial ways. We review progress in cognitive science suggesting that truly
human-like learning and thinking machines will have to reach beyond current
engineering trends in both what they learn, and how they learn it.
Specifically, we argue that these machines should (a) build causal models of
the world that support explanation and understanding, rather than merely
solving pattern recognition problems; (b) ground learning in intuitive theories
of physics and psychology, to support and enrich the knowledge that is learned;
and (c) harness compositionality and learning-to-learn to rapidly acquire and
generalize knowledge to new tasks and situations. We suggest concrete
challenges and promising routes towards these goals that can combine the
strengths of recent neural network advances with more structured cognitive
models.
Comment: In press at Behavioral and Brain Sciences. Open call for commentary
proposals (until Nov. 22, 2016).
https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar