Computational and Robotic Models of Early Language Development: A Review
We review computational and robotics models of early language learning and
development. We first explain why and how these models are used to understand
better how children learn language. We argue that they provide concrete
theories of language learning as a complex dynamic system, complementing
traditional methods in psychology and linguistics. We review different modeling
formalisms, grounded in techniques from machine learning and artificial
intelligence such as Bayesian and neural network approaches. We then discuss
their role in understanding several key mechanisms of language development:
cross-situational statistical learning, embodiment, situated social
interaction, intrinsically motivated learning, and cultural evolution. We
conclude by discussing future challenges for research, including modeling of
large-scale empirical data about language acquisition in real-world
environments.
Keywords: Early language learning, Computational and robotic models, machine
learning, development, embodiment, social interaction, intrinsic motivation,
self-organization, dynamical systems, complexity.
Comment: to appear in International Handbook on Language Development, ed. J. Horst and J. von Koss Torkildsen, Routledge
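One of the mechanisms the review covers, cross-situational statistical learning, can be illustrated with a minimal co-occurrence-counting sketch. This is a generic toy illustration, not one of the specific models reviewed in the chapter; the scenes, words, and referents are invented for the example.

```python
from collections import defaultdict

# Toy cross-situational learner: each scene pairs heard words with
# candidate referents; word-referent co-occurrence counts accumulate
# across ambiguous scenes until the mapping disambiguates.
counts = defaultdict(lambda: defaultdict(int))

scenes = [
    (["ball", "dog"], ["BALL", "DOG"]),
    (["ball", "cup"], ["BALL", "CUP"]),
    (["dog", "cup"], ["DOG", "CUP"]),
]

for words, referents in scenes:
    for w in words:
        for r in referents:
            counts[w][r] += 1  # every word co-occurs with every referent

def best_referent(word):
    # Pick the referent most frequently co-present with the word.
    return max(counts[word], key=counts[word].get)

print(best_referent("ball"))  # "BALL": co-occurs twice, DOG and CUP once each
```

No single scene identifies any mapping, but the correct pairings win across scenes, which is the core statistical insight behind this family of models.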
Literal Perceptual Inference
In this paper, I argue that theories of perception that appeal to Helmholtz’s idea of unconscious inference (“Helmholtzian” theories) should be taken literally, i.e. that the inferences appealed to in such theories are inferences in the full sense of the term, as employed elsewhere in philosophy and in ordinary discourse.
In the course of the argument, I consider constraints on inference based on the idea that inference is a deliberate action, and on the idea that inferences depend on the syntactic structure of representations. I argue that inference is a personal-level but sometimes unconscious process that cannot in general be distinguished from association on the basis of the structures of the representations over which it's defined. I also critique arguments against representationalist interpretations of Helmholtzian theories, and argue against the view that perceptual inference is encapsulated in a module.
The role of representation in Bayesian reasoning: Correcting common misconceptions
The terms nested sets, partitive frequencies, inside-outside view, and dual processes add little but confusion to our original analysis (Gigerenzer & Hoffrage 1995; 1999). The idea of nested sets was introduced because of an oversight; it simply rephrases two of our equations. Representation in terms of chances, in contrast, is a novel contribution, yet consistent with our computational analysis - it uses exactly the same numbers as natural frequencies. We show that non-Bayesian reasoning in children, laypeople, and physicians follows multiple rules rather than a general-purpose associative process in a vaguely specified "System 1." It is unclear what the theory in "dual process theory" is: unless the two processes are defined, this distinction can account post hoc for almost everything. In contrast, an ecological view of cognition helps to explain how insight is elicited from the outside (the external representation of information) and, more generally, how cognitive strategies match with environmental structure.
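The claim that natural frequencies use "exactly the same numbers" as a probability representation can be checked with a short computation. The numbers below are the classic mammography problem often used in this literature, chosen here only for illustration, not taken from the target article.

```python
# Same Bayesian posterior computed in two representations.

# Probability format: P(disease | positive test) via Bayes' rule.
base_rate, sensitivity, false_alarm = 0.01, 0.80, 0.096
posterior = (sensitivity * base_rate) / (
    sensitivity * base_rate + false_alarm * (1 - base_rate)
)

# Natural-frequency format: think through 1000 concrete cases.
n = 1000
sick = round(n * base_rate)                  # 10 women have the disease
hits = round(sick * sensitivity)             # 8 of them test positive
false_pos = round((n - sick) * false_alarm)  # 95 healthy women test positive
posterior_freq = hits / (hits + false_pos)   # 8 / 103

print(round(posterior, 3), round(posterior_freq, 3))  # both ~0.078
```

The two formats agree numerically; the psychological argument is that the frequency version makes the nested structure of the sets visible and so elicits the correct computation more often.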
Modeling Option and Strategy Choices with Connectionist Networks: Towards an Integrative Model of Automatic and Deliberate Decision Making
We claim that understanding human decisions requires that both automatic and deliberate processes be considered. First, we sketch the qualitative differences between two hypothetical processing systems, an automatic and a deliberate system. Second, we show the potential that connectionism offers for modeling processes of decision making and discuss some empirical evidence. Specifically, we posit that the integration of information and the application of a selection rule are governed by the automatic system. The deliberate system is assumed to be responsible for information search, inferences and the modification of the network that the automatic processes act on. Third, we critically evaluate the multiple-strategy approach to decision making. We introduce the basic assumption of an integrative approach stating that individuals apply an all-purpose rule for decisions but use different strategies for information search. Fourth, we develop a connectionist framework that explains the interaction between automatic and deliberate processes and is able to account for choices both at the option and at the strategy level.
Keywords: System 1, Intuition, Reasoning, Control, Routines, Connectionist Model, Parallel Constraint Satisfaction
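The parallel-constraint-satisfaction idea behind such networks can be sketched in a few lines. This is a generic textbook-style update rule, not the authors' specific architecture; the weights and activations are invented for the example.

```python
import numpy as np

# Generic parallel-constraint-satisfaction sketch: nodes represent cues or
# options; positive weights encode mutual support, negative weights encode
# competition. Activations are updated iteratively until the network
# settles into a consistent interpretation.
W = np.array([
    [0.0,  0.5, -0.5],
    [0.5,  0.0, -0.5],
    [-0.5, -0.5, 0.0],
])  # nodes 0 and 1 support each other; both compete with node 2
a = np.array([0.1, 0.2, 0.1])  # initial activations

for _ in range(50):
    net = W @ a
    a = np.clip(a + 0.1 * net, -1.0, 1.0)  # small-step, bounded update

print(np.round(a, 2))  # activations settle near [1, 1, -1]
```

The settling process is the "automatic" part of the proposal: integration and selection emerge from the relaxation dynamics, while a deliberate system would modify the network (weights, nodes) that these dynamics act on.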
Principles of Human Learning
What are the general principles that drive human learning in different situations? I argue that much of human learning can be understood with just three principles. These are generalization, adaptation, and simplicity. To verify this conjecture, I introduce a modeling framework based on the same principles. This framework combines the idea of meta-learning -- also known as learning-to-learn -- with the minimum description length principle. The models that result from this framework capture many aspects of human learning across different domains, including decision-making, associative learning, function learning, multi-task learning, and reinforcement learning. In the context of decision-making, they explain why different heuristic decision-making strategies emerge and how appropriate strategies are selected. The same models furthermore capture order effects found in associative learning, function learning and multi-task learning. In the reinforcement learning context, they capture individual differences between human exploration strategies and explain empirical data better than any other strategy under consideration. The proposed modeling framework -- together with its accompanying empirical evidence -- may therefore be viewed as a first step towards the identification of a minimal set of principles from which all human behavior derives.
Maps of Bounded Rationality
The work cited by the Nobel committee was done jointly with the late Amos Tversky (1937-1996) during a long and unusually close collaboration. Together, we explored the psychology of intuitive beliefs and choices and examined their bounded rationality. This essay presents a current perspective on the three major topics of our joint work: heuristics of judgment, risky choice, and framing effects. In all three domains we studied intuitions - thoughts and preferences that come to mind quickly and without much reflection. I review the older research and some recent developments in light of two ideas that have become central to social-cognitive psychology in the intervening decades: the notion that thoughts differ in a dimension of accessibility - some come to mind much more easily than others - and the distinction between intuitive and deliberate thought processes.
Keywords: behavioral economics; experimental economics
The sampling brain
Understanding the algorithmic nature of mental processes is of vital importance to psychology, neuroscience, and artificial intelligence. In response to a rapidly changing world and computationally demanding cognitive tasks, evolution may have endowed us with brains that approximate rational solutions, such that our performance is close to optimal. This thesis suggests one instance of such approximation algorithms, sample-based approximation, to be implemented by the brain to tackle complex cognitive tasks. Knowing that certain types of sampling are used to generate mental samples, the brain could also actively correct for the uncertainty that comes along with the sampling process. This correction process for samples leaves traces in human probability estimates, suggesting a more rational account of sample-based estimations. In addition, these mental samples can come from both observed experiences (memory) and synthesised experiences (imagination). Each source of mental samples has a unique role in learning tasks, and the classical error-correction principle of learning can be generalised when mental-sampling processes are considered.
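The core idea of sample-based approximation can be shown with a generic Monte Carlo illustration. This is not the thesis's specific model; the generative process and sample sizes below are invented to show how few samples yield noisy estimates while many samples converge on the true probability.

```python
import random

random.seed(0)

def generative_model():
    # Hypothetical process the mind is assumed to draw mental samples from.
    return random.gauss(0.0, 1.0)

def estimate_p_positive(n_samples):
    # Approximate P(x > 0) from a finite batch of samples, as a
    # resource-limited system might, instead of computing it exactly.
    samples = [generative_model() for _ in range(n_samples)]
    return sum(s > 0 for s in samples) / n_samples

# Few samples -> noisy estimate; many samples -> close to the true 0.5.
print(estimate_p_positive(10), estimate_p_positive(10000))
```

The sampling-error signature of small batches is exactly the kind of trace in human probability estimates that a correction process, as described in the abstract, could act on.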
von Neumann-Morgenstern and Savage Theorems for Causal Decision Making
Causal thinking and decision making under uncertainty are fundamental aspects
of intelligent reasoning. Decision making under uncertainty has been well
studied when information is considered at the associative (probabilistic)
level. The classical Theorems of von Neumann-Morgenstern and Savage provide a
formal criterion for rational choice using purely associative information.
Causal inference often yields uncertainty about the exact causal structure, so
we consider what kinds of decisions are possible in those conditions. In this
work, we consider decision problems in which available actions and consequences
are causally connected. After recalling a previous causal decision making
result, which relies on a known causal model, we consider the case in which the
causal mechanism that controls some environment is unknown to a rational
decision maker. In this setting we state and prove a causal version of Savage's
Theorem, which we then use to develop a notion of causal games with its
respective causal Nash equilibrium. These results highlight the importance of
causal models in decision making and the variety of potential applications.
Comment: Submitted to Journal of Causal Inference
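The setting of a decision maker facing an unknown causal mechanism can be illustrated with a simple expected-utility computation that averages over candidate causal models. This is only a structural sketch under invented numbers, not the paper's formal construction or its representation theorem.

```python
# P(good outcome | do(action)) under two hypothetical causal models.
models = {
    "model_A": {"treat": 0.90, "skip": 0.30},  # treatment is effective
    "model_B": {"treat": 0.40, "skip": 0.35},  # treatment barely matters
}
belief = {"model_A": 0.6, "model_B": 0.4}      # beliefs over causal models
utility = {"good": 1.0, "bad": 0.0}

def expected_utility(action):
    # Average the action's expected utility over candidate causal
    # structures, weighted by the decision maker's subjective beliefs.
    eu = 0.0
    for m, p_m in belief.items():
        p_good = models[m][action]
        eu += p_m * (p_good * utility["good"] + (1 - p_good) * utility["bad"])
    return eu

best = max(["treat", "skip"], key=expected_utility)
print(best, round(expected_utility(best), 2))  # treat 0.7
```

A Savage-style result for this setting would justify exactly this form: a rational preference over actions that is representable as expected utility with respect to some belief over causal structures.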