A Cognitive and Affective Architecture for Social Human-Robot Interaction
Robots appear increasingly often in everyday applications, where they interact ever more closely with human users. Despite a long history of research, existing cognitive architectures are still too generic and hence not tailored enough to meet the specific needs of social HRI. In particular, interaction-oriented architectures require handling emotions, language, social norms, etc., which is quite a handful. In this paper, we present an overview of a Cognitive and Affective Interaction-Oriented architecture for social human-robot interactions, abbreviated CAIO. This architecture parallels the BDI (Belief, Desire, Intention) architecture, which originates in Bratman's philosophy of action. CAIO integrates complex emotions and planning techniques. It aims to contribute to cognitive architectures for HRI by enabling the robot to reason on the mental states (including emotions) of its interlocutors, and to act physically, emotionally and verbally.
Arithmetic Notation…now in 3D!
When people reason formally, they often make use of special notations—algebra and arithmetic are familiar examples. These notations are often treated as mere shorthand—a concise way of referring to meaningful mathematical concepts. Other authors have argued that people treat notations as pictures—literal diagrams of an imagined set of objects (Dörfler, 2003; Landy & Goldstone, 2009). If notations depict objects that exist in space, then it makes sense to wonder how they are arranged not just in the two visible dimensions, but in depth. In four experiments, we find a consistent pattern: properties that increase mathematical precedence also tend to make objects appear closer in space. This alignment of formal and informal pressures suggests that perceived depth may play a role in supporting computational reasoning processes. Although our primary focus is documenting the existence of depth illusions in notations, we also evaluate several sources of information that might guide depth judgments: availability of an object for computational actions, formal syntactic structure, relative symbol salience, and voluntary attention shifts. We consider relationships between these nonexclusive possible sources of information in guiding how people judge depth in mathematics.
WHO CAN SOLVE 2x=1? AN ANALYSIS OF COGNITIVE LOAD RELATED TO LEARNING LINEAR EQUATION SOLVING
Using 2x = 1 as an example, we discuss the cognitive load related to learning linear equation solving. Within the framework of Cognitive Load Theory, we consider especially the intrinsic cognitive load involved in the arithmetic, geometric and real-analytic approaches to linear equation solving. This is done, e.g., from the point of view of the conceptual and procedural knowledge of mathematics and APOS Theory. Based on our observations, at the end of the paper we design a setting for teaching linear equation solving.
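The three approaches named in the abstract can be illustrated on its own example; the glosses below are our illustrative reading, not the paper's notation:

```latex
\begin{align*}
&\text{Arithmetic (undo the operation):} & 2x = 1 &\;\Longrightarrow\; x = \tfrac{1}{2}.\\
&\text{Geometric (intersect two lines):} & y = 2x,\; y = 1 &\;\Longrightarrow\; (x, y) = \bigl(\tfrac{1}{2}, 1\bigr).\\
&\text{Real-analytic (invert the function } f(x) = 2x\text{):} & x = f^{-1}(1) &= \tfrac{1}{2}.
\end{align*}
```

Each route yields the same solution, but each draws on a different conceptual image of the equation, which is what makes their intrinsic cognitive loads comparable.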
The role of falsification in the development of cognitive architectures: insights from a Lakatosian analysis
It has been suggested that the enterprise of developing mechanistic theories of the human cognitive architecture is flawed because the theories produced are not directly falsifiable. Newell attempted to sidestep this criticism by arguing for a Lakatosian model of scientific progress in which cognitive architectures should be understood as theories that develop over time. However, Newell’s own candidate cognitive architecture adhered only loosely to Lakatosian principles. This paper reconsiders the role of falsification and the potential utility of Lakatosian principles in the development of cognitive architectures. It is argued that a lack of direct falsifiability need not undermine the scientific development of a cognitive architecture if broadly Lakatosian principles are adopted. Moreover, it is demonstrated that the Lakatosian concepts of positive and negative heuristics for theory development and of general heuristic power offer methods for guiding the development of an architecture and for evaluating the contribution and potential of an architecture’s research program
Multivariate determinants of self-management in Health Care: assessing Health Empowerment Model by comparison between structural equation and graphical models approaches
Background. In public health, one debated issue concerns the consequences of improper self-management in health care. Some theoretical models proposed in Health Communication theory highlight how components such as general literacy and specific knowledge of the disease might be very important for effective actions in the healthcare system.
Methods. This paper aims to investigate the consistency of the Health Empowerment Model by means of both a graphical models approach, which is a "data driven" method, and a Structural Equation Modeling (SEM) approach, which is instead "theory driven", showing the different information patterns that can be revealed in a health care research context.
The analyzed dataset provides data on the relationship between the Health Empowerment Model constructs and the behavioral and health status in 263 chronic low back pain (cLBP) patients. We used the graphical models approach to evaluate the dependence structure in a “blind” way, thus learning the structure from the data.
Results. The dependence structure estimated from the data confirms the link design assumed a priori by the researchers in the SEM approach, thus validating the hypotheses which generated the Health Empowerment Model constructs.
Conclusions. This model comparison helps to avoid confirmation bias. For Structural Equation Modeling we used the SPSS AMOS 21 software; the graphical modeling algorithms were implemented in the R software environment.
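The contrast between the two approaches can be sketched in miniature: learn a dependence structure "blindly" from data via partial correlations, then check it against a theorized link design. This is a minimal illustrative sketch with synthetic data and hypothetical construct names, not the study's actual R/AMOS pipeline:

```python
import numpy as np

# Synthetic stand-ins for Health Empowerment Model constructs
# (the real study used 263 cLBP patients; these data are simulated).
rng = np.random.default_rng(0)
n = 263
literacy = rng.normal(size=n)
knowledge = 0.6 * literacy + rng.normal(scale=0.8, size=n)
behavior = 0.5 * knowledge + rng.normal(scale=0.8, size=n)
X = np.column_stack([literacy, knowledge, behavior])
names = ["literacy", "knowledge", "behavior"]

# "Data driven" step: estimate partial correlations from the precision
# matrix and keep edges whose partial correlation exceeds a threshold.
prec = np.linalg.inv(np.corrcoef(X, rowvar=False))
d = np.sqrt(np.diag(prec))
pcorr = -prec / np.outer(d, d)
edges = {(names[i], names[j])
         for i in range(3) for j in range(i + 1, 3)
         if abs(pcorr[i, j]) > 0.2}

# "Theory driven" step: the hypothesized chain literacy -> knowledge -> behavior.
hypothesized = {("literacy", "knowledge"), ("knowledge", "behavior")}
recovered = hypothesized <= edges  # do the data recover the theorized links?
```

When the structure learned from the data reproduces the links the theory posited, the agreement plays the validating role the abstract describes; a mismatch would flag possible confirmation bias in the theory-driven model.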
Beyond single-level accounts: the role of cognitive architectures in cognitive scientific explanation
We consider approaches to explanation within the cognitive sciences that begin with Marr’s computational level (e.g., purely Bayesian accounts of cognitive phenomena) or Marr’s implementational level (e.g., reductionist accounts of cognitive phenomena based only on neural level evidence) and argue that each is subject to fundamental limitations which impair their ability to provide adequate explanations of cognitive phenomena. For this reason, it is argued, explanation cannot proceed at either level without tight coupling to the algorithmic and representation level. Even at this level, however, we argue that additional constraints relating to the decomposition of the cognitive system into a set of interacting subfunctions (i.e., a cognitive architecture) are required. Integrated cognitive architectures that permit abstract specification of the functions of components and that make contact with the neural level provide a powerful bridge for linking the algorithmic and representational level to both the computational level and the implementational level
Implicit Chain of Thought Reasoning via Knowledge Distillation
To augment language models with the ability to reason, researchers usually prompt or finetune them to produce chain-of-thought reasoning steps before producing the final answer. However, although people use natural language to reason effectively, it may be that LMs could reason more effectively with some intermediate computation that is not in natural language. In this work, we explore an alternative reasoning approach: instead of explicitly producing the chain-of-thought reasoning steps, we use the language model's internal hidden states to perform implicit reasoning. The implicit reasoning steps are distilled from a teacher model trained on explicit chain-of-thought reasoning, and instead of doing reasoning "horizontally" by producing intermediate words one-by-one, we distill it such that the reasoning happens "vertically" among the hidden states in different layers. We conduct experiments on a multi-digit multiplication task and a grade school math problem dataset and find that this approach enables solving tasks previously not solvable without explicit chain-of-thought, at a speed comparable to no chain-of-thought.
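The core idea, aligning the student's per-layer hidden states with the teacher's per-step chain-of-thought states, can be sketched as a distillation loss. This is a toy sketch with random NumPy vectors standing in for real transformer hidden states; the dimensions and function name are our illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
num_layers, hidden = 4, 8

# Teacher: hidden states read off as it emits explicit CoT steps
# ("horizontal" reasoning, one intermediate state per generated step).
teacher_states = [rng.normal(size=hidden) for _ in range(num_layers)]

# Student: one hidden state per *layer* on the answer token
# ("vertical" reasoning across depth, no intermediate tokens emitted).
student_states = [rng.normal(size=hidden) for _ in range(num_layers)]

def vertical_distill_loss(student, teacher):
    """MSE aligning the l-th student layer with the l-th teacher step,
    so the chain of thought unfolds across layers instead of tokens."""
    return float(np.mean([np.mean((s - t) ** 2)
                          for s, t in zip(student, teacher)]))

loss = vertical_distill_loss(student_states, teacher_states)
```

Because the student never emits intermediate tokens, inference needs only a single forward pass, which is why the reported speed is comparable to running with no chain-of-thought at all.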