Representational fluidity in embodied (artificial) cognition
Theories of embodied cognition agree that the body plays some role in human cognition, but disagree on the precise nature of this role. While the body is (together with the environment) fundamentally ingrained in the so-called 4E (or multi-E) stance on cognition, there also exist interpretations in which the body is merely an input/output interface for cognitive processes that are entirely computational.
In the present paper, we show that even if one takes such a strong computationalist position, the role of the body must be more than an interface to the world. To achieve human cognition, the computational mechanisms of a cognitive agent must be capable not only of appropriate reasoning over a given set of symbolic representations; they must in addition be capable of updating the representational framework itself (leading to the titular representational fluidity). We demonstrate this by considering the necessary properties that an artificial agent with these abilities needs to possess.
The core of the argument is that these updates must be falsifiable in the Popperian sense while simultaneously directing representational shifts in a direction that benefits the agent. We show that this is achieved by the progressive, bottom-up symbolic abstraction of low-level sensorimotor connections, followed by the top-down instantiation of testable perception-action hypotheses.
We then discuss the fundamental limits of this representational updating capacity, concluding that only fully embodied learners exhibiting such a priori perception-action linkages are able to sufficiently ground spontaneously generated symbolic representations and exhibit the full range of human cognitive capabilities. The present paper therefore has consequences both for the theoretical understanding of human cognition and for the design of autonomous artificial agents.
Deriving Motor Primitives Through Action Segmentation
The purpose of the present experiment is to further understand the effect of level of processing (top-down vs. bottom-up) on the perception of movement kinematics and primitives for grasping actions, in order to gain insight into possible primitives used by the mirror system. In the present study, we investigated the potential of identifying such primitives using an action segmentation task. Specifically, we investigated whether segmentation was driven primarily by the kinematics of the action, as opposed to high-level top-down information about the action and the object used in the action. Participants were shown 12 point-light movies of object-centered hand/arm actions that were either presented in their canonical orientation together with the object in question (top-down condition) or upside down (inverted) without information about the object (bottom-up condition). The results show that (1) despite impaired high-level action recognition for the inverted actions, participants were able to reliably segment the actions according to lower-level kinematic variables, and (2) segmentation behavior in both groups was significantly related to the kinematic variables of change in direction, velocity, and acceleration of the wrist (thumb and finger tips) for most of the included actions. This indicates that top-down activation of an action representation leads to segmentation behavior for hand/arm actions similar to that produced by bottom-up, or local, visual processing when performing a fairly unconstrained segmentation task. Motor primitives, as parts of more complex actions, may therefore be reliably derived through visual segmentation based on movement kinematics.
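The kinematic criterion can be illustrated with a toy sketch (our illustration, not the study's actual analysis pipeline): given a hypothetical wrist trajectory from point-light data, propose segment boundaries at frames where wrist speed has a strict local minimum and is low in absolute terms, a crude stand-in for the direction/velocity/acceleration variables examined in the study.

```python
import numpy as np

def segment_by_kinematics(traj, dt, speed_quantile=0.2):
    """Propose action-segment boundaries from wrist kinematics alone.

    traj: (T, 2) array of wrist positions (hypothetical point-light data).
    Returns frame indices where wrist speed has a strict local minimum and
    falls below a low quantile, i.e. the movement briefly pauses.
    """
    vel = np.gradient(traj, dt, axis=0)      # frame-wise velocity estimate
    speed = np.linalg.norm(vel, axis=1)
    # strict local minima of speed over interior frames
    minima = (speed[1:-1] < speed[:-2]) & (speed[1:-1] < speed[2:])
    low = speed[1:-1] < np.quantile(speed, speed_quantile)
    return np.where(minima & low)[0] + 1
```

On a synthetic trajectory whose speed profile dips to zero between movement phases, the function recovers the phase boundaries; real segmentation behaviour would of course also reflect the acceleration and direction-change variables reported above.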
On the utility of dreaming: a general model for how learning in artificial agents can benefit from data hallucination
We consider the benefits of dream mechanisms (that is, the ability to simulate new experiences based on past ones) in a machine learning context. Specifically, we are interested in learning for artificial agents that act in the world, and operationalize "dreaming" as a mechanism by which such an agent can use its own model of the learning environment to generate new hypotheses and training data.
We first show that it is not necessarily a given that such a data-hallucination process is useful, since it can easily lead to a training set dominated by spurious imagined data until an ill-defined convergence point is reached. We then analyse a notably successful implementation of a machine learning-based dreaming mechanism by Ha and Schmidhuber (Ha, D., & Schmidhuber, J. (2018). World models. arXiv e-prints, arXiv:1803.10122). On that basis, we then develop a general framework by which an agent can generate simulated data to learn from in a manner that is beneficial to the agent. This, we argue, then forms a general method for an operationalized dream-like mechanism.
We finish by demonstrating the general conditions under which such mechanisms can be useful in machine learning, wherein the implicit simulator inference and extrapolation involved in dreaming act without reinforcing inference error, even when inference is incomplete.
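One classic, minimal instance of learning from self-generated ("dreamed") data is Dyna-style planning; the sketch below (our toy, not the paper's framework nor Ha and Schmidhuber's world model) shows a tabular agent on a small chain MDP that replays transitions from its own learned model between real steps. Because the model stores only transitions actually experienced, the hallucinated data cannot drift away from reality, which is one simple way of not reinforcing inference error.

```python
import random

def dyna_q_chain(n_states=6, episodes=50, dream_steps=20,
                 alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular 'dreaming' learner on a toy chain MDP (illustrative only).

    The agent fits a one-step model from real transitions and, between
    real steps, replays ('dreams') transitions drawn from that model.
    """
    rng = random.Random(seed)
    goal = n_states - 1
    Q = {(s, a): 1.0 for s in range(n_states) for a in (0, 1)}  # optimistic init
    model = {}  # (s, a) -> (reward, next_state), learned from experience

    def backup(s, a, r, s2):
        target = r if s2 == goal else r + gamma * max(Q[(s2, 0)], Q[(s2, 1)])
        Q[(s, a)] += alpha * (target - Q[(s, a)])

    for _ in range(episodes):
        s = 0
        while s != goal:
            a = rng.choice((0, 1)) if rng.random() < eps \
                else max((0, 1), key=lambda act: Q[(s, act)])
            s2 = min(s + 1, goal) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == goal else 0.0
            model[(s, a)] = (r, s2)           # update the learned world model
            backup(s, a, r, s2)               # learn from the real transition
            for _ in range(dream_steps):      # dream: replay modelled transitions
                ds, da = rng.choice(list(model))
                dr, ds2 = model[(ds, da)]
                backup(ds, da, dr, ds2)
            s = s2
    return Q
```

After training, the greedy policy moves right toward the goal at every state; the dreamed replays simply accelerate value propagation rather than introducing spurious experience.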
Generating Annotated Training Data for 6D Object Pose Estimation in Operational Environments with Minimal User Interaction
Recently developed deep neural networks have achieved state-of-the-art results in 6D object pose estimation for robot manipulation. However, these supervised deep learning methods require expensive annotated training data. Current methods for reducing those costs frequently use synthetic data from simulations, but rely on expert knowledge and suffer from the "domain gap" when shifting to the real world. Here, we present a proof of concept for a novel approach to autonomously generating annotated training data for 6D object pose estimation. The approach is designed for learning new objects in operational environments while requiring little interaction and no expertise on the part of the user. We evaluate our autonomous data generation approach in two grasping experiments, where we achieve a grasping success rate similar to that of related work on a non-autonomously generated data set.
Comment: This is a preprint and currently under peer review at IROS 202
Towards a full spectrum diagnosis of autistic behaviours using human robot interactions
Autism Spectrum Disorder (ASD) is conceptualised by the Diagnostic and Statistical Manual of Mental Disorders (DSM-V) [1] as a spectrum, and diagnosis involves scoring behaviours in terms of a severity scale. Whilst the application of automated systems and socially interactive robots to ASD diagnosis would increase objectivity and standardisation, most of the existing systems classify behaviours in a binary fashion (ASD vs. non-ASD). To be useful in interventions, and to overcome ethical concerns regarding overly simplified diagnostic measures, a robot therefore needs to be able to classify target behaviours along a continuum, rather than in discrete groups. Here we discuss an approach toward this goal which has the potential to identify the full spectrum of observable ASD traits.
Affordances, Adaptive Tool Use and Grounded Cognition
Learning Policies for Continuous Control via Transition Models
It is doubtful that animals have perfect inverse models of their limbs (e.g., what muscle contraction must be applied to every joint to reach a particular location in space). However, in robot control, moving an arm's end-effector to a target position or along a target trajectory requires accurate forward and inverse models. Here we show that by learning the transition (forward) model from interaction, we can use it to drive the learning of an amortized policy. Hence, we revisit policy optimization in relation to the deep active inference framework and describe a modular neural network architecture that simultaneously learns the system dynamics from prediction errors and the stochastic policy that generates suitable continuous control commands to reach a desired reference position. We evaluate the model against a linear quadratic regulator baseline, and conclude with additional steps to take toward human-like motor control.
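The core idea, learning a forward model from prediction errors and then improving a policy through that model rather than through an inverse model, can be sketched in a scalar toy setting (our illustration with hypothetical names and values, not the paper's neural architecture): the plant is x' = x + b*u with unknown gain b, the model fits b_hat from one-step errors, and a proportional policy u = k*(x_ref - x) is trained by differentiating the model-predicted tracking error.

```python
import numpy as np

def fit_forward_model(b_true=0.5, n=200, lr=0.2, seed=0):
    """Fit a transition model x' = x + b_hat*u purely from prediction errors.

    The plant gain b_true is unknown to the learner (illustrative values).
    """
    rng = np.random.default_rng(seed)
    b_hat, x = 0.0, 0.0
    for _ in range(n):
        u = rng.standard_normal()          # exploratory action
        x_next = x + b_true * u            # real (here, simulated) plant step
        err = x_next - (x + b_hat * u)     # one-step prediction error
        b_hat += lr * err * u              # gradient step on squared error
        x = x_next
    return b_hat

def fit_policy(b_hat, n=200, lr=0.1):
    """Amortized proportional policy u = k*(x_ref - x), improved by
    differentiating the learned model's predicted tracking error;
    no inverse model of the plant is ever required."""
    k = 0.0
    for _ in range(n):
        e = 1.0 - b_hat * k                # predicted residual per unit error
        k += lr * 2 * b_hat * e            # descend (1 - b_hat*k)^2
    return k
```

For this linear scalar plant the policy gain approaches the deadbeat value k = 1/b_hat, so a single control step takes the plant from x = 0 onto the reference; richer dynamics would of course require the stochastic, neural treatment described in the abstract.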
Ethical perceptions towards real-world use of companion robots with older people and people with dementia: Survey opinions among younger adults
Background:
Use of companion robots may reduce older people's depression, loneliness and agitation. This benefit has to be weighed against possible ethical concerns raised by philosophers in the field around issues such as deceit, infantilisation, reduced human contact and accountability. Research directly assessing the prevalence of such concerns among relevant stakeholders, however, remains limited, even though their views clearly have relevance in the debate. For example, any discrepancy between ethicists and stakeholders might itself be a relevant ethical consideration, while concerns perceived by stakeholders might identify immediate barriers to successful implementation.
Methods:
We surveyed 67 younger adults after they had live interactions with companion robot pets while attending an exhibition on intimacy, including the context of intimacy for older people. We asked about their perceptions of ethical issues. Participants generally had older family members, some with dementia.
Results:
Most participants (40/67, 60%) reported having no ethical concerns towards companion robot use when surveyed with an open question. Twenty (30%) had some concern, the most common being reduced human contact (10%), followed by deception (6%). However, when choosing from a list, the issue perceived as most concerning was equality of access to devices based on socioeconomic factors (m = 4.72 on a 1-7 scale), exceeding more commonly hypothesised issues such as infantilising (m = 3.45) and deception (m = 3.44). The lowest-scoring issues were potential for injury or harm (m = 2.38) and privacy concerns (m = 2.17). Over half (39/67, 58%) would have bought a device for an older relative. Cost was a common reason for choosing not to purchase a device.
Conclusions:
Although this was a relatively small study, we demonstrated discrepancies between the ethical concerns raised in the philosophical literature and those of the people likely to make the decision to buy a companion robot. Such discrepancies, between philosophers and "end-users" in the care of older people, and in methods of ascertainment, are worthy of further empirical research and discussion. Our participants were more concerned about economic issues and equality of access, an important consideration for those involved with the care of older people. On the other hand, the concerns proposed by ethicists seem unlikely to be a barrier to the use of companion robots.
- …