Ongoing Emergence: A Core Concept in Epigenetic Robotics
We propose ongoing emergence as a core concept in
epigenetic robotics. Ongoing emergence refers to the
continuous development and integration of new skills
and is exhibited when six criteria are satisfied: (1)
continuous skill acquisition, (2) incorporation of new
skills with existing skills, (3) autonomous development
of values and goals, (4) bootstrapping of initial skills, (5)
stability of skills, and (6) reproducibility. In this paper
we: (a) provide a conceptual synthesis of ongoing
emergence based on previous theorizing, (b) review
current research in epigenetic robotics in light of ongoing
emergence, (c) provide prototypical examples of ongoing
emergence from infant development, and (d) outline
computational issues relevant to creating robots
exhibiting ongoing emergence.
Beyond Gazing, Pointing, and Reaching: A Survey of Developmental Robotics
Developmental robotics is an emerging field located
at the intersection of developmental psychology
and robotics that has lately attracted
considerable attention. This paper gives a survey of
a variety of research projects dealing with or inspired
by developmental issues, and outlines possible
future directions.
Enabling Depth-driven Visual Attention on the iCub Humanoid Robot: Instructions for Use and New Perspectives
The importance of depth perception in the interactions that humans have
within their nearby space is a well established fact. Consequently, it is also
well known that the possibility of exploiting good stereo information would
ease and, in many cases, enable, a large variety of attentional and interactive
behaviors on humanoid robotic platforms. However, the difficulty of computing
real-time and robust binocular disparity maps from moving stereo cameras often
prevents robots from relying on this kind of cue to visually guide their attention
and actions in real-world scenarios. The contribution of this paper is
two-fold: first, we show that the Efficient Large-scale Stereo Matching
algorithm (ELAS) by A. Geiger et al. (2010) for computing the disparity map
is well suited for use on a humanoid robotic platform such as the iCub robot;
second, we show how, provided with a fast and reliable stereo system,
implementing relatively challenging visual behaviors in natural settings can
require much less effort. As a case study we consider the common situation
where the robot is asked to focus its attention on the object closest in the
scene, showing how a simple but effective disparity-based segmentation solves
the problem in this case. Indeed, this example paves the way to a variety of
other similar applications.
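The disparity-based segmentation step the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's ELAS-based implementation: given a disparity map from any stereo matcher, it keeps the pixels whose disparity is close to the maximum, i.e. the surface nearest to the cameras. The synthetic disparity values below are invented for illustration.

```python
# Hedged sketch: segment the nearest object from a disparity map.
# Not the paper's code; the disparity values here are synthetic.
import numpy as np


def segment_nearest(disparity, margin=8.0):
    """Boolean mask of the object closest to the cameras.

    Pixels whose disparity lies within `margin` of the maximum valid
    disparity are taken to belong to the nearest object.
    """
    valid = disparity > 0                  # matchers flag failed pixels <= 0
    if not np.any(valid):
        return np.zeros_like(valid)
    d_max = disparity[valid].max()         # largest disparity = nearest surface
    return valid & (disparity >= d_max - margin)


# Synthetic map: background at disparity 10, one near object at 60.
disp = np.full((120, 160), 10.0)
disp[40:80, 60:100] = 60.0                 # the "close" object
mask = segment_nearest(disp)
cy, cx = np.argwhere(mask).mean(axis=0)    # centroid = gaze fixation target
print(int(mask.sum()), int(cy), int(cx))
```

The mask's centroid can then serve directly as the fixation target for the robot's gaze controller.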
Emerging Linguistic Functions in Early Infancy
This paper presents results from experimental
studies on early language acquisition in infants and
attempts to interpret the experimental results within
the framework of the Ecological Theory of
Language Acquisition (ETLA) recently proposed
by Lacerda et al. (2004a). From this perspective,
the infant’s first steps in the acquisition of the
ambient language are seen as a consequence of the
infant’s general capacity to represent sensory input
and the infant’s interaction with other actors in its
immediate ecological environment. On the basis of
available experimental evidence, it will be argued
that ETLA offers a productive alternative to
traditional descriptive views of the language
acquisition process by presenting an operative
model of how early linguistic function may emerge
through interaction.
Developmental Robots - A New Paradigm
It has proved extremely challenging for humans to program a robot to a sufficient degree that it acts properly in a typical unknown human environment. This is especially true for a humanoid robot, due to the very large number of redundant degrees of freedom and the large number of sensors required for a humanoid to work safely and effectively in the human environment. How can we address this fundamental problem? Motivated by human mental development from infancy to adulthood, we present a theory, an architecture, and some experimental results showing how to enable a robot to develop its mind automatically, through online, real-time interactions with its environment. Humans mentally “raise” the robot through “robot sitting” and “robot schools” instead of task-specific robot programming.
Application of a Cognitive Model of Visual Attention in Automated Assembly
Supply devices and sub-systems hold a significant position in the structures of assembly systems. The technical complexity of classical supply devices and sub-systems can be reduced by using flexible, programmable automated devices. Information about the handled object, provided by sensor modules, is processed in the device's control system or at a higher level of control of the assembly system. The processed information is distributed as control information to the actuating units and elements that carry out the corresponding functions. Control systems of programmable supply devices and sub-systems fulfil several functions, for example: processing information from sensor units and modules, correctly evaluating the position of a component and determining the course of action of the actuating units and elements, and distributing executive instructions to the drive units. Software based on a cognitive model of visual attention represents a new approach to solving these problems. When visually perceiving a scene containing various objects, and in order to interact with a particular target object in that scene, the system must focus its attention on that (target) object. This mechanism is one of the fundamental elements of vision and, like many biologically motivated systems, is very useful in practice. The proposed model is an implementation of the visual-attention mechanism in a computer-simulated environment.
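A visual-attention mechanism of the kind this abstract invokes is commonly built as a center-surround saliency map with winner-take-all selection of the fixation point. The sketch below illustrates that general idea only; the article's own model is not specified here, and the scene and filter sizes are invented.

```python
# Hedged sketch of center-surround saliency with winner-take-all
# fixation selection; scene and kernel widths are invented.
import numpy as np


def box_blur(img, k):
    """Separable box blur with odd kernel width k (zero padding)."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "same"), 0, out)


def saliency(image, k_center=3, k_surround=15):
    """Center-surround contrast: small-scale blur minus large-scale blur."""
    return np.abs(box_blur(image, k_center) - box_blur(image, k_surround))


# Scene: dark feeder surface with one bright component to be picked.
scene = np.zeros((64, 64))
scene[30:34, 40:44] = 1.0                       # the target component
s = saliency(scene)
y, x = np.unravel_index(np.argmax(s), s.shape)  # winner-take-all fixation
print(y, x)
```

The selected location falls on the bright component, which is the behavior a supply-device controller would use to direct the next pick.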
Multi-Modal Human-Machine Communication for Instructing Robot Grasping Tasks
A major challenge for the realization of intelligent robots is to supply them
with cognitive abilities in order to allow ordinary users to program them
easily and intuitively. One way of such programming is teaching work tasks by
interactive demonstration. To make this effective and convenient for the user,
the machine must be capable to establish a common focus of attention and be
able to use and integrate spoken instructions, visual perceptions, and
non-verbal clues like gestural commands. We report progress in building a
hybrid architecture that combines statistical methods, neural networks, and
finite state machines into an integrated system for instructing grasping tasks
by man-machine interaction. The system combines the GRAVIS-robot for visual
attention and gestural instruction with an intelligent interface for speech
recognition and linguistic interpretation, and a modality fusion module to
allow multi-modal, task-oriented man-machine communication with respect to
dextrous robot manipulation of objects.
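The finite-state fusion of spoken and gestural input can be sketched as a toy state machine: a grasp instruction fires only after both a spoken object name and a pointing gesture have arrived. The states, event names, and payloads below are invented for illustration; this is not the GRAVIS system itself.

```python
# Toy sketch of finite-state multi-modal fusion; not the GRAVIS
# architecture. Event names and payloads are invented.

class FusionFSM:
    """States: WAIT -> HAVE_SPEECH / HAVE_GESTURE -> fire and reset."""

    def __init__(self):
        self.speech = None    # last parsed spoken object name
        self.gesture = None   # last pointed-at image location

    def observe(self, modality, payload):
        if modality == "speech":
            self.speech = payload
        elif modality == "gesture":
            self.gesture = payload
        if self.speech is not None and self.gesture is not None:
            action = ("grasp", self.speech, self.gesture)
            self.speech = self.gesture = None   # back to the WAIT state
            return action
        return None                             # still waiting for a modality


fsm = FusionFSM()
assert fsm.observe("speech", "red cube") is None    # speech alone: keep waiting
action = fsm.observe("gesture", (120, 80))          # both present: fire grasp
print(action)
```

Resetting both slots after firing keeps each instruction episode independent, which is the usual design choice when commands arrive as loosely synchronized event streams.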
How does an infant acquire the ability of joint attention?: A Constructive Approach
This study argues how a human infant acquires
the ability of joint attention through
interactions with its caregiver from the viewpoint
of a constructive approach. This paper
presents a constructive model by which a
robot acquires a sensorimotor coordination for
joint attention based on visual attention and
learning with self-evaluation. Since visual attention
does not always correspond to joint attention,
the robot may have incorrect learning
situations for joint attention as well as correct
ones. However, the robot is expected to statistically
discard the data from the incorrect situations
as outliers through learning, and consequently
to acquire the appropriate sensorimotor
coordination for joint attention even if the
environment is not controlled and the caregiver
does not provide any task evaluation. The experimental
results suggest that the proposed
model could explain the developmental mechanism
of the infant's joint attention, because
the learning process of the robot's joint attention
can be regarded as equivalent to the
corresponding developmental process in the infant.
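The statistical idea in this abstract can be sketched with a simple stand-in: learn a sensorimotor map from episodes that mix correct samples with "incorrect" ones, and let the incorrect data fall out as outliers under iterative trimming. The linear map, noise levels, and data below are all invented for illustration and are not the paper's model.

```python
# Hedged sketch: incorrect learning episodes discarded as outliers by
# iteratively trimmed least squares. Data and the true map are invented.
import numpy as np

rng = np.random.default_rng(0)

# Correct episodes: the motor command follows y = 2x + 1 (plus noise).
x_good = rng.uniform(-1, 1, 80)
y_good = 2 * x_good + 1 + rng.normal(0, 0.05, 80)
# Incorrect episodes: visual attention captured by an unrelated object.
x_bad = rng.uniform(-1, 1, 20)
y_bad = rng.uniform(-3, 3, 20)
X = np.concatenate([x_good, x_bad])
Y = np.concatenate([y_good, y_bad])

for _ in range(5):                        # iteratively trimmed least squares
    a, b = np.polyfit(X, Y, 1)            # current sensorimotor map
    resid = np.abs(Y - (a * X + b))
    keep = resid < 3 * np.median(resid)   # incorrect data fall out as outliers
    X, Y = X[keep], Y[keep]

print(round(a, 2), round(b, 2))
```

After a few rounds of trimming, the fitted map recovers the underlying relation even though a fifth of the episodes were unrelated to it, mirroring the claim that no controlled environment or caregiver evaluation is required.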
The Mechanics of Embodiment: A Dialogue on Embodiment and Computational Modeling
Embodied theories are increasingly challenging traditional views of cognition by arguing that the conceptual representations that constitute our knowledge are grounded in sensory and motor experiences, and processed at this sensorimotor level, rather than being represented and processed abstractly in an amodal conceptual system. Given the established empirical foundation, and the relatively underspecified theories to date, many researchers are extremely interested in embodied cognition but are clamouring for more mechanistic implementations. What is needed at this stage is a push toward explicit computational models that implement sensory-motor grounding as intrinsic to cognitive processes. In this article, six authors from varying backgrounds and approaches address issues concerning the construction of embodied computational models, and illustrate what they view as the critical current and next steps toward mechanistic theories of embodiment. The first part has the form of a dialogue between two fictional characters: Ernest, the ‘experimenter’, and Mary, the ‘computational modeller’. The dialogue consists of an interactive sequence of questions, requests for clarification, challenges, and (tentative) answers, and touches on the most important aspects of grounded theories that should inform computational modelling and, conversely, the impact that computational modelling could have on embodied theories. The second part of the article discusses the most important open challenges for embodied computational modelling.