Computational and Robotic Models of Early Language Development: A Review
We review computational and robotics models of early language learning and
development. We first explain why and how these models are used to understand
better how children learn language. We argue that they provide concrete
theories of language learning as a complex dynamic system, complementing
traditional methods in psychology and linguistics. We review different modeling
formalisms, grounded in techniques from machine learning and artificial
intelligence such as Bayesian and neural network approaches. We then discuss
their role in understanding several key mechanisms of language development:
cross-situational statistical learning, embodiment, situated social
interaction, intrinsically motivated learning, and cultural evolution. We
conclude by discussing future challenges for research, including modeling of
large-scale empirical data about language acquisition in real-world
environments.
Keywords: Early language learning, Computational and robotic models, machine
learning, development, embodiment, social interaction, intrinsic motivation,
self-organization, dynamical systems, complexity.
Comment: to appear in International Handbook on Language Development, ed. J. Horst and J. von Koss Torkildsen, Routledge.
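Cross-situational statistical learning, one of the mechanisms the review discusses, can be sketched as a simple co-occurrence counting model (an illustrative assumption, not the specific formalism of any reviewed work): across many ambiguous situations, the consistent word-referent pairing accumulates the highest count.

```python
from collections import defaultdict

# Toy sketch of cross-situational statistical learning. Each "situation"
# pairs the words heard with the objects present; within one situation the
# mapping is ambiguous, but across situations the true pairing dominates.
situations = [
    (["ball", "dog"], ["BALL", "DOG"]),
    (["ball", "cup"], ["BALL", "CUP"]),
    (["dog", "cup"], ["DOG", "CUP"]),
]

counts = defaultdict(lambda: defaultdict(int))
for words, objects in situations:
    for w in words:
        for o in objects:
            counts[w][o] += 1  # every co-present word-object pair gets a vote

# A word's inferred referent is the object it co-occurred with most often.
lexicon = {w: max(obj_counts, key=obj_counts.get)
           for w, obj_counts in counts.items()}
print(lexicon)  # → {'ball': 'BALL', 'dog': 'DOG', 'cup': 'CUP'}
```

Bayesian and neural-network formalisms reviewed in the chapter refine this same idea with probabilistic inference or learned representations rather than raw counts.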
Efficient learning of sequential tasks for collaborative robots: a neurodynamic approach
Integrated master's dissertation in Industrial Electronics and Computers Engineering.
In recent years, there has been increasing demand for collaborative robots able to interact and cooperate with ordinary people, sharing physical space and working closely with them on joint tasks in both industrial and domestic environments. In some scenarios, these robots will encounter tasks that cannot be fully designed beforehand, creating a need for flexibility and adaptation to changing environments.
This dissertation aims to endow robots with the ability to acquire knowledge of sequential tasks using the
Programming by Demonstration (PbD) paradigm. Concretely, it extends the learning models - based on Dynamic
Neural Fields (DNFs) - previously developed in the Mobile and Anthropomorphic Robotics Laboratory (MARLab), at
the University of Minho, to the collaborative robot Sawyer, which is amongst the newest collaborative robots on the
market. The main goal was to endow Sawyer with the ability to learn a sequential task from tutors’ demonstrations,
through a natural and efficient process.
The developed work can be divided into three main tasks: (1) first, a previously developed neuro-cognitive
control architecture for extracting the sequential structure of a task was implemented and tested on Sawyer,
combined with a Short-Term Memory (STM) mechanism that memorizes a sequence in one shot, aiming to reduce
the number of demonstration trials; (2) second, the previous model was extended to incorporate workspace
information and action selection in a Human-Robot Collaboration (HRC) scenario in which robot and human co-worker coordinate their actions to construct a structure; and (3) third, the STM mechanism was also extended
to memorize ordinal and temporal aspects of the sequence, demonstrated by tutors with different behavioral time
scales.
The implemented models contributed to a more intuitive and practical interaction with the robot for human
co-workers. The STM model made learning possible from few demonstrations, satisfying the requirement of an
efficient learning method. Moreover, recalling the memorized information allowed Sawyer to move from a learning
role to a teaching one, gaining the capability of assisting inexperienced
co-workers.
This work was carried out within the scope of the project “PRODUTECH SIF - Soluções para a Indústria do Futuro”, reference POCI-01-0247-FEDER-024541, co-funded by “Fundo Europeu de Desenvolvimento Regional (FEDER)” through “Programa Operacional Competitividade e Internacionalização (POCI)”.
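The one-shot serial-order memory described above can be sketched with a primacy-gradient scheme, in the spirit of dynamic-neural-field models of sequence learning (an illustrative assumption, not the MARLab implementation): each demonstrated step is stored with an activation that decays with its serial position, and recall repeatedly selects and then suppresses the most active step.

```python
# Minimal sketch of one-shot sequence memorization via a primacy gradient.
# Assumes each step label in the demonstration is unique.

def memorize(steps, decay=0.8):
    # One demonstration suffices: step i is stored with activation decay**i,
    # so earlier steps carry stronger activation than later ones.
    return {step: decay ** i for i, step in enumerate(steps)}

def recall(memory):
    mem = dict(memory)
    order = []
    while mem:
        step = max(mem, key=mem.get)  # strongest activation is recalled first
        order.append(step)
        del mem[step]                 # suppress the recalled step
    return order

# Hypothetical assembly task demonstrated once by a tutor:
demo = ["pick_base", "insert_column", "attach_top"]
stm = memorize(demo)
print(recall(stm))  # reproduces the demonstrated order
```

In a field-based implementation, the "activations" would be self-sustained bumps in a dynamic neural field and the suppression would come from inhibitory feedback, but the ordering principle is the same.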
Advances in Human-Robot Interaction
Rapid advances in the field of robotics have made it possible to use robots not just in industrial automation but also in entertainment, rehabilitation, and home service. Since robots will likely affect many aspects of human existence, fundamental questions of human-robot interaction must be formulated and, if at all possible, resolved. Some of these questions are addressed in this collection of papers by leading HRI researchers.
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios
that are challenging for traditional cameras, such as those requiring low latency, high speed, or high dynamic
range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
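The asynchronous output described above can be sketched as a stream of (timestamp, x, y, polarity) tuples; a naive way to process it, assumed here purely for illustration, is to accumulate the signed brightness changes into a 2D frame (real event-based algorithms typically avoid this frame conversion to preserve the microsecond timing).

```python
import numpy as np

# Sketch of the event-camera output format: each event encodes the time,
# pixel location, and sign of a per-pixel brightness change.
H, W = 4, 6
events = [
    (0.000012, 1, 2, +1),  # (timestamp in seconds, x, y, polarity)
    (0.000015, 1, 2, +1),
    (0.000020, 3, 0, -1),
]

# Naive accumulation: sum signed changes per pixel into a frame.
frame = np.zeros((H, W), dtype=int)
for t, x, y, p in events:
    frame[y, x] += p

print(frame)  # nonzero only where brightness changed
```

Note how only pixels that change produce output, which is the source of the low power consumption and reduced redundancy compared with fixed-rate frame cameras.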