Elevator's External Button Recognition and Detection for Vision-based System
Autonomous transporters have recently begun to offer assistance and delivery services to users, but most operate only in single-floor environments. To widen the field of robotics, robots are taught to use elevators, since elevators provide an essential means of fast movement between levels. However, most mobile service robots fail to detect an elevator's position because of the complex background and the reflections on the elevator door and button panel. This paper presents a new recognition strategy that detects an elevator efficiently by detecting its external button. The Sobel operator is used for edge detection, estimating the absolute gradient magnitude at each point of an input grayscale image. We enhance this technique by combining it with a Wiener filter, which reduces the noise present in a signal by comparing it with an estimate of the desired noiseless signal. The filter helps eliminate reflections on the elevator's button panel before the image is converted to black and white (binarization). This is followed by morphological operations with structuring elements. Tests have been carried out, and the results show that the elevator's external button can be recognized and detected by this framework.
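The pipeline the abstract describes (Wiener denoising, Sobel gradient magnitude, binarization, morphology) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the kernel sizes, the 0.5 threshold, the choice of closing as the morphological step, and the pure-NumPy stand-ins for the Wiener filter and Sobel operator are all my assumptions.

```python
import numpy as np

def conv2(img, k):
    """Tiny 'same'-size 2-D correlation with zero padding (for small odd kernels)."""
    kh, kw = k.shape
    p = np.pad(img, ((kh // 2,), (kw // 2,)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def wiener_denoise(img, k=5):
    """Simplified adaptive Wiener filter (local mean/variance form);
    suppresses noise and specular reflections in near-flat regions."""
    box = np.ones((k, k)) / (k * k)
    mu = conv2(img, box)
    var = np.maximum(conv2(img ** 2, box) - mu ** 2, 0.0)
    noise = var.mean()                       # noise power estimated from the image
    gain = np.maximum(var - noise, 0.0) / np.maximum(var, 1e-12)
    return mu + gain * (img - mu)

def binary_close(mask, k=3):
    """Morphological closing (dilate, then erode) with a k x k structuring element."""
    box = np.ones((k, k))
    dilated = conv2(mask.astype(float), box) > 0
    return conv2(dilated.astype(float), box) >= k * k - 0.5

def detect_button_edges(gray, thresh=0.5):
    """Wiener denoise -> Sobel gradient magnitude -> binarize -> closing."""
    sx = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])        # horizontal Sobel kernel
    d = wiener_denoise(gray)
    mag = np.hypot(conv2(d, sx), conv2(d, sx.T))
    mask = mag > thresh * mag.max()          # simple global binarization
    return binary_close(mask)                # clean up with structuring element
```

On a synthetic frame with a bright square "button" on a dark panel, the returned mask traces the button's outline while flat regions (including uniform reflections smoothed away by the Wiener step) stay clear.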
Efficient learning of sequential tasks for collaborative robots: a neurodynamic approach
Integrated master's dissertation in Electronic, Industrial and Computer Engineering.

In recent years, there has been an increasing demand for collaborative robots able to interact and cooperate with ordinary people in several human environments, sharing physical space and working closely with
people in joint tasks, both within industrial and domestic environments. In some scenarios, these robots will
come across tasks that cannot be fully designed beforehand, resulting in a need for flexibility and adaptation to
the changing environments.
This dissertation aims to endow robots with the ability to acquire knowledge of sequential tasks using the
Programming by Demonstration (PbD) paradigm. Concretely, it extends the learning models - based on Dynamic
Neural Fields (DNFs) - previously developed in the Mobile and Anthropomorphic Robotics Laboratory (MARLab), at
the University of Minho, to the collaborative robot Sawyer, which is amongst the newest collaborative robots on the
market. The main goal was to endow Sawyer with the ability to learn a sequential task from tutors’ demonstrations,
through a natural and efficient process.
The developed work can be divided into three main tasks: (1) first, a previously developed neuro-cognitive
control architecture for extracting the sequential structure of a task was implemented and tested in Sawyer,
combined with a Short-Term Memory (STM) mechanism to memorize a sequence in one-shot, aiming to reduce
the number of demonstration trials; (2) second, the previous model was extended to incorporate workspace
information and action selection in a Human-Robot Collaboration (HRC) scenario, where robot and human co-worker coordinate their actions to construct the structure; and (3) third, the STM mechanism was also extended
to memorize ordinal and temporal aspects of the sequence, demonstrated by tutors with different behavior time
scales.
The models implemented contributed to a more intuitive and practical interaction with the robot for human
co-workers. The STM model made learning possible from only a few demonstrations, complying with the requirement of an efficient learning method. Moreover, recalling the memorized information allowed Sawyer to evolve from a learning position to a teaching one, gaining the capability of assisting inexperienced
co-workers.

This work was carried out within the scope of the project “PRODUTECH SIF - Soluções para a Indústria do Futuro”, reference POCI-01-0247-FEDER-024541, co-funded by the “Fundo Europeu de Desenvolvimento Regional (FEDER)” through the “Programa Operacional Competitividade e Internacionalização (POCI)”.
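The Dynamic Neural Field models this dissertation builds on are typically variants of Amari's field equation, in which a transient localized input leaves behind a self-sustained activation bump that acts as a working memory. The following is a minimal, self-contained 1-D sketch of that mechanism only, with parameters chosen for illustration; it is not the MARLab or Sawyer implementation.

```python
import numpy as np

def simulate_field(stim_center, n=101, dt=0.05, steps_on=200, steps_off=200):
    """Euler-integrate a 1-D Amari field:  du/dt = -u + h + S(x,t) + (w * f(u))(x).
    A Gaussian stimulus is applied for steps_on steps, then removed."""
    x = np.arange(n)
    dists = np.abs(x[:, None] - x[None, :])
    # Lateral interaction kernel: local excitation, broader inhibition
    w = 4.0 * np.exp(-dists ** 2 / (2 * 3.0 ** 2)) \
        - 1.5 * np.exp(-dists ** 2 / (2 * 9.0 ** 2))
    stim = 5.0 * np.exp(-(x - stim_center) ** 2 / (2 * 2.0 ** 2))
    h = -2.0                                  # resting level below threshold
    u = np.full(n, h, dtype=float)
    f = lambda u: (u > 0).astype(float)       # Heaviside output nonlinearity
    for t in range(steps_on + steps_off):
        s = stim if t < steps_on else 0.0     # stimulus removed halfway through
        u += dt * (-u + h + s + w @ f(u))
    return u
```

After the stimulus is switched off, recurrent excitation keeps a bump of supra-threshold activation centered on the stimulated location, which is the one-shot memory property the STM mechanism above exploits.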
Multi-modal Task Instructions to Robots by Naive Users
This thesis presents a theoretical framework for the design of user-programmable
robots. The objective of the work is to investigate multi-modal unconstrained natural
instructions given to robots in order to design a learning robot. A corpus-centred
approach is used to design an agent that can reason, learn and interact with a human in a
natural unconstrained way. The corpus-centred design approach is formalised and
developed in detail. It requires the developer to record a human during interaction and
analyse the recordings to find instruction primitives. These are then implemented into a
robot. The focus of this work has been on how to combine speech and gesture using
rules extracted from the analysis of a corpus. A multi-modal integration algorithm is
presented that uses timing and semantics to group, match, and unify gesture and language. The algorithm always achieves correct pairings on the corpus, and initiates questions to the user in ambiguous cases or when information is missing. The domain of card
games has been investigated, because of its variety of games which are rich in rules and
contain sequences. A further focus of the work is on the translation of rule-based
instructions. Most multi-modal interfaces to date have only considered sequential
instructions. The combination of frame-based reasoning, a knowledge base organised as an ontology, and a problem-solver engine is used to store these rules. Understanding rule instructions, which contain conditional and imaginary situations, requires an agent with complex reasoning capabilities. A test system of the agent implementation is also
described. Tests to confirm the implementation by playing back the corpus are
presented. Furthermore, deployment test results with the implemented agent and human
subjects are presented and discussed. The tests showed that the rate of errors caused by sentences not being defined in the grammar does not decrease at an acceptable rate when new grammar is introduced. This was particularly the case for complex verbal rule instructions, which can be expressed in a large variety of ways.
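The thesis describes its multi-modal integration algorithm only at a high level here. A toy sketch of the timing-based part is given below; the class names, the 1.5 s window, and the greedy pairing strategy are my assumptions, not the thesis's exact algorithm. Each deictic word is bound to the temporally closest unused gesture, and ambiguous or unmatched cases become clarification questions for the user.

```python
from dataclasses import dataclass

@dataclass
class SpeechToken:
    word: str
    t: float              # utterance time in seconds
    needs_gesture: bool   # deictic words ("this", "there") need a referent

@dataclass
class Gesture:
    target: str           # object the pointing gesture resolved to
    t: float

def integrate(speech, gestures, max_gap=1.5):
    """Bind each deictic word to the single gesture within max_gap seconds;
    with zero or several candidates, collect a question instead of guessing."""
    unused = list(gestures)
    bindings, questions = [], []
    for tok in speech:
        if not tok.needs_gesture:
            continue
        candidates = [g for g in unused if abs(g.t - tok.t) <= max_gap]
        if len(candidates) == 1:
            g = candidates[0]
            unused.remove(g)
            bindings.append((tok.word, g.target))
        else:
            questions.append(tok.word)   # ambiguous or missing: ask the user
    return bindings, questions
```

For example, "put this there" with pointing gestures at 0.5 s and 2.1 s yields two unambiguous bindings, while two near-simultaneous gestures around a single "this" produce a clarification question rather than an arbitrary pairing.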
Optimizing for Robot Transparency
As robots become more capable and commonplace, it becomes increasingly important that they are transparent to humans. People need to have accurate mental models of a robot, so that they can anticipate what it will do, know when and where not to rely on it, and understand why it failed. This helps engineers ensure the safety and robustness of the robot systems they develop, and enables human end-users to interact more safely and seamlessly with robots.

This thesis introduces a framework for producing robot behavior that increases transparency. Our key insight is that a robot's actions do not just influence the physical world; they also inevitably influence a human observer's mental model of the robot. We attempt to model the latter---how humans might make inferences about a robot's objectives, policy, and capabilities from observations of its behavior---so that we can then present examples of robot behavior that optimally bring the human's understanding closer to the true robot model. In this way, our framework casts transparency as an optimization problem.

Part I introduces our framework of optimizing for robot transparency and applies it in three ways: communicating a robot's objectives, which situations it can handle, and why it is incapable of performing a task. Part II investigates how transparency is useful not just for safe and seamless interaction, but also for learning. When humans teach a robot, giving human teachers transparency regarding what the robot has learned so far makes it easier for them to select informative teaching examples.
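The "transparency as optimization" idea can be illustrated with a toy observer model; this is my sketch under a Boltzmann-rational observer assumption, not the thesis's exact formulation. The robot evaluates each candidate demonstration by how strongly it shifts the observer's posterior over objectives toward the true one, and shows the best.

```python
import math

def posterior(rewards, prior, beta=2.0):
    """Boltzmann observer: after seeing a trajectory, the observer believes
    P(theta | traj) proportional to prior[theta] * exp(beta * R_theta(traj)).
    `rewards` maps each candidate objective theta to the shown trajectory's
    reward under that objective."""
    unnorm = {th: prior[th] * math.exp(beta * r) for th, r in rewards.items()}
    z = sum(unnorm.values())
    return {th: v / z for th, v in unnorm.items()}

def most_transparent(candidates, prior, true_theta, beta=2.0):
    """Pick the candidate trajectory whose observation moves the observer's
    belief closest to the robot's true objective."""
    return max(candidates, key=lambda r: posterior(r, prior, beta)[true_theta])
```

A trajectory that scores well under several objectives is ambiguous and teaches the observer little; the optimization instead selects a trajectory whose reward profile distinguishes the true objective from the alternatives.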
Interactive Teaching of a Mobile Robot
Abstract — Personal service robots are expected to help people in their everyday lives in the near future. Such robots must be able not only to move around but also to perform various operations, such as carrying a user-specified object or turning a TV on. Robots working in houses and offices have to deal with a vast variety of environments and operations. Since it is almost impossible to give robots complete knowledge in advance, on-site robot teaching will be important. We are developing a novel teaching framework called task model-based interactive teaching. A task model describes what knowledge is necessary for achieving a task. A robot examines the task model to determine missing pieces of knowledge and asks the user to teach them. By leading the interaction with the user in this way, the user can teach the important (focal) points easily and efficiently. This paper deals with the task of moving to a destination on a different floor; the task includes not only the movement but also the operations of recognizing and pushing elevator buttons. Experimental results show the feasibility of the proposed teaching framework.
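The task-model idea lends itself to a simple sketch: the robot inspects the task model, finds the missing pieces of knowledge, and generates questions for the user. The slot names and phrasing below are invented for illustration and are not taken from the paper.

```python
# Hypothetical slots for the paper's "go to a different floor" task
ELEVATOR_TASK = {
    "elevator_location": "position of the elevator door on this floor",
    "call_button": "appearance and position of the external call button",
    "destination_floor": "which floor to travel to",
}

def questions_for_user(task_model, knowledge):
    """The robot examines the task model and asks only about the missing
    (focal) pieces of knowledge, leading the interaction itself."""
    return [f"Please teach me: {desc}"
            for slot, desc in task_model.items() if slot not in knowledge]
```

If the robot already knows the destination floor, it asks only about the elevator's location and the call button, so the user teaches exactly the focal points and nothing else.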