Exploring Robot Teleoperation in Virtual Reality
This thesis presents research on VR-based robot teleoperation, focusing on remote environment visualisation in virtual reality, the effect of the remote environment's reconstruction scale in virtual reality on the human operator's ability to control the robot, and the operator's visual attention patterns when teleoperating a robot from virtual reality.
A VR-based robot teleoperation framework was developed. It is compatible with various robotic systems and cameras, allowing teleoperation and supervised control with any ROS-compatible robot, and visualisation of the environment through any ROS-compatible RGB or RGBD camera. The framework includes mapping, segmentation, tactile exploration, and non-physically demanding VR interface navigation and controls through any Unity-compatible VR headset and controllers or haptic devices.
Point clouds are a common way to visualise remote environments in 3D, but they often suffer from distortions and occlusions, making it difficult to represent objects' textures accurately. This can lead to poor decision-making during teleoperation if objects are misrepresented in the VR reconstruction. A study using an end-effector-mounted RGBD camera with OctoMap mapping was conducted to explore the remote environment with fewer point cloud distortions and occlusions while using relatively little bandwidth. Additionally, a tactile exploration study proposed a novel method for visually presenting information about objects' materials in the VR interface, to improve the operator's decision-making and address the challenges of point cloud visualisation.
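The bandwidth saving behind volumetric mapping can be illustrated with a minimal voxel-occupancy sketch. This is only a simplification of the idea: OctoMap itself uses a probabilistic octree, and the resolution value here is arbitrary.

```python
from collections import defaultdict
import random

def voxelize(points, resolution=0.05):
    """Quantize 3D points into fixed-size voxels and count hits per voxel.

    A crude stand-in for OctoMap-style occupancy mapping: streaming a
    sparse set of occupied voxels instead of raw point clouds needs far
    less bandwidth. (OctoMap proper maintains per-voxel occupancy
    probabilities in an octree; this flat grid only shows the compression.)
    """
    grid = defaultdict(int)
    for x, y, z in points:
        key = (int(x // resolution), int(y // resolution), int(z // resolution))
        grid[key] += 1
    return grid

# 10,000 points sampled on a small planar patch collapse into a few voxels.
random.seed(0)
pts = [(random.uniform(0, 0.2), random.uniform(0, 0.2), 0.0) for _ in range(10000)]
occupied = voxelize(pts, resolution=0.05)
```

Transmitting the handful of occupied voxel keys is orders of magnitude cheaper than streaming all 10,000 raw points each frame.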
Two studies were conducted to understand the effect of dynamic virtual world scaling on teleoperation flow. The first study investigated rate-mode control with constant and variable mapping of the operator's joystick position to the speed (rate) of the robot's end-effector, depending on the virtual world scale. The results showed that variable mapping allowed participants to teleoperate the robot more effectively, but at the cost of increased perceived workload.
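The difference between the two mapping conditions can be sketched as follows; the gain value and the convention that the gain scales linearly with the virtual world scale are illustrative assumptions, not values from the thesis.

```python
def rate_command(joystick, world_scale, variable=True, base_gain=0.10):
    """Map joystick deflection in [-1, 1] to end-effector speed (m/s).

    With constant mapping the gain is fixed regardless of how the virtual
    world is scaled; with variable mapping the gain follows the world
    scale, so control resolution changes as the operator rescales the
    scene. Gain and scaling law are hypothetical, for illustration only.
    """
    if not -1.0 <= joystick <= 1.0:
        raise ValueError("joystick deflection must be in [-1, 1]")
    gain = base_gain * world_scale if variable else base_gain
    return gain * joystick

half_stick_variable = rate_command(0.5, world_scale=2.0)                  # scale-dependent
half_stick_constant = rate_command(0.5, world_scale=2.0, variable=False)  # fixed gain
```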
The second study examined how operators used the virtual world scale in supervised control, comparing participants' chosen scales at the beginning and end of a three-day experiment. The results showed that, as operators improved at the task, they as a group used a different virtual world scale, and participants' prior video gaming experience also affected the scale they chose.
Finally, a visual attention study investigated how operators' visual attention changes as they become better at teleoperating a robot using the framework. The results revealed which objects in the VR-reconstructed remote environment were most important, as indicated by operators' visual attention patterns, and showed how their visual priorities shifted as they improved. The study also demonstrated that operators' prior video gaming experience affects both their ability to teleoperate the robot and their visual attention behaviours.
Autonomous Navigation of Mobile Robots: Marker-based Localization System and On-line Path
Traditional wheelchairs are controlled mainly by joystick, which is not a suitable solution for people with major disabilities. This thesis aims to create a human-machine interface and software that performs indoor autonomous navigation of the commercial wheelchair RoboEye, developed at the Measurements Instrumentations Robotic Laboratory at the University of Trento in collaboration with Robosense and Xtrensa. RoboEye is an intelligent wheelchair that aims to provide independence and autonomy of movement to people affected by serious mobility problems caused by impairing pathologies (for example ALS, amyotrophic lateral sclerosis).
The thesis is divided into two main parts: creating the human-machine interface and integrating existing services into the developed solution, and developing a possible solution for how the wheelchair can navigate using eye-tracking technologies, TOF cameras, odometric localization, and ArUco markers.
The developed interface supports manual, semi-autonomous, and autonomous navigation, while following user-experience practices specific to eye-tracking devices and to people with major disabilities. The application was developed in Unity 3D using C# scripts, following a state-machine approach with multiple scenes and components.
The suggested solution satisfies the user's need to navigate hands-free, with as little fatigue as possible. Moreover, the user can choose a destination from points of interest defined in advance and reach it with no further input needed. The user interface is intuitive and clear for both experienced and inexperienced users, and the user can choose the UI's icon images, scale, and font size. The software runs as a state-machine module, which was tested with users through test cases. The path-planning routine is solved using Dijkstra's algorithm and proved to be efficient.
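The Dijkstra-based path-planning routine can be sketched as follows; the waypoint graph and its edge costs below are hypothetical, standing in for the thesis's predefined points of interest.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a weighted graph given as {node: [(neighbour, cost), ...]}.

    A minimal sketch of Dijkstra's algorithm as used for waypoint path
    planning; returns (path, total_cost), or (None, inf) if unreachable.
    """
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, already relaxed via a cheaper path
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    if goal not in dist:
        return None, float("inf")
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Hypothetical points of interest: the corridor detour beats the direct edge.
rooms = {
    "door":     [("corridor", 2.0), ("kitchen", 6.0)],
    "corridor": [("kitchen", 1.5)],
}
path, cost = dijkstra(rooms, "door", "kitchen")
```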
Mobile localization: approach and applications
Localization is critical to a number of wireless network applications, and in many situations GPS is not suitable. This dissertation (i) develops novel localization schemes for wireless networks by explicitly incorporating mobility information and (ii) applies localization to physical analytics, i.e., understanding shoppers' behavior within retail spaces by leveraging inertial sensors, Wi-Fi, and vision enabled by smart glasses.

More specifically, we first focus on multi-hop mobile networks, analyze real mobility traces, and observe that they exhibit temporal stability and low-rank structure. Motivated by these observations, we develop novel localization algorithms to effectively capture and adapt to different degrees of these properties. Using extensive simulations and testbed experiments, we demonstrate the accuracy and robustness of our new schemes.

Second, we focus on localizing a single mobile node that may not be connected with multiple nodes (e.g., without network connectivity, or connected only to an access point). We propose trajectory-based localization using Wi-Fi or magnetic field measurements and show that these measurements have the potential to uniquely identify a trajectory. We then develop a novel approach that leverages multi-level wavelet coefficients to first identify the trajectory and then localize to a point on it. Indoor and outdoor experiments show that this approach is highly accurate and power efficient.

Finally, localization is a critical step in enabling many applications, an important one being physical analytics. Physical analytics has the potential to provide deep insight into shoppers' interests and activities, and therefore better advertisements, recommendations, and a better shopping experience.
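The coarse-to-fine matching idea can be illustrated with a minimal multi-level Haar decomposition of a 1-D measurement trace. This is a simplification of the dissertation's scheme, and the signal values below are made up.

```python
def haar_levels(signal, levels=3):
    """Multi-level Haar wavelet decomposition of a 1-D measurement trace.

    Returns (details, approximation): one list of detail coefficients per
    level, plus the final coarse approximation. The idea sketched here is
    that coarse coefficients give a compact fingerprint for matching a
    Wi-Fi or magnetic trace against known trajectories, with finer levels
    refining the match.
    """
    approx = list(signal)
    details = []
    for _ in range(levels):
        if len(approx) < 2:
            break
        pairs = list(zip(approx[0::2], approx[1::2]))
        approx, det = [], []
        for a, b in pairs:
            approx.append((a + b) / 2.0)  # local average: coarser view
            det.append((a - b) / 2.0)     # local difference: detail
        details.append(det)
    return details, approx

# A made-up 8-sample magnetic-field trace along a corridor.
sig = [4, 2, 5, 5, 8, 0, 1, 3]
details, approx = haar_levels(sig, levels=3)
```

Production systems would typically use an orthonormal transform from a library such as PyWavelets rather than this hand-rolled average/difference variant; the structure of the coefficients is the same.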
To enable physical analytics, we build the ThirdEye system, which achieves zero-effort localization by leveraging emergent devices like Google Glass to build AutoLayout, fusing video, Wi-Fi, and inertial sensor data to simultaneously localize shoppers while constructing and updating the product layout in a virtual coordinate space. Further, ThirdEye comprises a range of schemes that use a combination of vision and inertial sensing to study mobile users' behavior while shopping, namely: walking, dwelling, gazing, and reaching out. We show the effectiveness of ThirdEye through an evaluation in two large retail stores in the United States.
Investigating the influence of situations and expectations on user behavior: empirical analyses in human-robot interaction
Lohse M. Investigating the influence of situations and expectations on user behavior: empirical analyses in human-robot interaction. Bielefeld (Germany): Bielefeld University; 2010.

Social sciences are becoming increasingly important for robotics research as work goes on to enable service robots to interact with inexperienced users. This endeavor can only be successful if the robots learn to interpret the users' behavior reliably and, in turn, provide feedback that enables the users to understand the robot.
In order to achieve this goal, the thesis introduces an approach to describe the interaction situation as a dynamic construct with different levels of specificity. The situation concept is the starting point for a model which aims to explain the users' behavior. The second important component of the model is the expectations of the users with respect to the robot. Both the situation and the expectations are shown to be the main determinants of the users' behaviors.
With this theoretical background in mind, the thesis examines interactions from a home tour scenario in which a human teaches a robot about rooms and the objects within them. To analyze the human expectations and behaviors in this situation, two novel methods have been developed. In particular, a quantitative method for the analysis of the users' behavior repertoires (speech, gesture, eye gaze, body orientation, etc.) is introduced. The approach focuses on the interaction level, which describes the interplay between the robot and the user. The second novel method also takes the system level into account, which includes the robot components and their interplay. This method serves for a detailed task analysis and helps to identify problems that occur in the interaction.
By applying these methods, the thesis contributes to the identification of underlying expectations that allow future behavior of the users to be predicted in particular situations. Knowledge about the users' behavior repertoires serves as a cue for the robot about the state of the interaction and the task the users aim to accomplish. Therefore, it enables robot developers to adapt the interaction models of the components to the situation, actual user expectations, and behaviors. The work provides a deeper understanding of the role of expectations in human-robot interaction and contributes to the interaction and system design of interactive robots.
A Retro-Projected Robotic Head for Social Human-Robot Interaction
As people respond strongly to faces and facial features, both consciously and subconsciously, faces are an essential aspect of social robots. Robotic faces and heads until recently belonged to one of the following categories: virtual, mechatronic or animatronic. As an original contribution to the field of human-robot interaction, I present the R-PAF technology (Retro-Projected Animated Faces): a novel robotic head displaying a real-time, computer-rendered face, retro-projected from within the head volume onto a mask, as well as its driving software designed with openness and portability to other hybrid robotic platforms in mind.

The work constitutes the first implementation of a non-planar mask suitable for social human-robot interaction, comprising key elements of social interaction such as precise gaze direction control, facial expressions and blushing, and the first demonstration of an interactive video-animated facial mask mounted on a 5-axis robotic arm. The LightHead robot, an R-PAF demonstrator and experimental platform, has demonstrated robustness in both extended controlled and uncontrolled settings. The iterative hardware and facial design, details of the three-layered software architecture and tools, the implementation of life-like facial behaviours, as well as improvements in social-emotional robotic communication are reported. Furthermore, a series of evaluations present the first study on human performance in reading robotic gaze and another first on users' ethnic preference towards a robot face.
Affective robotics for socio-emotional development in children with autism spectrum disorders
Doctoral thesis of the Doctoral Programme in Electronic and Computer Engineering.

Autism Spectrum Disorders (ASD) are a group of complex developmental disorders of the brain. Individuals affected by this disorder are characterized by repetitive patterns of behaviour, restricted activities or interests, and impairments in social communication. The use of robots has already been shown to encourage the promotion of social interaction and of skills lacking in children with ASD. The main goal of this thesis is to study the influence of humanoid robots on the development of socio-emotional skills in children with ASD. The investigation demonstrates the potential benefits a robotic tool provides in attracting the attention of children with ASD, and therefore in using that focus to develop further skills.

The main focus of this thesis is divided into three topics. The first topic concerns the use of a robot to encourage learning appropriate physical social engagement, and to facilitate the ability to acquire knowledge about human body parts. The results show that the robot proved to be a useful tool, attracting the children's attention and improving their knowledge about human body parts. The second topic regards the process of designing game scenarios to be used with children with ASD, targeting the promotion of emotion recognition skills. Three game scenarios were developed based on the expertise of professionals, and they were successfully tested in pilot studies. Finally, the last topic presents two child-robot interaction studies with a large sample. They examine the use of a humanoid robot as a tool to teach recognition and labelling of emotions. The first study focuses on verbal and non-verbal communicative behaviours as measures to evaluate social interaction; children interacting with the robot displayed more non-verbal behaviours indicating social engagement. The second study analyses the children's attention patterns and their performance in the game scenarios previously designed. Over the sessions, the children increased their eye contact with the experimenter, and in the study comparing the use of the robot with a traditional intervention, children who performed the game scenarios with the robot and the experimenter had a significantly better performance than the children who performed the game scenarios without the robot.

The main conclusions of this research support that a humanoid robot is a useful tool for developing socio-emotional skills in interventions for children with ASD, due to the engagement and positive learning outcomes observed.

Funded by Fundação para a Ciência e Tecnologia (FCT) in the scope of the project PEst-OE/EEI/UI0319/2014. This work was performed in part under the R&D project RIPD/ADA/109407/2009 and the SFRH/BD/71600/2010 scholarship.
Towards adaptive and autonomous humanoid robots: from vision to actions
Although robotics research has seen advances over the last decades, robots are still not in widespread use outside industrial applications. Yet a range of proposed scenarios have robots working together with, helping, and coexisting with humans in daily life. In all of these, a clear need arises to deal with a more unstructured, changing environment. I herein present a system that aims to overcome the limitations of highly complex robotic systems in terms of autonomy and adaptation. The main focus of the research is to investigate the use of visual feedback for improving the reaching and grasping capabilities of complex robots. To facilitate this, a combined integration of computer vision and machine learning techniques is employed. From a robot vision point of view, combining domain knowledge from both image processing and machine learning can expand the capabilities of robots. I present a novel framework called Cartesian Genetic Programming for Image Processing (CGP-IP). CGP-IP can be trained to detect objects in incoming camera streams and was successfully demonstrated on many different problem domains. The approach requires only a few training images (it was tested with 5 to 10 images per experiment) and is fast, scalable, and robust. Additionally, it can generate human-readable programs that can be further customized and tuned. While CGP-IP is a supervised-learning technique, I show an integration on the iCub that allows for the autonomous learning of object detection and identification. Finally, this dissertation includes two proof-of-concept integrations of the motion and action sides. First, reactive reaching and grasping is shown: it allows the robot to avoid obstacles detected in the visual stream while reaching for the intended target object. Furthermore, the integration enables the robot to be used in non-static environments, i.e., the reaching is adapted on the fly from the visual feedback received, e.g., when an obstacle is moved into the trajectory. The second integration highlights the capabilities of these frameworks by improving visual detection through object manipulation actions.
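The Cartesian Genetic Programming encoding that CGP-IP builds on can be illustrated with a toy scalar evaluator. CGP-IP's real function set consists of image-processing primitives (filters, thresholds, morphological operations); the arithmetic functions and genome below are hypothetical, showing only how a CGP genotype is decoded and executed.

```python
import operator

# Toy function set: each CGP node applies one of these to two earlier values.
FUNCS = [operator.add, operator.sub, operator.mul, lambda a, b: max(a, b)]

def evaluate(genome, inputs, output_idx):
    """Evaluate a linear CGP genome.

    Each gene is (func_index, in1, in2), where in1/in2 address either the
    program inputs (indices < len(inputs)) or the outputs of earlier
    nodes. The program's result is read from the node at output_idx.
    """
    values = list(inputs)
    for f, i, j in genome:
        values.append(FUNCS[f](values[i], values[j]))
    return values[output_idx]

# Node 2 computes x + y; node 3 computes node2 * x, i.e. (x + y) * x.
genome = [(0, 0, 1), (2, 2, 0)]
result = evaluate(genome, inputs=[3.0, 4.0], output_idx=3)
```

Evolution then proceeds by mutating the integer genes (function indices and connections) and keeping the fittest genome, which is what makes the representation compact and the resulting programs human-readable.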