Development of new intelligent autonomous robotic assistant for hospitals
Continuous technological development in modern societies has increased the quality of life and average lifespan of people. This places an extra burden on current healthcare infrastructure, and in turn creates an opportunity to develop new autonomous assistive robots that help alleviate this extra workload.
The research question explored the extent to which a prototypical robotic platform can be created and implemented in a hospital environment, with the aim of assisting hospital staff with daily tasks such as guiding patients and visitors, following patients to ensure their safety, and making deliveries to and from rooms and workstations.
In terms of major contributions, this thesis outlines five domains of the development of an actual robotic assistant prototype. Firstly, a comprehensive schematic design is presented in which mechanical, electrical, motor-control and kinematics solutions have been examined in detail. Secondly, a new method has been proposed for assessing the intrinsic properties of different flooring types, using machine learning to classify mechanical vibrations. Thirdly, the technical challenge of enabling the robot to simultaneously map and localise itself in a dynamic environment has been addressed, whereby leg detection is introduced to ensure that, whilst mapping, the robot is able to distinguish between people and the background. The fourth contribution is the integration of geometric collision prediction into stabilised dynamic navigation methods, thus optimising the navigation system's ability to update path plans in real time in a dynamic environment. Lastly, the problem of detecting gaze at long distances has been addressed by means of a new eye-tracking hardware solution which combines infra-red eye tracking and depth sensing.
The research serves both to provide a template for the development of comprehensive mobile assistive-robot solutions, and to address some of the inherent challenges currently present in introducing autonomous assistive robots into hospital environments.
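The flooring-type classification described above lends itself to a compact illustration: short windows of mechanical-vibration (accelerometer) data are reduced to spectral features and fed to a classifier. The sketch below is a hedged approximation, not the thesis implementation; the feature set, sampling rate, window length, and floor-type labels are all invented for the example.

```python
# Minimal sketch: classifying floor types from vibration windows (assumed setup).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def spectral_features(window, fs=1000.0):
    """Reduce one accelerometer window to a small spectral feature vector."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    power = spectrum ** 2
    centroid = np.sum(freqs * power) / np.sum(power)   # spectral centroid
    return np.array([power.mean(), power.std(), centroid, freqs[np.argmax(power)]])

# Stand-in data: real use would record windows while driving over known floors.
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 512))              # 200 windows of 512 samples
labels = rng.integers(0, 3, size=200)              # e.g. 0=carpet, 1=tile, 2=vinyl

X = np.array([spectral_features(w) for w in windows])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

With real recordings in place of the random stand-in data, this pipeline (windowing, window function, spectral features, classifier) is a common baseline for vibration-based terrain classification.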
Developing a Semi-autonomous Robot to Engage Children with Special Needs and Their Peers in Robot-Assisted Play
Despite the wide variety of robots used in human-robot interaction (HRI) scenarios, the potential of robots as connectors whilst acting as play mediators has not been fully explored.
Robots present an opportunity to redefine traditional game scenarios by being physical embodiments of agents/game elements. Robot-assisted play has been used to reduce the barriers that children with physical special needs experience. However, many projects focus on child-robot interaction rather than child-child interaction. In an attempt to address this gap, a semi-autonomous mobile robot, MyJay, was created. This thesis discusses the successful development of MyJay and its potential contribution to future HRI studies.
MyJay is an open-source robot that plays a basketball-like game. It features light and color for communicative feedback, omni-directional mobility, robust mechanisms, adjustable levels of autonomy for dynamic interaction, and a child-friendly, aesthetically pleasing outer shell.
The design process included target users, namely children with special needs and therapists, in order to create a robot that supports repeated use, engagement, and long-term interaction. A hybrid approach combining user-centered design and co-design was taken to involve stakeholders, demonstrating that children can be included in the creation process even when in-person co-design sessions are not possible, as was the case during COVID-19.
Aside from the care taken to meet user requirements, the robot was designed with researchers in mind, featuring extensible software and ROS compatibility. The frame is constructed from aluminum to ensure rigidity, and most functional parts related to gameplay are 3D printed to allow for quick swapping, should a need to change game mechanics arise. The modularity in software and in mechanical aspects should increase the potential of MyJay as a valuable research tool for future HRI studies.
Finally, a novel framework to simulate the teleoperation difficulties experienced by individuals with upper-limb mobility challenges is proposed, along with a dynamic assistance algorithm to aid in the teleoperation process.
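What such dynamic teleoperation assistance could look like can be sketched generically: blend the user's command with an autonomous assist command, weighting the assistance by an estimate of how much the user is struggling. The function below is an assumed, illustrative scheme, not MyJay's actual algorithm; the command format and the difficulty value are invented for the example.

```python
# Illustrative shared-control blending for an omni-directional base (assumed).
import numpy as np

def blend_commands(user_cmd, assist_cmd, difficulty):
    """Blend (vx, vy, omega) commands; difficulty in [0, 1].

    0 = full user control, 1 = full autonomous assistance.
    """
    alpha = float(np.clip(difficulty, 0.0, 1.0))
    return (1.0 - alpha) * np.asarray(user_cmd) + alpha * np.asarray(assist_cmd)

user = (0.4, 0.1, 0.0)     # noisy joystick input from the child
assist = (0.5, 0.0, 0.0)   # planner output aimed at the basket
print(blend_commands(user, assist, difficulty=0.6))   # -> [0.46 0.04 0.  ]
```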
Patient classification for the adaptation of an intelligent wheelchair (Classificação de pacientes para adaptação de cadeira de rodas inteligente)
Doctorate in Informatics Engineering.
The importance and concern given to the autonomy and independence of
elderly people and patients suffering from some kind of disability have been growing significantly in the last few decades. Intelligent wheelchairs (IW) are technologies that can increase the autonomy and independence of this population and are nowadays a very active research area. However, the adaptation of IWs to users' specific characteristics, and experimentation with real users, are topics that still lack in-depth study.
The intelligent wheelchair, developed in the context of the IntellWheels project, is controlled at a high level through a flexible multimodal interface, using voice commands, facial expressions, head movements and a joystick as its main input modalities. This work aimed to develop a system enabling the automatic adaptation of the previously developed intelligent wheelchair to the characteristics of each user.
A methodology was developed for building a user model. The research was based on a data-gathering system enabling the collection and storage of voice commands, facial expressions, and head and body movements from several patients with distinct disabilities, such as cerebral palsy. The wheelchair can be used in different situations in real and simulated environments, and a serious game was developed in which different tasks may be performed by users.
The data were analysed using knowledge-discovery methods in order to create an automatic patient-classification system. Based on this classification system, a methodology was developed for selecting the best wheelchair interface and command language for each patient.
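As a hedged sketch of this selection step, suppose a classifier trained on per-patient interaction features predicts the dominant usable modality, which is then mapped to an interface and a command language. The features, classes, and mapping below are invented for illustration; they are not the IntellWheels implementation.

```python
# Illustrative sketch: from patient classification to interface selection (assumed).
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-patient features: [voice_accuracy, head_motion_range, joystick_precision]
train_X = [[0.9, 0.2, 0.3], [0.2, 0.8, 0.4], [0.3, 0.3, 0.9], [0.85, 0.1, 0.2]]
train_y = [0, 1, 2, 0]   # 0 = voice, 1 = head movements, 2 = joystick

clf = DecisionTreeClassifier(random_state=0).fit(train_X, train_y)

# Each predicted class maps to an interface and a command language (illustrative).
RECOMMENDATION = {
    0: ("voice interface", "short single-word commands"),
    1: ("head-movement interface", "directional gesture commands"),
    2: ("joystick interface", "continuous velocity control"),
}

interface, language = RECOMMENDATION[clf.predict([[0.25, 0.75, 0.35]])[0]]
print(f"recommended: {interface} with {language}")
```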
Evaluation was performed in the context of Project FCT/RIPD/ADA/109636/2009 – “IntellWheels – Intelligent Wheelchair with Flexible Multimodal Interface”. Experiments were conducted with a large set of patients suffering from severe physical constraints, in close collaboration with Escola Superior de Tecnologia de Saúde do Porto and Associação do Porto de Paralisia Cerebral.
The experiments using the intelligent wheelchair were followed by user questionnaires. The results were statistically analysed in order to demonstrate the effectiveness and usability of adapting the intelligent wheelchair's multimodal interface to the user's characteristics. The results obtained in a simulated environment showed a System Usability Scale score of 67, based on the opinions of a sample of cerebral palsy patients at the most severe levels, IV and V, of the Gross Motor Function Classification System. It was also statistically demonstrated that the interface recommended by the data-analysis system received a higher evaluation than the one suggested by the occupational therapists, showing the usefulness of defining a command language adapted to each user. Experiments conducted with distinct control modes revealed the users' preference for shared control, with an assistance level that takes the severity of the patient's impairment into account. In conclusion, this work demonstrates that it is possible to adapt an intelligent wheelchair to its user, with clear usability and safety benefits.
Mobile Robots Navigation
Mobile robot navigation includes different interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, the construction of a spatial representation from the sensory information perceived; (iv) localization, the strategy for estimating the robot's position within the spatial map; (v) path planning, the strategy for finding a path towards a goal location, optimal or otherwise; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors around the world. Research cases are documented in 32 chapters organized into 7 categories, described next.
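To make activity (v) concrete, here is a minimal, self-contained path-planning example: A* search on a small occupancy grid with a Manhattan-distance heuristic. It is a generic textbook sketch, not code from any chapter of the book.

```python
# A* on a 4-connected occupancy grid: 0 = free cell, 1 = occupied cell.
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # admissible heuristic
    frontier = [(h(start), 0, start, [start])]                # (f, g, node, path)
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None   # goal unreachable

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))   # routes around the wall of occupied cells
```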
Robotics 2010
Without a doubt, robotics has made incredible progress over the last decades. The vision of developing, designing and creating technical systems that help humans achieve hard and complex tasks has led to an incredible variety of solutions. Few technical fields exhibit more interdisciplinary interconnections than robotics. This stems from the highly complex challenges posed by robotic systems, especially the requirement for intelligent and autonomous operation. This book tries to give an insight into the evolutionary process taking place in robotics. It provides articles covering a wide range of this exciting area. The progress of technical challenges and concepts may illuminate the relationship between developments that seem completely different at first sight. Robotics remains an exciting scientific and engineering field. The community looks optimistically ahead and looks forward to future challenges and new developments.
Toward Human-Robot Interaction with Flying Robots: A User-Accompanying Model and Sensing Interface (飛行ロボットにおける人間・ロボットインタラクションの実現に向けて : ユーザー同伴モデルとセンシングインターフェース)
Degree type: Course-based doctorate (課程博士). Examination committee: (chair) Associate Professor 矢入 健久, The University of Tokyo; Professor 堀 浩一, The University of Tokyo; Professor 岩崎 晃, The University of Tokyo; Professor 土屋 武司, The University of Tokyo; Professor 溝口 博, Tokyo University of Science. University of Tokyo (東京大学).
Multi-sensor fusion for human-robot interaction in crowded environments
Robot assistants are becoming a promising solution to the challenges associated with the ageing population. Human-Robot Interaction (HRI) allows a robot to understand the intention of humans in an environment and react accordingly. This thesis proposes HRI techniques to facilitate the transition of robots from lab-based research to real-world environments. The HRI aspects addressed in this thesis are illustrated in the following scenario: an elderly person, engaged in conversation with friends, wishes to attract a robot's attention. This composite task consists of many problems. The robot must detect and track the subject in a crowded environment. To engage with the user, it must track their hand movement. Knowledge of the subject's gaze would ensure that the robot does not react to the wrong person. Understanding the subject's group participation would enable the robot to respect existing human-human interaction. Many existing solutions to these problems are too constrained for natural HRI in crowded environments. Some require initial calibration or static backgrounds. Others deal poorly with occlusions, illumination changes, or real-time operation requirements. This work proposes algorithms that fuse multiple sensors to remove these restrictions and increase accuracy over the state of the art. The main contributions of this thesis are: a hand and body detection method, with a probabilistic algorithm for their real-time association when multiple users and hands are detected in crowded environments; an RGB-D sensor-fusion hand tracker, which increases position and velocity accuracy by combining a depth-image-based hand detector with Monte-Carlo updates using colour images; a sensor-fusion gaze-estimation system, combining IR and depth cameras on a mobile robot to give better accuracy than traditional visual methods, without the constraints of traditional IR techniques; and a group detection method, based on sociological concepts of static and dynamic interactions, which incorporates real-time gaze estimates to enhance detection accuracy.
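The Monte-Carlo fusion idea behind the hand tracker can also be sketched generically. The toy particle filter below fuses two 1-D position measurements, standing in for the depth-based detector and the colour-image likelihood; every noise parameter and measurement value is an assumption for illustration, not the thesis implementation.

```python
# Toy 1-D particle filter fusing two sensor likelihoods (assumed parameters).
import numpy as np

rng = np.random.default_rng(1)
particles = rng.normal(0.0, 0.5, size=500)   # hand x-position hypotheses (metres)

def step(particles, depth_meas, colour_meas,
         motion_std=0.02, depth_std=0.05, colour_std=0.10):
    # Predict: diffuse particles with a simple random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=len(particles))
    # Update: multiply per-sensor Gaussian likelihoods -- the fusion step.
    w = np.exp(-0.5 * ((depth_meas - particles) / depth_std) ** 2)
    w *= np.exp(-0.5 * ((colour_meas - particles) / colour_std) ** 2)
    w = (w + 1e-300) / (w + 1e-300).sum()    # normalise, guarding against underflow
    # Resample particles in proportion to their fused weights.
    return particles[rng.choice(len(particles), size=len(particles), p=w)]

particles = step(particles, depth_meas=0.31, colour_meas=0.28)
print("fused hand-position estimate:", particles.mean())
```

Because the depth likelihood is narrower than the colour likelihood in this sketch, the fused estimate is pulled closer to the depth measurement, loosely mirroring how combining the two modalities can improve accuracy.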