4,457 research outputs found
Talking About Task Progress: Towards Integrating Task Planning and Dialog for Assistive Robotic Services
The use of service robots to assist ageing people in their own homes has the potential to allow people to maintain their independence, improving their health and quality of life. In many assistive applications, robots perform tasks on people’s behalf that they are unable or unwilling to monitor directly. It is important that users be given useful and appropriate information about task progress. People being assisted in homes and other real-world environments are likely to be engaged in other activities while they wait for a service, so information should also be presented in an appropriate, non-intrusive manner. This paper presents a human-robot interaction experiment investigating what type of feedback people prefer in verbal updates by a service robot about distributed assistive services. People found feedback about time until task completion more useful than feedback about events in task progress or no feedback. We also discuss future research directions that involve giving non-expert users more input into the task planning process when delays or failures occur that necessitate replanning or modifying goals.
Assistive robotics: research challenges and ethics education initiatives
Assistive robotics is a fast-growing field aimed at helping healthcare workers in hospitals, rehabilitation centers and nursing homes, as well as empowering people with reduced mobility at home, so that they can autonomously carry out their daily living activities. The need to function in dynamic human-centered environments poses new research challenges: robotic assistants need to have friendly interfaces, be highly adaptable and customizable, compliant and intrinsically safe around people, and able to handle deformable materials.
Besides technical challenges, assistive robotics also raises ethical challenges, which have led to the emergence of a new discipline: roboethics. Several institutions are developing regulations and standards, and many ethics education initiatives include content on human-robot interaction and human dignity in assistive situations.
In this paper, the state of the art in assistive robotics is briefly reviewed, and educational materials from a university course on Ethics in Social Robotics and AI focusing on the assistive context are presented. (Peer reviewed; postprint, author's final draft.)
Prediction of Human Trajectory Following a Haptic Robotic Guide Using Recurrent Neural Networks
Social intelligence is an important requirement for enabling robots to collaborate with people. In particular, human path prediction is an essential capability for robots in that it prevents potential collisions with a human and allows the robot to safely make larger movements. In this paper, we present a method for predicting the trajectory of a human who follows a haptic robotic guide without using sight, which is valuable for assistive robots that aid the visually impaired. We apply a deep learning method based on recurrent neural networks using multimodal data: (1) human trajectory, (2) movement of the robotic guide, (3) haptic input data measured from the physical interaction between the human and the robot, and (4) human depth data. We collected actual human trajectory and multimodal response data through indoor experiments. Our model outperformed the baseline while using only the robot data with the observed human trajectory, and it shows even better results when using additional haptic and depth data.
Comment: 6 pages, submitted to IEEE World Haptics Conference 201
Empowering and assisting natural human mobility: The Simbiosis Walker
This paper presents the complete development of the Simbiosis Smart Walker. The device is equipped with a set of sensor subsystems to acquire user-machine interaction forces and the temporal evolution of the user's feet during gait. The authors present an adaptive filtering technique used for the identification and separation of different components found in the human-machine interaction forces. This technique allowed isolating the components related to the navigational commands and developing a fuzzy logic controller to guide the device. The smart walker was clinically validated at the Spinal Cord Injury Hospital of Toledo, Spain, and showed high acceptability among spinal cord injury patients and clinical staff.
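The separation of navigational intent from gait-induced force components can be illustrated with a much simpler stand-in than the paper's adaptive filter: a first-order low-pass filter plus a coarse rule-based steering decision. Everything below (filter constant, dead band, signal shapes) is an assumption for illustration, not the Simbiosis implementation:

```python
import numpy as np

# Illustrative sketch: gait produces a roughly periodic oscillation in the
# handlebar forces, while the navigational command is a slowly varying offset.
# A low-pass filter keeps the slow component; a crude rule maps the filtered
# left/right force difference to a steering command (a stand-in for the
# paper's fuzzy logic controller).
def low_pass(signal, alpha=0.02):
    """First-order IIR low-pass; alpha is an illustrative filter constant."""
    out, y = [], 0.0
    for x in signal:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return np.array(out)

def steering_command(f_left, f_right, dead_band=0.5):
    """Map the filtered force difference to a discrete steering command."""
    diff = f_right - f_left
    if abs(diff) < dead_band:
        return "straight"
    return "turn_right" if diff > 0 else "turn_left"

t = np.linspace(0, 4, 400)
intent = 2.0 * np.ones_like(t)             # steady push toward the right
gait = 1.5 * np.sin(2 * np.pi * 2.0 * t)   # ~2 Hz gait-induced oscillation
f_right = low_pass(intent + gait)          # right handle: intent + gait
f_left = low_pass(gait)                    # left handle: gait only
print(steering_command(f_left[-1], f_right[-1]))  # prints "turn_right"
```

The real system uses adaptive filtering precisely because gait frequency varies per user and over time, so a fixed cutoff like the one above would not suffice in practice.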
In-home and remote use of robotic body surrogates by people with profound motor deficits
By controlling robots comparable to the human body, people with profound motor deficits could potentially perform a variety of physical tasks for themselves, improving their quality of life. The extent to which this is achievable has been unclear due to the lack of suitable interfaces by which to control robotic body surrogates and a dearth of studies involving substantial numbers of people with profound motor deficits. We developed a novel, web-based augmented reality interface that enables people with profound motor deficits to remotely control a PR2 mobile manipulator from Willow Garage, which is a human-scale, wheeled robot with two arms. We then conducted two studies to investigate the use of robotic body surrogates. In the first study, 15 novice users with profound motor deficits from across the United States controlled a PR2 in Atlanta, GA to perform a modified Action Research Arm Test (ARAT) and a simulated self-care task. Participants achieved clinically meaningful improvements on the ARAT, and 12 of 15 participants (80%) successfully completed the simulated self-care task. Participants agreed that the robotic system was easy to use, was useful, and would provide a meaningful improvement in their lives. In the second study, one expert user with profound motor deficits had free use of a PR2 in his home for seven days. He performed a variety of self-care and household tasks, and also used the robot in novel ways. Taking both studies together, our results suggest that people with profound motor deficits can improve their quality of life using robotic body surrogates, and that they can gain benefit with only low-level robot autonomy and without invasive interfaces. However, methods to reduce the rate of errors and increase operational speed merit further investigation.
Comment: 43 pages, 13 figures
Assistive Planning in Complex, Dynamic Environments: a Probabilistic Approach
We explore the probabilistic foundations of shared control in complex dynamic environments. To do so, we formulate shared control as a random process and describe the joint distribution that governs its behavior. For tractability, we model the relationships between the operator, autonomy, and crowd as an undirected graphical model. Further, we introduce an interaction function between the operator and the robot that we call "agreeability"; in combination with the methods developed in~\cite{trautman-ijrr-2015}, we extend a cooperative collision avoidance autonomy to shared control. We thereby quantify the notion of simultaneously optimizing over agreeability (between the operator and autonomy) and safety and efficiency in crowded environments. We show that for a particular form of interaction function between the autonomy and the operator, linear blending is recovered exactly. Additionally, to recover linear blending, unimodal restrictions must be placed on the models describing the operator and the autonomy. In turn, these restrictions raise questions about the flexibility and applicability of the linear blending framework. Additionally, we present an extension of linear blending called "operator biased linear trajectory blending" (which formalizes some recent approaches in linear blending such as~\cite{dragan-ijrr-2013}) and show that not only is this also a restrictive special case of our probabilistic approach, but, more importantly, it is statistically unsound and thus mathematically unsuitable for implementation. Instead, we suggest a statistically principled approach that guarantees data is used in a consistent manner, and show how this alternative approach converges to the full probabilistic framework. We conclude by proving that, in general, linear blending is suboptimal with respect to the joint metric of agreeability, safety, and efficiency.
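The linear blending scheme that the abstract above critiques is simple to state concretely: the executed command is a convex combination of the operator's input and the autonomy's input under a scalar arbitration weight. A minimal sketch (the function name and weights are illustrative, not from the paper):

```python
import numpy as np

# Linear blending as commonly formulated in the shared-control arbitration
# literature: u = alpha * u_operator + (1 - alpha) * u_autonomy.
# alpha = 1 yields pure operator control; alpha = 0 yields pure autonomy.
def linear_blend(u_operator, u_autonomy, alpha):
    """Convex combination of operator and autonomy commands."""
    assert 0.0 <= alpha <= 1.0, "arbitration weight must lie in [0, 1]"
    return alpha * np.asarray(u_operator) + (1 - alpha) * np.asarray(u_autonomy)

# Operator steers right, autonomy steers forward; autonomy dominates here.
u = linear_blend([1.0, 0.0], [0.0, 1.0], alpha=0.25)
print(u)  # [0.25 0.75]
```

The paper's argument is that this form is recovered from the full probabilistic model only under unimodal restrictions on the operator and autonomy distributions, which is what makes it a restrictive special case rather than a general solution.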
Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges
In recent years, new research has brought the field of EEG-based Brain-Computer Interfacing (BCI) out of its infancy and into a phase of relative maturity through many demonstrated prototypes such as brain-controlled wheelchairs, keyboards, and computer games. With this proof-of-concept phase in the past, the time is now ripe to focus on the development of practical BCI technologies that can be brought out of the lab and into real-world applications. In particular, we focus on the prospect of improving the lives of countless disabled individuals through a combination of BCI technology with existing assistive technologies (AT). In pursuit of more practical BCIs for use outside of the lab, in this paper, we identify four application areas where disabled individuals could greatly benefit from advancements in BCI technology, namely “Communication and Control”, “Motor Substitution”, “Entertainment”, and “Motor Recovery”. We review the current state of the art and possible future developments, while discussing the main research issues in these four areas. In particular, we expect the most progress in the development of technologies such as hybrid BCI architectures, user-machine adaptation algorithms, the exploitation of users’ mental states for BCI reliability and confidence measures, the incorporation of principles of human-computer interaction (HCI) to improve BCI usability, and the development of novel BCI technology including better EEG devices.
Video prototyping of dog-inspired non-verbal affective communication for an appearance constrained robot
Original article available at http://ieeexplore.ieee.org. This paper presents results from a video human-robot interaction (VHRI) study in which participants viewed a video in which an appearance-constrained Pioneer robot used dog-inspired affective cues to communicate affinity and relationship with its owner and a guest using proxemics, body movement and orientation, and camera orientation. The findings suggest that even with the limited modalities for non-verbal expression offered by a Pioneer robot, which does not have a dog-like appearance, these cues were effective for non-verbal affective communication.