6 research outputs found

    'Elbows Out' - Predictive tracking of partially occluded pose for robot-assisted dressing

    © 2016 IEEE. Robots that can assist in the activities of daily living, such as dressing, may support older adults, addressing the needs of an aging population in the face of a growing shortage of care professionals. Using depth cameras during robot-assisted dressing can lead to occlusions and loss of user tracking, which may result in unsafe trajectory planning or prevent the planning task from proceeding altogether. For the dressing task of putting on a jacket, which is addressed in this letter, tracking of the arm is lost when the user's hand enters the jacket, which may lead to unsafe situations for the user and a poor interaction experience. Using motion tracking data, free from occlusions, gathered from a human-human interaction study on an assisted dressing task, recurrent neural network models were built to predict the elbow position of a single arm based on other features of the user pose. The best features for predicting the elbow position were explored using regression trees, which indicated the hips and shoulder as possible predictors. Engineered features were also created, based on observations of real dressing scenarios, and their effectiveness explored. A comparison between position- and orientation-based datasets was also included in this study. A 12-fold cross-validation was performed for each feature set and repeated 20 times to improve statistical power. Using position-based data, the elbow position could be predicted with a 4.1 cm error; adding engineered features reduced the error to 2.4 cm. Adding orientation information to the data did not improve the accuracy, and aggregating univariate response models failed to make significant improvements. The model was evaluated on Kinect data for a robot dressing task and, although not without issues, demonstrates potential for this application. Although the technique has been demonstrated for jacket dressing, it could be applied to a number of other situations involving occluded tracking.
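
    As a rough sketch of how such a recurrent model could be set up, the example below uses an LSTM to map a short window of pose features to a 3-D elbow position. This is a minimal illustration, not the authors' implementation: the choice of LSTM, the feature count, the window length, and the PyTorch framing are all assumptions.

        import torch
        import torch.nn as nn

        class ElbowPredictor(nn.Module):
            def __init__(self, n_features=9, hidden_size=64):
                super().__init__()
                self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
                self.head = nn.Linear(hidden_size, 3)  # x, y, z of the elbow

            def forward(self, x):                # x: (batch, time, n_features)
                out, _ = self.lstm(x)
                return self.head(out[:, -1])     # predict from the last frame

        model = ElbowPredictor()
        optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        # Stand-in data: 128 windows of 30 frames with 9 pose features each
        # (e.g. hip and shoulder coordinates); targets are elbow positions.
        poses = torch.randn(128, 30, 9)
        elbows = torch.randn(128, 3)
        for _ in range(10):
            optimiser.zero_grad()
            loss = loss_fn(model(poses), elbows)
            loss.backward()
            optimiser.step()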

    Multidimensional Capacitive Sensing for Robot-Assisted Dressing and Bathing

    Robotic assistance presents an opportunity to benefit the lives of many people with physical disabilities, yet accurately sensing the human body and tracking human motion remain difficult for robots. We present a multidimensional capacitive sensing technique that estimates the local pose of a human limb in real time. A key benefit of this sensing method is that it can sense the limb through opaque materials, including fabrics and wet cloth. Our method uses a multielectrode capacitive sensor mounted to a robot's end effector. A neural network model estimates the position of the closest point on a person's limb and the orientation of the limb's central axis relative to the sensor's frame of reference. These pose estimates enable the robot to move its end effector with respect to the limb using feedback control. We demonstrate that a PR2 robot can use this approach with a custom six-electrode capacitive sensor to assist with two activities of daily living: dressing and bathing. The robot pulled the sleeve of a hospital gown onto able-bodied participants' right arms while tracking human motion. When assisting with bathing, the robot moved a soft wet washcloth to follow the contours of able-bodied participants' limbs, cleaning their surfaces. Overall, we found that multidimensional capacitive sensing presents a promising approach for robots to sense and track the human body during assistive tasks that require physical human-robot interaction.
    Comment: 8 pages, 16 figures, International Conference on Rehabilitation Robotics 201
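
    As a rough illustration of the sensing-plus-control loop described above, the sketch below trains a small network to map six capacitance readings to a local limb pose and uses a proportional controller to servo the end effector toward it. It is a minimal sketch, not the authors' implementation: the network size, the pose parameterisation, the stand-in data, and the velocity-command interface are all assumptions.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Stand-in training data: six electrode capacitances mapped to
        # [y, z, pitch, yaw], i.e. the offset of the closest point on the
        # limb and the orientation of its central axis in the sensor frame.
        X = np.random.rand(2000, 6)
        y = np.random.rand(2000, 4)
        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X, y)

        def control_step(capacitances, gain=0.5):
            """One feedback step: drive the sensed offsets toward zero."""
            offset_y, offset_z, pitch, yaw = net.predict([capacitances])[0]
            # Velocity command for the end effector (hypothetical interface).
            return -gain * np.array([offset_y, offset_z, pitch, yaw])

        print(control_step(np.random.rand(6)))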

    "Elbows out": predictive tracking of partially occluded pose for robot-assisted dressing

    © 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    Robots that can assist in the Activities of Daily Living (ADL), such as dressing, may support older adults, addressing the needs of an aging population in the face of a growing shortage of care professionals. Using depth cameras during robot-assisted dressing can lead to occlusions and loss of user tracking, which may result in unsafe trajectory planning or prevent the planning task from proceeding altogether. For the dressing task of putting on a jacket, which is addressed in this work, tracking of the arm is lost when the user's hand enters the jacket, which may lead to unsafe situations for the user and a poor interaction experience. Using motion tracking data, free from occlusions, gathered from a human-human interaction (HHI) study on an assisted dressing task, recurrent neural network models were built to predict the elbow position of a single arm based on other features of the user pose. The best features for predicting the elbow position were explored using regression trees, which indicated the hips and shoulder as possible predictors. Engineered features were also created, based on observations of real dressing scenarios, and their effectiveness explored. A comparison between position- and orientation-based datasets was also included in this study. A 12-fold cross-validation was performed for each feature set and repeated 20 times to improve statistical power. Using position-based data, the elbow position could be predicted with a 4.1 cm error; adding engineered features reduced the error to 2.4 cm. Adding orientation information to the data did not improve the accuracy, and aggregating univariate response models failed to make significant improvements. The model was evaluated on Kinect data for a robot dressing task and, although not without issues, demonstrates potential for this application. Although the technique has been demonstrated for jacket dressing, it could be applied to a number of other situations involving occluded tracking.
    Peer Reviewed
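
    The evaluation protocol described above can also be sketched in code: a 12-fold cross-validation repeated 20 times, here paired with a regression tree whose feature importances hint at which joints best predict the elbow. The feature names, data, and model settings below are illustrative stand-ins, not the authors' setup.

        import numpy as np
        from sklearn.model_selection import RepeatedKFold
        from sklearn.tree import DecisionTreeRegressor

        features = ["hip_x", "hip_y", "hip_z",
                    "shoulder_x", "shoulder_y", "shoulder_z"]
        X = np.random.rand(600, len(features))   # stand-in pose features
        y = np.random.rand(600, 3)               # stand-in elbow positions (m)

        errors = []
        cv = RepeatedKFold(n_splits=12, n_repeats=20, random_state=0)
        for train_idx, test_idx in cv.split(X):
            tree = DecisionTreeRegressor(max_depth=5)
            tree.fit(X[train_idx], y[train_idx])
            pred = tree.predict(X[test_idx])
            # Mean Euclidean distance between predicted and true positions.
            errors.append(np.mean(np.linalg.norm(pred - y[test_idx], axis=1)))

        print(f"mean error over all folds: {np.mean(errors):.3f} m")
        print(dict(zip(features, tree.feature_importances_.round(3))))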

    "Elbows out": predictive tracking of partially occluded pose for robot-assisted dressing

    No full text
    © 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.Robots that can assist in the Activities of Daily Living (ADL), such as dressing, may support older adults, addressing the needs of an aging population in the face of a growing shortage of care professionals. Using depth cameras during robot-assisted dressing can lead to occlusions and loss of user tracking which may result in unsafe trajectory planning or prevent the planning task proceeding altogether. For the dressing task of putting on a jacket, which is addressed in this work, tracking of the arm is lost when the user’s hand enters the jacket which may lead to unsafe situations for the user and a poor interaction experience. Using motion tracking data, free from occlusions, gathered from a human-human interaction (HHI) study on an assisted dressing task, recurrent neural network models were built to predict the elbow position of a single arm based on other features of the user pose. The best features for predicting the elbow position were explored by using regression trees indicating the hips and shoulder as possible predictors. Engineered features were also created based on observations of real dressing scenarios and their effectiveness explored. Comparison between position and orientation based datasets was also included in this study. A 12-fold cross-validation was performed for each feature set and repeated 20 times to improve statistical power. Using position based data the elbow position could be predicted with a 4.1cm error but adding engineered features reduced the error to 2.4cm. Adding orientation information to the data did not improve the accuracy and aggregating univariate response models failed to make significant improvements. The model was evaluated on Kinect data for a robot dressing task and although not without issues, demonstrates potential for this application. Although this has been demonstrated for jacket dressing, the technique could be applied to a number of different situations during occluded tracking.Peer Reviewe

    Enhancing tele-operation - Investigating the effect of sensory feedback on performance

    The decline in the number of healthcare service providers relative to the growing number of service users prompts the development of technologies to improve the efficiency of healthcare services. One such technology is the assistive robot, remotely tele-operated to provide care and support for older adults and for people living with disabilities. Tele-operation makes it possible to provide human-in-the-loop robotic assistance while also addressing safety concerns about the use of autonomous robots around humans. Unlike in many other applications of robot tele-operation, safety is particularly significant here because tele-operated assistive robots are used in close proximity to vulnerable human users. It is therefore important to provide tele-operators with as much information as possible about the robot and its workspace, to ensure safety as well as efficiency.
    Since robot tele-operation is relatively unexplored in the context of assisted living, this thesis explores different feedback modalities that may be employed to communicate sensor information to tele-operators. It presents the research as it transitioned from identifying and evaluating feedback modalities that may supplement video feedback to exploring different strategies for communicating those modalities. Because some of the sensors and feedback devices needed were not readily available, several design iterations were carried out to develop the necessary hardware and software for the studies.
    The first human study investigated the effect of feedback on tele-operator performance, measured in terms of task completion time, ease of use of the system, number of robot joint movements, and success or failure of the task. The effect of verbal feedback between the tele-operator and service users was also investigated. Feedback modalities have differing effects on performance metrics, so the choice of optimal feedback may vary from task to task. Results show that participants preferred scenarios with verbal feedback over scenarios without it, a preference also reflected in their performance. Gaze metrics from the study also showed that it may be possible to understand how tele-operators interact with the system from their areas of interest as they carry out tasks. These findings suggest that such studies can be used to improve the design of tele-operation systems.
    The need for social interaction between the tele-operator and service user suggests that the visual and auditory modalities will already be engaged as tasks are carried out, further reducing the number of sensory modalities through which information can be communicated to tele-operators. A wrist-worn, Wi-Fi-enabled haptic feedback device was therefore developed, and a study was carried out to investigate haptic sensitivity across the wrist. Results suggest that different locations on the wrist vary in their sensitivity to haptic stimulation, both with and without video distraction, and across different durations and amplitudes of stimulation. This suggests that dynamic control of haptic feedback can be used to improve haptic perception across the wrist, and that it may be possible to display more than one type of sensor data to tele-operators during a task.
    The final study investigated whether participants could differentiate between different types of sensor data conveyed through different locations on the wrist via haptic feedback, and how performance changed with repeated attempts. Total task completion time decreased with task repetition, and participants with prior gaming and robot experience showed a greater reduction than those without. Completion time fell across all stages of the task, though the size of the reduction varied by stage, and participants with supplementary feedback had higher completion times than participants without it. Gripper trajectory length also decreased with repetition; participants with supplementary feedback produced longer trajectories than those without, while participants with prior gaming experience produced shorter trajectories than those without. Perceived workload likewise decreased with task repetition, but participants with feedback reported higher perceived workload than participants without feedback, whereas participants without feedback reported higher frustration. Overall, the results show that the effect of supplementary feedback may not be significant where participants can get the necessary information from video feedback; however, participants depended fully on the supplementary feedback when video could not provide the information they needed.
    The findings presented in this thesis have potential applications in healthcare and in other applications of robot tele-operation and feedback. They can be used to improve feedback designs for tele-operation systems and so ensure safe and efficient tele-operation, and the thesis also shows how visual feedback can be combined with other feedback modalities. The haptic feedback device designed in this research could also be used to provide situational awareness for the visually impaired.
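
    As a concrete illustration of the kind of sensor-to-haptic mapping explored in this thesis, the sketch below scales a proximity reading into a vibration amplitude at a chosen wrist location and sends it to a wrist-worn device over Wi-Fi. It is hypothetical throughout: the device address, the message format, and the distance-to-amplitude mapping are assumptions, not the interface actually developed in this work.

        import json
        import socket

        DEVICE_ADDR = ("192.168.0.50", 9000)   # hypothetical device endpoint
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

        def send_haptic(distance_m, location="dorsal", max_range_m=0.5):
            """Scale proximity to vibration amplitude: closer means stronger."""
            amplitude = max(0.0, min(1.0, 1.0 - distance_m / max_range_m))
            msg = {"location": location, "amplitude": amplitude,
                   "duration_ms": 100}
            sock.sendto(json.dumps(msg).encode(), DEVICE_ADDR)

        send_haptic(0.12)   # e.g. gripper 12 cm from an obstacle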