3,433 research outputs found

    In-home and remote use of robotic body surrogates by people with profound motor deficits

    By controlling robots comparable to the human body, people with profound motor deficits could potentially perform a variety of physical tasks for themselves, improving their quality of life. The extent to which this is achievable has been unclear due to the lack of suitable interfaces by which to control robotic body surrogates and a dearth of studies involving substantial numbers of people with profound motor deficits. We developed a novel, web-based augmented reality interface that enables people with profound motor deficits to remotely control a PR2 mobile manipulator from Willow Garage, which is a human-scale, wheeled robot with two arms. We then conducted two studies to investigate the use of robotic body surrogates. In the first study, 15 novice users with profound motor deficits from across the United States controlled a PR2 in Atlanta, GA to perform a modified Action Research Arm Test (ARAT) and a simulated self-care task. Participants achieved clinically meaningful improvements on the ARAT and 12 of 15 participants (80%) successfully completed the simulated self-care task. Participants agreed that the robotic system was easy to use, was useful, and would provide a meaningful improvement in their lives. In the second study, one expert user with profound motor deficits had free use of a PR2 in his home for seven days. He performed a variety of self-care and household tasks, and also used the robot in novel ways. Taking both studies together, our results suggest that people with profound motor deficits can improve their quality of life using robotic body surrogates, and that they can gain benefit with only low-level robot autonomy and without invasive interfaces. However, methods to reduce the rate of errors and increase operational speed merit further investigation. (43 pages, 13 figures)

    Designing for physically disabled users: benefits from human motion capture – a case study

    Purpose: The present study aimed to improve the design of an interface that may help disabled children to play a musical instrument. The main point is to integrate human motion capture into the design process. Method: The participant performed 20 pointing movements toward four selected locations. Three one-way analyses of variance (ANOVA) were performed in order to determine the most efficient input location. For each button position, we compared (1) the reaction time (RT), (2) the movement time (MT), and (3) the spatial variability of the movements. Results: According to the results obtained for RT and MT, one position emerged as the most efficient button location. Conclusions: As the case study showed, combining the 3D motion capture system with statistical analysis helped the designers refine their design methodology and make crucial choices.
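As a hedged illustration of the analysis this abstract describes, a one-way ANOVA comparing movement time across four button positions might look like the sketch below. The data are synthetic placeholders generated for the example, not the study's measurements, and the mean values are arbitrary assumptions.

```python
# Sketch: one-way ANOVA on movement time (MT) across four button positions,
# in the spirit of the analysis described in the abstract.
# The samples below are synthetic, not the study's data.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Hypothetical mean MTs (seconds) for the four positions; 20 movements each.
mt_by_position = [
    rng.normal(loc=mu, scale=0.05, size=20)
    for mu in (0.42, 0.45, 0.40, 0.55)
]

f_stat, p_value = f_oneway(*mt_by_position)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

The same pattern would be repeated for RT and spatial variability, giving the three one-way ANOVAs the abstract mentions; a significant result would then be followed by pairwise comparisons to identify the best position.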

    RGB-D-based Action Recognition Datasets: A Survey

    Human action recognition from RGB-D (Red, Green, Blue and Depth) data has attracted increasing attention since the first work was reported in 2010. Over this period, many benchmark datasets have been created to facilitate the development and evaluation of new algorithms. This raises the question of which dataset to select and how to use it to provide a fair and objective comparative evaluation against state-of-the-art methods. To address this issue, this paper provides a comprehensive review of the most commonly used action recognition related RGB-D video datasets, including 27 single-view datasets, 10 multi-view datasets, and 7 multi-person datasets. The detailed information and analysis of these datasets provide a useful resource for guiding insightful selection of datasets for future research. In addition, the issues with current algorithm evaluation vis-à-vis the limitations of the available datasets and evaluation protocols are also highlighted, resulting in a number of recommendations for the collection of new datasets and the use of evaluation protocols.

    Autonomous and scalable control for remote inspection with multiple aerial vehicles

    © 2016 Elsevier B.V. A novel approach to the autonomous generation of trajectories for multiple aerial vehicles is presented, whereby an artificial kinematic field provides autonomous control in a distributed and highly scalable manner. The kinematic field is generated relative to a central target and is modified when a vehicle is in close proximity to another in order to avoid collisions. This control scheme is then applied to the mock visual inspection of a nuclear intermediate-level waste storage drum. The inspection is completed using two commercially available quadcopters in a laboratory environment, with the acquired visual inspection data processed and photogrammetrically meshed to generate a three-dimensional surface-meshed model of the drum. This paper contributes to the field of multi-agent coverage path planning for structural inspection and provides experimental validation of the control and inspection results.
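The "artificial kinematic field" idea can be sketched as a per-vehicle velocity command composed of a field defined relative to the central target plus a repulsive term that activates near other vehicles. The gains, orbit radius, and safety distance below are illustrative assumptions, not the paper's values, and the field shape (orbiting around the target) is one plausible choice among many.

```python
# Sketch of a distributed kinematic-field controller: each vehicle
# computes its own velocity from (a) a field relative to a central
# target and (b) local repulsion from nearby vehicles. All constants
# are illustrative assumptions.
import numpy as np

K_RADIAL, K_TANGENT, K_REPEL = 1.0, 0.8, 2.0  # assumed gains
ORBIT_R, SAFE_DIST = 2.0, 1.0                 # assumed radii (metres)

def field_velocity(pos, target, neighbours):
    """2-D velocity command for one vehicle; inputs are NumPy arrays."""
    rel = pos - target
    r = np.linalg.norm(rel) + 1e-9
    radial = -K_RADIAL * (r - ORBIT_R) * rel / r           # settle onto orbit radius
    tangent = K_TANGENT * np.array([-rel[1], rel[0]]) / r  # circulate around target
    v = radial + tangent
    for other in neighbours:                               # collision avoidance term
        d = pos - other
        dist = np.linalg.norm(d) + 1e-9
        if dist < SAFE_DIST:
            v += K_REPEL * (SAFE_DIST - dist) * d / dist   # push apart when close
    return v

# A vehicle outside the orbit is pulled inward, circulates, and is
# pushed away from a close neighbour.
v = field_velocity(np.array([3.0, 0.0]), np.zeros(2), [np.array([3.0, 0.4])])
```

Because each vehicle needs only the target position and the positions of nearby vehicles, the scheme is distributed and scales with the number of vehicles, which matches the scalability claim in the abstract.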

    Rehabilitation of Stroke Patients with Sensor-based Systems


    Gesture Recognition and Control Part 1 - Basics, Literature Review & Different Techniques

    This exploratory paper series reveals the technological aspects of the Gesture Controlled User Interface (GCUI) and identifies trends in technology, application, and usability. It finds that GCUI now affords realistic opportunities for specific application areas, especially for users who are uncomfortable with more commonly used input devices. It further collates chronological research information covering past work in the literature review. Researchers investigated different types of gestures, their uses, applications, technology, issues, and results from existing research.

    Haptic control of eye movements.

    Eye-hand coordination is crucial to many important tasks. A nonlinear dynamical systems (NLDS) framework assumes that eyes and hands are interacting facets of one complex oculo-motor system in which physiological and task constraints interact to shape overall system behavior. Participants (N=13) in this study played a first-person video game with either a traditional GameCube controller or a motion-sensing Wiimote controller. Eye movement and hand movement time series data were analyzed with nonlinear statistical methods in the search for evidence of multifractal structure. Multiple Hölder exponents were obtained for both conditions, indicating that eye and hand movements were multifractal. Hand movement data in both conditions contained brown noise, indicative of short-term correlations in the time series. Eye movements in both conditions contained pink noise, indicative of long-term correlations, although the signal in the Wiimote condition was pinker, suggesting perhaps more orderly eye movements. Mean eye movement Hölder exponents in the Wiimote condition were pinker than in the GameCube condition. Eye movements change depending on the constraints of the hand.
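The pink-versus-brown distinction in this abstract is a spectral one: pink noise has power that falls off as roughly 1/f, brown noise as roughly 1/f². A simplified, monofractal proxy for the study's analysis is to estimate the spectral exponent β from the slope of the log-log power spectrum, as in the sketch below; the study itself used multifractal (Hölder-exponent) methods, which this does not reproduce.

```python
# Sketch: classifying a movement time series by its spectral exponent
# beta, where power ~ 1/f^beta (beta ~ 1: pink noise; beta ~ 2: brown
# noise). A simplified illustration, not the study's pipeline.
import numpy as np

def spectral_exponent(x):
    """Estimate beta from the low-frequency slope of the periodogram."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x))
    # Fit only the low-frequency scaling region, where 1/f^beta holds best.
    mask = (freqs > 0) & (freqs < 0.1)
    slope, _intercept = np.polyfit(np.log(freqs[mask]), np.log(psd[mask]), 1)
    return -slope

rng = np.random.default_rng(1)
brown = np.cumsum(rng.standard_normal(4096))  # integrated white noise
beta = spectral_exponent(brown)               # should land near 2
```

Applying the same estimator to eye and hand movement series would give one number per series, whereas the multifractal analysis in the study yields a spectrum of Hölder exponents per series.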