
    Multimodal Control of a Robotic Wheelchair: Using Contextual Information for Usability Improvement

    In this paper, a method for semi-autonomous navigation of a robotic wheelchair is presented. The wheelchair can be driven either in a semi-autonomous mode, in which the user's intention is estimated by a face-pose recognition system, or in manual mode; a speech interface is used to switch between the two modes. The estimator is implemented as a Bayesian network, and the user's intention is modeled as a set of typical destinations visited by the user: the user's habits and points of interest are employed to infer the user's desired destination on a map. Erroneous steering signals coming from the user-machine interface are filtered out, improving the overall performance of the system. Human-aware navigation, path planning and obstacle avoidance are performed by the robotic wheelchair, while the user is only concerned with "looking where he wants to go". The algorithm was implemented on an experimental wheelchair robot; a wheelchair system with more natural and easy-to-use human-machine interfaces is one of the main contributions.
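
    The destination estimator lends itself to a compact illustration. The sketch below is not the authors' implementation: the destination coordinates, the Gaussian noise model on the head-pose bearing, and its sigma are all invented for illustration. It only shows how a belief over typical destinations can be updated recursively from face-pose measurements, in the spirit of the Bayesian network described above.

```python
import math

# Hypothetical sketch: Bayesian update of the probability of each typical
# destination, given a noisy head-pose bearing from a face-pose recogniser.
# Destination coordinates, the Gaussian likelihood and SIGMA are assumptions.

DESTINATIONS = {"kitchen": (4.0, 1.0), "desk": (0.5, 3.0), "door": (-2.0, 2.0)}
SIGMA = math.radians(15.0)  # assumed std. dev. of the head-pose estimate

def likelihood(error, sigma):
    """Unnormalised Gaussian likelihood of a bearing error."""
    return math.exp(-0.5 * (error / sigma) ** 2)

def update_belief(belief, pose_xy, measured_bearing):
    """One Bayesian filtering step: P(d | z) is proportional to P(z | d) P(d)."""
    posterior = {}
    for name, (dx, dy) in DESTINATIONS.items():
        expected = math.atan2(dy - pose_xy[1], dx - pose_xy[0])
        # wrap the angular error into (-pi, pi]
        error = math.atan2(math.sin(measured_bearing - expected),
                           math.cos(measured_bearing - expected))
        posterior[name] = likelihood(error, SIGMA) * belief[name]
    total = sum(posterior.values())
    return {k: v / total for k, v in posterior.items()}

belief = {name: 1.0 / len(DESTINATIONS) for name in DESTINATIONS}
for bearing in [0.30, 0.25, 0.28]:  # three consecutive head-pose readings
    belief = update_belief(belief, (0.0, 0.0), bearing)
print(max(belief, key=belief.get), belief)
```

    In the paper's scheme, a speech command would then decide whether the most probable destination is handed to the autonomous navigation layer or the user keeps manual control.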

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential factors with a detrimental effect are related to suboptimal communication among the staff, poor flow of information, staff workload and fatigue, and ergonomics and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arises for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework towards a "smart" operating suite that will enhance the operator's ergonomics by allowing perceptually-enabled, touchless and natural interaction with the environment. The main feature of the proposed framework is the ability to acquire and utilise the plethora of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope and a gaze-guided assistive robotic system are proposed. Firstly, the gaze-guided robotic scrub nurse is presented: surgical teams performed a simulated surgical task with the assistance of a robotic scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced: experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living. The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.
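
    The thesis details its own framework; as a rough illustration of the gaze-contingent selection pattern common to such systems, the sketch below confirms an instrument choice once gaze fixations dwell inside a screen region long enough. The region bounds, dwell threshold and sample timing are assumptions, not values from the thesis.

```python
# Hypothetical dwell-time selector: fires a selection once gaze samples stay
# inside one region for DWELL_S seconds. All parameters are assumed values.

DWELL_S = 1.0  # assumed dwell threshold in seconds

class DwellSelector:
    def __init__(self, regions, dwell_s=DWELL_S):
        self.regions = regions  # name -> (xmin, ymin, xmax, ymax)
        self.dwell_s = dwell_s
        self.current = None
        self.entered_at = None

    def feed(self, t, gaze_xy):
        """Feed one timestamped gaze sample; return a region name on selection."""
        hit = None
        for name, (x0, y0, x1, y1) in self.regions.items():
            if x0 <= gaze_xy[0] <= x1 and y0 <= gaze_xy[1] <= y1:
                hit = name
                break
        if hit != self.current:
            # gaze moved to a new region (or off all regions): restart the clock
            self.current, self.entered_at = hit, t
            return None
        if hit is not None and t - self.entered_at >= self.dwell_s:
            self.entered_at = float("inf")  # fire only once per entry
            return hit
        return None

sel = DwellSelector({"scalpel": (0, 0, 100, 100), "forceps": (120, 0, 220, 100)})
for i in range(15):  # simulated 10 Hz gaze samples resting on the scalpel
    choice = sel.feed(i * 0.1, (50, 50))
    if choice:
        print("deliver:", choice)
```

    A robotic scrub nurse of the kind described above would subscribe to these selection events and fetch the corresponding instrument.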

    In-home and remote use of robotic body surrogates by people with profound motor deficits

    By controlling robots comparable to the human body, people with profound motor deficits could potentially perform a variety of physical tasks for themselves, improving their quality of life. The extent to which this is achievable has been unclear due to the lack of suitable interfaces by which to control robotic body surrogates and a dearth of studies involving substantial numbers of people with profound motor deficits. We developed a novel, web-based augmented reality interface that enables people with profound motor deficits to remotely control a PR2 mobile manipulator from Willow Garage, which is a human-scale, wheeled robot with two arms. We then conducted two studies to investigate the use of robotic body surrogates. In the first study, 15 novice users with profound motor deficits from across the United States controlled a PR2 in Atlanta, GA to perform a modified Action Research Arm Test (ARAT) and a simulated self-care task. Participants achieved clinically meaningful improvements on the ARAT and 12 of 15 participants (80%) successfully completed the simulated self-care task. Participants agreed that the robotic system was easy to use, was useful, and would provide a meaningful improvement in their lives. In the second study, one expert user with profound motor deficits had free use of a PR2 in his home for seven days. He performed a variety of self-care and household tasks, and also used the robot in novel ways. Taking both studies together, our results suggest that people with profound motor deficits can improve their quality of life using robotic body surrogates, and that they can gain benefit with only low-level robot autonomy and without invasive interfaces. However, methods to reduce the rate of errors and increase operational speed merit further investigation.
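
    The study does not specify its transport layer here, but one plausible way a web-based interface can reach a PR2 is through a rosbridge server, sketched below with the roslibpy client. The host, port and the base-command topic are assumptions about the deployment, not details taken from the paper.

```python
import roslibpy

# Hypothetical sketch of the web-to-robot path: a browser-style Python client
# publishes low-level velocity commands to a PR2 through rosbridge.
# Host, port and topic name are assumptions for illustration only.

ros = roslibpy.Ros(host='pr2.example.org', port=9090)
ros.run()

base = roslibpy.Topic(ros, '/base_controller/command', 'geometry_msgs/Twist')

def drive(linear_x, angular_z):
    """Publish one base velocity command, as a low-autonomy interface might."""
    base.publish(roslibpy.Message({
        'linear':  {'x': linear_x, 'y': 0.0, 'z': 0.0},
        'angular': {'x': 0.0, 'y': 0.0, 'z': angular_z},
    }))

drive(0.1, 0.0)  # creep forward
ros.terminate()
```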

    Robotic wheelchair controlled through a vision-based interface

    In this work, a vision-based control interface for commanding a robotic wheelchair is presented. The interface estimates the orientation angles of the user's head and translates these parameters into maneuver commands for different devices. The performance of the proposed interface is evaluated both in static experiments and when commanding the robotic wheelchair: the interface computes the orientation angles and translates them into reference inputs for the robotic wheelchair. A control architecture based on the dynamic model of the wheelchair is implemented in order to achieve safe navigation. Experimental results of the interface performance and the wheelchair navigation are presented.
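
    As a rough illustration of the head-to-command mapping (the paper's actual controller is based on the wheelchair's dynamic model and is richer than this), the sketch below maps pitch and yaw angles to linear and angular velocity references. The gains, dead-zone and limits are assumed values.

```python
import math

# Minimal sketch, not the authors' controller: head orientation angles are
# mapped to wheelchair velocity references with an assumed dead-zone and
# assumed saturation limits.

MAX_V, MAX_W = 0.6, 0.8       # assumed linear (m/s) and angular (rad/s) limits
DEAD_ZONE = math.radians(5)   # ignore small involuntary head motions

def head_to_command(pitch, yaw):
    """Pitch (nod) sets forward speed; yaw (turn) sets rotation rate."""
    v = 0.0 if abs(pitch) < DEAD_ZONE else \
        MAX_V * max(-1.0, min(1.0, pitch / math.radians(20)))
    w = 0.0 if abs(yaw) < DEAD_ZONE else \
        MAX_W * max(-1.0, min(1.0, yaw / math.radians(30)))
    return v, w

print(head_to_command(math.radians(10), math.radians(-15)))
```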

    Using social cues to estimate possible destinations when driving a robotic wheelchair

    Approaching a group of humans is an important navigation task. Although many methods have been proposed to avoid interrupting groups of people engaged in a conversation, only a few works have considered the proper way of joining such groups. Research in the social sciences has proposed geometric models to compute the best points at which to join a group. In this article we propose a method that uses those points as possible destinations when driving a robotic wheelchair. These points are considered together with other possible destinations in the environment, such as points of interest or typical static destinations defined by the user's habits. The intended destination is inferred using a Dynamic Bayesian Network that takes into account the contextual information of the environment and the user's orders to compute a probability for each destination.
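
    The geometric side of this idea is easy to sketch. The snippet below is only a rough stand-in for the social-science models the authors cite: it places candidate joining points on a circle around the group's centroid, at the angular gaps between members. The radius and member positions are invented values.

```python
import math

# Hypothetical sketch: candidate joining points for a conversational group,
# placed on an assumed-radius circle at the gaps between adjacent members.

def joining_points(members, radius=1.2):
    """Return one candidate joining point per angular gap between members."""
    cx = sum(x for x, _ in members) / len(members)
    cy = sum(y for _, y in members) / len(members)
    angles = sorted(math.atan2(y - cy, x - cx) for x, y in members)
    points = []
    for a, b in zip(angles, angles[1:] + [angles[0] + 2 * math.pi]):
        mid = (a + b) / 2.0  # middle of the gap between two adjacent members
        points.append((cx + radius * math.cos(mid), cy + radius * math.sin(mid)))
    return points

group = [(0.0, 0.0), (1.0, 0.2), (0.5, 1.0)]
for p in joining_points(group):
    print("candidate destination: (%.2f, %.2f)" % p)
```

    Candidate points like these would then enter the Dynamic Bayesian Network alongside static points of interest, each receiving a posterior probability of being the intended destination.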

    Explainable shared control in assistive robotics

    Shared control plays a pivotal role in designing assistive robots to complement human capabilities during everyday tasks. However, traditional shared control relies on users forming an accurate mental model of expected robot behaviour. Without this accurate mental image, users may encounter confusion or frustration whenever their actions do not elicit the intended system response, forming a misalignment between the respective internal models of the robot and human. The Explainable Shared Control paradigm introduced in this thesis attempts to resolve such model misalignment by jointly considering assistance and transparency. There are two perspectives of transparency in Explainable Shared Control: the human's and the robot's. Augmented reality is presented as an integral component that addresses the human viewpoint by visually unveiling the robot's internal mechanisms. The robot perspective, in turn, requires an awareness of human "intent", so a clustering framework built around a deep generative model is developed for human intention inference. Both transparency constructs are implemented atop a real assistive robotic wheelchair and tested with human users. An augmented reality headset is incorporated into the robotic wheelchair and different interface options are evaluated across two user studies to explore their influence on mental model accuracy. Experimental results indicate that this setup facilitates transparent assistance by improving recovery times from adverse events associated with model misalignment. As for human intention inference, the clustering framework is applied to a dataset collected from users operating the robotic wheelchair. Findings from this experiment demonstrate that the learnt clusters are interpretable and meaningful representations of human intent. This thesis serves as a first step in the interdisciplinary area of Explainable Shared Control. The contributions to shared control, augmented reality and representation learning contained within this thesis are likely to help future research advance the proposed paradigm, and thus bolster the prevalence of assistive robots.
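
    The intention-inference side can be caricatured with ordinary clustering. The thesis clusters representations learnt by a deep generative model; the stand-in below instead runs plain K-means on synthetic joystick windows, purely to illustrate the idea of grouping control behaviour into interpretable intent clusters.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in sketch only: the thesis clusters latent codes from a deep
# generative model; here, K-means over synthetic (v, w) joystick windows
# illustrates grouping control behaviour into intent clusters.

rng = np.random.default_rng(0)
forward = rng.normal([1.0, 0.0], 0.1, size=(50, 2))  # mostly linear velocity
turning = rng.normal([0.2, 0.9], 0.1, size=(50, 2))  # mostly angular velocity
windows = np.vstack([forward, turning])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(windows)
for k in range(2):
    print("cluster", k, "mean (v, w):", windows[labels == k].mean(axis=0).round(2))
```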

    A real-time human-robot interaction system based on gestures for assistive scenarios

    Natural and intuitive human interaction with robotic systems is a key point in developing robots that assist people in an easy and effective way. In this paper, a Human Robot Interaction (HRI) system able to recognize gestures usually employed in human non-verbal communication is introduced, and an in-depth study of its usability is performed. The system deals with dynamic gestures, such as waving or nodding, which are recognized using a Dynamic Time Warping approach based on gesture-specific features computed from depth maps. A static gesture, pointing at an object, is also recognized. The pointed location is then estimated in order to detect candidate objects the user may be referring to. When the pointed object is unclear to the robot, a disambiguation procedure by means of either a verbal or gestural dialogue is performed. This skill allows the robot to pick up an object on behalf of a user who may have difficulty doing so themselves. The overall system, which is composed of a NAO robot, a Wifibot robot, a Kinect v2 sensor and two laptops, is first evaluated in a structured lab setup. Then, a broad set of user tests was completed, which allows correct performance to be assessed in terms of recognition rates, ease of use and response times.
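
    Dynamic Time Warping itself is compact enough to show directly. The sketch below implements the standard DTW recurrence over 1-D sequences; the paper's gesture-specific depth-map features are of course richer than the toy wave template used here.

```python
# Minimal Dynamic Time Warping sketch over 1-D feature sequences; a gesture
# is recognised by comparing its DTW cost against per-template thresholds.

def dtw(a, b):
    """Return the DTW alignment cost between two sequences."""
    inf = float("inf")
    cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[len(a)][len(b)]

template_wave = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0]
observed      = [0.0, 0.9, 0.1, -1.1, 0.0, 1.0, 0.1, 0.0]
print("wave distance:", round(dtw(template_wave, observed), 2))
```

    Because DTW tolerates differences in speed between the observed gesture and the template, the same wave template matches both quick and slow waves, which is exactly why it suits dynamic gestures like those described above.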