815 research outputs found

    Reactive adapted assistance for wheelchair navigation based on a standard skill profile

    Mobility assistance for wheelchair navigation is typically based on the shared control paradigm. Traditionally, control swaps from user to machine depending either on a trigger mechanism or on an explicit user request. Alternatively, in collaborative control approaches both user and robot contribute to control at the same time. However, in this case it is necessary to decide how much impact the user has on the emergent command. User weight has been estimated based on his/her command efficiency or on the environment complexity. However, the user's command efficiency may change abruptly, whereas the environment complexity depends on the user's skills. In this work we propose a collaborative control approach where this weight is determined by the user's ability to cope with the situation at hand with respect to an average person. This estimation relies on a standard navigation skill profile extracted from a large number of traces from real users. This approach has two major advantages: i) the user receives more assistance only when needed according to his/her own skills; and ii) we avoid an excess of assistance to prevent loss of residual skills. The proposed system has been tested with a group of people with disabilities. Tests prove that the resulting efficiencies are similar to those of other collaborative control approaches although the amount of assistance is reduced.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
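The weighting scheme this abstract describes can be pictured as a simple blend of user and robot velocity commands, where the user's weight grows with their skill relative to the standard profile. The function names, the linear blend and the skill ratio below are illustrative assumptions, not the thesis' actual formulation:

```python
def blend_commands(v_user, v_robot, user_skill, standard_skill):
    """Blend user and robot velocity commands (hypothetical sketch).

    w close to 1.0 -> user keeps control; w close to 0.0 -> robot assists.
    """
    # Ratio of the user's skill to the average person's, clamped to [0, 1],
    # so above-average users receive no unnecessary assistance.
    w = max(0.0, min(1.0, user_skill / standard_skill))
    return tuple(w * u + (1.0 - w) * r for u, r in zip(v_user, v_robot))

# A below-average user (skill 0.5 vs. standard 1.0) gets half the
# emergent command from the robot's suggestion:
cmd = blend_commands(v_user=(1.0, 0.2), v_robot=(0.6, 0.0),
                     user_skill=0.5, standard_skill=1.0)
```

Clamping the weight keeps skilled users in full control, which matches the abstract's aim of avoiding an excess of assistance.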

    Navigation system using passive collaborative control adapted to user profile for a rollator device

    Rollators provide autonomy to persons with mobility impairments. These platforms can be used while people perform their Activities of Daily Living in order to provide support and/or balance. They can also be used during the rehabilitation process to strengthen the lower limbs or to provide balance before users can progress to canes or crutches. Rollators have a limited set of personalization options, but these are usually related to the users' body size. Hence, people who need extra support typically have to choose a wheelchair instead. This transition to a wheelchair limits users' movements and increases their disuse syndrome because they do not exercise their lower limbs. It is therefore a priority to extend the use of rollator platforms as much as possible by adapting help to people who cannot use a conventional rollator on their own. Technological enhancements can be added to rollators to expand their use to a larger population. For example, force sensors on the handlebars provide information about users' weight bearing. This information can be used during rehabilitation to control partial weight-bearing. Encoders on the wheels may also provide useful information about walking speed, which is a well-known estimator of fall risk. In addition to monitoring, motors can be attached to the wheels for assistance, e.g. to reduce effort while ascending slopes. This thesis focuses on creating a navigation system for a robotized rollator which includes weight-bearing sensors, encoders and wheel motors. The navigation system relies on passive collaborative control to continuously combine user and system commands in a seamless way. The main contribution of this work is adaptation to the user's needs through continuous, transparent monitoring and profile estimation.
    In order to achieve this goal, research in different areas has been necessary. First, a methodology to provide human-like platform motion in reactive navigation algorithms has been proposed to improve user acceptance of help. Then, work has focused on gait analysis and estimation of the user's condition using only onboard sensors. In addition, a new methodology to evaluate fall risk while users walk, using only onboard sensors, has been proposed to balance the contributions of user and robot to control. All proposed subsystems have been validated with a set of volunteers at two rehabilitation hospitals: Fondazione Santa Lucia (Rome) and Hospital Regional Universitario (Malaga). Volunteers presented a wide variety of physical and cognitive disabilities. Tests with healthy volunteers were discarded from the beginning to avoid sampling bias. The obtained results show that the proposed system can be used for: i) reactively generating human-like trajectories that outperform all other tested algorithms in terms of likeness to human paths and success rate; ii) monitoring gait and the user's condition while users walk using only onboard sensors; and iii) evaluating fall risk without wearable or ambient sensors. This thesis opens a number of research lines: i) user condition estimation can be extended to other medical scales; ii) the method to reactively generate human-like trajectories can be extended with deliberative human-adapted path planning; and iii) the fall risk estimator can be extended to a fall risk predictor.
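The wheel-encoder monitoring mentioned above can be sketched as deriving walking speed from encoder ticks and flagging slow gait as elevated fall risk. The wheel radius, tick resolution and the 0.6 m/s cut-off are assumed values for illustration, not the thesis' actual estimator:

```python
import math

WHEEL_RADIUS_M = 0.1   # assumed rollator wheel radius
TICKS_PER_REV = 1024   # assumed encoder resolution
SLOW_GAIT_M_S = 0.6    # illustrative threshold; slow gait correlates with fall risk

def walking_speed(ticks, dt_s):
    """Speed in m/s from encoder ticks accumulated over dt_s seconds."""
    revolutions = ticks / TICKS_PER_REV
    return revolutions * 2.0 * math.pi * WHEEL_RADIUS_M / dt_s

def elevated_fall_risk(ticks, dt_s):
    """Flag users walking below the slow-gait threshold."""
    return walking_speed(ticks, dt_s) < SLOW_GAIT_M_S
```

In a real system such a flag would feed the collaborative controller, shifting more of the command authority to the platform when risk is high.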

    Adaptive autonomy through individualized prediction in CARMEN

    The goal of this thesis is to present a hybrid control architecture for assistive wheelchairs that is completely user-centred. The main novelty of this work is that it adapts the entire navigation process to the user through learning. The target population includes people with very diverse physical and cognitive disabilities. Providing just the right amount of assistance allows us to avoid frustration and the loss of residual skills and ADLs (Activities of Daily Living).

    Shared Control for Wheelchair Interfaces

    Independent mobility is fundamental to the quality of life of people with impairments. Most people with severe mobility impairments, whether congenital, e.g., from cerebral palsy, or acquired, e.g., from spinal cord injury, are prescribed a wheelchair. A small yet significant number of people are unable to use a typical powered wheelchair controlled with a joystick. Instead, some of these people require alternative interfaces such as a head-array or Sip/Puff switch to drive their powered wheelchairs. However, these alternative interfaces do not work for everyone and often cause frustration, fatigue and collisions. This thesis develops a novel technique to help improve the usability of some of these alternative interfaces, in particular, the head-array and Sip/Puff switch. Control is shared between a powered wheelchair user, using an alternative interface, and a powered wheelchair fitted with sensors. This shared control then produces a resulting motion that is close to what the user desires to do but that is also safe. A path planning algorithm on the wheelchair is implemented using techniques in mobile robotics. Afterwards, the output of the path planning algorithm and the user's command are both modelled as random variables. These random variables are then blended in a joint probability distribution, where the final velocity sent to the wheelchair is the one that maximises the joint probability distribution. The performance of the probabilistic approach to blending the user's inputs with the output of a path planner is benchmarked against the most common form of shared control, called linear blending. The benchmarking consists of several experiments with end users both in a simulated world and in the real world. The thesis concludes that probabilistic shared control provides safer motion compared with traditional shared control for difficult tasks and hard-to-use interfaces.
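The blending step described above has a well-known closed form when, as a simplifying assumption, both the user's command and the planner's output are modelled as one-dimensional Gaussians over a velocity component: the product of the two densities peaks at the precision-weighted mean. The variances below are illustrative, and the thesis' full model may differ:

```python
def probabilistic_blend(mu_user, var_user, mu_planner, var_planner):
    """Velocity maximising N(v; mu_user, var_user) * N(v; mu_planner, var_planner)."""
    w_u = 1.0 / var_user      # precision (confidence) of the user's command
    w_p = 1.0 / var_planner   # precision of the planner's output
    return (w_u * mu_user + w_p * mu_planner) / (w_u + w_p)

def linear_blend(mu_user, mu_planner, alpha=0.5):
    """Traditional fixed-ratio blending, shown for comparison."""
    return alpha * mu_user + (1.0 - alpha) * mu_planner

# With a noisy interface (large user variance), the probabilistic blend
# leans toward the planner, while the fixed linear blend does not adapt:
v_prob = probabilistic_blend(mu_user=1.0, var_user=0.4,
                             mu_planner=0.2, var_planner=0.1)
v_lin = linear_blend(1.0, 0.2)
```

This adaptivity to input uncertainty is what lets the probabilistic approach stay safe with hard-to-use interfaces where a fixed blending ratio cannot.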

    Robot Games for Elderly: A Case-Based Approach


    Adaptive Shared Autonomy between Human and Robot to Assist Mobile Robot Teleoperation

    Teleoperation of mobile robots is widely used when it is impractical or infeasible for a human to be present, but human decision-making is still required. On the one hand, controlling the robot without assistance is stressful and error-prone for the human because of time delays and the lack of situational awareness; on the other hand, despite recent achievements, a fully autonomous robot cannot yet carry out tasks independently based on current models of perception and control. Therefore, both the human and the robot must remain in the control loop to contribute intelligence to task execution simultaneously. This means that the human should share autonomy with the robot during operation. The challenge, however, is to coordinate these two sources of intelligence, human and robot, in the best way to ensure safe and efficient task execution in teleoperation. This thesis therefore proposes a novel strategy. It models the user's intent as a contextual task to complete an action primitive, and provides the operator with appropriate motion assistance upon recognition of the task. In this way, the robot deals intelligently with the ongoing tasks on the basis of contextual information, reduces the operator's workload and improves task performance. To implement this strategy and to account for the uncertainties in acquiring and processing environmental information and user input (i.e. the contextual information), a probabilistic shared-autonomy framework is introduced that recognizes, with uncertainty measures, the contextual task the operator is performing with the robot and offers the operator appropriate task-execution assistance according to these measures.
    Since the way an operator performs a task is implicit, it is not trivial to model the motion pattern of task execution manually, so a set of data-driven approaches is used to derive the patterns of different task executions from human demonstrations and to adapt to the operator's needs in an intuitive way over the long term. The practicality and scalability of the proposed approaches are demonstrated through extensive experiments both in simulation and on the real robot. With the proposed approaches, the operator can be actively and appropriately supported by increasing the robot's cognitive capability and autonomy flexibility.
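The recognition-with-uncertainty step described above can be sketched as a Bayesian update over a small set of contextual task primitives, with assistance offered only when the posterior is confident enough. The primitive names, likelihood values and the 0.7 threshold are assumptions for illustration, not the thesis' actual model:

```python
PRIMITIVES = ("pass_door", "follow_corridor", "dock_at_table")

def update_posterior(prior, likelihoods):
    """One Bayesian update: posterior is proportional to likelihood * prior."""
    unnorm = {t: likelihoods[t] * prior[t] for t in PRIMITIVES}
    z = sum(unnorm.values())
    return {t: p / z for t, p in unnorm.items()}

def assistance_level(posterior, threshold=0.7):
    """Assist with the most probable primitive only if uncertainty is low."""
    best = max(posterior, key=posterior.get)
    return best if posterior[best] >= threshold else None

# Uniform prior, then an observation strongly consistent with passing a door:
prior = {t: 1.0 / len(PRIMITIVES) for t in PRIMITIVES}
post = update_posterior(prior, {"pass_door": 0.8, "follow_corridor": 0.15,
                                "dock_at_table": 0.05})
```

Gating assistance on posterior confidence mirrors the abstract's idea of offering support "according to these measures" rather than unconditionally.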

    Explainable shared control in assistive robotics

    Shared control plays a pivotal role in designing assistive robots to complement human capabilities during everyday tasks. However, traditional shared control relies on users forming an accurate mental model of expected robot behaviour. Without this accurate mental image, users may encounter confusion or frustration whenever their actions do not elicit the intended system response, creating a misalignment between the respective internal models of the robot and the human. The Explainable Shared Control paradigm introduced in this thesis attempts to resolve such model misalignment by jointly considering assistance and transparency. There are two perspectives on transparency in Explainable Shared Control: the human's and the robot's. Augmented reality is presented as an integral component that addresses the human viewpoint by visually unveiling the robot's internal mechanisms. The robot perspective, in turn, requires an awareness of human "intent", so a clustering framework built around a deep generative model is developed for human intention inference. Both transparency constructs are implemented atop a real assistive robotic wheelchair and tested with human users. An augmented reality headset is incorporated into the robotic wheelchair, and different interface options are evaluated across two user studies to explore their influence on mental model accuracy. Experimental results indicate that this setup facilitates transparent assistance by improving recovery times from adverse events associated with model misalignment. As for human intention inference, the clustering framework is applied to a dataset collected from users operating the robotic wheelchair. Findings from this experiment demonstrate that the learnt clusters are interpretable and meaningful representations of human intent. This thesis serves as a first step in the interdisciplinary area of Explainable Shared Control.
    The contributions to shared control, augmented reality and representation learning contained within this thesis are likely to help future research advance the proposed paradigm, and thus bolster the prevalence of assistive robots.

    Integration of Assistive Technologies into 3D Simulations: Exploratory Studies

    Virtual worlds and environments serve many purposes, ranging from games to scientific research. However, universal accessibility features in such virtual environments are limited. As the prevalence of impairments increases yearly, so does research interest in the field of assistive technologies. This work introduces research in assistive technologies and presents three software developments that explore the integration of assistive technologies within virtual environments, with a strong focus on Brain-Computer Interfaces. An accessible gaming system, a hands-free navigation software system, and a Brain-Computer Interaction plugin have been developed to study the capabilities of accessibility features within virtual 3D environments. Details of the specification, design, and implementation of these software applications are presented in the thesis. Observations and preliminary results, as well as directions for future work, are also included.

    A Systematic Review of Adaptivity in Human-Robot Interaction

    As the field of social robotics grows, a consensus has emerged on designing and implementing robotic systems that are capable of adapting to user actions. These actions may be informed by the users' emotions, personality or memory of past interactions. We therefore believe it is valuable to review past research on adaptive robots that have been deployed in various social environments. In this paper, we present a systematic review of the reported adaptive interactions across a number of domain areas in Human-Robot Interaction, and we give future directions that can guide the design of future adaptive social robots. We conjecture that this will help towards achieving long-term applicability of robots in various social domains.