213 research outputs found

    Robotic wheelchair controlled through a vision-based interface

    Get PDF
    In this work, a vision-based control interface for commanding a robotic wheelchair is presented. The interface estimates the orientation angles of the user's head and translates these angles into maneuver commands for different devices. The performance of the proposed interface is evaluated both in static experiments and in commanding the robotic wheelchair, where the estimated orientation angles serve as the reference inputs to the wheelchair controller. A control architecture based on the dynamic model of the wheelchair is implemented in order to achieve safe navigation. Experimental results of the interface performance and the wheelchair navigation are presented.
    Authors: Perez, Elisa (Universidad Nacional de San Juan, Facultad de Ingeniería, Departamento de Electrónica y Automática, Gabinete de Tecnología Médica; Argentina); Soria, Carlos Miguel (Consejo Nacional de Investigaciones Científicas y Técnicas, Argentina; Universidad Nacional de San Juan, Facultad de Ingeniería, Instituto de Automática; Argentina); Nasisi, Oscar Herminio (Universidad Nacional de San Juan, Facultad de Ingeniería, Instituto de Automática; Argentina); Bastos, Teodiano Freire (Universidade Federal do Espírito Santo; Brazil); Mut, Vicente Antonio (Consejo Nacional de Investigaciones Científicas y Técnicas, Argentina; Universidad Nacional de San Juan, Facultad de Ingeniería, Instituto de Automática; Argentina)
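The head-angle-to-command mapping described above can be sketched as follows. This is an illustrative assumption, not the authors' implementation: the dead zone, gain values, and function names are invented for the example, and the idea is simply that pitch drives forward speed while yaw drives turning.

```python
# Hypothetical sketch: map estimated head orientation angles (degrees) to
# wheelchair velocity references. A dead zone suppresses small involuntary
# head motions; angles beyond full_scale saturate at the velocity limits.

def head_pose_to_command(pitch_deg, yaw_deg,
                         dead_zone=5.0, v_max=0.5, w_max=0.8):
    """Return (linear m/s, angular rad/s) references for the wheelchair."""
    def scaled(angle, limit, full_scale=30.0):
        if abs(angle) < dead_zone:          # ignore small head movements
            return 0.0
        sign = 1.0 if angle > 0 else -1.0
        magnitude = min((abs(angle) - dead_zone) / (full_scale - dead_zone), 1.0)
        return sign * limit * magnitude

    return scaled(pitch_deg, v_max), scaled(yaw_deg, w_max)
```

The controller described in the abstract would then track these references using the wheelchair's dynamic model.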

    Assistive Robots for Patients With Amyotrophic Lateral Sclerosis: Exploratory Task-Based Evaluation Study With an Early-Stage Demonstrator

    Get PDF
    Background: Although robotic manipulators have great potential in promoting motor independence of people with motor impairments, only a few systems are currently commercially available. In addition to technical, economic, and normative barriers, a key challenge for their distribution is the current lack of evidence regarding their usefulness, acceptance, and user-specific requirements. Objective: Against this background, a semiautonomous robot system was developed in the research and development project "Robot-assisted services for individual and resource-oriented intensive and palliative care of people with amyotrophic lateral sclerosis" (ROBINA) to support people with amyotrophic lateral sclerosis (ALS) in various everyday activities. Methods: The developed early-stage demonstrator was evaluated in a task-based laboratory study with 11 patients with ALS. On the basis of a multimethod design consisting of standardized questionnaires, open-ended questions, and observation protocols, participants were asked about its relevance to everyday life, usability, and design requirements. Results: Most participants considered the system to provide relevant support within the test scenarios and for their everyday life. On the basis of the System Usability Scale, the overall usability of the ROBINA system was rated as excellent, with a median of 90 (IQR 75-95) points. Moreover, 3 central areas of requirements for the development of semiautonomous robotic manipulators were identified and discussed: requirements for semiautonomous human-robot collaboration, requirements for user interfaces, and requirements for the adaptation of robotic capabilities to everyday life. Conclusions: Robotic manipulators can contribute to increasing the autonomy of people with ALS. A key issue for future studies is how the existing ability level and the required robotic capabilities can be balanced to ensure both high user satisfaction and effective and efficient task performance.
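The System Usability Scale rating reported above follows the standard SUS scoring rule: odd-numbered items are positively worded and score (response - 1), even-numbered items are negatively worded and score (5 - response), and the sum over the ten items is scaled by 2.5 to a 0-100 range. A minimal implementation:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten Likert
    responses (each 1-5). Odd items are positively worded, even negative."""
    assert len(responses) == 10, "SUS has exactly ten items"
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

A uniformly neutral response sheet (all 3s) scores 50, which is why the study's median of 90 lands in the "excellent" band of common SUS interpretations.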

    Hybrid brain/neural interface and autonomous vision-guided whole-arm exoskeleton control to perform activities of daily living (ADLs)

    Full text link
    [EN] Background: The aging of the population and the progressive increase of life expectancy in developed countries are leading to a high incidence of age-related cerebrovascular diseases, which affect people's motor and cognitive capabilities and might result in the loss of arm and hand functions. Such conditions have a detrimental impact on people's quality of life. Assistive robots have been developed to help people with motor or cognitive disabilities perform activities of daily living (ADLs) independently. Most of the robotic systems for assisting with ADLs proposed in the state of the art are external manipulators and exoskeletal devices. The main objective of this study is to compare the performance of a hybrid EEG/EOG interface to perform ADLs when the user is controlling an exoskeleton rather than an external manipulator. Methods: Ten impaired participants (5 males and 5 females, mean age 52 +/- 16 years) were instructed to use both systems to perform a drinking task and a pouring task comprising multiple subtasks. For each device, two modes of operation were studied: synchronous mode (the user received a visual cue indicating the subtask to be performed at each time) and asynchronous mode (the user started and finished each of the subtasks independently). Fluent control was assumed when the time for successful initializations remained below 3 s, and reliable control when it remained below 5 s. The NASA-TLX questionnaire was used to evaluate the task workload. For the trials involving the use of the exoskeleton, a custom Likert-scale questionnaire was used to evaluate the user's experience in terms of perceived comfort, safety, and reliability. Results: All participants were able to control both systems fluently and reliably. However, the results suggest better performance of the exoskeleton over the external manipulator (75% of successful initializations remained below 3 s with the exoskeleton and below 5 s with the external manipulator). Conclusions: Although the results of our study in terms of fluency and reliability of EEG control suggest better performance of the exoskeleton over the external manipulator, such results cannot be considered conclusive, due to the heterogeneity of the population under test and the relatively limited number of participants. This study was funded by the European Commission under the project AIDE (G.A. no: 645322), the Spanish Ministry of Science and Innovation through the projects PID2019-108310RB-I00 and PLEC2022-009424, and by the Ministry of Universities and European Union, financed by European Union-Next Generation EU, through a Margarita Salas grant for the training of young doctors. Catalán, JM.; Trigili, E.; Nann, M.; Blanco-Ivorra, A.; Lauretti, C.; Cordella, F.; Ivorra, E.... (2023). Hybrid brain/neural interface and autonomous vision-guided whole-arm exoskeleton control to perform activities of daily living (ADLs). Journal of NeuroEngineering and Rehabilitation. 20(1):1-16. https://doi.org/10.1186/s12984-023-01185-w
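The study's fluency and reliability criteria (successful initializations below 3 s and 5 s, respectively) reduce to simple threshold fractions over the recorded initialization times. A minimal sketch, with function and parameter names as assumptions:

```python
def classify_control(init_times, fluent_s=3.0, reliable_s=5.0):
    """Return the fraction of successful initializations classified as
    fluent (< fluent_s seconds) and reliable (< reliable_s seconds)."""
    n = len(init_times)
    fluent = sum(t < fluent_s for t in init_times) / n
    reliable = sum(t < reliable_s for t in init_times) / n
    return fluent, reliable
```

Under this criterion, the abstract's 75% figure would correspond to three quarters of a participant's initialization times falling under the relevant threshold.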

    AdaptiX -- A Transitional XR Framework for Development and Evaluation of Shared Control Applications in Assistive Robotics

    Full text link
    With the ongoing efforts to empower people with mobility impairments and the increase in technological acceptance by the general public, assistive technologies, such as collaborative robotic arms, are gaining popularity. Yet, their widespread success is limited by usability issues, specifically the disparity between user input and software control along the autonomy continuum. To address this, shared control concepts provide opportunities to combine the targeted increase of user autonomy with a certain level of computer assistance. This paper presents the free and open-source AdaptiX XR framework for developing and evaluating shared control applications in a high-resolution simulation environment. The initial framework consists of a simulated robotic arm with an example scenario in Virtual Reality (VR), multiple standard control interfaces, and a specialized recording/replay system. AdaptiX can easily be extended for specific research needs, allowing Human-Robot Interaction (HRI) researchers to rapidly design and test novel interaction methods, intervention strategies, and multi-modal feedback techniques, without requiring an actual physical robotic arm during the early phases of ideation, prototyping, and evaluation. Also, a Robot Operating System (ROS) integration enables controlling a real robotic arm in a PhysicalTwin approach without any simulation-reality gap. Here, we review the capabilities and limitations of AdaptiX in detail and present three bodies of research based on the framework. AdaptiX can be accessed at https://adaptix.robot-research.de.Comment: Accepted submission at The 16th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS'24)
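Shared control along the autonomy continuum is commonly realized as a weighted arbitration between the user's input and an autonomous policy. The following is a generic sketch of that idea only; it is not AdaptiX's actual API, and the function name and tuple layout are assumptions:

```python
def blend_command(user_cmd, auto_cmd, alpha):
    """Linearly arbitrate between a user command and an autonomous command.

    user_cmd, auto_cmd: tuples of velocity components (e.g. (vx, vy)).
    alpha: position on the autonomy continuum; 0.0 is full user control,
    1.0 is full autonomy.
    """
    return tuple((1 - alpha) * u + alpha * a
                 for u, a in zip(user_cmd, auto_cmd))
```

Frameworks like the one described let researchers vary such an alpha (or replace the arbitration entirely) and measure the effect on usability.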

    Autonomous robot systems and competitions: proceedings of the 12th International Conference

    Get PDF
    These are the proceedings of the 2012 edition of the scientific meeting of the Portuguese Robotics Open (ROBOTICA'2012). It aims to disseminate scientific contributions and to promote discussion of theories, methods and experiences in areas of relevance to Autonomous Robotics and Robotic Competitions. All accepted contributions are included in this proceedings book. The conference program also included an invited talk by Dr.ir. Raymond H. Cuijpers, from the Department of Human Technology Interaction of Eindhoven University of Technology, Netherlands. The conference is kindly sponsored by the IEEE Portugal Section / IEEE RAS Chapter and SPR-Sociedade Portuguesa de Robótica.

    Intelligent Multimodal Framework for Human Assistive Robotics Based on Computer Vision Algorithms

    Full text link
    [EN] Assistive technologies help persons with disabilities improve their accessibility in all aspects of life. The AIDE European project contributes to the improvement of current assistive technologies by developing and testing a modular and adaptive multimodal interface customizable to the individual needs of people with disabilities. This paper describes the computer vision algorithms that are part of the multimodal interface developed within the AIDE European project. The main contribution of this computer vision part is its integration with the robotic system and with the other sensory systems (electrooculography (EOG) and electroencephalography (EEG)). The technical achievements presented herein are an algorithm for the selection of objects using gaze and, especially, a state-of-the-art algorithm for the efficient detection and pose estimation of textureless objects. These algorithms were tested in real conditions and were thoroughly evaluated both qualitatively and quantitatively. The experimental results of the object selection algorithm were excellent (object selection success over 90% in less than 12 s). The detection and pose estimation algorithm, evaluated on the LINEMOD database, performed similarly to state-of-the-art methods while being the most computationally efficient. The research leading to these results received funding from the European Community's Horizon 2020 programme, AIDE project: "Adaptive Multimodal Interfaces to Assist Disabled People in Daily Activities" (grant agreement No: 645322). Ivorra Martínez, E.; Ortega Pérez, M.; Catalán, JM.; Ezquerro, S.; Lledó, LD.; Garcia-Aracil, N.; Alcañiz Raya, ML. (2018). Intelligent Multimodal Framework for Human Assistive Robotics Based on Computer Vision Algorithms. Sensors. 18(8). https://doi.org/10.3390/s18082408
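A gaze-based object selection step like the one described can be sketched as a nearest-centroid search around the gaze point, accepting a candidate only within a tolerance radius. This is an illustrative sketch, not the AIDE algorithm: the radius, data layout, and function name are assumptions, and the real system additionally handles dwell time and 3D pose.

```python
import math

def select_object(gaze_xy, detections, radius=40.0):
    """Return the id of the detected object whose image centroid lies
    closest to the gaze point, or None if none is within `radius` pixels.

    detections: {object_id: (centroid_x, centroid_y)} in image coordinates.
    """
    best_id, best_d = None, radius
    for obj_id, (cx, cy) in detections.items():
        d = math.hypot(gaze_xy[0] - cx, gaze_xy[1] - cy)
        if d < best_d:
            best_id, best_d = obj_id, d
    return best_id
```

The selected object id would then be handed to the pose estimation stage to compute a grasp target for the robotic system.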

    DEVELOPMENT AND ASSESSMENT OF ADVANCED ASSISTIVE ROBOTIC MANIPULATORS USER INTERFACES

    Get PDF
    Assistive Robotic Manipulators (ARMs) have shown improvement in self-care and increased independence among people with severe upper extremity disabilities. Mounted on the side of an electric powered wheelchair, an ARM may provide manipulation assistance such as picking up objects, eating, drinking, dressing, reaching out, or opening doors. However, existing assessment tools are inconsistent between studies, time consuming, and unclear in clinical effectiveness. Therefore, in this research, we have developed an ADL task board evaluation tool that provides standardized, efficient, and reliable assessment of ARM performance. Among powered wheelchair users and able-bodied controls using two commercial ARM user interfaces, joystick and keypad, we found statistical differences between the two interfaces' performance, but no statistical difference in cognitive loading. The ADL task board demonstrated highly correlated performance with an existing functional assessment tool, the Wolf Motor Function Test. Through this study, we also identified barriers and limits in current commercial user interfaces and developed smartphone and assistive sliding-autonomy user interfaces that yield improved performance. Testing results from our smartphone manual interface revealed statistically faster performance, and the assistive sliding-autonomy interface helped seamlessly correct the errors seen with autonomous functions. The ADL task performance evaluation tool may help clinicians and researchers better assess ARM user interfaces and evaluate the efficacy of customized user interfaces. The smartphone manual interface demonstrated improved performance, and the sliding-autonomy framework showed enhanced task success without recalculating path planning and recognition.

    Optimal Wheelchair Multi-LiDAR Placement for Indoor SLAM

    Get PDF
    One of the most prevalent technologies used in modern robotics is Simultaneous Localization and Mapping (SLAM). Modern SLAM systems employ a variety of probabilistic methods that enable a robot not only to map an environment but also to concurrently localize itself within that environment. Existing open-source SLAM implementations differ not only in the probabilistic methods they employ but also in how well they perform and in their computational requirements. Additionally, the positioning of the sensors on the robot has a substantial effect on how well these methods work. This dissertation is therefore dedicated to comparing existing open-source ROS-implemented 2D SLAM packages and to maximizing their performance by researching optimal sensor placement in an Intelligent Wheelchair context, using SLAM performance as a benchmark.

    Autonomous Navigation of Mobile Robots: Marker-based Localization System and On-line Path

    Get PDF
    Traditional wheelchairs are controlled mainly by joystick, which is not a suitable solution for people with major disabilities. This thesis aims to create a human-machine interface and software that performs indoor autonomous navigation of the commercial wheelchair RoboEye, developed at the Measurements Instrumentations Robotic Laboratory at the University of Trento in collaboration with Robosense and Xtrensa. RoboEye is an intelligent wheelchair that aims to provide independence and autonomy of movement to people affected by serious mobility problems caused by impairing pathologies (for example, ALS, amyotrophic lateral sclerosis). The thesis is divided into two main parts: creating the human-machine interface and integrating existing services into the developed solution, and presenting a possible solution for navigating the wheelchair using eye-tracking technologies, TOF cameras, odometric localization and ArUco markers. The developed interface supports manual, semi-autonomous and autonomous navigation, while following user-experience practices specific to eye-tracking devices and people with major disabilities. The application was developed in Unity 3D using C# scripts, following a state-machine approach with multiple scenes and components. The suggested solution satisfies the user's need to navigate hands-free with as little fatigue as possible. Moreover, the user can choose a destination from predefined points of interest and reach it with no further input needed. The user interface is intuitive and clear for both experienced and inexperienced users; the user can choose the UI's icon images, scale and font size. The software runs as a state machine, which was tested among users using test cases. The path planning routine is solved using Dijkstra's algorithm and proved to be efficient.
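Path planning over a set of predefined points of interest, as described above, is a natural fit for Dijkstra's algorithm on a weighted waypoint graph. The following is a generic sketch of that approach (the graph format and function names are illustrative assumptions, not the thesis code):

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path over a weighted waypoint graph given as
    {node: [(neighbor, cost), ...]}. Returns (cost, path),
    or (inf, []) if the goal is unreachable."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]          # min-heap ordered by accumulated cost
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:         # reconstruct path back to start
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(pq, (nd, nxt))
    return float("inf"), []
```

In a setup like RoboEye's, the nodes would be the predefined points of interest and the edge costs distances along traversable corridors; the returned path is the waypoint sequence the wheelchair follows.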