3,156 research outputs found

    GUARDIANS final report

    Get PDF
    Emergencies in industrial warehouses are a major concern for fire fighters. The large dimensions of such buildings, together with the dense smoke that drastically reduces visibility, represent major challenges. The Guardians robot swarm is designed to assist fire fighters in searching a large warehouse. In this report we discuss the technology developed for a swarm of robots searching alongside and assisting fire fighters. We explain the swarming algorithms that enable the robots to react to and follow humans without requiring communication. Next we discuss the wireless communication system, a so-called mobile ad-hoc network. The communication network also provides one of the means to locate the robots and humans; thus the robot swarm is able to locate itself and provide guidance information to the humans. Together with the fire fighters we explored how the robot swarm should feed information back to the human fire fighter, and we have designed and experimented with interfaces for presenting swarm-based information to human beings.
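The react-and-follow behavior without communication can be illustrated with a minimal potential-field sketch: each robot is attracted toward the sensed human and repelled from neighbors it senses nearby. The gains, distances, and sensing model below are illustrative assumptions, not the Guardians algorithms.

```python
import math

def follow_step(robot, human, neighbors, attract=0.2, repel=0.5, min_dist=1.0):
    """One update step for a robot at position `robot` (x, y): move toward
    the sensed human position, push away from any neighbor closer than
    min_dist. No robot-to-robot communication is needed, only local sensing.
    All gains are illustrative."""
    dx, dy = human[0] - robot[0], human[1] - robot[1]
    vx, vy = attract * dx, attract * dy          # attraction to the human
    for nx, ny in neighbors:
        ddx, ddy = robot[0] - nx, robot[1] - ny
        d = math.hypot(ddx, ddy)
        if 0 < d < min_dist:                     # separation from neighbors
            vx += repel * ddx / d
            vy += repel * ddy / d
    return robot[0] + vx, robot[1] + vy
```

Iterating this step for every robot yields a swarm that trails the human while keeping loose spacing, purely from local sensing.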

    Nonlinear Modeling and Control of Driving Interfaces and Continuum Robots for System Performance Gains

    Get PDF
    With the rise of (semi)autonomous vehicles and continuum robotics technology and applications, there has been increasing interest in controller and haptic interface designs. The presence of nonlinearities in the vehicle dynamics is the main challenge in the selection of control algorithms for real-time regulation and tracking of (semi)autonomous vehicles. Moreover, control of continuum structures with infinite dimensions proves to be difficult due to their complex dynamics and the soft, flexible nature of the manipulator body. The trajectory tracking and control of automobile and robotic systems require control algorithms that can effectively deal with the nonlinearities of the system, modeling uncertainties, and input disturbances without the need for approximation. Control strategies based on a linearized model are often inadequate in meeting precise performance requirements, so one must consider nonlinear techniques. Nonlinear control systems provide tools and methodologies for enabling the design and realization of (semi)autonomous vehicles and continuum robots with extended specifications based on their operational mission profiles. This dissertation provides insight into various nonlinear controllers developed for (semi)autonomous vehicles and continuum robots as a guideline for future applications in the automobile and soft robotics fields. A comprehensive assessment of the approaches and control strategies, as well as insight into future areas of research in this field, is presented. First, two vehicle haptic interfaces, including a robotic grip and a joystick, both accompanied by nonlinear sliding mode control, were developed and studied on a steer-by-wire platform integrated with a virtual reality driving environment.
An operator-in-the-loop evaluation that included 30 human test subjects was used to investigate these haptic steering interfaces over a prescribed series of driving maneuvers through real-time data logging and post-test questionnaires. A conventional steering wheel with a robust sliding mode controller was used in all driving events for comparison. Test subjects operated these interfaces on a given track comprising a double lane-change maneuver and a country-road driving event. Subjective and objective results demonstrate that the driver's experience can be enhanced by up to 75.3% with a robotic steering input when compared to the traditional steering wheel during extreme maneuvers such as high-speed driving and sharp turns (e.g., hairpin turns). Second, a cellphone-inspired portable human-machine interface (HMI) that incorporates the directional control of the vehicle as well as the brake and throttle functionality in a single holistic device is presented. A nonlinear adaptive control technique and an optimal control approach based on driver intent were also proposed to accompany the mechatronic system for combined longitudinal and lateral vehicle guidance. Designed to assist disabled drivers by ergonomically eliminating extensive arm and leg movements, the device was tested on a driving simulator platform. Human test subjects evaluated the mechatronic system with various control configurations through obstacle avoidance and city-road driving tests, and a conventional set of steering wheel and pedals was also utilized for comparison. Subjective and objective results from the tests demonstrate that the mobile driving interface with the proposed control scheme can enhance the driver's performance by up to 55.8% when compared to the traditional driving system during aggressive maneuvers.
The system's superior performance during certain vehicle maneuvers and the approval received from the participants demonstrated its potential as an alternative driving adaptation for disabled drivers. Third, a novel strategy is designed for trajectory control of a multi-section continuum robot in three-dimensional space to achieve accurate orientation, curvature, and section length tracking. The formulation connects the continuum manipulator's dynamic behavior to a virtual discrete-jointed robot whose degrees of freedom are directly mapped to those of a continuum robot section under the constant-curvature hypothesis. Based on this connection, a computed torque control architecture is developed for the virtual robot, for which inverse kinematics and dynamic equations are constructed and exploited, with appropriate transformations developed for implementation on the continuum robot. The control algorithm is validated in a realistic simulation and implemented on a six degree-of-freedom two-section OctArm continuum manipulator. Both simulation and experimental results show that the proposed method can manage simultaneous extension/contraction, bending, and torsion actions on multi-section continuum robots with good tracking performance (e.g., steady-state arc length and curvature tracking errors of 3.3 mm and 130 mm^-1, respectively). Last, semi-autonomous vehicles equipped with assistive control systems may experience degraded lateral behaviors when aggressive driver steering commands compete with high levels of autonomy. This challenge can be mitigated with effective operator intent recognition, which can configure automated systems in context-specific situations where the driver intends to perform a steering maneuver. In this article, an ensemble learning-based driver intent recognition strategy has been developed.
A nonlinear model predictive control algorithm was designed and implemented to generate haptic feedback for lateral vehicle guidance, assisting drivers in accomplishing their intended action. To validate the framework, operator-in-the-loop testing with 30 human subjects was conducted on a steer-by-wire platform with a virtual reality driving environment. The roadway scenarios included lane changes, obstacle avoidance, intersection turns, and highway exits. The automated system with learning-based driver intent recognition was compared to both an automated system with a finite-state-machine-based driver intent estimator and an automated system without any driver intent prediction for all driving events. Test results demonstrate that semi-autonomous vehicle performance can be enhanced by up to 74.1% with a learning-based intent predictor. The proposed holistic framework, which integrates human intelligence, machine learning algorithms, and vehicle control, can help solve the driver-system conflict problem, leading to safer vehicle operations.
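The nonlinear sliding mode control used for the haptic steering interfaces can be sketched for a one-dimensional tracking error. The surface gain `lam`, switching gain `k`, and boundary layer `phi` below are illustrative values, not the dissertation's tuned parameters, and the plant is a toy unit-mass double integrator.

```python
def smc_control(e, e_dot, lam=2.0, k=5.0, phi=0.1):
    """Sliding-mode control law for a 1-D tracking error.
    Sliding surface s = e_dot + lam*e; a saturated switching term with
    boundary layer phi stands in for sign(s) to reduce chattering."""
    s = e_dot + lam * e
    sat = max(-1.0, min(1.0, s / phi))
    return -k * sat

def simulate(x0=1.0, v0=0.0, dt=0.001, steps=5000):
    """Drive a unit-mass double integrator (x_dot = v, v_dot = u)
    to the origin using smc_control; returns the final position."""
    x, v = x0, v0
    for _ in range(steps):
        u = smc_control(x, v)
        v += u * dt
        x += v * dt
    return x
```

Once the state reaches the surface s = 0, the error decays as exp(-lam*t) regardless of bounded model uncertainty, which is the robustness property that motivates sliding mode control here.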

    Proactive Robot Assistance via Spatio-Temporal Object Modeling

    Full text link
    Proactive robot assistance enables a robot to anticipate and provide for a user's needs without being explicitly asked. We formulate proactive assistance as the problem of the robot anticipating temporal patterns of object movements associated with everyday user routines, and proactively assisting the user by placing objects to adapt the environment to their needs. We introduce a generative graph neural network to learn a unified spatio-temporal predictive model of object dynamics from temporal sequences of object arrangements. We additionally contribute the Household Object Movements from Everyday Routines (HOMER) dataset, which tracks household objects associated with human activities of daily living across 50+ days for five simulated households. Our model outperforms the leading baseline in predicting object movement, correctly predicting locations for 11.1% more objects and wrongly predicting locations for 11.5% fewer objects used by the human user.
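As a much simpler point of comparison than the paper's generative graph neural network, the prediction task itself can be sketched with a frequency baseline over (object, time-of-day) pairs; the tuple format and the "hour" time granularity are assumptions for illustration only.

```python
from collections import Counter, defaultdict

def train(observations):
    """observations: list of (object, hour, location) tuples from past days.
    Builds per-(object, hour) location counts -- a naive frequency baseline,
    far simpler than the paper's spatio-temporal predictive model."""
    table = defaultdict(Counter)
    for obj, hour, loc in observations:
        table[(obj, hour)][loc] += 1
    return table

def predict(table, obj, hour, default="unknown"):
    """Predict the most frequently observed location for the object
    at this hour, falling back to `default` for unseen pairs."""
    counts = table.get((obj, hour))
    return counts.most_common(1)[0][0] if counts else default
```

A learned model earns its keep precisely where this baseline fails: routines that shift over time or depend on correlations between objects.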

    Real-time Hybrid Locomotion Mode Recognition for Lower-limb Wearable Robots

    Get PDF
    Real-time recognition of locomotion-related activities is a fundamental skill that the controller of lower-limb wearable robots should possess. Subject-specific training and reliance on electromyographic interfaces are the main limitations of existing approaches. This study presents a novel methodology for real-time recognition of locomotion-related activities in lower-limb wearable robotics. A hybrid classifier distinguishes among seven locomotion-related activities. First, a time-based approach classifies between static and dynamical states based on gait kinematics data. Second, an event-based fuzzy logic method triggered by foot pressure sensors operates in a subject-independent fashion on a minimal set of relevant biomechanical features to classify among dynamical modes. The locomotion mode recognition algorithm is implemented on the controller of a portable powered orthosis for hip assistance. An experimental protocol is designed to evaluate the controller's performance in an out-of-lab scenario without the need for subject-specific training. Experiments are conducted on six healthy volunteers performing locomotion-related activities at slow, normal, and fast speeds under the zero-torque and assistive modes of the orthosis. The overall accuracy rate of the controller is 99.4% over more than 10,000 steps, including seamless transitions between different modes. The experimental results show a successful subject-independent performance of the controller for wearable robots assisting locomotion-related activities.
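The two-stage structure (a time-based static/dynamic split, then classification among dynamic modes) can be caricatured as follows. The features, thresholds, and crisp if-rules here are illustrative stand-ins for the paper's fuzzy-logic method and its foot-pressure triggering.

```python
from statistics import pvariance

def classify(gyro_window, thigh_incline_deg, var_thresh=0.05):
    """Two-stage locomotion mode classifier sketch.
    Stage 1 (time-based): low angular-rate variance over the window
    indicates a static mode. Stage 2 (event-based): a crude rule on mean
    thigh incline separates the dynamic modes. All thresholds and feature
    choices are illustrative, not the paper's."""
    if pvariance(gyro_window) < var_thresh:
        return "standing"
    if thigh_incline_deg > 20:      # pronounced forward incline
        return "stair_ascent"
    if thigh_incline_deg < -20:     # pronounced backward incline
        return "stair_descent"
    return "level_walking"
```

Replacing the crisp thresholds with overlapping membership functions and rule aggregation is what makes the real method fuzzy, and is one reason it generalizes across subjects.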

    Instrumentation of a cane to detect and prevent falls

    Get PDF
    Master's dissertation in Biomedical Engineering (specialization in Medical Electronics). The number of falls is growing as the main cause of injuries and deaths in the geriatric community, and the cost of treating fall-related injuries is increasing with it. Thus, the development of fall-related strategies capable of real-time monitoring without restricting the user is imperative. Due to their advantages, daily-life accessories can be a solution for embedding fall-related systems, and canes are no exception. Moreover, gait assessment may enhance the usefulness of the cane for older users, further reducing the possibility of falls among them. In sum, it is crucial to develop strategies that continuously recognize fall states, the step before a fall (the pre-fall step), and the different cane events throughout a stride. This thesis aims to develop strategies capable of identifying these situations based on a cane system that collects both inertial and force information, the Assistive Smart Cane (ASCane). The fall detection strategy consisted of testing the data acquired with the ASCane against three fixed multi-threshold fall detection algorithms, one dynamic multi-threshold algorithm, and machine learning methods from the literature, all modified to account for the use of a cane. The best performance resulted in a sensitivity and specificity of 96.90% and 98.98%, respectively. For the detection of the different cane events in controlled and real-life situations, a state-of-the-art finite-state-machine gait event detector was modified to account for the use of a cane and benchmarked against a ground-truth system. Moreover, a machine learning study was completed involving eight feature selection methods and nine different machine learning classifiers.
Results have shown that the accuracy of the classifiers was quite acceptable, with best overall accuracies of 98.32% in controlled situations and 94.82% in daily-life situations. Regarding pre-fall step detection, the same machine learning approach was applied. The models were very accurate (accuracy = 98.15%) and, with the implementation of an online post-processing filter, all false positive detections were eliminated; a fall could be detected 1.019 s before the end of the corresponding pre-fall step and 2.009 s before impact.
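A fixed two-threshold fall detector of the kind benchmarked in the thesis can be sketched on accelerometer magnitude alone: a free-fall dip followed by an impact spike within a short window. The thresholds and window length below are illustrative, not the tuned cane-specific values.

```python
import math

def detect_fall(acc, fs=100, ff_thresh=0.5, impact_thresh=2.5, window_s=1.0):
    """Fixed two-threshold fall detector sketch.
    acc: list of (x, y, z) accelerometer samples in g at sample rate fs.
    Flags a fall when total acceleration drops below ff_thresh (free fall)
    and then exceeds impact_thresh (impact) within window_s seconds."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in acc]
    window = int(window_s * fs)
    for i, m in enumerate(mags):
        if m < ff_thresh:
            if any(v > impact_thresh for v in mags[i:i + window]):
                return True
    return False
```

Multi-threshold variants add checks such as post-impact orientation and inactivity, which is how the thesis's algorithms suppress false positives from normal cane strikes.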

    Interaction with a hand rehabilitation exoskeleton in EMG-driven bilateral therapy: Influence of visual biofeedback on the users’ performance

    Get PDF
    The effectiveness of EMG biofeedback with neurorehabilitation robotic platforms has not been previously addressed. The present work evaluates the influence of EMG-based visual biofeedback on user performance when performing EMG-driven bilateral exercises with a robotic hand exoskeleton. Eighteen healthy subjects were asked to perform 1-min randomly generated sequences of hand gestures (rest, open, and close) in four different conditions resulting from the combination of using or not (1) EMG-based visual biofeedback and (2) kinesthetic feedback from the exoskeleton movement. User performance in each test was measured by computing the similarity between the target gestures and the recognized user gestures using the L2 distance. Statistically significant differences in subject performance were found depending on the type of feedback provided (p-value = 0.0124). Pairwise comparisons showed that the L2 distance was statistically significantly lower when only EMG-based visual feedback was present (2.89 ± 0.71) than with kinesthetic feedback alone (3.43 ± 0.75, p-value = 0.0412) or the combination of both (3.39 ± 0.70, p-value = 0.0497). Hence, EMG-based visual feedback enables subjects to increase their control over the movement of the robotic platform by assessing their muscle activation in real time. This type of feedback could help patients learn more quickly how to activate robot functions, increasing their motivation towards rehabilitation. Funding: Ministerio de Ciencia e Innovación (project RTC2019-007350-1); Consejería de Educación, Fondo Social Europeo, Gobierno Vasco (BERC 2022-2025 and project 3KIA, KK-2020/00049); Ministerio de Ciencia, Innovación y Universidades (BCAM Severo Ochoa: SEV-2017-0718).
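One plausible reading of the L2-distance performance metric is to one-hot encode each gesture sample and take the Euclidean distance between the target and recognized sequences; the encoding is an assumption for illustration, not stated in the abstract.

```python
import math

GESTURES = ("rest", "open", "close")

def l2_distance(target_seq, recognized_seq):
    """L2 distance between two equal-length gesture sequences, with each
    sample one-hot encoded over GESTURES. Lower means the recognized
    gestures track the targets more closely (0.0 = perfect match)."""
    def onehot(g):
        return [1.0 if g == name else 0.0 for name in GESTURES]
    sq = 0.0
    for t, r in zip(target_seq, recognized_seq):
        sq += sum((a - b) ** 2 for a, b in zip(onehot(t), onehot(r)))
    return math.sqrt(sq)
```

Under this encoding every mismatched sample contributes the same squared penalty of 2, so the metric effectively counts recognition errors over the 1-min sequence.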

    An Incremental Navigation Localization Methodology for Application to Semi-Autonomous Mobile Robotic Platforms to Assist Individuals Having Severe Motor Disabilities.

    Get PDF
    In the present work, the author explores the issues surrounding the design and development of an intelligent wheelchair platform incorporating the semi-autonomous system paradigm to meet the needs of individuals with severe motor disabilities. The author presents a discussion of the navigation problems that must be solved before any system of this type can be instantiated, and enumerates the general design issues that must be addressed by the designers of such systems. This discussion includes reviews of various methodologies that have been proposed as solutions to the problems considered. Next, the author introduces a new navigation method, called Incremental Signature Recognition (ISR), for use by semi-autonomous systems in structured environments. This method is based on the recognition, recording, and tracking of environmental discontinuities: sensor-reported anomalies in measured environmental parameters. The author then proposes a robust, redundant, dynamic, self-diagnosing sensing methodology for detecting and compensating for hidden failures of single sensors and sensor idiosyncrasies. This technique is optimized for the detection of spatial discontinuity anomalies. Finally, the author gives details of an effort to realize a prototype ISR-based system, along with insights into the various implementation choices made.
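The core ISR notion of recording sensor-reported discontinuities can be illustrated with a minimal jump detector over a range-sensor stream; the jump threshold is a hypothetical value, and a real system would track these signatures across time rather than just flag them.

```python
def find_discontinuities(readings, jump=0.5):
    """Return the indices where consecutive range readings change by more
    than `jump` metres -- the kind of environmental discontinuity (e.g. a
    doorway edge or wall corner) that ISR would record and track.
    The threshold is illustrative."""
    return [i for i in range(1, len(readings))
            if abs(readings[i] - readings[i - 1]) > jump]
```

A sequence of such discontinuities along a corridor forms a "signature" that can be matched incrementally against previously recorded traversals to localize the platform.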

    Cross-Platform Implementation of an SSVEP-Based BCI for the Control of a 6-DOF Robotic Arm

    Full text link
    Robotics has been successfully applied in the design of collaborative robots for assistance to people with motor disabilities. However, man-machine interaction is difficult for those who suffer severe motor disabilities. The aim of this study was to test the feasibility of a low-cost robotic arm control system with an EEG-based brain-computer interface (BCI). The BCI system relies on the Steady-State Visually Evoked Potentials (SSVEP) paradigm. A cross-platform application was developed in C++. This C++ platform, together with the open-source software OpenViBE, was used to control a Staubli TX60 robot arm. Communication between OpenViBE and the robot was carried out through the Virtual Reality Peripheral Network (VRPN) protocol. EEG signals were acquired with the 8-channel Enobio amplifier from Neuroelectrics. For the processing of the EEG signals, Common Spatial Pattern (CSP) filters and a Linear Discriminant Analysis (LDA) classifier were used. Five healthy subjects tried the BCI. This work allowed the communication and integration of a well-known BCI development platform, OpenViBE, with the specific control software of the Staubli TX60 robot arm using the VRPN protocol. It can be concluded from this study that it is possible to control the robotic arm with an SSVEP-based BCI using a reduced number of dry electrodes, which facilitates the use of the system. Funding for open access charge: Universitat Politecnica de Valencia. Quiles Cucarella, E.; Dadone, J.; Chio, N.; García Moreno, E. (2022). Cross-Platform Implementation of an SSVEP-Based BCI for the Control of a 6-DOF Robotic Arm. Sensors 22(13):1-26. https://doi.org/10.3390/s22135000
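A simpler alternative to the paper's CSP + LDA pipeline is classic power spectral density analysis for SSVEP: score each candidate stimulus frequency by narrow-band power (here via the Goertzel algorithm) and pick the maximum. This is a sketch of SSVEP detection in general, not the study's implementation, and it assumes a single-channel signal.

```python
import math

def goertzel_power(signal, fs, freq):
    """Power of `signal` (sampled at fs Hz) at the DFT bin nearest `freq`,
    computed with the Goertzel recursion."""
    k = int(0.5 + len(signal) * freq / fs)       # nearest DFT bin index
    w = 2.0 * math.pi * k / len(signal)
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in signal:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def detect_target(signal, fs, stim_freqs):
    """Pick the flickering stimulus the user is attending to: the stimulus
    frequency with the highest narrow-band EEG power."""
    return max(stim_freqs, key=lambda f: goertzel_power(signal, fs, f))
```

CSP improves on this by spatially filtering multi-channel EEG before classification, which matters with noisy dry electrodes like those used in the study.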

    Co-adaptive control strategies in assistive Brain-Machine Interfaces

    Get PDF
    A large number of people with severe motor disabilities cannot access any of the available control inputs of current assistive products, which typically rely on residual motor functions. These patients are therefore unable to fully benefit from existing assistive technologies, including communication interfaces and assistive robotics. In this context, electroencephalography-based Brain-Machine Interfaces (BMIs) offer a potential non-invasive solution for exploiting a non-muscular channel for communication and control of assistive robotic devices, such as a wheelchair, a telepresence robot, or a neuroprosthesis. Still, non-invasive BMIs currently suffer from limitations, such as lack of precision, robustness, and comfort, which prevent their practical implementation in assistive technologies. The goal of this PhD research is to produce scientific and technical developments to advance the state of the art of assistive interfaces and service robotics based on BMI paradigms. Two main research paths to the design of effective control strategies were considered in this project. The first is the design of hybrid systems that combine the BMI with gaze control, a long-lasting motor function in many paralyzed patients; such an approach increases the degrees of freedom available for control. The second consists in the inclusion of adaptive techniques into the BMI design. This transforms robotic tools and devices into active assistants able to co-evolve with the user and learn new rules of behavior to solve tasks, rather than passively executing external commands. Following these strategies, the contributions of this work can be categorized based on the type of mental signal exploited for control.
These include: 1) the use of active signals for the development and implementation of hybrid eye-tracking and BMI control policies, for both communication and control of robotic systems; 2) the exploitation of passive mental processes to increase the adaptability of an autonomous controller to the user's intention and psychophysiological state, in a reinforcement learning framework; 3) the integration of active and passive brain control signals, to achieve adaptation within the BMI architecture at the level of feature extraction and classification.
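The hybrid gaze + BMI idea in item 1 can be caricatured as a confirmation policy in which gaze pre-selects a target and the BMI gates execution. The role split, the probability interface, and the threshold are illustrative assumptions, not the thesis design.

```python
def hybrid_command(gaze_target, bmi_confirm_prob, confirm_thresh=0.7):
    """Hybrid control policy sketch: the eye tracker supplies a candidate
    target (or None), and the BMI decoder's confidence for a 'confirm'
    intention gates whether the robot acts on it. Threshold is illustrative."""
    if gaze_target is not None and bmi_confirm_prob >= confirm_thresh:
        return ("go_to", gaze_target)
    return ("idle", None)
```

Splitting selection (gaze) from confirmation (BMI) is one way to raise the effective degrees of freedom while keeping the error-prone brain channel to a single binary decision.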