28 research outputs found

    Imitation Learning-based Visual Servoing for Tracking Moving Objects

    In everyday collaboration tasks between human operators and robots, the former need simple ways to program new skills, while the latter must show adaptive capabilities to cope with environmental changes. The joint use of visual servoing and imitation learning makes it possible to realize user-friendly robotic interfaces that (i) adapt to the environment thanks to visual perception and (ii) avoid explicit programming thanks to the emulation of previous demonstrations. This work exploits imitation learning within the visual servoing paradigm to address the specific problem of tracking moving objects. In particular, we show that the compensation term required by the tracking controller can be inferred from data, avoiding the explicit implementation of estimators or observers. The effectiveness of the proposed method has been validated through simulations with a robotic manipulator.
    Comment: International Workshop on Human-Friendly Robotics (HFR), 202
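
    The role of the compensation term can be illustrated with a scalar toy example (all quantities are hypothetical: `L` is the interaction matrix, `drift` the feature motion induced by the target, and `drift_hat` stands in for the learned compensation):

```python
import numpy as np

# Classical image-based visual servoing drives the feature error e to zero
# with v = pinv(L) @ (-lam*e - drift_hat): the first term gives exponential
# error decay, the second compensates the target's own motion. In the paper
# this compensation is inferred from demonstrations; here it is simply given.

def servo_velocity(L, e, drift_hat, lam=2.0):
    return np.linalg.pinv(L) @ (-lam * e - drift_hat)

def simulate(drift_hat, steps=2000, dt=0.01, lam=2.0):
    L = np.array([[1.0]])
    drift = np.array([0.3])            # target-induced feature drift
    e = np.array([1.0])
    for _ in range(steps):
        v = servo_velocity(L, e, drift_hat, lam)
        e = e + dt * (L @ v + drift)   # error dynamics: e_dot = L v + drift
    return abs(e[0])

err_with = simulate(drift_hat=np.array([0.3]))     # learned term = true drift
err_without = simulate(drift_hat=np.array([0.0]))  # no compensation
```

    Without the compensation term the error settles at a steady-state offset (drift/lam); the feedforward term is exactly what removes it.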

    Differentiable Compliant Contact Primitives for Estimation and Model Predictive Control

    Control techniques such as MPC can realize contact-rich manipulation that exploits dynamic information while maintaining friction limits and safety constraints. However, they require the contact geometry and dynamics to be known. This information is often extracted from CAD, limiting scalability and the ability to handle tasks with varying geometry. To reduce the need for a priori models, we propose a framework for estimating contact models online based on torque and position measurements. To do this, compliant contact models are used, connected in parallel to model multi-point contact and constraints such as a hinge. They are parameterized to be differentiable with respect to all of their parameters (rest position, stiffness, contact location), allowing the coupled robot/environment dynamics to be linearized or used efficiently in gradient-based optimization. These models are then applied for: offline gradient-based parameter fitting, online estimation via an extended Kalman filter, and online gradient-based MPC. The proposed approach is validated on two robots, showing the efficacy of sensorless contact estimation and the effects of online estimation on MPC performance.
    Comment: Submitted ICRA24. Video available at https://youtu.be/CuCTcmn3H-o Code available at https://gitlab.cc-asp.fraunhofer.de/hanikevi/contact_mp
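
    The differentiability requirement can be sketched with a hypothetical 1-D compliant contact primitive (not the paper's parameterization): a spring that is active only in penetration, with analytic gradients checked against finite differences.

```python
import numpy as np

# Hypothetical 1-D compliant contact primitive f(x) = k * max(x0 - x, 0),
# with parameters (x0, k) = (rest position, stiffness). Analytic gradients
# like these are what allow the coupled dynamics to be linearized for an
# EKF or used in gradient-based MPC.

def force(x, x0, k):
    return k * np.maximum(x0 - x, 0.0)

def grad(x, x0, k):
    active = (x0 - x) > 0                        # in-contact indicator
    return k * active, np.maximum(x0 - x, 0.0)   # df/dx0, df/dk

x, x0, k, h = 0.01, 0.03, 200.0, 1e-6
g_x0, g_k = grad(x, x0, k)
fd_x0 = (force(x, x0 + h, k) - force(x, x0 - h, k)) / (2 * h)
fd_k = (force(x, x0, k + h) - force(x, x0, k - h)) / (2 * h)
```

    The model is only piecewise-smooth (the gradient is zero out of contact), which is why the contact-activity indicator appears explicitly in the gradient.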

    Brain-computer interface for robot control with eye artifacts for assistive applications

    Human-robot interaction is a rapidly developing field, and robots are taking increasingly active roles in our daily lives. Patient care is one of the fields in which robots are becoming more present, especially for people with disabilities. People with neurodegenerative disorders may be unable to consciously or voluntarily produce movements other than those involving the eyes or eyelids. In this context, Brain-Computer Interface (BCI) systems present an alternative way to communicate or interact with the external world. To improve the lives of people with disabilities, this paper presents a novel BCI to control an assistive robot with the user's eye artifacts. In this study, eye artifacts that contaminate electroencephalogram (EEG) signals are treated as a valuable source of information thanks to their high signal-to-noise ratio and intentional generation. The proposed methodology detects eye artifacts from EEG signals through the characteristic shapes that occur during these events. Lateral eye movements are distinguished by their ordered peak-and-valley formation and by the opposite phase of the signals measured at the F7 and F8 channels. To the best of the authors' knowledge, this is the first method to use this behavior to detect lateral eye movements. For blink detection, the authors propose a double-thresholding method to catch both weak and regular blinks, differentiating it from other algorithms in the literature, which normally use a single threshold. Real-time detected events, with their virtual time stamps, are fed into a second algorithm that distinguishes single, double, and quadruple blinks by their occurrence frequency. After offline and real-time testing, the algorithm was implemented on the device. The resulting BCI was used to control an assistive robot through a graphical user interface. Validation experiments with 5 participants show that the developed BCI is able to control the robot.
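
    The double-thresholding idea can be sketched as follows (thresholds and amplitudes are hypothetical, not the paper's values): a low threshold opens a candidate event so weak blinks are not missed, and the peak inside the event is compared against a high threshold to label it.

```python
import numpy as np

# Double-thresholding sketch: an event opens when the signal crosses the
# low threshold and closes when it drops back below it; the event's peak
# against the high threshold decides "weak" vs "regular".

def detect_blinks(sig, low=40.0, high=100.0):
    events, start = [], None
    for i, v in enumerate(sig):
        if start is None and v >= low:
            start = i                              # candidate event opens
        elif start is not None and v < low:
            peak = max(sig[start:i])
            events.append(("regular" if peak >= high else "weak", start, i))
            start = None
    if start is not None:                          # flush event cut off at the end
        peak = max(sig[start:])
        events.append(("regular" if peak >= high else "weak", start, len(sig)))
    return events

# Synthetic EEG-like trace (uV): one regular blink, then one weak blink.
t = np.arange(400)
sig = 120 * np.exp(-0.5 * ((t - 100) / 10) ** 2) \
    + 60 * np.exp(-0.5 * ((t - 300) / 10) ** 2)
labels = [e[0] for e in detect_blinks(sig)]
```

    A single-threshold detector set high enough to reject noise would miss the second, weaker blink entirely; the low/high pair separates detection from classification.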

    Predicting human motion intention for pHRI assistive control

    This work addresses human intention identification during physical Human-Robot Interaction (pHRI) tasks, so that this information can be included in an assistive controller. To this purpose, human intention is defined as the desired trajectory that the human wants to follow over a finite rolling prediction horizon, so that the robot can assist in pursuing it. This work investigates a Recurrent Neural Network (RNN), specifically a Long Short-Term Memory (LSTM) network cascaded with a Fully Connected (FC) layer. In particular, we propose an iterative training procedure to adapt the model. Such an iterative procedure is effective at reducing the prediction error, but it is time-consuming and does not generalize to different users or different co-manipulated objects. To overcome this issue, Transfer Learning (TL) adapts the pre-trained model to new trajectories, users, and co-manipulated objects by freezing the LSTM layer and fine-tuning the last FC layer, which makes the procedure faster. Experiments show that the iterative procedure adapts the model and reduces the prediction error, and that TL adapts it to different users and to the co-manipulation of a large object. Finally, to assess the utility of the proposed method, we compare the proposed controller, enhanced by the intention prediction, with two other standard pHRI controllers.
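
    The frozen-encoder/fine-tuned-head step can be sketched as follows (shapes and data are hypothetical, and a fixed random recurrent encoder stands in for the pretrained LSTM):

```python
import numpy as np

# Transfer-learning sketch: the recurrent weights stay frozen and only the
# final fully connected layer is re-fit on the new user's trajectories --
# here in closed form with ridge regression, which is what makes the
# adaptation fast compared to retraining the whole network.

rng = np.random.default_rng(0)
H, D = 16, 3                                 # hidden size, input dimension
W_in = rng.normal(0, 0.3, (H, D))            # frozen encoder weights
W_rec = rng.normal(0, 0.3, (H, H))

def encode(traj):
    """Run the frozen encoder over a trajectory; return the last hidden state."""
    h = np.zeros(H)
    for x in traj:
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

# New-user data: 200 windows of 10 samples, one scalar target per window.
trajs = rng.normal(size=(200, 10, D))
targets = trajs[:, -1, :].sum(axis=1, keepdims=True)
feats = np.stack([encode(tr) for tr in trajs])

# Fine-tune only the FC head (ridge regression instead of SGD).
lam = 1e-3
W_fc = np.linalg.solve(feats.T @ feats + lam * np.eye(H), feats.T @ targets)
pred = feats @ W_fc
```

    Because only the head is re-fit, adaptation touches H parameters per output instead of the full recurrent weight matrices, mirroring why the TL step is fast.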

    Design methodology of an active back-support exoskeleton with adaptable backbone-based kinematics

    Manual labor is still strongly present in many industrial contexts (such as the aerospace industry). Such operations commonly involve onerous tasks that require working in non-ergonomic conditions and manipulating heavy parts. As a result, work-related musculoskeletal disorders are a major problem to tackle in the workplace, and the back is one of the most affected regions. To address this issue, many efforts have been made in the design and control of exoskeleton devices that relieve the human from the task load. Besides upper-limb and lower-limb exoskeletons, back-support exoskeletons have also been investigated, with both passive and active solutions. While passive solutions cannot empower the human's capabilities, common active devices are rigid and cannot track the kinematics of the human spine during task execution. This paper describes a methodology to design an active back-support exoskeleton with backbone-based kinematics. On the basis of the (easily implementable) scissor hinge mechanism, a one-degree-of-freedom device has been designed. The resulting device tracks the motion of a reference vertebra, i.e., the vertebra located at the connection between the scissor hinge mechanism and the back of the operator. The proposed device can therefore adapt to the human posture, guaranteeing support while relieving the person from the task load. In addition, the proposed mechanism can be easily optimized and realized for different subjects through a subject-based design procedure, making it possible to adapt its kinematics to track the spine motion of a specific user. A prototype of the proposed device has been 3D-printed to demonstrate the achieved kinematics. Preliminary discomfort-evaluation tests show the potential of the proposed methodology, in view of extensive subject-based optimization, realization, and testing of the device.
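
    The single-degree-of-freedom character of a scissor mechanism can be illustrated with a toy kinematics sketch (link length, stage count, and angles are hypothetical, not the paper's design values):

```python
import math

# Toy scissor-lift kinematics: with links of length `link` crossed at
# their midpoints and inclined by theta from the horizontal, each stage
# spans link*sin(theta) vertically, so the whole extension is driven by
# the single joint variable theta.

def scissor_height(theta, n_stages=3, link=0.12):
    """Vertical extension (m) of an n-stage scissor at link angle theta (rad)."""
    return n_stages * link * math.sin(theta)

h_low = scissor_height(math.radians(20))
h_high = scissor_height(math.radians(60))
```

    One actuated angle thus determines the full extension, which is what allows a single drive to follow the reference vertebra.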

    Environment-based Assistance Modulation for a Hip Exosuit via Computer Vision

    Just as vision plays a fundamental role in guiding adaptive human locomotion, Computer Vision may bring substantial improvements to the control strategy of a walking assistive technology by enabling environment-based assistance modulation. In this work, we developed a hip exosuit controller able to distinguish among three different walking terrains through an RGB camera and to adapt the assistance accordingly. The system was tested with seven healthy participants walking along an overground path comprising staircases and level ground. Subjects performed the task with the exosuit disabled (Exo Off), with a constant assistance profile (Vision Off), and with assistance modulation (Vision On). Our results showed that the controller promptly classified the path in front of the user in real time, with an overall per-class accuracy above 85%, and modulated the assistance accordingly. Evaluation of the effects on the user showed that Vision On outperformed the other two conditions: we obtained significantly higher metabolic savings than Exo Off, with a peak of about -20% when climbing the staircase and about -16% over the whole path, and higher savings than Vision Off when ascending or descending stairs. Such advancements may represent a step forward for the exploitation of lightweight walking assistive technologies in real-life scenarios.
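
    The terrain-to-assistance mapping can be sketched as follows (class names, gains, and the smoothing window are hypothetical, not the paper's values):

```python
from collections import Counter, deque

# Assistance-modulation sketch: frame-by-frame terrain predictions from
# the camera are smoothed with a short majority-vote buffer before the
# assistance profile switches, so a single misclassified frame cannot
# toggle the assistance.

GAIN = {"level": 0.5, "stairs_up": 1.0, "stairs_down": 0.3}

def modulate(frames, win=5):
    buf, gains = deque(maxlen=win), []
    for cls in frames:
        buf.append(cls)
        majority = Counter(buf).most_common(1)[0][0]
        gains.append(GAIN[majority])
    return gains

# One spurious "stairs_up" frame amid level walking, then a real transition.
frames = ["level"] * 6 + ["stairs_up"] + ["level"] * 2 + ["stairs_up"] * 6
gains = modulate(frames)
```

    The single glitch frame leaves the gain unchanged, while the sustained terrain change eventually switches the profile.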

    A Machine Learning Approach for Mortality Prediction in COVID-19 Pneumonia: Development and Evaluation of the Piacenza Score

    Background: Several models have been developed to predict mortality in patients with COVID-19 pneumonia, but only a few have demonstrated sufficient discriminatory capacity. Machine learning algorithms represent a novel approach to the data-driven prediction of clinical outcomes, with advantages over statistical modeling. Objective: We aimed to develop a machine learning-based score, the Piacenza score, for 30-day mortality prediction in patients with COVID-19 pneumonia. Methods: The study comprised 852 patients with COVID-19 pneumonia admitted to the Guglielmo da Saliceto Hospital in Italy from February to November 2020. Patients' medical history, demographics, and clinical data were collected using an electronic health record. The overall patient data set was randomly split into derivation and test cohorts. The score was obtained through a naive Bayes classifier and externally validated on 86 patients admitted to Centro Cardiologico Monzino (Italy) in February 2020. Using a forward-search algorithm, 6 features were identified: age, mean corpuscular hemoglobin concentration, PaO2/FiO2 ratio, temperature, previous stroke, and gender. The Brier index was used to evaluate the ability of the machine learning model to stratify and predict the observed outcomes. A user-friendly website was designed and developed to enable fast and easy use of the tool by physicians. Regarding the customization properties of the Piacenza score, we added a tailored version of the algorithm to the website, which enables an optimized computation of the mortality risk score when some of the variables used by the Piacenza score are not available for a patient. In this case, the naive Bayes classifier is retrained over the same derivation cohort but using a different set of patient characteristics. We also compared the Piacenza score with the 4C score and with a naive Bayes algorithm using 14 features chosen a priori. Results: The Piacenza score exhibited an area under the receiver operating characteristic curve (AUC) of 0.78 (95% CI 0.74-0.84, Brier score=0.19) in the internal validation cohort and 0.79 (95% CI 0.68-0.89, Brier score=0.16) in the external validation cohort, showing accuracy comparable to the 4C score and to the naive Bayes model with a priori chosen features, which achieved AUCs of 0.78 (95% CI 0.73-0.83, Brier score=0.26) and 0.80 (95% CI 0.75-0.86, Brier score=0.17), respectively. Conclusions: Our findings demonstrate that a customizable machine learning-based score with a purely data-driven selection of features is feasible and effective for predicting mortality among patients with COVID-19 pneumonia.
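
    The retrain-on-available-features idea can be sketched with a toy Gaussian naive Bayes (synthetic data and feature counts are hypothetical, not the clinical model):

```python
import numpy as np

# Customization sketch: a Gaussian naive Bayes is refit on whichever
# subset of feature columns is available for a given patient, simply by
# re-estimating per-class priors, means, and variances over the
# derivation cohort restricted to those columns.

def fit_gnb(X, y):
    params = {}
    for c in (0, 1):
        Xc = X[y == c]
        params[c] = (len(Xc) / len(X), Xc.mean(0), Xc.var(0) + 1e-6)
    return params

def predict_proba(params, x):
    logp = {}
    for c, (prior, mu, var) in params.items():
        logp[c] = np.log(prior) - 0.5 * np.sum(np.log(2 * np.pi * var)
                                               + (x - mu) ** 2 / var)
    m = max(logp.values())
    p = {c: np.exp(v - m) for c, v in logp.items()}
    return p[1] / sum(p.values())          # probability of class 1

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 6)), rng.normal(2, 1, (100, 6))])
y = np.array([0] * 100 + [1] * 100)

full = fit_gnb(X, y)                       # all 6 features available
subset = fit_gnb(X[:, [0, 2]], y)          # retrain when only 2 are known
p_sub = predict_proba(subset, np.array([2.0, 2.0]))
p_full = predict_proba(full, np.zeros(6))
```

    Because naive Bayes factorizes over features, refitting on a column subset needs no change to the model structure, which is what makes this per-patient customization cheap.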

    Development of Impedance Control Based Strategies for Light-Weight Manipulator Applications Involving Compliant Interacting Environments and Compliant Bases

    The paper defines impedance-control-based laws for interaction tasks with environments of unknown geometrical and mechanical properties, considering manipulators mounted on A) rigid and B) compliant bases. In A), a deformation-tracking strategy allows the control of a desired deformation of the target environment. In B), a force-tracking strategy allows the control of a desired interaction force. Both A) and B) require the online estimation of the environment stiffness; therefore, an Extended Kalman Filter is defined. In B), the online estimate of the robot base position is also used as feedback in the control loop: the compliant base is modelled as a second-order physical system with known parameters (identified offline), and the base position is estimated from the measured interaction forces. The Extended Kalman Filter, the grounding-position estimation, and the defined control laws are validated in simulation and in experiments dedicated to an insertion-assembly task with A) a time-varying-stiffness environment and B) a constant-stiffness environment.
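
    The online stiffness estimation can be sketched in a scalar toy version (all values hypothetical, reduced to one dimension rather than the paper's full filter):

```python
import numpy as np

# EKF sketch for environment stiffness: the unknown stiffness K_e is the
# filter state, modelled as a slow random walk, and the measured
# interaction force f = K_e * x is the observation, so the measurement
# Jacobian is simply the penetration x.

def ekf_stiffness(xs, fs, K0=100.0, P0=1e4, q=1e-3, r=1.0):
    K, P = K0, P0
    for x, f in zip(xs, fs):
        P += q                         # predict: random-walk stiffness
        S = x * P * x + r              # innovation covariance
        G = P * x / S                  # Kalman gain
        K += G * (f - K * x)           # update with the force innovation
        P *= (1 - G * x)
    return K

rng = np.random.default_rng(0)
K_true = 500.0
xs = rng.uniform(0.001, 0.01, 300)               # penetration samples (m)
fs = K_true * xs + rng.normal(0, 0.5, 300)       # noisy force measures (N)
K_hat = ekf_stiffness(xs, fs)
```

    The random-walk process noise `q` lets the same filter track a time-varying stiffness, as in case A) of the experiments.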