A user study on personalized stiffness control and task specificity in physical Human-Robot Interaction
Gopinathan S, Ötting SK, Steil JJ. A user study on personalized stiffness control and task specificity in physical Human-Robot Interaction. Frontiers in Robotics and AI. 2017;4:58.

An ideal physical human–robot interaction (pHRI) should offer users robotic systems that are easy to handle, intuitive to use, ergonomic, and adaptive to human habits and preferences. However, the variance in user behavior is often high and rather unpredictable, which hinders the development of such systems. This article introduces a Personalized Adaptive Stiffness controller for pHRI that is calibrated to the user's force profile and validates its performance in an extensive user study with 49 participants on two different tasks. The user study compares the new scheme to conventional fixed-stiffness and gravity-compensation controllers on the 7-DOF KUKA LWR IVb in two typical joint-manipulation tasks. The results clearly point out the importance of considering task-specific and human-specific parameters when designing control modes for pHRI. The analysis shows that for simpler tasks a standard fixed controller may perform sufficiently well, and that task dependency strongly prevails over individual differences. In the more complex task, quantitative and qualitative results reveal differences between the control modes, where the Personalized Adaptive Stiffness controller excels in terms of both performance gain and user preference. Further analysis shows that human and task parameters can be combined and quantified by considering the manipulability of a simplified human arm model. The analysis of users' interaction-force profiles confirms this finding.
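The abstract does not give the adaptation law, but the idea of calibrating controller stiffness to a user's measured force profile can be sketched loosely. In the minimal illustration below, every name and constant (`k_min`, `k_max`, `f_ref`) is hypothetical, not taken from the paper: a mean interaction-force magnitude is mapped onto a Cartesian stiffness value, becoming more compliant for users who exert larger forces.

```python
import numpy as np

def personalized_stiffness(force_history, k_min=50.0, k_max=1200.0, f_ref=30.0):
    """Map a user's recorded interaction forces (list of 3-D force vectors, N)
    onto a Cartesian stiffness value in [k_min, k_max] (N/m).
    f_ref is an illustrative calibration constant: the mean force magnitude
    at which the stiffness saturates at its most compliant setting."""
    f = np.linalg.norm(np.mean(force_history, axis=0))  # mean interaction force
    ratio = min(f / f_ref, 1.0)
    # More compliance (lower stiffness) for users who push harder
    return k_max - ratio * (k_max - k_min)
```

A user exerting a mean force of 15 N (half of `f_ref`) would receive a stiffness halfway between the two bounds; the actual calibration in the paper is more elaborate and per-task.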
Personalization and Adaptation in Physical Human-Robot Interaction
Gopinathan S. Personalization and Adaptation in Physical Human-Robot Interaction. Bielefeld: Universität Bielefeld; 2019.

Recent advancements in physical human-robot interaction (pHRI) make it possible for compliant robots to assist their human counterparts while working closely together. An ideal control mode designed for pHRI should be easy to handle, intuitive to use, ergonomic, and adaptive to human habits and preferences. The major stumbling block in achieving this is that each user has varying physical capabilities and characteristics. This variance in user behavior and other features is often high and rather unpredictable, which hinders the development of such systems. To tackle this problem, the idea of personalized adaptive stiffness control for pHRI is introduced in this thesis. Extensive user studies are conducted within the scope of this thesis, and various control modes for pHRI are proposed and evaluated in them. Both naive and expert users were considered in the user studies, and inferences from each study were used to make the control mode better suited for pHRI.
The thesis follows a meticulous research plan: an initial user study confirms the importance of pHRI and kinesthetic guidance in industrial tasks. Subsequently, user-interactive force-based adaptation is proposed, and a second user study is conducted in which it is compared with standard control modes for pHRI. The importance of task-specific parameters and the need to combine task and human factors emerged from the results of the second user study. In the next phase, manipulability-based approaches that combine both task and human parameters are proposed and validated in a third user study. In the final phase, a fourth user study compares the proposed control modes against more complex methods that have been proposed in the literature.
The importance of human physical factors and the need for human-centered systems in pHRI are validated in this thesis. The results show that including these human factors not only improves performance but also improves interaction quality and reduces the complexity of pHRI.
Dyadic behavior in co-manipulation: from humans to robots
To both decrease the physical toll on a human worker and increase a robot's environment perception, a human-robot dyad may be used to co-manipulate a shared object. From the premise that humans are efficient when working together, this work investigates human-human dyads co-manipulating an object. The co-manipulation is evaluated using motion-capture data, surface electromyography (EMG) sensors, and custom contact sensors for qualitative performance analysis. A human-human dyadic co-manipulation experiment is designed in which each human is instructed to behave as a leader, as a follower, or neither, acting as naturally as possible. The analysis of the experiment data revealed that humans modulate their arm mechanical impedance depending on their role during the co-manipulation. In order to emulate this human behavior during a co-manipulation task, an admittance controller with varying stiffness is presented. The desired stiffness is continuously varied based on a smooth scalar function that assigns a degree of leadership to the robot. Furthermore, the controller is analyzed through simulations, and its stability is analyzed using Lyapunov theory. The resulting object trajectories closely resemble the patterns seen in the human-human dyad experiment.
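The abstract describes an admittance controller whose stiffness follows a smooth scalar "degree of leadership". The sketch below is an assumed minimal version of that idea, not the thesis controller: `alpha` in [0, 1] blends follower and leader stiffness through a smoothstep function, and one Euler step of a 1-D admittance law shows where the varying stiffness enters.

```python
import numpy as np

def leadership_stiffness(alpha, k_follower=100.0, k_leader=800.0):
    """Blend follower and leader stiffness (N/m, illustrative values) with a
    smooth, differentiable function of the leadership degree alpha in [0, 1]."""
    alpha = np.clip(alpha, 0.0, 1.0)
    s = 3 * alpha**2 - 2 * alpha**3      # smoothstep
    return k_follower + s * (k_leader - k_follower)

def admittance_step(x, v, f_ext, alpha, x_des, m=5.0, d=50.0, dt=0.002):
    """One Euler step of the 1-D admittance law
    m*a + d*v + k(alpha)*(x - x_des) = f_ext."""
    k = leadership_stiffness(alpha)
    a = (f_ext - d * v - k * (x - x_des)) / m
    v_new = v + a * dt
    return x + v_new * dt, v_new
```

With `alpha = 0` the robot is compliant and yields to the human's force; with `alpha = 1` it stiffly tracks its own desired trajectory, which mirrors the leader/follower impedance modulation observed in the human-human dyads.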
Voluntary control of wearable robotic exoskeletons by patients with paresis via neuromechanical modeling.
BACKGROUND: Research efforts in neurorehabilitation technologies have been directed towards creating robotic exoskeletons to restore motor function in impaired individuals. However, despite advances in mechatronics and bioelectrical signal processing, current robotic exoskeletons have had only modest clinical impact. A major limitation is the inability to enable exoskeleton voluntary control in neurologically impaired individuals. This hinders the possibility of optimally inducing the activity-driven neuroplastic changes that are required for recovery. METHODS: We have developed a patient-specific computational model of the human musculoskeletal system controlled via neural surrogates, i.e., electromyography-derived neural activations to muscles. The electromyography-driven musculoskeletal model was synthesized into a human-machine interface (HMI) that enabled poststroke and incomplete spinal cord injury patients to voluntarily control multiple joints in a multifunctional robotic exoskeleton in real time. RESULTS: We demonstrated patients' control accuracy across a wide range of lower-extremity motor tasks. Remarkably, an increased level of exoskeleton assistance always resulted in a reduction in both amplitude and variability in muscle activations as well as in the mechanical moments required to perform a motor task. Since small discrepancies in onset time between human limb movement and that of the parallel exoskeleton would potentially increase human neuromuscular effort, these results demonstrate that the developed HMI precisely synchronizes the device actuation with residual voluntary muscle contraction capacity in neurologically impaired patients. CONCLUSIONS: Continuous voluntary control of robotic exoskeletons (i.e. event-free and task-independent) has never been demonstrated before in populations with paretic and spastic-like muscle activity, such as those investigated in this study. 
Our proposed methodology may open new avenues for harnessing residual neuromuscular function in neurologically impaired individuals via symbiotic wearable robots.
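The HMI above drives a musculoskeletal model with electromyography-derived neural activations. The standard front end of such a pipeline can be sketched as follows; this is a generic simplification (filter order, cutoff, and shape factor `A` are assumed, not the paper's values): the raw EMG is rectified, low-pass filtered into an envelope, normalised, and passed through the common exponential activation nonlinearity.

```python
import numpy as np

def emg_to_activation(emg, fs=1000.0, fc=2.0, A=-1.5):
    """Convert raw EMG (sampled at fs Hz) to a neural activation in [0, 1]:
    full-wave rectify, first-order low-pass filter (cutoff fc Hz) to get an
    envelope, normalise, then apply the widely used exponential nonlinearity
    a(u) = (exp(A*u) - 1) / (exp(A) - 1), with shape factor A in [-3, 0)."""
    rectified = np.abs(np.asarray(emg, dtype=float))
    alpha = 1.0 - np.exp(-2.0 * np.pi * fc / fs)   # smoothing coefficient
    env = np.empty_like(rectified)
    acc = 0.0
    for i, r in enumerate(rectified):              # exponential smoothing
        acc += alpha * (r - acc)
        env[i] = acc
    u = env / max(env.max(), 1e-9)                 # normalise to [0, 1]
    return (np.exp(A * u) - 1.0) / (np.exp(A) - 1.0)
```

The resulting activation signal is what a muscle model (and, downstream, the exoskeleton torque controller) would consume; real-time implementations replace the normalisation by a per-muscle maximum-voluntary-contraction calibration.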
Patient-specific simulation environment for surgical planning and preoperative rehearsal
Surgical simulation is common practice in the fields of surgical education and training. Numerous surgical simulators are available from commercial and academic organisations for the generic modelling of surgical tasks. However, a simulation platform has yet to be found that fulfils the key requirements for patient-specific surgical simulation of soft tissue, with an effective translation into clinical practice. Patient-specific modelling is possible, but to date has been time-consuming, and consequently costly, because data preparation can be technically demanding. This motivated the research developed herein, which addresses the main challenges of biomechanical modelling for patient-specific surgical simulation. A novel implementation of soft-tissue deformation and estimation of the patient-specific intraoperative environment is achieved using a position-based dynamics approach. This modelling approach overcomes the limitations of traditional physically-based approaches by providing a simulation of patient-specific models with visual and physical accuracy, stability, and real-time interaction. As a geometrically-based method, it requires a calibration of the simulation parameters, which is performed, and the simulation framework is successfully validated through experimental studies. The capabilities of the simulation platform are demonstrated by the integration of different surgical planning applications relevant in the context of kidney cancer surgery. The simulation of pneumoperitoneum facilitates trocar-placement planning and intraoperative surgical navigation. The implementation of deformable ultrasound simulation can assist surgeons in improving their scanning technique and defining an optimal procedural strategy. Furthermore, the simulation framework has the potential to support the development and assessment of hypotheses that cannot be tested in vivo. Specifically, the evaluation of feedback modalities, as a response to user-model interaction, demonstrates improved performance and justifies the need to integrate a feedback framework in the robot-assisted surgical setting.
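Position-based dynamics, the approach named in the abstract, works by directly projecting particle positions onto constraints rather than integrating forces. As a generic illustration of that core step (the Müller-style distance-constraint projection, not this thesis's implementation), two particles connected by a constraint are pulled back toward their rest length, weighted by inverse mass:

```python
import numpy as np

def project_distance_constraint(p1, p2, w1, w2, rest_len, stiffness=1.0):
    """PBD projection of a distance constraint between particles at p1, p2
    with inverse masses w1, w2: move both points along the connecting axis
    so their separation approaches rest_len, scaled by stiffness in [0, 1]."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    if dist < 1e-12 or w1 + w2 == 0.0:
        return p1, p2                      # degenerate or both pinned
    n = d / dist                           # constraint direction
    corr = stiffness * (dist - rest_len) / (w1 + w2)
    return p1 + w1 * corr * n, p2 - w2 * corr * n
```

A soft-tissue mesh is simulated by iterating such projections over all constraints each frame, which is what gives PBD its unconditional stability and real-time speed at the cost of physical parameters that must be calibrated, as the abstract notes.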