User Intent Detection and Control of a Soft Poly-Limb
Abstract: This work presents the integration of user intent detection and control in the development of the fluid-driven, wearable, continuum Soft Poly-Limb (SPL). The SPL draws on the many traits of soft robotics to enable a novel approach to providing safe and compliant mobile manipulation assistance to healthy and impaired users. This wearable system equips the user with an additional limb made of soft materials that can be controlled to produce complex three-dimensional motion in space, like its biological counterparts with hydrostatic muscles. Similar to the elephant trunk, the SPL is able to manipulate objects using various end effectors, such as suction adhesion or a soft grasper, and can also wrap its entire length around objects for manipulation. User control of the limb is demonstrated using multiple user intent detection modalities. Further, the performance of the SPL is studied by testing its capability to interact safely and closely with a user through a spatial mobility test. Finally, the limb's ability to assist the user is explored through multitasking scenarios and pick-and-place tests with varying mounting locations of the arm around the user's body. The results of these assessments demonstrate the SPL's ability to safely interact with the user while exhibiting promising performance in assisting with a wide variety of tasks in both work and general living scenarios. Dissertation/Thesis: Masters Thesis, Biomedical Engineering, 201
Bio-Artificial Synergies for Grasp Posture Control of Supernumerary Robotic Fingers
A new type of wrist-mounted robot, the Supernumerary Robotic (SR) Fingers, is proposed to work closely with the human hand and aid the human in performing a variety of prehensile tasks. For people with diminished functionality of their hands, these robotic fingers could provide the opportunity to live with more independence and work more productively. A natural and implicit coordination between the SR Fingers and the human fingers is required so the robot can be transformed to act as part of the human body. This paper presents a novel control algorithm, termed "Bio-Artificial Synergies", which enables the SR and human fingers to share the task load together and adapt to diverse task conditions. Through grasp experiments and data analysis, postural synergies were found for a seven-fingered hand composed of two SR Fingers and five human fingers. The synergy-based control law was then extracted from the experimental data using Partial Least Squares (PLS) regression and tested on the SR Finger prototype as a proof of concept.
Intuitive Human-Machine Interfaces for Non-Anthropomorphic Robotic Hands
As robots become more prevalent in our everyday lives, both in our workplaces and in our homes, it becomes increasingly likely that people who are not experts in robotics will be asked to interface with robotic devices. It is therefore important to develop robotic controls that are intuitive and easy for novices to use. Robotic hands, in particular, are very useful, but their high dimensionality makes creating intuitive human-machine interfaces for them complex. In this dissertation, we study the control of non-anthropomorphic robotic hands by non-roboticists in two contexts: collaborative manipulation and assistive robotics.
In the field of collaborative manipulation, the human and the robot work side by side as independent agents. Teleoperation allows the human to assist the robot when autonomous grasping is not able to deal sufficiently well with corner cases or cannot operate fast enough. Using the teleoperator’s hand as an input device can provide an intuitive control method, but finding a mapping between a human hand and a non-anthropomorphic robot hand can be difficult, due to the hands’ dissimilar kinematics. In this dissertation, we seek to create a mapping between the human hand and a fully actuated, non-anthropomorphic robot hand that is intuitive enough to enable effective real-time teleoperation, even for novice users.
We propose a low-dimensional and continuous teleoperation subspace which can be used as an intermediary for mapping between different hand pose spaces. We first propose the general concept of the subspace, its properties and the variables needed to map from the human hand to a robot hand. We then propose three ways to populate the teleoperation subspace mapping. Two of our mappings use a dataglove to harvest information about the user's hand. We define the mapping between joint space and teleoperation subspace with an empirical definition, which requires a person to define hand motions in an intuitive, hand-specific way, and with an algorithmic definition, which is kinematically independent, and uses objects to define the subspace. Our third mapping for the teleoperation subspace uses forearm electromyography (EMG) as a control input.
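A minimal sketch of the subspace mapping, under assumed dimensions (20 human joints, 4 robot joints, a 3-D subspace) and with random projection matrices standing in for the empirically or algorithmically defined ones:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical projections into the shared teleoperation subspace; in the
# dissertation these come from the empirical or algorithmic definitions,
# not from random values.
A_human = rng.normal(size=(3, 20))   # human joint space -> subspace
A_robot = rng.normal(size=(3, 4))    # robot joint space -> subspace

def teleoperate(human_joints):
    """Map a human hand pose to robot joint angles through the subspace."""
    z = A_human @ human_joints                        # project into subspace
    # choose the robot pose whose subspace projection matches z
    robot_joints, *_ = np.linalg.lstsq(A_robot, z, rcond=None)
    return robot_joints

pose = rng.normal(size=20)           # e.g. dataglove joint angles
cmd = teleoperate(pose)
print(cmd.shape)  # (4,)
```

Because the subspace is low-dimensional and continuous, the same pipeline applies whether the input comes from a dataglove or from an EMG decoder that outputs subspace coordinates directly.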
Assistive orthotics is another area of robotics where human-machine interfaces are critical, since, in this field, the robot is attached to the hand of the human user. In this case, the goal is for the robot to assist the human with movements they would not otherwise be able to achieve. Orthotics can improve the quality of life of people who do not have full use of their hands. Human-machine interfaces for assistive hand orthotics that use EMG signals from the affected forearm as input are intuitive, and repeated use can strengthen the muscles of the user's affected arm. In this dissertation, we seek to create an EMG-based control for an orthotic device used by people who have had a stroke. We would like our control to enable functional motions when used in conjunction with an orthosis and to be robust to changes in the input signal.
We propose a control for a wearable hand orthosis which uses an easy-to-don, commodity forearm EMG band. We develop a supervised algorithm to detect a user's intent to open and close their hand, and pair this algorithm with a training protocol which makes our intent detection robust to changes in the input signal. We show that this algorithm, when used in conjunction with an orthosis over several weeks, can improve distal function in users. Additionally, we propose two semi-supervised intent detection algorithms designed to keep our control robust to changes in the input data while reducing the length and frequency of our training protocol.
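A toy sketch of supervised EMG intent detection of this kind, assuming an 8-channel band, mean-absolute-value features, and a nearest-centroid classifier; the dissertation's actual algorithm, features, and training protocol may differ:

```python
import numpy as np

rng = np.random.default_rng(2)

def mav(window):
    """Mean absolute value per EMG channel, a standard EMG feature."""
    return np.abs(window).mean(axis=0)

# Hypothetical labeled training data from an 8-channel forearm EMG band:
# 200-sample windows for each intent class, with synthetic signal levels.
labels = ['open', 'close', 'relax']
train = {lab: [rng.normal(loc=i, size=(200, 8)) for _ in range(20)]
         for i, lab in enumerate(labels)}

# Supervised intent detection as nearest-centroid on MAV features.
centroids = {lab: np.mean([mav(w) for w in ws], axis=0)
             for lab, ws in train.items()}

def detect_intent(window):
    """Classify a new EMG window by its closest class centroid."""
    feats = mav(window)
    return min(centroids, key=lambda lab: np.linalg.norm(feats - centroids[lab]))

print(detect_intent(rng.normal(loc=1.0, size=(200, 8))))
```

Retraining here amounts to recomputing the centroids, which is one simple way a protocol could adapt the detector to day-to-day changes in the input signal.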
A Study on Robust and Accurate Hand Motion Tracking for Human-Machine Interaction
Dissertation (Ph.D.) -- Seoul National University Graduate School: Department of Mechanical and Aerospace Engineering, College of Engineering, 2021.8. Dongjun Lee. A hand-based interface is promising for realizing intuitive, natural, and accurate human-machine interaction (HMI), as the human hand is the main source of dexterity in our daily activities.
For this, the thesis begins with a human perception study on the detection threshold of visuo-proprioceptive conflict (i.e., the allowable tracking error) with and without cutaneous haptic feedback, and suggests a tracking-error specification for realistic and fluid hand-based HMI. The thesis then proposes a novel wearable hand tracking module which, to be compatible with cutaneous haptic devices that emit magnetic noise, opportunistically employs heterogeneous sensors (an IMU/compass module and a soft sensor) reflecting the anatomical properties of the human hand, making it suitable for a specific application (i.e., finger-based interaction with fingertip haptic devices).
This hand tracking module, however, loses tracking accuracy when interacting with, or operating near, electrical machines or ferromagnetic materials. The thesis therefore presents its main contribution: a novel visual-inertial skeleton tracking (VIST) framework that provides accurate and robust hand (and finger) motion tracking even in many challenging real-world scenarios and environments for which state-of-the-art technologies are known to fail due to their respective fundamental limitations (e.g., severe occlusion for tracking purely with vision sensors; electromagnetic interference for tracking purely with IMUs (inertial measurement units) and compasses; and mechanical contact for tracking purely with soft sensors).
The proposed VIST framework comprises a sensor glove with multiple IMUs and passive visual markers, a head-mounted stereo camera, and a tightly-coupled filtering-based visual-inertial fusion algorithm that estimates hand/finger motion and auto-calibrates hand/glove-related kinematic parameters simultaneously while taking the hand's anatomical constraints into account.
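The predict/correct structure of such a tightly-coupled filter can be reduced to a one-joint sketch. The state dimension, noise levels, and scalar marker measurement below are illustrative assumptions, not the actual VIST filter, which estimates full hand pose and calibration parameters:

```python
import numpy as np

dt, q_gyro, r_marker = 0.01, 1e-4, 1e-2   # assumed rates and noise levels

x = np.array([0.0])          # state: one finger-joint angle (rad)
P = np.array([[1.0]])        # state covariance

def predict(x, P, gyro_rate):
    """Propagate the joint angle with the IMU angular rate."""
    x = x + dt * gyro_rate
    P = P + q_gyro
    return x, P

def correct(x, P, marker_angle):
    """Fuse an angle observation recovered from the visual markers (EKF update)."""
    H = np.array([[1.0]])
    S = H @ P @ H.T + r_marker
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([marker_angle]) - H @ x)
    P = (np.eye(1) - K @ H) @ P
    return x, P

# Simulate a joint swinging at 1 rad/s, observed at 100 Hz.
true_angle = 0.0
for _ in range(100):
    true_angle += dt * 1.0
    x, P = predict(x, P, 1.0)
    x, P = correct(x, P, true_angle)
print(round(float(x[0]), 3))  # tracks the true angle (~1.0 rad)
```

In the full framework the correction step also ingests stereo marker positions and anatomical constraints, and the state additionally carries the calibration parameters being estimated online.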
The VIST framework exhibits good tracking accuracy and robustness, affordable material cost, lightweight hardware and software, and enough ruggedness and durability to permit washing.
Quantitative and qualitative experiments are also performed to validate the advantages and properties of our VIST framework, clearly demonstrating its potential for real-world applications. Hand-motion-based interfaces have attracted much attention in human-machine interaction for the intuitiveness, immersion, and dexterity they can provide, and one of the most essential technologies for this is robust and accurate hand motion tracking.
To this end, this thesis first identifies, from the perspective of human perception, the detection threshold of hand tracking error. Since this threshold can serve as an important design criterion when developing new hand tracking technologies, it is quantified through subject experiments, including how it changes when fingertip cutaneous haptic devices are worn.
Building on this, and because providing haptic feedback has been widely studied across human-machine interaction, a hand tracking module that can be used together with fingertip cutaneous haptic devices is developed first.
These fingertip haptic devices generate magnetic disturbances that corrupt the magnetometers commonly used in wearable tracking; this is resolved by appropriately exploiting the anatomical properties of the human hand together with a suitable combination of inertial, magnetic, and soft sensors.
Extending this, the thesis proposes a new hand tracking technology that remains usable not only while wearing haptic devices but with any worn equipment and during interaction with any environment or object.
Existing hand tracking technologies can only be used in restricted environments because of occlusion (vision-based methods), magnetic disturbance (inertial/magnetometer-based methods), and contact with objects (soft-sensor-based methods).
To overcome this, inertial and vision sensors, which have complementary characteristics, are fused without the problematic magnetometer, and multiple indistinguishable markers are used to track the many degrees of freedom of hand motion within a small space.
For the marker correspondence search, a tightly-coupled sensor fusion scheme is proposed instead of the conventional loosely-coupled approach; this enables accurate hand tracking without magnetometers and also automatically and accurately calibrates the sensor-attachment errors and user hand geometry that previously degraded the accuracy and convenience of wearable sensors.
The excellent performance and robustness of the proposed visual-inertial sensor fusion technology (Visual-Inertial Skeleton Tracking, VIST) are verified through various quantitative and qualitative experiments; by enabling hand tracking in diverse everyday environments where existing systems fail, VIST shows its potential across many human-machine interaction applications.
1 Introduction
1.1. Motivation
1.2. Related Work
1.3. Contribution
2 Detection Threshold of Hand Tracking Error
2.1. Motivation
2.2. Experimental Environment
2.2.1. Hardware Setup
2.2.2. Virtual Environment Rendering
2.2.3. HMD Calibration
2.3. Identifying the Detection Threshold of Tracking Error
2.3.1. Experimental Setup
2.3.2. Procedure
2.3.3. Experimental Result
2.4. Enlarging the Detection Threshold of Tracking Error by Haptic Feedback
2.4.1. Experimental Setup
2.4.2. Procedure
2.4.3. Experimental Result
2.5. Discussion
3 Wearable Finger Tracking Module for Haptic Interaction
3.1. Motivation
3.2. Development of Finger Tracking Module
3.2.1. Hardware Setup
3.2.2. Tracking Algorithm
3.2.3. Calibration Method
3.3. Evaluation for VR Haptic Interaction Task
3.3.1. Quantitative Evaluation of FTM
3.3.2. Implementation of Wearable Cutaneous Haptic Interface
3.3.3. Usability Evaluation for VR Peg-in-Hole Task
3.4. Discussion
4 Visual-Inertial Skeleton Tracking for Human Hand
4.1. Motivation
4.2. Hardware Setup and Hand Models
4.2.1. Human Hand Model
4.2.2. Wearable Sensor Glove
4.2.3. Stereo Camera
4.3. Visual Information Extraction
4.3.1. Marker Detection in Raw Images
4.3.2. Cost Function for Point Matching
4.3.3. Left-Right Stereo Matching
4.4. IMU-Aided Correspondence Search
4.5. Filtering-based Visual-Inertial Sensor Fusion
4.5.1. EKF States for Hand Tracking and Auto-Calibration
4.5.2. Prediction with IMU Information
4.5.3. Correction with Visual Information
4.5.4. Correction with Anatomical Constraints
4.6. Quantitative Evaluation for Free Hand Motion
4.6.1. Experimental Setup
4.6.2. Procedure
4.6.3. Experimental Result
4.7. Quantitative and Comparative Evaluation for Challenging Hand Motion
4.7.1. Experimental Setup
4.7.2. Procedure
4.7.3. Experimental Result
4.7.4. Performance Comparison with Existing Methods for Challenging Hand Motion
4.8. Qualitative Evaluation for Real-World Scenarios
4.8.1. Visually Complex Background
4.8.2. Object Interaction
4.8.3. Wearing Fingertip Cutaneous Haptic Devices
4.8.4. Outdoor Environment
4.9. Discussion
5 Conclusion
References
Abstract (in Korean)
Acknowledgment
The Supernumerary Robotic 3rd Thumb for Skilled Music Tasks
Wearable robotics brings the opportunity to augment human capability and performance, be it through prosthetics, exoskeletons, or supernumerary robotic limbs. The latter concept allows enhancing human performance and assisting users in daily tasks. An important research question, however, is whether the use of such devices can lead to their eventual cognitive embodiment, allowing the user to adapt to them and use them seamlessly as any other limb of their own. This paper describes the creation of a platform to investigate this. Our supernumerary robotic 3rd thumb was created to augment piano playing, allowing a pianist to press piano keys beyond their natural hand span, thus leading to functional augmentation of their skills and the technical feasibility of playing with 11 fingers. The robotic finger employs sensors, motors, and a human interfacing algorithm to control its movement in real time. A proof-of-concept validation experiment has been conducted to show the effectiveness of the robotic finger in playing musical pieces on a grand piano, showing that naive users were able to use it for 11-finger play within a few hours.
Exploiting Intrinsic Kinematic Null Space for Supernumerary Robotic Limbs Control
Supernumerary robotic limbs (SRLs) have gained increasing interest in recent years for their applicability as healthcare and assistive technologies. These devices can either support or augment human sensorimotor capabilities, allowing users to complete tasks that are more complex than those feasible with their natural limbs. However, for successful coordination between natural and artificial limbs, intuitiveness of interaction and perception of autonomy are key enabling features, especially for people suffering from motor disorders and impairments. The development of suitable human-robot interfaces is thus fundamental to foster the adoption of SRLs. With this work, we describe how to control an extra degree of freedom by taking advantage of what we define as the Intrinsic Kinematic Null Space, i.e., the redundancy of the human kinematic chain involved in the ongoing task. The obtained results demonstrate that the proposed control strategy is effective for performing complex tasks with a supernumerary robotic finger, and that practice improves users' control ability.
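The null-space idea can be sketched with a generic redundant chain; the 7-DoF arm, 6-D task, and random Jacobian below are illustrative assumptions rather than the paper's actual human kinematic model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Task Jacobian of a redundant chain, e.g. a 7-DoF human arm performing
# a 6-DoF task; the values are illustrative.
J = rng.normal(size=(6, 7))

# Null-space projector: for any v, J @ (N @ v) = 0, so motion in this
# subspace (self-motion) leaves the ongoing task undisturbed.
N = np.eye(7) - np.linalg.pinv(J) @ J

def extra_dof_command(joint_velocities):
    """Scalar command for the supernumerary finger taken from self-motion."""
    return float(np.linalg.norm(N @ joint_velocities))

v = rng.normal(size=7)               # measured joint velocities
v_null = N @ v                       # component usable as a control channel
print(bool(np.allclose(J @ v_null, 0)))  # True: task-space motion unchanged
```

This is why the interface feels non-disruptive: the extra degree of freedom is driven only by joint motion that, by construction, produces zero task-space velocity.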
Collaborative robot control with hand gestures
Double-degree Masters with the Université Libre de Tunis. This thesis focuses on hand gesture recognition, proposing an architecture to control a collaborative robot in real time based on hand detection, tracking, and gesture recognition for interaction with an application via hand gestures. The first stage of our system detects and tracks a bare hand against a cluttered background using skin detection and contour comparison. The second stage recognizes hand gestures using a machine learning algorithm. Finally, an interface has been developed to control the robot.
Our hand gesture recognition system consists of two parts. In the first part, for every frame captured from a camera, we extract the keypoints for every training image using a machine learning algorithm and assemble the keypoints from every image into a keypoint map. This map is treated as an input to our processing algorithm, which uses several methods to recognize the fingers of each hand.
In the second part, we use a 3D camera with infrared capabilities to obtain a 3D model of the hand and implement it in our system; we then track and recognize the fingers of each hand, which makes it possible to count the extended fingers and to distinguish each finger pattern.
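The skin-detection stage above can be sketched in a few lines. The RGB thresholds below are a common rule of thumb, not necessarily the values used in this work, and contour comparison is reduced to a simple bounding box:

```python
import numpy as np

def skin_mask(img):
    """Boolean skin mask under a conventional RGB skin-color rule
    (R > 95, G > 40, B > 20, R > G, R > B); thresholds are assumptions."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

def hand_bounding_box(mask):
    """Bounding box of skin pixels, used here as the tracked hand region."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return (int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max()))

# Synthetic frame: a skin-colored patch on a dark, cluttered background.
frame = np.random.default_rng(4).integers(0, 60, size=(120, 160, 3))
frame[30:70, 50:90] = (200, 140, 110)   # skin-like pixels
print(hand_bounding_box(skin_mask(frame)))  # (30, 50, 69, 89)
```

In the actual pipeline the masked region would feed the contour-comparison and keypoint stages instead of stopping at a bounding box.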
An interface to control the robot has been made that utilizes the previous steps, giving a real-time process and a dynamic 3D representation. This dissertation addresses the recognition of gestures performed with the human hand, proposing an architecture to interact with a collaborative robot based on computer vision, tracking, and gesture recognition. The first stage of the developed system detects and tracks the presence of a hand against a cluttered background using skin detection and contour comparison. The second stage recognizes hand gestures using a machine learning algorithm. Finally, an interface was developed to interact with the robot. The hand gesture recognition system is divided into two parts. In the first part, for each frame captured from a camera, keypoints are extracted from every training image using a machine learning algorithm and assembled into a keypoint map. This map is treated as input to the processing algorithm, which uses several methods to recognize the fingers of each hand. In the second part, a 3D camera with infrared capabilities was used to obtain a 3D model of the hand for the developed system; the fingers of each hand were then tracked and recognized, making it possible to count the extended fingers and to distinguish each finger pattern. An interface was built to interact with the manipulator robot using the previous steps, providing a real-time process and a dynamic 3D representation.
Principles of human movement augmentation and the challenges in making it a reality
Augmenting the body with artificial limbs controlled concurrently with one's natural limbs has long appeared in science fiction, but recent technological and neuroscientific advances have begun to make this possible. By allowing individuals to achieve otherwise impossible actions, movement augmentation could revolutionize medical and industrial applications and profoundly change the way humans interact with the environment. Here, we construct a movement augmentation taxonomy through what is augmented and how it is achieved. With this framework, we analyze augmentation that extends the number of degrees of freedom, discuss critical features of effective augmentation, such as physiological control signals, sensory feedback, and learning, as well as application scenarios, and propose a vision for the field.