Robotics Platforms Incorporating Manipulators Having Common Joint Designs
Manipulators in accordance with various embodiments of the invention can be utilized to implement statically stable robots capable of both dexterous manipulation and versatile mobility. Manipulators in accordance with one embodiment of the invention include: an azimuth actuator; three elbow joints that each include two actuators that are offset to allow greater than 360-degree rotation of each joint; a first connecting structure that connects the azimuth actuator and a first of the three elbow joints; a second connecting structure that connects the first elbow joint and a second of the three elbow joints; a third connecting structure that connects the second elbow joint to a third of the three elbow joints; and an end-effector interface connected to the third of the three elbow joints.
Data-driven Tactile Sensing using Spatially Overlapping Signals
Providing robots with distributed, robust and accurate tactile feedback is a fundamental problem in robotics because of the large number of tasks that require physical interaction with objects. Tactile sensors can provide robots with information about the location of each point of contact with the manipulated object, an estimation of the contact forces applied (normal and shear) and even slip detection. Despite significant advances in touch and force transduction, tactile sensing is still far from ubiquitous in robotic manipulation. Existing methods for building touch sensors have proven difficult to integrate into robot fingers due to multiple challenges, including difficulty in covering multicurved surfaces, high wire count, or packaging constraints preventing their use in dexterous hands.
In this dissertation, we focus on the development of soft tactile systems that can be deployed over complex, three-dimensional surfaces with a low wire count and using easily accessible manufacturing methods. To this effect, we present a general methodology called spatially overlapping signals. The key idea behind our method is to embed multiple sensing terminals in a volume of soft material which can be deployed over arbitrary, non-developable surfaces. Unlike a traditional taxel, these sensing terminals are not capable of measuring strain on their own. Instead, we take measurements across pairs of sensing terminals. Applying strain in the receptive field of this terminal pair should measurably affect the signal associated with it. As we embed multiple sensing terminals in this soft material, a significant overlap of these receptive fields occurs across the whole active sensing area, providing us with a very rich dataset characterizing the contact event. The use of an all-pairs approach, where all possible combinations of sensing terminals pairs are used, maximizes the number of signals extracted while reducing the total number of wires for the overall sensor, which in turn facilitates its integration.
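As a back-of-the-envelope illustration (not code from the dissertation), the all-pairs idea means the number of signals grows quadratically with the number of terminals while the wire count grows only linearly; a minimal sketch, with purely illustrative terminal labels:

```python
from itertools import combinations

def all_pairs(terminals):
    """Enumerate every unordered pair of sensing terminals.

    Each pair yields one measurable signal, so n wires give
    n * (n - 1) / 2 spatially overlapping signals.
    """
    return list(combinations(terminals, 2))

# 8 wires (terminals) already yield 28 distinct pairwise signals.
pairs = all_pairs(range(8))
print(len(pairs))
```

Doubling the terminal count from 8 to 16 roughly quadruples the signal count (28 to 120), which is the integration advantage the all-pairs approach exploits.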
Building an analytical model for how this rich signal set relates to various contact events can be very challenging. Further, any such model would depend on knowing the exact locations of the terminals in the sensor, thus requiring very precise manufacturing. Instead, we build forward models of our sensors from data. We collect training data using a dataset of controlled indentations of known characteristics, directly learning the mapping between our signals and the variables characterizing a contact event. This approach allows for accessible, cheap manufacturing while enabling extensive coverage of curved surfaces. The concept of spatially overlapping signals can be realized using various transduction methods; we demonstrate sensors using piezoresistance, pressure transducers, and optics. With piezoresistivity, we measure resistance values across various electrodes embedded in a carbon-nanotube-infused elastomer to determine the location of touch. Using commercially available pressure transducers embedded in various configurations inside a soft volume of rubber, we show it is possible to localize contacts across a curved surface. Finally, using optics, we measure light transport between LEDs and photodiodes inside the clear elastomer that makes up our sensor. Our optical sensors are able to detect both the location and depth of an indentation very accurately on both planar and multicurved surfaces.
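A purely synthetic sketch of this data-driven forward model, assuming (for illustration only) a linear sensor response; the real mapping is learned from physical indentation data and need not be linear:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 28 pairwise signals per contact, each a noisy
# linear function of the contact parameters (x, y, depth).
n_signals, n_samples = 28, 500
true_W = rng.normal(size=(3, n_signals))            # unknown sensor response
contacts = rng.uniform(0, 1, size=(n_samples, 3))   # (x, y, depth) labels
signals = contacts @ true_W + 0.01 * rng.normal(size=(n_samples, n_signals))

# Forward model fit directly from data: signals -> contact parameters.
W_hat, *_ = np.linalg.lstsq(signals, contacts, rcond=None)

# Predict an unseen contact from its (noiseless) signal vector.
probe = rng.uniform(0, 1, size=(1, 3))
pred = (probe @ true_W) @ W_hat
```

No terminal positions or material models appear anywhere in the fit, which is the point of the approach: calibration tolerance is absorbed by the learned mapping.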
Our Distributed Interleaved Signals for Contact via Optics, or DISCO, Finger is the culmination of this methodology: a fully integrated, sensorized robot finger with a low wire count, designed for easy integration into dexterous manipulators. Our DISCO Finger can generally determine contact location with sub-millimeter accuracy, and contact force to within 10% (and often within 5%) of the true value, without the need for analytical models. While our data-driven method requires training data representative of the final operational conditions that the system will encounter, we show our finger can be robust to novel contact scenarios where the shape of the indenter has not been seen during training. Moreover, the forward model that predicts contact locations and applied normal force can be transferred to new fingers with minimal loss of performance, eliminating the need to collect training data for each individual finger. We believe that rich tactile information, in a highly functional form with limited blind spots and a simple integration path into complete systems, as demonstrated in this dissertation, will prove to be an important enabler for data-driven complex robotic motor skills, such as dexterous manipulation.
Whole-Hand Robotic Manipulation with Rolling, Sliding, and Caging
Traditional manipulation planning and modeling rely on strong assumptions about contact. Specifically, it is common to assume that contacts are fixed and do not slide. This assumption ensures that objects are stably grasped during every step of the manipulation, to avoid ejection. However, it also limits achievable manipulation to the feasible motion of the closed-loop kinematic chains formed by the object and fingers. To improve manipulation capability, it has been shown that relaxing contact constraints and allowing sliding can enhance dexterity. But in order to safely manipulate with shifting contacts, other safeguards must be used to protect against ejection. "Caging manipulation," in which the object is geometrically trapped by the fingers, can be employed to guarantee that an object never leaves the hand, regardless of constantly changing contact conditions. Mechanical compliance and underactuated joint coupling, or carefully chosen design parameters, can be used to passively create a caging grasp, protecting against accidental ejection, while simultaneously manipulating with all parts of the hand. And with passive ejection avoidance, hand control schemes can be made very simple while still accomplishing manipulation. In place of complex control, better design can be used to improve manipulation capability, by making smart choices about parameters such as phalanx length, joint stiffness, joint coupling schemes, finger frictional properties, and actuator mode of operation. I will present an approach for modeling fully actuated and underactuated whole-hand manipulation with shifting contacts, show results demonstrating the relationship between design parameters and manipulation metrics, and show how this can produce highly dexterous manipulators.
Design of a cybernetic hand for perception and action
Strong motivation for developing new prosthetic hand devices is provided by the fact that low functionality and controllability, in addition to poor cosmetic appearance, are the most important reasons why amputees do not regularly use their prosthetic hands. This paper presents the design of the CyberHand, a cybernetic anthropomorphic hand intended to provide amputees with functional hand replacement. Its design was bio-inspired in terms of its modular architecture, its physical appearance, kinematics, sensorization, and actuation, and its multilevel control system. Its underactuated mechanisms allow separate control of each digit as well as thumb-finger opposition and, accordingly, can generate a multitude of grasps. Its sensory system was designed to provide proprioceptive information as well as to emulate fundamental functional properties of human tactile mechanoreceptors of specific importance for grasp-and-hold tasks. The CyberHand control system presumes just a few efferent and afferent channels and was divided into two main layers: a high-level control that interprets the user's intention (grasp selection and required force level) and can provide pertinent sensory feedback, and a low-level control responsible for actuating specific grasps and applying the desired total force by taking advantage of the intelligent mechanics. The grasps made available by the high-level controller include those fundamental for activities of daily living: cylindrical, spherical, tridigital (tripod), and lateral grasps. The modular and flexible design of the CyberHand makes it suitable for incremental development of sensorization, interfacing, and control strategies and, as such, it will be a useful tool not only for clinical research but also for addressing neuroscientific hypotheses regarding sensorimotor control.
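The two-layer control split described above might be sketched as follows; the grasp names come from the abstract, while the gains, force units, and function names are invented for illustration:

```python
# Hypothetical sketch of a two-layer prosthetic hand controller:
# a high-level layer maps decoded user intent to a grasp type and a
# total force target; a low-level layer regulates toward that target.

GRASPS = {"cylindrical", "spherical", "tridigital", "lateral"}

def high_level(intent):
    """Map decoded user intent to (grasp type, total force target in N)."""
    grasp, force = intent["grasp"], intent["force"]
    if grasp not in GRASPS:
        raise ValueError(f"unsupported grasp: {grasp}")
    return grasp, force

def low_level(force_target, force_sensed, gain=0.5):
    """One step of a simple proportional force regulator."""
    return gain * (force_target - force_sensed)

grasp, target = high_level({"grasp": "tridigital", "force": 4.0})
sensed = 0.0
for _ in range(50):               # toy closed loop: sensed force converges
    sensed += low_level(target, sensed)
```

In the real system, the "intelligent mechanics" (underactuation) absorb much of the shaping work that explicit control would otherwise do, which is why the low-level layer can stay this simple in spirit.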
Tactile Perception And Visuotactile Integration For Robotic Exploration
As the close perceptual sibling of vision, the sense of touch has historically received less attention than it deserves in both human psychology and robotics. In robotics, this may be attributed to at least two reasons. First, it suffers from the vicious cycle of immature sensor technology, which keeps industry demand low, which in turn leaves even less incentive to make existing sensors in research labs easy to manufacture and marketable. Second, the situation stems from a fear of making contact with the environment, avoided in every way so that visually perceived states do not change before a carefully estimated and ballistically executed physical interaction. Fortunately, the latter viewpoint is starting to change. Work in interactive perception and contact-rich manipulation is on the rise. Good reasons are steering the manipulation and locomotion communities' attention towards deliberate physical interaction with the environment prior to, during, and after a task.
We approach the problem of perception prior to manipulation, using the sense of touch, for the purpose of understanding the surroundings of an autonomous robot. The overwhelming majority of work in perception for manipulation is based on vision. While vision is a fast and global modality, it is insufficient as the sole modality, especially in environments where the ambient light or the objects therein do not lend themselves to vision, such as darkness, smoky or dusty rooms in search and rescue, underwater settings, transparent and reflective objects, and retrieving items inside a bag. Even in normal lighting conditions, during a manipulation task, the target object and fingers are usually occluded from view by the gripper. Moreover, vision-based grasp planners, typically trained in simulation, often make errors that cannot be foreseen until contact. As a step towards addressing these problems, we first present a global shape-based feature descriptor for object recognition using non-prehensile tactile probing alone. Then, we investigate making the tactile modality, local and slow by nature, more efficient for the task by predicting the most cost-effective moves using active exploration. To combine the local and physical advantages of touch with the fast and global advantages of vision, we propose and evaluate a learning-based method for visuotactile integration for grasping.
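A toy sketch of the kind of visuotactile integration described above, with invented feature sizes and a stand-in linear scoring head rather than the learned method from the dissertation:

```python
import numpy as np

# Hypothetical fusion: concatenate a global visual feature with a local
# tactile feature, then score grasp stability with a (pretrained) linear
# head. Feature dimensions and weights are invented for illustration.
def fuse(visual_feat, tactile_feat):
    return np.concatenate([visual_feat, tactile_feat])

def grasp_score(fused, w, b=0.0):
    """Sigmoid of a linear score: a probability-like stability estimate."""
    return 1.0 / (1.0 + np.exp(-(fused @ w + b)))

rng = np.random.default_rng(1)
v, t = rng.normal(size=4), rng.normal(size=3)   # vision + touch features
w = rng.normal(size=7)                          # stand-in learned weights
s = grasp_score(fuse(v, t), w)
```

The design point is that the tactile feature remains informative exactly when the visual one degrades (occlusion, darkness, transparency), so even a simple late-fusion scorer can outperform either modality alone.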
Robust and Accurate Hand Motion Tracking for Human-Machine Interaction
Ph.D. dissertation, Seoul National University Graduate School, Department of Mechanical and Aerospace Engineering, August 2021. Advisor: Dongjun Lee. A hand-based interface is promising for realizing intuitive, natural and accurate human-machine interaction (HMI), as the human hand is the main source of dexterity in our daily activities.
For this, the thesis begins with a human perception study on the detection threshold of visuo-proprioceptive conflict (i.e., allowable tracking error) with or without cutaneous haptic feedback, and suggests a tracking error specification for realistic and fluid hand-based HMI. The thesis then proceeds to propose a novel wearable hand tracking module which, to be compatible with cutaneous haptic devices that emit magnetic noise, opportunistically employs heterogeneous sensors (an IMU/compass module and a soft sensor) reflecting the anatomical properties of the human hand, making it suitable for a specific application (i.e., finger-based interaction with fingertip haptic devices).
This hand tracking module, however, loses tracking when interacting with, or operating near, electrical machines or ferromagnetic materials. To address this, the thesis presents its main contribution: a novel visual-inertial skeleton tracking (VIST) framework that provides accurate and robust hand (and finger) motion tracking even in many challenging real-world scenarios and environments for which state-of-the-art technologies are known to fail due to their respective fundamental limitations (e.g., severe occlusions for tracking purely with vision sensors; electromagnetic interference for tracking purely with IMUs (inertial measurement units) and compasses; and mechanical contacts for tracking purely with soft sensors).
The proposed VIST framework comprises a sensor glove with multiple IMUs and passive visual markers as well as a head-mounted stereo camera; and a tightly-coupled filtering-based visual-inertial fusion algorithm to estimate the hand/finger motion and auto-calibrate hand/glove-related kinematic parameters simultaneously while taking into account the hand anatomical constraints.
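A drastically simplified, one-degree-of-freedom stand-in for the filtering idea above (IMU-driven prediction corrected by visual marker observations); all noise parameters here are invented, and the real VIST filter additionally estimates calibration parameters and enforces anatomical constraints:

```python
# Minimal scalar Kalman filter: predict a joint angle by integrating a
# rate-gyro reading, then correct it with a visually observed angle.
def predict(x, P, gyro_rate, dt, q=1e-3):
    x = x + gyro_rate * dt        # integrate angular rate (IMU prediction)
    P = P + q                     # process noise grows the uncertainty
    return x, P

def correct(x, P, z, r=1e-2):
    K = P / (P + r)               # Kalman gain from predicted vs. meas. noise
    return x + K * (z - x), (1 - K) * P

x, P, true_angle = 0.0, 1.0, 0.5  # start far from the true joint angle
for _ in range(100):
    x, P = predict(x, P, gyro_rate=0.0, dt=0.01)
    x, P = correct(x, P, z=true_angle)
```

The tightly-coupled aspect of the actual framework means raw marker observations (not a pre-computed pose) enter the correction step, which is what lets the filter also disambiguate indistinguishable markers and auto-calibrate glove kinematics.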
The VIST framework exhibits good tracking accuracy and robustness, affordable material cost, lightweight hardware and software, and ruggedness/durability (even permitting washing).
Quantitative and qualitative experiments are also performed to validate the advantages and properties of our VIST framework, thereby clearly demonstrating its potential for real-world applications.

1 Introduction
1.1. Motivation
1.2. Related Work
1.3. Contribution
2 Detection Threshold of Hand Tracking Error
2.1. Motivation
2.2. Experimental Environment
2.2.1. Hardware Setup
2.2.2. Virtual Environment Rendering
2.2.3. HMD Calibration
2.3. Identifying the Detection Threshold of Tracking Error
2.3.1. Experimental Setup
2.3.2. Procedure
2.3.3. Experimental Result
2.4. Enlarging the Detection Threshold of Tracking Error by Haptic Feedback
2.4.1. Experimental Setup
2.4.2. Procedure
2.4.3. Experimental Result
2.5. Discussion
3 Wearable Finger Tracking Module for Haptic Interaction
3.1. Motivation
3.2. Development of Finger Tracking Module
3.2.1. Hardware Setup
3.2.2. Tracking Algorithm
3.2.3. Calibration Method
3.3. Evaluation for VR Haptic Interaction Task
3.3.1. Quantitative Evaluation of FTM
3.3.2. Implementation of Wearable Cutaneous Haptic Interface
3.3.3. Usability Evaluation for VR Peg-in-Hole Task
3.4. Discussion
4 Visual-Inertial Skeleton Tracking for Human Hand
4.1. Motivation
4.2. Hardware Setup and Hand Models
4.2.1. Human Hand Model
4.2.2. Wearable Sensor Glove
4.2.3. Stereo Camera
4.3. Visual Information Extraction
4.3.1. Marker Detection in Raw Images
4.3.2. Cost Function for Point Matching
4.3.3. Left-Right Stereo Matching
4.4. IMU-Aided Correspondence Search
4.5. Filtering-based Visual-Inertial Sensor Fusion
4.5.1. EKF States for Hand Tracking and Auto-Calibration
4.5.2. Prediction with IMU Information
4.5.3. Correction with Visual Information
4.5.4. Correction with Anatomical Constraints
4.6. Quantitative Evaluation for Free Hand Motion
4.6.1. Experimental Setup
4.6.2. Procedure
4.6.3. Experimental Result
4.7. Quantitative and Comparative Evaluation for Challenging Hand Motion
4.7.1. Experimental Setup
4.7.2. Procedure
4.7.3. Experimental Result
4.7.4. Performance Comparison with Existing Methods for Challenging Hand Motion
4.8. Qualitative Evaluation for Real-World Scenarios
4.8.1. Visually Complex Background
4.8.2. Object Interaction
4.8.3. Wearing Fingertip Cutaneous Haptic Devices
4.8.4. Outdoor Environment
4.9. Discussion
5 Conclusion
References
Abstract (in Korean)
Acknowledgment
Grasping and Assembling with Modular Robots
A wide variety of problems, from manufacturing to disaster response and space exploration, can benefit from robotic systems that can firmly grasp objects or assemble various structures, particularly in difficult, dangerous environments. In this thesis, we study two such problems, robotic grasping and assembly, with a modular robotic approach that addresses both with versatility and robustness.
First, this thesis develops a theoretical framework for grasping objects with customized effectors that have curved contact surfaces, with applications to modular robots. We present a collection of grasps and cages that can effectively restrain the mobility of a wide range of objects including polyhedra. Each of the grasps or cages is formed by at most three effectors. A stable grasp is obtained by simple motion planning and control. Based on the theory, we create a robotic system composed of a modular manipulator equipped with customized end-effectors and a software suite for planning and control of the manipulator.
Second, this thesis presents efficient assembly planning algorithms for constructing planar target structures collectively with a collection of homogeneous mobile modular robots. The algorithms are provably correct and address arbitrary target structures that may include internal holes. The resultant assembly plan supports parallel assembly and guarantees easy accessibility in the sense that a robot does not have to pass through a narrow gap while approaching its target position. Finally, we extend the algorithms to address various symmetric patterns formed by a collection of congruent rectangles on the plane.
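A toy sketch of connectivity-respecting assembly ordering; this breadth-first ordering only guarantees that each module attaches adjacent to the structure built so far, whereas the thesis algorithms additionally guarantee accessibility (no narrow gaps on the approach) and handle internal holes:

```python
from collections import deque

def assembly_order(cells, seed):
    """Order the cells of a planar target so that every module after the
    first is placed adjacent to an already-placed module (BFS from seed)."""
    cells = set(cells)
    order, seen, queue = [], {seed}, deque([seed])
    while queue:
        x, y = queue.popleft()
        order.append((x, y))
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in cells and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return order

# An L-shaped target structure with its seed module at the corner.
target = [(0, 0), (1, 0), (2, 0), (0, 1), (0, 2)]
plan = assembly_order(target, seed=(0, 0))
```

Because BFS discovers each cell from an already-placed neighbor, the plan never strands a module; the harder part, addressed in the thesis, is choosing among such orders so that approach paths through free space always exist and assembly can proceed in parallel.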
The basic ideas in this thesis have broad applications to manufacturing (restraint), humanitarian missions (forming airfields on the high seas), and service robotics (grasping and manipulation).
A Unified Visual-Haptic Fingertip Sensor For Advanced Robot Dexterity
The problem of robotic grasping and manipulation requires a system-level perspective aimed at solving the interlinked sub-problems simultaneously. These sub-problems consist of designing an appropriate robot hand, sensing technology, control, and planning strategy that can increase the dexterity of a robot hand in complex environments. Existing approaches lack the proper use and integration of tactile feedback, which could enable robot hands with far superior capabilities than those found today. This thesis addresses this challenge from three aspects: hardware design, system integration, and algorithm development. On the hardware side, it traces the thorough development of a multi- and cross-modal tactile sensor that can measure proximity, contact, and force (PCF). Three unique features of the PCF sensor are (i) the ability to measure visual as well as tactile object features, (ii) its low manufacturing cost, and (iii) that it can be easily integrated into different types of robot hands. This is achieved by embedding infrared proximity-sensing integrated chips in a soft elastomer to obtain a multitude of signals. On the system integration side, the thesis demonstrates the individual importance of the hand design and of the visual and tactile sensing modalities in robotic manipulation tasks through careful real-world robotic experiments. On the algorithmic side, it presents the implementation of several algorithms spanning signal processing, computer vision, control, probability theory, and machine learning for experimental evaluation.
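A hypothetical sketch of how a single proximity-contact-force (PCF) channel might be interpreted; the thresholds and gain below are invented, and the actual sensor fuses many such infrared channels with learned models rather than hand-set rules:

```python
# One IR channel rises as an object approaches the fingertip, then rises
# further as the soft elastomer compresses under load. Calibration values
# here are made up for illustration.
NOISE_FLOOR = 50         # raw counts below this: nothing nearby
CONTACT_RAW = 600        # raw value at first contact (calibration point)
FORCE_GAIN = 0.01        # newtons per raw count beyond contact (assumed linear)

def interpret(raw):
    """Map one raw IR reading to a (state, force estimate in N) pair."""
    if raw < NOISE_FLOOR:
        return ("none", 0.0)
    if raw < CONTACT_RAW:
        return ("proximity", 0.0)
    return ("contact", FORCE_GAIN * (raw - CONTACT_RAW))

readings = [interpret(r) for r in (30, 300, 850)]
```

The appeal of the cross-modal design is visible even in this toy: the same channel yields a pre-touch cue (proximity) for approach control and a post-touch cue (force) for grasp control, with no extra wiring.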
- …