635 research outputs found

    Vision-based self-calibration and control of parallel kinematic mechanisms without proprioceptive sensing

    Get PDF
    This work is a synthesis of our experience with parallel kinematic machine control, which aims at changing the standard conceptual approach to this problem. Indeed, since the task space, the state space, and the measurement space can coincide in this class of mechanism, we came to redefine the complete modeling, identification, and control methodology. Thus, it is shown in this paper that, generically and with the help of sensor-based control, this methodology does not require any joint measurement, opening a path to simplified mechanical design and reducing the number of kinematic parameters to identify. This novel approach was validated on the reference parallel kinematic mechanism (the Gough-Stewart platform) with vision as the exteroceptive sensor.

    Image-based Visual Servoing of a Gough-Stewart Parallel Manipulator using Leg Observations

    Get PDF
    In this paper, a tight coupling between computer vision and parallel robotics is exhibited through projective line geometry. Indeed, contrary to the usual methodology, where the robot is modeled independently from the control law that will be implemented, we take into account from the early modeling stage that vision will be used for control. Hence, kinematic modeling and projective geometry are fused into a control-devoted projective kinematic model. Thus, a novel vision-based kinematic modeling of a Gough-Stewart manipulator is proposed through the image projection of its cylindrical legs. Using this model, a visual servoing scheme is presented, where the image projections of the non-rigidly linked legs are servoed, rather than the end-effector pose.
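The servoing scheme drives image features toward their desired values. A minimal sketch of the classic image-based visual servoing law v = -λ L⁺ (s - s*) is given below; this is the generic IBVS form, not the paper's leg-specific interaction matrix, which is not reproduced here.

```python
import numpy as np

def ibvs_velocity(s, s_star, L, gain=0.5):
    """Classic image-based visual servoing law: v = -gain * L^+ (s - s*).

    s      : current visual features, shape (n,)
    s_star : desired visual features, shape (n,)
    L      : interaction (image Jacobian) matrix, shape (n, 6)
    Returns a 6-DoF camera/end-effector velocity command.
    """
    error = s - s_star
    # Moore-Penrose pseudo-inverse handles redundant or deficient feature sets.
    return -gain * np.linalg.pinv(L) @ error
```

With a well-conditioned interaction matrix, this law makes the feature error decay exponentially, which is the behavior visual servoing schemes such as this one rely on.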

    Body models in humans, animals, and robots: mechanisms and plasticity

    Full text link
    Humans and animals excel at combining information from multiple sensory modalities, controlling their complex bodies, and adapting to growth, failures, or tool use. These capabilities are also highly desirable in robots. They are displayed by machines to some extent - yet, as is so often the case, the artificial creatures are lagging behind. The key foundation is an internal representation of the body that the agent - human, animal, or robot - has developed. In the biological realm, evidence has been accumulated by diverse disciplines, giving rise to the concepts of body image, body schema, and others. In robotics, a model of the robot is an indispensable component that makes it possible to control the machine. In this article I compare the character of body representations in biology with their robotic counterparts and relate that to the differences in performance that we observe. I put forth a number of axes regarding the nature of such body models: fixed vs. plastic, amodal vs. modal, explicit vs. implicit, serial vs. parallel, modular vs. holistic, and centralized vs. distributed. An interesting trend emerges: on many of the axes, there is a sequence from robot body models, over body image and body schema, to the body representation in lower animals like the octopus. In some sense, robots have a lot in common with Ian Waterman - "the man who lost his body" - in that they rely on an explicit, veridical body model (body image taken to the extreme) and lack any implicit, multimodal representation (like the body schema) of their bodies. I then detail how robots can inform the biological sciences dealing with body representations and, finally, study which features of the "body in the brain" should be transferred to robots, giving rise to more adaptive, resilient, self-calibrating machines.

    One Camera in Hand for Kinematic Calibration of a Parallel Robot

    Full text link
    The main purpose of robot calibration is the correction of possible errors in the robot parameters. This paper presents a method for the kinematic calibration of a parallel robot equipped with one camera in hand. In order to preserve the mechanical configuration of the robot, the camera is used to acquire incremental positions of the end effector from a spherical object fixed in the world reference frame. Incremental positions of the end effector are related to incremental positions of the encoders of the robot's motors. A kinematic model of the robot is modified to take into account possible errors in the kinematic parameters. The solution of the model uses the incremental positions of the encoders and the end effector; the new parameters minimize the errors in the kinematic equations. Spherical properties and intrinsic camera parameters are used to model the sphere projection in order to improve spatial measurements. The robot system is designed to carry out tracking tasks, and the calibration of the system is finally validated by integrating the errors of the visual controller.
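The calibration step amounts to a least-squares fit of kinematic parameters to the camera and encoder data. The toy sketch below illustrates the idea under stated assumptions: a single planar link with an unknown length stands in for the robot, which is not the paper's actual mechanism or model.

```python
import numpy as np

# Illustrative stand-in for a modified kinematic model: a 1-link planar arm
# whose nominal link length is in error. The camera supplies end-effector
# positions; the encoders supply joint angles; least squares recovers the
# true length.
def forward(theta, length):
    """Planar forward kinematics of a one-link arm for joint angles theta."""
    return np.stack([length * np.cos(theta), length * np.sin(theta)], axis=1)

def calibrate_length(theta, measured_xy):
    """The model is linear in the unknown length, so one lstsq solve suffices."""
    A = np.stack([np.cos(theta), np.sin(theta)], axis=1).ravel()[:, None]
    return float(np.linalg.lstsq(A, measured_xy.ravel(), rcond=None)[0][0])
```

A real parallel-robot calibration is nonlinear in most parameters and would iterate (e.g., Gauss-Newton), but the structure - residuals between camera-measured and model-predicted positions, minimized over the parameters - is the same.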

    The Role of Stereopsis in the Control of Grasp Forces during Prehension

    Get PDF
    Background: Binocular viewing is associated with superior prehensile performance, which is particularly evident in the latter part of the reach as the hand approaches and makes contact with the target object. However, the visuomotor mechanisms through which binocular vision serves prehensile performance remain unclear. The present study was designed to investigate the role of stereopsis in the planning and control of grasping using outcome measures that reflect predictive control. It was hypothesized that binocular viewing would be associated with more efficient grasp execution because stereoacuity provides more accurate sensory input about the object's material properties for planning the grip forces needed to lift the target object. When binocular vision is reduced or unavailable, predictive control of grasping would be reduced, and subjects would have to rely on somatosensory feedback to execute the grasp successfully. Methods: 20 healthy participants (17-35 years, 11 male) with normal vision were recruited. Subjects performed a precision reach-to-grasp task which required them to reach, grasp, and transport a bead (~2 cm in diameter) to a specified location. Subjects were instructed to perform the task as fast as possible under the following viewing conditions, randomized in blocks: binocular, monocular, and two conditions with reduced stereoacuity (200 arcsec and 800 arcsec). Results: The removal of binocular viewing had a greater influence on the grasp phase than on the reach and transport phases. Specifically, under monocular viewing there was a 36% increase in post-contact time, a 29% decrease in grip force 50 ms after object grasp, and a 30% increase in grasp errors. In contrast, parameters of the reach and transport phases showed only a 3-8% reduction in performance.
Grasp performance was similarly disrupted during binocular viewing with reduced stereoacuity: a reduction in stereoacuity was associated with a proportional reduction in grasp performance. Notably, grip force at the time of object lift-off was comparable across all viewing conditions. Conclusion: The results demonstrate that binocular viewing contributes significantly more to grasping than to the reach and transport phases. In addition, the results suggest that stereopsis provides important sensory information that enables the central nervous system to engage in predictive control of grasp forces. When binocular disparity information is reduced or absent, subjects take a more cautious approach to the grasp and make more errors (i.e., collisions followed by readjustments). Overall, the findings indicate that stereopsis provides important sensory input for the predictive control of grasping, and a progressive reduction in stereopsis is associated with increased uncertainty, which results in greater reliance on somatosensory feedback control.

    Peripersonal Space and Margin of Safety around the Body: Learning Visuo-Tactile Associations in a Humanoid Robot with Artificial Skin

    Get PDF
    This paper investigates a biologically motivated model of peripersonal space through its implementation on a humanoid robot. Guided by the present understanding of the neurophysiology of the fronto-parietal system, we developed a computational model inspired by the receptive fields of polymodal neurons identified, for example, in brain areas F4 and VIP. The experiments on the iCub humanoid robot show that the peripersonal space representation i) can be learned efficiently and in real time via a simple interaction with the robot, ii) can lead to the generation of behaviors like avoidance and reaching, and iii) can contribute to the understanding of the biological principle of motor equivalence. More specifically, with respect to i), the present model contributes to hypothesizing a learning mechanism for peripersonal space. In relation to point ii), we show how a relatively simple controller can exploit the learned receptive fields to generate either avoidance or reaching of an incoming stimulus, and for iii), we show how the robot can select arbitrary body parts as the controlled end-point of an avoidance or reaching movement.
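One way to picture the learned visuo-tactile association is as per-taxel contact statistics over distance bins, yielding a contact-probability profile akin to a receptive field. The sketch below is an illustrative simplification only; the `TaxelRF` class, bin edges, and update rule are assumptions, not the iCub implementation.

```python
import numpy as np

# Toy model of one skin taxel learning a visuo-tactile association: count
# approaching stimuli per distance bin and how often contact followed.
# The resulting ratio approximates P(contact | distance), a crude stand-in
# for the receptive fields of polymodal neurons described in the paper.
class TaxelRF:
    def __init__(self, edges):
        self.edges = np.asarray(edges, dtype=float)  # distance bin edges (m)
        n = len(self.edges) - 1
        self.seen = np.zeros(n)   # stimuli observed per bin
        self.hits = np.zeros(n)   # stimuli that ended in contact

    def update(self, distance, contact):
        i = np.searchsorted(self.edges, distance) - 1
        if 0 <= i < len(self.seen):
            self.seen[i] += 1
            self.hits[i] += contact

    def p_contact(self, distance):
        i = np.searchsorted(self.edges, distance) - 1
        if 0 <= i < len(self.seen) and self.seen[i] > 0:
            return self.hits[i] / self.seen[i]
        return 0.0  # never observed at this distance
```

A controller could then trigger avoidance whenever `p_contact` at the predicted stimulus distance exceeds a threshold, which mirrors how the learned representation feeds the avoidance/reaching behaviors described above.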

    Preliminary variation on multiview geometry for vision-guided laser surgery.

    No full text
    This paper proposes to use multiview geometry to control an orientable laser beam for surgery. Two methods are proposed based on the analogy between a scanning laser beam and a camera: the first uses one camera and the laser scanner as a virtual camera to form a virtual stereoscopic system, while the second uses two cameras to form a virtual trifocal system. Using the associated epipolar or trifocal geometry, two control laws are derived without any matrix inversion or estimation of the 3D scene. It is shown that the more geometry is used, the simpler the control gets. These control laws show, as expected, exponential convergence in simulation validation.
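The epipolar variant can be pictured as driving a residual of the form x_laserᵀ F x_cam to zero, which steers the laser spot onto the epipolar line of the observed target without inverting any matrix. The sketch below is illustrative only: the fundamental matrix, point choices, and normalization are assumptions, not the paper's derivation.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix so that skew(t) @ x == np.cross(t, x)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_error(F, x_cam, x_virtual):
    """Distance-normalized epipolar residual x_virtual^T F x_cam.

    x_cam     : homogeneous target point seen by the real camera
    x_virtual : homogeneous laser direction in the virtual camera
    Driving this scalar to zero aligns the laser with the target's
    epipolar line -- no matrix inversion, no 3D reconstruction.
    """
    line = F @ x_cam  # epipolar line in the virtual (laser) view
    return float(x_virtual @ line) / np.hypot(line[0], line[1])
```

A servo loop would simply command the laser scanner angles proportionally to this residual, giving the exponential convergence reported in the abstract.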

    A Study on Robust and Accurate Hand Motion Tracking for Human-Machine Interaction

    Get PDF
    Doctoral dissertation -- Seoul National University Graduate School: Department of Mechanical and Aerospace Engineering, College of Engineering, August 2021. Advisor: Dongjun Lee. A hand-based interface is promising for realizing intuitive, natural, and accurate human-machine interaction (HMI), as the human hand is the main source of dexterity in our daily activities. The thesis begins with a human perception study on the detection threshold of visuo-proprioceptive conflict (i.e., the allowable tracking error) with and without cutaneous haptic feedback, and suggests a tracking-error specification for realistic and fluid hand-based HMI. The thesis then proposes a novel wearable hand tracking module which, to be compatible with cutaneous haptic devices that emit magnetic noise, opportunistically employs heterogeneous sensors (an IMU/compass module and a soft sensor) reflecting the anatomical properties of the human hand, making it suitable for a specific application (finger-based interaction with fingertip haptic devices). This hand tracking module, however, loses tracking when interacting with, or operating near, electrical machines or ferromagnetic materials. The thesis then presents its main contribution, a novel visual-inertial skeleton tracking (VIST) framework that provides accurate and robust hand (and finger) motion tracking even in many challenging real-world scenarios and environments for which state-of-the-art technologies are known to fail due to their respective fundamental limitations (e.g., severe occlusion for tracking purely with vision sensors; electromagnetic interference for tracking purely with IMUs (inertial measurement units) and compasses; and mechanical contact for tracking purely with soft sensors).
The proposed VIST framework comprises a sensor glove with multiple IMUs and passive visual markers, a head-mounted stereo camera, and a tightly-coupled filtering-based visual-inertial fusion algorithm that estimates the hand/finger motion and auto-calibrates hand/glove-related kinematic parameters simultaneously while taking into account the anatomical constraints of the hand. The VIST framework exhibits good tracking accuracy and robustness, affordable material cost, lightweight hardware and software, and ruggedness/durability that even permits washing. Quantitative and qualitative experiments validate the advantages and properties of the VIST framework, clearly demonstrating its potential for real-world applications.
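The tightly-coupled fusion can be pictured as a predict/correct loop: IMU data propagates the state, and visual marker positions correct it. The 1-D constant-velocity sketch below is a stand-in only; the thesis's EKF additionally estimates full finger kinematics and glove calibration parameters, and all noise values here are illustrative assumptions.

```python
import numpy as np

# Minimal 1-D stand-in for a filtering-based visual-inertial fusion:
# state x = [position, velocity], propagated with IMU acceleration and
# corrected with a visual position measurement.
def predict(x, P, accel, dt, q=1e-3):
    """IMU-driven prediction under a constant-acceleration motion model."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    u = np.array([0.5 * dt**2, dt]) * accel
    return F @ x + u, F @ P @ F.T + q * np.eye(2)

def correct(x, P, z_pos, r=1e-2):
    """Correction with a visual position measurement z_pos."""
    H = np.array([[1.0, 0.0]])       # we only observe position
    S = H @ P @ H.T + r              # innovation covariance
    K = P @ H.T / S                  # Kalman gain
    innov = z_pos - H @ x
    return x + (K * innov).ravel(), (np.eye(2) - K @ H) @ P
```

In the thesis's tightly-coupled setting, the raw marker observations (not a precomputed pose) enter the correction step, which is what allows the same filter to resolve marker correspondences and auto-calibrate the glove parameters.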

    An Overview about Emerging Technologies of Autonomous Driving

    Full text link
    Since DARPA started the Grand Challenges in 2004 and the Urban Challenge in 2007, autonomous driving has been the most active field of AI applications. This paper gives an overview of the technical aspects of autonomous driving technologies and open problems. We investigate the major fields of self-driving systems, such as perception, mapping and localization, prediction, planning and control, simulation, V2X, and safety. In particular, we elaborate on all these issues in the framework of a data closed loop, a popular platform for solving the long-tailed problems of autonomous driving.