67 research outputs found

    A calibration procedure for reconfigurable Gough-Stewart manipulators

    © 2020 This paper introduces a calibration procedure for identifying the geometrical parameters of a reconfigurable Gough-Stewart parallel manipulator. With the proposed method, the geometry of a general Gough-Stewart platform can be evaluated by measuring the distances between pairs of points on the base and mobile platforms, repeated over a given set of poses of the manipulator. The mathematical modelling of the problem is described, and an efficient numerical algorithm for its solution is proposed. Furthermore, an application of the proposed method is discussed with a numerical example, and the behaviour of the calibration procedure is analysed as a function of the number of acquisitions and the number of poses.
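    As a rough illustration of the distance-based formulation, the sketch below recovers base and platform anchor points from noisy point-pair distances over several known poses using a generic nonlinear least-squares solver. It is a synthetic-data illustration, not the paper's algorithm: the point counts, noise levels and solver choice are all assumptions.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(0)
        n_base, n_plat, n_poses = 6, 6, 10
        B_true = rng.uniform(-1.0, 1.0, (n_base, 3))    # base anchor points (base frame)
        P_true = rng.uniform(-0.5, 0.5, (n_plat, 3))    # platform anchors (mobile frame)

        def rand_pose():
            """Random small rotation (Rodrigues formula) plus a translation."""
            a = rng.normal(size=3); a /= np.linalg.norm(a)
            th = rng.uniform(0.0, 0.3)
            K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
            R = np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)
            return R, np.array([0.0, 0.0, 1.0]) + rng.uniform(-0.2, 0.2, 3)

        poses = [rand_pose() for _ in range(n_poses)]

        def distances(B, P):
            """All point-pair distances per pose: shape (n_poses, n_base, n_plat)."""
            return np.array([np.linalg.norm(B[:, None, :] - ((R @ P.T).T + t)[None, :, :], axis=2)
                             for R, t in poses])

        d_meas = distances(B_true, P_true) + rng.normal(0.0, 1e-4, (n_poses, n_base, n_plat))

        def residuals(x):
            B = x[:3 * n_base].reshape(n_base, 3)
            P = x[3 * n_base:].reshape(n_plat, 3)
            return (distances(B, P) - d_meas).ravel()

        # Start from a perturbed nominal geometry and refine it from the distances
        x0 = np.concatenate([B_true.ravel(), P_true.ravel()])
        x0 = x0 + rng.normal(0.0, 0.01, x0.size)
        sol = least_squares(residuals, x0)
        print("residual RMS:", np.sqrt(np.mean(sol.fun ** 2)))

    In this toy setup the poses are treated as known, so n_poses plays the role of the paper's number of poses and the point pairs that of its acquisitions.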

    Mobile Robot Manipulator System Design for Localization and Mapping in Cluttered Environments

    In this thesis, a compact mobile robot has been developed to build real-time 3D maps of hazards and cluttered environments inside damaged buildings for rescue tasks, using visual Simultaneous Localization And Mapping (SLAM) algorithms. In order to maximize the survey area in such environments, this mobile robot is designed with four omni-wheels and equipped with a six-degree-of-freedom (6-DOF) robotic arm carrying a stereo camera mounted on its end-effector. The aim of this mobile articulated robotic system is to monitor different types of regions within the area of interest, ranging from wide open spaces to smaller and irregular regions behind narrow gaps. In the first part of the thesis, the robot system design is presented in detail, including the kinematic systems of the omni-wheeled mobile platform and the 6-DOF robotic arm, the estimation of biases in the parameters of these kinematic systems, and the sensors and the calibration of their parameters. These parameters are important for the sensor fusion utilized in the next part of the thesis, where two operation modes are proposed to retain the camera pose when the visual SLAM algorithms fail due to the variety of region types. In the second part, an integrated sensor data fusion, odometry and SLAM scheme is developed, where the camera poses are estimated using the forward kinematic equations of the robotic arm and fused with the visual SLAM and odometry algorithms. A modified wavefront algorithm with reduced computational complexity is used to find the shortest path to the identified goal points. Finally, a dynamic control scheme is developed for path tracking and motion control of the mobile platform and the robot arm, with sub-systems in the form of PD controllers and extended Kalman filters. The overall system design is physically implemented on a prototype integrated mobile robot platform and successfully tested in real-time.
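    The wavefront planner mentioned above has a compact generic form: a breadth-first wave of integer distances expands from the goal over a grid (0 = free, 1 = obstacle), and the path greedily descends the distance field. The sketch below is the textbook version, not the thesis' reduced-complexity modification.

        from collections import deque

        def wavefront(grid, goal):
            """Expand a breadth-first wave of distances outward from the goal."""
            h, w = len(grid), len(grid[0])
            dist = [[None] * w for _ in range(h)]
            dist[goal[0]][goal[1]] = 0
            q = deque([goal])
            while q:
                y, x = q.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == 0 and dist[ny][nx] is None:
                        dist[ny][nx] = dist[y][x] + 1
                        q.append((ny, nx))
            return dist

        def shortest_path(dist, start):
            """Follow the steepest descent of the wavefront down to the goal."""
            path, (y, x) = [start], start
            while dist[y][x] != 0:
                y, x = min(((y + dy, x + dx)
                            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                            if 0 <= y + dy < len(dist) and 0 <= x + dx < len(dist[0])
                            and dist[y + dy][x + dx] is not None),
                           key=lambda c: dist[c[0]][c[1]])
                path.append((y, x))
            return path

        grid = [[0, 0, 0, 0],
                [1, 1, 0, 1],
                [0, 0, 0, 0]]
        d = wavefront(grid, goal=(2, 0))
        print(shortest_path(d, start=(0, 3)))   # [(0, 3), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]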

    Visual Calibration, Identification and Control of 6-RSS Parallel Robots

    Parallel robots present outstanding advantages over serial manipulators: a high force-to-weight ratio, better stiffness and, in theory, higher accuracy. Hence parallel robots have been used increasingly in various applications. However, due to manufacturing tolerances and defects in the robot structure, the positioning accuracy of parallel robots is essentially equivalent to that of serial manipulators, according to previous research on the accuracy analysis of the Stewart Platform [1], which makes it difficult to meet the precision requirements of many potential applications. In addition, the closed-chain mechanism, with its highly coupled dynamics, complicates the design of control systems for practical applications. A visual sensor is a good choice for providing non-contact measurement of the end-effector pose (position and orientation), offering simple operation and low cost compared to other measurement methods such as the coordinate measurement machine (CMM) [2] and the laser tracker [3]. In this research, a series of solutions including kinematic calibration, dynamic identification and visual servoing are proposed to improve the positioning and tracking performance of the parallel robot based on the visual sensor. The main contributions of this research comprise three parts. In the first part, a relative pose-based algorithm (RPBA) is proposed to solve the kinematic calibration problem of a six-revolute-spherical-spherical (6-RSS) parallel robot using an optical CMM sensor. Based on the relative poses between the candidate and initial configurations, a calibration algorithm is proposed to determine the optimal error parameters of the robot kinematic model and the external parameters introduced by the optical sensor. The experimental results demonstrate that the proposed RPBA with an optical CMM is an implementable and effective method for parallel robot calibration. The second part focuses on the dynamic model identification of 6-RSS parallel robots. A visual closed-loop output-error identification method based on an optical CMM sensor is proposed for the advanced model-based visual servoing control design of parallel robots. By using an outer-loop visual servoing controller to stabilize both the parallel robot and the simulated model, the visual closed-loop output-error identification method is developed and the model parameters are identified using a nonlinear optimization technique. The effectiveness of the proposed identification algorithm is validated by experimental tests. In the last part, a dynamic sliding mode control (DSMC) scheme combined with visual servoing is proposed to improve the tracking performance of the 6-RSS parallel robot based on the optical CMM sensor. By employing a position-to-torque converter, the torque command generated by the DSMC can be applied to the position-controlled industrial robot. The stability of the proposed DSMC is proved using the Lyapunov theorem. Real-time experimental tests on a 6-RSS parallel robot demonstrate that the developed DSMC scheme is robust to modeling errors and uncertainties. Compared with classical kinematic-level controllers, the proposed DSMC exhibits superior tracking performance and robustness.
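    To give the sliding-mode idea some concreteness, the sketch below implements a textbook 1-DOF sliding-mode tracking controller with a smoothed switching term. It is a generic stand-in, not the thesis' DSMC: the plant, mass estimate, gains and disturbance are all invented for the demo.

        import numpy as np

        def smc_step(q, qd, q_ref, qd_ref, qdd_ref, m_hat=1.0, lam=5.0, k=8.0, eps=0.05):
            """Equivalent control plus smoothed switching on s = e_dot + lam * e."""
            e, ed = q - q_ref, qd - qd_ref
            s = ed + lam * e
            u_eq = m_hat * (qdd_ref - lam * ed)     # cancels the nominal dynamics on s
            return u_eq - k * np.tanh(s / eps)      # tanh boundary layer limits chattering

        dt, q, qd = 1e-3, 0.5, 0.0                  # unit mass starting off the reference
        for i in range(5000):
            t = i * dt
            u = smc_step(q, qd, np.sin(t), np.cos(t), -np.sin(t))
            qdd = u + 0.3 * np.sin(3.0 * t)         # disturbance unknown to the controller
            qd += qdd * dt
            q += qd * dt
        print("final tracking error:", q - np.sin(5000 * dt))

    The switching gain k must dominate the disturbance bound for the sliding surface to remain attractive, which is the same robustness mechanism the abstract appeals to.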

    Visual Servoing in Robotics

    Visual servoing is a well-known approach to guiding robots using visual information. Image processing, robotics, and control theory are combined in order to control the motion of a robot based on the visual information extracted from the images captured by one or several cameras. On the vision side, ongoing research addresses a number of issues, such as the use of different types of image features (or different types of cameras, such as RGBD cameras), high-velocity image processing, and convergence properties. As shown in this book, the use of new control schemes allows the system to behave more robustly, efficiently, or compliantly, with fewer delays. Related issues such as optimal and robust approaches, direct control, path tracking, and sensor fusion are also addressed. Additionally, visual servoing systems can currently be found in a number of different domains. This book considers various aspects of visual servoing systems, such as the design of new strategies for their application to parallel robots, mobile manipulators, and teleoperation, and the application of this type of control system in new areas.
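    On the control-law side, the classical image-based visual servoing scheme that much of this work builds on maps a feature error to a camera twist through the interaction matrix, v = -λ L⁺ e. A minimal sketch for point features, assuming normalized image coordinates and rough depth estimates Z:

        import numpy as np

        def interaction_matrix(pts, Z):
            """Stack the standard 2x6 interaction matrix of each point feature."""
            rows = []
            for (x, y), z in zip(pts, Z):
                rows += [[-1.0 / z, 0.0, x / z, x * y, -(1.0 + x * x), y],
                         [0.0, -1.0 / z, y / z, 1.0 + y * y, -x * y, -x]]
            return np.array(rows)

        def ibvs_velocity(pts, pts_des, Z, lam=0.5):
            """Classical IBVS law: v = -lam * pinv(L) @ (s - s_des)."""
            e = (np.asarray(pts) - np.asarray(pts_des)).ravel()
            return -lam * np.linalg.pinv(interaction_matrix(pts, Z)) @ e

        # Four point features slightly off their desired positions, all at 1 m depth
        pts = [(0.11, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.09)]
        pts_des = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
        print(ibvs_velocity(pts, pts_des, Z=[1.0] * 4))   # 6-DOF camera twist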

    A Study on Robust and Accurate Hand Motion Tracking for Human-Machine Interaction

    Ph.D. thesis, Seoul National University Graduate School, Department of Mechanical and Aerospace Engineering, August 2021 (advisor: Dongjun Lee).
    Hand-based interfaces are promising for realizing intuitive, natural and accurate human-machine interaction (HMI), as the human hand is the main source of dexterity in our daily activities. To this end, the thesis begins with a human perception study on the detection threshold of visuo-proprioceptive conflict (i.e., the allowable tracking error) with and without cutaneous haptic feedback, and suggests a tracking-error specification for realistic and fluid hand-based HMI. The thesis then proposes a novel wearable hand tracking module which, to be compatible with cutaneous haptic devices that emit magnetic noise, opportunistically employs heterogeneous sensors (an IMU/compass module and a soft sensor) reflecting the anatomical properties of the human hand, making it suitable for a specific application (finger-based interaction with fingertip haptic devices). This hand tracking module, however, loses tracking when interacting with, or operating near, electrical machines or ferromagnetic materials. The thesis then presents its main contribution, a novel visual-inertial skeleton tracking (VIST) framework that provides accurate and robust hand (and finger) motion tracking even in many challenging real-world scenarios and environments for which state-of-the-art technologies are known to fail due to their respective fundamental limitations (e.g., severe occlusion for tracking purely with vision sensors; electromagnetic interference for tracking purely with IMUs (inertial measurement units) and compasses; and mechanical contact for tracking purely with soft sensors). The proposed VIST framework comprises a sensor glove with multiple IMUs and passive visual markers, a head-mounted stereo camera, and a tightly-coupled filtering-based visual-inertial fusion algorithm that estimates hand/finger motion and auto-calibrates hand/glove-related kinematic parameters simultaneously while taking the anatomical constraints of the hand into account. The VIST framework exhibits good tracking accuracy and robustness, affordable material cost, lightweight hardware and software, and enough ruggedness and durability to permit washing. Quantitative and qualitative experiments validate the advantages and properties of the VIST framework, clearly demonstrating its potential for real-world applications.
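    The filtering core of such a tightly-coupled scheme alternates IMU-driven prediction with corrections from visual (and anatomical) measurements in an extended Kalman filter. Below is a generic EKF predict/correct pair with placeholder models and a 1-D toy example; it is schematic only, not the thesis' actual filter or state parameterization.

        import numpy as np

        def ekf_predict(x, P, f, F, Q):
            """Propagate state x and covariance P through the process model f."""
            return f(x), F @ P @ F.T + Q

        def ekf_correct(x, P, z, h, H, R):
            """Correct with a measurement z, e.g. a matched marker observation."""
            y = z - h(x)                               # innovation
            S = H @ P @ H.T + R                        # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
            return x + K @ y, (np.eye(len(x)) - K @ H) @ P

        # Toy 1-D constant-velocity example of the two calls
        x, P = np.array([0.0, 1.0]), np.eye(2)
        F = np.array([[1.0, 0.1], [0.0, 1.0]])         # position-velocity model, dt = 0.1
        x, P = ekf_predict(x, P, lambda s: F @ s, F, 0.01 * np.eye(2))
        H = np.array([[1.0, 0.0]])                     # observe position only
        x, P = ekf_correct(x, P, np.array([0.12]), lambda s: H @ s, H, np.array([[0.05]]))
        print(x)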

    Modeling and Control of the Cooperative Automated Fiber Placement System

    Automated Fiber Placement (AFP) machines have brought significant improvement to composite manufacturing. However, current AFP machines are designed for the manufacture of simple structures such as shallow shells or tubes, and cannot handle applications with more complex shapes. A cooperative AFP system is proposed to manufacture more complex composite components, which places higher demands on trajectory planning than the current AFP system. The system consists of a 6 degree-of-freedom (DOF) serial robot holding the fiber placement head, a 6-DOF revolute-spherical-spherical (RSS) parallel robot on which a 1-DOF mandrel holder is installed, and an eye-to-hand photogrammetry sensor, i.e. C-track, which detects the poses of the end-effectors of both the parallel robot and the serial robot. Kinematic models of the parallel robot and the serial robot are built. An analysis of constraints and singularities is conducted for the cooperative AFP system. The definitions of the tool frames for the serial robot and the parallel robot are illustrated. Some kinematic parameters of the parallel robot are calibrated using the photogrammetry sensor. Although the cooperative AFP system increases the flexibility of composite manufacturing by adding more DOFs, there might not be a feasible path for laying up the fiber in some cases, due to the requirement of remaining free of collisions and singularities. To meet this challenge, an innovative semi-offline synchronized trajectory algorithm is proposed that incorporates on-line robot control in following the paths generated off-line, especially when the generated paths are infeasible for the current multiple robots to realize. By adding corrections to the robots' paths at the points where collisions and singularities occur, the fiber can be laid up continuously without interruption. The correction is calculated on-line from the pose tracking data of the parallel robot measured by the photogrammetry sensor. Owing to the flexibility of the 6-DOF parallel robot, optimized offsets with varying movements are generated based on the different singularities and constraints. Experimental results demonstrate the successful avoidance of singularities and joint limits, and that the designed cooperative AFP system can fulfill the movements needed for manufacturing a composite structure with a Y-shape.
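    One common way to flag the singularity-prone path points mentioned above is Yoshikawa's manipulability measure of the Jacobian, with points falling below a threshold becoming candidates for correction offsets. This is a generic test offered for intuition, not the thesis' specific constraint handling, and the threshold is arbitrary.

        import numpy as np

        def manipulability(J):
            """Yoshikawa measure sqrt(det(J J^T)); it tends to zero near a singularity."""
            return np.sqrt(np.linalg.det(J @ J.T))

        def needs_correction(J, threshold=1e-3):
            """Flag a path point whose Jacobian is close to singular."""
            return manipulability(J) < threshold

        J = np.random.default_rng(1).normal(size=(6, 6))   # stand-in 6-DOF Jacobian
        print(manipulability(J), needs_correction(J))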

    Modeling Humans at Rest with Applications to Robot Assistance

    Humans spend a large part of their lives resting. Machine perception of this class of body poses would be beneficial to numerous applications, but it is complicated by line-of-sight occlusion from bedding. Pressure sensing mats are a promising alternative, but data is challenging to collect at scale. To overcome this, we use modern physics engines to simulate bodies resting on a soft bed with a pressure sensing mat. This method can efficiently generate data at scale for training deep neural networks. We present a deep model trained on this data that infers 3D human pose and body shape from a pressure image, and show that it transfers well to real world data. We also present a model that infers pose, shape and contact pressure from a depth image facing the person in bed, and it does so in the presence of blankets. This model similarly benefits from synthetic data, which is created by simulating blankets on the bodies in bed. We evaluate this model on real world data and compare it to an existing method that requires RGB, depth, thermal and pressure imagery in the input. Our model only requires an input depth image, yet it is 12% more accurate. Our methods are relevant to applications in healthcare, including patient acuity monitoring and pressure injury prevention. We demonstrate this work in the context of robotic caregiving assistance, by using it to control a robot to move to locations on a person's body in bed.
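    As a schematic of the inference task, a toy convolutional network mapping a single-channel pressure image to a pose/shape parameter vector might look like the sketch below. This is illustrative only: the layer sizes, the 64x27 mat resolution and the 85-dimensional output (a SMPL-like pose-shape-translation vector) are assumptions, not the paper's architecture.

        import torch
        import torch.nn as nn

        class PressureToPose(nn.Module):
            """Toy regressor from a pressure image to body pose/shape parameters."""
            def __init__(self, n_out=85):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(64, n_out))

            def forward(self, x):
                return self.net(x)

        model = PressureToPose()
        pressure = torch.rand(1, 1, 64, 27)       # one synthetic pressure image
        print(model(pressure).shape)              # torch.Size([1, 85])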

    Intelligent Sensors for Human Motion Analysis

    The book, "Intelligent Sensors for Human Motion Analysis," contains 17 articles published in the Special Issue of the Sensors journal. These articles deal with many aspects related to the analysis of human movement. New techniques and methods for pose estimation, gait recognition, and fall detection have been proposed and verified. Some of them will trigger further research, and some may become the backbone of commercial systems

    Hybrid Marker-less Camera Pose Tracking with Integrated Sensor Fusion

    This thesis presents a framework for hybrid model-free marker-less inertial-visual camera pose tracking with an integrated sensor fusion mechanism. The proposed solution addresses the fundamental problem of pose recovery in computer vision and robotics and provides an improved solution for wide-area pose tracking that can be used on mobile platforms and in real-time applications. In order to arrive at a suitable pose tracking algorithm, an in-depth investigation was conducted into current methods and sensors used for pose tracking. Preliminary experiments were then carried out on hybrid GPS-visual as well as wireless micro-location tracking in order to evaluate their suitability for camera tracking in wide-area or GPS-denied environments. As a result of this investigation, a combination of an inertial measurement unit and a camera was chosen as the primary sensory input for a hybrid camera tracking system. After a thorough modelling and mathematical formulation process, a novel and improved hybrid tracking framework was designed, developed and evaluated. The resulting system incorporates an inertial system, a vision-based system and a recursive particle filtering-based stochastic data fusion and state estimation algorithm. The core of the algorithm is a state-space model for motion kinematics which, combined with the principles of multi-view camera geometry and the properties of optical flow and focus of expansion, forms the main component of the proposed framework. The proposed solution incorporates a monitoring system, which decides on the best method of tracking at any given time based on the reliability of the fresh vision data provided by the vision-based system, and automatically switches between visual and inertial tracking as and when necessary. The system also includes a novel and effective self-adjusting mechanism, which detects when the newly captured sensory data can be reliably used to correct past pose estimates. The corrected state is then propagated through to the current time in order to prevent sudden pose estimation errors manifesting as a permanent drift in the tracking output. Following the design stage, the complete system was fully developed and then evaluated using both synthetic and real data. The outcome shows improved performance compared to existing techniques, such as PTAM and SLAM. The low computational cost of the algorithm enables its application on mobile devices, while the integrated self-monitoring, self-adjusting mechanisms allow for its potential use in wide-area tracking applications.
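    The recursive particle-filtering fusion described above follows the standard predict/weight/resample cycle. A minimal generic step is sketched below; the motion and likelihood models are placeholders rather than the thesis' formulation.

        import numpy as np

        def pf_step(particles, weights, motion, likelihood, rng):
            """One predict / weight / resample cycle of a particle filter."""
            particles = motion(particles) + rng.normal(0.0, 0.05, particles.shape)
            weights = weights * likelihood(particles)
            weights = weights / weights.sum()
            n_eff = 1.0 / np.sum(weights ** 2)         # effective sample size
            if n_eff < 0.5 * len(weights):             # resample when weights degenerate
                idx = rng.choice(len(weights), size=len(weights), p=weights)
                particles = particles[idx]
                weights = np.full(len(weights), 1.0 / len(weights))
            return particles, weights

        rng = np.random.default_rng(0)
        particles = rng.normal(0.0, 1.0, (500, 1))     # 1-D toy state
        weights = np.full(500, 1.0 / 500)
        particles, weights = pf_step(
            particles, weights,
            motion=lambda p: p + 0.1,                  # placeholder kinematic prediction
            likelihood=lambda p: np.exp(-0.5 * (p[:, 0] - 0.3) ** 2),
            rng=rng)
        print(particles.mean())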