6 research outputs found

    State Estimation of a Quadrotor Unmanned Aerial Robot Based on RGB-PTAM

    Thesis (M.S.)--Seoul National University Graduate School: Department of Mechanical and Aerospace Engineering, 2016. 2. Dongjun Lee.

    This thesis describes a state estimation method for the flight of an unmanned aerial robot. The presented method applies an extended Kalman filter to data from a camera-based vision algorithm and an inertial measurement unit, guaranteeing state estimation in environments where satellite navigation is unavailable. The vision algorithm used for this purpose operates on each of the RGB color channels rather than on grayscale intensity. In addition, the proposed vision algorithm sensitively detects not only the spatial geometric structure of the surroundings but also color variation on planar surfaces, and estimates the vehicle's position and attitude. Finally, experimental results of the state estimation method are presented.

    We introduce a novel state estimation method for unmanned aerial vehicles (UAVs) based on a Simultaneous Localization and Mapping algorithm with an RGB color model. The proposed method allows robust and consistent pose estimation in artificial environments with little texture but plentiful color variation. To achieve this, the method combines high-frequency data from an inertial measurement unit (IMU) with low-frequency data from a vision algorithm based on the RGB color model within an extended Kalman filter framework. The vision algorithm extracts features from the intensities of each RGB color channel instead of the grayscale light intensity. While feature extraction on grayscale intensity detects spatial geometric edges, extraction on the color channels also detects color variation on planar surfaces. The method is applied to estimate the pose of a UAV in GPS-denied environments, and experiments are performed to illustrate its accuracy.

    Contents: Chapter 1, Introduction (1.1 Motivation and Objectives; 1.2 State of the Art; 1.3 Contribution of this Work); Chapter 2, System Description (2.1 System Overview; 2.2 Control of Unmanned Aerial Vehicles); Chapter 3, State Estimation (3.1 IMU Sensor Model; 3.2 State Representation; 3.3 Error State Representation; 3.4 Extended Kalman Filter); Chapter 4, Vision Algorithm (4.1 Parallel Tracking and Mapping; 4.2 RGB-PTAM: tracking with feature extraction, matching, and pose update; mapping); Chapter 5, Experiment (5.1 Hardware; 5.2 Vision Algorithm; 5.3 Experimental Result); Chapter 6, Conclusion and Future Work (6.1 Conclusion; 6.2 Future Work); Bibliography; Abstract in Korean.
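    The central change the abstract describes is running feature extraction on each RGB channel rather than on a single grayscale image. The snippet below is a minimal sketch of that idea under stated assumptions, not the thesis's implementation: it uses OpenCV's FAST corner detector as a stand-in for whatever detector RGB-PTAM actually employs.

```python
import cv2

def detect_rgb_features(image_bgr, threshold=20):
    """Run FAST corner detection on each color channel separately and pool
    the keypoints, so color edges on a textureless plane can be found even
    when the grayscale intensity is nearly uniform."""
    detector = cv2.FastFeatureDetector_create(threshold=threshold)
    keypoints = []
    for channel in cv2.split(image_bgr):  # B, G, R channels
        keypoints.extend(detector.detect(channel, None))
    return keypoints

def detect_gray_features(image_bgr, threshold=20):
    """Baseline: FAST on the grayscale image only."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.FastFeatureDetector_create(threshold=threshold).detect(gray, None)
```

    On a flat surface painted with color patterns of similar brightness, the per-channel pass can return corners that the grayscale pass misses, which is the situation the abstract targets.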

    Fusion of remote vision and on-board acceleration data for the vibration estimation of large space structures

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2006. Includes bibliographical references (leaves 81-84).

    Future space structures such as solar power stations and telescopes are expected to be very large and will require on-orbit construction. Due to the risks and costs of human extravehicular work, teams of robots will be essential for the on-orbit assembly of these large space structures. Such robotic construction presents a number of technical challenges. The structures will need to be made of lightweight materials and will be very flexible, so autonomous robots will require information about the vibrations of the flexible structures and their dynamic parameters in order to perform the construction efficiently. Because models of the structures are often imperfect, the magnitude of the structure's vibrations must be estimated on-orbit. This thesis presents a method for estimating the shape and dynamic parameters of a vibrating large space structure. The technique is a cooperative sensing approach that uses remote free-flying robot observers equipped with vision sensors together with structure-mounted accelerometers, exploiting the complementary nature of the two types of sensors. Vision sensors can measure structure deflections at high spatial frequency but are bandwidth limited; accelerometers can make measurements at high temporal frequency but are sparsely located on the structure.

    The fused estimation occurs in three steps. First, the vision data are condensed through a modal decomposition that yields coarse estimates of the modal coefficients. Second, these coarse estimates are fused with the accelerometer measurements in a multi-rate nonlinear Kalman filter, resulting in a refined estimate of the modal coefficients and the dynamic properties of the structure. Finally, the estimated modal coefficients are combined with the mode shapes to provide a shape estimate of the entire structure. Simulation and experimental results demonstrate that the performance of this fused estimation approach is superior to that achieved when using only a single type of sensor.

    by Amy M. Bilton. S.M.
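    As a rough illustration of the first and third steps (the multi-rate nonlinear Kalman filter of the second step is omitted), the sketch below assumes the mode shapes are known and sampled both at the points the vision system observes and over the full structure; it is not the thesis's code, only a least-squares modal decomposition and shape reconstruction written out in NumPy.

```python
import numpy as np

def coarse_modal_coefficients(vision_deflections, mode_shapes_vision):
    """Step 1: condense spatially dense but low-rate vision deflection
    measurements into coarse modal coefficients by least squares.
    mode_shapes_vision has shape (n_vision_points, n_modes)."""
    q, *_ = np.linalg.lstsq(mode_shapes_vision, vision_deflections, rcond=None)
    return q

def reconstruct_shape(modal_coefficients, mode_shapes_full):
    """Step 3: combine (filtered) modal coefficients with the mode shapes
    to estimate the deflection over the entire structure.
    mode_shapes_full has shape (n_structure_points, n_modes)."""
    return mode_shapes_full @ modal_coefficients
```

    In the full pipeline, the coefficients returned by the first function would be treated as low-rate measurements by the Kalman filter before the refined coefficients are passed to the second.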

    Detail Enhancing Denoising of Digitized 3D Models from a Mobile Scanning System

    The acquisition process of digitizing a large-scale environment produces an enormous amount of raw geometry data. This data is corrupted by system noise, which leads to 3D surfaces that are not smooth and details that are distorted. Any scanning system has noise associated with its scanning hardware, both digital quantization errors and measurement inaccuracies, but a mobile scanning system has additional system noise introduced by the pose estimation of the hardware during data acquisition. The combined system noise generates data that is not handled well by existing noise reduction and smoothing techniques.

    This research is focused on enhancing the 3D models acquired by mobile scanning systems used to digitize large-scale environments. These digitization systems combine a variety of sensors – including laser range scanners, video cameras, and pose estimation hardware – on a mobile platform for the quick acquisition of 3D models of real-world environments. The data acquired by such systems are extremely noisy, often with significant details on the same order of magnitude as the system noise. By utilizing a unique 3D signal analysis tool, a denoising algorithm was developed that identifies regions of detail and enhances their geometry while removing the effects of noise on the overall model. The developed algorithm can be useful for a variety of digitized 3D models, not just those produced by mobile scanning systems.

    The challenges faced in this study were the automatic processing needs of the enhancement algorithm and the need to fill a gap in the area of 3D model analysis in order to reduce the effect of system noise on the 3D models. In this context, our main contributions are the automation and integration of a data enhancement method not well known to the computer vision community, and the development of a novel 3D signal decomposition and analysis tool. The new techniques featured in this document are intuitive extensions of existing methods to new dimensionality and applications. The research has been applied to detail-enhancing denoising of scanned data from a mobile range scanning system, and results from both synthetic and real models are presented.
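    The abstract does not spell out the signal decomposition, so the sketch below is only a toy analogue on a 2D range image rather than the thesis's algorithm: split the signal into a smooth base and a residual, keep (and amplify) residuals large enough to look like genuine detail, and discard the rest as noise. The threshold and boost values are invented for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detail_enhancing_denoise(depth, sigma=2.0, detail_thresh=0.01, boost=1.5):
    """Toy detail-enhancing denoise of a range (depth) image: separate a
    low-frequency base from the residual, treat large residuals as detail
    to preserve and boost, and small residuals as noise to drop."""
    base = gaussian_filter(depth, sigma=sigma)      # smooth, large-scale geometry
    residual = depth - base                         # fine detail mixed with noise
    detail_mask = np.abs(residual) > detail_thresh  # crude detail detector
    return base + np.where(detail_mask, boost * residual, 0.0)
```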

    Directional Estimation for Robotic Beating Heart Surgery

    In robotic beating heart surgery, a remote-controlled robot can be used to carry out the operation while automatically canceling out the heart motion. The surgeon controlling the robot is shown a stabilized view of the heart. First, we consider the use of directional statistics for estimating the phase of the heartbeat. Second, we deal with the reconstruction of a moving and deformable surface. Third, we address the question of obtaining a stabilized image of the heart.
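    For the first item, the reason directional statistics are needed is that heartbeat phase lives on a circle, so an ordinary arithmetic average fails near the 0/2π wrap-around. A minimal example of the circular mean, the basic building block behind circular filtering approaches (e.g. with von Mises or wrapped normal densities), is sketched below; it is an illustration, not code from the work itself.

```python
import numpy as np

def circular_mean(phases):
    """Circular (directional) mean of phase angles in radians: average the
    corresponding unit vectors on the circle, then read off the angle.
    Also returns the resultant length, a measure of concentration."""
    z = np.mean(np.exp(1j * np.asarray(phases, dtype=float)))
    return np.angle(z), np.abs(z)

# Phases just below and just above the wrap-around point:
# circular_mean([0.1, 6.2]) gives a mean direction near 0.0 rad,
# whereas a plain arithmetic mean would give about 3.15 rad.
```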