1,517 research outputs found

    Unsupervised Learning of Complex Articulated Kinematic Structures combining Motion and Skeleton Information

    In this paper we present a novel framework for unsupervised kinematic structure learning of complex articulated objects from a single-view image sequence. In contrast to prior motion-information-based methods, which estimate relatively simple articulations, our method can generate arbitrarily complex kinematic structures with skeletal topology through a successive iterative merge process. The merge process is guided by a skeleton distance function derived from a novel method for generating object boundaries from sparse points. Our main contributions can be summarised as follows: (i) unsupervised learning of complex articulated kinematic structures by combining motion and skeleton information; (ii) an iterative fine-to-coarse merging strategy for adaptive motion segmentation and structure smoothing; (iii) skeleton estimation from sparse feature points; (iv) a new highly articulated object dataset with multi-stage complexity and ground truth. Our experiments show that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.
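The successive iterative merge described in this abstract can be sketched as a greedy loop: repeatedly merge the pair of segments that is closest under the guiding distance function until no pair falls below a threshold. This is only a minimal illustration; the names are hypothetical, and the paper's skeleton distance function is replaced here by a toy single-linkage distance over 1-D points.

```python
def iterative_merge(segments, dist, threshold):
    """Greedy fine-to-coarse merging of motion segments.

    segments : iterable of sets of feature-point indices
    dist     : callable(seg_a, seg_b) -> float (stand-in for the
               paper's skeleton-guided distance)
    threshold: stop once the closest pair is farther than this
    """
    segments = [frozenset(s) for s in segments]
    while len(segments) > 1:
        # Find the closest pair of segments under the distance function.
        d, i, j = min(
            (dist(segments[i], segments[j]), i, j)
            for i in range(len(segments))
            for j in range(i + 1, len(segments))
        )
        if d > threshold:
            break
        merged = segments[i] | segments[j]
        segments = [s for k, s in enumerate(segments) if k not in (i, j)]
        segments.append(merged)
    return segments

# Toy usage: four 1-D feature points forming two well-separated groups.
points = {0: 0.0, 1: 0.1, 2: 5.0, 3: 5.1}

def point_dist(a, b):
    # Single-linkage distance between two index sets (illustrative only).
    return min(abs(points[x] - points[y]) for x in a for y in b)

merged = iterative_merge([{0}, {1}, {2}, {3}], point_dist, threshold=1.0)
```

With this data the loop merges the two close pairs and then stops, since the remaining inter-group distance exceeds the threshold.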

    Robust Real-time RGB-D Visual Odometry in Dynamic Environments via Rigid Motion Model

    In this paper, we propose a robust real-time visual odometry method for dynamic environments based on a rigid-motion model updated by scene flow. The proposed algorithm consists of spatial motion segmentation and temporal motion tracking. The spatial segmentation first generates several motion hypotheses using a grid-based scene flow and clusters the extracted hypotheses, separating objects that move independently of one another. Further, we use a dual-mode motion model to consistently distinguish between the static and dynamic parts in the temporal motion tracking stage. Finally, the proposed algorithm estimates the pose of the camera using the region classified as static. To evaluate the performance of visual odometry in the presence of dynamic rigid objects, we use a self-collected dataset containing RGB-D images and motion capture data for ground truth. We compare our algorithm with state-of-the-art visual odometry algorithms. The validation results suggest that the proposed algorithm estimates the camera pose robustly and accurately in dynamic environments.
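The spatial segmentation step — clustering per-grid scene-flow motion hypotheses and treating the dominant cluster as the static background — can be illustrated with a deliberately simplified one-pass clustering. The paper's actual hypothesis generation and clustering are more elaborate; the function name and the `eps` radius below are assumptions.

```python
import numpy as np

def static_mask(flow, eps=0.05):
    """Greedy one-pass clustering of per-grid 3-D scene-flow vectors.

    flow : (N, 3) array, one scene-flow vector per grid cell
    Returns a boolean mask marking cells assigned to the largest
    cluster, which is assumed to be the static background.
    """
    labels = -np.ones(len(flow), dtype=int)
    next_label = 0
    for i in range(len(flow)):
        if labels[i] != -1:
            continue
        # Group every still-unlabelled vector within eps of this seed.
        close = np.linalg.norm(flow - flow[i], axis=1) < eps
        labels[(labels == -1) & close] = next_label
        next_label += 1
    # The most populous cluster is taken to be the static part.
    return labels == np.bincount(labels).argmax()

# Six near-zero background vectors and two from an independently moving object.
flow = np.array([[0.00, 0, 0], [0.01, 0, 0], [0, 0.01, 0],
                 [0, 0, 0.01], [0.01, 0.01, 0], [0, 0, 0],
                 [1.00, 0, 0], [1.01, 0, 0]])
mask = static_mask(flow)
```

Camera pose would then be estimated only from the cells where `mask` is true, mirroring the paper's use of the static region.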

    Median K-flats for hybrid linear modeling with many outliers

    We describe the Median K-Flats (MKF) algorithm, a simple online method for hybrid linear modeling, i.e., for approximating data by a mixture of flats. The algorithm simultaneously partitions the data into clusters while finding their corresponding best-approximating l1 d-flats, so that the cumulative l1 error is minimized. The current implementation restricts d-flats to be d-dimensional linear subspaces. It requires a negligible amount of storage, and its complexity, when modeling data consisting of N points in D-dimensional Euclidean space with K d-dimensional linear subspaces, is of order O(n K d D + n d^2 D), where n is the number of iterations required for convergence (empirically on the order of 10^4). Since it is an online algorithm, data can be supplied to it incrementally and it can incrementally produce the corresponding output. The performance of the algorithm is carefully evaluated using synthetic and real data.
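A minimal batch sketch of the underlying K-flats idea: alternately assign each point to its nearest d-flat and refit each flat to its cluster. Note the simplifications: MKF itself is online and minimizes the cumulative l1 error via median flats, whereas this sketch uses the easier l2 criterion (an SVD refit) and a deterministic initialization from the first K data points.

```python
import numpy as np

def k_flats(X, K, d, iters=10):
    """Batch l2 K-flats (simplified stand-in for MKF's online l1 version).

    Fits K d-dimensional linear subspaces through the origin to X (N, D).
    Returns the orthonormal bases and per-point cluster labels.
    """
    # Deterministic init: span of the first K data points (illustrative).
    bases = [np.linalg.qr(X[k:k + 1].T)[0] for k in range(K)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assignment: distance of x to span(B) is ||x - B B^T x||.
        dists = np.stack(
            [np.linalg.norm(X - X @ B @ B.T, axis=1) for B in bases])
        labels = dists.argmin(axis=0)
        # Update: each flat becomes the top-d right singular subspace of
        # its cluster (the l2 analogue of MKF's l1 median flat).
        for k in range(K):
            pts = X[labels == k]
            if len(pts) >= d:
                bases[k] = np.linalg.svd(pts, full_matrices=False)[2][:d].T
    return bases, labels

# Two 1-D subspaces (the coordinate axes) in the plane.
X = np.array([[1., 0], [0, 1], [2, 0], [3, 0], [0, 2], [0, 3]])
bases, labels = k_flats(X, K=2, d=1)
```

Replacing the SVD refit with an l1 (median-style) flat estimate, and processing points one at a time, would bring this closer to the actual MKF algorithm and its robustness to outliers.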

    Development of a Robust Motion-Segmentation-Based Visual Odometry Algorithm for Dynamic Environments

    Thesis (Master's) -- Seoul National University, Graduate School, Department of Mechanical and Aerospace Engineering, August 2017. Advisor: κΉ€ν˜„μ§„.

    In this thesis, we propose a robust visual odometry algorithm for dynamic environments via rigid motion segmentation using a grid-based optical flow. The algorithm first divides the image frame into a fixed-size grid, then computes the three-dimensional motion of each grid cell, yielding a light computational load and uniformly distributed optical flow vectors. Next, it selects several adjacent points among the grid-based optical flow vectors based on an entropy measure and generates motion hypotheses in the form of three-dimensional rigid transformations. This spatial motion segmentation combines randomized hypothesis generation with an existing clustering algorithm, thereby separating objects that move independently of each other. Moreover, we use a dual-mode simple Gaussian model to persistently differentiate static and dynamic parts: the model takes the output of the spatial motion segmentation and updates a probability vector consisting of the likelihood that each grid cell represents a specific label. For evaluation, we use a self-made dataset captured with an ASUS Xtion Pro Live RGB-D camera and a Vicon motion capture system. We compare our algorithm with an existing motion segmentation algorithm and a current state-of-the-art visual odometry algorithm; the proposed algorithm estimates the ego-motion robustly and accurately in dynamic environments while showing competitive motion segmentation performance.

    Most existing visual odometry algorithms have been developed under the assumption of a static environment and validated on well-defined datasets. However, the places where unmanned robots must carry out missions using visual odometry are likely to be dynamic, with people and vehicles passing through. Although some algorithms that employ RANSAC can reject abnormal motion within a frame during pose estimation, this is applicable only when dynamic objects occupy a small portion of the image frame. To robustly estimate the ego-pose in such uncertain dynamic environments, this thesis proposes a visual odometry algorithm robust to dynamic environments. The proposed algorithm uses a grid-based optical flow for fast execution and uniformly distributed motion vectors. It performs three-dimensional spatial motion segmentation within a single frame from the per-grid motion, and temporal motion segmentation to persistently distinguish multiple dynamic objects and static elements. In particular, to separate dynamic from static parts over time, a dual-mode Gaussian model is applied to each grid cell, making the algorithm robust to transient noise in the spatial segmentation, and a probability vector is maintained to compute the probability that each grid cell belongs to each distinguishable element. The algorithm was validated on a dataset built with an ASUS Xtion RGB-D camera and a Vicon motion capture system: recall and precision were compared against an existing motion segmentation algorithm, and estimation error against an existing visual odometry algorithm, confirming superior motion detection and pose estimation performance.

    Table of Contents
    Abstract
    List of Figures
    List of Tables
    Chapter 1  Introduction
        1.1  Literature review
        1.2  Thesis contribution
        1.3  Thesis outline
    Chapter 2  Background Knowledge
        2.1  Rigid transformation
        2.2  Grid-based optical flow
    Chapter 3  Motion Spatial Segmentation
        3.1  Motion hypothesis search
        3.2  Motion hypothesis refinement
        3.3  Motion hypothesis clustering
    Chapter 4  Motion Temporal Segmentation
        4.1  Label matching
        4.2  Dual-mode simple Gaussian model
            4.2.1  Update model
            4.2.2  Compensate model
    Chapter 5  Evaluation Results
        5.1  Dataset
        5.2  Motion segmentation
        5.3  Visual odometry
    Chapter 6  Conclusion
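The per-grid probability-vector update in the temporal stage can be sketched as a running blend between a cell's stored label distribution and the current frame's one-hot segmentation output. This is a single-mode simplification of the thesis's dual-mode Gaussian model (which additionally keeps a candidate model to compensate for camera motion), and the learning rate is an assumption.

```python
import numpy as np

def update_cell(prob, observed, lr=0.2):
    """Blend a one-hot per-frame label observation into a grid cell's
    running label-probability vector (simplified single-mode stand-in
    for the dual-mode model; lr is an assumed learning rate)."""
    one_hot = np.zeros_like(prob)
    one_hot[observed] = 1.0
    return (1.0 - lr) * prob + lr * one_hot

# A cell observed as static (label 0) for ten frames, then one noisy
# frame where the spatial segmentation labels it dynamic (label 1).
prob = np.array([0.5, 0.5])
for _ in range(10):
    prob = update_cell(prob, 0)
confident = prob[0]              # high confidence in "static"
prob = update_cell(prob, 1)      # one-frame segmentation noise
```

The exponential blend is what gives the temporal stage its robustness: one mislabelled frame only nudges the distribution, so the static/dynamic decision does not flip on transient noise.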