
    ์›€์ง์ด๋Š” ๋ฌผ์ฒด ๊ฒ€์ถœ ๋ฐ ์ถ”์ ์„ ์œ„ํ•œ ์ƒ์ฒด ๋ชจ๋ฐฉ ๋ชจ๋ธ

    Thesis (Ph.D.) -- Seoul National University Graduate School, Department of Electrical and Computer Engineering, February 2014. Advisor: Jin Young Choi.

In this thesis, we propose bio-mimetic models for motion detection and visual tracking to overcome the limitations of existing methods in actual environments. The models are inspired by the theory that there are four different forms of visual memory in human visual perception when representing a scene: visible persistence, informational persistence, visual short-term memory (VSTM), and visual long-term memory (VLTM). We view our problem as one of modeling and representing an observed scene with temporary short-term models (TSTM) and conservative long-term models (CLTM). We study how to build efficient and effective models for TSTM and CLTM, and how to utilize them together to obtain robust detection and tracking results under the occlusions, inaccurate initializations, background clutter, drifting, and non-rigid deformations encountered in actual environments.

First, we propose an efficient representation of TSTM for moving object detection on non-stationary cameras, which runs within 5.8 milliseconds (ms) on a PC and in real time on mobile devices. To achieve real-time capability with robust performance, our method models the background through the proposed dual-mode kernel model (DMKM) and compensates for the motion of the camera by mixing neighboring models. Modeling through DMKM prevents the background model from being contaminated by foreground pixels, while still allowing the model to adapt to changes of the background. Mixing neighboring models reduces the errors arising from motion compensation, and their influence is further reduced by keeping the age of the model. Also, to decrease the computational load, the proposed method applies one DMKM to multiple pixels without performance degradation. Experimental results show the computational lightness and real-time capability of our method on a smartphone, with robust detection performance.
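A minimal per-pixel sketch of the dual-mode idea (one trusted "apparent" mode plus a "candidate" mode that replaces it once it has been observed longer) might look as follows; the class, thresholds, and update rules are illustrative assumptions, not the thesis's exact formulation:

```python
class DualModeKernelModel:
    """Sketch of a dual-mode background model for one pixel (or block).

    Keeps an apparent (trusted) mode and a candidate mode, each a
    running mean/variance with an age. Foreground observations only
    ever train the candidate, so they cannot contaminate the
    apparent background model.
    """

    def __init__(self, mean, var=400.0, match_thresh=2.5):
        self.app_mean, self.app_var, self.app_age = mean, var, 1.0
        self.can_mean, self.can_var, self.can_age = mean, var, 0.0
        self.match_thresh = match_thresh

    def _matches(self, mean, var, x):
        return (x - mean) ** 2 < self.match_thresh ** 2 * var

    def update(self, x):
        """Update with a new intensity x; return True if x is foreground."""
        if self._matches(self.app_mean, self.app_var, x):
            # Running average whose learning rate decays with the
            # model's age, so an old model changes slowly.
            a = 1.0 / (self.app_age + 1.0)
            self.app_mean = (1 - a) * self.app_mean + a * x
            self.app_var = (1 - a) * self.app_var + a * (x - self.app_mean) ** 2
            self.app_age += 1.0
            return False
        if self._matches(self.can_mean, self.can_var, x):
            a = 1.0 / (self.can_age + 1.0)
            self.can_mean = (1 - a) * self.can_mean + a * x
            self.can_var = (1 - a) * self.can_var + a * (x - self.can_mean) ** 2
            self.can_age += 1.0
        else:
            # Restart the candidate on a fresh observation.
            self.can_mean, self.can_var, self.can_age = x, 400.0, 1.0
        # Promote the candidate once it has lived longer than the
        # apparent mode: the background has genuinely changed.
        if self.can_age > self.app_age:
            self.app_mean, self.app_var, self.app_age = \
                self.can_mean, self.can_var, self.can_age
            self.can_age = 0.0
        return True
```

Motion compensation on a moving camera would then warp and mix these per-block models between frames, which this sketch omits.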
Second, using concepts from both TSTM and CLTM, a new visual tracking method based on a novel tri-model is proposed. The proposed method aims to solve the problems of occlusions, background clutter, and drifting simultaneously. The tri-model is composed of three models, which learn the target object, the background, and other non-target moving objects online. The proposed scheme performs tracking by finding the best explanation of the scene with the three learned models. By utilizing the information in the background and foreground models as well as the target object model, our method obtains robust results under occlusions and background clutter. Also, the target object model is updated in a conservative way to prevent drifting. Furthermore, our method is not restricted to bounding boxes when representing the target object and is able to give pixel-wise tracking results.

Third, we go beyond pixel-wise modeling and propose a local-feature-based tracking model using both TSTM and CLTM to track objects under uncertain initializations and severe occlusions. To track objects accurately in such situations, the proposed scheme uses the ``motion saliency'' and ``descriptor saliency'' of local features and performs tracking based on the generalized Hough transform (GHT). The proposed motion saliency of a local feature uses the instantaneous velocities of features to form the TSTM and emphasizes features with distinctive motions, compared with the motions of local features that do not belong to the object. The descriptor saliency models local features as the CLTM and emphasizes features whose descriptors are likely to belong to the object. Through these saliencies, the proposed method tries to ``learn and find'' the target object rather than looking for what was given at initialization, becoming robust to initialization problems.
Also, our tracking result is obtained by combining the results of each local feature of the target and the surroundings, and is thus robust against severe occlusions as well. The proposed method is compared against eight other methods on nine image sequences with one hundred random initializations. The experimental results show that our method outperforms all the other compared methods.

Fourth and last, we focus on building a robust CLTM with local patches and their neighboring structures. The proposed method is based on sequential Bayesian inference and focuses on solving both the problem of tracking under partial occlusions and the problem of non-rigid object tracking in real time on desktop personal computers (PCs). The proposed scheme is mainly composed of two parts: (1) modeling the target object using an elastic structure of local patches for robust performance, and (2) an efficient hierarchical diffusion method to perform the tracking process in real time. The elastic structure of local patches allows the proposed scheme to handle partial occlusions and non-rigid deformations through the relationships among neighboring patches. The proposed hierarchical diffusion generates samples from the region where the posterior is concentrated to reduce computation time. The method is extensively tested on a number of challenging image sequences with occlusion and non-rigid deformation.
The experimental results show the real-time capability and the robustness of the proposed scheme under various situations.

Contents:

1 Introduction
  1.1 Background and Research Issues
    1.1.1 Issues in Motion Detection
    1.1.2 Issues in Object Tracking
  1.2 The Human Visual Memory
    1.2.1 Sensory Memory
    1.2.2 Visual Short-Term Memory
    1.2.3 Visual Long-Term Memory
  1.3 Bio-mimetic Framework for Detection and Tracking
  1.4 Contents of the Research
2 Detection by Pixel-wise Dual-Mode Kernel Model
  2.1 Proposed Method
    2.1.1 Approximated Gaussian Kernel Model
    2.1.2 Dual-Mode Kernel Model (DMKM)
    2.1.3 Motion Compensation by Mixing Models
    2.1.4 Detection of Foreground Pixels
  2.2 Experimental Results
    2.2.1 Runtime Comparison
    2.2.2 Qualitative Comparison
    2.2.3 Quantitative Comparison
    2.2.4 Effects of Dual-Mode Kernel Model
    2.2.5 Effects of Motion Compensation
    2.2.6 Mobile Results
  2.3 Remarks and Discussion
3 Tracking by Pixel-wise Tri-Model Representation
  3.1 Tri-Model Framework
    3.1.1 Overall Scheme
    3.1.2 Advantages
    3.1.3 Practical Approximation
  3.2 Tracking with the Tri-Model
    3.2.1 Likelihood of the Tri-Model
    3.2.2 Likelihood Maximization
    3.2.3 Estimating Pixel-Wise Labels
  3.3 Learning the Tri-Model
    3.3.1 Target Model
    3.3.2 Background Model
    3.3.3 Foreground Model
  3.4 Experimental Results
    3.4.1 Experimental Settings
    3.4.2 Tracking Accuracy: Bounding Box
    3.4.3 Tracking Accuracy: Pixel-Wise
  3.5 Remarks and Discussion
4 Tracking by Feature-point-wise Saliency Model
  4.1 Proposed Method
    4.1.1 Tracking based on GHT
    4.1.2 Descriptor Saliency and Feature DB Update
    4.1.3 Motion Saliency
  4.2 Experimental Results
    4.2.1 Tracking with Inaccurate Initializations
    4.2.2 Tracking Under Occlusions
  4.3 Remarks and Discussion
5 Tracking by Patch-wise Elastic Structure Model
  5.1 Tracking with Elastic Structure of Local Patches
    5.1.1 Sequential Bayesian Inference Framework
    5.1.2 Elastic Structure of Local Patches
    5.1.3 Modeling a Single Patch
    5.1.4 Modeling the Relationship between Patches
    5.1.5 Model Update
    5.1.6 Hierarchical Diffusion
    5.1.7 Summary of the Proposed Method
  5.2 Experiments
    5.2.1 Parameter Effects
    5.2.2 Performance Evaluation
    5.2.3 Discussion on Translation, Rotation, Illumination Changes
    5.2.4 Discussion on Partial Occlusions
    5.2.5 Discussion on Non-Rigid Deformations
    5.2.6 Discussion on Additional Cases
    5.2.7 Summary of Tracking Results
    5.2.8 Effectiveness of Hierarchical Diffusion
    5.2.9 Limitations
  5.3 Remarks and Discussion
6 Concluding Remarks and Future Works
Bibliography
Abstract in Korean
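The hierarchical diffusion of the last part, which concentrates samples where the posterior is large, can be sketched roughly as a coarse-to-fine search; the function and its parameters below are hypothetical simplifications, not the thesis's actual sampler:

```python
import random

def hierarchical_diffusion(posterior, center, radius, levels=3, n=50):
    """Coarse-to-fine sampling sketch: draw samples around the current
    best 2-D state, keep the best-scoring one, then repeat with a
    shrinking radius so computation concentrates near the posterior peak.
    `posterior` is any callable scoring a 2-D state (higher is better).
    """
    best, best_score = center, posterior(center)
    for _ in range(levels):
        for _ in range(n):
            s = (best[0] + random.uniform(-radius, radius),
                 best[1] + random.uniform(-radius, radius))
            score = posterior(s)
            if score > best_score:
                best, best_score = s, score
        radius *= 0.5  # refine around the current peak at the next level
    return best
```

Compared with drawing all samples at full resolution, this spends most evaluations in the region the coarse level already found promising.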

    4th SC@RUG 2007 proceedings:Student Colloquium 2006-2007


    Digital Image Processing

    This book presents several recent advances that are related to, or fall under the umbrella of, 'digital image processing', with the purpose of providing insight into the possibilities offered by digital image processing algorithms in various fields. The presented mathematical algorithms are accompanied by graphical representations and illustrative examples for enhanced readability. The chapters are written in a manner that allows even a reader with basic experience and knowledge in the digital image processing field to properly understand the presented algorithms. Concurrently, the structure of the information in this book is such that fellow scientists will be able to use it to push the development of the presented subjects even further.

    A Robotic System for Learning Visually-Driven Grasp Planning (Dissertation Proposal)

    We use findings in machine learning, developmental psychology, and neurophysiology to guide a robotic learning system's level of representation, both for actions and for percepts. Visually-driven grasping is chosen as the experimental task since it has general applicability and has been extensively researched from several perspectives. An implementation of a robotic system with a gripper, compliant instrumented wrist, arm, and vision is used to test these ideas. Several sensorimotor primitives (vision segmentation and manipulatory reflexes) are implemented in this system and may be thought of as the innate perceptual and motor abilities of the system. Applying empirical learning techniques to real situations raises such important issues as observation sparsity in high-dimensional spaces, arbitrary underlying functional forms of the reinforcement distribution, and robustness to noise in exemplars. The well-established technique of non-parametric projection pursuit regression (PPR) is used to accomplish reinforcement learning by searching for projections of high-dimensional data sets that capture task invariants. We also pursue the following problem: how can we use human expertise and insight into grasping to train a system to select both appropriate hand preshapes and approaches for a wide variety of objects, and then have it verify and refine its skills through trial and error? To accomplish this learning we propose a new class of Density Adaptive reinforcement learning algorithms. These algorithms use statistical tests to identify possibly interesting regions of the attribute space in which the dynamics of the task change. They automatically concentrate the building of high-resolution descriptions of the reinforcement in those areas, and build low-resolution representations in regions that are either not populated in the given task or are highly uniform in outcome. Additionally, the use of any learning process generally implies failures along the way. Therefore, the mechanics of the untrained robotic system must be able to tolerate mistakes during learning and not damage itself. We address this with an instrumented, compliant robot wrist that controls impact forces.
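The density-adaptive idea above (high resolution only where outcomes are populated and non-uniform) could be sketched in one attribute dimension as follows; this recursive partitioner is an illustrative simplification, not the proposal's actual algorithm:

```python
def adaptive_partition(samples, lo, hi, var_thresh=0.05,
                       min_samples=8, depth=0, max_depth=6):
    """Recursively split the interval [lo, hi) only where the observed
    reinforcement is both well-populated and non-uniform, so that
    high-resolution cells are built where the task's outcome changes.

    samples: list of (attribute_value, reward) pairs.
    Returns a list of (lo, hi, mean_reward) cells; mean_reward is None
    for unpopulated cells.
    """
    inside = [(x, r) for x, r in samples if lo <= x < hi]
    if not inside:
        return [(lo, hi, None)]          # unpopulated: keep low resolution
    rewards = [r for _, r in inside]
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    if var <= var_thresh or len(inside) < min_samples or depth >= max_depth:
        return [(lo, hi, mean)]          # uniform outcome: stop splitting
    mid = (lo + hi) / 2.0
    return (adaptive_partition(inside, lo, mid, var_thresh,
                               min_samples, depth + 1, max_depth)
            + adaptive_partition(inside, mid, hi, var_thresh,
                                 min_samples, depth + 1, max_depth))
```

A statistical test on the outcome distribution (rather than a raw variance threshold) and a multi-dimensional attribute space would bring this closer to the algorithms the proposal describes.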


    Essential Speech and Language Technology for Dutch: Results by the STEVIN-programme

    Computational Linguistics; Germanic Languages; Artificial Intelligence (incl. Robotics); Computing Methodologies