89 research outputs found

    Spatial Pyramid Context-Aware Moving Object Detection and Tracking for Full Motion Video and Wide Aerial Motion Imagery

    A robust and fast automatic moving object detection and tracking system is essential for characterizing a target object and extracting spatial and temporal information in applications such as video surveillance, urban traffic monitoring, navigation, and robotics. In this dissertation, I present a collaborative Spatial Pyramid Context-aware moving object detection and Tracking (SPCT) system. The proposed visual tracker is composed of one master tracker, which relies on visual object features, and two auxiliary trackers based on the object's temporal motion information, which are invoked dynamically to assist the master tracker. SPCT utilizes image spatial context at different levels to make the tracking system resistant to occlusion and background noise and to improve target localization accuracy and robustness. We chose a pre-selected set of seven complementary feature channels, including RGB color, intensity, and a spatial pyramid of HoG, to encode the object's color, shape, and spatial layout. We exploit the integral histogram as a building block to meet real-time performance demands. A novel fast algorithm is presented to accurately evaluate spatially weighted local histograms in constant time using an extension of the integral histogram method. Different techniques are explored to compute integral histograms efficiently on GPU architectures and are applied to fast spatio-temporal median computation and texturing for 3D face reconstruction. We also propose a multi-component framework based on the semantic fusion of motion information with a projected building footprint map to significantly reduce the false alarm rate in urban scenes with many tall structures. Experiments on the extensive VOTC2016 benchmark dataset and on aerial video confirm that combining complementary tracking cues in an intelligent fusion framework enables persistent tracking for Full Motion Video and Wide Aerial Motion Imagery. Comment: PhD dissertation (162 pages).
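
    The constant-time local histogram evaluation rests on the integral histogram idea: per-bin cumulative sums over the image let the histogram of any rectangle be read with four lookups per bin, independent of region size. A minimal sketch of that idea follows; the bin count, dtype, and function names are illustrative assumptions, not the dissertation's code.

```python
import numpy as np

def integral_histogram(image, n_bins=16):
    """Build a per-bin integral image so any rectangle's histogram
    can be read with four lookups per bin (O(1) in region size)."""
    h, w = image.shape
    bins = (image.astype(np.int32) * n_bins) // 256          # bin index per pixel
    # one binary plane per bin, then cumulative sums along both axes
    planes = (bins[:, :, None] == np.arange(n_bins)[None, None, :]).astype(np.int64)
    ih = planes.cumsum(axis=0).cumsum(axis=1)
    # pad with a zero row/column so region queries need no edge checks
    return np.pad(ih, ((1, 0), (1, 0), (0, 0)))

def region_histogram(ih, top, left, bottom, right):
    """Histogram of image[top:bottom, left:right] in constant time."""
    return (ih[bottom, right] - ih[top, right]
            - ih[bottom, left] + ih[top, left])

# usage: histogram of a 40x40 patch of a random grayscale frame
frame = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
ih = integral_histogram(frame)
print(region_histogram(ih, 10, 20, 50, 60))   # 16 counts summing to 1600
```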

    Joint localization of pursuit quadcopters and target using monocular cues

    Pursuit robots (autonomous robots tasked with tracking and pursuing a moving target) require accurate tracking of the target's position over time. One possibly effective pursuit platform is a quadcopter equipped with basic sensors and a monocular camera. However, the combined noise of the quadcopter's sensors causes large disturbances in the target's 3D position estimate. To solve this problem, in this paper we propose a novel method for joint localization of a quadcopter pursuer with a monocular camera and an arbitrary target. Our method localizes both the pursuer and the target with respect to a common reference frame. The joint localization method fuses the quadcopter's kinematics and the target's dynamics in a joint state-space model. We show that predicting and correcting the pursuer and target trajectories simultaneously produces better results than standard approaches that estimate relative target trajectories in a 3D coordinate system. Our method also includes a computationally efficient visual tracking method capable of re-detecting a temporarily lost target. The efficiency of the proposed method is demonstrated by a series of experiments with a real quadcopter pursuing a human. The results show that the visual tracker can deal effectively with target occlusions and that joint localization outperforms standard localization methods.
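
    The joint state-space idea can be illustrated with an ordinary linear Kalman filter whose state stacks pursuer and target position and velocity, so a single correction step updates both at once. This is only a sketch under assumed constant-velocity dynamics and an assumed measurement model (pursuer position plus a camera-derived relative target position); it is not the paper's exact filter, and all noise values are made up.

```python
import numpy as np

dt = 0.1
I3 = np.eye(3)
Z3 = np.zeros((3, 3))

# constant-velocity dynamics for both agents in one 12-dim state
# x = [pursuer pos, pursuer vel, target pos, target vel]
F_cv = np.block([[I3, dt * I3], [Z3, I3]])
F = np.block([[F_cv, np.zeros((6, 6))], [np.zeros((6, 6)), F_cv]])

# assumed measurements: pursuer position (onboard sensors) and the
# camera-derived relative target position (target pos minus pursuer pos)
H = np.zeros((6, 12))
H[0:3, 0:3] = I3
H[3:6, 6:9] = I3
H[3:6, 0:3] = -I3

Q = 0.01 * np.eye(12)   # process noise (assumed)
R = 0.25 * np.eye(6)    # measurement noise (assumed)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def correct(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(12) - K @ H) @ P
    return x, P

x, P = np.zeros(12), np.eye(12)
z = np.array([0.0, 0.0, 1.5, 2.0, 0.5, 0.0])   # pursuer pos, relative target pos
x, P = predict(x, P)
x, P = correct(x, P, z)
print(x[6:9])   # jointly estimated absolute target position
```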

    A mathematical model for computerized car crash detection using computer vision techniques

    My proposed approach to the automatic detection of traffic accidents at a signalized intersection is presented here. In this method, a digital camera is strategically placed to view the entire intersection. Images are captured, processed, and analyzed for the presence of vehicles and pedestrians in the proposed detection zones, and are then processed further to detect whether an accident has occurred. The mathematical model presented is a Poisson distribution that predicts the number of accidents per week at an intersection, which can be used as an approximation for modeling the crash process. We believe the crash process can be modeled with a two-state method, in which the intersection is in one of two states: clear (no accident) or obstructed (accident). We can then incorporate a rule-based AI system to help identify that a crash has taken place or will possibly take place. We model the intersection as a service facility that processes vehicles in a relatively small amount of time; a traffic accident is then perceived as an interruption of that service.
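
    Under the Poisson model, the probability of observing k accidents in a week is exp(-λ)·λ^k/k!. A tiny sketch of that calculation; the weekly rate used here is a made-up illustrative value, not a figure from the dissertation.

```python
from math import exp, factorial

lambda_per_week = 0.3   # assumed expected crashes per week at the intersection

def prob_k_crashes(k, lam=lambda_per_week):
    """P(K = k) for a Poisson-distributed weekly crash count."""
    return exp(-lam) * lam ** k / factorial(k)

print(prob_k_crashes(0))        # probability of a crash-free week
print(1 - prob_k_crashes(0))    # probability of at least one crash
```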

    Video Object Segmentation and Tracking Using GMM and GMM-RBF Method for Surveillance System

    Nowadays computer vision is applied in almost every organisation, especially for security: organisations use different monitoring systems such as surveillance and suspicious-activity monitoring systems. Object tracking and description is the ultimate purpose of many video processing systems. The two critical, low-level computer vision tasks undertaken in this work are foreground-background segmentation and object tracking. In a surveillance system, cameras capture footage for tracking suspicious movement within an organisation; the most difficult tasks in such footage are tracking an object through the video and extracting a separate image of it, even when that image is too vague for identification. The surveillance system generally works as follows: we use a stochastic model of the background and adapt the model through time. This adaptive nature is essential for long-term surveillance applications, particularly when the background composition or intensity distribution changes with time; in such cases, the concept of a static reference background would no longer make sense. DOI: 10.17762/ijritcc2321-8169.15062
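
    The adaptive stochastic background model described here is in the spirit of Gaussian-mixture (GMM) background subtraction. A minimal sketch using OpenCV's MOG2 implementation, a stand-in for the paper's GMM stage rather than its code; the video path, the OpenCV 4 API, and all threshold values are assumptions.

```python
import cv2

cap = cv2.VideoCapture("surveillance.avi")          # assumed input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)                # background model adapts every frame
    fg_mask = cv2.medianBlur(fg_mask, 5)             # suppress speckle noise
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 200:                 # keep plausibly sized objects only
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("foreground", frame)
    if cv2.waitKey(30) & 0xFF == 27:                 # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```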

    A Bayesian Approach For Image-Based Underwater Target Tracking And Navigation [TC1800. A832 2007 f rb].

    Undersea inspections and surveys are important requirements for the offshore industry and mining organisations for various infrastructure installations. During the last decade, the use of underwater structure installations, such as oil or gas pipelines and telecommunication cables, has increased manyfold. Routine inspection is essential to prevent damage.

    Annex 16 : automated traffic monitoring for complex road conditions

    Recent advancements in computer vision and machine learning have made traffic monitoring systems highly effective in well-structured traffic conditions such as highways. However, these systems struggle to handle the complex and irregular conditions found in developing countries, owing to a lack of infrastructure and regulation. This research breaks the problem down into sub-tasks (vehicle detection, vehicle tracking, and vehicle recognition) and then combines each process into a single pipeline that can be used for traffic monitoring, as sketched below. Implementing the final pipeline involves improving and aggregating existing techniques. Results demonstrate the potential of these techniques for automated traffic monitoring.
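
    The pipeline structure (detection feeding tracking feeding recognition) can be sketched as follows. Every class, method name, and threshold here is a hypothetical placeholder with trivial stand-in components, not the techniques the research actually aggregates.

```python
import numpy as np

class StubDetector:
    def detect(self, frame):
        # pretend one vehicle was found; a real system would run a detector here
        return [(40, 60, 80, 50)]                      # (x, y, w, h)

class NearestBoxTracker:
    """Greedy nearest-centre association; unmatched tracks are simply dropped."""
    def __init__(self):
        self.tracks = {}                               # id -> last box
        self.next_id = 0
    def update(self, detections):
        assigned = {}
        for box in detections:
            cx, cy = box[0] + box[2] / 2, box[1] + box[3] / 2
            best, best_d = None, 50.0                  # max association distance (assumed)
            for tid, (px, py, pw, ph) in self.tracks.items():
                d = np.hypot(cx - (px + pw / 2), cy - (py + ph / 2))
                if d < best_d:
                    best, best_d = tid, d
            if best is None:
                best, self.next_id = self.next_id, self.next_id + 1
            assigned[best] = box
        self.tracks = assigned
        return assigned

class StubRecognizer:
    def classify(self, frame, box):
        return "car"                                   # a real system would classify the crop

def monitor(frames, detector, tracker, recognizer):
    for frame in frames:
        tracks = tracker.update(detector.detect(frame))
        yield {tid: (box, recognizer.classify(frame, box)) for tid, box in tracks.items()}

frames = [np.zeros((240, 320, 3), dtype=np.uint8) for _ in range(3)]
for result in monitor(frames, StubDetector(), NearestBoxTracker(), StubRecognizer()):
    print(result)
```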

    Robust object tracking algorithms using C++ and MATLAB

    Object tracking is, in general, a challenging problem. Difficulties arise from abrupt object motion, changes in scene and object appearance, and non-rigid object structures. Full and partial occlusions and camera motion pose further challenges. Commonly, assumptions are made to constrain the tracking problem in the context of a specific application. Often it is necessary to track all moving objects in real-time video. Tracking by colour performs well when the colour of the target is distinct from its background. Tracking using contours as a feature is effective even for non-rigid targets. Tracking using spatial histograms (spatiograms) gives satisfactory results even when the target changes size or sits against a similarly coloured background. In this project, robust algorithms based on colour, contours, and spatiograms for tracking moving objects have been studied, proposed, and implemented.
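
    Colour-based tracking of the kind discussed here is commonly realized with histogram back-projection and mean shift. A minimal sketch using OpenCV, not the project's C++/MATLAB code; the video path, initial window, and histogram thresholds are assumptions.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("sequence.avi")        # assumed input video
ok, frame = cap.read()
x, y, w, h = 200, 150, 60, 80                 # assumed initial target window

# hue histogram of the target, restricted to reasonably saturated pixels
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    _, track_window = cv2.meanShift(back_proj, track_window, term)
    x, y, w, h = track_window
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:
        break
cap.release()
cv2.destroyAllWindows()
```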

    Motion tracking on embedded systems: vision-based vehicle tracking using image alignment with symmetrical function.

    Cheung, Lap Chi. Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. Includes bibliographical references (leaves 91-95). Abstracts in English and Chinese.
    Contents:
    1. Introduction
      1.1. Background
        1.1.1. Introduction to Intelligent Vehicle
        1.1.2. Typical Vehicle Tracking Systems for Rear-end Collision Avoidance
        1.1.3. Passive vs. Active Vehicle Tracking
        1.1.4. Vision-based Vehicle Tracking Systems
        1.1.5. Characteristics of Computing Devices on Vehicles
      1.2. Motivation and Objectives
      1.3. Major Contributions
        1.3.1. A 3-phase Vision-based Vehicle Tracking Framework
        1.3.2. Camera-to-vehicle Distance Measurement by Single Camera
        1.3.3. Real Time Vehicle Detection
        1.3.4. Real Time Vehicle Tracking using Simplified Image Alignment
      1.4. Evaluation Platform
      1.5. Thesis Organization
    2. Related Work
      2.1. Stereo-based Vehicle Tracking
      2.2. Motion-based Vehicle Tracking
      2.3. Knowledge-based Vehicle Tracking
      2.4. Commercial Systems
    3. 3-phase Vision-based Vehicle Tracking Framework
      3.1. Introduction to the 3-phase Framework
      3.2. Vehicle Detection
        3.2.1. Overview of Vehicle Detection
        3.2.2. Locating the Vehicle Center - Symmetrical Measurement
        3.2.3. Locating the Vehicle Roof and Bottom
        3.2.4. Locating the Vehicle Sides - Over-complete Haar Transform
      3.3. Vehicle Template Tracking by Image Alignment
        3.3.5. Overview of Vehicle Template Tracking
        3.3.6. Goal of Image Alignment
        3.3.7. Alternative Image Alignment - Compositional Image Alignment
        3.3.8. Efficient Image Alignment - Inverse Compositional Algorithm
      3.4. Vehicle Template Update
        3.4.1. Situation of Vehicle Lost
        3.4.2. Template Filling by Updating the Positions of Vehicle Features
      3.5. Experiments and Discussions
        3.5.1. Experiment Setup
        3.5.2. Successful Tracking Percentage
      3.6. Comparing with Other Tracking Methodologies
        3.6.1. 1-phase Vision-based Vehicle Tracking
        3.6.2. Image Correlation
        3.6.3. Continuously Adaptive Mean Shift
    4. Camera-to-Vehicle Distance Measurement by Single Camera
      4.1. The Principle of Law of Perspective
      4.2. Distance Measurement by Single Camera
    5. Real Time Vehicle Detection
      5.1. Introduction
      5.2. Timing Analysis of Vehicle Detection
      5.3. Symmetrical Measurement Optimization
        5.3.1. Diminished Gradient Image for Symmetrical Measurement
        5.3.2. Replacing Division by Multiplication Operations
      5.4. Over-complete Haar Transform Optimization
        5.4.1. Characteristics of Over-complete Haar Transform
        5.4.2. Pre-computation of Haar Block
      5.5. Summary
    6. Real Time Vehicle Tracking using Simplified Image Alignment
      6.1. Introduction
      6.2. Timing Analysis of Original Image Alignment
      6.3. Simplified Image Alignment
        6.3.1. Reducing the Number of Parameters in Affine Transformation
        6.3.2. Size Reduction of Image Alignment Matrices
      6.4. Experiments and Discussions
        6.4.1. Successful Tracking Percentage
        6.4.2. Timing Improvement
    7. Conclusions
    8. Bibliography
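
    Chapter 4 estimates camera-to-vehicle distance from a single camera via the law of perspective. A minimal sketch of the standard flat-road pinhole relation such an approach typically relies on; the parameter names and example values are illustrative assumptions, not figures from the thesis.

```python
def distance_to_vehicle(focal_length_px, camera_height_m, y_bottom_px, y_horizon_px):
    """Estimate camera-to-vehicle distance from where the vehicle's bottom
    edge meets the road in the image (pinhole camera, level road assumed)."""
    if y_bottom_px <= y_horizon_px:
        raise ValueError("vehicle bottom must lie below the horizon line")
    return focal_length_px * camera_height_m / (y_bottom_px - y_horizon_px)

# example: 700 px focal length, camera 1.2 m above the road,
# vehicle bottom 40 px below the horizon -> 21 m ahead
print(distance_to_vehicle(700, 1.2, 280, 240))
```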