
    Tracking Skin-Colored Objects in Real-Time

    We present a methodology for tracking multiple skin-colored objects in a monocular image sequence. The proposed approach encompasses a collection of techniques that allow the modeling, detection and temporal association of skin-colored objects across image sequences. A non-parametric model of skin color is employed. Skin-colored objects are detected with a Bayesian classifier that is bootstrapped with a small set of training data and refined through an off-line iterative training procedure. By using on-line adaptation of skin-color probabilities, the classifier is able to cope with considerable illumination changes. Tracking over time is achieved by a novel technique that can handle multiple objects simultaneously. Tracked objects may move along complex trajectories, occlude each other in the field of view of a possibly moving camera, and vary in number over time. A prototype implementation of the developed system operates on 320x240 live video in real time (28 Hz) on a conventional Pentium IV processor. Representative experimental results from the application of this prototype to image sequences are also presented.
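
    The abstract does not give implementation details, but the core idea of a non-parametric (histogram-based) Bayesian skin classifier with on-line adaptation can be sketched as follows. The bin counts, prior, and adaptation rate below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch of a non-parametric (histogram-based) Bayesian skin classifier.
# Bin counts, the skin prior, and the adaptation rate are illustrative
# assumptions, not values from the paper.

BINS = 32  # histogram bins per chrominance channel

def build_histograms(skin_pixels, all_pixels, bins=BINS):
    """Estimate P(c|skin) and P(c) from (N, 2) arrays of chrominance values."""
    skin_hist, _ = np.histogramdd(skin_pixels, bins=(bins, bins), range=((0, 256), (0, 256)))
    all_hist, _ = np.histogramdd(all_pixels, bins=(bins, bins), range=((0, 256), (0, 256)))
    p_c_skin = skin_hist / max(skin_hist.sum(), 1)
    p_c = all_hist / max(all_hist.sum(), 1)
    return p_c_skin, p_c

def skin_probability(chroma, p_c_skin, p_c, p_skin=0.3):
    """Bayes rule per pixel: P(skin|c) = P(c|skin) * P(skin) / P(c)."""
    idx = np.clip((chroma * BINS // 256).astype(int), 0, BINS - 1)
    likelihood = p_c_skin[idx[..., 0], idx[..., 1]]
    evidence = np.maximum(p_c[idx[..., 0], idx[..., 1]], 1e-8)
    return np.clip(likelihood * p_skin / evidence, 0.0, 1.0)

def adapt(p_c_skin, new_skin_pixels, rate=0.05):
    """On-line adaptation: blend a histogram of newly accepted skin pixels into the model."""
    new_hist, _ = np.histogramdd(new_skin_pixels, bins=(BINS, BINS), range=((0, 256), (0, 256)))
    new_hist /= max(new_hist.sum(), 1)
    return (1 - rate) * p_c_skin + rate * new_hist
```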

    Mobile Robot Range Sensing through Visual Looming

    This article describes and evaluates visual looming as a monocular range sensing method for mobile robots. The looming algorithm is based on the relationship between the displacement of a camera relative to an object and the resulting change in the size of the object's image on the focal plane of the camera. We have carried out systematic experiments to evaluate the ranging accuracy of the looming algorithm using a Pioneer I mobile robot equipped with a color camera. We have also performed a noise sensitivity analysis for the looming algorithm, obtaining theoretical error bounds on the range estimates for given levels of odometric and visual noise, which were verified through experimental data. Our results suggest that looming can be used as a robust, inexpensive range sensor as a complement to sonar. Defense Advanced Research Projects Agency; Office of Naval Research; Navy Research Laboratory (00014-96-1-0772, 00014-95-1-0409)
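
    The looming relationship itself follows from the pinhole model: for an object of fixed physical size, image size is inversely proportional to range, so two measurements of image size plus the known camera displacement determine the range. The sketch below is a minimal worked form of that relation; the paper's exact formulation and error model may differ.

```python
def range_from_looming(w1, w2, displacement):
    """
    Estimate range to an object from the change in its image size as the
    camera moves toward it (pinhole-model sketch; the paper's exact
    formulation and error model may differ).

    w1, w2       : image size (e.g. width in pixels) before / after moving
    displacement : distance the camera moved toward the object
                   (same units as the returned range), d = r1 - r2 > 0

    Pinhole model: w * r = const  =>  w1 * r1 = w2 * r2 = w2 * (r1 - d)
                   =>  r1 = d * w2 / (w2 - w1)
    """
    if w2 <= w1:
        raise ValueError("image size must grow as the camera approaches")
    r1 = displacement * w2 / (w2 - w1)
    return r1, r1 - displacement  # range before and after the move

# Example: the image width grows from 40 px to 50 px after moving 0.5 m closer,
# so the initial range is 0.5 * 50 / 10 = 2.5 m and the current range is 2.0 m.
```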

    Demo: real-time indoors people tracking in scalable camera networks

    In this demo we present a people tracker for indoor environments. The tracker executes in a network of smart cameras with overlapping views. Special attention is given to real-time processing through the distribution of tasks between the cameras and the fusion server. Each camera processes its images and tracks people in the image plane. Instead of camera images, only metadata (a bounding box per person) are sent from each camera to the fusion server. The metadata are used on the server side to estimate the position of each person in real-world coordinates. Although the tracker is designed to suit any indoor environment, in this demo the tracker's performance is presented in a meeting scenario, where occlusions of people by other people and/or furniture are significant and occur frequently. Multiple cameras ensure views from multiple angles, which keeps tracking accurate even in cases of severe occlusion in some of the views.
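
    The demo abstract states only that bounding-box metadata are fused into real-world positions on the server. One common way to do this, assumed here purely for illustration, is to map each box's foot point through a per-camera image-to-ground homography and average the resulting ground-plane points:

```python
import numpy as np

# Sketch of server-side fusion of bounding-box metadata from multiple cameras.
# The ground-plane-homography approach and the simple averaging are assumptions;
# the demo abstract only states that bounding boxes are fused into world coordinates.

def foot_point(bbox):
    """Bottom-center of a bounding box (x, y, w, h) in homogeneous image coordinates."""
    x, y, w, h = bbox
    return np.array([x + w / 2.0, y + h, 1.0])

def to_ground_plane(bbox, H_img_to_world):
    """Project the foot point through a 3x3 image-to-ground homography."""
    p = H_img_to_world @ foot_point(bbox)
    return p[:2] / p[2]

def fuse(detections):
    """
    detections: list of (bbox, H_img_to_world) pairs, one per camera that
    currently sees the person. Returns the averaged world position.
    """
    points = [to_ground_plane(bbox, H) for bbox, H in detections]
    return np.mean(points, axis=0)
```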

    Vehicle Detection for Traffic Flow Analysis

    Soodamani, R & Varsani, V (2016), Vehicle Detection for Traffic Flow Analysis, ICCST2016, Paper presented at the IEEE International Carnahan Conference on Security Technology, 24-27 October 2016, Orlando, Florida. This document is the Accepted Manuscript version. The Version of Record, published in Security Technology (ICCST), 2016 is available online at IEEE Xplore: http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7815693, doi: https://doi.org/10.1109/CCST.2016.7815693. © IEEE 2016.This paper looks at some of the algorithms that can be used for effective detection and tracking of vehicles, in particular for statistical analysis. The main methods for tracking discussed and implemented are blob analysis, optical flow and foreground detection. A further analysis is also done testing two of the techniques using a number of video sequences that include different levels of difficulties.Final Accepted Versio

    Coffee Queue Project

    In this paper, a computer vision system for counting people standing in line is presented. In this application, common techniques such as Adaptive Background Subtraction (ABS), blob tracking with a Kalman filter, and occlusion-resistant techniques are used to detect and track people. Additionally, a novel method using Dual Adaptive Background Subtractors (DABS) is implemented for dynamically determining the line region in a real-world crowded scene, and also as an alternative target acquisition method to regular ABS. The DABS technique acts as a temporal bandpass filter for motion, helping identify people standing in line in the presence of other moving people. This is achieved by using two ABS with different temporal adaptiveness. Unlike other computer vision papers which perform tests in highly controlled environments, the DABS technique is tested in a crowded Starbucks© at the Cal Poly student union. For any line length, the results show that DABS has a mean error lower by one or more people when compared to ABS. Even in challenging crowded scenes where the line can reach 19 people in length, DABS achieves a Normalized RMS Error of 43%.
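
    The abstract describes DABS only at a high level. The sketch below illustrates the temporal-bandpass idea using two OpenCV MOG2 subtractors with different (assumed) learning rates: a fast-adapting model quickly absorbs people who have stopped moving, while a slow-adapting model still marks them as foreground, so their difference highlights people standing in line.

```python
import cv2

# Sketch of the Dual Adaptive Background Subtractor (DABS) idea as a temporal
# bandpass: people who have stopped moving (e.g. standing in line) are absorbed
# by a fast-adapting background model but remain foreground in a slow-adapting
# one. The learning rates are assumptions, not values from the paper.

fast_bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
slow_bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def queue_mask(frame, fast_rate=0.05, slow_rate=0.001):
    fast_fg = fast_bg.apply(frame, learningRate=fast_rate)   # only currently moving pixels
    slow_fg = slow_bg.apply(frame, learningRate=slow_rate)   # moving + recently stopped pixels
    # Recently stopped (standing) pixels: foreground for the slow model only.
    standing = cv2.bitwise_and(slow_fg, cv2.bitwise_not(fast_fg))
    return cv2.medianBlur(standing, 5)
```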

    Do-It-Yourself Single Camera 3D Pointer Input Device

    We present a new algorithm for single camera 3D reconstruction, or 3D input for human-computer interfaces, based on precise tracking of an elongated object, such as a pen, having a pattern of colored bands. To configure the system, the user provides no more than one labelled image of a handmade pointer, measurements of its colored bands, and the camera's pinhole projection matrix. Other systems are of much higher cost and complexity, requiring combinations of multiple cameras, stereo cameras, and pointers with sensors and lights. Instead of relying on information from multiple devices, we examine our single view more closely, integrating geometric and appearance constraints to robustly track the pointer in the presence of occlusion and distractor objects. By probing objects of known geometry with the pointer, we demonstrate acceptable accuracy of 3D localization. Comment: 8 pages, 6 figures, 2018 15th Conference on Computer and Robot Vision
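
    One geometric ingredient of such a single-view system, sketched here under the standard pinhole model (this is not the paper's full algorithm, which also integrates appearance constraints), is back-projecting each detected band center to a viewing ray using the supplied projection matrix; the known band spacing then constrains the depths along those rays.

```python
import numpy as np

# Building-block sketch: back-project detected band centers to viewing rays
# using the camera's 3x4 pinhole projection matrix P = K [R | t]. The paper's
# full tracker additionally fuses appearance constraints; that part is omitted.

def camera_center(P):
    """Camera center C is the right null space of P (P @ [C; 1] = 0)."""
    _, _, vt = np.linalg.svd(P)
    c = vt[-1]
    return c[:3] / c[3]

def pixel_to_ray(P, u, v):
    """Unit direction of the viewing ray through pixel (u, v)."""
    M = P[:, :3]                                   # P = [M | p4]
    d = np.linalg.solve(M, np.array([u, v, 1.0]))  # X(lam) = C + lam * d
    return d / np.linalg.norm(d)

# A 3D point on the pointer at depth lam along the ray through pixel (u, v):
#   X(lam) = camera_center(P) + lam * pixel_to_ray(P, u, v)
# The known spacing between colored bands then constrains the depths of the
# band centers along their respective rays.
```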

    Detecting Invasive Insects with Unmanned Aerial Vehicles

    A key aspect of controlling and reducing the effects invasive insect species have on agriculture is to obtain knowledge about their migration patterns. Current state-of-the-art methods of studying these migration patterns involve a mark-release-recapture technique, in which insects are marked and released, and researchers attempt to recapture them later. However, this approach involves a human researcher manually searching for these insects in large fields and results in very low recapture rates. In this paper, we propose an automated system for detecting released insects using an unmanned aerial vehicle. This system utilizes ultraviolet lighting technology, digital cameras, and lightweight computer vision algorithms to detect insects more quickly and accurately than the current state of the art. The efficiency and accuracy that this system provides will allow for a more comprehensive understanding of invasive insect species' migration patterns. Our experimental results demonstrate that our system can detect real target insects in field conditions with high precision and recall rates. Comment: IEEE ICRA 2019. 7 pages
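
    As a rough illustration of the kind of lightweight detection pipeline the abstract describes, the sketch below thresholds the bright fluorescent response of UV-marked insects and filters connected components by size. The HSV range and area limits are placeholder assumptions; the paper's actual pipeline and thresholds are not given in the abstract.

```python
import cv2
import numpy as np

# Sketch of a lightweight detector for UV-fluorescent markers in aerial imagery.
# The HSV range and area limits are placeholder assumptions; the paper's actual
# pipeline and thresholds are not given in the abstract.

def detect_markers(frame_bgr, lo=(35, 80, 120), hi=(95, 255, 255), min_area=4, max_area=400):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo), np.array(hi))    # bright fluorescent response
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    detections = []
    for i in range(1, n):                                   # label 0 is background
        area = stats[i, cv2.CC_STAT_AREA]
        if min_area <= area <= max_area:                    # reject noise and large glare
            detections.append(tuple(centroids[i]))
    return detections
```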

    Real time hand gesture recognition including hand segmentation and tracking

    In this paper we present a system that performs automatic gesture recognition. The system consists of two main components: (i) a unified technique for segmentation and tracking of the face and hands using a skin detection algorithm, along with handling of occlusion between skin objects to keep track of the status of the occluded parts. This is realized by combining three useful features, namely color, motion and position. (ii) A static and dynamic gesture recognition system. Static gesture recognition is achieved using a robust hand shape classification, based on PCA subspaces, which is invariant to scale as well as to small translation and rotation transformations. Combining hand shape classification with position information and using DHMMs allows us to accomplish dynamic gesture recognition.
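
    The abstract names PCA-subspace hand shape classification without further detail. A common formulation, assumed here for illustration, trains one PCA subspace per hand-shape class and assigns a new, scale-normalized hand mask to the class with the smallest reconstruction error:

```python
import numpy as np

# Sketch of PCA-subspace hand-shape classification: one subspace per gesture
# class, classification by smallest reconstruction error. The number of
# components and the reconstruction-error criterion are assumptions; the
# paper's exact classifier details are not given in the abstract.

def fit_subspace(X, n_components=10):
    """X: (n_samples, n_pixels) flattened, scale-normalized hand masks."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]          # mean and principal directions

def reconstruction_error(x, subspace):
    mean, basis = subspace
    coeffs = basis @ (x - mean)             # project onto the class subspace
    x_hat = mean + basis.T @ coeffs         # reconstruct from the subspace
    return np.linalg.norm(x - x_hat)

def classify(x, subspaces):
    """subspaces: dict mapping class label -> (mean, basis)."""
    return min(subspaces, key=lambda label: reconstruction_error(x, subspaces[label]))
```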