
    Multiple Vehicle Detection and Tracking in Hard Real Time

    A vision system has been developed that recognizes and tracks multiple vehicles in hard real time from sequences of gray-scale images taken from a moving car. Recognition is accomplished by combining the analysis of single image frames with the analysis of the motion information provided by multiple consecutive image frames. In single image frames, cars are recognized by matching deformable gray-scale templates, by detecting image features, such as corners, and by evaluating how these features relate to each other. Cars are also recognized by differencing consecutive image frames and by tracking motion parameters that are typical of cars. The vision system utilizes the hard real-time operating system Maruti, which guarantees that the timing constraints on the various processes of the vision system are satisfied. The dynamic creation and termination of tracking processes optimizes the amount of computational resources spent and allows fast detection and tracking of multiple cars. Experimental results demonstrate robust, real-time recognition and tracking over thousands of image frames. (Also cross-referenced as UMIACS-TR-96-52.)
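    The frame-differencing cue mentioned in the abstract can be sketched as a thresholded absolute difference between consecutive gray-scale frames; this is a minimal illustration only, not the paper's full pipeline, which also combines template matching and feature analysis.

```python
import numpy as np

def frame_difference(prev: np.ndarray, curr: np.ndarray, thresh: int = 25) -> np.ndarray:
    """Binary motion mask from two consecutive gray-scale frames."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

# Toy example: a bright 2x2 "vehicle" patch moves one pixel to the right,
# so only the trailing and leading columns of the patch register as motion.
prev = np.zeros((6, 6), dtype=np.uint8)
curr = np.zeros((6, 6), dtype=np.uint8)
prev[2:4, 1:3] = 200
curr[2:4, 2:4] = 200
mask = frame_difference(prev, curr)
```

    The overlap region of the two patch positions cancels out, which is why differencing alone tends to highlight object boundaries rather than full silhouettes.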

    Extending Linear System Models to Characterize the Performance Bounds of a Fixating Active Vision System

    If active vision systems are to be used reliably in practical applications, it is crucial to understand their limits and failure modes. In the work presented here, we derive, theoretically and experimentally, bounds on the performance of an active vision system in a fixation task. In particular, we characterize the tracking limits that are imposed by the finite field of view. Two classes of target motion are examined: sinusoidal motions, representative of targets moving with high turning rates, and constant-velocity motions, exemplary of slowly varying target movements. For each class of motion, we identify a linear model of the fixating system from measurements on a real active vision system and analyze the range of target motions that can be handled with a given field of view. To illustrate the utility of such performance bounds, we sketch how the tracking performance can be maximized by dynamically adapting optical parameters of the system to the characteristics of the target motion. The originality of our work arises from combining the theoretical analysis of a complete active vision system with rigorous performance measurements on the real system. We generate repeatable and controllable target motions with the help of two robot manipulators and measure the real-time performance of the system. The experimental results are used to verify or identify a linear model of the active vision system. A major difference from related work lies in analyzing the limits of the linear models that we develop. Active vision systems have been modeled as linear systems many times before, but the performance limits at which the models break down and the system loses its target have not attracted much attention so far. With our work we hope to demonstrate how the knowledge of such limits can be used to actually extend the performance of an active vision system.
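    The kind of bound the abstract describes can be illustrated with a hypothetical first-order fixation loop, L(s) = k/s, which is an assumption for illustration and not the paper's identified model. For a sinusoidal target of frequency w, the steady-state pointing error is the amplitude times the sensitivity magnitude |S(jw)| = w / sqrt(w^2 + k^2), and the target leaves the image once that error exceeds half the field of view.

```python
import math

def max_amplitude(omega: float, fov_deg: float, k: float) -> float:
    """Largest sinusoidal amplitude (deg) trackable without losing the target.

    Assumes a first-order loop L(s) = k/s, so |S(jw)| = w / sqrt(w^2 + k^2);
    the target is lost when the steady-state error A * |S| exceeds fov/2.
    """
    s_mag = omega / math.sqrt(omega**2 + k**2)
    return (fov_deg / 2.0) / s_mag

# Slow oscillations tolerate much larger amplitudes than fast ones,
# matching the intuition that the field of view binds at high turning rates.
slow = max_amplitude(omega=0.5, fov_deg=10.0, k=5.0)
fast = max_amplitude(omega=5.0, fov_deg=10.0, k=5.0)
```

    Under this toy model, widening the field of view or raising the loop gain k both extend the admissible amplitude, which is the trade-off the abstract exploits by adapting optical parameters.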

    Automation of CAD model based assembly simulations using motion capture

    The manufacturing industry often uses text and video as a supplement for worker training procedures. Unfortunately, text is often difficult to follow and video lends itself to occlusion issues. CAD based animations of assembly operations can overcome these shortcomings while offering greater potential for operations analysis and simulation flexibility. However, creating such CAD based training simulations manually is a time intensive task and often fails to reveal the practical assembly issues faced in the real world. Thus, it is highly beneficial to be able to automate these simulations using motion capture from a physical environment. This thesis research summary includes the development and demonstration of a low-cost versatile motion tracking system and its application for generating CAD based assembly simulations. The motion tracking system is practically attractive because it is inexpensive, wireless, and easily portable. The system development focus herein is on two important aspects. One is generation of model based simulation, and the other is motion capture. Multiple Wii Remotes (Wiimotes) are used to form vision systems to perform 3D motion tracking. All six DOF of a part can be tracked with the help of four infra-red (IR) LEDs mounted on the part to be tracked. The obtained data is fed in real-time to automatically generate an assembly simulation of object models represented by Siemens NX5 CAD software. A Wiimote Vision System Setup Toolkit has been developed to help users in setting up a new vision system using Wiimotes given the required volume and floor area to be tracked. Implementation examples have been developed with different physical assemblies to demonstrate the capabilities of the system. --Abstract, page iii
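    Recovering a 3D LED position from two calibrated Wiimote cameras reduces to triangulation. The sketch below uses standard linear (DLT) triangulation with toy projection matrices; it is an assumed, generic formulation, not necessarily the exact method of the thesis.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen by two calibrated cameras.

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates.
    The 3D point is the null vector of the stacked cross-product constraints.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity pose and a unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.3, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_rec = triangulate(P1, P2, x1, x2)
```

    With four such LED points rigidly attached to a part, all six DOF of its pose can then be solved from the triangulated coordinates.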

    Three-Dimensional Hand Tracking and Surface-Geometry Measurement for a Robot-Vision System

    Tracking of human motion and object identification and recognition are important in many applications including motion capture for human-machine interaction systems. This research is part of a global project to enable a service robot to recognize new objects and perform different object-related tasks based on task guidance and demonstration provided by a general user. This research consists of the calibration and testing of two vision systems which are part of a robot-vision system. First, real-time tracking of a human hand is achieved using images acquired from three calibrated synchronized cameras. Hand pose is determined from the positions of physical markers and input to the robot system in real-time. Second, a multi-line laser camera range sensor is designed, calibrated, and mounted on a robot end-effector to provide three-dimensional (3D) geometry information about objects in the robot environment. The laser-camera sensor includes two cameras to provide stereo vision. For the 3D hand tracking, a novel score-based hand tracking scheme is presented that employs dynamic multi-threshold marker detection, a stereo camera-pair utilization scheme, and marker matching and labeling using epipolar geometry and hand pose axis analysis, enabling real-time hand tracking under occlusion and non-uniform lighting. For surface-geometry measurement using the multi-line laser range sensor, two different approaches are analyzed for two-dimensional (2D) to 3D coordinate mapping, using Bezier surface fitting and neural networks, respectively. The neural-network approach was found to be the more viable approach for surface-geometry measurement, owing to its lower 3D reconstruction error and its consistency over different regions of the object space, and is worth future exploration.
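    The epipolar-geometry marker matching mentioned above can be sketched as follows: given a fundamental matrix F between two cameras, a marker in image 1 constrains its match in image 2 to the epipolar line F @ x1, so candidates are scored by their distance to that line. The greedy matcher and the rectified-stereo F below are illustrative assumptions, not the thesis's score-based scheme.

```python
import numpy as np

def epipolar_distance(F, x1, x2):
    """Distance (px) from point x2 in image 2 to the epipolar line of x1."""
    x1h = np.append(x1, 1.0)
    x2h = np.append(x2, 1.0)
    line = F @ x1h                      # line coefficients (a, b, c)
    return abs(x2h @ line) / np.hypot(line[0], line[1])

def match_markers(F, pts1, pts2, max_dist=2.0):
    """Greedy matching: pair each marker in image 1 with the candidate in
    image 2 that lies closest to its epipolar line, within max_dist."""
    matches = []
    for i, p1 in enumerate(pts1):
        dists = [epipolar_distance(F, p1, p2) for p2 in pts2]
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            matches.append((i, j))
    return matches

# Toy rectified stereo pair: epipolar lines are horizontal scanlines,
# so a correct match must lie on (nearly) the same image row.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
pts1 = [np.array([10.0, 20.0]), np.array([30.0, 40.0])]
pts2 = [np.array([15.0, 40.5]), np.array([12.0, 20.2])]
pairs = match_markers(F, pts1, pts2)
```

    The epipolar constraint prunes most false candidates cheaply, which is what makes marker labeling tractable under occlusion when several markers look identical.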

    Audio‐Visual Speaker Tracking

    Target motion tracking has found application in interdisciplinary fields, including but not limited to surveillance and security, forensic science, intelligent transportation systems, driving assistance, monitoring of prohibited areas, medical science, robotics, action and expression recognition, individual speaker discrimination in multi‐speaker environments, and video conferencing in the fields of computer vision and signal processing. Among these applications, speaker tracking in enclosed spaces has been gaining relevance due to the widespread advances of devices and technologies and the necessity for seamless solutions in real‐time tracking and localization of speakers. However, speaker tracking is a challenging task in real‐life scenarios as several distinctive issues influence the tracking process, such as occlusions and an unknown number of speakers. One approach to overcome these issues is to use multi‐modal information, as it conveys complementary information about the state of the speakers compared to single‐modal tracking. To use multi‐modal information, several approaches have been proposed which can be classified into two categories, namely deterministic and stochastic. This chapter aims at providing multimedia researchers with a state‐of‐the‐art overview of tracking methods, which are used for combining multiple modalities to accomplish various multimedia analysis tasks, classifying them into different categories and listing new and future trends in this field.

    Real Time Corner Detection for Miniaturized Electro-Optical Sensors Onboard Small Unmanned Aerial Systems

    This paper describes the target detection algorithm for the image processor of a vision-based system that is installed onboard an unmanned helicopter. It has been developed in the framework of a project of the French national aerospace research center Office National d’Etudes et de Recherches Aérospatiales (ONERA) which aims at developing an air-to-ground target tracking mission in an unknown urban environment. In particular, the image processor must detect targets and estimate ground motion in proximity of the detected target position. Concerning the target detection function, the analysis has dealt with realizing a corner detection algorithm and selecting the best choices in terms of edge detection methods, filter size and type, and the most suitable criterion for detecting points of interest, in order to obtain a very fast algorithm which fulfills the computation load requirements. The compared criteria are the Harris-Stephens and Shi-Tomasi criteria, which are the most widely used in the literature among those based on intensity. Experimental results which illustrate the performance of the developed algorithm and demonstrate that the detection time is fully compliant with the requirements of the real-time system are discussed.
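    The two compared criteria score the same 2x2 structure tensor M differently: Harris-Stephens uses det(M) - k * trace(M)^2, while Shi-Tomasi uses the smaller eigenvalue directly. A minimal sketch, with toy tensors standing in for real image windows:

```python
import numpy as np

def corner_responses(M: np.ndarray, k: float = 0.04):
    """Harris-Stephens and Shi-Tomasi responses for one 2x2 structure tensor."""
    det = np.linalg.det(M)
    tr = np.trace(M)
    harris = det - k * tr**2                 # large positive only at corners
    shi_tomasi = min(np.linalg.eigvalsh(M))  # min eigenvalue of the tensor
    return harris, shi_tomasi

# A corner has strong gradients in both directions; an edge in only one.
corner = np.array([[100.0, 0.0], [0.0, 90.0]])
edge = np.array([[100.0, 0.0], [0.0, 0.0]])
h_c, s_c = corner_responses(corner)
h_e, s_e = corner_responses(edge)
```

    Shi-Tomasi requires an eigenvalue decomposition per window but thresholds directly on corner strength, whereas Harris-Stephens avoids the decomposition at the cost of the empirical constant k; that trade-off between cost and tuning is the kind of choice the paper's real-time constraint forces.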

    A system for learning statistical motion patterns

    Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast, accurate fuzzy k-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of algorithms for anomaly detection and behavior prediction.
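    Fuzzy k-means (fuzzy c-means) assigns each foreground pixel a soft membership in every cluster instead of a hard label, which makes centroids less sensitive to stray pixels. A plain textbook implementation follows; the paper's "fast accurate" variant adds optimizations not reproduced here.

```python
import numpy as np

def fuzzy_kmeans(X, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means with fuzzifier m: returns centers and memberships.

    u[i, j] is the degree to which sample i belongs to cluster j; rows sum
    to 1, and centers are the u**m-weighted means of the samples.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-p)
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u

# Two well-separated pixel blobs, e.g. two moving objects' foreground masks.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [10.0, 10.0], [10.1, 10.0], [10.0, 10.1]])
centers, u = fuzzy_kmeans(X)
```

    With well-separated blobs the memberships saturate near 0 or 1 and the fuzzy centroids coincide with the crisp cluster means, while ambiguous pixels between objects keep intermediate weights.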
