23 research outputs found

    Combining logic and probability in tracking and scene interpretation

    The paper gives a high-level overview of some ways in which logical representations and reasoning can be used in computer vision applications, such as tracking and scene interpretation. The combination of logical and statistical approaches is also considered.

    Automatic Vehicle Trajectory Extraction by Aerial Remote Sensing

    Research in road users' behaviour typically depends on the availability of detailed observational data, particularly if the interest is in driving behaviour modelling. Among this type of data, vehicle trajectories are an important source of information for traffic flow theory, driving behaviour modelling, innovation in traffic management, and safety and environmental studies. Recent developments in sensing technologies and image processing algorithms have reduced the resources (time and costs) required for detailed traffic data collection, promoting the feasibility of site-based and vehicle-based naturalistic driving observation. For testing the core models of a traffic microsimulation application for safety assessment, vehicle trajectories were collected by remote sensing on a typical Portuguese suburban motorway. Multiple short flights over a stretch of an urban motorway allowed for the collection of several partial vehicle trajectories. In this paper the technical details of each step of the methodology are presented: image collection, image processing, vehicle identification and vehicle tracking. To collect the images, a high-resolution camera was mounted on an aircraft's gyroscopic platform. The camera was connected to a DGPS for extraction of the camera position and allowed the collection of high-resolution images at a low frame rate (one frame every 2 s). After generic image orthorectification using the flight details and the terrain model, computer vision techniques were used for fine rectification: the scale-invariant feature transform (SIFT) algorithm was used for detection and description of image features, and the random sample consensus (RANSAC) algorithm for feature matching. Vehicle detection was carried out by median-based background subtraction. After the computation of the detected foreground and shadow detection using a spectral ratio technique, region segmentation was used to identify candidates for vehicle positions. Finally, vehicles were tracked using a k-shortest disjoint paths algorithm. This approach allows for the optimization of an entire set of trajectories against all possible position candidates using motion-based optimization. Besides the importance of a new trajectory dataset that allows the development of new behavioural models and the validation of existing ones, this paper also describes the application of state-of-the-art algorithms and methods that significantly reduce the resources needed for such data collection. Keywords: Vehicle trajectory extraction, Driver behaviour, Remote sensing
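
    The fine-rectification step above combines SIFT feature matching with RANSAC-based homography estimation. The following is a minimal sketch of that combination using OpenCV, not the authors' implementation; the function name, ratio-test threshold and reprojection threshold are illustrative assumptions.

    ```python
    # Sketch: align a target aerial frame to a reference frame with SIFT + RANSAC.
    import cv2
    import numpy as np

    def fine_rectify(reference_bgr, target_bgr):
        sift = cv2.SIFT_create()
        kp_ref, des_ref = sift.detectAndCompute(cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY), None)
        kp_tgt, des_tgt = sift.detectAndCompute(cv2.cvtColor(target_bgr, cv2.COLOR_BGR2GRAY), None)

        # Ratio-test matching of SIFT descriptors (0.75 is a common, assumed value).
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = [m for m, n in matcher.knnMatch(des_tgt, des_ref, k=2)
                   if m.distance < 0.75 * n.distance]

        src = np.float32([kp_tgt[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

        # RANSAC rejects mismatched features while estimating the homography.
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
        h, w = reference_bgr.shape[:2]
        return cv2.warpPerspective(target_bgr, H, (w, h))
    ```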

    Vehicle Detection and Tracking Techniques: A Concise Review

    Vehicle detection and tracking applications play an important role in civilian and military applications such as highway traffic surveillance, control, management and urban traffic planning. Vehicle detection on roads is used for vehicle tracking, counting, estimating the average speed of each individual vehicle, traffic analysis and vehicle categorization, and may be implemented under varying environmental conditions. In this review, we present a concise overview of the image processing methods and analysis tools used in building the previously mentioned traffic surveillance systems. More precisely, and in contrast with other reviews, we classify the processing methods under three categories for greater clarity in explaining the traffic systems.

    Enhanced tracking and recognition of moving objects by reasoning about spatio-temporal continuity.

    A framework for the logical and statistical analysis and annotation of dynamic scenes containing occlusion and other uncertainties is presented. This framework consists of three elements: an object tracker module, an object recognition/classification module, and a reasoning engine for logical consistency, ambiguity and error. The principle behind the object tracker and object recognition modules is to reduce error by increasing ambiguity (by merging objects in close proximity and presenting multiple hypotheses). The reasoning engine deals with error, ambiguity and occlusion in a unified framework to produce a hypothesis that satisfies fundamental constraints on the spatio-temporal continuity of objects. Our algorithm finds a globally consistent model of an extended video sequence that is maximally supported by a voting function based on the output of a statistical classifier. The system results in an annotation that is significantly more accurate than what would be obtained by frame-by-frame evaluation of the classifier output. The framework has been implemented and applied successfully to the analysis of team sports with a single camera. Keywords: Visual
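
    As a rough illustration of the voting idea described above (assumed interfaces, not the authors' code), the sketch below accumulates per-frame classifier scores along a track and commits to the single label with the greatest support, rather than trusting each frame in isolation.

    ```python
    import numpy as np

    def track_label(frame_scores):
        """frame_scores: (n_frames, n_classes) array of classifier confidences."""
        votes = frame_scores.sum(axis=0)   # voting function over the whole track
        return int(np.argmax(votes))       # single, track-consistent label

    # Example: a noisy per-frame classifier that occasionally flips between classes.
    scores = np.array([[0.6, 0.4], [0.4, 0.6], [0.7, 0.3], [0.8, 0.2]])
    print(track_label(scores))             # -> 0, despite one frame disagreeing
    ```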

    Webcams for Bird Detection and Monitoring: A Demonstration Study

    Better insights into bird migration can be a tool for assessing the spread of avian-borne infections or ecological/climatological issues reflected in deviating migration patterns. This paper evaluates whether low-budget permanent cameras such as webcams can offer a valuable contribution to the reporting of migratory birds. An experimental design was set up to study the detection capability using objects of different size, color and velocity. The results of the experiment revealed the minimum size, maximum velocity and contrast of the objects required for detection by a standard webcam. Furthermore, a modular processing scheme was proposed to track and follow migratory birds in webcam recordings. Techniques such as motion detection by background subtraction, stereo vision and lens-distortion correction were combined to form the foundation of the bird tracking algorithm. Additional research to integrate webcam networks is, however, needed, and future research should reinforce the potential of the processing scheme by exploring and testing alternatives for each individual module or processing step.
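
    A minimal sketch of the motion-detection module described above, assuming an OpenCV-readable webcam stream; the background-subtractor parameters, blur kernel and area threshold are illustrative assumptions rather than values from the study.

    ```python
    import cv2

    cap = cv2.VideoCapture(0)  # webcam index is an assumption
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)        # moving pixels (birds, but also noise)
        mask = cv2.medianBlur(mask, 5)        # suppress small speckle
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        candidates = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 30]
        # 'candidates' holds bounding boxes that would be handed to the tracking module.
    cap.release()
    ```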

    Three dimensional information estimation and tracking for moving objects detection using two cameras framework

    Calibration, matching and tracking are major concerns when obtaining 3D information consisting of depth, direction and velocity. In finding depth, camera parameters and matched points are the two necessary inputs. Depth, direction and matched points can be obtained accurately if cameras are well calibrated using traditional manual calibration. However, most traditional manual calibration methods are inconvenient to use because markers or the real size of an object in the real world must be provided or known. Self-calibration can overcome this limitation of traditional calibration, but not for depth and matched points. Other approaches have attempted to match corresponding objects using 2D visual information without calibration, but they suffer from low matching accuracy under strong perspective distortion. This research focuses on obtaining 3D information using a self-calibrated tracking system, in which matching and tracking are done under self-calibrated conditions. Three contributions are introduced in this research to achieve these objectives. Firstly, orientation correction is introduced to obtain better relationship matrices for matching during tracking. Secondly, once the relationship matrices are available, a post-processing method, status-based matching, is introduced to improve the object matching result; this matching algorithm achieves a matching rate of almost 90%. Depth is estimated after the status-based matching. Thirdly, tracking is done based on x-y coordinates and the estimated depth under self-calibrated conditions. Results show that the proposed self-calibrated tracking system successfully differentiates the locations of objects even under occlusion in the field of view, and is able to determine the direction and velocity of multiple moving objects.
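
    For the depth part, the classical relation for an ideal rectified two-camera setup is Z = f·B/d, where d is the disparity of a matched point. The sketch below illustrates that relation only, with a placeholder focal length and baseline that are not taken from the thesis.

    ```python
    def depth_from_disparity(x_left, x_right, focal_px=1000.0, baseline_m=0.5):
        """Depth Z = f * B / d for a rectified stereo pair (d = disparity in pixels)."""
        disparity = x_left - x_right
        if disparity <= 0:
            raise ValueError("matched point must have positive disparity")
        return focal_px * baseline_m / disparity

    # A point matched at x=640 in the left image and x=600 in the right image:
    print(depth_from_disparity(640, 600))   # -> 12.5 metres under these assumptions
    ```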

    Detecting Carried Objects in Short Video Sequences

    We propose a new method for detecting objects such as bags carried by pedestrians depicted in short video sequences. In common with earlier work [1, 2] on the same problem, the method starts by averaging aligned foreground regions of a walking pedestrian to produce a representation of motion and shape (known as a temporal template) that has some immunity to noise in foreground segmentations and phase of the walking cycle. Our key novelty is for carried objects to be revealed by comparing the temporal templates against view-specific exemplars generated offline for unencumbered pedestrians. A likelihood map obtained from this match is combined in a Markov random field with a map of prior probabilities for carried objects and a spatial continuity assumption, from which we obtain a segmentation of carried objects using the MAP solution. We have re-implemented the earlier state-of-the-art method [1] and demonstrate a substantial improvement in performance for the new method on the challenging PETS2006 dataset [3]. Although developed for a specific problem, the method could be applied to the detection of irregularities in appearance for other categories of object that move in a periodic fashion.
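
    The temporal-template idea can be sketched as follows (assumed array interfaces, not the paper's code): average the aligned binary foreground masks of the walking pedestrian, then flag pixels that protrude beyond an unencumbered exemplar as carried-object candidates, which in the paper are subsequently refined with the MRF prior and MAP inference.

    ```python
    import numpy as np

    def temporal_template(aligned_masks):
        """aligned_masks: list of HxW binary foreground masks, already aligned."""
        return np.mean(np.stack(aligned_masks, axis=0), axis=0)   # values in [0, 1]

    def carried_object_candidates(template, exemplar, threshold=0.4):
        """Pixels frequently foreground in the person but absent from the exemplar."""
        protrusion = np.clip(template - exemplar, 0.0, None)
        return protrusion > threshold    # binary candidate map (before the MRF step)
    ```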

    Pedestrian detection and counting in surveillance videos

    "December 2013.""A Thesis presented to the Faculty of the Graduate School at the University of Missouri In Partial Fulfillment of the Requirements for the Degree Master of Science."Thesis supervisor: Dr. Zhihai He.Pedestrian detection and counting have important application in video surveillance for entrance monitoring, customer behavior analysis, and public service management. In this thesis, we propose an accurate, reliable and fast method for pedestrian detection and counting in video surveillance. To this end, we first develop an effective method for background modeling, subtraction, update, and shadow removal. To effectively differentiate person image patches from other background patches, we develop a head-shoulder classification and detection method. A foreground mask curve analysis method is to determine the possible position of persons, and then use a SVM (Support Vector Machine) classifier with HOG (Histogram of Oriented) feature and bag of words to detect the head-shoulder of people. Based on the foreground detection and head-shoulder classification at each frame, we develop a person counting algorithm in the temporal domain to analyze the frame-level classification results. Our experiments with real-world surveillance videos demonstrate the proposed method has achieved accurate and reliable pedestrian detection and counting.Includes bibliographical references (pages 46-54)