
    Fusion of Head and Full-Body Detectors for Multi-Object Tracking

    In order to track all persons in a scene, the tracking-by-detection paradigm has proven to be a very effective approach. Yet, relying solely on a single detector is also a major limitation, as useful image information might be ignored. Consequently, this work demonstrates how to fuse two detectors into a tracking system. To obtain the trajectories, we propose to formulate tracking as a weighted graph labeling problem, resulting in a binary quadratic program. As such problems are NP-hard, the solution can only be approximated. Based on the Frank-Wolfe algorithm, we present a new solver that is crucial for handling such difficult problems. Evaluation on pedestrian tracking is provided for multiple scenarios, showing superior results over single-detector tracking and standard QP solvers. Finally, our tracker ranks 2nd on the MOT16 benchmark and 1st on the new MOT17 benchmark, outperforming over 90 trackers.
    Comment: 10 pages, 4 figures; Winner of the MOT17 challenge; CVPRW 201
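    The abstract formulates tracking as a weighted graph labeling problem solved as a binary quadratic program via Frank-Wolfe. As a rough Python illustration of that family of solvers, the following is a minimal Frank-Wolfe iteration for a box relaxation of a generic binary QP; the objective, feasible set, step size, and rounding are illustrative assumptions, not the paper's actual formulation or solver.

        import numpy as np

        def frank_wolfe_bqp(Q, c, n_iters=100):
            """Minimize 0.5 x^T Q x + c^T x over the box [0, 1]^n (relaxed binary labels)."""
            n = c.shape[0]
            x = np.full(n, 0.5)                  # start at the box center
            for k in range(n_iters):
                grad = Q @ x + c                 # gradient of the quadratic objective
                s = (grad < 0).astype(float)     # linear minimization oracle over the box
                gamma = 2.0 / (k + 2.0)          # standard diminishing step size
                x += gamma * (s - x)             # move toward the oracle vertex
            return x                             # fractional solution; threshold for labels

        rng = np.random.default_rng(0)
        A = rng.standard_normal((5, 5))
        labels = frank_wolfe_bqp(A @ A.T, rng.standard_normal(5)) > 0.5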

    Tracking interacting targets in multi-modal sensors

    PhD thesis. Object tracking is one of the fundamental tasks in various applications such as surveillance, sports, video conferencing and activity recognition. Factors such as occlusions, illumination changes and the limited field of observance of the sensor make tracking a challenging task. To overcome these challenges, the focus of this thesis is on using multiple modalities, such as audio and video, for multi-target, multi-modal tracking. In particular, this thesis presents contributions to four related research topics, namely pre-processing of input signals to reduce noise, multi-modal tracking, simultaneous detection and tracking, and interaction recognition. To improve the performance of detection algorithms, especially in the presence of noise, this thesis investigates filtering of the input data through spatio-temporal feature analysis as well as through frequency band analysis. The pre-processed data from multiple modalities is then fused within Particle Filtering (PF). To further minimise the discrepancy between the real and the estimated positions, we propose a strategy that associates the hypotheses and the measurements with a real target using Weighted Probabilistic Data Association (WPDA). Since the filtering involved in the detection process reduces the available information and is inapplicable to low signal-to-noise-ratio data, we investigate simultaneous detection and tracking approaches and propose a multi-target track-before-detect Particle Filter (MT-TBD-PF). The proposed MT-TBD-PF algorithm bypasses the detection step and performs tracking on the raw signal. Finally, we apply the proposed multi-modal tracking to recognise interactions between targets in regions within, as well as outside, the cameras' fields of view. The efficiency of the proposed approaches is demonstrated on large uni-modal, multi-modal and multi-sensor scenarios from real-world detection, tracking and event recognition datasets and through participation in evaluation campaigns.
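    To make the audio-visual fusion concrete, below is a minimal Python sketch of one bootstrap particle filter step in which the particle weights combine audio and video likelihoods multiplicatively. The random-walk motion model, the conditional-independence fusion, and the resampling rule are illustrative placeholders, not the thesis's WPDA or track-before-detect formulations.

        import numpy as np

        def pf_step(particles, weights, audio_lik, video_lik, motion_std=0.1):
            """One predict-update-resample cycle fusing two modalities."""
            # Predict: propagate particles through a simple random-walk model.
            particles = particles + np.random.normal(0.0, motion_std, particles.shape)
            # Update: fuse modalities multiplicatively (conditional independence).
            weights = weights * audio_lik(particles) * video_lik(particles)
            weights = weights / weights.sum()
            # Resample when the effective sample size collapses.
            if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
                idx = np.random.choice(len(weights), len(weights), p=weights)
                particles = particles[idx]
                weights = np.full(len(weights), 1.0 / len(weights))
            return particles, weights

        # Toy usage: 1D position, Gaussian pseudo-likelihoods around each modality's cue.
        N = 200
        parts, w = np.random.uniform(-1, 1, (N, 1)), np.full(N, 1.0 / N)
        gauss = lambda mu: lambda p: np.exp(-0.5 * ((p[:, 0] - mu) / 0.2) ** 2)
        parts, w = pf_step(parts, w, audio_lik=gauss(0.30), video_lik=gauss(0.25))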

    EyePACT: eye-based parallax correction on touch-enabled interactive displays

    The parallax effect describes the displacement between the perceived and detected touch locations on a touch-enabled surface. Parallax is a key usability challenge for interactive displays, particularly for those that require thick layers of glass between the screen and the touch surface to protect them from vandalism. To address this challenge, we present EyePACT, a method that compensates for input error caused by parallax on public displays. Our method uses a display-mounted depth camera to detect the user's 3D eye position in front of the display and the detected touch location to predict the perceived touch location on the surface. We evaluate our method in two user studies in terms of parallax correction performance as well as multi-user support. Our evaluations demonstrate that EyePACT (1) significantly improves accuracy even with varying gap distances between the touch surface and the display, (2) adapts to different levels of parallax by resulting in significantly larger corrections with larger gap distances, and (3) maintains a significantly large distance between two users' fingers when interacting with the same object. These findings are promising for the development of future parallax-free interactive displays.
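    As a geometric illustration of the correction, the Python sketch below intersects the eye-to-touch ray with the display plane, under the assumed simplification that the display lies in the plane z = 0 and the touch surface is a parallel plane at z = gap; the paper's actual predictor may differ.

        import numpy as np

        def correct_touch(eye, touch_xy, gap):
            """eye: 3D eye position (z > gap); touch_xy: detected 2D touch; gap: glass thickness."""
            touch = np.array([touch_xy[0], touch_xy[1], gap])
            s = eye[2] / (eye[2] - gap)           # extend the eye-to-touch ray to z = 0
            corrected = eye + s * (touch - eye)   # intersection with the display plane
            return corrected[:2]                  # perceived touch location on the display

        # Example: eye 60 cm in front of the display, 2 cm of protective glass.
        print(correct_touch(np.array([0.10, 0.30, 0.60]), (0.0, 0.0), 0.02))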

    Multispectral persistent surveillance

    The goal of a persistent surveillance system is to track everything that moves, all of the time, over the entire area of interest. The thrust of this thesis is to identify and improve upon the motion detection and object association aspects of this challenge by adding spectral information to the equation. Traditional motion detection and tracking systems rely primarily on single-band grayscale video, while more recent research has focused on sensor fusion, specifically combining visible and IR data sources. A further challenge in covering an entire area of responsibility (AOR) is a limited sensor field of view, which can be overcome either by adding more sensors or by multi-tasking a single sensor over multiple areas at a reduced frame rate. As an essential tool for sensor design and mission development, a trade study was conducted to measure the potential advantages of adding spectral bands of information in a single sensor, with the intention of reducing sensor frame rates. Thus, traditional motion detection and object association algorithms were modified to evaluate system performance using five spectral bands (visible through thermal IR), while adjusting frame rate as a second variable. The goal of this research was to evaluate system performance as a function of the number of bands and the frame rate; performance surfaces were generated to assess relative performance across both variables.
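    As a rough Python sketch of how extra spectral bands can feed a single motion detection stage, the code below thresholds per-band frame differences against a running-mean background and fuses the resulting masks with a logical OR; both the background model and the fusion rule are illustrative choices, not the thesis's exact algorithms.

        import numpy as np

        def detect_motion(frames, bg, alpha=0.05, thresh=25.0):
            """frames, bg: float arrays of shape (bands, H, W); returns (mask, new bg)."""
            masks = np.abs(frames - bg) > thresh      # per-band foreground masks
            bg = (1.0 - alpha) * bg + alpha * frames  # update the running background
            return masks.any(axis=0), bg              # motion in any band counts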

    DeepCrashTest: Translating Dashcam Videos to Virtual Tests for Automated Driving Systems

    Autonomous vehicle technology has come a long way, but currently no company is able to offer a fully autonomous ride in any conditions, on any road, without any human supervision. These systems should be extensively trained and validated to guarantee safe human transportation. Any small error in the system's functionality may lead to fatal accidents and endanger human lives. Deep learning methods are widely used for environment perception and prediction of hazardous situations. These techniques require a huge amount of training data with both normal and abnormal samples to enable the vehicle to avoid dangerous situations. The goal of this thesis is to generate simulations from real-world tricky collision scenarios for training and testing autonomous vehicles. Dashcam crash videos from the internet can now be utilized to extract valuable collision data and recreate the crash scenarios in a simulator. The problem of extracting 3D vehicle trajectories from videos recorded by an unknown monocular camera source is solved using a modular approach. The framework is divided into two stages: (a) extracting meaningful adversarial trajectories from short crash videos, and (b) developing methods to automatically process and simulate the vehicle trajectories on a vehicle simulator.
    Dissertation/Thesis. Video Demonstration. Masters Thesis, Computer Science, 201
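    A high-level, self-contained Python sketch of the two-stage pipeline follows; the toy trajectory format and all function names are hypothetical stand-ins for the thesis's extraction and simulation modules.

        from dataclasses import dataclass
        from typing import List, Tuple

        @dataclass
        class Trajectory:
            """Hypothetical toy format: one vehicle path recovered from a crash video."""
            vehicle_id: int
            waypoints: List[Tuple[float, float, float]]  # (x, y, heading) per time step

        def extract_trajectories(video_path: str) -> List[Trajectory]:
            # Stage (a): in the real system, detection plus monocular 3D estimation
            # recover adversarial vehicle paths from the short crash video.
            raise NotImplementedError("placeholder for the extraction stage")

        def replay_in_simulator(trajectories: List[Trajectory]) -> None:
            # Stage (b): replay each extracted path in a vehicle simulator to test
            # the automated driving system against the recreated scenario.
            for traj in trajectories:
                print(f"replaying vehicle {traj.vehicle_id} "
                      f"over {len(traj.waypoints)} steps")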

    Audio‐Visual Speaker Tracking

    Target motion tracking has found application in interdisciplinary fields including, but not limited to, surveillance and security, forensic science, intelligent transportation systems, driving assistance, monitoring of prohibited areas, medical science, robotics, action and expression recognition, individual speaker discrimination in multi-speaker environments, and video conferencing in the fields of computer vision and signal processing. Among these applications, speaker tracking in enclosed spaces has been gaining relevance due to widespread advances in devices and technologies and the necessity for seamless solutions in real-time tracking and localization of speakers. However, speaker tracking is a challenging task in real-life scenarios, as several distinctive issues influence the tracking process, such as occlusions and an unknown number of speakers. One approach to overcoming these issues is to use multi-modal information, as it conveys complementary information about the state of the speakers compared to single-modal tracking. To use multi-modal information, several approaches have been proposed, which can be classified into two categories, namely deterministic and stochastic. This chapter aims at providing multimedia researchers with a state-of-the-art overview of tracking methods which are used for combining multiple modalities to accomplish various multimedia analysis tasks, classifying them into different categories and listing new and future trends in this field.

    Video object tracking: contributions to object description and performance assessment

    Doctoral thesis. Electrical and Computer Engineering. Universidade do Porto, Faculdade de Engenharia. 201

    Real-Time Visual Servo Control of Two-Link and Three DOF Robot Manipulator

    This project presents experimental results of a position-based visual servoing control process for a 3R robot using two fixed cameras. Visual servoing spans several fields of research, including vision systems, robotics and automatic control. The method deals with real-time changes in the relative position of the target object with respect to the robot; good accuracy and the independence of the manipulator's servo control structure from the target pose coordinates are additional advantages of this method. The applications of visually guided systems are many, from intelligent homes to the automotive industry. Visual servoing is useful for a wide range of applications and can be used to control many different systems (manipulator arms, mobile robots, aircraft, etc.). Visual servoing systems are generally classified by the number of cameras, the position of the camera with respect to the robot, and the design of the error function. This project presents an approach to visual robot control in which existing approaches are extended so that the depth and position of the target object are estimated during the motion of the robot; this is done by visually tracking the object throughout the trajectory. Vision-guided robotics has been a major research area for some time; however, one of the open and common problems in the area is the need to exchange experiences and ideas. We also include a number of real-time examples from our own research. Forward and inverse kinematics of the 3-DOF robot were derived; experiments on image processing, object shape recognition and pose estimation, as well as locating the target object in the Cartesian system and visual control of the robot manipulator, are described. Experimental results were obtained from a real-time implementation of visual servo control and tests of the 3-DOF robot in the lab.
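    As a sketch of the position-based servoing loop, the Python code below turns a Cartesian position error reported by the vision system into joint velocities through the Jacobian pseudo-inverse. The planar two-link Jacobian, the proportional gain, and the error definition are illustrative assumptions rather than the project's exact formulation.

        import numpy as np

        def two_link_jacobian(q, l1=1.0, l2=1.0):
            """Planar two-link arm Jacobian (illustrative stand-in for the 3-DOF robot)."""
            s1, c1 = np.sin(q[0]), np.cos(q[0])
            s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
            return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                             [ l1 * c1 + l2 * c12,  l2 * c12]])

        def pbvs_step(q, p_current, p_desired, lam=0.5):
            """One servo step: Cartesian error from the cameras -> joint velocity command."""
            v = -lam * (p_current - p_desired)               # proportional Cartesian command
            return np.linalg.pinv(two_link_jacobian(q)) @ v  # resolved-rate joint velocities

        # Example: drive the end-effector from (1.2, 0.8) toward (1.0, 1.0).
        print(pbvs_step(np.array([0.3, 0.6]), np.array([1.2, 0.8]), np.array([1.0, 1.0])))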

    AN ADAPTIVE MULTIPLE-OBJECT TRACKING ARCHITECTURE FOR LONG-DURATION VIDEOS WITH VARIABLE TARGET DENSITY

    Multiple-Object Tracking (MOT) methods are used to detect targets in individual video frames, e.g., vehicles, people, and other objects, and then record each unique target's path over time. Current state-of-the-art approaches are extremely complex because most rely on extracting and comparing visual features at every frame to track each object. These approaches are geared toward high-difficulty-tracking scenarios, e.g., crowded airports, and require expensive dedicated hardware, e.g., Graphics Processing Units. In hardware-constrained applications, researchers are turning to older, less complex MOT methods, which reveals a serious scalability issue within the state-of-the-art. Crowded environments are a niche application for MOT, i.e., there are far more residential areas than there are airports. Given complex approaches are not required for low-difficulty-tracking scenarios, i.e., video showing mainly isolated targets, there is an opportunity to utilize more efficient MOT methods for these environments. Nevertheless, little recent research has focused on developing more efficient MOT methods. This thesis describes a novel MOT method, ClusterTracker, that is built to handle variable-difficulty-tracking environments an order of magnitude faster than the state-of-the-art. It achieves this by avoiding visual features and using quadratic-complexity algorithms instead of the cubic-complexity algorithms found in other trackers. ClusterTracker performs spatial clustering on object detections from short frame sequences, treats clusters as tracklets, and then connects successive tracklets with high bounding-box overlap to form tracks. With recorded video, parallel processing can be applied to several steps of ClusterTracker. This thesis evaluates ClusterTracker's baseline performance on several benchmark datasets, describes its intended operating environments, and identifies its weaknesses. Subsequent modifications patch these weaknesses while also addressing the scalability concerns of more complex MOT methods. The modified architecture uses clustering feedback to separate isolated targets from non-isolated targets, re-processing the latter with a more complex MOT method. Results show ClusterTracker is uniquely suited for such an approach and allows complex MOT methods to be applied to the challenging tracking situations for which they are intended.
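    A minimal Python sketch of the tracklet pipeline described above: cluster one window's detections by spatial proximity, treat each cluster as a tracklet, then greedily link successive tracklets by bounding-box overlap. DBSCAN, the greedy matching, and the IoU threshold are illustrative choices, not necessarily ClusterTracker's exact components.

        import numpy as np
        from sklearn.cluster import DBSCAN

        def iou(a, b):
            """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
            x1, y1 = max(a[0], b[0]), max(a[1], b[1])
            x2, y2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
            area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
            return inter / (area(a) + area(b) - inter + 1e-9)

        def window_tracklets(centers, boxes, eps=30.0):
            """Cluster one short window's detections spatially; each cluster is a tracklet."""
            labels = DBSCAN(eps=eps, min_samples=1).fit_predict(centers)
            return [[boxes[k] for k in np.flatnonzero(labels == lab)]
                    for lab in sorted(set(labels))]

        def link_tracklets(prev_tracklets, new_tracklets, iou_thresh=0.3):
            """Greedily connect tracklets whose last/first boxes overlap strongly."""
            links = []
            for i, prev in enumerate(prev_tracklets):
                overlaps = [iou(prev[-1], new[0]) for new in new_tracklets]
                if overlaps and max(overlaps) > iou_thresh:
                    links.append((i, int(np.argmax(overlaps))))
            return links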