Evaluation of trackers for Pan-Tilt-Zoom Scenarios
Tracking with a Pan-Tilt-Zoom (PTZ) camera has been a research topic in
computer vision for many years. Compared to tracking with a still camera, the
images captured with a PTZ camera are highly dynamic because the
camera can perform large motions, resulting in quickly changing capture
conditions. Furthermore, tracking with a PTZ camera involves camera control to
keep the camera positioned on the target. For successful tracking and camera
control, the tracker must either be fast enough or be able to accurately
predict the next position of the target. Standard benchmarks therefore do not
allow a proper assessment of tracker quality in the PTZ scenario. In this work, we
use a virtual PTZ framework to evaluate different tracking algorithms and
compare their performances. We also extend the framework to add target position
prediction for the next frame, accounting for camera motion and processing
delays. By doing this, we can assess whether prediction makes long-term
tracking more robust, as it may help slower algorithms keep the target in the
camera's field of view. Results confirm that both speed and robustness are
required for tracking under the PTZ scenario.

Comment: 6 pages, 2 figures, International Conference on Pattern Recognition
and Artificial Intelligence 201
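The prediction step described above can be sketched minimally: extrapolate the target's image-plane velocity over the processing delay and subtract the camera's own induced motion. This is an illustrative constant-velocity sketch, not the paper's actual predictor; the function name, interface, and the assumption that camera motion is known as a pixel-rate are all hypothetical.

```python
import numpy as np

def predict_target(positions, timestamps, pan_tilt_rate, delay):
    """Constant-velocity prediction of the target's next image position,
    compensating for known camera motion over the processing delay.

    positions: (N, 2) recent pixel coordinates of the target (N >= 2)
    timestamps: (N,) capture times in seconds
    pan_tilt_rate: (2,) camera-induced pixel shift per second (assumed known)
    delay: expected processing + actuation latency in seconds
    """
    positions = np.asarray(positions, dtype=float)
    timestamps = np.asarray(timestamps, dtype=float)
    # Estimate the target's image-plane velocity by a least-squares line fit.
    dt = timestamps - timestamps[0]
    vx = np.polyfit(dt, positions[:, 0], 1)[0]
    vy = np.polyfit(dt, positions[:, 1], 1)[0]
    # Extrapolate the target and subtract the camera's own motion, so the
    # controller can aim where the target will appear after the delay.
    predicted = positions[-1] + delay * np.array([vx, vy])
    predicted -= delay * np.asarray(pan_tilt_rate, dtype=float)
    return predicted
```

A target moving one pixel per frame with a static camera is predicted one pixel ahead; a panning camera shifts the prediction back by its own motion.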
Innovative observing strategy and orbit determination for Low Earth Orbit Space Debris
We present the results of a large scale simulation, reproducing the behavior
of a data center for the build-up and maintenance of a complete catalog of
space debris in the upper part of the low Earth orbits region (LEO). The
purpose is to determine the performance of a network of advanced optical
sensors, through the use of the newest orbit determination algorithms developed
by the Department of Mathematics of Pisa (DM). Such a network has been proposed
to ESA in the Space Situational Awareness (SSA) framework by Carlo Gavazzi
Space SpA (CGS), Istituto Nazionale di Astrofisica (INAF), DM, and Istituto di
Scienza e Tecnologie dell'Informazione (ISTI-CNR). The conclusion is that it is
possible to use a network of optical sensors to build up a catalog containing
more than 98% of the objects with perigee height between 1100 and 2000 km,
which would be observable by a reference radar system selected as comparison.
It is also possible to maintain such a catalog within the accuracy requirements
motivated by collision avoidance, and to detect catastrophic fragmentation
events. However, such results depend upon specific assumptions on the sensor
and on the software technologies.
DroTrack: High-speed Drone-based Object Tracking Under Uncertainty
We present DroTrack, a high-speed visual single-object tracking framework for
drone-captured video sequences. Most of the existing object tracking methods
are designed to tackle well-known challenges, such as occlusion and cluttered
backgrounds. The complex motion of drones, i.e., multiple degrees of freedom in
three-dimensional space, causes high uncertainty. This uncertainty leads
to inaccurate location predictions and fuzziness in scale estimation. DroTrack
solves such issues by discovering the dependency between object representation
and motion geometry. We implement an effective object segmentation based on
Fuzzy C Means (FCM). We incorporate the spatial information into the membership
function to cluster the most discriminative segments. We then enhance the
object segmentation by using a pre-trained Convolutional Neural Network (CNN)
model. DroTrack also leverages the geometrical angular motion to estimate a
reliable object scale. We discuss the experimental results and performance
evaluation using two datasets totalling 51,462 drone-captured frames. The
combination of the FCM segmentation and the angular scaling increased
DroTrack's precision and decreased its centre location error on average.
DroTrack outperforms all the high-speed trackers and achieves results
comparable to deep learning trackers. DroTrack offers high frame rates, up to
1000 frames per second (fps), with better location precision than a set of
state-of-the-art real-time trackers.

Comment: 10 pages, 12 figures, FUZZ-IEEE 202
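The Fuzzy C-Means step the abstract builds on can be sketched in its plain form; DroTrack's spatial membership function and CNN refinement are not reproduced here. One simple way to fold spatial information in, as hinted below, is to append scaled pixel coordinates to each pixel's feature vector. The function name and parameters are illustrative.

```python
import numpy as np

def fcm(data, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Plain Fuzzy C-Means clustering.

    data: (N, D) feature vectors (e.g. per-pixel colour; appending scaled
          pixel coordinates is one simple way to add spatial information).
    Returns (memberships (N, C), centroids (C, D)).
    """
    rng = np.random.default_rng(seed)
    # Random fuzzy memberships, normalised so each row sums to 1.
    u = rng.random((len(data), n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        # Centroids are membership-weighted means of the data.
        centers = (um.T @ data) / um.sum(axis=0)[:, None]
        # Update memberships from inverse distances to each centroid.
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)
    return u, centers
```

On well-separated data the hard labels (argmax of the memberships) recover the two groups.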
Evaluating a dancer's performance using Kinect-based skeleton tracking
In this work, we describe a novel system that automatically evaluates dance performances against a gold-standard performance and provides visual feedback to the performer in a 3D virtual environment. The system acquires the motion of a performer via Kinect-based human skeleton tracking, making the approach viable for a large range of users, including home enthusiasts. Unlike traditional gaming scenarios, where the motion of a user must be kept in sync with a pre-recorded avatar displayed on screen, the technique described in this paper targets online interactive scenarios where dance choreographies can be set, altered, practiced, and refined by users. In this work, we have addressed some areas of this application scenario. In particular, a set of appropriate signal processing and soft computing methodologies is proposed for temporally aligning dance movements from two different users and quantitatively evaluating one performance against another.
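Temporal alignment of two performances, as described above, is commonly done with dynamic time warping; the sketch below is a standard textbook DTW on 1-D motion signals, not the paper's specific methodology, and the function name is illustrative.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping cost between two 1-D motion signals.

    A low cost means one performance can be non-linearly time-stretched
    to match the other, which is the basis for comparing a dancer's
    movement against a gold-standard recording at a different tempo.
    """
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Each cell extends the cheapest of: match, insertion, deletion.
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]
```

A signal compared against a time-stretched copy of itself yields zero cost, which a plain Euclidean frame-by-frame comparison would not.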
End-to-end Flow Correlation Tracking with Spatial-temporal Attention
Discriminative correlation filters (DCF) with deep convolutional features
have achieved favorable performance in recent tracking benchmarks. However,
most existing DCF trackers only consider appearance features of the current
frame and hardly benefit from motion and inter-frame information. The lack of
temporal information degrades the tracking performance during challenges such
as partial occlusion and deformation. In this work, we focus on making use of
the rich flow information in consecutive frames to improve the feature
representation and the tracking accuracy. Firstly, individual components,
including optical flow estimation, feature extraction, aggregation and
correlation filter tracking, are formulated as special layers in a network. To
the best of our knowledge, this is the first work to jointly train the flow
estimation and tracking tasks in a deep learning framework. Then the historical
feature maps at predefined intervals are warped and aggregated with the current
ones, guided by the flow. For adaptive aggregation, we propose a novel
spatial-temporal
attention mechanism. Extensive experiments are performed on four challenging
tracking datasets: OTB2013, OTB2015, VOT2015 and VOT2016, and the proposed
method achieves superior results on these benchmarks.

Comment: Accepted in CVPR 201
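The flow-guided warping of historical feature maps described above can be sketched as a backward warp: each current-frame location samples the historical features displaced by the flow. This is a minimal nearest-neighbour sketch under an assumed (dy, dx) flow convention; the real method uses learned, differentiable warping inside the network.

```python
import numpy as np

def warp_by_flow(feat, flow):
    """Warp a historical feature map toward the current frame using a
    dense flow field (nearest-neighbour sampling for brevity).

    feat: (H, W, C) historical features
    flow: (H, W, 2) per-pixel displacements, assumed ordered (dy, dx)
    """
    H, W, _ = feat.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Backward warp: each output pixel samples the source location it
    # flowed from, clamped to the image bounds.
    src_y = np.clip(np.round(ys - flow[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs - flow[..., 1]).astype(int), 0, W - 1)
    return feat[src_y, src_x]
```

Zero flow leaves the features unchanged; a uniform vertical flow of one pixel pulls every row's features from the row above, so aggregating the warped map with the current one compares like regions across frames.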