Correlation Filters for Unmanned Aerial Vehicle-Based Aerial Tracking: A Review and Experimental Evaluation
Aerial tracking, which has shown broad applicability and strong performance, is one of the most active applications in the remote sensing field. In particular, unmanned aerial vehicle (UAV)-based remote sensing systems equipped with visual tracking have been widely used in aviation, navigation, agriculture, transportation, and public security. The UAV-based aerial tracking platform has thus gradually moved from research to practical application, becoming one of the main aerial remote sensing technologies for the future. However, real-world conditions are demanding: harsh external challenges, vibration of the UAV's mechanical structure (especially under strong wind), maneuvering flight in complex environments, and limited onboard computational resources all mean that accuracy, robustness, and high efficiency are crucial for onboard tracking methods. Recently, discriminative correlation filter (DCF)-based trackers have stood out for their high computational efficiency and appealing robustness on a single CPU, and have flourished in the UAV visual tracking community. In this work, the basic framework of DCF-based trackers is first generalized, and on that basis 23 state-of-the-art DCF-based trackers are summarized according to the issues their innovations address. In addition, exhaustive quantitative experiments are carried out on prevailing UAV tracking benchmarks, i.e., UAV123, UAV123@10fps, UAV20L, UAVDT, DTB70, and VisDrone2019-SOT, which contain 371,903 frames in total. The experiments show the performance, verify the feasibility, and reveal the current challenges of DCF-based trackers for onboard UAV tracking.

Comment: 28 pages, 10 figures, submitted to GRS
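The closed-form, frequency-domain training that makes DCF trackers efficient enough for a single CPU can be sketched in a few lines. The minimal single-channel example below follows the classic MOSSE formulation, a common starting point for the DCF family; the function names and parameters are illustrative and are not taken from any of the 23 trackers surveyed above.

```python
import numpy as np

def train_dcf(patch, sigma=2.0, lam=1e-3):
    """Train a single-channel correlation filter (MOSSE-style).

    patch: 2-D grayscale template centred on the target.
    Returns the conjugate filter H* used at detection time.
    """
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Desired response: a Gaussian peak at the target centre.
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
    F = np.fft.fft2(patch)
    G = np.fft.fft2(g)
    # Closed-form ridge-regression solution in the Fourier domain;
    # lam regularises the element-wise division.
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(H_conj, search_patch):
    """Correlate the filter with a new patch; return the response peak."""
    resp = np.real(np.fft.ifft2(H_conj * np.fft.fft2(search_patch)))
    dy, dx = np.unravel_index(np.argmax(resp), resp.shape)
    return dy, dx, resp
```

Because both training and detection reduce to element-wise operations between FFTs, the per-frame cost is dominated by a handful of 2-D FFTs, which is what gives DCF trackers their appeal for computation-constrained UAV platforms.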
How hard is it to cross the room? -- Training (Recurrent) Neural Networks to steer a UAV
This work explores the feasibility of steering a drone with a (recurrent) neural network, based on input from a forward-looking camera, in the context of a high-level navigation task. We set up a generic framework for training a network to perform navigation tasks via imitation learning; it can be applied to both aerial and land vehicles. As a proof of concept, we apply it to a UAV (Unmanned Aerial Vehicle) in a simulated environment, learning to cross a room containing a number of obstacles. So far, only feedforward neural networks (FNNs) have been used to train UAV control. To cope with more complex tasks, we propose the use of recurrent neural networks (RNNs) instead and successfully train an LSTM (Long Short-Term Memory) network for controlling UAVs. Vision-based control is a sequential prediction problem, known for its highly correlated input data. This correlation makes training a network hard, especially an RNN. To overcome this issue, we investigate an alternative sampling method during training, namely window-wise truncated backpropagation through time (WW-TBPTT). Furthermore, end-to-end training requires a lot of data, which is often not available. We therefore compare the performance of retraining only the fully connected (FC) and LSTM control layers with that of networks trained end-to-end. Performing the relatively simple task of crossing a room already reveals important guidelines and good practices for training neural control networks. Different visualizations help to explain the learned behavior.

Comment: 12 pages, 30 figures
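The window-wise sampling idea behind truncated backpropagation through time can be illustrated independently of any particular network: rather than feeding a long, highly correlated frame sequence in order, training draws short windows at random offsets, and gradients are backpropagated only within each window. A minimal sketch of such a sampler, assuming hypothetical names and a signature not taken from the paper:

```python
import numpy as np

def sample_windows(seq_len, window, batch, rng=None):
    """Draw `batch` random windows of length `window` from a sequence.

    Random start offsets decorrelate the frames seen in one batch;
    during training, backpropagation through time would be truncated
    at each window boundary.
    Returns an integer array of (start, end) index pairs.
    """
    rng = rng or np.random.default_rng()
    starts = rng.integers(0, seq_len - window + 1, size=batch)
    return np.stack([starts, starts + window], axis=1)
```

Training on many short, randomly placed windows trades some long-range temporal context for less correlated gradient estimates, which is the tension the WW-TBPTT experiments in the paper investigate.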
Predictive Visual Tracking: A New Benchmark and Baseline Approach
As a crucial robotic perception capability, visual tracking has been intensively studied in recent years. In real-world scenarios, the onboard processing time of the image stream inevitably leads to a discrepancy between the tracking results and the real-world state. However, existing visual tracking benchmarks commonly run trackers offline and ignore this latency in the evaluation. In this work, we address the more realistic problem of latency-aware tracking. State-of-the-art trackers are evaluated in aerial scenarios with new metrics that jointly assess tracking accuracy and efficiency. Moreover, a new predictive visual tracking baseline is developed to compensate for the latency stemming from onboard computation. Our latency-aware benchmark provides a more realistic evaluation of trackers for robotic applications, and exhaustive experiments demonstrate the effectiveness of the proposed predictive visual tracking baseline.

Comment: 7 pages, 5 figures
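One simple way to compensate for onboard latency, in the spirit of the predictive baseline described above, is to extrapolate the most recent (already stale) tracker output forward by the processing delay. The sketch below assumes constant velocity between the last two results; the class and its interface are illustrative, not the paper's actual method.

```python
class LatencyCompensator:
    """Extrapolate a stale tracker output to the current time.

    Keeps the last two (timestamp, x, y) results and, assuming
    constant velocity (an illustrative simplification), predicts
    where the target centre is *now*, after the onboard processing
    delay has elapsed.
    """

    def __init__(self):
        self.prev = None   # most recent result: (t, x, y)
        self.prev2 = None  # the result before that

    def update(self, t, x, y):
        """Record a new (delayed) tracker result."""
        self.prev2 = self.prev
        self.prev = (t, x, y)

    def predict(self, t_now):
        """Predict the target centre at time t_now."""
        if self.prev is None:
            return None
        if self.prev2 is None:
            return self.prev[1], self.prev[2]
        (t1, x1, y1), (t0, x0, y0) = self.prev, self.prev2
        dt = t1 - t0
        vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
        lag = t_now - t1
        return x1 + vx * lag, y1 + vy * lag
```

For example, with results at t=0 s (centre (0, 0)) and t=1 s (centre (1, 2)), predicting at t=2 s yields (2, 4): the centre is carried forward by one second of the estimated velocity. A full predictive tracker would use a proper motion model rather than this two-point extrapolation.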