RGB-T Tracking Based on Mixed Attention
RGB-T tracking involves the use of images from both visible and thermal
modalities. The primary objective is to adaptively leverage the relatively
dominant modality in varying conditions to achieve more robust tracking
compared to single-modality tracking. An RGB-T tracker based on a mixed attention
mechanism to achieve complementary fusion of the two modalities (referred to as
MACFT) is proposed in this paper. In the feature extraction stage, we utilize
different transformer backbone branches to extract specific and shared
information from different modalities. Mixed attention operations in the
backbone enable information interaction and self-enhancement between the
template and search images, constructing a robust feature representation
that better captures the high-level semantics of the target. Then,
in the feature fusion stage, a modality-adaptive fusion is achieved through a
mixed attention-based modality fusion network, which suppresses the low-quality
modality noise while enhancing the information of the dominant modality.
Evaluation on multiple RGB-T public datasets demonstrates that our proposed
tracker outperforms other RGB-T trackers on general evaluation metrics while
also adapting to long-term tracking scenarios.
Comment: 14 pages, 10 figures
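As a rough illustration of the idea described in the abstract, the PyTorch sketch below implements a generic mixed-attention fusion block: each modality's tokens are enhanced by self-attention and by cross-attention to the other modality, and a learned per-token gate weights the dominant modality during fusion. This is a minimal sketch of the general mechanism, not the MACFT architecture; the module name, the attention weights shared across modalities, and the sigmoid gate are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class MixedAttentionFusion(nn.Module):
    """Toy mixed-attention block (hypothetical, not the paper's design):
    each modality attends to itself (self-enhancement) and to the other
    modality (interaction); a learned gate then weights the two streams."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())
        self.norm = nn.LayerNorm(dim)

    def enhance(self, x, other):
        # self-attention within the modality (self-enhancement)
        x = x + self.self_attn(x, x, x)[0]
        # cross-attention pulls complementary cues from the other modality
        x = x + self.cross_attn(x, other, other)[0]
        return self.norm(x)

    def forward(self, rgb_tokens, tir_tokens):
        rgb = self.enhance(rgb_tokens, tir_tokens)
        tir = self.enhance(tir_tokens, rgb_tokens)
        # per-token gate decides which modality dominates the fused feature,
        # suppressing a low-quality modality while keeping the stronger one
        g = self.gate(rgb + tir)
        return g * rgb + (1.0 - g) * tir

# usage: batch of 2, 64 tokens per modality, 256-dim embeddings
rgb = torch.randn(2, 64, 256)
tir = torch.randn(2, 64, 256)
fused = MixedAttentionFusion()(rgb, tir)
print(fused.shape)  # torch.Size([2, 64, 256])
```

The gate here is the simplest possible modality-adaptive weighting; the paper's fusion network is more elaborate, but the sketch captures the interaction/self-enhancement split the abstract describes.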
Dense Feature Aggregation and Pruning for RGBT Tracking
How to perform effective information fusion of different modalities is a core
factor in boosting the performance of RGBT tracking. This paper presents a
novel deep fusion algorithm based on the representations from an end-to-end
trained convolutional neural network. To exploit the complementarity of features
across all layers, we propose a recursive strategy to densely aggregate these
features, yielding robust representations of target objects in each modality.
We then propose to prune the densely aggregated features of both modalities in
a collaborative way. Specifically, we employ the operations
of global average pooling and weighted random selection to perform channel
scoring and selection, which could remove redundant and noisy features to
achieve a more robust feature representation. Experimental results on two RGBT
tracking benchmark datasets suggest that our tracker achieves clear
state-of-the-art performance against other RGB and RGBT tracking methods.
Comment: arXiv admin note: text overlap with arXiv:1811.0985
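The channel scoring and selection step can be pictured with a short PyTorch sketch: global average pooling produces a per-channel importance score, and weighted random selection (here, multinomial sampling without replacement) keeps a subset of channels. This is a hedged illustration of the named operations, not the authors' implementation; the function name prune_channels, the absolute-value scoring, and the keep_ratio parameter are assumptions.

```python
import torch

def prune_channels(feat, keep_ratio=0.5, generator=None):
    """Toy channel pruning (illustrative): score channels via global
    average pooling, then keep a subset by weighted random selection."""
    b, c, h, w = feat.shape
    # global average pooling -> one importance score per channel
    scores = feat.mean(dim=(2, 3)).abs().mean(dim=0)        # (C,)
    probs = scores / scores.sum()
    k = max(1, int(c * keep_ratio))
    # weighted random selection: high-scoring channels are likely kept,
    # but low-scoring ones retain a small chance, adding diversity
    keep = torch.multinomial(probs, k, replacement=False, generator=generator)
    return feat[:, keep.sort().values]                      # (B, k, H, W)

feat = torch.randn(4, 128, 14, 14)    # densely aggregated feature map
pruned = prune_channels(feat, keep_ratio=0.25)
print(pruned.shape)                   # torch.Size([4, 32, 14, 14])
```

Sampling instead of simply taking the top-k channels is what "weighted random selection" suggests: redundant channels are removed in expectation while the selection stays stochastic.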
The seventh visual object tracking VOT2019 challenge results
The Visual Object Tracking challenge VOT2019 is the seventh annual tracker benchmarking activity organized by the VOT initiative. Results of 81 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis, as well as the standard VOT methodology for long-term tracking analysis. The VOT2019 challenge was composed of five challenges focusing on different tracking domains: (i) the VOT-ST2019 challenge focused on short-term tracking in RGB, (ii) the VOT-RT2019 challenge focused on 'real-time' short-term tracking in RGB, and (iii) the VOT-LT2019 challenge focused on long-term tracking, namely coping with target disappearance and reappearance. Two new challenges were introduced: (iv) the VOT-RGBT2019 challenge focused on short-term tracking in RGB and thermal imagery, and (v) the VOT-RGBD2019 challenge focused on long-term tracking in RGB and depth imagery. The VOT-ST2019, VOT-RT2019 and VOT-LT2019 datasets were refreshed, while new datasets were introduced for VOT-RGBT2019 and VOT-RGBD2019. The VOT toolkit has been updated to support standard short-term tracking, long-term tracking, and tracking with multi-channel imagery. The performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website.
Kristan, M.; Matas, J.; Leonardis, A.; Felsberg, M.; Pflugfelder, R.; et al.