18,977 research outputs found

    Data association and occlusion handling for vision-based people tracking by mobile robots

    Get PDF
This paper presents an approach for tracking multiple persons on a mobile robot with a combination of colour and thermal vision sensors, using several new techniques. First, an adaptive colour model is incorporated into the measurement model of the tracker. Second, a new approach for detecting occlusions is introduced, using a machine learning classifier for pairwise comparison of persons (classifying which one is in front of the other). Third, explicit occlusion handling is incorporated into the tracker. The paper presents a comprehensive, quantitative evaluation of the whole system and its different components using several real-world data sets.
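The pairwise front/behind comparison described above can be illustrated with a toy sketch. The paper trains a machine-learning classifier on image features; here a hand-written rule over two assumed features (bounding-box bottom edge and mean thermal intensity) stands in for the learned model:

```python
# Hypothetical sketch of pairwise occlusion ordering between two tracked
# persons. The features and the decision rule are illustrative assumptions,
# not the paper's actual classifier.

def pairwise_front(person_a, person_b):
    """Return the person assumed to be in front (the occluder).

    Each person is a dict with two illustrative features:
      'bbox_bottom'  - image row of the bounding-box bottom edge
                       (lower in the image usually means closer to the camera)
      'thermal_mean' - mean thermal intensity inside the box
    """
    # On a ground-plane camera, closer people project lower in the image.
    if person_a['bbox_bottom'] != person_b['bbox_bottom']:
        return person_a if person_a['bbox_bottom'] > person_b['bbox_bottom'] else person_b
    # Tie-break: the unoccluded person tends to show a stronger thermal
    # signature, since more body surface is visible to the sensor.
    return person_a if person_a['thermal_mean'] >= person_b['thermal_mean'] else person_b

a = {'bbox_bottom': 420, 'thermal_mean': 0.61}
b = {'bbox_bottom': 380, 'thermal_mean': 0.74}
print(pairwise_front(a, b) is a)  # a sits lower in the image, so in front
```

A learned classifier would replace the hand-written rule with a decision trained on labelled pairs, but the input/output contract is the same: two candidate persons in, the occluder out.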

    Improved data association and occlusion handling for vision-based people tracking by mobile robots

    Get PDF
This paper presents an approach for tracking multiple persons using a combination of colour and thermal vision sensors on a mobile robot. First, an adaptive colour model is incorporated into the measurement model of the tracker. Second, a new approach for detecting occlusions is introduced, using a machine learning classifier for pairwise comparison of persons (classifying which one is in front of the other). Third, explicit occlusion handling is then incorporated into the tracker.

    Particle Filters for Colour-Based Face Tracking Under Varying Illumination

    Get PDF
Automatic human face tracking is the basis of robotic and active vision systems used for facial feature analysis, automatic surveillance, video conferencing, intelligent transportation, human-computer interaction and many other applications. Superior human face tracking will allow future safety surveillance systems which monitor drowsy drivers, or patients and elderly people at risk of seizure or sudden falls, and will perform with lower risk of failure in unexpected situations. This area has been actively researched in the current literature in an attempt to make automatic face trackers more stable in challenging real-world environments. To detect faces in video sequences, features like colour, texture, intensity, shape or motion are used. Among these features, colour has been the most popular because of its insensitivity to orientation and size changes and its fast processability. The challenge for colour-based face trackers, however, has been the instability of trackers when colour changes due to drastic variation in environmental illumination. Probabilistic tracking and the employment of particle filters as powerful Bayesian stochastic estimators, on the other hand, are increasingly adopted in the visual tracking field thanks to their ability to handle multi-modal distributions in cluttered scenes. Traditional particle filters utilize the transition prior as the importance sampling function, but this can result in poor posterior sampling. The objective of this research is to investigate and propose a stable face tracker capable of dealing with challenges like rapid and random head motion, scale changes when people move closer to or further from the camera, motion of multiple people with close skin tones in the vicinity of the model person, presence of clutter and occlusion of the face. The main focus has been on investigating an efficient method to address the sensitivity of colour-based trackers to gradual or drastic illumination variations.
The particle filter is used to overcome the instability of face trackers due to nonlinear and random head motions. To increase the traditional particle filter's sampling efficiency, an improved version of the particle filter is introduced that considers the latest measurements. This improved particle filter employs a new colour-based bottom-up approach that leads particles to generate an effective proposal distribution. The colour-based bottom-up approach is a classification technique for fast skin-colour segmentation. This method is independent of the distribution shape and does not require excessive memory storage or exhaustive prior training. Finally, to address the adaptability of the colour-based face tracker to illumination changes, an original likelihood model is proposed based on spatial rank information that considers both the illumination-invariant colour ordering of a face's pixels in an image or video frame and the spatial interaction between them. The original contribution of this work lies in the unique mixture of existing and proposed components to improve colour-based recognition and tracking of faces in complex scenes, especially where drastic illumination changes occur. Experimental results of the final version of the proposed face tracker, which combines the methods developed, are provided in the last chapter of this manuscript.
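The key idea of the abstract, mixing a measurement-driven ("bottom-up") proposal into the particle filter so that samples account for the latest observation, can be sketched in one dimension. The mixing scheme, constants and likelihood below are illustrative assumptions, not the thesis's actual implementation:

```python
import math
import random

# Minimal 1-D sketch: propose some particles from the transition prior and
# some around the latest measurement, then weight and resample. All numbers
# are illustrative.

def step(particles, measurement, mix=0.5, motion_sd=2.0, meas_sd=1.0):
    proposed, weights = [], []
    for p in particles:
        if random.random() < mix:
            # bottom-up proposal: sample around the latest measurement
            x = random.gauss(measurement, meas_sd)
        else:
            # transition prior: diffuse the previous state
            x = random.gauss(p, motion_sd)
        # unnormalised Gaussian likelihood around the measurement
        w = math.exp(-0.5 * ((x - measurement) / meas_sd) ** 2)
        proposed.append(x)
        weights.append(w + 1e-12)
    # multinomial resampling proportional to the weights
    return random.choices(proposed, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(0.0, 100.0) for _ in range(500)]
for z in [20.0, 21.5, 23.0, 24.5]:   # simulated face positions drifting right
    particles = step(particles, z)
estimate = sum(particles) / len(particles)
print(round(estimate, 1))  # close to the last measurement of 24.5
```

A pure transition-prior filter would waste many samples in low-likelihood regions after a rapid head movement; the measurement-driven component keeps the particle set concentrated where the evidence is.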

    Towards binocular active vision in a robot head system

    Get PDF
This paper presents the first results of an investigation and pilot study into an active, binocular vision system that combines binocular vergence, object recognition and attention control in a unified framework. The prototype developed is capable of identifying, targeting, verging on and recognizing objects in a highly cluttered scene without the need for calibration or other knowledge of the camera geometry. This is achieved by implementing all image analysis in a symbolic space without creating explicit pixel-space maps. The system structure is based on the ‘searchlight metaphor’ of biological systems. We present results of a first pilot investigation that yield a maximum vergence error of 6.4 pixels, while seven of nine known objects were recognized in a highly cluttered environment. Finally, a “stepping stone” visual search strategy was demonstrated, taking a total of 40 saccades to find two known objects in the workspace, neither of which appeared simultaneously within the field of view resulting from any individual saccade.

    The seventh visual object tracking VOT2019 challenge results

    Get PDF
The Visual Object Tracking challenge VOT2019 is the seventh annual tracker benchmarking activity organized by the VOT initiative. Results of 81 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis, as well as the standard VOT methodology for long-term tracking analysis. The VOT2019 challenge was composed of five challenges focusing on different tracking domains: (i) the VOT-ST2019 challenge focused on short-term tracking in RGB, (ii) the VOT-RT2019 challenge focused on 'real-time' short-term tracking in RGB, (iii) VOT-LT2019 focused on long-term tracking, namely coping with target disappearance and reappearance. Two new challenges were introduced: (iv) the VOT-RGBT2019 challenge focused on short-term tracking in RGB and thermal imagery and (v) the VOT-RGBD2019 challenge focused on long-term tracking in RGB and depth imagery. The VOT-ST2019, VOT-RT2019 and VOT-LT2019 datasets were refreshed, while new datasets were introduced for VOT-RGBT2019 and VOT-RGBD2019. The VOT toolkit has been updated to support standard short-term tracking, long-term tracking and tracking with multi-channel imagery. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page.
The dataset, the evaluation kit and the results are publicly available at the challenge website.
Kristan M.; Matas J.; Leonardis A.; Felsberg M.; Pflugfelder R.; Kamarainen J.-K.; Zajc L.C.; Drbohlav O.; Lukezic A.; Berg A.; Eldesokey A.; Kapyla J.; Fernandez G.; Gonzalez-Garcia A.; Memarmoghadam A.; Lu A.; He A.; Varfolomieiev A.; Chan A.; Tripathi A.S.; Smeulders A.; Pedasingu B.S.; Chen B.X.; Zhang B.; Baoyuanwu B.; Li B.; He B.; Yan B.; Bai B.; Li B.; Li B.; Kim B.H.; Ma C.; Fang C.; Qian C.; Chen C.; Li C.; Zhang C.; Tsai C.-Y.; Luo C.; Micheloni C.; Zhang C.; Tao D.; Gupta D.; Song D.; Wang D.; Gavves E.; Yi E.; Khan F.S.; Zhang F.; Wang F.; Zhao F.; De Ath G.; Bhat G.; Chen G.; Wang G.; Li G.; Cevikalp H.; Du H.; Zhao H.; Saribas H.; Jung H.M.; Bai H.; Yu H.; Peng H.; Lu H.; Li H.; Li J.; Li J.; Fu J.; Chen J.; Gao J.; Zhao J.; Tang J.; Li J.; Wu J.; Liu J.; Wang J.; Qi J.; Zhang J.; Tsotsos J.K.; Lee J.H.; Van De Weijer J.; Kittler J.; Ha Lee J.; Zhuang J.; Zhang K.; Wang K.; Dai K.; Chen L.; Liu L.; Guo L.; Zhang L.; Wang L.; Wang L.; Zhang L.; Wang L.; Zhou L.; Zheng L.; Rout L.; Van Gool L.; Bertinetto L.; Danelljan M.; Dunnhofer M.; Ni M.; Kim M.Y.; Tang M.; Yang M.-H.; Paluru N.; Martinel N.; Xu P.; Zhang P.; Zheng P.; Zhang P.; Torr P.H.S.; Wang Q.Z.Q.; Guo Q.; Timofte R.; Gorthi R.K.; Everson R.; Han R.; Zhang R.; You S.; Zhao S.-C.; Zhao S.; Li S.; Li S.; Ge S.; Bai S.; Guan S.; Xing T.; Xu T.; Yang T.; Zhang T.; Vojir T.; Feng W.; Hu W.; Wang W.; Tang W.; Zeng W.; Liu W.; Chen X.; Qiu X.; Bai X.; Wu X.-J.; Yang X.; Chen X.; Li X.; Sun X.; Chen X.; Tian X.; Tang X.; Zhu X.-F.; Huang Y.; Chen Y.; Lian Y.; Gu Y.; Liu Y.; Chen Y.; Zhang Y.; Xu Y.; Wang Y.; Li Y.; Zhou Y.; Dong Y.; Xu Y.; Zhang Y.; Li Y.; Luo Z.W.Z.; Zhang Z.; Feng Z.-H.; He Z.; Song Z.; Chen Z.; Zhang Z.; Wu Z.; Xiong Z.; Huang Z.; Teng Z.; Ni Z.

    Multiple human tracking in RGB-depth data: A survey

    Get PDF
© The Institution of Engineering and Technology. Multiple human tracking (MHT) is a fundamental task in many computer vision applications. Appearance-based approaches, primarily formulated on RGB data, are constrained and affected by problems arising from occlusions and/or illumination variations. In recent years, the arrival of cheap RGB-depth devices has led to many new approaches to MHT, many of which integrate colour and depth cues to improve each stage of the process. In this survey, the authors present the common processing pipeline of these methods and review their methodology based on (a) how they implement this pipeline and (b) what role depth plays within each stage of it. They identify and introduce existing, publicly available benchmark datasets and software resources that fuse colour and depth data for MHT. Finally, they present a brief comparative evaluation of the performance of those works that have applied their methods to these datasets.
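One stage of the common pipeline the survey describes, frame-to-frame data association, can be sketched with depth as an extra matching cue alongside image position. The greedy nearest-neighbour scheme, the (x, y, depth) feature tuple and the gating threshold below are illustrative assumptions rather than any particular surveyed method:

```python
# Illustrative sketch: match existing tracks to new detections by distance
# in an (x, y, depth) feature space, with a gate to reject poor matches.
# Depth helps separate people who overlap in the image plane.

def associate(tracks, detections, gate=1.5):
    """Greedy nearest-neighbour matching; returns (track, detection) index pairs."""
    def dist(t, d):
        return sum((a - b) ** 2 for a, b in zip(t, d)) ** 0.5
    pairs, used = [], set()
    for ti, t in enumerate(tracks):
        best, best_d = None, gate   # anything beyond the gate is unmatched
        for di, d in enumerate(detections):
            if di in used:
                continue
            dd = dist(t, d)
            if dd < best_d:
                best, best_d = di, dd
        if best is not None:
            used.add(best)
            pairs.append((ti, best))
    return pairs

tracks = [(1.0, 2.0, 3.0), (5.0, 5.0, 2.0)]   # last known (x, y, depth)
dets   = [(5.2, 5.1, 2.1), (1.1, 2.1, 3.0)]   # current frame detections
print(associate(tracks, dets))  # [(0, 1), (1, 0)]
```

Real systems typically replace the greedy loop with a global assignment (e.g. Hungarian algorithm) and fold appearance similarity into the cost, but the depth term plays the same disambiguating role.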

    Learning non-maximum suppression

    Full text link
Object detectors have hugely profited from moving towards an end-to-end learning paradigm: merging proposals, features, and the classifier into one neural network improved results two-fold on general object detection. One indispensable component is non-maximum suppression (NMS), a post-processing algorithm responsible for merging all detections that belong to the same object. The de facto standard NMS algorithm is still fully hand-crafted, suspiciously simple, and, being based on greedy clustering with a fixed distance threshold, forces a trade-off between recall and precision. We propose a new network architecture designed to perform NMS, using only boxes and their scores. We report experiments for person detection on PETS and for general object categories on the COCO dataset. Our approach shows promise, providing improved localization and occlusion handling.
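The hand-crafted baseline the authors set out to replace, greedy NMS with a fixed overlap threshold, can be sketched as follows (the IoU threshold of 0.5 is a conventional choice, not taken from the paper):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def greedy_nms(boxes, scores, thresh=0.5):
    """Keep the highest-scoring box, drop boxes overlapping it, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        # greedy clustering with a fixed threshold: the trade-off the
        # abstract criticises lives entirely in this one comparison
        order = [j for j in order if iou(boxes[i], boxes[j]) <= thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(greedy_nms(boxes, scores))  # [0, 2]: box 1 overlaps box 0 too much
```

A low threshold merges distinct but crowded objects (hurting recall); a high one keeps duplicate detections (hurting precision). The paper's learned NMS network is trained to resolve this instead of relying on the fixed comparison above.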

    Change blindness: eradication of gestalt strategies

    Get PDF
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.