
    Vision-based Real-Time Aerial Object Localization and Tracking for UAV Sensing System

    The paper addresses vision-based obstacle detection and tracking for unmanned aerial vehicle navigation. A real-time object localization and tracking strategy for monocular image sequences is developed by integrating object detection and tracking into a dynamic Kalman model. At the detection stage, the object of interest is automatically detected and localized from a saliency map computed via the image background connectivity cue at each frame; at the tracking stage, a Kalman filter provides a coarse prediction of the object state, which is further refined via a local detector incorporating the saliency map and the temporal information between two consecutive frames. Compared to existing methods, the proposed approach requires no manual initialization for tracking, runs much faster than state-of-the-art trackers of its kind, and achieves competitive tracking performance on a large number of image sequences. Extensive experiments demonstrate the effectiveness and superior performance of the proposed approach. Comment: 8 pages, 7 figures
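
    The coarse Kalman prediction followed by local refinement described above can be pictured with a short sketch. The constant-velocity state model, the noise settings, and the local_detect() helper are assumptions made for illustration; the paper's exact model may differ.

        # Sketch of the predict -> locally refine -> update loop (illustrative assumptions).
        import numpy as np

        dt = 1.0
        F = np.array([[1, 0, dt, 0],      # state: [x, y, vx, vy], constant velocity
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)
        H = np.array([[1, 0, 0, 0],       # only the position is observed
                      [0, 1, 0, 0]], dtype=float)
        Q = np.eye(4) * 1e-2              # process noise (assumed)
        R = np.eye(2) * 1.0               # measurement noise (assumed)

        def kalman_predict(x, P):
            return F @ x, F @ P @ F.T + Q

        def kalman_update(x, P, z):
            y = z - H @ x                            # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
            return x + K @ y, (np.eye(4) - K @ H) @ P

        # Per frame: predict a coarse position, refine it with a local
        # saliency-based detector (hypothetical helper), then update.
        # x, P = kalman_predict(x, P)
        # z = local_detect(frame, center=x[:2])      # refined measurement
        # x, P = kalman_update(x, P, z)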

    The visual object tracking VOT2015 challenge results

    The Visual Object Tracking challenge 2015, VOT2015, aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 62 trackers are presented. The number of tested trackers makes VOT2015 the largest benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the appendix. Features of the VOT2015 challenge that go beyond its VOT2014 predecessor are: (i) a new VOT2015 dataset twice as large as the VOT2014 dataset, with targets fully annotated by rotated bounding boxes and per-frame attributes, and (ii) an extension of the VOT2014 evaluation methodology with a new performance measure. The dataset, the evaluation kit and the results are publicly available at the challenge website.

    Real-time Embedded Person Detection and Tracking for Shopping Behaviour Analysis

    Shopping behaviour analysis through counting and tracking of people in shop-like environments offers valuable information for store operators and provides key insights into the store layout (e.g. frequently visited spots). Instead of using extra staff for this, automated on-premise solutions are preferred. These automated systems should be cost-effective, preferably run on lightweight embedded hardware, cope with very challenging situations (e.g. occlusions), and preferably work in real time. We solve this challenge by implementing a real-time, TensorRT-optimized, YOLOv3-based pedestrian detector on a Jetson TX2 hardware platform. By combining the detector with a sparse optical flow tracker, we assign a unique ID to each customer and tackle the problem of losing partially occluded customers. Our detector-tracker solution achieves an average precision of 81.59% at a processing speed of 10 FPS. Besides valuable statistics, heat maps of frequently visited spots are extracted and used as an overlay on the video stream.
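
    A minimal sketch of how a detection box can be carried between frames with sparse optical flow, in the spirit of the detector-plus-flow-tracker combination above. The box format, the feature-point budget and the median-shift heuristic are assumptions for illustration, not the exact implementation.

        # Propagate a person box (x, y, w, h) from one frame to the next
        # using sparse Lucas-Kanade optical flow (OpenCV).
        import cv2
        import numpy as np

        def track_box(prev_gray, curr_gray, box):
            x, y, w, h = box
            pts = cv2.goodFeaturesToTrack(prev_gray[y:y + h, x:x + w],
                                          maxCorners=50, qualityLevel=0.01,
                                          minDistance=3)
            if pts is None:
                return box                            # nothing to track, keep old box
            pts = pts.astype(np.float32) + np.float32([[x, y]])   # full-image coords
            new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                          pts, None)
            good_old = pts[status.ravel() == 1]
            good_new = new_pts[status.ravel() == 1]
            if len(good_new) == 0:
                return box
            dx, dy = np.median(good_new - good_old, axis=0).ravel()
            return (int(x + dx), int(y + dy), w, h)   # shift box by the median flow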

    Cyclist Detection, Tracking, and Trajectory Analysis in Urban Traffic Video Data

    The major objective of this thesis is to examine computer vision and machine learning detection methods, tracking algorithms, and trajectory analysis for cyclists in traffic video data, and to develop an efficient system for cyclist counting. Due to the growing number of cyclist accidents on urban roads, methods for collecting information on cyclists are of significant importance to the Department of Transportation. The collected information provides insights into solving critical problems related to transportation planning, implementing safety countermeasures, and managing traffic flow efficiently. Intelligent Transportation Systems (ITS) employ automated tools to collect traffic information from traffic video data. In comparison to other road users, such as cars and pedestrians, automated cyclist data collection is a relatively new research area. In this work, a vision-based method for gathering cyclist count data at intersections and road segments is developed. First, we develop a methodology for efficient detection and tracking of cyclists. A combination of classification features and motion-based properties is evaluated to detect cyclists in the test video data. A Convolutional Neural Network (CNN) based detector, You Only Look Once (YOLO), is implemented to increase detection accuracy. In the next step, the detection results are fed into a tracker based on Kernelized Correlation Filters (KCF), which, in cooperation with a bipartite graph matching algorithm, tracks multiple cyclists concurrently. Then, a trajectory rebuilding method and a trajectory comparison model are applied to refine the accuracy of tracking and counting. The trajectory comparison is performed using a semantic similarity approach. The proposed counting method is the first cyclist counting method able to count cyclists under different movement patterns. The obtained trajectory data can be further utilized for cyclist behavioral modeling and safety analysis.
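
    The association between per-frame detections and existing KCF tracks via bipartite graph matching can be sketched as below. The IoU-based cost and the 0.3 matching threshold are illustrative assumptions, not the thesis' exact parameters.

        # Match track boxes to detection boxes with the Hungarian algorithm,
        # using (1 - IoU) as the assignment cost.
        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def iou(a, b):
            # a, b are (x1, y1, x2, y2) boxes
            x1, y1 = max(a[0], b[0]), max(a[1], b[1])
            x2, y2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0, x2 - x1) * max(0, y2 - y1)
            area_a = (a[2] - a[0]) * (a[3] - a[1])
            area_b = (b[2] - b[0]) * (b[3] - b[1])
            return inter / float(area_a + area_b - inter + 1e-9)

        def match(track_boxes, det_boxes, min_iou=0.3):
            # Returns (track_idx, det_idx) pairs whose overlap exceeds min_iou.
            if not track_boxes or not det_boxes:
                return []
            cost = np.array([[1.0 - iou(t, d) for d in det_boxes]
                             for t in track_boxes])
            rows, cols = linear_sum_assignment(cost)
            return [(r, c) for r, c in zip(rows, cols)
                    if cost[r, c] <= 1.0 - min_iou]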

    3D Hand Movement Measurement Framework for Studying Human-Computer Interaction

    In order to develop better touch and gesture user interfaces, it is important to be able to measure how humans move their hands while interacting with technical devices. Recent advances in high-speed imaging technology and in image-based object tracking have made it possible to accurately measure hand movement from videos without data gloves or other sensors that would limit natural hand movements. In this paper, we propose a complete framework to measure hand movements in 3D in human-computer interaction situations. The framework covers the composition of the measurement setup, the selection of object tracking methods, post-processing of the motion trajectories, 3D trajectory reconstruction, and characterizing and visualizing the movement data. We demonstrate the framework in a context where 3D touch screen usability is studied with 3D stimuli.
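
    The 3D trajectory reconstruction step can be pictured with a short sketch that triangulates a tracked 2D point from two calibrated cameras. The two-camera setup and the projection matrices P1 and P2 (obtained from a prior calibration) are assumptions for illustration.

        # Triangulate a 2D hand trajectory seen by two calibrated cameras
        # into 3D points (OpenCV).
        import cv2
        import numpy as np

        def reconstruct_3d(P1, P2, pts_cam1, pts_cam2):
            # pts_cam1, pts_cam2: (N, 2) pixel trajectories of the same marker
            pts1 = np.asarray(pts_cam1, dtype=float).T       # 2 x N
            pts2 = np.asarray(pts_cam2, dtype=float).T
            X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4 x N homogeneous
            return (X_h[:3] / X_h[3]).T                      # N x 3 points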

    The Seventh Visual Object Tracking VOT2019 Challenge Results

    The Visual Object Tracking challenge VOT2019 is the seventh annual tracker benchmarking activity organized by the VOT initiative. Results of 81 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis, as well as the standard VOT methodology for long-term tracking analysis. The VOT2019 challenge was composed of five challenges focusing on different tracking domains: (i) the VOT-ST2019 challenge focused on short-term tracking in RGB, (ii) the VOT-RT2019 challenge focused on 'real-time' short-term tracking in RGB, and (iii) the VOT-LT2019 challenge focused on long-term tracking, namely coping with target disappearance and reappearance. Two new challenges were introduced: (iv) the VOT-RGBT2019 challenge focused on short-term tracking in RGB and thermal imagery, and (v) the VOT-RGBD2019 challenge focused on long-term tracking in RGB and depth imagery. The VOT-ST2019, VOT-RT2019 and VOT-LT2019 datasets were refreshed, while new datasets were introduced for VOT-RGBT2019 and VOT-RGBD2019. The VOT toolkit has been updated to support standard short-term tracking, long-term tracking, and tracking with multi-channel imagery. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website.

    Visual motion tracking and sensor fusion for kite power systems

    An estimation approach is presented for kite power systems with ground-based actuation and generation. Line-based estimation of the kite state, including position and heading, limits the achievable cycle efficiency of such airborne wind energy systems due to significant estimation delay and line sag. We propose a filtering scheme to fuse onboard inertial measurements with ground-based line data for ground-based systems in pumping operation. Estimates are computed using an extended Kalman filtering scheme with a sensor-driven kinematic process model that propagates the state and corrects for inertial sensor biases. We further propose a visual motion tracking approach to extract estimates of the kite position from ground-based video streams. The approach combines accurate object detection with fast motion tracking to ensure long-term object tracking in real time. We present experimental results of the visual motion tracking and inertial sensor fusion on a ground-based kite power system in pumping operation and compare both methods to an existing estimation scheme based on line measurements.
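
    The fusion idea above can be sketched as a small kinematic EKF in which the IMU acceleration drives the prediction and a ground-based position fix (from line data or the visual tracker) corrects the state, including an accelerometer bias term. The state layout, noise levels and omitted gravity compensation are simplifying assumptions, not the paper's actual filter.

        # Sensor-driven kinematic EKF sketch: state = [position(3), velocity(3), accel bias(3)].
        import numpy as np

        dt = 0.01
        F = np.eye(9)
        F[0:3, 3:6] = np.eye(3) * dt        # position integrates velocity
        F[3:6, 6:9] = -np.eye(3) * dt       # bias error feeds into velocity
        H = np.zeros((3, 9))
        H[0:3, 0:3] = np.eye(3)             # ground-based position measurement
        Q = np.eye(9) * 1e-3                # process noise (assumed)
        R = np.eye(3) * 0.5                 # measurement noise (assumed)

        def predict(x, P, accel_meas):
            # Propagate with the bias-corrected IMU acceleration (gravity omitted for brevity).
            p, v, b = x[0:3], x[3:6], x[6:9]
            x_new = np.concatenate([p + v * dt, v + (accel_meas - b) * dt, b])
            return x_new, F @ P @ F.T + Q

        def update(x, P, pos_meas):
            y = pos_meas - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            return x + K @ y, (np.eye(9) - K @ H) @ P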

    Online Object Tracking with Proposal Selection

    Tracking-by-detection approaches are among the most successful object trackers of recent years. Their success is largely determined by the detector model they learn initially and then update over time. However, under challenging conditions where an object can undergo transformations, e.g., severe rotation, these methods are found to be lacking. In this paper, we address this problem by formulating it as a proposal selection task and make two contributions. The first is to introduce novel proposals estimated from the geometric transformations undergone by the object, building a rich candidate set for predicting the object location. The second is a novel selection strategy using multiple cues, i.e., the detection score and an edgeness score computed from state-of-the-art object edges and motion boundaries. We extensively evaluate our approach on the visual object tracking 2014 challenge and online tracking benchmark datasets, and show the best performance. Comment: ICCV 2015
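
    The cue combination behind the proposal selection can be pictured with a short sketch: each candidate box carries a detection score and an edgeness score, and the best combined score wins. The linear weighting below is an illustrative assumption; the paper's actual selection strategy may differ.

        # Pick the proposal with the best combination of detection and edgeness scores.
        import numpy as np

        def select_proposal(proposals, det_scores, edge_scores, alpha=0.7):
            # proposals: list of candidate boxes; scores: parallel arrays in [0, 1]
            det = np.asarray(det_scores, dtype=float)
            edge = np.asarray(edge_scores, dtype=float)
            combined = alpha * det + (1.0 - alpha) * edge
            best = int(np.argmax(combined))
            return proposals[best], combined[best]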

    A follow-me algorithm for AR.Drone using MobileNet-SSD and PID control

    Bachelor's degree final project (Treballs Finals de Grau) in Computer Engineering, Facultat de Matemàtiques, Universitat de Barcelona, 2018. Advisor: Lluís Garrido Ostermann. In recent years the quadcopter industry has experienced a boom. The appearance of inexpensive drones has led to growing recreational use of these vehicles, which opens the door to the creation of new applications and technologies. This thesis presents a vision-based autonomous control system for an AR.Drone 2.0. A tracking algorithm is developed using onboard vision systems without relying on additional external inputs. In particular, the tracking algorithm combines a trained MobileNet-SSD object detector with a KCF tracker. The noise induced by the tracker is reduced with a Kalman filter. Furthermore, PID controllers are implemented for the motion control of the quadcopter; they process the output of the tracking algorithm to move the drone to the desired position. The final implementation was tested indoors and the system yields acceptable results.
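
    The control part can be pictured with a minimal PID sketch that turns the tracking error (offset of the detected box from the image centre) into a drone command. The gains, axes and normalisation are illustrative assumptions rather than the thesis' tuned values.

        # Minimal PID controller; one instance per controlled axis.
        class PID:
            def __init__(self, kp, ki, kd):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.integral = 0.0
                self.prev_error = 0.0

            def step(self, error, dt):
                self.integral += error * dt
                derivative = (error - self.prev_error) / dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        # Example: yaw command from the horizontal offset of the target box centre.
        yaw_pid = PID(kp=0.8, ki=0.0, kd=0.1)
        # yaw_cmd = yaw_pid.step((target_cx - frame_cx) / frame_width, dt=1.0 / 30)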