
    Застосування структурного опису зображень для вирішення задач інтелектуального аналізу відеопослідовностей

    This paper considers the use of a description of images and video sequences as a set of structural elements for solving the problems of detecting, tracking, and recognizing moving objects. Theoretical studies of this problem are carried out: a formal description of images and video sequences as a set of structural elements is given; the descriptions of detected and tracked objects are defined; and the properties of structural elements belonging to one object, of detected-object descriptions, of tracked-object descriptions, and of the frame-to-frame transformation/modification function of tracked-object descriptions, which is necessary for tracking, are examined.

    Выделение и отслеживание объектов на основе использования анализа движения

    This paper proposes a solution to the problem of detecting and tracking objects in video with a static background, based on motion analysis in the frame and on representing the image and objects as a set of structural elements. The object description has a two-level hierarchy, which allows it to be adapted flexibly as the object changes. The additive description of an object as a set of structural elements makes tracking possible under partial occlusion.
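    The key property claimed above is that an additive set-of-elements descriptor degrades gracefully under occlusion. A minimal sketch of that idea, assuming structural elements are hashable feature tuples (the element encoding here is purely illustrative, not the paper's):

    ```python
    # Illustrative sketch: an object descriptor as a set of structural elements.
    # Matching is additive: the score is the fraction of stored elements found
    # in the current frame, so a partially occluded object still scores on its
    # visible elements instead of failing outright.

    def match_score(descriptor, frame_elements):
        """Fraction of the object's structural elements visible in the frame."""
        if not descriptor:
            return 0.0
        return len(descriptor & frame_elements) / len(descriptor)

    obj = {("edge", 10, 12), ("corner", 14, 9), ("edge", 11, 20), ("blob", 8, 8)}
    frame = {("edge", 10, 12), ("corner", 14, 9), ("noise", 0, 0)}  # half occluded
    print(match_score(obj, frame))  # 0.5 — the track survives partial occlusion
    ```

    A tracker built on such a descriptor can keep a track alive whenever the score stays above a threshold, since the set intersection needs only the visible subset of elements.
    
    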

    Confidence-Level-Based New Adaptive Particle Filter for Nonlinear Object Tracking Regular Paper

    Nonlinear object tracking from noisy measurements is a basic skill and a challenging task of mobile robotics, especially in dynamic environments. The particle filter is a useful tool for nonlinear object tracking with non-Gaussian noise, and nonlinear object tracking requires real-time processing from it. However, the number of particles in a traditional particle filter is fixed, which can lead to a great deal of unnecessary computation. To address this issue, a confidence-level-based new adaptive particle filter (NAPF) algorithm is proposed in this paper. The algorithm uses the idea of a confidence interval: the least number of particles for the next time instant is estimated from the confidence level and the variance of the estimated state. Accordingly, an improved systematic resampling algorithm is used in the new particle filter. NAPF effectively reduces computation while ensuring the accuracy of nonlinear object tracking. Simulation results and the robot's ball-tracking results verify the effectiveness of the algorithm.
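    The abstract's core idea is sizing the particle set from a confidence level and the state variance. The paper's exact bound is not reproduced here; a common confidence-interval sizing rule, shown as a hedged sketch, picks the smallest count whose interval half-width stays below a tolerance `eps` (the clamp bounds are assumptions for numerical stability):

    ```python
    import math

    # Sketch (not the paper's exact NAPF bound): choose the particle count so
    # that a confidence interval of half-width `eps` around the state estimate
    # holds at the requested level, given the estimate's standard deviation.
    # z-values for common levels are hard-coded to keep the sketch dependency-free.

    Z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}

    def min_particles(sigma, eps, level=0.95, n_min=50, n_max=5000):
        """Least particle count so the confidence-interval half-width <= eps."""
        n = math.ceil((Z[level] * sigma / eps) ** 2)
        return max(n_min, min(n, n_max))  # clamp for stability

    print(min_particles(sigma=2.0, eps=0.2, level=0.95))  # 385
    ```

    High state variance thus grows the particle set, while a well-localized estimate shrinks it toward the floor, which is where the computational savings over a fixed-size filter come from.
    
    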

    Real time motion estimation using a neural architecture implemented on GPUs

    This work describes a neural-network-based architecture that represents and estimates object motion in videos. This architecture addresses multiple computer vision tasks such as image segmentation, object representation or characterization, motion analysis, and tracking. The use of a neural network architecture allows for the simultaneous estimation of global and local motion and the representation of deformable objects. It also avoids the problem of finding corresponding features while tracking moving objects. Owing to the parallel nature of neural networks, the architecture has been implemented on GPUs, which allows the system to meet requirements such as time-constraint management, robustness, high processing speed, and re-configurability. Experiments are presented that demonstrate the validity of our architecture for solving problems of mobile-agent tracking and motion analysis. This work was partially funded by Spanish Government grant DPI2013-40534-R and Valencian Government grant GV/2013/005.

    Robust individual pig tracking

    The locations of pigs in group housing enable activity monitoring and improve animal welfare. Vision-based methods for tracking individual pigs are noninvasive but have low tracking accuracy owing to long-term pig occlusion. In this study, we developed a vision-based method that accurately tracks individual pigs in group housing. We prepared and labeled datasets taken from an actual pig farm, trained a Faster R-CNN (faster region-based convolutional neural network) to recognize pigs' bodies and heads, and tracked individual pigs across video frames. To quantify the tracking performance, we compared the proposed method with the global optimization (GO) method with the cost function and the simple online and real-time tracking (SORT) method on four additional test datasets that we prepared, labeled, and made publicly available. The predictive model detects pigs' bodies accurately, with F1-scores of 0.75 to 1.00, on the four test datasets. The proposed method achieves the highest multi-object tracking accuracy (MOTA) values, at 0.75, 0.98, and 1.00, on three test datasets; on the remaining dataset it has the second-highest MOTA, of 0.73. The proposed tracking method is robust to long-term occlusion, outperforms the competitive baselines on most datasets, and has practical utility in helping to track individual pigs accurately.
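    The SORT baseline mentioned above associates detections with existing tracks frame to frame, typically by bounding-box overlap. A minimal, hedged sketch of that association step (greedy IoU matching; the full SORT pipeline also uses Kalman prediction and the Hungarian algorithm, omitted here):

    ```python
    # Sketch of SORT-style data association: greedily assign each track ID to
    # its best-overlapping detection by intersection-over-union (IoU).
    # Boxes are (x1, y1, x2, y2). This is a minimal illustration only.

    def iou(a, b):
        """Intersection-over-union of two axis-aligned boxes."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union else 0.0

    def associate(tracks, detections, thr=0.3):
        """Map each track ID to the index of its best detection above thr."""
        assigned, used = {}, set()
        for tid, box in tracks.items():
            best = max(
                (d for d in range(len(detections)) if d not in used),
                key=lambda d: iou(box, detections[d]),
                default=None,
            )
            if best is not None and iou(box, detections[best]) >= thr:
                assigned[tid] = best
                used.add(best)
        return assigned

    tracks = {1: (0, 0, 10, 10), 2: (20, 20, 30, 30)}
    dets = [(21, 21, 31, 31), (1, 1, 11, 11)]
    print(associate(tracks, dets))  # {1: 1, 2: 0}
    ```

    Long-term occlusion breaks exactly this step — the occluded pig produces no detection for many frames, so the IoU link is lost — which is the failure mode the proposed method is designed to survive.
    
    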

    Scalable 3D Tracking of Multiple Interacting Objects

    We consider the problem of tracking multiple interacting objects in 3D, using RGBD input and a hypothesize-and-test approach. Due to their interaction, objects to be tracked are expected to occlude each other in the field of view of the camera observing them. A naive approach would be to employ a Set of Independent Trackers (SIT) and to assign one tracker to each object. This approach scales well with the number of objects but fails as occlusions become stronger, due to its disjoint consideration of the objects. The solution representing the current state of the art employs a single Joint Tracker (JT) that accounts for all objects simultaneously. This directly resolves ambiguities due to occlusions but has a computational complexity that grows geometrically with the number of tracked objects. We propose a middle ground, namely an Ensemble of Collaborative Trackers (ECT), that combines the best traits of both worlds to deliver a practical and accurate solution to the multi-object 3D tracking problem. We present quantitative and qualitative experiments with several synthetic and real-world sequences of diverse complexity. Experiments demonstrate that ECT manages to track far more complex scenes than JT at a computational time that is only slightly larger than that of SIT.

    Collaborative tracking for multiple objects in the presence of inter-occlusions


    Sample and Pixel Weighting Strategies for Robust Incremental Visual Tracking

    In this paper, we introduce the incremental temporally weighted principal component analysis (ITWPCA) algorithm, based on singular value decomposition updates, and the incremental temporally weighted visual tracking with spatial penalty (ITWVTSP) algorithm for robust visual tracking. ITWVTSP uses ITWPCA to incrementally compute a robust low-dimensional subspace representation (model) of the tracked object. Robustness rests on the ability to weight each sample's contribution to the subspace generation, reducing the impact of poor-quality samples and thus the risk of model drift. Furthermore, ITWVTSP can exploit a priori knowledge about important regions of a tracked object: the tracking error is penalized on predefined regions of the object, which increases tracking accuracy. Tests on several challenging video sequences show the robustness and accuracy of the proposed algorithm, as well as its superiority over state-of-the-art techniques.
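    The idea of weighting each sample's contribution to the subspace can be illustrated with a batch (non-incremental) stand-in for ITWPCA; the paper's SVD-update scheme is not reproduced here, and the exponential-forgetting weights are an assumption for the sketch:

    ```python
    import numpy as np

    # Hedged sketch in the spirit of ITWPCA (batch, not the paper's incremental
    # SVD update): each column is one vectorized sample of the tracked object;
    # per-sample weights down-weight old or low-quality samples before a
    # truncated SVD extracts the low-dimensional appearance subspace.

    def weighted_subspace(samples, weights, k):
        """Rank-k orthonormal basis of the weighted, mean-centered samples."""
        w = np.sqrt(np.asarray(weights))           # scale each sample's column
        centered = samples - samples.mean(axis=1, keepdims=True)
        u, _, _ = np.linalg.svd(centered * w, full_matrices=False)
        return u[:, :k]

    rng = np.random.default_rng(0)
    samples = rng.normal(size=(64, 12))            # 12 samples of a 64-dim patch
    weights = 0.9 ** np.arange(11, -1, -1)         # newest sample weighted most
    basis = weighted_subspace(samples, weights, k=3)
    print(basis.shape)  # (64, 3)
    ```

    Down-weighting a sample shrinks its column before the SVD, so an occluded or blurred frame pulls the basis less — the mechanism behind the reduced model-drift risk claimed in the abstract.
    
    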