
    Object tracking based on multiple features

    The main goal of this final degree thesis is the tracking of objects in video sequences based on their color and shape features. This requires a tracking algorithm that locates a target over time given its initial position and estimates its position at later instants. The thesis builds on the classic particle filter based on color features in the RGB space. The first step was to make the algorithm compatible with other color spaces, and HSV was chosen as the alternative. Two papers [2] and [3] were then implemented to add an additional feature to the initial particle filter, extracting more information about the target. One feature that complements color information very well is the orientation of the object, so we extract it using gradient histograms [4]. The thesis also includes a novel technique [3] that exploits the positions of the generated particles: besides detecting the target, it extracts information from the surrounding area, discriminating between what belongs to the target and what belongs to the image background. Particles that provide information about the target are given more weight than those that provide information about the background. The tracker output is compared with manual annotations of the target position (ground truth) through evaluation metrics, where the effectiveness of the algorithm is given by the overlap area between the tracker result and the ground truth. Finally, the results of the initial particle filter, based on color histograms, are compared with those of the filter modified to extract additional information about the target's shape and orientation, as well as about its surroundings. Some improvement can be observed in the results: the more information the tracker has about the target, the better it can detect it and estimate future positions.
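The predict-weight-resample cycle of the color-based particle filter described above can be sketched as follows. This is a minimal stand-in, not the thesis code: `hist_at` is a hypothetical helper that would extract a normalised RGB or HSV histogram from the current frame around a given position, and the Gaussian weighting of the Bhattacharyya distance is one common choice of likelihood.

```python
import math
import random

def bhattacharyya(p, q):
    """Similarity between two normalised histograms (1 = identical)."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

def step_particle_filter(particles, reference_hist, hist_at, sigma=0.2, noise=2.0):
    """One predict-weight-resample cycle of a colour-histogram particle filter.

    particles      : list of (x, y) hypotheses for the target centre
    reference_hist : normalised colour histogram of the target
    hist_at        : callable (x, y) -> normalised histogram around that point
    """
    # Predict: diffuse particles with random-walk motion noise.
    moved = [(x + random.gauss(0, noise), y + random.gauss(0, noise))
             for x, y in particles]
    # Weight: particles whose local histogram matches the reference get a
    # higher weight (Gaussian in the Bhattacharyya distance).
    weights = []
    for x, y in moved:
        d = math.sqrt(max(0.0, 1.0 - bhattacharyya(reference_hist, hist_at(x, y))))
        weights.append(math.exp(-d * d / (2 * sigma * sigma)))
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Estimate: weighted mean of the particle positions.
    estimate = (sum(w * x for w, (x, _) in zip(weights, moved)),
                sum(w * y for w, (_, y) in zip(weights, moved)))
    # Resample: draw new particles proportionally to their weights.
    resampled = random.choices(moved, weights=weights, k=len(moved))
    return resampled, estimate
```

Adding an orientation histogram, as in the thesis, would amount to multiplying a second, gradient-based likelihood into each particle's weight.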

    Self-correcting Bayesian target tracking

    The copyright of this thesis rests with the author, and no quotation from it or information derived from it may be published without the prior written consent of the author.
    Abstract: Visual tracking, a building block for many applications, faces challenges such as occlusions, illumination changes, background clutter and variable motion dynamics that may degrade tracking performance and are likely to cause failures. In this thesis, we propose a Track-Evaluate-Correct framework (self-correction) for existing trackers in order to achieve robust tracking. For a tracker in the framework, we embed an evaluation block to check the tracking quality and a correction block to avoid upcoming failures or to recover from them. We present a generic representation and formulation of self-correcting tracking for Bayesian trackers using a Dynamic Bayesian Network (DBN). Self-correcting tracking proceeds similarly to a self-aware system, where parameters are tuned in the model, or different models are fused or selected in a piece-wise way, in order to deal with tracking challenges and failures. In the DBN representation, parameter tuning, fusion and model selection are driven by evaluation and correction variables that correspond to the evaluation and correction blocks, respectively. The inferences of the variables in the DBN model are used to explain the operation of self-correcting tracking. The specific contributions under the generic self-correcting framework are correlation-based self-correcting tracking for an extended object with model points, and tracker-level fusion, as described below. For improving the probabilistic tracking of an extended object with a set of model points, we use the Track-Evaluate-Correct framework to achieve self-correcting tracking. The framework combines the tracker with an on-line performance measure and a correction technique.
We correlate model point trajectories to improve the accuracy of a failed or uncertain tracker on-line. A model point tracker gets assistance from neighbouring trackers whenever degradation in its performance is detected using the on-line performance measure. The correction of the model point state is based on correlation information from the states of the other trackers. Partial Least Squares regression is used to model the correlation of point tracker states from short windowed trajectories adaptively. Experimental results on data obtained from optical motion-capture systems show the improvement in tracking performance of the proposed framework compared to the baseline tracker and other state-of-the-art trackers. The proposed framework allows appropriate re-initialisation of local trackers to recover from failures caused by clutter and missed detections in the motion-capture data. Finally, we propose a tracker-level fusion framework to obtain self-correcting tracking. The fusion framework combines trackers addressing different tracking challenges to improve the overall performance. As a novelty of the proposed framework, we include an on-line performance measure to identify the track quality level of each tracker and guide the fusion. The trackers in the framework assist each other by appropriately mixing their prior states. Moreover, the track quality level is used to update the target appearance model. We demonstrate the framework with two Bayesian trackers on video sequences with various challenges and show its robustness compared to the independent use of the trackers in the framework, and also compared to other state-of-the-art trackers. The appearance-model update and prior mixing, guided by the on-line performance measure, allow the proposed framework to deal with tracking challenges.
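The correction step for a failed point tracker can be sketched as below. This is an illustrative simplification under stated assumptions: a single neighbouring tracker and one coordinate, with ordinary least squares standing in for the Partial Least Squares regression used in the thesis.

```python
def correct_from_neighbour(window_neigh, window_self, current_neigh):
    """Predict a failed tracker's coordinate from a neighbouring tracker.

    window_neigh : recent positions of a healthy neighbouring tracker
    window_self  : the failed tracker's positions over the same window
    current_neigh: the neighbour's position at the current frame

    Fits a 1-D linear map self ~ a * neigh + b over the short window
    (ordinary least squares here, in place of PLS regression) and
    extrapolates it to the current frame.
    """
    n = len(window_neigh)
    mx = sum(window_neigh) / n
    my = sum(window_self) / n
    sxx = sum((x - mx) ** 2 for x in window_neigh)
    sxy = sum((x - mx) * (y - my) for x, y in zip(window_neigh, window_self))
    a = sxy / sxx if sxx else 0.0
    b = my - a * mx
    return a * current_neigh + b

def track_evaluate_correct(measurement, confidence, threshold,
                           window_neigh, window_self, current_neigh):
    """Keep the tracker's own output while confident; otherwise correct
    the state from the correlated neighbour trajectory."""
    if confidence >= threshold:
        return measurement
    return correct_from_neighbour(window_neigh, window_self, current_neigh)
```

With more neighbours and both coordinates, the regression becomes multivariate, which is where PLS earns its keep on short, correlated windows.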

    Coherent Selection of Independent Trackers for Real-time Object Tracking

    This paper presents a new method for combining several independent and heterogeneous tracking algorithms for the task of online single-object tracking. The proposed algorithm runs several trackers in parallel, each relying on a different set of complementary low-level features. Only one tracker is selected at a given frame, and the choice is based on a spatio-temporal coherence criterion and normalised confidence estimates. The key idea is that the individual trackers are kept completely independent, which reduces the risk of drift in situations where, for example, a tracker with an inaccurate or inappropriate appearance model would negatively impact the performance of the others. Moreover, the proposed approach is able to switch between different tracking methods when the scene conditions or the object appearance change rapidly. We show experimentally, with a set of Online Adaboost-based trackers, that this formulation of multiple trackers improves the tracking results in comparison with more classical combinations of trackers. We further improve the overall performance and computational efficiency by introducing a selective update step in the tracking framework.
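The per-frame selection among independent trackers can be sketched as follows. The scoring rule here is an illustrative stand-in for the paper's criterion: it trades off each tracker's normalised confidence against how close its box stays to the previously selected one (temporal coherence).

```python
def select_tracker(candidates, previous_box, alpha=0.5):
    """Pick one tracker output per frame from independent trackers.

    candidates   : list of (name, box, confidence); box = (x, y, w, h),
                   confidence assumed normalised to [0, 1]
    previous_box : box selected at the previous frame
    alpha        : trade-off between confidence and temporal coherence
    """
    def centre(box):
        x, y, w, h = box
        return (x + w / 2.0, y + h / 2.0)

    px, py = centre(previous_box)
    # Normalise distances by the previous box diagonal.
    diag = (previous_box[2] ** 2 + previous_box[3] ** 2) ** 0.5 or 1.0

    best = None
    for name, box, conf in candidates:
        cx, cy = centre(box)
        dist = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
        coherence = max(0.0, 1.0 - dist / diag)   # 1 = same place, 0 = far away
        score = alpha * conf + (1 - alpha) * coherence
        if best is None or score > best[0]:
            best = (score, name, box)
    return best[1], best[2]
```

Because only the selected output is propagated, a tracker with a drifting appearance model cannot contaminate the others, which is the independence property the paper emphasises.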

    Flash communication pattern analysis of fireflies based on computer vision

    Previous methods for detecting the flashing behavior of fireflies used either a photomultiplier tube, a stopwatch, or videography. These methods suffer from limitations such as errors in data collection and analysis, and they are time-consuming. This study applies a computer vision approach to reduce the time of data collection and analysis compared to videography, by computing illuminance and the time of flash occurrence, optimizing the position coordinates automatically, and tracking each firefly individually. The approach was validated by comparing the flashing data of male fireflies, Sclerotia aquatilis, obtained from the analysis of the behavioral video. The pulse duration, flash interval, and flash patterns of S. aquatilis were similar to a reference study. The accuracy ratio of the tracking algorithm for tracking multiple fireflies was 0.94. The time required to analyze the video decreased by up to 96.82% and 76.91% compared with videography and the stopwatch method, respectively. This program could therefore be employed as an alternative technique for the study of firefly flashing behavior.
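The timing analysis described above reduces, per tracked firefly, to extracting pulses from a per-frame brightness trace. A minimal sketch, assuming a fixed "on" threshold (the actual study computes illuminance from the video; the threshold and trace format here are assumptions):

```python
def flash_events(brightness, threshold, fps):
    """Extract flash pulses from a per-frame brightness trace.

    brightness : illuminance of one tracked firefly, frame by frame
    threshold  : brightness above which the firefly counts as 'on'
    fps        : video frame rate

    Returns (pulses, intervals) where each pulse is
    (onset_time_s, pulse_duration_s) and intervals are the gaps
    between consecutive onsets.
    """
    pulses = []
    start = None
    for i, b in enumerate(brightness):
        if b > threshold and start is None:
            start = i                                  # pulse onset
        elif b <= threshold and start is not None:
            pulses.append((start / fps, (i - start) / fps))
            start = None
    if start is not None:                              # pulse still on at trace end
        pulses.append((start / fps, (len(brightness) - start) / fps))
    intervals = [b[0] - a[0] for a, b in zip(pulses, pulses[1:])]
    return pulses, intervals
```

Pulse duration and flash interval, the two quantities compared against the reference study, fall straight out of the returned tuples.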

    A Fusion Framework for Camouflaged Moving Foreground Detection in the Wavelet Domain

    Detecting camouflaged moving foreground objects is known to be difficult due to the similarity between the foreground objects and the background. Conventional methods cannot distinguish the foreground from the background because of the small differences between them, and thus suffer from under-detection of camouflaged foreground objects. In this paper, we present a fusion framework to address this problem in the wavelet domain. We first show that small differences in the image domain can be highlighted in certain wavelet bands. The likelihood of each wavelet coefficient being foreground is then estimated by formulating foreground and background models for each wavelet band. The proposed framework effectively aggregates the likelihoods from the different wavelet bands based on the characteristics of the wavelet transform. Experimental results demonstrate that the proposed method significantly outperforms existing methods in detecting camouflaged foreground objects. Specifically, the average F-measure for the proposed algorithm was 0.87, compared to 0.71 to 0.8 for the other state-of-the-art methods.
    Comment: 13 pages, accepted by IEEE TI
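The core observation, that a camouflaged difference invisible in the mean intensity shows up in a wavelet detail band, can be illustrated with a one-level 1-D Haar transform. This is a toy version of the per-band modelling and aggregation in the paper; the Gaussian background model with width `sigma` is an assumption for illustration.

```python
def haar_1d(signal):
    """One level of the 1-D Haar transform (signal length must be even)."""
    approx = [(signal[i] + signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    return approx, detail

def foreground_evidence(pixels, background, sigma=1.0):
    """Per-position foreground evidence, aggregated across Haar bands.

    pixels, background : same-length intensity rows (even length)
    Scores how far each observed coefficient deviates from the
    background coefficient (squared, in units of sigma), then sums
    the evidence over the approximation and detail bands.
    """
    band_scores = []
    for obs_band, bg_band in zip(haar_1d(pixels), haar_1d(background)):
        band_scores.append([((o - b) / sigma) ** 2 for o, b in zip(obs_band, bg_band)])
    # Aggregate the two bands per coefficient position.
    return [a + d for a, d in zip(band_scores[0], band_scores[1])]
```

In the test below, the camouflaged patch `[11, 9]` has the same mean as the background `[10, 10]`, so the approximation band sees nothing, but the detail band flags it.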

    An Efficient GA Based Detection Approach for Visual Surveillance System

    ABSTRACT: Nowadays, for intelligent surveillance systems, identification of an object from video has attracted a great deal of interest. To detect an object from a video, one needs to perform some segmentation technique. In real-time applications, object segmentation and identification are two essential building blocks of a smart surveillance system. In addition, some conditions make video object detection difficult, such as non-rigid object motion, target appearance variations due to changes in illumination, and background clutter. The proposed method handles multiple moving objects against a moving background and is based on a genetic algorithm. The video is preprocessed before segmentation. Motion segmentation is performed to segment an object from the video, and a genetic algorithm is used for motion detection. A non-maximum suppression filter is proposed to remove unwanted object motion. This result is then used for object identification. Cellular-automata-based segmentation is performed to detect a particular object from the video. This method can detect any object under any drastic change in illumination.
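The non-maximum suppression step mentioned above is, in its standard greedy form, the following duplicate-removal pass over scored candidate regions. This sketch assumes `(x, y, w, h)` boxes with scalar scores; it is the textbook NMS procedure, not the paper's exact filter.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def non_max_suppression(detections, overlap_thresh=0.5):
    """Keep the strongest detection in each cluster of overlapping boxes.

    detections : list of (score, (x, y, w, h)) candidate motion regions
    Greedy pass: visit boxes by descending score, keeping a box only if
    it does not overlap an already-kept box beyond the threshold.
    """
    kept = []
    for score, box in sorted(detections, key=lambda d: d[0], reverse=True):
        if all(iou(box, k) <= overlap_thresh for _, k in kept):
            kept.append((score, box))
    return kept
```

Applied to the motion-segmentation output, this collapses clusters of near-duplicate motion regions into a single box per moving object before identification.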