4 research outputs found

    Online Parameter Tuning for Object Tracking Algorithms

    Object tracking quality usually depends on video scene conditions (e.g., illumination, density of objects, object occlusion level). To overcome this limitation, this article presents a new control approach that adapts the object tracking process to scene condition variations. More precisely, the approach learns how to tune the tracker parameters to cope with tracking context variations. The tracking context, or context, of a video sequence is defined as a set of six features: density of mobile objects, their occlusion level, their contrast with regard to the surrounding background, their contrast variance, their 2D area, and their 2D area variance. In an offline phase, training video sequences are classified by clustering their contextual features. Each context cluster is then associated with satisfactory tracking parameters. In the online control phase, once a context change is detected, the tracking parameters are tuned using the learned values. The approach has been experimented with three different tracking algorithms on long, complex video datasets. This article brings two significant contributions: (1) a classification method of video sequences to learn offline tracking parameters and (2) a new method to tune online tracking parameters using the tracking context.
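The offline/online split described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the cluster centroids, six-feature values, and parameter names (`search_radius`, `appearance_weight`) are made-up assumptions standing in for the learned database.

```python
import math

# Hypothetical learned database: context-cluster centroids over the six
# features (object density, occlusion level, contrast, contrast variance,
# 2D area, 2D area variance) mapped to satisfactory tracker parameters.
LEARNED_CLUSTERS = [
    ([0.2, 0.1, 0.8, 0.05, 0.3, 0.02], {"search_radius": 15, "appearance_weight": 0.7}),
    ([0.8, 0.6, 0.4, 0.20, 0.1, 0.10], {"search_radius": 40, "appearance_weight": 0.3}),
]

def tune_parameters(context):
    """Online phase: return the parameters of the nearest learned cluster."""
    centroid, params = min(LEARNED_CLUSTERS,
                           key=lambda c: math.dist(c[0], context))
    return params

# A dense, heavily occluded scene matches the second cluster.
params = tune_parameters([0.9, 0.5, 0.3, 0.18, 0.15, 0.12])
print(params["search_radius"])  # 40
```

In the paper's scheme this lookup would only run when a context change is detected, rather than on every frame.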

    Multi-Object Tracking of Pedestrian Driven by Context

    Characteristics such as the density of objects, their contrast with respect to the surrounding background, and their occlusion level describe the context of the scene. Variation of this context poses an ambiguous task for the tracker to solve. In this paper we present a new long-term tracking framework boosted by the context around each tracklet. The framework first learns, offline, a database of optimal tracker parameters for various contexts. During testing, the context surrounding each tracklet is extracted and matched against the database to select the best tracker parameters. The tracker parameters are tuned for each tracklet in the scene to highlight its discrimination with respect to the surrounding context, rather than tuning the parameters for the whole scene. The proposed framework is trained on 9 public video sequences and tested on 3 unseen sets. It outperforms state-of-the-art pedestrian trackers in scenarios of motion changes, appearance changes, and occlusion of objects.
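The per-tracklet tuning idea, as opposed to one setting for the whole scene, can be sketched like this. The context descriptor (a neighbour count), the database keys, and the `association_threshold` parameter are illustrative assumptions, not the paper's actual features or parameters.

```python
# Hypothetical offline-learned database: context label -> tracker parameters.
PARAM_DB = {
    "sparse":  {"association_threshold": 0.4},
    "crowded": {"association_threshold": 0.7},
}

def context_of(tracklet):
    # Toy context extraction: label a tracklet by its local object density.
    return "crowded" if tracklet["neighbours"] > 3 else "sparse"

def assign_parameters(tracklets):
    # Each tracklet is matched against the database independently.
    return {t["id"]: PARAM_DB[context_of(t)] for t in tracklets}

tracklets = [{"id": 1, "neighbours": 5}, {"id": 2, "neighbours": 1}]
print(assign_parameters(tracklets))
# {1: {'association_threshold': 0.7}, 2: {'association_threshold': 0.4}}
```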

    Tracker-Independent Drift Detection and Correction Using Segmented Objects and Features

    Object tracking has been an active research topic in the field of video processing. However, automated object tracking, under uncontrolled environments, is still difficult to achieve and encounters various challenges that cause the tracker to drift away from the target object. Object tracking methods with fixed models, predefined prior to the tracking task, normally fail because of the inevitable appearance changes that can be either object- or environment-related. To effectively handle object or environment tracking challenges, recent powerful tracking approaches are learning-based, meaning they learn object appearance changes while tracking online. The output of such trackers is, however, limited to a bounding box representation, the center of which is considered the estimated object location. Such a bounding box may not provide accurate foreground/background discrimination and may not handle highly non-rigid objects. Moreover, the bounding box may not surround the object completely, or it may not be centered around it, which affects the accuracy of the overall tracking process. Our main objective in this work is to reduce drifts of state-of-the-art tracking algorithms (trackers) using object segmentation so as to produce a more accurate bounding box. To enhance the quality of state-of-the-art trackers, this work investigates two main avenues: first, tracker-independent drift detection and correction using object features; and second, selection of the best-performing parameters of Graph Cut object segmentation and of support vector machines using an artificial immune system. In addition, this work proposes a framework for the evaluation and ranking of different trackers using easily interpretable performance measures, in a way that accounts for the presence of outliers.
For tracker-independent drift detection, we use either saliency features or objectness measures. Using saliency, the ratio of the salient region corresponding to the target object with respect to the estimated bounding box is used to indicate the occurrence of tracking drift, with no prior information about the target model. With objectness measures, we use both the relative area and the score of the detected candidate boxes according to the objectness measure to indicate the occurrence of tracking drift. For drift correction, we investigate the application of object segmentation to the estimated bounding box to re-locate it around the target object. Due to its ability to reach a globally near-optimal solution, we use the Graph Cut object segmentation method. We modify the Graph Cut model to incorporate an automatic seed selection module based on interest points, in addition to a template mask, to automatically initialize the segmentation across frames. However, the integration of segmentation in the tracking loop carries a computational burden. In addition, the segmentation quality might be affected by tracking challenges, such as motion blur and occlusion. Accordingly, object segmentation is applied only when a drift is detected. Simulation results show that the proposed approach improves the tracking quality of five recent trackers. Researchers often use long and tedious trial-and-error approaches to determine the best-performing parameter configuration of a video-processing algorithm, particularly given the diverse nature of video sequences. However, such a configuration does not guarantee the best performance. Little research attention has been given to studying an algorithm's sensitivity to its parameters. The artificial immune system is an emergent, biologically motivated computing paradigm that has the ability to reach optimal or near-optimal solutions through mutation and cloning.
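The saliency-based drift check described above can be sketched as a simple ratio test: if too few pixels inside the estimated bounding box are salient, flag a drift. The binary saliency map, the box layout, and the 0.5 threshold are illustrative assumptions, not values from the thesis.

```python
def saliency_ratio(saliency_map, box):
    """box = (row, col, height, width); map entries are 0/1 saliency flags."""
    r, c, h, w = box
    region = [saliency_map[i][j] for i in range(r, r + h) for j in range(c, c + w)]
    return sum(region) / len(region)

def drift_detected(saliency_map, box, threshold=0.5):
    # Drift is signalled when the salient fraction of the box is too low.
    return saliency_ratio(saliency_map, box) < threshold

# Toy 4x4 map: the salient object sits in the top-left corner.
smap = [[1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
print(drift_detected(smap, (0, 0, 2, 2)))  # False: box covers the object
print(drift_detected(smap, (2, 2, 2, 2)))  # True: box drifted onto background
```

In the thesis, a detected drift would then trigger the Graph Cut segmentation step to re-locate the box; here only the detection half is shown.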
This work proposes the use of an artificial immune system for the selection of the best-performing parameters of two video-processing algorithms: support vector machines for object tracking and Graph Cut-based object segmentation. An increasing number of trackers are being developed, and when introducing a new tracker, it is important to facilitate its evaluation and ranking in relation to others, using easy-to-interpret performance measures. Recent studies have shown that some measures are correlated and cannot reflect the different aspects of tracking performance when used individually. In addition, they do not incorporate robust statistics to account for the presence of outliers that might lead to insignificant results. This work proposes a framework for effective scoring and ranking of different trackers by using less correlated quality metrics, coupled with an estimator that is robust against dispersion. In addition, a unified performance index is proposed to facilitate the evaluation process.
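The mutation-and-cloning mechanism of the artificial immune system named above can be sketched with a minimal clonal-selection loop. The toy objective function and the parameter ranges are assumptions for illustration; the thesis applies the paradigm to real SVM and Graph Cut parameters, not to this synthetic function.

```python
import random

random.seed(0)

def fitness(params):
    # Toy objective: performance peaks at C = 2.0, gamma = 0.5 (assumed).
    c, g = params
    return -((c - 2.0) ** 2 + (g - 0.5) ** 2)

def clonal_selection(generations=50, pop_size=8, clones=4):
    # Random initial population of candidate parameter vectors ("antibodies").
    pop = [(random.uniform(0, 10), random.uniform(0, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        pool = list(pop)  # elitism: parents compete with their clones
        for c, g in pop:
            for _ in range(clones):  # clone each antibody and mutate it
                pool.append((c + random.gauss(0, 0.3), g + random.gauss(0, 0.05)))
        pop = sorted(pool, key=fitness, reverse=True)[:pop_size]
    return pop[0]

best = clonal_selection()
print(best)  # converges near (2.0, 0.5)
```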

    Exploration of cyber-physical systems for GPGPU computer vision-based detection of biological viruses

    This work presents a method for computer vision-based detection of biological viruses in PAMONO sensor images and, related to this, methods to explore cyber-physical systems such as those consisting of the PAMONO sensor, the detection software, and processing hardware. The focus is especially on an exploration of Graphics Processing Unit (GPU) hardware for “General-Purpose computing on Graphics Processing Units” (GPGPU) software, and the targeted systems are high-performance servers, desktop systems, mobile systems, and hand-held systems. The first problem that is addressed and solved in this work is to automatically detect biological viruses in PAMONO sensor images. PAMONO is short for “Plasmon Assisted Microscopy Of Nano-sized Objects”. The images from the PAMONO sensor are very challenging to process. The signal magnitude and spatial extension from attaching viruses are small, and the signal is not visible to the human eye on raw sensor images. Compared to the signal, the noise magnitude in the images is large, resulting in a small Signal-to-Noise Ratio (SNR). With the VirusDetectionCL method for computer vision-based detection of viruses, presented in this work, automatic detection and counting of individual viruses in PAMONO sensor images has been made possible. A data set of 4000 images can be evaluated in less than three minutes, whereas a manual evaluation by an expert can take up to two days. As the most important result, sensor signals with a median SNR of two can be handled. This enables the detection of particles down to 100 nm. The VirusDetectionCL method has been realized as GPGPU software. The PAMONO sensor, the detection software, and the processing hardware form a so-called cyber-physical system. For different PAMONO scenarios, e.g., using the PAMONO sensor in laboratories, hospitals, airports, and in mobile scenarios, one or more cyber-physical systems need to be explored.
Depending on the particular use case, the demands on the cyber-physical system differ. This leads to the second problem for which a solution is presented in this work: how can existing software with several degrees of freedom be automatically mapped to a selection of hardware architectures with several hardware configurations to fulfill the demands on the system? Answering this question is a difficult task, especially when several possibly conflicting objectives, e.g., quality of the results, energy consumption, and execution time, have to be optimized. An extensive exploration of different software and hardware configurations is expensive and time-consuming. Sometimes it is not even possible, e.g., if the desired architecture is not yet available on the market or the design space is too big to be explored manually in reasonable time. A Pareto-optimal selection of software parameters, hardware architectures, and hardware configurations has to be found. To achieve this, three parameter and design space exploration methods have been developed, named SOG-PSE, SOG-DSE, and MOGEA-DSE. MOGEA-DSE is the most advanced of the three. It enables a multi-objective, energy-aware, measurement-based or simulation-based exploration of cyber-physical systems. This can be done in a hardware/software codesign manner. In addition, offloading of tasks to a server and approximate computing can be taken into account. With the simulation-based exploration, systems that do not yet exist can be explored. This is useful if a system should be equipped, e.g., with the next generation of GPUs. Such an exploration can reveal bottlenecks of the existing software before new GPUs are bought. With MOGEA-DSE the overall goal, to develop a method to automatically explore suitable cyber-physical systems for different PAMONO scenarios, could be achieved.
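The Pareto-optimal selection mentioned above can be sketched as a simple dominance filter over candidate configurations. The configuration names and objective values (error, energy, time; lower is better for each) are invented for illustration and are not results from this work.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one. Objectives are minimized."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(configs):
    # Keep each configuration that no other configuration dominates.
    return [c for c in configs
            if not any(dominates(other[1], c[1]) for other in configs if other is not c)]

configs = [
    ("gpu_fast",   (0.02, 9.0, 1.0)),
    ("gpu_saving", (0.03, 4.0, 2.5)),
    ("cpu_only",   (0.03, 5.0, 6.0)),  # dominated by gpu_saving
]
print([name for name, _ in pareto_front(configs)])  # ['gpu_fast', 'gpu_saving']
```

The exploration methods in this work additionally search the configuration space itself; this sketch only shows the final dominance filtering step.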
As a result, rapid, reliable detection and counting of viruses in PAMONO sensor data on systems ranging from high-performance servers through desktops and laptops down to hand-held systems has been made possible. The fact that this could be achieved even for a small, hand-held device is the most important result of MOGEA-DSE. With the automatic parameter and design space exploration, 84% of the energy could be saved on the hand-held device compared to a baseline measurement. At the same time, a speedup of four and an F-1 quality score of 0.995 could be obtained. The speedup enables live processing of the sensor data on the embedded system with very high detection quality. With this result, viruses can be detected and counted on a mobile, hand-held device in less than three minutes, with real-time visualization of results. This opens up completely new possibilities for biological virus detection that were not possible before.