
    Frontal-view gait recognition by intra- and inter-frame rectangle size distribution

    Full text link
    Current trends suggest that gait is a suitable biometric feature for human identification, at least in a multimodal system. In addition to being a robust feature, gait is hard to fake and requires no cooperation from the user. As in many video systems, the recognition confidence depends on the camera's angle of view and on the illumination conditions, inducing a sensitivity to operational conditions that one may wish to reduce. In this paper, we present an efficient approach capable of recognizing people in frontal-view video sequences. The approach uses an intra-frame description of silhouettes that consists of a set of rectangles fitting into any closed silhouette. A dynamic, inter-frame dimension is then added by aggregating the size distributions of these rectangles over multiple successive frames. For each new frame, the inter-frame gait signature is updated and used to estimate the identity of the person detected in the scene. Finally, to smooth the decision on the identity, a majority vote is applied to previous results. In the final part of this article, we provide experimental results and discuss the classification accuracy for our own database of 21 known persons and for a public database of 25 persons.
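    As a hedged illustration of the idea of aggregating per-frame size distributions into an inter-frame signature, the sketch below uses horizontal run lengths of the silhouette as a crude stand-in for the paper's rectangle widths; the function names and the choice of runs are assumptions, not the paper's exact descriptor.

```python
from collections import Counter

def horizontal_runs(mask):
    """Lengths of horizontal foreground runs in a binary silhouette mask
    (a simplified proxy for the widths of rectangles fitting the silhouette)."""
    runs = []
    for row in mask:
        length = 0
        for v in row:
            if v:
                length += 1
            elif length:
                runs.append(length)
                length = 0
        if length:
            runs.append(length)
    return runs

def gait_signature(frames):
    """Inter-frame signature: normalized size histogram aggregated over
    multiple successive frames, updated as new frames arrive."""
    hist = Counter()
    for mask in frames:
        hist.update(horizontal_runs(mask))
    total = sum(hist.values())
    return {size: n / total for size, n in hist.items()}
```

A nearest-neighbor comparison of such signatures, smoothed by a majority vote over recent frames, would mirror the decision scheme the abstract describes.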

    What are the optimal walking tests to assess disability progression?

    Full text link
    Background. Therapy success is assumed when there is no evidence of disease activity. Clues include MRI, the relapse history, questionnaires, and clinical measures to assess disability progression. Gait analysis plays a major role in particular, as gait impairment is considered by patients to be the most disabling symptom. Too often, only the walking speed is measured. New technologies (e.g., GAIMS, see ECTRIMS 2012-15) measure many spatiotemporal gait parameters, even during long tests (e.g., 6 min, 500 m), without equipping patients with markers or sensors. Moreover, various tests can be performed, depending on the length and type of walk (comfortable pace --C--, as fast as possible --F--, tandem gait --T--). Objective. Determine whether there is an advantage to performing various walking tests, and which test or combination of tests brings the highest amount of information about the patient's state in a reasonable acquisition time. Methods. The GAIMS system provided 434 recordings of the gait parameters of healthy people and 60 recordings of MS patients with EDSS <= 4. They performed 12 tests (25ft C+F+T, each twice; 20m C+F+T; 100m C+F; 500m F). To assess the ability of these clinical outcome measures to detect disability progression, we evaluate the possibility of differentiating the persons below a given EDSS threshold (0.25) from those above it, based only on the measured gait parameters. For individual tests, we use the classifier of Azrour (ESANN 2014). All subsets of the tests are also considered, by combining the individual classifiers and automatically determining the optimal relative importance of the tests with the linear support vector machine (SVM) technique. The ability to detect disability progression is quantified by the performance (area under the ROC curve --AUC-- and the maximum achievable balanced accuracy --MBA--) of the corresponding classifiers. Results.
The best single test is the 500m F (note that the walking speed measured during it is the gait parameter best correlated with the EDSS). Combining several tests leads to better performance. A performance (MBA = 95.7%, AUC = 0.983) close to the best achievable one can be obtained with only 6 tests (25ft C twice, 25ft F twice, 20m C, 20m T). Conclusions. Clinical gait analysis can help to detect disability progression. While considering different types of walking tests improves the ability to take decisions, we showed that performing 6 tests, for a total of 70.48 m, suffices.
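The AUC figures above can be computed directly from classifier scores; a minimal, paper-independent sketch uses the rank-based (Mann-Whitney) identity: the AUC equals the probability that a randomly chosen positive scores higher than a randomly chosen negative.

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney identity:
    fraction of (positive, negative) pairs ranked correctly,
    counting ties as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.983, as reported for the 6-test combination, means almost every patient above the EDSS threshold is scored higher than almost every person below it.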

    A New Three Object Triangulation Algorithm for Mobile Robot Positioning

    Full text link
    Positioning is a fundamental issue in mobile robot applications. It can be achieved in many ways. Among them, triangulation based on angles measured with the help of beacons is a proven technique. Most of the many triangulation algorithms proposed so far have major limitations. For example, some of them need a particular beacon ordering, have blind spots, or only work within the triangle defined by the three beacons. More reliable methods exist; however, they are more complex or require handling certain spatial arrangements separately. In this paper, we present a simple and new three object triangulation algorithm, named ToTal, that natively works in the whole plane and for any beacon ordering. We also provide a comprehensive comparison between many algorithms, and show that our algorithm is faster and simpler than comparable algorithms. In addition to its inherent efficiency, our algorithm provides a very useful and unique reliability measure, assessable anywhere in the plane, which can be used to identify pathological cases, or as a validation gate in Kalman filters.
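    ToTal itself is not reproduced here, but the problem it solves can be sketched with a naive numerical baseline: given the bearings of three known beacons, search for the position whose inter-beacon angle differences (which do not depend on the robot's unknown heading) best match the measurements. The beacon layout, grid extent, and resolution below are illustrative assumptions.

```python
import math

def bearings(pos, beacons):
    """Angles at which each beacon is seen from position pos."""
    x, y = pos
    return [math.atan2(by - y, bx - x) for bx, by in beacons]

def angle_diff(a, b):
    """Smallest signed difference between two angles, in (-pi, pi]."""
    d = (a - b) % (2 * math.pi)
    return d if d <= math.pi else d - 2 * math.pi

def triangulate(measured, beacons, span=10.0, steps=200):
    """Brute-force grid search minimizing the mismatch of inter-beacon
    angle differences; a slow baseline, not the closed-form ToTal method."""
    target = [angle_diff(measured[i], measured[0]) for i in (1, 2)]
    best, best_err = None, float("inf")
    for i in range(steps + 1):
        for j in range(steps + 1):
            p = (-span + 2 * span * i / steps, -span + 2 * span * j / steps)
            b = bearings(p, beacons)
            d = [angle_diff(b[k], b[0]) for k in (1, 2)]
            err = sum(angle_diff(u, v) ** 2 for u, v in zip(d, target))
            if err < best_err:
                best, best_err = p, err
    return best
```

A closed-form algorithm like ToTal replaces this search with a constant-time geometric construction, which is what makes it suitable for real-time use inside a Kalman filter loop.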

    Morphological erosions and openings: fast algorithms based on anchors

    Full text link
    Several efficient algorithms for computing erosions and openings have been proposed recently. They improve on van Herk's algorithm in terms of the number of comparisons for large structuring elements. In this paper, we introduce a theoretical framework of anchors that aims at a better understanding of the processes involved in the computation of erosions and openings. It is shown that the knowledge of the opening anchors of a signal f is sufficient to perform both the erosion and the opening of f. We then propose an algorithm for one-dimensional erosions and openings which exploits opening anchors. This algorithm improves on the fastest algorithms available in the literature by approximately 30% in terms of computation speed, for a range of structuring element sizes and image contents.
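    For context, one classic O(n) baseline for 1-D erosion with a flat structuring element is the monotonic-deque sliding minimum; this is not the anchor-based algorithm of the paper, only the kind of algorithm it competes with.

```python
from collections import deque

def erode_1d(signal, k):
    """1-D morphological erosion by a flat structuring element of size k:
    each output sample is the minimum over a window of k input samples.
    Uses a deque of candidate indices whose values are kept increasing,
    so each element is pushed and popped at most once (O(n) overall).
    An opening would follow this erosion with the dual dilation (max)."""
    out = []
    q = deque()  # indices of candidate minima
    for i, v in enumerate(signal):
        while q and signal[q[-1]] >= v:
            q.pop()
        q.append(i)
        if q[0] <= i - k:
            q.popleft()  # candidate slid out of the window
        if i >= k - 1:
            out.append(signal[q[0]])
    return out
```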

    ViBe: A universal background subtraction algorithm for video sequences

    Full text link
    This paper presents a technique for motion detection that incorporates several innovative mechanisms. For example, our proposed technique stores, for each pixel, a set of values taken in the past at the same location or in the neighborhood. It then compares this set to the current pixel value in order to determine whether that pixel belongs to the background, and adapts the model by choosing randomly which values to substitute from the background model. This approach differs from those based on the classical belief that the oldest values should be replaced first. Finally, when the pixel is found to be part of the background, its value is propagated into the background model of a neighboring pixel. We describe our method in full detail (including pseudo-code and the parameter values used) and compare it to other background subtraction techniques. Efficiency figures show that our method outperforms recent and proven state-of-the-art methods in terms of both computation speed and detection rate. We also analyze the performance of a downscaled version of our algorithm, reduced to the absolute minimum of one comparison and one byte of memory per pixel. It appears that even such a simplified version of our algorithm performs better than mainstream techniques. A dedicated web page for ViBe is available at http://www.telecom.ulg.ac.be/research/vibe
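    The per-pixel mechanisms described above (a sample set, a match test, and random replacement instead of oldest-first replacement) can be sketched for a single pixel as follows. The parameter values are illustrative rather than the paper's calibrated settings, and the spatial propagation step into neighboring pixels is omitted for brevity.

```python
import random

class ViBePixel:
    """Single-pixel sketch of a sample-based background model.
    N samples, a match radius R, a minimum match count, and a random
    subsampling factor PHI (illustrative values, see the paper for tuning)."""
    N, R, MIN_MATCHES, PHI = 20, 20, 2, 16

    def __init__(self, first_value):
        # Bootstrap the model from the first observed value.
        self.samples = [first_value] * self.N

    def classify_and_update(self, value):
        matches = sum(1 for s in self.samples if abs(s - value) < self.R)
        is_background = matches >= self.MIN_MATCHES
        if is_background and random.randrange(self.PHI) == 0:
            # Replace a randomly chosen sample, not the oldest one.
            self.samples[random.randrange(self.N)] = value
        return is_background
```

In the full algorithm, a background pixel also overwrites, with the same probability, one sample in the model of a random neighbor, which is what lets the model absorb small camera jitter and ghost regions.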

    Summarizing the performances of a background subtraction algorithm measured on several videos

    Full text link
    There exist many background subtraction algorithms to detect motion in videos. To help compare them, datasets with ground-truth data such as CDNET or LASIESTA have been proposed. These datasets organize videos in categories that represent typical challenges for background subtraction. The evaluation procedure promoted by their authors consists in measuring performance indicators for each video separately and averaging them hierarchically, first within a category, then between categories; a procedure which we name "summarization". While summarization by averaging performance indicators is a valuable effort to standardize the evaluation procedure, it has no theoretical justification and it breaks the intrinsic relationships between summarized indicators. This leads to interpretation inconsistencies. In this paper, we present a theoretical approach to summarizing the performances for multiple videos that preserves the relationships between performance indicators. In addition, we give formulas and an algorithm to calculate the summarized performances. Finally, we showcase our observations on CDNET 2014.
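    The inconsistency of averaging can be made concrete with a small numeric illustration using hypothetical counts: the average of per-video F1 scores differs from the F1 score computed on the pooled counts, so the choice of summarization changes the conclusions one might draw. (Neither naive scheme is the paper's proposal; this only shows why averaging breaks the relationships between indicators.)

```python
def f1(tp, fp, fn):
    """F1 score from true positives, false positives, false negatives."""
    return 2 * tp / (2 * tp + fp + fn)

# Hypothetical (TP, FP, FN) counts for two videos of very different sizes.
videos = [(90, 10, 10), (5, 1, 50)]

# Summarization 1: average the per-video indicators.
avg_f1 = sum(f1(*v) for v in videos) / len(videos)

# Summarization 2: compute the indicator on the summed counts.
tp, fp, fn = (sum(c) for c in zip(*videos))
pooled_f1 = f1(tp, fp, fn)

# The two values disagree markedly, which is the inconsistency at stake.
```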

    Semantic Background Subtraction

    Full text link
    We introduce the notion of semantic background subtraction, a novel framework for motion detection in video sequences. The key innovation is to leverage object-level semantics to address the variety of challenging scenarios for background subtraction. Our framework combines the information of a semantic segmentation algorithm, expressed as a probability for each pixel, with the output of any background subtraction algorithm to reduce the false positive detections produced by illumination changes, dynamic backgrounds, strong shadows, and ghosts. In addition, it maintains a fully semantic background model to improve the detection of camouflaged foreground objects. Experiments conducted on the CDNet dataset show that we significantly improve almost all background subtraction algorithms of the CDNet leaderboard, and reduce the mean overall error rate of all 34 algorithms (resp. of the best 5 algorithms) by roughly 50% (resp. 20%). A C++ implementation of the framework is available at http://www.telecom.ulg.ac.be/semantic
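    A hedged sketch of a per-pixel combination rule in the spirit described above; the threshold names and values are illustrative assumptions, not the paper's calibrated settings.

```python
def semantic_bgs(bgs_label, semantic_prob, background_prob,
                 tau_bg=0.1, tau_fg=0.4):
    """Combine a BGS decision (0/1) with semantic information:
    - a pixel whose semantic foreground probability is very low is forced
      to background, suppressing false positives from illumination changes,
      shadows, and ghosts;
    - a pixel whose probability clearly exceeds the value stored in the
      semantic background model is forced to foreground (camouflage);
    - otherwise, the decision of the underlying BGS algorithm is kept."""
    if semantic_prob <= tau_bg:
        return 0  # background
    if semantic_prob - background_prob >= tau_fg:
        return 1  # foreground
    return bgs_label
```

Because the rule only overrides the BGS output in the two confident cases, it can wrap any background subtraction algorithm without retraining it, which is why the abstract reports improvements across nearly the whole leaderboard.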

    LaBGen: A method based on motion detection for generating the background of a scene

    Full text link
    Given a video sequence acquired with a fixed camera, generating the stationary background of the scene is a challenging problem whose goal is to compute a reference image of the motionless background. For that purpose, we developed a method named LaBGen, which emerged as the best one during the Scene Background Modeling and Initialization (SBMI) workshop organized in 2015 and the IEEE Scene Background Modeling Contest (SBMC) organized in 2016. LaBGen combines a pixel-wise temporal median filter with a patch selection mechanism based on motion detection. To detect motion, a background subtraction algorithm decides, for each frame, which pixels belong to the background. In this paper, we describe the LaBGen method extensively, evaluate it on the SBI 2016 dataset, and compare its performance with that of other background generation methods. We also study its computational complexity, the sensitivity of its performance with respect to its parameters, and the stability of the predicted background image over time with respect to the chosen background subtraction algorithm. We provide an open-source C++ implementation at http://www.telecom.ulg.ac.be/labgen
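    The interaction between the temporal median and motion detection can be sketched at pixel level; LaBGen itself selects patches rather than single pixels, so this is a simplification under stated assumptions.

```python
from statistics import median

def background_pixel(values, motion_flags):
    """Estimate the stationary background value of one pixel: take the
    temporal median over the frames a motion detector labeled as
    background, falling back to the plain temporal median when the pixel
    was never seen as background (e.g., permanently occluded)."""
    kept = [v for v, moving in zip(values, motion_flags) if not moving]
    return median(kept) if kept else median(values)
```

Restricting the median to motion-free frames is what lets the method discard foreground objects that linger long enough to bias a plain temporal median.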