16 research outputs found

    Keypoints-based background model and foreground pedestrian extraction for future smart cameras

    In this paper, we present a method for background modeling using only keypoints, and for detection of foreground moving pedestrians using background-keypoint subtraction followed by AdaBoost classification of the foreground keypoints. A first experimental evaluation shows very promising real-time detection performance.
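The keypoint-level "background subtraction" described above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: descriptors are toy vectors, and a frame keypoint is flagged as foreground when no background descriptor lies within a distance threshold (the AdaBoost classification stage is omitted).

```python
import numpy as np

def foreground_keypoints(frame_desc, background_desc, max_dist=0.3):
    """Flag frame keypoints with no sufficiently similar background
    descriptor as foreground (keypoint-based background subtraction)."""
    # pairwise Euclidean distances, shape (n_frame, n_background)
    d = np.linalg.norm(frame_desc[:, None, :] - background_desc[None, :, :],
                       axis=-1)
    return d.min(axis=1) > max_dist  # True -> candidate pedestrian keypoint

# toy 4-D descriptors: two background points, one near-match, one outlier
background = np.array([[0.0, 0.0, 0.0, 0.0],
                       [1.0, 1.0, 1.0, 1.0]])
frame = np.array([[0.05, 0.0, 0.0, 0.0],   # close to background -> static
                  [3.0, 3.0, 3.0, 3.0]])   # far from both -> foreground
mask = foreground_keypoints(frame, background)
print(mask)  # [False  True]
```

In the paper's pipeline, keypoints surviving this test would then be passed to the AdaBoost classifier to decide whether they belong to a pedestrian.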

    Interest points harvesting in video sequences for efficient person identification

    We propose and evaluate a new approach for identification of persons, based on harvesting of interest-point descriptors in video sequences. By accumulating interest points on several sufficiently time-spaced images during person silhouette or face tracking within each camera, the collected interest points capture appearance variability. Our method can in particular be applied to global person re-identification in a network of cameras. We present a first experimental evaluation conducted on a publicly available set of videos in a commercial mall, with very promising inter-camera pedestrian re-identification performance (a precision of 82% for a recall of 78%). Our matching method is very fast: ~1/8 s for re-identification of one target person among 10 previously seen persons, with a logarithmic dependence on the number of stored person models, making re-identification among hundreds of persons computationally feasible in less than ~1/5 s. Finally, we also present a first feasibility test for on-the-fly face recognition, with encouraging results.
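The logarithmic dependence on the number of stored models suggests a tree-based nearest-neighbor index over all harvested descriptors (the related thesis abstract in this listing mentions a KD-tree). A minimal sketch of that matching-and-voting scheme, with made-up labels and random stand-in descriptors rather than real interest-point data:

```python
import numpy as np
from scipy.spatial import cKDTree

def build_gallery(person_descriptors):
    """Stack every stored descriptor into one KD-tree; query time then
    grows roughly logarithmically with the number of stored points."""
    labels, stacks = [], []
    for pid, desc in person_descriptors.items():
        stacks.append(desc)
        labels.extend([pid] * len(desc))
    return cKDTree(np.vstack(stacks)), np.array(labels)

def reidentify(query_desc, tree, labels):
    """Each query keypoint votes for the person owning its nearest
    stored descriptor; the majority vote is the re-identification."""
    _, idx = tree.query(query_desc, k=1)
    ids, counts = np.unique(labels[idx], return_counts=True)
    return ids[np.argmax(counts)]

# toy gallery: two persons with well-separated synthetic descriptors
rng = np.random.default_rng(1)
gallery = {"alice": rng.normal(0.0, 0.1, (20, 16)),
           "bob":   rng.normal(1.0, 0.1, (20, 16))}
tree, labels = build_gallery(gallery)
query = rng.normal(1.0, 0.1, (10, 16))  # new sighting resembling "bob"
best = reidentify(query, tree, labels)
print(best)  # -> bob
```

Accumulating descriptors from time-spaced frames, as the abstract describes, simply adds more rows per person to the gallery before the tree is built.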

    Intelligent video surveillance: person re-identification by signatures built from interest-point descriptors collected over video sequences

    We present and evaluate a person re-identification method for multi-camera surveillance systems. Our approach matches signatures built from interest-point descriptors collected over short video sequences. One originality of our work is to accumulate interest points at sufficiently time-spaced instants during person tracking, so that the signature captures each person's appearance variability. A first experimental evaluation was carried out on a public set of low-resolution recordings in a shopping mall, and the re-identification performance is very promising (a precision of 82% for a recall of 78%). Moreover, our re-identification technique is particularly fast: ~1/8 s for a query compared against 10 previously seen persons, and above all a logarithmic dependence on the number of stored models, so that re-identification among thousands of persons would take less than ¼ s of computation.

    Person re-identification in multi-camera system by signature based on interest point descriptors collected on short video sequences

    We present and evaluate a person re-identification scheme for multi-camera surveillance systems. Our approach matches signatures based on interest-point descriptors collected over short video sequences. One of the originalities of our method is to accumulate interest points on several sufficiently time-spaced images during person tracking within each camera, in order to capture appearance variability. A first experimental evaluation conducted on a publicly available set of low-resolution videos in a commercial mall shows very promising inter-camera person re-identification performance (a precision of 82% for a recall of 78%). It should also be noted that our matching method is very fast: ~1/8 s for re-identification of one target person among 10 previously seen persons, with a logarithmic dependence on the number of stored person models, making re-identification among hundreds of persons computationally feasible in less than ~1/5 s.

    Image subset communication for resource-constrained applications in wireless sensor networks


    Detection and recognition of end-of-speed-limit and supplementary signs for improved European speed limit support

    We present two new features for our prototype European Speed Limit Support system: detection and recognition of end-of-speed-limit signs, as well as a framework for detection and recognition of supplementary signs located below main signs that modify their scope (particular lane, class of vehicle, etc.). The end-of-speed-limit signs are globally recognized by a Multi-Layer Perceptron (MLP) neural network. The supplementary signs are detected by applying rectangle detection in a region below recognized speed-limit signs, followed by MLP neural network recognition. A common French+German end-of-speed-limit sign recognizer has been designed and successfully tested, yielding an 82% combined detection and recognition rate. Results for detection and recognition of a first kind of supplementary sign (French exit-lane) are already satisfactory (78% correct detection rate), and our framework can easily be extended to handle other types of supplementary signs. To our knowledge, we are the first team presenting results on detection and recognition of supplementary signs below speed signs, which is a crucial feature for reliable Speed Limit Support.
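The recognition stage in both features is an MLP classifier over a cropped sign image. A minimal forward-pass sketch of such a network (toy dimensions and untrained random weights, purely illustrative; the paper's actual architecture and training are not specified here):

```python
import numpy as np

def mlp_softmax(x, W1, b1, W2, b2):
    """One-hidden-layer MLP: tanh hidden units, softmax over sign classes."""
    h = np.tanh(x @ W1 + b1)           # hidden layer
    logits = h @ W2 + b2               # class scores
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()

# toy setup: 9 "pixel" inputs, 5 hidden units, 3 hypothetical sign classes
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(9, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 3)), np.zeros(3)
p = mlp_softmax(rng.random(9), W1, b1, W2, b2)
print(p.sum(), p.argmax())  # probabilities sum to 1; argmax picks the class
```

In the described pipeline, the input vector would come from the rectangle detected below a recognized speed-limit sign, and the argmax class would identify the supplementary sign type.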

    Pedestrian detection and re-identification using interest points between disjoint cameras

    Thesis supervisor: Fabien Moutarde. With the development of video protection, the number of deployed cameras is increasing rapidly. To exploit these videos effectively, it is essential to develop tools that automate monitoring, or at least part of its analysis. One of the difficult and poorly solved problems in this area is tracking people across a large space (metro, shopping center, airport, etc.) covered by a network of non-overlapping cameras. In this thesis, we propose and experiment with a new method for the re-identification of pedestrians between disjoint cameras. Our technique is based on the detection and accumulation (during tracking within one camera) of interest points characterized by a local descriptor. We first present and evaluate a keypoint-based method for modeling a scene background and detecting new (moving) objects in it. We then present and evaluate our method for identifying a person by matching the interest points found in several images. One of the originalities of our method is to accumulate interest points on sufficiently time-spaced images during person tracking, in order to capture appearance variability. We report quantitative results on the performance of such a system to allow an objective comparison with other features (SIFT, color, HOG). Finally, we propose and test possible improvements, particularly for the automatic selection of the instants or interest points, so as to obtain for each individual a set of points that is both as varied as possible and as discriminative as possible with respect to other people. This probabilistic variant of our method improves performance considerably, reaching 95% first-rank correct identification among 40 persons, which is above the state of the art.

    Pedestrian detection and re-identification using interest points between non-overlapping cameras

    With the development of video protection, the number of deployed cameras is increasing rapidly. To exploit these videos effectively, it is essential to design surveillance-support tools that at least partially automate their analysis. One of the difficult problems is tracking people across a large space (metro, shopping center, airport, etc.) covered by a network of non-overlapping cameras. In this thesis, we propose and experiment with a new method for the re-identification of pedestrians between disjoint cameras. Our technique is based on the detection and accumulation of interest points characterized by a local descriptor. We first propose and evaluate a method that uses interest points for scene modeling and then for moving-object detection. Person re-identification is then performed by collecting a set of interest points during a time window, and searching for each of them its most similar correspondent among all previously recorded descriptors, stored in a KD-tree. Finally, we propose and test possible improvements, in particular for the automatic selection of the instants or interest points, so as to obtain for each individual a set of points that is both as varied as possible and as discriminative as possible with respect to other people. The re-identification performance of our algorithm, about 95% first-rank correct identification among 40 persons, exceeds the state of the art, as well as the results obtained in our comparisons with other descriptors (color histogram, HOG, SIFT).