4 research outputs found

    Autonomous real-time surveillance system with distributed IP cameras

    Get PDF
    An autonomous Internet Protocol (IP) camera-based object tracking and behaviour identification system, capable of running in real time on an embedded system with limited memory and processing power, is presented in this paper. The main contribution of this work is the integration of processor-intensive image processing algorithms on an embedded platform capable of running in real time to monitor the behaviour of pedestrians. The Algorithm Based Object Recognition and Tracking (ABORAT) system architecture presented here was developed on an Intel PXA270-based development board clocked at 520 MHz. The platform was connected to a commercial stationary IP-based camera in a remote monitoring station for intelligent image processing. The system is capable of detecting moving objects and their shadows in a complex environment with varying lighting intensity and moving foliage. Objects moving close to each other are also detected, and their trajectories are extracted and fed into an unsupervised neural network for autonomous classification. The novel intelligent video system presented is also capable of performing simple analytic functions such as tracking and generating alerts when objects enter or leave regions or cross tripwires superimposed on the live video by the operator.
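    The tripwire alerts mentioned above amount to a segment-intersection test between an object's frame-to-frame motion and an operator-drawn line. A minimal sketch of that check follows; the function names and centroid representation are illustrative assumptions, not the ABORAT API.

    ```python
    # Tripwire crossing check: did an object's centroid move across an
    # operator-defined line segment between two frames?

    def _side(p, a, b):
        """Sign of the cross product: which side of segment a-b point p lies on."""
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

    def crossed_tripwire(prev, curr, wire_a, wire_b):
        """True if an object moving from prev to curr crossed the wire a-b."""
        # The two centroids must lie on strictly opposite sides of the wire...
        if _side(prev, wire_a, wire_b) * _side(curr, wire_a, wire_b) >= 0:
            return False
        # ...and the wire endpoints on opposite sides of the motion segment.
        return _side(wire_a, prev, curr) * _side(wire_b, prev, curr) < 0

    # An object moving left-to-right across a vertical tripwire at x = 5:
    print(crossed_tripwire((3, 4), (7, 4), (5, 0), (5, 10)))  # True
    print(crossed_tripwire((3, 4), (4, 4), (5, 0), (5, 10)))  # False
    ```

    The same sign test, applied against a region's polygon edges, covers the enter/leave alerts as well.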

    Ultra-Low Power IoT Smart Visual Sensing Devices for Always-ON Applications

    Get PDF
    This work presents the design of a Smart Ultra-Low Power visual sensor architecture that couples an ultra-low power event-based image sensor with a parallel, power-optimized digital architecture for data processing. By means of mixed-signal circuits, the imager generates a stream of address events after the extraction and binarization of spatial gradients. When targeting monitoring applications, the sensing and processing energy costs can be reduced by two orders of magnitude thanks to the combination of mixed-signal imaging technology, event-based data compression and event-driven computing approaches. From a system-level point of view, a context-aware power management scheme is enabled by means of a power-optimized sensor peripheral block, which requests processor activation only when relevant information is detected within the focal plane of the imager. When targeting a smart visual node for triggering purposes, the event-driven approach brings a 10x power reduction with respect to other presented visual systems, while leading to comparable results in terms of detection accuracy. To further enhance the recognition capabilities of the smart camera system, this work introduces the concept of event-based binarized neural networks. By coupling the theory of binarized neural networks with focal-plane processing, a 17.8% energy reduction is demonstrated on a real-world data classification task, with a performance drop of 3% with respect to a baseline system featuring commercial visual sensors and a Binary Neural Network engine. Moreover, when the BNN engine is coupled with the event-driven triggering detection flow, the average power consumption can be as low as the sleep power of 0.3 mW in the case of infrequent events, which is 8x lower than a smart camera system featuring a commercial RGB imager.
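    The efficiency of binarized neural networks comes from replacing multiply-accumulate operations with XNOR and popcount. A minimal sketch of that arithmetic, using plain Python integers as bit vectors, is shown below; the paper's event-based BNN engine is hardware, so this only illustrates the underlying computation.

    ```python
    # XNOR-popcount dot product at the heart of a binarized neural
    # network layer. Bit = 1 encodes +1, bit = 0 encodes -1.

    W = 8  # width of the binary vectors (illustrative)

    def bin_dot(x_bits, w_bits, width=W):
        """Dot product of two {-1, +1} vectors stored as bit masks.

        For each position the product is +1 when the bits match and -1
        otherwise, so: dot = matches - mismatches
                           = width - 2 * popcount(x XOR w)
        """
        mismatches = bin(x_bits ^ w_bits).count("1")
        return width - 2 * mismatches

    # Identical vectors give +width; complementary vectors give -width.
    print(bin_dot(0b10110010, 0b10110010))  # 8
    print(bin_dot(0b10110010, 0b01001101))  # -8
    ```

    In hardware, the XOR and popcount each cost a single gate stage per bit, which is where the reported energy savings over full-precision arithmetic originate.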

    Visual object tracking in a network of embedded smart cameras

    Get PDF
    Multi-object tracking constitutes a major step in several computer vision applications. The requirements of these applications in terms of performance, processing time, energy consumption and ease of deployment make the use of low-power embedded platforms essential. In this thesis, we designed a multi-object tracking system that achieves real-time processing on a low-cost, low-power embedded smart camera. The tracking pipeline was extended to work in a network of cameras with non-overlapping fields of view. The tracking pipeline is composed of a detection module based on a background subtraction method and a tracker using the probabilistic Gaussian Mixture Probability Hypothesis Density (GMPHD) filter. The background subtraction method we developed combines the segmentation produced by the Zipfian Sigma-Delta method with the gradient of the input image. This combination allows reliable detection with low computational complexity. The output of the background subtraction is processed by a connected-components analysis algorithm to extract the features of moving objects. These features are used as input to an improved version of the GMPHD filter, since the original GMPHD filter does not handle occlusions. We therefore integrated two new modules into the GMPHD filter to handle occlusions between objects. When no occlusion occurs, the motion features of the objects are used for tracking. When an occlusion is detected, the appearance features of the objects, represented by grey-level histograms, are saved and used for re-identification at the end of the occlusion. The proposed tracking pipeline was optimized and implemented on an embedded smart camera composed of the Raspberry Pi version 1 board and the RaspiCam camera module. The results show that, besides the low complexity of the pipeline, the tracking quality of our method is close to state-of-the-art methods. A frame rate of 15–30 fps was achieved on the smart camera, depending on the image resolution.
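    The Sigma-Delta family of background estimators on which the detection stage builds nudges a per-pixel background model by one grey level per update, which keeps the cost low enough for an embedded target. Below is a minimal sketch of one such step; the Zipfian variant additionally schedules pixel updates at power-of-two frame intervals, which is omitted here, and the parameter names are illustrative.

    ```python
    # Per-pixel Sigma-Delta background estimation: the background mean
    # and an activity (variance) estimate each move one step toward
    # their target per frame, so the cost is a few integer ops per pixel.
    import numpy as np

    def sigma_delta_step(frame, mean, var, n=2):
        """One update; frame, mean, var are int arrays in the 0-255 range."""
        # Nudge the background estimate one grey level toward the frame.
        mean += np.sign(frame - mean)
        diff = np.abs(frame - mean)
        # Track a slow estimate of n times the temporal activity.
        var += np.sign(n * diff - var)
        np.clip(var, 2, 255, out=var)
        # A pixel is foreground when it deviates more than the activity.
        return diff > var

    frame = np.array([[10, 10], [10, 200]], dtype=np.int32)
    mean = np.full((2, 2), 10, dtype=np.int32)
    var = np.full((2, 2), 2, dtype=np.int32)
    fg = sigma_delta_step(frame, mean, var)
    print(fg)  # only the pixel that jumped to 200 is flagged
    ```

    Combining this mask with the image gradient, as the thesis describes, suppresses the ghosting that a purely temporal estimator produces on slowly adapting backgrounds.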
In the second part of the thesis, we designed a distributed approach for multi-object tracking in a network of non-overlapping cameras. The approach builds on the fact that each camera in the network runs a GMPHD filter as its tracker. It is based on a probabilistic formulation that models the correspondence between objects as an appearance probability and a space-time probability. The appearance of an object is represented by an m-dimensional vector, which can be regarded as a histogram. The space-time features are the transition time between two input-output regions in the network and the transition probability from one region to another. The transition time is modelled as a Gaussian distribution with known mean and covariance. The distributed nature of the proposed approach allows tracking across the network with little communication between the cameras. Several simulations were performed to validate the approach, and its complexity was analysed. The results obtained are promising for the use of this approach in a real network of smart cameras.
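The correspondence score described above is the product of an appearance term and a space-time term. The following is a hedged sketch of that formulation; the Bhattacharyya coefficient for histogram similarity and all function names are illustrative assumptions, not the thesis's exact implementation.

```python
# Cross-camera re-identification score: appearance probability
# (histogram similarity) times space-time probability (Gaussian
# likelihood of the observed transition time, weighted by the
# region-to-region transition probability).
import math

def bhattacharyya(h1, h2):
    """Similarity of two normalized m-bin appearance histograms, in [0, 1]."""
    return sum(math.sqrt(a * b) for a, b in zip(h1, h2))

def transition_likelihood(dt, mean_dt, std_dt):
    """Gaussian density of a transition time dt between two regions."""
    z = (dt - mean_dt) / std_dt
    return math.exp(-0.5 * z * z) / (std_dt * math.sqrt(2.0 * math.pi))

def match_score(h1, h2, dt, mean_dt, std_dt, p_region):
    """Appearance probability x space-time probability."""
    return (bhattacharyya(h1, h2)
            * transition_likelihood(dt, mean_dt, std_dt)
            * p_region)

h = [0.25, 0.25, 0.25, 0.25]
# Same appearance, transition time exactly at the learned mean:
score = match_score(h, h, dt=4.0, mean_dt=4.0, std_dt=1.0, p_region=0.8)
print(round(score, 4))
```

Because each camera only needs to broadcast a histogram and an exit timestamp when an object leaves its field of view, the inter-camera communication stays small, consistent with the distributed design described above.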

    Embedded Smart Camera for High Speed Vision

    No full text