5 research outputs found

    Multi-step Multi-camera View Planning for Real-Time Visual Object Tracking

    Abstract. We present a new method for planning the optimal next view in a probabilistic visual object tracking task. Our method uses a variable number of cameras, can plan an action sequence several time steps into the future, and allows for real-time usage because its computation time is linear in both the number of cameras and the number of time steps. The algorithm can also handle object loss in one, several, or all cameras, interdependencies in the cameras' information contributions, and variable action costs. We evaluate our method by comparing it to previous approaches on a prerecorded sequence of real-world images. From K. Franke et al., Pattern Recognition, 28th DAGM Symposium, Springer, 2006, pp. 536–545.
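The abstract states the complexity property but not the algorithm itself. As a hedged illustration of why per-camera, per-step greedy action selection yields a cost linear in both the number of cameras and the number of planning steps, consider this minimal sketch (the `gain` and `cost` models and all names are hypothetical, not the paper's method):

```python
# Hypothetical sketch of multi-step multi-camera view planning.
# For each future time step, every camera independently picks the
# action maximizing (expected information gain - action cost), so
# total work is O(num_steps * num_cameras * num_actions): linear
# in both the number of cameras and the number of time steps.

def plan_views(cameras, actions, gain, cost, num_steps):
    """gain(cam, act, t) and cost(cam, act) are problem-specific models."""
    plan = []
    for t in range(num_steps):
        step = {}
        for cam in cameras:
            step[cam] = max(actions,
                            key=lambda a: gain(cam, a, t) - cost(cam, a))
        plan.append(step)
    return plan
```

Note that a purely independent per-camera choice ignores the interdependencies in the cameras' information contributions that the paper explicitly handles; this sketch only illustrates the linear-cost structure.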

    Low and Variable Frame Rate Face Tracking Using an IP PTZ Camera

    ABSTRACT (translated from the French résumé). In computer vision, object tracking with PTZ cameras has applications in various fields, such as video surveillance, traffic monitoring, people monitoring, and face recognition. However, more accurate, efficient, and reliable tracking is required for routine use in these fields. In this thesis, tracking is applied to the upper body of a human, including the face. Face tracking determines the location of the face in each frame of a video; it can be used to obtain images of a human face in different poses. In this work, we propose to track a human face using an IP PTZ camera (a steerable network camera). An IP PTZ camera responds to commands via its built-in web server and allows distributed access over the Internet. Tracking with this type of camera involves a good number of challenges, such as irregular response times to control commands, low and irregular frame rates, large target motions between two frames, occlusions, changes in the field of view, scale changes, etc. In our work, we aim to solve the problems of large target motions between two consecutive frames, low frame rate, background changes, and tracking under various scale changes. In addition, the tracking algorithm must anticipate the camera's irregular response times. Our solution consists of an initialization phase to model the target (upper body), an adaptation of the particle filter that uses optical flow to generate samples at each frame (APF-OFS), and camera control. Each component requires different strategies. During initialization, the camera is assumed to be static, so motion detection by background subtraction is used to detect the person's initial location. Then, to remove false positives, a Bayesian classifier is applied to the detected region in order to locate skin regions. Next, face detection based on the Viola–Jones method is performed on the skin regions. If a face is detected, tracking of the person's upper body is launched.
    ABSTRACT. Object tracking with PTZ cameras has various applications in different computer vision topics such as video surveillance, traffic monitoring, people monitoring, and face recognition. Accurate, efficient, and reliable tracking is required for this task. Here, object tracking is applied to human upper-body tracking and face tracking. Face tracking determines the location of the human face in each input image of a video. It can be used to get images of the face of a human target under different poses. We propose to track the human face by means of an Internet Protocol (IP) Pan-Tilt-Zoom (PTZ) camera, i.e. a network-based camera that pans, tilts, and zooms. An IP PTZ camera responds to commands via its integrated web server and allows distributed access from the Internet (access from everywhere, but with undefined delay). Tracking with such a camera includes many challenges, such as irregular response times to camera control commands, a low and irregular frame rate, large motions of the target between two frames, target occlusion, a changing field of view (FOV), various scale changes, etc. In our work, we want to cope with the problems of large inter-frame motion of targets, low usable frame rate, background changes, and tracking under various scale changes. In addition, the tracking algorithm should handle the camera response time and zooming. Our solution consists of a system initialization phase (the processing before the camera moves), a tracker based on an Adaptive Particle Filter using Optical-Flow-based Sampling (APF-OFS), and camera control (the processing after the camera moves). Each part requires different strategies. For initialization, while the camera is stationary, static-camera motion detection is used to find the initial location of a person's face entering the area: a background subtraction method is applied in the camera's FOV. Then, to remove false positives, a Bayesian skin classifier is applied to the detected motion region to discriminate skin regions from non-skin regions. Finally, face detection based on the Viola–Jones detector is performed on the detected skin regions, independently of face size and position within the image.
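The initialization pipeline (background subtraction, then Bayesian skin classification, then Viola–Jones face detection) hinges on a per-pixel likelihood-ratio test in the skin-classification step. A minimal sketch of that test, with made-up likelihood models standing in for the colour histograms a real system would learn from training data:

```python
# Minimal sketch of a Bayesian skin classifier (hypothetical models):
# a pixel colour c is labelled skin when
#   P(c|skin) * P(skin)  >  threshold * P(c|non-skin) * P(non-skin).
# Real systems estimate P(c|skin) and P(c|non-skin) from colour
# histograms of labelled training pixels.

def classify_skin(pixel, p_skin_given, p_nonskin_given,
                  prior_skin=0.3, threshold=1.0):
    """pixel: an (R, G, B) tuple; the likelihood functions map a
    pixel colour to a probability density."""
    numerator = p_skin_given(pixel) * prior_skin
    denominator = p_nonskin_given(pixel) * (1.0 - prior_skin)
    return numerator > threshold * denominator
```

With toy likelihoods (skin assumed red-dominant), a reddish pixel such as `(200, 120, 100)` passes the test while `(60, 120, 100)` does not; the threshold trades false positives against false negatives.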

    Systems and Algorithms for Automated Collaborative Observation using Networked Robotic Cameras

    The development of telerobotic systems has evolved from Single Operator Single Robot (SOSR) systems to Multiple Operator Multiple Robot (MOMR) systems. The relationship between human operators and robots follows the master–slave control architecture, and the requests for controlling robot actuation are generated entirely by human operators. Recently, fast-evolving advances in network and computer technologies, together with the decreasing size and cost of sensors and robots, enable us to further extend the MOMR system architecture to incorporate heterogeneous components such as humans, robots, sensors, and automated agents, with requests for controlling robot actuation generated by all participants. We term this the MOMR++ system. However, to reach the system's full potential and performance, many technical challenges must be addressed. In this dissertation, we address two major challenges in MOMR++ system development. We first address the robot coordination and planning issue in an autonomous crowd surveillance application. The system consists of multiple robotic pan-tilt-zoom (PTZ) cameras assisted by a fixed wide-angle camera. The wide-angle camera provides an overview of the scene and detects moving objects, which the PTZ cameras then observe in close-up views. When applied to pedestrian surveillance and compared to previous work, the system increases the number of observed objects by over 210% in heavy-traffic scenarios. The key issue here is, given a limited number p (p > 0) of PTZ cameras and many more observation requests n (n >> p), how to coordinate the cameras to best satisfy all the requests. We formulate this as a new camera resource allocation problem.
    Given p cameras, n observation requests, and an approximation bound ε, we develop an approximation algorithm running in O(n/ε³ + p²/ε⁶) time, and an exact algorithm, for p = 2, running in O(n³) time. We then address the automatic object content analysis and recognition issue in an autonomous rare-bird-species detection system. We set up the system in the forest near Brinkley, Arkansas. The camera monitors the sky, detects motion, and preserves video data only for the targeted bird species. During the one-year search, the system reduced 29.41 TB of raw video data to only 146.7 MB (a reduction rate of 99.9995%). The key issue here is to automatically recognize the flying bird species. We verify the bird's body-axis dynamics with an extended Kalman filter (EKF) and compare the bird's dynamic state with prior knowledge of the targeted bird species. We quantify the recognition uncertainty due to measurement uncertainty and develop a novel Probable Observation Data Set (PODS)-based EKF method. In experiments with real video data, the algorithm achieves 95% area under the receiver operating characteristic (ROC) curve. Through the exploration of these two MOMR++ systems, we conclude that the new MOMR++ architecture enables a much wider range of participants, enhances collaboration and interaction so that information can be exchanged among participants, suppresses the chance of individual bias or mistakes in the observation process, and further frees humans from the control/observation process by providing automatic control/observation. The new MOMR++ system architecture is a promising direction for future telerobotics advances.
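The PODS-based EKF is the dissertation's own contribution and is not reproduced here. As background for the body-axis verification idea, a plain Kalman filter step for a bird's body-axis angle under an assumed constant-angular-velocity model looks roughly like this (a generic textbook filter; all noise parameters are illustrative, not the dissertation's):

```python
# Illustrative sketch (not the PODS-EKF): a linear Kalman filter
# tracking a body-axis angle.  State x = (angle, angular_velocity);
# only the angle is measured.  P is the 2x2 covariance stored as
# the flat tuple (p00, p01, p10, p11).  q, r are made-up noise values.

def kf_step(x, P, z, dt=1.0, q=1e-3, r=1e-2):
    a, w = x
    # Predict with F = [[1, dt], [0, 1]]: angle advances by w*dt.
    a_pred, w_pred = a + w * dt, w
    p00, p01, p10, p11 = P
    # P <- F P F^T + Q  (process noise q on the diagonal).
    p00 = p00 + dt * (p10 + p01) + dt * dt * p11 + q
    p01 = p01 + dt * p11
    p10 = p10 + dt * p11
    p11 = p11 + q
    # Update with measurement z of the angle (H = [1, 0]).
    s = p00 + r                      # innovation variance
    k0, k1 = p00 / s, p10 / s        # Kalman gain
    y = z - a_pred                   # innovation
    a_new, w_new = a_pred + k0 * y, w_pred + k1 * y
    P_new = (p00 - k0 * p00, p01 - k0 * p01,
             p10 - k1 * p00, p11 - k1 * p01)
    return (a_new, w_new), P_new
```

Fed with repeated measurements of a steady angle, the estimate converges to that angle while the velocity estimate settles near zero; the recognition step described in the abstract would then compare such filtered dynamics against the target species' prior motion model.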

    Estimació del moviment de robots mitjançant contorns actius [Robot motion estimation using active contours]

    (Translated from the Catalan.) This thesis concerns the estimation of a mobile robot's motion from the changes in the images captured by a camera mounted on the robot. The motion is deduced with an algorithm previously proposed in the framework of qualitative navigation. In order to use this algorithm in real cases, a study of its accuracy has been carried out; to increase its applicability, the algorithm has been adapted to the case of a camera with orientation and zoom movements. When perspective effects are not important, two views of a scene captured by the robot can be related by an affine transformation (or affinity), normally computed from point correspondences. In this thesis we follow an alternative, and at the same time complementary, approach, using the silhouette of an object modelled by an active contour. The framework is the following: as the robot moves, the projection of the object in the image changes and the active contour deforms to adapt to it; from the deformations of this contour, expressed in shape space, the robot's motion can be extracted up to a scale factor. Active contours are characterized by the speed of their extraction and their robustness to partial occlusions. Moreover, a contour is easy to find even in poorly textured scenes, where it is often difficult to find feature points and their correspondences. The first part of this work aims to characterize the accuracy and uncertainty of the motion estimation. To evaluate the accuracy, a couple of practical experiments are first carried out, showing the potential of the algorithm in real environments and with different robots. By studying the epipolar geometry relating two views of a planar object, it is shown that the affine epipolar direction can be recovered when the camera motion is free of cyclorotation. With a battery of experiments, both simulated and real, the epipolar direction is used to characterize the global accuracy of the affinity in different situations, such as different contour shapes, extreme viewing conditions, and noise in the system. Regarding uncertainty, since the implementation is based on the Kalman filter, each motion estimate comes with an estimate of its associated uncertainty, expressed in shape space. To propagate the uncertainty from shape space to 3D motion space, two different paths have been followed: one analytical and one statistical. This study has made it possible to determine which degrees of freedom are recovered with more accuracy and which correlations exist between the different components. Finally, an algorithm has been developed that propagates the motion uncertainty at video rate. One of the most important limitations of this methodology is that the projection of the object must remain inside the image, under weak-perspective viewing conditions, throughout the sequence. In the second part of this work, active-contour tracking is studied within the framework of active vision in order to overcome this limitation. The relation is natural, since active-contour tracking can be seen as a technique for fixing the focus of attention. First, the properties of zooming cameras are studied and a new algorithm is proposed to determine the depth of the camera with respect to an arbitrary object. The algorithm includes a simple geometric calibration that requires no knowledge of the camera's internal parameters. Finally, in order to orient the camera appropriately, compensating as far as possible for the robot's movements, an algorithm has been developed to control the zoom, pan, and tilt mechanisms, and the motion-estimation algorithm has been adapted to incorporate the known pan and tilt rotations.
    This thesis deals with the motion estimation of a mobile robot from changes in the images acquired by a camera mounted on the robot itself. The motion is deduced with an algorithm previously proposed in the framework of qualitative navigation. In order to employ this algorithm in real situations, a study of its accuracy has been performed. Moreover, relationships with the active vision paradigm have been analyzed, leading to an increase in its applicability. When perspective effects are not significant, two views of a scene are related by an affine transformation (or affinity), which is usually computed from point correspondences. In this thesis we explore an alternative and at the same time complementary approach, using the contour of an object modeled by means of an active contour. The framework is the following: when the robot moves, the projection of the object in the image changes and the active contour adapts to it; from the deformation of this contour, expressed in shape space, the robot egomotion can be extracted up to a scale factor. Active contours are characterized by the speed of their extraction and their robustness to partial occlusions. Moreover, a contour is easy to find even in poorly textured scenes, where it is often difficult to find point features and their correspondences. The goal of the first part of this work is to characterize the accuracy and the uncertainty of the motion estimation. Some practical experiments are carried out to evaluate the accuracy, showing the potential of the algorithm in real environments and with different robots. We have also studied the epipolar geometry relating two views of a planar object, and we prove that the affine epipolar direction between two images can be recovered from a shape vector when the camera motion is free of cyclorotation. With a battery of simulated as well as real experiments, the epipolar direction allows us to analyze the global accuracy of the affinity in a variety of situations: different contour shapes, extreme visualization conditions, and presence of noise. Regarding uncertainty, since the implementation is based on a Kalman filter, for each motion estimate we also have its covariance matrix expressed in shape space. In order to propagate the uncertainty from shape space to 3D motion space, two different approaches have been followed: an analytical one and a statistical one. This study has allowed us to determine which degrees of freedom are recovered with more accuracy, and which correlations exist between the different motion components. Finally, an algorithm to propagate the motion uncertainty at video rate has been proposed. One of the most important limitations of this methodology is that the object must project onto the image under weak-perspective visualization conditions all along the sequence. In the second part of this work, active-contour tracking is studied within the framework of active vision to overcome this limitation. Both relate naturally, as active-contour tracking can be seen as a focus-of-attention strategy. First, the properties of zooming cameras are studied and a new algorithm is proposed to estimate the depth of the camera with respect to an object. The algorithm includes a simple geometric calibration that does not require any knowledge of the camera's internal parameters. Finally, in order to orient the camera so as to suitably compensate for robot motion when possible, a new algorithm has been proposed for the control of the zoom, pan, and tilt mechanisms, and the motion-estimation algorithm has been updated to incorporate the active camera state information.
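The thesis recovers egomotion from active-contour deformations expressed in shape space, which is not reproduced here. As a simpler, related illustration of the underlying affinity, the affine transformation relating two weak-perspective views can be recovered exactly from three point correspondences (a hypothetical helper, not the thesis's shape-space method):

```python
# Hedged sketch: recover the affinity x' = a*x + b*y + tx,
# y' = c*x + d*y + ty from three non-collinear point
# correspondences, via Cramer's rule on a 3x3 system.

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def affinity_from_points(src, dst):
    """src, dst: three (x, y) pairs. Returns ((a, b, tx), (c, d, ty))."""
    M = [[x, y, 1.0] for (x, y) in src]
    D = det3(M)  # nonzero iff the three source points are non-collinear
    rows = []
    for k in (0, 1):                 # k=0 solves the x'-row, k=1 the y'-row
        rhs = [p[k] for p in dst]
        coeffs = []
        for col in range(3):         # Cramer's rule, one unknown per column
            Mc = [row[:] for row in M]
            for r in range(3):
                Mc[r][col] = rhs[r]
            coeffs.append(det3(Mc) / D)
        rows.append(tuple(coeffs))
    return tuple(rows)
```

In practice one would use many contour sample points and a least-squares fit for robustness to noise; the three-point version only shows that six correspondence coordinates determine the six affine parameters.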

    Zoom on Target While Tracking

    No full text