
    Real-Time, Multiple Pan/Tilt/Zoom Computer Vision Tracking and 3D Positioning System for Unmanned Aerial System Metrology

    The study of structural characteristics of Unmanned Aerial Systems (UASs) continues to be an important field of research for developing state-of-the-art nano/micro systems. Development of a metrology system using computer vision (CV) tracking and 3D point extraction would provide an avenue for advancing these theoretical developments. This work provides a portable, scalable system capable of real-time tracking, zooming, and 3D position estimation of a UAS using multiple cameras. Current state-of-the-art photogrammetry systems use retro-reflective markers or single-point lasers to obtain object poses and/or positions over time. Using a CV pan/tilt/zoom (PTZ) system has the potential to circumvent their limitations. The system developed in this paper exploits parallel processing and the GPU for CV tracking, using optical flow and known camera motion, in order to capture a moving object with two PTU cameras. The parallel-processing technique developed in this work is versatile, allowing other CV methods to be tested with a PTZ system using known camera motion. Using the known camera poses, the object's 3D position is estimated, and focal lengths are estimated so that the object fills the image to a desired amount. The system is tested against truth data obtained using an industrial system.
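
    The 3D position estimation step described in this abstract can be illustrated with a standard linear triangulation from two cameras with known poses. The sketch below is a generic direct linear transform, not the paper's implementation; the intrinsics, camera poses, and pixel detections are placeholder values.

        # Minimal sketch (not the authors' code): linear triangulation of a target's
        # 3D position from pixel detections in two calibrated cameras with known poses.
        # All calibration values and pixel coordinates below are placeholders.
        import numpy as np

        def projection_matrix(K, R, t):
            """Build the 3x4 projection matrix P = K [R | t]."""
            return K @ np.hstack([R, t.reshape(3, 1)])

        def triangulate(P1, P2, uv1, uv2):
            """Direct linear transform: solve A X = 0 for the homogeneous 3D point."""
            u1, v1 = uv1
            u2, v2 = uv2
            A = np.vstack([
                u1 * P1[2] - P1[0],
                v1 * P1[2] - P1[1],
                u2 * P2[2] - P2[0],
                v2 * P2[2] - P2[1],
            ])
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]
            return X[:3] / X[3]  # de-homogenize

        # Example with placeholder calibration: identical intrinsics, 1 m baseline.
        K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
        P1 = projection_matrix(K, np.eye(3), np.zeros(3))
        P2 = projection_matrix(K, np.eye(3), np.array([-1.0, 0, 0]))  # camera shifted 1 m
        point = triangulate(P1, P2, (700.0, 360.0), (640.0, 360.0))
        print(point)  # estimated 3D position of the tracked object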

    Low and Variable Frame Rate Face Tracking Using an IP PTZ Camera

    Object tracking with PTZ cameras has applications in various computer vision topics such as video surveillance, traffic monitoring, people monitoring, and face recognition. Accurate, efficient, and reliable tracking is required for these tasks. Here, object tracking is applied to human upper-body and face tracking. Face tracking determines the location of the human face in each input image of a video; it can be used to obtain images of the face of a human target under different poses. We propose to track the human face by means of an Internet Protocol (IP) Pan-Tilt-Zoom (PTZ) camera, i.e. a network-based camera that pans, tilts, and zooms. An IP PTZ camera responds to commands via its integrated web server and allows distributed access from the Internet (access from anywhere, but with undefined delay). Tracking with such a camera includes many challenges, such as irregular response times to camera control commands, a low and irregular frame rate, large motions of the target between two frames, target occlusion, a changing field of view (FOV), and various scale changes.
    In our work, we address the problems of large inter-frame target motion, low usable frame rate, background changes, and tracking under various scale changes. In addition, the tracking algorithm should handle the camera response time and zooming. Our solution consists of a system initialization phase (the processing performed before the camera moves), a tracker based on an Adaptive Particle Filter using Optical-Flow-based Sampling (APF-OFS), and camera control (the processing performed after the camera moves). Each part requires different strategies. For initialization, while the camera is stationary, motion detection for a static camera is used to detect the initial location of a person entering the area: a background subtraction method is applied to detect motion in the camera's FOV. Then, to remove false positives, a Bayesian skin classifier is applied to the detected motion regions to discriminate skin from non-skin regions. Finally, face detection based on the Viola-Jones detector is performed on the detected skin regions, independently of face size and position within the image. If a face is detected, tracking of the person's upper body is launched.
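
    The initialization stage described above (motion detection while the camera is static, skin filtering, then face detection) can be sketched with standard OpenCV components. The snippet below is not the thesis code: a simple HSV threshold stands in for the Bayesian skin classifier, OpenCV's Haar cascade stands in for the Viola-Jones detector, and the parameter values are illustrative only.

        # Minimal sketch of the initialization pipeline while the camera is static:
        # background subtraction -> skin filtering -> face detection.
        import cv2

        bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
        face_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def detect_initial_face(frame):
            # 1. Motion detection by background subtraction (camera assumed stationary).
            motion_mask = bg_subtractor.apply(frame)

            # 2. Skin filtering inside the moving regions (stand-in for the Bayesian
            #    skin classifier; HSV thresholds are illustrative assumptions).
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            skin_mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
            candidate = cv2.bitwise_and(motion_mask, skin_mask)

            # 3. Viola-Jones face detection restricted to the candidate regions.
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            gray = cv2.bitwise_and(gray, gray, mask=candidate)
            faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            return faces[0] if len(faces) else None  # (x, y, w, h) of the first face

        # Usage: feed frames from the stationary camera until a face is found,
        # then hand the bounding box to the upper-body tracker.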

    Cooperative object tracking with multiple PTZ cameras

    Research in visual surveillance systems is shifting from using few stationary, passive cameras to employing large heterogeneous sensor networks. One promising type of sensor in particular is the Pan-Tilt-Zoom (PTZ) camera, which can cover a potentially much larger area than passive cameras, and can obtain much higher resolution imagery through its zoom capability. In this paper, a system that can track objects with multiple calibrated PTZ cameras in a cooperative fashion is presented. Tracking and calibration results are combined with several image processing techniques in a statistical segmentation framework, through which the cameras can hand over targets to each other. A prototype system is presented that operates in real time.
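
    The handover between cameras rests on shared calibration: once one camera has an estimate of the target's 3D position, any other calibrated PTZ camera can be commanded to point at it. The sketch below shows only that geometric step, not the paper's statistical segmentation framework; the camera position, home orientation, and target coordinates are assumed values.

        # Minimal sketch: compute the pan/tilt command that points a second calibrated
        # PTZ camera at a target whose 3D position was estimated by the camera
        # currently tracking it. Calibration values below are assumptions.
        import numpy as np

        def pan_tilt_to_target(target_w, cam_pos_w, R_home):
            """Return (pan, tilt) in degrees aiming the camera's optical axis at the target.

            target_w  : 3D target position in world coordinates
            cam_pos_w : camera center in world coordinates
            R_home    : rotation from world to the camera's home (pan=0, tilt=0) frame
            """
            ray = R_home @ (np.asarray(target_w) - np.asarray(cam_pos_w))
            x, y, z = ray  # camera frame convention: x right, y down, z forward
            pan = np.degrees(np.arctan2(x, z))
            tilt = np.degrees(np.arctan2(-y, np.hypot(x, z)))
            return pan, tilt

        # Handover example: camera A triangulated the target at (4, 1.5, 10) m;
        # camera B, mounted 2 m to the right with the same home orientation, takes over.
        pan, tilt = pan_tilt_to_target([4.0, 1.5, 10.0], [2.0, 0.0, 0.0], np.eye(3))
        print(f"pan={pan:.1f} deg, tilt={tilt:.1f} deg")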

    Arquitectura para la gestión y coordinación de sistemas multisensor

    Nowadays it is common to find environments with large sets of heterogeneous, spatially distributed sensors. These range from surveillance systems that deploy multiple sensors such as cameras or location sensors, to industrial process monitoring, where each process is supervised to ensure correct system operation. We can even speak of more modern terms such as Smart Cities, which allow a city's resources to be optimized through constant monitoring. In all of these deployments it is common to find many sensors perceiving information from the environment. These environments are undoubtedly useful in each of their fields, and as technology advances we can improve them by integrating more and better sensors. This is useful because it allows the different entities or environmental parameters to be monitored more accurately. However, it can become a problem when it comes to managing and coordinating different tasks over the environment, mainly because of the large amount of generated information that has to be analyzed. In this sense, data fusion becomes an indispensable ally for improving analysis and decision-making processes. Moreover, in these environments we can find sensors or actuators that require some kind of management, such as steering a PTZ camera to monitor regions or entities of interest. This management can be done manually by a human operator, but it becomes a complex task when there are many similar sensors requiring coordinated management.
    This thesis tackles the problem of multi-sensor management and coordination from the point of view of a multi-agent system. A general multi-agent architecture is proposed that can be applied to different use cases, and the design is evaluated in two environments. The first consists of a multi-camera surveillance system, where the multi-agent system must achieve autonomous control of PTZ cameras to monitor different entities of interest. The second relies on a maritime surveillance system, where, in addition to the cameras, other sensor types such as radar or AIS stations are introduced. Experiments were carried out to evaluate the multi-agent system. The experiments with the multi-camera application allowed the architecture to be evaluated from the point of view of its integration in a real environment, while the maritime application, developed in a simulated environment, allowed us to evaluate the suitability of the architecture for different sensors and entity information. In both cases we observed the usefulness of the developed multi-agent system and its suitability for this kind of environment, which by its nature is inherently distributed.
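
    One way to picture the camera-to-entity coordination handled by such a multi-agent system is a simple bidding scheme in which each PTZ-camera agent bids for the entities it can observe and a coordinator assigns cameras greedily. The sketch below is an illustrative assumption, not the architecture developed in the thesis; the agent names and the distance-based utility are made up.

        # Minimal sketch of agent-based camera-to-entity assignment: each camera agent
        # bids on visible entities; a coordinator assigns one camera per entity greedily.
        from dataclasses import dataclass
        import math

        @dataclass
        class CameraAgent:
            name: str
            position: tuple   # (x, y) position of the camera
            max_range: float  # beyond this the agent does not bid

            def bid(self, target_pos):
                """Higher is better; None means the target is out of range."""
                d = math.dist(self.position, target_pos)
                return None if d > self.max_range else 1.0 / (1.0 + d)

        def assign(agents, targets):
            """Greedy coordinator: best remaining (agent, target) pair first."""
            bids = [(a.bid(pos), a, tid) for a in agents for tid, pos in targets.items()]
            bids = sorted((b for b in bids if b[0] is not None), key=lambda b: -b[0])
            assignment, used = {}, set()
            for _, agent, tid in bids:
                if tid not in assignment and agent.name not in used:
                    assignment[tid] = agent.name
                    used.add(agent.name)
            return assignment

        agents = [CameraAgent("ptz-1", (0, 0), 60), CameraAgent("ptz-2", (50, 0), 60)]
        targets = {"boat-a": (10, 5), "boat-b": (45, 20)}
        print(assign(agents, targets))  # {'boat-a': 'ptz-1', 'boat-b': 'ptz-2'}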