
    Long Range Automated Persistent Surveillance

    This dissertation addresses long-range automated persistent surveillance, with focus on three topics: sensor planning, size-preserving tracking, and high-magnification imaging. In sensor planning, sufficient overlap between adjacent cameras' fields of view should be reserved so that camera handoff can be executed successfully before the object of interest becomes unidentifiable or untraceable. We design a sensor planning algorithm that not only maximizes coverage but also ensures uniform and sufficient overlap between cameras' fields of view for an optimal handoff success rate. The algorithm works for environments with multiple dynamic targets using different types of cameras. Significantly improved handoff success rates are demonstrated via experiments using floor plans of various scales. Size-preserving tracking automatically adjusts the camera's zoom for a consistent view of the object of interest. Target scale estimation is carried out based on the paraperspective projection model, which compensates for the center offset and accounts for system latency and tracking errors. A computationally efficient foreground segmentation strategy, 3D affine shapes, is proposed; it features a direct, real-time implementation and improved flexibility in accommodating the target's 3D motion, including off-plane rotations. The effectiveness of the scale estimation and foreground segmentation algorithms is validated via both offline and real-time tracking of pedestrians at various resolution levels. Face image quality assessment and enhancement compensate for the degradation in face recognition rates caused by high system magnification and long observation distances. A class of adaptive sharpness measures is proposed to evaluate and predict this degradation. A wavelet-based enhancement algorithm with automated frame selection is developed and proves effective, considerably elevating the face recognition rate for severely blurred long-range face images.
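
    The size-preserving zoom step can be sketched in a few lines. The following is a minimal illustration, assuming a pinhole model in which the target's image size scales linearly with focal length; the function name and values are hypothetical and not taken from the dissertation.

        def zoom_for_constant_size(f_current, h_measured_px, h_desired_px,
                                   f_min=4.0, f_max=120.0):
            """Return a new focal length (mm) that keeps the tracked target
            at a desired pixel height, assuming image size scales linearly
            with focal length (pinhole model). Illustrative sketch only."""
            if h_measured_px <= 0:
                return f_current  # no valid measurement; hold the current zoom
            f_new = f_current * (h_desired_px / h_measured_px)
            return max(f_min, min(f_max, f_new))  # respect the lens's zoom range

        # Example: the target appears 80 px tall but 120 px is desired at f = 10 mm
        print(zoom_for_constant_size(10.0, 80.0, 120.0))  # -> 15.0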

    Real-Time, Multiple Pan/Tilt/Zoom Computer Vision Tracking and 3D Positioning System for Unmanned Aerial System Metrology

    The study of the structural characteristics of Unmanned Aerial Systems (UASs) continues to be an important field of research for developing state-of-the-art nano/micro systems. A metrology system based on computer vision (CV) tracking and 3D point extraction would provide an avenue for advancing these theoretical developments. This work provides a portable, scalable system capable of real-time tracking, zooming, and 3D position estimation of a UAS using multiple cameras. Current state-of-the-art photogrammetry systems use retro-reflective markers or single-point lasers to obtain object poses and/or positions over time; a CV pan/tilt/zoom (PTZ) system has the potential to circumvent their limitations. The system developed in this paper exploits parallel processing and the GPU for CV tracking, using optical flow and known camera motion, in order to capture a moving object with two PTU cameras. The parallel-processing technique developed in this work is versatile, allowing other CV methods to be tested with a PTZ system under known camera motion. Using the known camera poses, the object's 3D position is estimated, and focal lengths are computed so that the object fills the image to a desired extent. The system is tested against truth data obtained using an industrial system.
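
    Estimating a 3D position from two cameras with known poses can be illustrated with a standard two-ray triangulation: take the midpoint of the shortest segment between the two viewing rays. This is a generic construction offered as a sketch under that assumption, not the paper's exact pipeline; the variable names are hypothetical.

        import numpy as np

        def triangulate(c1, d1, c2, d2):
            """Estimate a 3D point from two camera rays (center c, direction d)
            as the midpoint of the shortest segment between the rays.
            Degenerate (singular) if the rays are parallel."""
            d1 = d1 / np.linalg.norm(d1)
            d2 = d2 / np.linalg.norm(d2)
            # Solve for t1, t2 minimizing |(c1 + t1*d1) - (c2 + t2*d2)|
            A = np.array([[d1 @ d1, -(d1 @ d2)],
                          [d1 @ d2, -(d2 @ d2)]])
            b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
            t1, t2 = np.linalg.solve(A, b)
            p1, p2 = c1 + t1 * d1, c2 + t2 * d2
            return (p1 + p2) / 2.0

        # Two cameras 2 m apart, both looking at a point at (1, 0, 5)
        c1, c2 = np.array([0., 0., 0.]), np.array([2., 0., 0.])
        target = np.array([1., 0., 5.])
        print(triangulate(c1, target - c1, c2, target - c2))  # ~ [1. 0. 5.]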

    Improving Indoor Security Surveillance by Fusing Data from BIM, UWB and Video

    Indoor physical security, as a perpetual and multi-layered phenomenon, is a time- and labor-intensive task. Various technologies have been leveraged to develop automatic access control, intrusion detection, and video monitoring systems. Video surveillance has been significantly enhanced by the advent of Pan-Tilt-Zoom (PTZ) cameras and advanced video processing, which together enable effective monitoring and recording. The development of ubiquitous object identification and tracking technologies makes automatic access control and tracking achievable, and intrusion detection has become possible by deploying networks of motion sensors that alert on abnormal behavior. However, each of these technologies has its own limitations. This thesis presents a fully automated indoor security solution that leverages an Ultra-wideband (UWB) Real-Time Locating System (RTLS), PTZ surveillance cameras, and a Building Information Model (BIM) as three sources of environmental data. Authorized persons carry UWB tags, so unauthorized intruders are identified through the mismatch between the detected tag owners and the persons detected in the video, upon which an intrusion alert is generated. PTZ cameras allow for wide-area monitoring and motion-based recording, and the BIM is used for space modeling and for mapping the locations of intruders in the building. Fusing UWB tracking, video, and spatial data can automate the entire security procedure, from access control to intrusion alerting and behavior monitoring. Other benefits of the proposed method include more complex query processing and interoperability with other BIM-based solutions. A prototype system is implemented that demonstrates the feasibility of the proposed method.
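
    The tag/video mismatch lends itself to a compact illustration. The sketch below flags an intrusion whenever a zone's video-based person count exceeds its detected tag count; it is a deliberately simplified, count-based stand-in for the thesis's fusion procedure, with hypothetical names and data.

        def find_intrusions(tag_counts, person_counts):
            """Return zones where video shows more people than authorized
            UWB tags were detected there (count-based mismatch)."""
            return [zone for zone, seen in person_counts.items()
                    if seen > tag_counts.get(zone, 0)]

        # 2 tags but 3 people in the lobby -> intrusion alert for that zone
        alerts = find_intrusions({"lobby": 2, "lab": 1}, {"lobby": 3, "lab": 1})
        print(alerts)  # -> ['lobby']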

    Integrating Multicamera Surveillance Systems into Multiagent Location Systems

    Proceedings of: Workshop on User-Centric Technologies and Applications (CONTEXTS 2011), Salamanca, April 6-8, 2011.
    Users are increasingly demanding personalized services based on their context, one of the key features of that context being the user's position. There is a wide range of possible solutions to the positioning problem, and different situations may impose different accuracy requirements. This paper approaches the issue from the point of view of an existing multicamera surveillance system that needs to be integrated into a multiagent positioning system, and includes a tracking example using the presented architecture. This work was supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, CAM CONTEXTS (S2009/TIC-1485) and DPS2008-07029-C02-02.

    Developing a person guidance module for hospital robots

    This dissertation describes the design and implementation of the Person Guidance Module (PGM), which enables the IWARD (Intelligent Robot Swarm for Attendance, Recognition, Cleaning and Delivery) base robot to offer a route guidance service to patients and visitors inside a hospital. A common problem in large hospital buildings today is that people unfamiliar with the premises cannot find their way around. Although a variety of guide robots currently exist on the market and offer a wide range of guidance and related activities, they do not fit into the modular concept of the IWARD project. The PGM features a robust, non-hierarchical sensor fusion approach combining active RFID, stereo vision and a Cricket mote sensor for guiding a patient to the X-ray room, or a visitor to a patient's ward, in every possible scenario in a complex, dynamic and crowded hospital environment. Moreover, the system automatically adjusts the robot's speed to the follower's pace for physical comfort. Furthermore, the module performs these tasks in any unconstrained environment solely from the robot's onboard perceptual resources, in order to limit hardware installation costs and the required indoor infrastructure. A comparably comprehensive solution on a single platform has remained elusive in the existing literature. The finished module can be connected to any IWARD base robot using quick-change mechanical connections and standard electrical connections. The PGM module box is equipped with a Gumstix embedded computer for all module computing, which powers up automatically once the module box is inserted into the robot. In line with the general software architecture of the IWARD project, all software modules are developed as Orca2 components and cross-compiled for the Gumstix's XScale processor. To support standardized communication between different software components, the Internet Communications Engine (Ice) is used as middleware. Additionally, plug-and-play capabilities have been developed and incorporated so that the swarm system is aware at all times of which robot is equipped with the PGM. Finally, in several field trials in hospital environments, the person guidance module has shown its suitability for a challenging real-world application as well as the necessary user acceptance.
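
    The pace-adaptive speed control mentioned above can be sketched as a simple distance-based policy: slow down as the follower falls behind and stop to wait beyond a threshold. The function and thresholds below are illustrative assumptions, not the IWARD module's actual values.

        def guidance_speed(follower_distance_m, v_max=1.0, d_stop=3.0):
            """Scale the guide robot's speed with the follower's distance:
            full speed when the follower keeps up, slowing linearly as the
            gap grows, and stopping to wait beyond d_stop metres."""
            if follower_distance_m >= d_stop:
                return 0.0  # wait for the follower to catch up
            return v_max * (1.0 - follower_distance_m / d_stop)

        print(guidance_speed(0.5))  # follower close behind -> ~0.83 m/s
        print(guidance_speed(3.5))  # follower lost -> 0.0, robot waits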

    Indoor pedestrian dead reckoning calibration by visual tracking and map information

    Pedestrian Dead Reckoning (PDR) systems are becoming increasingly attractive in the indoor positioning market, mainly owing to the availability of cheap, lightweight Micro-Electro-Mechanical Systems (MEMS) on smartphones and the reduced need for additional indoor infrastructure. However, PDR still suffers from drift accumulation and needs support from external positioning systems. Vision-aided inertial navigation, one possible solution to that problem, has become very popular in indoor localization, with better performance than a standalone PDR system. In the literature, however, previous studies use a fixed platform, and the visual tracking relies on feature-extraction-based methods. This paper instead contributes a distributed implementation of the positioning system and uses deep learning for visual tracking. Meanwhile, as both inertial navigation and the optical system can only provide relative positioning information, this paper also contributes a method to integrate a digital map with real geographical coordinates to supply absolute locations. The hybrid system has been tested on the two common smartphone operating systems, iOS and Android, using corresponding data collection apps, in order to assess the robustness of the method. Two different calibration approaches are used: time synchronization of positions, and heading calibration based on time steps. According to the results, the localization information collected on both operating systems is significantly improved after integration with the visual tracking data.
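
    The position-based calibration can be pictured as re-anchoring the PDR track at each time-synchronized visual fix. The sketch below subtracts the drift observed at a fix from all subsequent PDR positions; it is a simplified stand-in for the paper's fusion method, with hypothetical names and data.

        def calibrate_pdr(pdr_positions, visual_fixes):
            """Re-anchor a drifting PDR trajectory: whenever a visual fix
            is available at the same timestamp, estimate the drift there
            and apply the correction to all subsequent PDR positions."""
            corrected, offset = [], (0.0, 0.0)
            for t, (x, y) in pdr_positions:
                if t in visual_fixes:              # absolute fix at time t
                    vx, vy = visual_fixes[t]
                    offset = (vx - x, vy - y)      # current drift estimate
                corrected.append((t, (x + offset[0], y + offset[1])))
            return corrected

        # Drifting PDR track corrected by one visual fix at t = 2
        pdr = [(0, (0.0, 0.0)), (1, (1.0, 0.1)), (2, (2.0, 0.3)), (3, (3.0, 0.5))]
        fixes = {2: (2.0, 0.0)}
        print(calibrate_pdr(pdr, fixes))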

    Towards high-accuracy augmented reality GIS for architecture and geo-engineering

    Architecture and geo-engineering are application domains where professionals need to make critical decisions and require high-precision tools to assist them in their daily decision-making. Augmented Reality (AR) shows great potential for these professionals by easing the association between the abstract 2D drawings and 3D models representing the infrastructure under review and the actual perception of these objects in reality. AR-based visualization tools overlay the virtual models on reality in the user's field of view. However, the architecture and geo-engineering context requires high-accuracy, real-time positioning from these AR systems. This is not a trivial task, especially in urban environments or on construction sites, where the surroundings may be crowded and highly dynamic. This project investigates the accuracy requirements of mobile AR GIS as well as the main challenges to address when tackling high-accuracy AR based on omnidirectional panoramas.

    Vision-based guidance of a walking robot in a structured environment

    Locomotion of a biped robot in a scenario with obstacles requires a high degree of coordination between perception and walking. This article presents the key ideas of a vision-based strategy for the guidance of walking robots in structured scenarios. Computer vision techniques are employed for reactive adaptation of step sequences, allowing a robot to step over, step onto, or walk around obstacles. Highly accurate feedback information is achieved by a combination of line-based scene analysis and real-time feature tracking. The proposed vision-based approach was evaluated in experiments with a real humanoid robot.
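
    The reactive adaptation of step sequences amounts to a decision over the perceived obstacle geometry. The rule below, with made-up thresholds, only illustrates the kind of choice involved (step over, step onto, or walk around); it is not the paper's actual planner.

        def adapt_step(obstacle_height_m, obstacle_depth_m,
                       max_step_over=0.05, max_step_onto=0.10):
            """Choose a walking action for a detected obstacle: step over
            low, shallow obstacles, step onto ones of climbable height,
            otherwise walk around. Thresholds are illustrative only."""
            if obstacle_height_m <= max_step_over and obstacle_depth_m <= 0.15:
                return "step_over"
            if obstacle_height_m <= max_step_onto:
                return "step_onto"
            return "walk_around"

        print(adapt_step(0.03, 0.10))  # -> step_over
        print(adapt_step(0.08, 0.30))  # -> step_onto
        print(adapt_step(0.25, 0.30))  # -> walk_around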