
    Improving Indoor Security Surveillance by Fusing Data from BIM, UWB and Video

    Indoor physical security is a perpetual, multi-layered, time-intensive and labor-consuming task. Various technologies have been leveraged to develop automatic access control, intrusion detection, and video monitoring systems. Video surveillance has been significantly enhanced by the advent of Pan-Tilt-Zoom (PTZ) cameras and advanced video processing, which together enable effective monitoring and recording. The development of ubiquitous object identification and tracking technologies provides the opportunity to automate access control and tracking. Intrusion detection has also become possible by deploying networks of motion sensors that alert on abnormal behaviors. However, each of these technologies has its own limitations. This thesis presents a fully automated indoor security solution that leverages an Ultra-wideband (UWB) Real-Time Locating System (RTLS), PTZ surveillance cameras and a Building Information Model (BIM) as three sources of environmental data. Authorized persons carry UWB tags, so unauthorized intruders are identified by the mismatch between the detected tag owners and the persons detected in the video, and an intrusion alert is generated. PTZ cameras allow for wide-area monitoring and motion-based recording. Furthermore, the BIM is used for space modeling and for mapping the locations of intruders in the building. Fusing UWB tracking, video and spatial data can automate the entire security procedure, from access control to intrusion alerting and behavior monitoring. Other benefits of the proposed method include more complex query processing and interoperability with other BIM-based solutions. A prototype system is implemented that demonstrates the feasibility of the proposed method.
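The core fusion rule described above can be sketched as a simple per-zone check: an intrusion alert is raised when more people are detected in the video than there are authorized UWB tags localized in the same zone. This is a minimal illustration of the idea only, not the thesis implementation; the function name and zone representation are assumptions.

```python
# Hypothetical sketch of the UWB/video mismatch rule: more people visible in
# video than authorized tags localized in the zone implies an intruder.

def check_zone(tag_ids: set, video_person_count: int) -> bool:
    """Return True if an intrusion alert should be raised for a zone."""
    # Every person seen in video should be matched by an authorized tag;
    # more people than tags implies at least one untagged, unauthorized person.
    return video_person_count > len(tag_ids)

# Two authorized tags in the zone, three people detected in video -> alert.
alert = check_zone({"tag-01", "tag-02"}, video_person_count=3)
```

In a full system the comparison would be done per BIM-defined space, so the BIM also supplies the zone geometry used to decide which tags and detections to compare.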

    Next generation flight management systems for manned and unmanned aircraft operations - automated separation assurance and collision avoidance functionalities

    The demand for improved safety, efficiency and dynamic demand-capacity balancing, driven by the rapid growth of the aviation sector and the increasing proliferation of Unmanned Aircraft Systems (UAS) in different classes of airspace, poses significant challenges to avionics system developers. The design of Next Generation Flight Management Systems (NG-FMS) for manned and unmanned aircraft operations is performed by addressing the challenges identified by various Air Traffic Management (ATM) modernisation programmes and UAS Traffic Management (UTM) system initiatives. In particular, this research focuses on introducing automated Separation Assurance and Collision Avoidance (SA&CA) functionalities (mathematical models) in the NG-FMS. The innovative NG-FMS is also capable of supporting automated negotiation and validation of 4-Dimensional Trajectory (4DT) intents in coordination with novel ground-based Next Generation Air Traffic Management (NG-ATM) systems. One of the key research contributions is the development of a unified method for cooperative and non-cooperative SA&CA, addressing the technical and regulatory challenges of manned and unmanned aircraft coexistence in all classes of airspace. Analytical models are presented and validated to compute the overall avoidance volume in the airspace surrounding a tracked object, supporting automated SA&CA functionalities. The scientific basis of this approach is to assess real-time measurements and associated uncertainties affecting navigation states (of the host aircraft platform), tracking observables (of the static or moving object) and platform dynamics, and to translate them into unified range and bearing uncertainty descriptors. The SA&CA unified approach provides an innovative analytical framework to generate high-fidelity dynamic geo-fences suitable for integration in the NG-FMS and in ATM/UTM/defence decision support tools.
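The idea of translating navigation and tracking uncertainties into a unified range descriptor can be illustrated with a toy calculation: combine the error contributions by root-sum-square and inflate a nominal separation by a k-sigma margin. This is a hedged sketch under assumed values, not the thesis's avoidance-volume model, which accounts for bearing uncertainty and platform dynamics as well.

```python
import math

# Illustrative sketch (an assumption, not the NG-FMS model): inflate a nominal
# separation distance by k standard deviations of the combined uncertainty
# from navigation error and tracking error.

def avoidance_range(nominal_separation_m: float,
                    sigma_nav_m: float,
                    sigma_track_m: float,
                    k: float = 3.0) -> float:
    """Nominal separation inflated by k-sigma of the combined uncertainty."""
    # Independent error sources combine by root-sum-square.
    sigma_total = math.hypot(sigma_nav_m, sigma_track_m)
    return nominal_separation_m + k * sigma_total

# e.g. 500 m nominal separation, 20 m navigation error, 15 m tracking error
r = avoidance_range(500.0, 20.0, 15.0)  # 500 + 3 * 25 = 575.0 m
```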

    Video analytics for security systems

    This study was conducted to develop robust event detection and object tracking algorithms that can be implemented in real-time video surveillance applications. The aim of the research has been to produce an automated video surveillance system that is able to detect and report potential security risks with minimum human intervention. Since the algorithms are designed to be implemented in real-life scenarios, they must be able to cope with strong illumination changes and occlusions. The thesis is divided into two major sections. The first section deals with event detection and edge-based tracking, while the second section describes colour measurement methods developed to track objects in crowded environments. The event detection methods presented in the thesis mainly focus on the detection and tracking of objects that become stationary in the scene. Objects such as baggage left in public places or vehicles parked illegally can pose a serious security threat. A new pixel-based classification technique has been developed to detect objects of this type in cluttered scenes. Once detected, edge-based object descriptors are obtained and stored as templates for tracking purposes. The consistency of these descriptors is examined using an adaptive edge-orientation-based technique. Objects are tracked, and alarm events are generated if the objects are found to be stationary in the scene after a certain period of time. To evaluate the full capabilities of the pixel-based classification and adaptive edge-orientation-based tracking methods, the model is tested using several hours of real-life video surveillance scenarios recorded at different locations and times of day, drawn from our own and publicly available databases (i-LIDS, PETS, MIT, ViSOR). The performance results demonstrate that the combination of pixel-based classification and adaptive edge-orientation-based tracking achieved a success rate of over 95%.
    The results obtained also show better detection and tracking performance when compared with other available state-of-the-art methods. In the second part of the thesis, colour-based techniques are used to track objects in crowded video sequences under severe occlusion. A novel Adaptive Sample Count Particle Filter (ASCPF) technique is presented that reduces the computational cost of the standard Sample Importance Resampling Particle Filter by up to 80%. An appropriate particle range is obtained for each object, and the concept of adaptive samples is introduced to keep the computational cost down. The objective is to keep the number of particles to a minimum and only to increase them up to the maximum as and when required. Variable standard deviation values for state vector elements have been exploited to cope with heavy occlusion. The technique has been tested on different video surveillance scenarios with variable object motion, strong occlusion and changes in object scale. Experimental results show that the proposed method not only tracks objects with accuracy comparable to existing particle filter techniques but is also up to five times faster. Tracking objects in a multi-camera environment is discussed in the final part of the thesis. The ASCPF technique is deployed within a multi-camera environment to track objects across different camera views. Such environments can pose difficult challenges, such as changes in object scale and colour features as objects move from one camera view to another. Variable standard deviation values of the ASCPF have been utilised to cope with sudden colour and scale changes. As an object moves from one scene to another, the number of particles, together with the spread value, is increased to the maximum to reduce any effects of scale and colour change. Promising results are obtained when the ASCPF technique is tested on live feeds from four different camera views.
    It was found that the ASCPF method not only successfully tracked the moving object across different views but also maintained a real-time frame rate due to its reduced computational cost, indicating that the method is a potentially practical solution for multi-camera tracking applications.
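The adaptive-sample-count idea above can be sketched as a simple controller: keep the particle count near a minimum while the track is confident, and grow it toward a maximum when confidence drops (e.g. under occlusion). The thresholds, the doubling/halving schedule and the confidence measure here are illustrative assumptions, not the published ASCPF algorithm.

```python
# Hedged sketch of an adaptive particle-count rule: spend particles only when
# tracking confidence is low, to keep average computational cost down.

def adapt_particle_count(n: int, confidence: float,
                         n_min: int = 50, n_max: int = 500) -> int:
    """Grow the particle count under low confidence, shrink it when stable."""
    if confidence < 0.3:       # heavy occlusion: spend more particles
        n = min(n * 2, n_max)
    elif confidence > 0.8:     # stable track: save computation
        n = max(n // 2, n_min)
    return n                   # mid-range confidence: leave n unchanged
```

In a particle filter loop this rule would run once per frame, with the filter resampling to the new count before propagating particles.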

    Development of situation recognition, environment monitoring and patient condition monitoring service modules for hospital robots

    An aging society and economic pressure have caused an increase in the patient-to-staff ratio, leading to a reduction in healthcare quality. To combat these deficiencies in the delivery of patient healthcare, the European Commission, under the FP6 scheme, approved the financing of a research project for the development of an Intelligent Robot Swarm for Attendance, Recognition, Cleaning and Delivery (iWARD). Each iWARD robot contained a mobile, self-navigating platform and several modules attached to it to perform specific tasks. As part of the iWARD project, the research described in this thesis aims to develop hospital robot modules able to perform the tasks of surveillance and patient monitoring in a hospital environment for four scenarios: intruder detection, patient behavioural analysis, patient physical condition monitoring, and environment monitoring. Since the intruder detection and patient behavioural analysis scenarios require the same equipment, they are combined into one common physical module called the situation recognition module. The other two scenarios are served by separate modules: the environment monitoring module and the patient condition monitoring module. The situation recognition module uses non-intrusive machine-vision-based concepts. The system includes an RGB video camera and a 3D laser sensor, which monitor the environment in order to detect an intruder or a patient lying on the floor. The system employs various image-processing and sensor fusion techniques. The environment monitoring module monitors several parameters of the hospital environment: temperature, humidity and smoke. The patient condition monitoring system remotely measures body conditions such as body temperature, heart rate and respiratory rate using sensors attached to the patient's body.
    The system algorithms and module software are implemented in C/C++ using the OpenCV image analysis and processing library, and have been successfully tested on the Linux (Ubuntu) platform. The outcome of this research makes a significant contribution to robotics applications in the hospital environment.
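The environment-monitoring logic described above reduces to comparing sensed values against alarm thresholds. The sketch below illustrates that pattern; the threshold values and parameter names are assumptions for illustration, not iWARD specifications (and the iWARD modules themselves are implemented in C/C++).

```python
# Hedged sketch of threshold-based environment monitoring: flag any parameter
# whose reading falls outside its acceptable (min, max) range.

THRESHOLDS = {
    "temperature_c": (15.0, 30.0),   # acceptable (min, max) - assumed values
    "humidity_pct":  (30.0, 60.0),
    "smoke_ppm":     (0.0, 50.0),
}

def environment_alarms(readings: dict) -> list:
    """Return the names of parameters outside their acceptable range."""
    alarms = []
    for name, value in readings.items():
        lo, hi = THRESHOLDS[name]
        if not (lo <= value <= hi):
            alarms.append(name)
    return alarms
```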

    Localisation and tracking of people using distributed UWB sensors

    Indoor localisation and tracking of people in a non-cooperative manner is important in many surveillance and rescue applications. Ultra-wideband (UWB) radar technology is promising for through-wall detection of objects at short to medium distances due to its high temporal resolution and penetration capability. This thesis tackles the problem of localisation of people in indoor scenarios using UWB sensors. It follows the process from measurement acquisition, multiple target detection and range estimation to multiple target localisation and tracking. Due to the weak reflection of people compared to the rest of the environment, a background subtraction method is initially used for the detection of people. Subsequently, a constant false alarm rate method is applied for detection and range estimation of multiple persons.
    For multiple target localisation using a single UWB sensor, an association method is developed to assign target range estimates to the correct targets. In the presence of multiple targets, a target closer to the sensor can shadow parts of the environment, hindering the detection of other targets. A concept for a distributed UWB sensor network is presented, aiming to extend the field of view of the system by using several sensors with different fields of view. A real-time operational prototype has been developed, taking into consideration sensor cooperation and synchronisation aspects as well as fusion of the information provided by all sensors. Sensor data may be erroneous due to sensor bias and time offset, and incorrect measurements and measurement noise influence the accuracy of the estimation results. Additional insight into the target states can be gained by exploiting temporal information. A multiple person tracking framework is developed based on the probability hypothesis density filter, and the differences in system performance are highlighted with respect to the information provided by the sensors, i.e. fusion of location information versus fusion of range information. The information that a target should have been detected when it was not, due to shadowing induced by other targets, is described as a dynamic occlusion probability. The dynamic occlusion probability is incorporated into the tracking framework, allowing fewer sensors to be used while improving tracker performance in the scenario. The method selection and development has taken into consideration real-time application requirements for unknown scenarios at every step. Each investigated aspect of multiple person localisation within the scope of this thesis has been verified using simulations and measurements in a realistic environment using M-sequence UWB sensors.
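The detection step described above — background subtraction followed by a constant false alarm rate (CFAR) test — can be illustrated with a toy cell-averaging CFAR over a range profile. The window sizes and threshold factor below are assumptions for illustration, not the thesis's parameters.

```python
# Illustrative cell-averaging CFAR detector on a background-subtracted range
# profile: a cell is a detection if it exceeds a factor times the local noise
# level, estimated from surrounding training cells (guard cells excluded).

def ca_cfar(signal: list, guard: int = 2, train: int = 8,
            factor: float = 3.0) -> list:
    """Return indices whose magnitude exceeds factor * local noise average."""
    detections = []
    n = len(signal)
    for i in range(n):
        # Training cells on both sides of i, excluding the guard region.
        cells = [signal[j]
                 for j in range(max(0, i - guard - train),
                                min(n, i + guard + train + 1))
                 if abs(j - i) > guard]
        noise = sum(cells) / len(cells)
        if signal[i] > factor * noise:
            detections.append(i)
    return detections

# A flat noise floor with one strong person echo at range bin 25.
profile = [1.0] * 50
profile[25] = 20.0
hits = ca_cfar(profile)  # -> [25]
```

Because the threshold adapts to the local noise estimate, the false alarm rate stays roughly constant even when clutter levels vary along the profile.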

    Coopération de réseaux de caméras ambiantes et de vision embarquée sur robot mobile pour la surveillance de lieux publics

    This thesis deals with the detection and tracking of people in a surveilled public place. It proposes to include a mobile robot in classical surveillance systems that are based on environment-fixed sensors. The mobile robot brings two important benefits: (1) it acts as a mobile sensor with perception capabilities, and (2) it can be used as a means of action for service provision. In this context, as a first contribution, the thesis presents an optimized visual people detector based on Binary Integer Programming that explicitly takes the stipulated computational demand into consideration. A pool of homogeneous and heterogeneous features is investigated under this framework, thoroughly tested and compared with state-of-the-art detectors. The experimental results clearly highlight the improvements that detectors learned with this framework bring, including their effect on the robot's reactivity during on-line missions. As a second contribution, the thesis proposes and validates a cooperative framework that fuses information from wall-mounted cameras and sensors on the mobile robot to better track people in the vicinity. The same framework is also validated by fusing data from the different sensors on the mobile robot in the absence of external perception. Finally, we demonstrate the improvements brought by the developed perceptual modalities by deploying them on our robotic platform, illustrating the robot's ability to perceive people in public areas and respect their personal space during navigation.
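Fusing a person-position estimate from a wall-mounted camera with one from the robot's on-board sensor can be sketched, in its simplest form, as inverse-variance weighting of two independent estimates. This is a minimal one-dimensional illustration of the fusion principle, not the thesis's cooperative framework.

```python
# Minimal sketch: inverse-variance weighted fusion of two independent 1-D
# position estimates. The more certain estimate (smaller variance) dominates,
# and the fused variance is smaller than either input variance.

def fuse_1d(x_cam: float, var_cam: float,
            x_robot: float, var_robot: float):
    """Fuse two independent estimates; return (fused position, fused variance)."""
    w_cam = 1.0 / var_cam
    w_rob = 1.0 / var_robot
    x = (w_cam * x_cam + w_rob * x_robot) / (w_cam + w_rob)
    var = 1.0 / (w_cam + w_rob)
    return x, var

# Equally confident camera and robot estimates average out.
x, var = fuse_1d(2.0, 1.0, 4.0, 1.0)  # -> (3.0, 0.5)
```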

    Bayesian Estimation-Based Pedestrian Tracking in Microcells

    We consider a pedestrian tracking system in which sensor nodes are placed only at specific points, so that the monitored region is divided into multiple smaller regions referred to as microcells. In the proposed pedestrian tracking system, sensor nodes composed of pairs of binary sensors detect pedestrian arrival and departure events. In this paper, we focus on pedestrian tracking within microcells. First, we investigate actual pedestrian trajectories in a microcell on the basis of observations from video sequences, and use them to construct a pedestrian mobility model. Next, we propose a method for pedestrian tracking in microcells based on this mobility model. In the proposed method, we extend Bayesian estimation to account for time-series information when estimating the correspondence between pedestrian arrival and departure events. Through simulations, we show that the tracking success ratio of the proposed method is 35.8% higher than that of a combinatorial optimization-based tracking method.
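The core estimation problem above — deciding which earlier arrival event a departure event corresponds to — can be sketched with Bayes' rule: a prior from a mobility model multiplied by a likelihood of the observed dwell time. The mobility prior and the Gaussian-shaped dwell-time likelihood below are illustrative assumptions, not the paper's measured model.

```python
import math

# Hedged sketch of Bayesian arrival/departure matching in a microcell:
# posterior(arrival | departure) ∝ prior(arrival) * likelihood(dwell time).

def match_departure(arrivals: list, departure_time: float,
                    prior: dict, likelihood) -> str:
    """Return the arrival id with the highest posterior probability."""
    posts = {}
    for arrival_id, t_arrive in arrivals:
        dwell = departure_time - t_arrive
        posts[arrival_id] = prior[arrival_id] * likelihood(dwell)
    total = sum(posts.values())
    # Normalize and pick the maximum a posteriori arrival.
    return max(posts, key=lambda k: posts[k] / total)

def dwell_likelihood(d, mean=5.0, sd=1.0):
    """Assumed Gaussian-shaped likelihood of a dwell time in the microcell."""
    return math.exp(-((d - mean) ** 2) / (2 * sd * sd))

# Two pedestrians entered at t=0 and t=3; someone departs at t=8.
# Pedestrian B's dwell (5 s) matches the model mean, so B is chosen.
best = match_departure([("A", 0.0), ("B", 3.0)], departure_time=8.0,
                       prior={"A": 0.5, "B": 0.5},
                       likelihood=dwell_likelihood)
```

The paper's extension adds time-series information across successive events; this sketch shows only the single-event Bayesian matching step.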

    Human-Centric Machine Vision

    http://www.intechopen.com/books/human-centric-machine-visio

    Intelligent Sensor Networks

    In the last decade, wireless and wired sensor networks have attracted much attention. However, most designs target general sensor-network issues, including the protocol stack (routing, MAC, etc.) and security. This book focuses on the close integration of sensing, networking, and smart signal processing via machine learning. Based on their world-class research, the authors present the fundamentals of intelligent sensor networks. They cover sensing and sampling, distributed signal processing, and intelligent signal learning. In addition, they present cutting-edge research results from leading experts.

    Human-Centric Machine Vision

    Recently, algorithms for processing visual information have greatly evolved, providing efficient and effective solutions to cope with the variability and complexity of real-world environments. These achievements have led to the development of Machine Vision systems that go beyond typical industrial applications, where environments are controlled and tasks are very specific, towards innovative solutions that address the everyday needs of people. Human-Centric Machine Vision can help solve problems raised by the needs of our society, e.g. security and safety, health care, medical imaging, and human-machine interfaces. In such applications it is necessary to handle changing, unpredictable and complex situations, and to account for the presence of humans.