
    Dynamic Reconfiguration in Camera Networks: A Short Survey

    There is a clear trend in camera networks towards enhanced functionality and flexibility, and a fixed static deployment is typically not sufficient to fulfill these increased requirements. Dynamic network reconfiguration helps to optimize network performance for the specific tasks currently required while considering the available resources. Although several reconfiguration methods have been proposed recently, e.g., for maximizing global scene coverage or maximizing the image quality of specific targets, there is a lack of a general framework highlighting the key components shared by all these systems. In this paper, we propose a reference framework for network reconfiguration and present a short survey of some of the most relevant state-of-the-art works in this field, showing how they can be reformulated in our framework. Finally, we discuss the main open research challenges in camera network reconfiguration.

    Toward Global Sensing Quality Maximization: A Configuration Optimization Scheme for Camera Networks

    The performance of a camera network monitoring a set of targets depends crucially on the configuration of the cameras. In this paper, we investigate a reconfiguration strategy for the parameterized camera network model, with which the sensing qualities of multiple targets can be optimized globally and simultaneously. We first propose to use the number of pixels occupied by a unit-length object in the image as a metric of the sensing quality of the object, which is determined by the parameters of the camera, such as the intrinsic, extrinsic, and distortion coefficients. Then, we form a single quantity that measures the sensing quality of the targets by the camera network. This quantity further serves as the objective function of our optimization problem to obtain the optimal camera configuration. We verify the effectiveness of our approach through extensive simulations and experiments, and the results reveal its improved performance on AprilTag detection tasks. Code and related utilities for this work are open-sourced and available at https://github.com/sszxc/MultiCam-Simulation. Comment: The 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022).
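    The pixels-per-unit-length metric described above can be sketched with a bare pinhole model: a unit-length object at depth d spans roughly f/d pixels, where f is the focal length in pixels. The aggregation below (score each target by its best-viewing camera, then take the minimum over targets as the network objective) is an illustrative assumption, not the paper's actual objective function, and it ignores distortion and viewing angle.

    ```python
    import numpy as np

    def pixels_per_unit_length(f_px, cam_pos, target_pos):
        """Approximate sensing quality: pixels spanned by a unit-length
        object at the target, under a simple pinhole model (no distortion)."""
        d = np.linalg.norm(np.asarray(target_pos, float) - np.asarray(cam_pos, float))
        return f_px / d  # similar triangles: image size = f * (object size / depth)

    def network_quality(f_px, cam_positions, targets):
        """Illustrative aggregate: each target is scored by its best-viewing
        camera; the network objective is the worst-sensed target's score."""
        per_target = [
            max(pixels_per_unit_length(f_px, c, t) for c in cam_positions)
            for t in targets
        ]
        return min(per_target)

    # Maximizing network_quality over camera poses would then be the
    # configuration-optimization step.
    q = network_quality(800.0, [(0, 0, 0), (10, 0, 0)], [(5, 5, 0), (2, 1, 0)])
    ```
    
    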

    Long Range Automated Persistent Surveillance

    This dissertation addresses long range automated persistent surveillance with focus on three topics: sensor planning, size preserving tracking, and high magnification imaging. A sufficient overlapped field of view should be reserved so that camera handoff can be executed successfully before the object of interest becomes unidentifiable or untraceable. We design a sensor planning algorithm that not only maximizes coverage but also ensures a uniform and sufficient overlap between cameras' fields of view for an optimal handoff success rate. This algorithm works for environments with multiple dynamic targets using different types of cameras. Significantly improved handoff success rates are illustrated via experiments using floor plans of various scales. Size preserving tracking automatically adjusts the camera's zoom for a consistent view of the object of interest. Target scale estimation is carried out based on the paraperspective projection model, which compensates for the center offset and considers system latency and tracking errors. A computationally efficient foreground segmentation strategy, 3D affine shapes, is proposed. The 3D affine shapes feature direct and real-time implementation and improved flexibility in accommodating the target's 3D motion, including off-plane rotations. The effectiveness of the scale estimation and foreground segmentation algorithms is validated via both offline and real-time tracking of pedestrians at various resolution levels. Face image quality assessment and enhancement compensate for the performance degradations in face recognition rates caused by high system magnifications and long observation distances. A class of adaptive sharpness measures is proposed to evaluate and predict this degradation. A wavelet based enhancement algorithm with automated frame selection is developed and proves efficient by a considerably elevated face recognition rate for severely blurred long range face images.
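    The size preserving tracking idea (adjust zoom so the target keeps a constant apparent size) can be sketched as a damped multiplicative controller. This is a simplified stand-in, not the dissertation's paraperspective-model estimator; the `gain` damping is an assumed way of tolerating latency and tracking noise.

    ```python
    def zoom_update(zoom, measured_height_px, desired_height_px, gain=0.5,
                    z_min=1.0, z_max=30.0):
        """Adjust optical zoom so a tracked target keeps a constant apparent
        size. Apparent size scales roughly linearly with zoom, so the ideal
        correction is desired/measured; raising it to `gain` < 1 damps the
        step against system latency and tracking errors."""
        ratio = desired_height_px / measured_height_px
        new_zoom = zoom * ratio ** gain       # damped multiplicative step
        return min(max(new_zoom, z_min), z_max)  # respect the lens limits
    ```

    With `gain=1.0` the controller jumps straight to the ideal zoom; smaller gains trade convergence speed for stability under noisy scale estimates.
    
    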

    Large-Scale Surveillance System: Detection and Tracking of Suspicious Motion Patterns in Traffic Crowds

    The worldwide increasing sentiment of insecurity has ushered in a new era, reshaping the design and deployment of intelligent video-surveillance systems. The large-scale use of these systems has created new needs in terms of analysis and interpretation. For this purpose, applications related to behavior recognition and scene understanding have become increasingly attractive to a significant number of computer vision researchers, particularly where crowded scenes are concerned. So far, motion analysis and tracking remain challenging due to significant visual ambiguities, which call for further investigation. In this work, we present a new framework to recognize various motion patterns, extract abnormal behaviors, and track them over a multi-camera traffic surveillance system. We apply a density-based technique to cluster motion vectors produced by optical flow, and compare them with motion pattern models defined earlier. Non-identified clusters are treated as suspicious and simultaneously tracked over an overlapping camera network for as long as possible. To manage the network configuration, we designed an active camera scheduling strategy where camera assignment is realized via an improved Weighted Round-Robin algorithm. To validate our approach, experiment results are presented and discussed.
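    The camera-assignment step above builds on Weighted Round-Robin scheduling. A minimal sketch of the classic (unimproved) scheme follows; the paper's improved variant, and how its weights would be derived from target priority, are not specified here, so this only shows the baseline being extended.

    ```python
    def weighted_round_robin(cameras, weights, n_slots):
        """Classic weighted round-robin: in each cycle, camera i receives up
        to weights[i] consecutive assignment slots before the scheduler moves
        on to the next camera. Returns the camera chosen for each slot."""
        order = []
        while len(order) < n_slots:
            for cam, w in zip(cameras, weights):
                for _ in range(w):
                    if len(order) == n_slots:
                        break
                    order.append(cam)
        return order

    # Camera "A" (weight 2) is granted twice as many tracking slots as "B".
    schedule = weighted_round_robin(["A", "B"], [2, 1], 6)
    ```
    
    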

    Toward Sensor Modular Autonomy for Persistent Land Intelligence Surveillance and Reconnaissance (ISR)

    Currently, most land Intelligence, Surveillance and Reconnaissance (ISR) assets (e.g. EO/IR cameras) are simply data collectors. Understanding, decision making, and sensor control are performed by the human operators, involving high cognitive load. Any automation in the system has traditionally involved bespoke design of centralised systems that are highly specific to the assets/targets/environment under consideration, resulting in complex, inflexible systems that exhibit poor interoperability. We address a concept of Autonomous Sensor Modules (ASMs) for land ISR, where these modules have the ability to make low-level decisions on their own in order to fulfil a higher-level objective, and plug in, with the minimum of preconfiguration, to a High Level Decision Making Module (HLDMM) through a middleware integration layer. The dual requisites of autonomy and interoperability create challenges around information fusion and asset management in an autonomous hierarchical system, which are addressed in this work. This paper presents the results of a demonstration system, known as Sensing for Asset Protection with Integrated Electronic Networked Technology (SAPIENT), which was shown in realistic base protection scenarios with live sensors and targets. The SAPIENT system performed sensor cueing, intelligent fusion, sensor tasking, target hand-off, and compensation for compromised sensors, without human control, and enabled rapid integration of ISR assets at the time of system deployment, rather than at design time. Potential benefits include rapid interoperability for coalition operations, situation understanding with low operator cognitive burden, and autonomous sensor management in heterogeneous sensor systems.

    Intelligent middleware for adaptive sensing of tennis coaching sessions

    In professional tennis training matches, the coach needs to be able to view play from the most appropriate angle in order to monitor players' activities. In this paper, we present a system which can adapt the operation of a series of cameras in order to maintain optimal system performance based on a set of wireless sensors. This setup is used as a testbed for an agent-based intelligent middleware that can correlate data from many different wired and wireless sensors and provide effective in-situ decision making. The proposed solution is flexible enough to allow the addition of new sensors and actuators. Within this setup we also provide details of a case study for the embedded control of cameras through the use of Ubisense data.

    Calibration with concurrent PT axes

    The introduction of active (pan-tilt-zoom, or PTZ) cameras in Smart Rooms, in addition to fixed static cameras, makes it possible to improve resolution in volumetric reconstruction, adding the capability to track smaller objects with higher precision in actual 3D world coordinates. To accomplish this goal, precise camera calibration data should be available for any pan, tilt, and zoom settings of each PTZ camera. The PTZ calibration method proposed in this paper introduces a novel solution to the problem of computing extrinsic and intrinsic parameters for active cameras. We first determine the rotation center of the camera expressed under an arbitrary world coordinate origin. Then, we obtain an equation relating any rotation of the camera with the movement of the principal point to define extrinsic parameters for any value of pan and tilt. Once this position is determined, we compute how intrinsic parameters change as a function of zoom. We validate our method by evaluating the re-projection error and its stability for points inside and outside the calibration set.
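    The extrinsics-for-any-pan/tilt idea can be sketched under a simplifying assumption: the camera rotates rigidly about a single fixed rotation center C (the paper additionally models how the principal point moves, which this sketch omits). Given a reference orientation R0, the pose for any pan/tilt pair follows by composing rotations and keeping C fixed.

    ```python
    import numpy as np

    def rot_y(a):
        """Rotation about the vertical (y) axis -- used here for pan."""
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def rot_x(a):
        """Rotation about the horizontal (x) axis -- used here for tilt."""
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def ptz_extrinsics(R0, C, pan, tilt):
        """Extrinsics [R | t] for arbitrary pan/tilt, assuming the camera
        rotates rigidly about a fixed rotation center C (world coords).
        t = -R @ C keeps C at the camera origin for every setting."""
        R = rot_x(tilt) @ rot_y(pan) @ R0
        t = -R @ np.asarray(C, dtype=float)
        return R, t
    ```

    A quick sanity check of the model: for any pan/tilt, the rotation center must map to the camera origin, i.e. `R @ C + t == 0`.
    
    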