37 research outputs found

    Energy Consumption and Latency Analysis for Wireless Multimedia Sensor Networks

    Energy and bandwidth are limited resources in wireless sensor networks, and communication consumes a significant amount of energy. When wireless vision sensors are used to capture and transfer image and video data, the problems of limited energy and bandwidth become even more pronounced. Thus, message traffic should be decreased to reduce the communication cost. In many applications, the interest is in detecting composite and semantically higher-level events based on information from multiple sensors. Rather than sending all the information to the sinks and performing composite event detection at the sinks or control center, it is much more efficient to push the detection of semantically high-level events into the network, and perform composite event detection in a peer-to-peer and energy-efficient manner across embedded smart cameras. In this paper, three different operation scenarios are analyzed for a wireless vision sensor network. A detailed quantitative comparison of these operation scenarios is presented in terms of energy consumption and latency. This quantitative analysis provides the motivation for, and emphasizes, (1) the importance of performing high-level local processing and decision making at the embedded sensor level and (2) the need for peer-to-peer communication solutions for wireless multimedia sensor networks.
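
    To make the motivation concrete, the following sketch compares transmission energy and latency for three illustrative scenarios using a first-order radio model; every parameter (radio constants, link rate, distance, payload sizes) is an assumption for illustration, not a figure from the paper.

```python
# Illustrative sketch (not the paper's model): first-order radio energy
# model comparing three operation scenarios by transmitted payload size.
E_ELEC = 50e-9      # J/bit, electronics energy (assumed)
E_AMP = 100e-12     # J/bit/m^2, amplifier energy (assumed)
DIST_M = 30.0       # camera-to-sink distance in metres (assumed)
RATE_BPS = 250e3    # radio rate, e.g. IEEE 802.15.4 (assumed)

def tx_cost(payload_bits: float) -> tuple[float, float]:
    """Return (energy in J, latency in s) to transmit one payload."""
    energy = (E_ELEC + E_AMP * DIST_M**2) * payload_bits
    latency = payload_bits / RATE_BPS
    return energy, latency

scenarios = {
    "raw frame to sink": 8 * 320 * 240,   # 8-bit grayscale QVGA frame, bits
    "compressed frame": 8 * 20_000,       # ~20 kB JPEG, bits
    "local event decision": 8 * 64,       # short 64-byte event message, bits
}
for name, bits in scenarios.items():
    e, t = tx_cost(bits)
    print(f"{name:>22}: {e*1e3:.3f} mJ, {t*1e3:.1f} ms")
```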

    Distributed multi-class road user tracking in multi-camera network for smart traffic applications

    Reliable tracking of road users is one of the important tasks in smart traffic applications. In these applications, a network of cameras is often used to extend the coverage. However, efficient usage of information from cameras which observe the same road user from different viewpoints is seldom explored. In this paper, we present a distributed multi-camera tracker which efficiently uses information from all cameras with overlapping views to accurately track various classes of road users. Our method is designed for deployment on smart camera networks, so that most computer vision tasks are executed locally on smart cameras and only concise high-level information is sent to a fusion node for global joint tracking. We evaluate the performance of our tracker on a challenging real-world traffic dataset in the context of a Turn Movement Count (TMC) application, achieving high accuracies of 93% and 83% on vehicles and cyclists, respectively. Moreover, performance testing in anomaly detection shows that the proposed method provides reliable detection of abnormal vehicle and pedestrian trajectories.
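
    The division of labour between cameras and the fusion node can be sketched as follows; the gating threshold, message format, and greedy clustering below are illustrative assumptions, not the paper's exact fusion algorithm.

```python
# Hypothetical sketch of the fusion step: each smart camera sends only
# concise ground-plane detections; the fusion node clusters detections
# of the same road user across overlapping views and keeps one global
# estimate per object. Names and thresholds are assumptions.
import numpy as np

GATE_M = 1.5  # association gate in metres (assumed)

def fuse(detections_per_camera: list[np.ndarray]) -> np.ndarray:
    """Greedy cross-camera clustering by proximity on the ground plane."""
    points = np.vstack([d for d in detections_per_camera if len(d)])
    fused = []
    used = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        if used[i]:
            continue
        close = np.linalg.norm(points - p, axis=1) < GATE_M
        used |= close
        fused.append(points[close].mean(axis=0))  # one estimate per user
    return np.array(fused)

cam_a = np.array([[10.0, 5.0], [22.0, 7.5]])   # detections from camera A
cam_b = np.array([[10.4, 5.2]])                # same user seen by camera B
print(fuse([cam_a, cam_b]))                    # two fused positions
```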

    Visual on-line learning in distributed camera networks

    Automatic detection of persons is an important application in visual surveillance. In general, state-of-the-art systems have two main disadvantages: First, usually a general detector has to be learned that is applicable to a wide range of scenes. Thus, the training is time-consuming and requires a huge amount of labeled data. Second, the data is usually processed centrally, which leads to high network traffic. Thus, the goal of this paper is to overcome these problems, which is realized by a person detection system that is based on distributed smart cameras (DSCs). Assuming that we have a large number of cameras with partly overlapping views, the main idea is to reduce the model complexity of the detector by training a specific detector for each camera. These detectors are initialized by a pre-trained classifier that is then adapted for a specific camera by co-training. In particular, for co-training we apply an on-line learning method (i.e., boosting for feature selection), where the information exchange is realized via mapping the overlapping views onto each other by using a homography. Thus, we have a compact scene-dependent representation, which allows the classifiers to be trained and evaluated on an embedded device. Moreover, since the information transfer is reduced to exchanging positions, the required network traffic is minimal. The power of the approach is demonstrated in various experiments on different publicly available data sets. In fact, we show that on-line learning and applying DSCs can benefit from each other. Index Terms — visual on-line learning, object detection, multi-camera networks
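
    The position-exchange step lends itself to a compact illustration. Below is a minimal sketch, assuming a known ground-plane homography H between two overlapping views (the matrix here is made up): detected foot positions from camera A are projected into camera B's image, where they could serve as labels for co-training.

```python
# Minimal sketch of the information-exchange step: detected person
# positions in camera A's image are mapped into camera B's view with a
# homography H (a made-up matrix here; in practice H is estimated from
# the overlapping ground plane), so B can use them as co-training labels.
import numpy as np

H = np.array([[1.02, 0.05, -12.0],
              [0.01, 0.98,   4.0],
              [1e-5, 2e-5,   1.0]])  # assumed ground-plane homography A->B

def map_positions(pts_a: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Apply a planar homography to Nx2 image points."""
    pts_h = np.hstack([pts_a, np.ones((len(pts_a), 1))])  # homogeneous
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                 # dehomogenise

feet_a = np.array([[320.0, 460.0], [150.0, 430.0]])  # foot points in A
print(map_positions(feet_a, H))  # positions to verify/label in camera B
```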

    The future of camera networks: staying smart in a chaotic world

    Camera networks become smart when they can interpret video data on board, in order to carry out tasks as a collective, such as target tracking and (re-)identification of objects of interest. Unlike today’s deployments, which are mainly restricted to lab settings and highly controlled high-value applications, future smart camera networks will be messy and unpredictable. They will operate on a vast scale, drawing on mobile resources connected in networks structured in complex and changing ways. They will comprise heterogeneous and decentralised aggregations of visual sensors, which will come together in temporary alliances, in unforeseen and rapidly unfolding scenarios. The potential to include and harness citizen-contributed mobile streaming, body-worn video, and robot-mounted cameras, alongside more traditional fixed or PTZ cameras, and supported by other non-visual sensors, leads to a number of difficult and important challenges. In this position paper, we discuss a variety of potential uses for such complex smart camera networks, and some of the challenges that arise when staying smart in the presence of such complexity. We present a general discussion on the challenges of heterogeneity, coordination, self-reconfigurability, mobility, and collaboration in camera networks.

    Automated Static Camera Calibration with Intelligent Vehicles

    Connected and cooperative driving requires precise calibration of the roadside infrastructure to provide a reliable perception system. To meet this requirement in an automated manner, we present a robust extrinsic calibration method for automated geo-referenced camera calibration. Our method requires a calibration vehicle equipped with a combined GNSS/RTK receiver and an inertial measurement unit (IMU) for self-localization. In order to remove any requirements on the target's appearance and the local traffic conditions, we propose a novel approach using hypothesis filtering. Our method does not require any human interaction with the information recorded by both the infrastructure and the vehicle. Furthermore, we do not limit road access for other road users during calibration. We demonstrate the feasibility and accuracy of our approach by evaluating it on synthetic datasets as well as a real-world connected intersection, and by deploying the calibration on real infrastructure. Our source code is publicly available.
    Comment: 7 pages, 3 figures, accepted for presentation at the 34th IEEE Intelligent Vehicles Symposium (IV 2023), June 4-7, 2023, Anchorage, Alaska, United States of America.
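
    The core geometric step, recovering the camera extrinsics from correspondences between the vehicle's geo-referenced positions and its pixel positions in the camera image, can be sketched with a standard PnP solve. The paper's hypothesis filtering is not shown, and all numbers below are made up for illustration.

```python
# Sketch of the core geometric step only: given geo-referenced vehicle
# positions from GNSS/RTK+IMU and the vehicle's detected pixel positions
# in the infrastructure camera, recover the camera extrinsics with PnP.
# Intrinsics, track, and pixel coordinates are assumed values.
import numpy as np
import cv2

K = np.array([[1400.0, 0.0, 960.0],
              [0.0, 1400.0, 540.0],
              [0.0,    0.0,   1.0]])          # assumed camera intrinsics
world_pts = np.array([[5.0, 2.0, 0.0],        # vehicle track, local ENU (m)
                      [8.0, 2.1, 0.0],
                      [11.0, 2.3, 0.0],
                      [14.0, 2.2, 0.0],
                      [17.0, 2.6, 0.0],
                      [20.0, 3.0, 0.0]])
image_pts = np.array([[1010.0, 600.0], [980.0, 585.0], [955.0, 574.0],
                      [936.0, 566.0], [921.0, 560.0], [908.0, 556.0]])

ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, None)
assert ok
R, _ = cv2.Rodrigues(rvec)                    # rotation world -> camera
print("camera position in world frame:", (-R.T @ tvec).ravel())
```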

    Voronoi-Based Coverage Control of Pan/Tilt/Zoom Camera Networks

    A challenge of pan/tilt/zoom (PTZ) camera networks for efficient and flexible visual monitoring is automated active network reconfiguration in response to environmental stimuli. In this paper, given an event/activity distribution over a convex environment, we propose a new provably correct reactive coverage control algorithm for PTZ camera networks that continuously (re)configures camera orientations and zoom levels (i.e., angles of view) in order to locally maximize their total coverage quality. Our construction is based on careful modeling of visual sensing quality that is consistent with the physical nature of cameras, and we introduce a new notion of conic Voronoi diagrams, based on our sensing quality measures, to solve the camera network allocation problem: that is, to determine where each camera should focus in its field of view given all the other cameras' configurations. Accordingly, we design simple greedy gradient algorithms for both continuous- and discrete-time first-order PTZ camera dynamics that asymptotically converge to a locally optimal coverage configuration. Finally, we provide numerical and experimental evidence demonstrating the effectiveness of the proposed coverage algorithms.
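
    As a toy illustration of the reactive coverage idea (not the paper's conic Voronoi construction), the sketch below assigns each event point to the camera that covers it best and runs numerical gradient ascent on the pan angles; the sensing-quality model and all parameters are assumptions.

```python
# Toy sketch: each event point is allocated to the camera covering it
# best, and each camera performs gradient ascent on its pan angle to
# increase total coverage quality. Quality model is an assumption.
import numpy as np

rng = np.random.default_rng(0)
events = rng.uniform(0, 10, size=(200, 2))        # event distribution
cam_pos = np.array([[0.0, 0.0], [10.0, 0.0]])     # fixed camera positions
pan = np.array([0.8, 2.3])                        # pan angles (rad)
SIGMA = 0.5                                       # angular falloff (assumed)

def quality(pan_angles):
    """Total coverage: each event scored by its best-covering camera."""
    q = []
    for (cx, cy), a in zip(cam_pos, pan_angles):
        bearing = np.arctan2(events[:, 1] - cy, events[:, 0] - cx)
        err = np.angle(np.exp(1j * (bearing - a)))   # wrapped difference
        q.append(np.exp(-(err / SIGMA) ** 2))
    return np.max(q, axis=0).sum()                   # best-camera allocation

for _ in range(100):                                 # numerical gradient ascent
    grad = np.array([(quality(pan + e) - quality(pan - e)) / 2e-4
                     for e in np.eye(2) * 1e-4])
    pan += 0.005 * grad
print("locally optimal pan angles (rad):", pan)
```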

    Metropolitan intelligent surveillance systems for urban areas by harnessing IoT and edge computing paradigms

    Recent technological advances have led to the rapid and uncontrolled proliferation of intelligent surveillance systems (ISSs), serving to supervise urban areas. Driven by pressing public safety and security requirements, modern cities are being transformed into tangled cyber-physical environments, consisting of numerous heterogeneous ISSs under different administrative domains with low or no capabilities for reuse and interaction. This isolated pattern renders itself unsustainable in city-wide scenarios that typically require aggregating, managing, and processing multiple video streams continuously generated by distributed ISS sources. A coordinated approach is therefore required to enable an interoperable ISS for metropolitan areas, facilitating technological sustainability and preventing network bandwidth saturation. To meet these requirements, this paper combines several approaches and technologies, namely the Internet of Things, cloud computing, edge computing, and big data, into a common framework to enable a unified approach to implementing an ISS at an urban scale, thus paving the way for the metropolitan intelligent surveillance system (MISS). The proposed solution aims to push data management and processing tasks as close to data sources as possible, thus increasing performance and security levels that are usually critical to surveillance systems. To demonstrate the feasibility and the effectiveness of this approach, the paper presents a case study based on a distributed ISS scenario in a crowded urban area, implemented on clustered edge devices that are able to off-load tasks in a “horizontal” manner in the context of the developed MISS framework. As demonstrated by the initial experiments, the MISS prototype obtains face recognition results 8 times faster than the traditional off-loading pattern, where processing tasks are pushed “vertically” to the cloud.
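
    The “horizontal” off-loading pattern can be sketched as a simple placement policy: prefer the least-loaded peer in the edge cluster and fall back to the cloud only when all peers are saturated. Node names, capacities, and the load model below are assumptions, not the MISS framework's actual scheduler.

```python
# Hypothetical sketch of "horizontal" off-loading: an edge node places a
# recognition task on the least-loaded peer in its cluster and falls
# back to the cloud only when all peers are saturated.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    load: float       # current utilisation in [0, 1]
    capacity: float   # max utilisation before refusing tasks

def place_task(peers: list[Node], cost: float) -> str:
    """Return the name of the node chosen to run a task of given cost."""
    candidates = [n for n in peers if n.load + cost <= n.capacity]
    if candidates:                       # horizontal: stay at the edge
        chosen = min(candidates, key=lambda n: n.load)
        chosen.load += cost
        return chosen.name
    return "cloud"                       # vertical fallback

cluster = [Node("edge-1", 0.7, 0.9), Node("edge-2", 0.3, 0.9)]
print(place_task(cluster, 0.2))   # -> edge-2 (least-loaded peer)
print(place_task(cluster, 0.5))   # -> cloud (all peers saturated)
```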

    Condition-based maintenance for major airport baggage systems

    Purpose: The aim of this paper is to develop a contribution to knowledge that adds to the empirical evidence of predictive condition-based maintenance by demonstrating how the availability and reliability of current assets can be improved without costly capital investment, resulting in overall system performance improvements.
    Methodology: The empirical, experimental approach, technical action research (TAR), was designed to study a major Middle-Eastern airport baggage handling operation. A predictive condition-based maintenance prototype station was installed to monitor the condition of a highly complex system of static and moving assets.
    Findings: The research provides evidence that the performance frontier for airport baggage handling systems can be improved using automated dynamic monitoring of the vibration and digital image data on baggage trays as they pass a service station. The introduction of low-end innovation, which combines advanced technology and low-cost hardware, reduced asset failures in this complex, high-speed operating environment.
    Originality/Value: The originality derives from the application of existing hardware with the combination of Edge and Cloud computing software through architectural innovation, resulting in adaptations to an existing baggage handling system within the context of a time-critical logistics system.
    Keywords: IoT, Condition-based maintenance, Predictive maintenance, Edge computing, Technical Action Research, Theory of Performance Frontiers, Case Study
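
    As a rough illustration of the monitoring idea (not the study's actual pipeline), the sketch below learns a control limit from healthy vibration readings and flags trays whose RMS drifts beyond it; all data are synthetic.

```python
# Illustrative condition-monitoring sketch: flag baggage trays whose
# vibration RMS at the service station exceeds a control limit learned
# from healthy baseline readings. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(1)
baseline = rng.normal(1.0, 0.05, size=500)       # healthy RMS readings (g)
mean, std = baseline.mean(), baseline.std()
UCL = mean + 3 * std                             # upper control limit

def check_tray(vibration_window: np.ndarray) -> bool:
    """True if this tray's RMS exceeds the learned control limit."""
    rms = np.sqrt(np.mean(vibration_window ** 2))
    return rms > UCL

healthy = rng.normal(0.0, 1.0, size=1024)        # raw accel samples
worn = rng.normal(0.0, 1.6, size=1024)           # degraded component, say
print(check_tray(healthy), check_tray(worn))     # -> False True
```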

    Camera Networks Dimensioning and Scheduling with Quasi Worst-Case Transmission Time

    This paper describes a method to compute frame-size estimates to be used in quasi worst-case transmission times (qWCTT) for cameras that transmit frames over IP-based communication networks. The precise determination of the qWCTT allows us to model the network access scheduling problem as a multiframe problem and to re-use theoretical results for network scheduling. The paper presents a set of experiments, conducted in an industrial testbed, that validate the qWCTT estimation. We believe that a more precise estimation will lead to savings in network infrastructure and to better network utilization.
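
    One simple way to realize such an estimate, under assumed numbers rather than the paper's industrial measurements, is to take a high quantile of observed frame sizes as the quasi worst case and convert it to a transmission time on the link:

```python
# Sketch of the estimation idea with assumed numbers: use a high
# quantile of observed frame sizes as the quasi worst case (rather than
# the true maximum) and convert it to a transmission time on the link.
import numpy as np

rng = np.random.default_rng(2)
frame_bytes = rng.lognormal(mean=10.5, sigma=0.3, size=5000)  # observed sizes
LINK_BPS = 100e6                    # 100 Mbit/s network (assumed)
QUANTILE = 0.999                    # "quasi" worst case, not the max

q_size = np.quantile(frame_bytes, QUANTILE)
qwctt = q_size * 8 / LINK_BPS       # seconds to transmit one such frame
print(f"qWCTT frame size: {q_size/1024:.1f} KiB, "
      f"transmission time: {qwctt*1e3:.2f} ms")
```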