146,054 research outputs found
Secure, Mobile Visual Sensor Networks Architecture
As Wireless Sensor Network-based solutions proliferate, they face new challenges: they must adapt to rapidly changing environments and requirements while keeping node power consumption low, since their nodes usually run on batteries. Security is also crucial, as these networks frequently transmit and process highly sensitive data, and it is important to support real-time video or processed images over their limited-bandwidth links. SMART aims to design and implement a highly reconfigurable Wireless Visual Sensor Node (WVSN), defined as a miniaturized, lightweight, secure, low-cost, battery-powered sensing device enriched with video and data compression capabilities.
Energy Consumption Of Visual Sensor Networks: Impact Of Spatio-Temporal Coverage
Wireless visual sensor networks (VSNs) are expected to play a major role in future IEEE 802.15.4 personal area networks (PAN) under recently-established collision-free medium access control (MAC) protocols, such as the IEEE 802.15.4e-2012 MAC. In such environments, the VSN energy consumption is affected by the number of camera sensors deployed (spatial coverage), as well as the number of captured video frames out of which each node processes and transmits data (temporal coverage). In this paper, we explore this aspect for uniformly-formed VSNs, i.e., networks comprising identical wireless visual sensor nodes connected to a collection node via a balanced cluster-tree topology, with each node producing independent identically-distributed bitstream sizes after processing the video frames captured within each network activation interval. We derive analytic results for the energy-optimal spatio-temporal coverage parameters of such VSNs under a-priori known bounds for the number of frames to process per sensor and the number of nodes to deploy within each tier of the VSN. Our results are parametric to the probability density function characterizing the bitstream size produced by each node and the energy consumption rates of the system of interest. Experimental results reveal that our analytic results are always within 7% of the energy consumption measurements for a wide range of settings. In addition, results obtained via a multimedia subsystem show that the optimal spatio-temporal settings derived by the proposed framework allow for substantial reduction of energy consumption in comparison to ad-hoc settings. As such, our analytic modeling is useful for early-stage studies of possible VSN deployments under collision-free MAC protocols prior to costly and time-consuming experiments in the field.
Comment: to appear in IEEE Transactions on Circuits and Systems for Video Technology, 201
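The trade-off described above, fewer nodes versus fewer frames per node under a coverage requirement, can be sketched numerically. This is a minimal illustration, not the paper's actual derivation: the energy model (idle, per-frame processing, and per-bit transmission costs) and all constants here are assumptions, and the search is a brute-force scan over the a-priori bounds rather than the paper's analytic optimum.

```python
import itertools

def expected_energy(nodes, frames, e_idle=0.5, e_proc=0.2,
                    e_tx_per_bit=2e-6, mean_bits_per_frame=40_000.0):
    """Illustrative per-interval VSN energy model (assumed, not the paper's):
    each node pays a fixed idle cost, a processing cost per captured frame,
    and a transmission cost proportional to the expected bitstream size."""
    per_node = (e_idle + e_proc * frames
                + e_tx_per_bit * mean_bits_per_frame * frames)
    return nodes * per_node

def optimal_coverage(node_bounds, frame_bounds, min_total_frames):
    """Exhaustive search over the a-priori known bounds for deployed nodes
    (spatial coverage) and frames processed per node (temporal coverage),
    subject to a minimum total number of processed frames per interval."""
    candidates = [(n, f)
                  for n, f in itertools.product(range(*node_bounds),
                                                range(*frame_bounds))
                  if n * f >= min_total_frames]
    return min(candidates, key=lambda nf: expected_energy(*nf))
```

With these assumed rates, transmitting fewer, busier nodes wins: requiring 20 total frames over bounds of 1-10 nodes and 1-10 frames selects 2 nodes at 10 frames each, since each extra node re-pays the fixed idle cost.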
Collaborative Solutions to Visual Sensor Networks
Visual sensor networks (VSNs) merge computer vision, image processing and wireless sensor network disciplines to solve problems in multi-camera applications in large surveillance areas. Although potentially powerful, VSNs also present unique challenges that could hinder their practical deployment because of the unique camera features including the extremely higher data rate, the directional sensing characteristics, and the existence of visual occlusions.
In this dissertation, we first present a collaborative approach for target localization in VSNs. Traditionally, the problem is solved by localizing targets at the intersections of the back-projected 2D cones of each target. However, the existence of visual occlusions among targets would generate many false alarms. Instead of resolving the uncertainty about target existence at the intersections, we identify and study the non-occupied areas in the 2D cones and generate the so-called certainty map of target non-existence. We also propose distributed integration of local certainty maps by following a dynamic itinerary, along which the entire map is progressively clarified.
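The certainty-map idea can be sketched on a discrete grid: each camera marks the cells of its field-of-view cone that back-project to no detection as certainly empty, and nodes along the itinerary accumulate this non-existence evidence. This is a toy sketch under assumed data structures (cell sets standing in for real back-projected cones), not the dissertation's algorithm.

```python
import numpy as np

def local_certainty_map(shape, fov_cells, detected_cells):
    """One camera's certainty map of target NON-existence: a grid cell is
    marked True (certainly empty) if it lies in the camera's field of view
    but back-projects to no detected target."""
    empty = np.zeros(shape, dtype=bool)
    for cell in fov_cells:
        if cell not in detected_cells:
            empty[cell] = True
    return empty

def fuse_along_itinerary(local_maps):
    """Progressive fusion along a dynamic itinerary: each node OR-accumulates
    the non-existence evidence of its predecessors, so the map of certainly
    empty space only grows as it travels."""
    fused = np.zeros_like(local_maps[0])
    for m in local_maps:
        fused |= m
    return fused
```

Targets can then only live in the cells never marked empty, which is how occlusion-induced false intersections get ruled out without reasoning about target existence directly.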
The accuracy of target localization is affected by the existence of faulty nodes in VSNs. Therefore, we present the design of a fault-tolerant localization algorithm that not only accurately localizes targets but also detects faults in camera orientations, tolerates these errors, and corrects them before they cascade. Based on the locations of detected targets in the fault-tolerated final certainty map, we construct a generative image model that estimates the camera orientations, detects inaccuracies, and corrects them.
In order to ensure the required visual coverage to accurately localize targets or tolerate the faulty nodes, we need to calculate the coverage before deploying sensors. Therefore, we derive the closed-form solution for the coverage estimation based on the certainty-based detection model that takes directional sensing of cameras and existence of visual occlusions into account.
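Coverage under directional sensing can be checked empirically before deployment; a Monte-Carlo estimate is a common sanity check against a closed-form result. The sketch below is an assumption-laden simplification: cameras are (x, y, heading) triples, occlusions are ignored, and a point is covered if any camera sees it within range and angular field of view.

```python
import math
import random

def covered(point, cameras, fov_half_angle, sensing_range):
    """True if the point lies within the range and angular field of view of
    at least one camera. Each camera is (x, y, heading) in radians;
    visual occlusions are ignored in this simplified sketch."""
    px, py = point
    for cx, cy, heading in cameras:
        dx, dy = px - cx, py - cy
        if math.hypot(dx, dy) > sensing_range:
            continue
        # Wrap the bearing-minus-heading difference into [-pi, pi).
        diff = ((math.atan2(dy, dx) - heading + math.pi)
                % (2 * math.pi)) - math.pi
        if abs(diff) <= fov_half_angle:
            return True
    return False

def coverage_fraction(cameras, fov_half_angle, sensing_range,
                      area=100.0, samples=10_000, seed=0):
    """Monte-Carlo estimate of the covered fraction of an area x area square."""
    rng = random.Random(seed)
    hits = sum(covered((rng.uniform(0, area), rng.uniform(0, area)),
                       cameras, fov_half_angle, sensing_range)
               for _ in range(samples))
    return hits / samples
```

A derived closed-form coverage expression can be validated by checking that this estimate converges to it as the sample count grows, including near the boundary where the abstract notes the effect is strongest.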
The effectiveness of the proposed collaborative and fault-tolerant target localization algorithms, in terms of localization accuracy as well as fault detection and correction performance, has been validated through results obtained from both simulation and real experiments. In addition, the conducted simulations show close agreement with the theoretical closed-form solution for visual coverage estimation, especially when the boundary effect is taken into account.
A sparsity-driven approach to multi-camera tracking in visual sensor networks
In this paper, a sparsity-driven approach is presented for multi-camera tracking in visual sensor networks (VSNs). VSNs consist of image sensors, embedded processors, and wireless transceivers which are powered by batteries. Since energy and bandwidth resources are limited, setting up a tracking system in VSNs is a challenging problem. Motivated by the goal of tracking in a bandwidth-constrained environment, we present a sparsity-driven method to compress the features extracted by the camera nodes, which are then transmitted across the network for distributed inference. We have designed special overcomplete dictionaries that match the structure of the features, leading to very parsimonious yet accurate representations. We have tested our method in indoor and outdoor people tracking scenarios. Our experimental results demonstrate how our approach leads to communication savings without significant loss in tracking performance.
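Sparse coding against an overcomplete dictionary is the standard machinery behind this kind of feature compression: only the few non-zero coefficients (and their indices) need to be transmitted. The sketch below uses a generic Orthogonal Matching Pursuit as a stand-in; the paper's specially designed dictionaries and its exact encoding are not reproduced here, and the identity dictionary in the usage example is purely for illustration.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of dictionary D
    (columns, assumed roughly unit-norm) to represent feature vector y,
    returning a sparse coefficient vector with at most k non-zeros.
    Assumes k >= 1."""
    residual = y.astype(float).copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sub = D[:, support]
        # Re-fit all selected atoms jointly (least squares on the support).
        sol, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ sol
    coeffs[support] = sol
    return coeffs
```

A camera node would transmit only the support indices and quantized coefficients, so the communication cost scales with k rather than with the raw feature dimension.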
The Coverage Problem in Video-Based Wireless Sensor Networks: A Survey
Wireless sensor networks typically consist of a great number of tiny, low-cost electronic devices with limited sensing and computing capabilities which communicate cooperatively to collect some kind of information from an area of interest. When the wireless nodes of such networks are equipped with a low-power camera, visual data can be retrieved, enabling a new set of novel applications. The nature of video-based wireless sensor networks demands new algorithms and solutions, since traditional wireless sensor network approaches are neither feasible nor efficient for that specialized communication scenario. The coverage problem is a crucial issue in wireless sensor networks, requiring specific solutions when video-based sensors are employed. This paper surveys the state of the art on this particular issue, covering strategies, algorithms, and general computational solutions. Open research areas are also discussed, pointing to promising directions for investigating coverage in video-based wireless sensor networks.
Multi-Agent Framework in Visual Sensor Networks
21 pages, 21 figures. Journal special issue on Visual Sensor Networks.
The recent interest in the surveillance of public, military, and commercial scenarios is increasing the need to develop and deploy intelligent and/or automated distributed visual surveillance systems. Many applications based on distributed resources use the so-called software agent technology. In this paper, a multi-agent framework is applied to coordinate video-camera-based surveillance. The ability to coordinate agents improves the global image and the efficiency of task distribution. In our proposal, a software agent is embedded in each camera and controls the capture parameters. Coordination is then based on the exchange of high-level messages among agents. Agents use an internal symbolic model to interpret the current situation from the messages received from all other agents, improving global coordination.
This work was funded by projects CICYT TSI2005-07344, CICYT TEC2005-07186, and CAM MADRINET S-0505/TIC/0255.
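The message-driven coordination described above can be caricatured in a few lines: each camera agent emits high-level detection messages and maintains a symbolic model of who currently has the best view of each target. Everything here (the message fields, the confidence-based rule, the class names) is a hypothetical stand-in, not the framework's actual protocol.

```python
from dataclasses import dataclass, field

@dataclass
class CameraAgent:
    """Toy camera agent: its symbolic model maps each target to the agent
    with the most confident report of it, built only from exchanged messages."""
    name: str
    model: dict = field(default_factory=dict)

    def report(self, target, confidence):
        # High-level message, in place of raw image data.
        return {"sender": self.name, "target": target, "confidence": confidence}

    def receive(self, msg):
        best = self.model.get(msg["target"])
        if best is None or msg["confidence"] > best[1]:
            self.model[msg["target"]] = (msg["sender"], msg["confidence"])

def coordinate(agents, messages):
    """Broadcast every message to every agent, so all agents converge on the
    same view of which camera should handle which target."""
    for msg in messages:
        for agent in agents:
            agent.receive(msg)
```

Because only compact symbolic messages are exchanged, agents agree on task assignments without ever shipping images between cameras, which is the efficiency argument the abstract makes.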
Enabling visual analysis in wireless sensor networks
This demo showcases some of the results obtained by the GreenEyes project, whose main objective is to enable visual analysis on resource-constrained multimedia sensor networks. The demo features a multi-hop visual sensor network operated by BeagleBone Linux computers with IEEE 802.15.4 communication capabilities, capable of recognizing and tracking objects according to two different visual paradigms. In the traditional compress-then-analyze (CTA) paradigm, JPEG-compressed images are transmitted through the network from a camera node to a central controller, where the analysis takes place. In the alternative analyze-then-compress (ATC) paradigm, the camera node extracts and compresses local binary visual features from the acquired images (either locally or in a distributed fashion) and transmits them to the central controller, where they are used to perform object recognition/tracking. We show that, in a bandwidth-constrained scenario, the latter paradigm achieves better application frame rates while still ensuring excellent analysis performance.
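The CTA/ATC trade-off is ultimately a payload-size comparison: a compressed image versus a set of binary descriptors. The back-of-the-envelope sketch below illustrates this with assumed numbers (average JPEG rate, 512-bit descriptors with 4-byte coordinates); the real demo's rates and descriptor format may differ.

```python
def cta_payload_bytes(width, height, bits_per_pixel=0.5):
    """Compress-then-analyze: ship the JPEG image itself.
    bits_per_pixel is an assumed average compression rate."""
    return int(width * height * bits_per_pixel / 8)

def atc_payload_bytes(num_keypoints, descriptor_bits=512, coord_bytes=4):
    """Analyze-then-compress: ship binary descriptors plus keypoint
    coordinates instead of pixels (sizes are illustrative assumptions)."""
    return num_keypoints * (descriptor_bits // 8 + coord_bytes)

def pick_paradigm(width, height, num_keypoints, budget_bytes):
    """Choose the cheaper paradigm for this frame, or skip the frame
    if neither payload fits the per-frame byte budget."""
    cta = cta_payload_bytes(width, height)
    atc = atc_payload_bytes(num_keypoints)
    best = "ATC" if atc < cta else "CTA"
    return best if min(cta, atc) <= budget_bytes else "skip frame"
```

For a 640x480 frame at the assumed 0.5 bits/pixel, CTA costs 19 200 bytes while ATC with 100 keypoints costs 6 800, which is why ATC sustains higher frame rates on a constrained link.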