Full-View Coverage Problems in Camera Sensor Networks
Camera Sensor Networks (CSNs) have emerged as an information-rich sensing modality with many potential applications and have received much research attention in recent years. A major challenge in CSN research is that camera sensors differ from traditional scalar sensors: cameras at different positions form distinct views of the object in question. As a result, simply combining the sensing ranges of the cameras across the field does not necessarily yield effective camera coverage, since the face image (or the targeted aspect) of the object may be missed. Instead, the angle between the object's facing direction and the camera's viewing direction is used to measure sensing quality in CSNs. This distinction makes the coverage-verification and deployment methodologies developed for conventional sensor networks unsuitable.
A new coverage model called full-view coverage precisely characterizes coverage in CSNs: an object is full-view covered if, no matter which direction it faces, there is always a camera covering it whose viewing direction is sufficiently close to the object's facing direction. In this dissertation, we consider three areas of research for CSNs: (1) an analytical theory for full-view coverage; (2) energy-efficiency issues in full-view coverage CSNs; and (3) multi-dimensional full-view coverage theory. For the first topic, we propose a novel analytical full-view coverage theory in which the set of full-view covered points is produced numerically. Based on this theory, we solve the following problems. First, we address the full-view coverage hole detection problem and provide healing solutions. Second, we propose Full-View-Coverage algorithms for camera sensor networks. Finally, we address the camera-sensor density minimization problem for triangular-lattice-based deployment in full-view covered camera sensor networks, where we argue that there is a flaw in the previous literature and present our corresponding solution. For the second topic, we discuss lifetime and full-view coverage guarantees through distributed algorithms in camera sensor networks; another energy issue we discuss concerns object tracking in full-view coverage camera sensor networks. The third topic addresses the multi-dimensional full-view coverage problem, where we propose a novel 3D full-view coverage model and tackle the full-view coverage optimization problem of minimizing the number of camera sensors, demonstrating a valid solution.
This research is important due to the numerous applications of CSNs. In particular, since some deployments are in remote locations, it is critical to obtain accurate, meaningful data efficiently.
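The full-view condition described above can be checked numerically for a given point and camera layout. The sketch below is an illustrative simplification, not the dissertation's actual algorithm: it samples facing directions, assumes all listed cameras already have the point within sensing range and field of view, and tests only the angle condition between the facing direction and the direction from the object to each camera.

```python
import math

def full_view_covered(point, cameras, theta, n_dirs=360):
    """Numerically check whether `point` is full-view covered.

    cameras: list of (x, y) positions assumed to have `point` within
    their sensing range and field of view (those checks are omitted
    here for brevity).
    theta: effective angle in radians -- the targeted aspect counts as
    captured when the angle between the object's facing direction and
    the vector from the object to the camera is at most theta.
    """
    for k in range(n_dirs):
        facing = 2 * math.pi * k / n_dirs       # sampled facing direction
        covered = False
        for (cx, cy) in cameras:
            to_cam = math.atan2(cy - point[1], cx - point[0])
            # Smallest angular difference, wrapped into [0, pi]:
            diff = abs((facing - to_cam + math.pi) % (2 * math.pi) - math.pi)
            if diff <= theta:
                covered = True
                break
        if not covered:
            return False
    return True

# Six cameras evenly spaced on a unit circle around the origin
# leave an angular gap of at most 30 degrees to any facing direction:
ring = [(math.cos(2 * math.pi * i / 6), math.sin(2 * math.pi * i / 6))
        for i in range(6)]
print(full_view_covered((0.0, 0.0), ring, theta=math.pi / 4))  # True
print(full_view_covered((0.0, 0.0), ring, theta=math.pi / 8))  # False
```

With theta = 45 degrees the 30-degree gap is tolerable, so the center is full-view covered; tightening theta to 22.5 degrees breaks coverage, which is the kind of boundary a density-minimization argument must reason about.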
TALON - The Telescope Alert Operation Network System: Intelligent Linking of Distributed Autonomous Robotic Telescopes
The internet has brought about great change in the astronomical community,
but this interconnectivity is just starting to be exploited for use in
instrumentation. Utilizing the internet for communicating between distributed
astronomical systems is still in its infancy, but it already shows great
potential. Here we present an example of a distributed network of telescopes
that performs more efficiently in synchronous operation than as individual
instruments. RAPid Telescopes for Optical Response (RAPTOR) is a system of
telescopes at LANL that has intelligent intercommunication, combined with
wide-field optics, temporal monitoring software, and deep-field follow-up
capability all working in closed-loop real-time operation. The Telescope ALert
Operations Network (TALON) is a network server that allows intercommunication
of alert triggers from external and internal resources and controls the
distribution of these to each of the telescopes on the network. TALON is
designed to grow, allowing any number of telescopes to be linked together and
communicate. Coupled with an intelligent alert client at each telescope, it can
analyze and respond to each distributed TALON alert based on the telescope's
needs and schedule.
Comment: Presentation at SPIE 2004, Glasgow, Scotland (UK).
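The server/client split described above (a central relay distributing alert triggers, with each telescope filtering against its own needs and schedule) can be sketched as follows. This is a minimal illustration, not the actual TALON software: the class names, the `priority` field, and the threshold-based filtering rule are all assumptions standing in for whatever scheduling logic a real client applies.

```python
# Illustrative sketch of a TALON-style alert relay (names and the
# priority-threshold rule are hypothetical, not the real TALON API).

class TelescopeClient:
    def __init__(self, name, min_priority):
        self.name = name
        self.min_priority = min_priority  # stand-in for needs/schedule logic
        self.queue = []                   # alerts this telescope will act on

    def receive(self, alert):
        # Each client analyzes the alert against its own criteria.
        if alert["priority"] >= self.min_priority:
            self.queue.append(alert)

class AlertServer:
    def __init__(self):
        self.clients = []

    def register(self, client):
        # Designed to grow: any number of telescopes can be linked.
        self.clients.append(client)

    def broadcast(self, alert):
        # Distribute the trigger to every telescope on the network.
        for c in self.clients:
            c.receive(alert)

server = AlertServer()
wide = TelescopeClient("wide-field", min_priority=1)
deep = TelescopeClient("deep-field", min_priority=5)
server.register(wide)
server.register(deep)
server.broadcast({"source": "transient", "priority": 3})
print([c.name for c in server.clients if c.queue])  # ['wide-field']
```

The point of the structure is that the server stays dumb about scheduling: accepting or ignoring an alert is a local decision at each telescope.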
Sensor node localisation using a stereo camera rig
In this paper, we use stereo vision processing techniques to
detect and localise sensors used for monitoring simulated
environmental events within an experimental sensor network testbed. Our sensor nodes communicate to the camera through patterns emitted by light emitting diodes (LEDs). Ultimately, we envisage the use of very low-cost, low-power,
compact microcontroller-based sensing nodes that employ
LED communication rather than power-hungry RF to transmit data that is gathered via existing CCTV infrastructure.
To facilitate our research, we have constructed a controlled
environment where nodes and cameras can be deployed and
potentially hazardous chemical or physical plumes can be
introduced to simulate environmental pollution events in a
controlled manner. In this paper we show how 3D spatial
localisation of sensors becomes a straightforward task when
a stereo camera rig is used rather than the more usual 2D
CCTV camera.
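The reason a stereo rig makes 3D localisation straightforward is standard triangulation: for a rectified pinhole pair, depth follows directly from the disparity of a matched LED blob between the two images. The sketch below shows that geometry; the specific numbers (focal length, baseline, principal point) are made up for illustration and do not come from the paper.

```python
def stereo_to_3d(u_left, u_right, v, f, baseline, cx, cy):
    """Recover a 3D position from a matched blob in a rectified stereo
    pair (pinhole model; f in pixels, baseline in metres, (cx, cy) the
    principal point, (u, v) pixel coordinates).
    """
    disparity = u_left - u_right          # horizontal pixel offset
    if disparity <= 0:
        raise ValueError("point at infinity or mismatched blobs")
    Z = f * baseline / disparity          # depth from disparity
    X = (u_left - cx) * Z / f             # lateral offset
    Y = (v - cy) * Z / f                  # vertical offset
    return X, Y, Z

# Hypothetical LED blob seen at column 700 (left) and 650 (right), row 400:
print(stereo_to_3d(700, 650, 400, f=800, baseline=0.12, cx=640, cy=360))
```

A single 2D CCTV camera gives only the bearing (X/Z, Y/Z); the second view supplies the disparity that pins down Z, which is why the monocular version of this task is much harder.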
Video analysis of events within chemical sensor networks
This paper describes how we deploy video surveillance techniques to monitor the activities within a sensor network in order to detect environmental events. This approach combines video and sensor networks in a way quite different from the norm. Sensor networks
consist of a collection of autonomous, self-powered
nodes which sample their environment to detect anything
from chemical pollutants to atypical sound patterns which
they report through an ad hoc network. In order to reduce
power consumption, nodes have the capacity to communicate
only with neighbouring nodes. Typically these communications
are via radio waves, but in this paper the sensor nodes communicate to a base station through patterns emitted
by LEDs and captured by a video camera. The LEDs are chemically coated to react to their environment and, in doing so, emit light that is then picked up by video analysis.
There are several advantages to this approach and to demonstrate we have constructed a controlled test environment.
In this paper we introduce and briefly describe this
environment and the sensor nodes but focus mainly on the
video capture, image processing and data visualisation techniques
used to indicate these events to a user monitoring the
network.
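The core image-processing step implied above is locating bright LED patterns in each video frame. A toy version, assuming nothing about the paper's actual pipeline, is brightness thresholding followed by connected-component grouping, returning one centroid per LED; the threshold value and 4-connectivity are illustrative choices.

```python
def detect_led_blobs(frame, threshold=200):
    """Toy video-analysis step: find bright pixels in a grayscale frame
    (a list of rows of 0-255 values) and return the centroid (row, col)
    of each 4-connected bright region.
    """
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] >= threshold and not seen[y][x]:
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:                     # flood-fill one region
                    py, px = stack.pop()
                    pixels.append((py, px))
                    for ny, nx in ((py-1, px), (py+1, px),
                                   (py, px-1), (py, px+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and frame[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                blobs.append((cy, cx))
    return blobs

frame = [[0] * 6 for _ in range(4)]
frame[1][1] = frame[1][2] = 255      # one LED spanning two pixels
frame[3][5] = 255                    # a second LED
print(detect_led_blobs(frame))       # [(1.0, 1.5), (3.0, 5.0)]
```

Tracking these centroids across frames is then enough to decode the temporal on/off patterns that the chemically coated LEDs emit.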
On Achieving Diversity in the Presence of Outliers in Participatory Camera Sensor Networks
This paper addresses the problem of collecting and
delivering a representative subset of pictures in participatory camera networks, to maximize coverage when a significant portion of the pictures may be redundant or irrelevant. Consider, for example, a rescue mission where volunteers and survivors of a large-scale disaster scout a wide area to capture pictures of
damage in distressed neighborhoods, using handheld cameras, and report them to a rescue station. In this participatory camera network, a significant number of pictures may be redundant (i.e., similar pictures may be reported by many) or irrelevant (i.e., may
not document an event of interest). Given this pool of pictures, we aim to build a protocol to store and deliver a smaller subset of pictures, among all those taken, that minimizes redundancy and eliminates irrelevant objects and outliers. While previous work has addressed removal of redundancy alone, doing so in the presence of outliers is tricky: outliers, by their very nature, are different from other objects, causing redundancy-minimizing algorithms to favor their inclusion, which is at odds with the goal of finding a representative subset. Eliminating both outliers and redundancy therefore requires meeting two seemingly opposing objectives at once. The contribution of this
paper lies in a new prioritization technique (and its in-network
implementation) that minimizes redundancy among delivered
pictures, while also reducing outliers.
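The tension described above can be made concrete with a small sketch. This is not the paper's actual prioritization protocol; it is an illustrative two-stage heuristic over an assumed pairwise-similarity matrix: first drop items that resemble nothing else (outliers), then greedily pick items that maximize coverage of what remains (which suppresses redundancy, since near-duplicates add little coverage).

```python
def select_representatives(sim, k, outlier_thresh=0.2):
    """Illustrative two-stage selection (not the paper's protocol).

    sim: symmetric matrix with sim[i][j] in [0, 1].
    Stage 1 discards outliers: items whose best similarity to any
    other item falls below outlier_thresh.
    Stage 2 greedily chooses up to k items maximizing total coverage,
    where each retained item is covered by its most similar pick.
    """
    n = len(sim)
    keep = [i for i in range(n)
            if max(sim[i][j] for j in range(n) if j != i) >= outlier_thresh]
    chosen = []
    while len(chosen) < min(k, len(keep)):
        def gain(c):
            # Total coverage if candidate c were added to the picks.
            return sum(max([sim[i][c]] + [sim[i][s] for s in chosen])
                       for i in keep)
        best = max((i for i in keep if i not in chosen), key=gain)
        chosen.append(best)
    return chosen

# Items 0 and 1 are near-duplicates, item 2 is distinct, item 3 is an
# outlier resembling nothing:
sim = [
    [1.00, 0.90, 0.50, 0.05],
    [0.90, 1.00, 0.40, 0.05],
    [0.50, 0.40, 1.00, 0.05],
    [0.05, 0.05, 0.05, 1.00],
]
print(select_representatives(sim, k=2))  # [0, 2]
```

Note how a pure redundancy-minimizing objective (e.g., maximizing pairwise dissimilarity among the picks) would instead favor item 3, which is exactly the failure mode the paper's prioritization technique is designed to avoid.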
AWARE: Platform for Autonomous self-deploying and operation of Wireless sensor-actuator networks cooperating with unmanned AeRial vehiclEs
This paper presents the AWARE platform, which seeks to enable the cooperation of autonomous aerial vehicles with ground wireless sensor-actuator networks comprising both static and mobile nodes carried by vehicles or people. In particular, the paper presents the middleware, the wireless sensor network, node deployment by means of an autonomous helicopter, and the surveillance and tracking functionalities of the platform. Furthermore, it presents the first general experiments of the AWARE project, which took place in March 2007 with the assistance of the Seville fire brigades.