
    An Agent-Based Distributed Coordination Mechanism for Wireless Visual Sensor Nodes Using Dynamic Programming

    The efficient management of the limited energy resources of a wireless visual sensor network is central to its successful operation. Within this context, this article focuses on the adaptive sampling, forwarding, and routing actions of each node in order to maximise the information value of the data collected. These actions are inter-related in a multi-hop routing scenario because each node's energy consumption must be optimally allocated between sampling and transmitting its own data, receiving and forwarding the data of other nodes, and routing any data. Thus, we develop two optimal agent-based decentralised algorithms to solve this distributed constraint optimisation problem. The first assumes that the route by which data is forwarded to the base station is fixed, and then calculates the optimal sampling, transmitting, and forwarding actions that each node should perform. The second assumes flexible routing, and makes optimal decisions regarding both the combination of actions that each node should choose and the route by which the data should be forwarded to the base station. The two algorithms represent a trade-off between optimality, communication cost, and processing time. In an empirical evaluation on sensor networks whose underlying communication networks exhibit loops, we show that the algorithm with flexible routing delivers approximately twice the quantity of information to the base station compared to the algorithm using fixed routing (where an arbitrary choice of route is made). However, this gain comes at a considerable communication and computational cost, increasing both by a factor of 100. Thus, while the algorithm with flexible routing is suitable for networks with a small number of nodes, it scales poorly; as the size of the network increases, the algorithm with fixed routing is favoured.
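
    To make the fixed-route variant concrete, the sketch below illustrates the per-node budget split the abstract describes: a node divides its energy between sampling and transmitting its own readings and receiving and forwarding packets from downstream nodes, so as to maximise information value. The knapsack-style formulation, costs, and values are assumptions for illustration, not the paper's actual decentralised algorithm.

```python
# Illustrative sketch only: a knapsack-style view of the per-node energy split
# under fixed routing. All costs and information values are made-up examples.

def allocate_energy(budget, own_cost, own_value, fwd_cost, fwd_values):
    """Choose how many own samples to take and which downstream packets to
    forward so that total information value is maximised within the budget.

    budget      -- node's energy budget (integer units)
    own_cost    -- energy to sample and transmit one of the node's own readings
    own_value   -- information value of one own reading
    fwd_cost    -- energy to receive and retransmit one forwarded packet
    fwd_values  -- information values of packets offered by downstream nodes
    """
    # Candidate "items": repeated own samples plus each forwardable packet.
    items = [(own_cost, own_value)] * (budget // max(own_cost, 1))
    items += [(fwd_cost, v) for v in fwd_values]

    best = {0: 0.0}                        # energy spent -> best achievable value
    for cost, value in items:              # 0/1 knapsack over the item list
        for spent in sorted(best, reverse=True):
            if spent + cost <= budget:
                cand = best[spent] + value
                if cand > best.get(spent + cost, -1.0):
                    best[spent + cost] = cand
    return max(best.values())


if __name__ == "__main__":
    # Example: 10 energy units; own readings cost 2 (value 1.0 each);
    # forwarding costs 3 per packet with the offered values listed below.
    print(allocate_energy(10, 2, 1.0, 3, [2.5, 1.8, 0.4]))
```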

    Sensor node localisation using a stereo camera rig

    In this paper, we use stereo vision processing techniques to detect and localise sensors used for monitoring simulated environmental events within an experimental sensor network testbed. Our sensor nodes communicate with the camera through patterns emitted by light emitting diodes (LEDs). Ultimately, we envisage the use of very low-cost, low-power, compact microcontroller-based sensing nodes that employ LED communication rather than power-hungry RF to transmit data, which is then gathered via existing CCTV infrastructure. To facilitate our research, we have constructed a controlled environment where nodes and cameras can be deployed and potentially hazardous chemical or physical plumes can be introduced to simulate environmental pollution events in a controlled manner. In this paper, we show how 3D spatial localisation of sensors becomes straightforward when a stereo camera rig is used rather than a more conventional 2D CCTV camera.
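
    The sketch below shows why a calibrated stereo rig makes the 3D localisation step straightforward: once the LED blob has been found in a rectified left/right image pair, depth follows directly from the disparity. The focal length, baseline, and principal point used here are illustrative values, not the testbed's actual calibration.

```python
# Illustrative stereo triangulation of a detected LED blob (rectified rig
# assumed, so matching points share the same image row). Parameters are
# placeholder values, not the calibration of the system described above.

def triangulate(xl, yl, xr, focal_px=800.0, baseline_m=0.12,
                cx=320.0, cy=240.0):
    """Return the (X, Y, Z) position in metres of a point seen at column xl,
    row yl in the left image and column xr in the right image."""
    disparity = xl - xr                    # pixels; larger disparity = closer
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    z = focal_px * baseline_m / disparity  # depth along the optical axis
    x = (xl - cx) * z / focal_px           # lateral offset
    y = (yl - cy) * z / focal_px           # vertical offset
    return x, y, z


if __name__ == "__main__":
    # LED blob centred at (410, 250) in the left image and (370, 250) in the right.
    print(triangulate(410.0, 250.0, 370.0))   # roughly 2.4 m away
```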

    AWARE: Platform for Autonomous self-deploying and operation of Wireless sensor-actuator networks cooperating with unmanned AeRial vehiclEs

    This paper presents the AWARE platform, which seeks to enable the cooperation of autonomous aerial vehicles with ground wireless sensor-actuator networks comprising both static nodes and mobile nodes carried by vehicles or people. In particular, the paper presents the middleware, the wireless sensor network, the node deployment by means of an autonomous helicopter, and the surveillance and tracking functionalities of the platform. Furthermore, the paper presents the first general experiments of the AWARE project, which took place in March 2007 with the assistance of the Seville fire brigades.

    Selecting source image sensor nodes based on 2-hop information to improve image transmissions to mobile robot sinks in search & rescue operations

    We consider Robot-assisted Search & Rescue operations enhanced with fixed image sensor nodes capable of capturing and sending visual information to a robot sink. In order to increase the performance of image transfer from image sensor nodes to the robot sink, we propose a cover set selection based on 2-hop neighborhood information to determine the most relevant image sensor nodes to activate. Then, to be consistent with this approach, we propose T-GPSR, a multi-path extension of Greedy Perimeter Stateless Routing in which routing decisions are also based on 2-hop neighborhood information. Simulation results show that our proposal reduces packet losses, enabling faster packet delivery and higher visual quality of the received images at the robot sink.
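
    The greedy sketch below illustrates the cover-set-selection part of the idea (it does not touch the T-GPSR routing extension): from the candidate sensor nodes known through 2-hop neighbourhood information, keep only those whose fields of view are needed to cover the region of interest. The grid-cell coverage model and the hop-count tie-break are assumptions for illustration, not the selection rule used in the paper.

```python
# Illustrative greedy cover-set selection over a grid-cell coverage model.
# Candidates would come from a node's 2-hop neighbourhood table; here they are
# hard-coded examples.

def select_cover_set(region_cells, candidates):
    """Greedily pick sensor nodes until the region is covered.

    region_cells -- set of grid cells that must be observed
    candidates   -- dict: node id -> (set of cells its camera covers,
                                      hop count towards the robot sink)
    Returns the selected node ids (cells may remain uncovered if the
    candidates cannot cover everything).
    """
    uncovered = set(region_cells)
    selected = []
    remaining = dict(candidates)

    while uncovered and remaining:
        # Pick the node covering the most uncovered cells; break ties by
        # preferring nodes closer (in hops) to the sink.
        node = max(remaining,
                   key=lambda n: (len(remaining[n][0] & uncovered),
                                  -remaining[n][1]))
        gain = remaining[node][0] & uncovered
        if not gain:
            break                          # no candidate adds coverage any more
        selected.append(node)
        uncovered -= gain
        del remaining[node]
    return selected


if __name__ == "__main__":
    region = {(x, y) for x in range(3) for y in range(3)}
    cams = {
        "s1": ({(0, 0), (0, 1), (1, 0), (1, 1)}, 1),
        "s2": ({(1, 1), (1, 2), (2, 1), (2, 2)}, 2),
        "s3": ({(0, 2), (2, 0)}, 1),
    }
    print(select_cover_set(region, cams))   # ['s1', 's2', 's3']
```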

    Autonomous real-time surveillance system with distributed IP cameras

    An autonomous Internet Protocol (IP) camera-based object tracking and behaviour identification system, capable of running in real time on an embedded system with limited memory and processing power, is presented in this paper. The main contribution of this work is the integration of processor-intensive image processing algorithms on an embedded platform capable of running in real time for monitoring the behaviour of pedestrians. The Algorithm Based Object Recognition and Tracking (ABORAT) system architecture presented here was developed on an Intel PXA270-based development board clocked at 520 MHz. The platform was connected to a commercial stationary IP-based camera in a remote monitoring station for intelligent image processing. The system is capable of detecting moving objects and their shadows in a complex environment with varying lighting intensity and moving foliage. Objects moving close to each other are also detected, and their trajectories are extracted and fed into an unsupervised neural network for autonomous classification. The novel intelligent video system presented is also capable of performing simple analytic functions such as tracking and generating alerts when objects enter or leave regions or cross tripwires superimposed on the live video by the operator.
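
    The loop below is a compact illustration of the kind of pipeline the abstract describes: segment moving objects, discard shadows, and extract centroid trajectories for later classification. It leans on OpenCV's MOG2 background subtractor (OpenCV 4.x API) rather than the ABORAT algorithms themselves, so treat it as a sketch of the idea, not a reimplementation of the embedded system.

```python
# Illustrative detection loop: MOG2 background subtraction with shadow
# suppression, then per-frame centroids of large foreground blobs.

import cv2

def track_centroids(video_path, min_area=400):
    """Yield, for each frame, the centroids of moving objects (shadows removed)."""
    cap = cv2.VideoCapture(video_path)
    # detectShadows=True marks shadow pixels with value 127 in the mask.
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                    varThreshold=16,
                                                    detectShadows=True)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Keep only confident foreground (255); drop shadows (127) and noise.
        _, fg = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)
        contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        centroids = []
        for c in contours:
            if cv2.contourArea(c) < min_area:
                continue
            m = cv2.moments(c)
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        yield centroids
    cap.release()
```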

    Engineering ambient visual sensors

    Visual sensors are an indispensable prerequisite for those ambient intelligence (AmI) environments that require a surveillance component. One practical issue concerns maximizing the operational longevity of such sensors, as the operational lifetime of an AmI environment itself depends on that of its constituent components. In this paper, the intelligent agent paradigm is considered as a basis for managing a camera collective such that the conflicting demands of power usage optimization and system performance are reconciled.
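
    As a toy illustration of the power-versus-performance trade-off such an agent has to reconcile, the sketch below lets each camera agent pick a frame rate from recent scene activity and its remaining energy. The thresholds and linear scaling are assumptions for illustration only, not the agent design considered in the paper.

```python
# Toy per-camera agent policy: trade sensing performance against energy by
# lowering the frame rate when activity is low or the battery is nearly flat.

class CameraAgent:
    def __init__(self, max_fps=25, min_fps=1):
        self.max_fps = max_fps
        self.min_fps = min_fps

    def choose_frame_rate(self, activity, battery):
        """activity -- fraction of recent frames containing motion (0..1)
        battery  -- fraction of energy remaining (0..1)
        Returns the frame rate to run at until the next review."""
        demand = activity                  # performance pressure
        restraint = 1.0 - battery          # energy pressure
        weight = max(0.0, min(1.0, demand - 0.5 * restraint))
        return round(self.min_fps + weight * (self.max_fps - self.min_fps))


if __name__ == "__main__":
    agent = CameraAgent()
    print(agent.choose_frame_rate(activity=0.8, battery=0.9))  # busy scene, fresh battery
    print(agent.choose_frame_rate(activity=0.1, battery=0.2))  # quiet scene, low battery
```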