    Sensor node localisation using a stereo camera rig

    In this paper, we use stereo vision processing techniques to detect and localise sensors used for monitoring simulated environmental events within an experimental sensor network testbed. Our sensor nodes communicate with the camera through patterns emitted by light-emitting diodes (LEDs). Ultimately, we envisage the use of very low-cost, low-power, compact microcontroller-based sensing nodes that employ LED communication rather than power-hungry RF to transmit data, which is gathered via existing CCTV infrastructure. To facilitate our research, we have constructed a controlled environment where nodes and cameras can be deployed and potentially hazardous chemical or physical plumes can be introduced to simulate environmental pollution events in a controlled manner. In this paper we show how 3D spatial localisation of sensors becomes a straightforward task when a stereo camera rig is used rather than the more usual 2D CCTV camera.
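The core advantage claimed above is that a calibrated stereo pair turns localisation into simple triangulation: once the same LED is detected in both images, its depth follows from the disparity. The sketch below illustrates the standard pinhole stereo model; the focal length, baseline and pixel coordinates are made-up example values, not the paper's calibration.

```python
# Illustrative sketch (not the paper's code): recovering a 3D point from a
# calibrated, rectified stereo pair. Assumed parameters: focal length f in
# pixels, baseline B in metres, and pixel coordinates (relative to the
# principal point) of the same LED in the left and right images.

def triangulate(f, B, xl, yl, xr):
    """Return (X, Y, Z) in the left camera frame from stereo disparity."""
    d = xl - xr                  # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    Z = f * B / d                # depth from the pinhole stereo model
    X = xl * Z / f               # back-project the left-image coordinates
    Y = yl * Z / f
    return X, Y, Z

# Example: f = 800 px, baseline 0.12 m, LED at x=420 (left) and x=380 (right)
X, Y, Z = triangulate(800, 0.12, 420, 50, 380)   # Z = 2.4 m
```

With a single 2D camera the depth term `Z` is unobservable, which is why the abstract describes monocular localisation as the harder problem.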

    Field test of multi-hop image sensing network prototype on a city-wide scale

    Open Access funded by Chongqing University of Posts and Telecommunications, under a Creative Commons licence (https://creativecommons.org/licenses/by-nc-nd/4.0/).

    Wireless multimedia sensor networks drastically stretch the horizon of traditional monitoring and surveillance systems; most existing research has utilised Zigbee or WiFi as the communication technology. Both technologies use ultra-high frequencies (mainly 2.4 GHz) and suffer from a relatively short transmission range (around 100 m line-of-sight). The objective of this paper is to assess the feasibility and potential of transmitting image information using RF modules at lower frequencies (e.g. 433 MHz) in order to achieve a larger-scale deployment such as a city scenario. The Arduino platform is used for its low cost and simplicity. The details of the hardware properties are elaborated in the article, followed by an investigation of optimum configurations for the system. Following an initial range-testing outcome of over 2000 m line-of-sight transmission distance, the prototype network was installed in a real-life city plot for further examination of its performance. A range of suitable applications is proposed along with suggestions for future research. Peer reviewed.
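A practical consequence of moving to low-rate 433 MHz modules is that a whole image will not fit in one radio frame, so it must be split into numbered chunks for multi-hop relay and reassembled at the sink. The sketch below illustrates that design issue only; the chunk size and header layout are assumptions for illustration, not the prototype's actual protocol.

```python
# Hedged sketch: packetising an image for a small-payload RF link and
# reassembling it at the sink. Header layout (image_id, seq, total) and
# the 48-byte payload size are illustrative assumptions.
import struct

CHUNK = 48  # payload bytes per frame (assumed; low-rate RF frames are small)

def packetise(image_id, data, chunk=CHUNK):
    """Split image bytes into frames: 6-byte header + payload."""
    total = (len(data) + chunk - 1) // chunk
    return [struct.pack(">HHH", image_id, seq, total)
            + data[seq * chunk:(seq + 1) * chunk]
            for seq in range(total)]

def reassemble(frames):
    """Reorder received frames by sequence number and concatenate payloads."""
    frames = sorted(frames, key=lambda f: struct.unpack(">HHH", f[:6])[1])
    return b"".join(f[6:] for f in frames)
```

Carrying an explicit (seq, total) pair in every frame lets intermediate hops forward frames out of order, which matters once the network spans a city-scale multi-hop topology.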

    Video analysis of events within chemical sensor networks

    This paper describes how we deploy video surveillance techniques to monitor the activities within a sensor network in order to detect environmental events. This approach combines video and sensor networks in a completely different way from the norm. Sensor networks consist of a collection of autonomous, self-powered nodes which sample their environment to detect anything from chemical pollutants to atypical sound patterns, which they report through an ad hoc network. To reduce power consumption, nodes have the capacity to communicate with neighbouring nodes only. Typically these communications are via radio waves, but in this paper the sensor nodes communicate with a base station through patterns emitted by LEDs and captured by a video camera. The LEDs are chemically coated to react to their environment and, on doing so, emit light which is then picked up by video analysis. There are several advantages to this approach, and to demonstrate them we have constructed a controlled test environment. In this paper we introduce and briefly describe this environment and the sensor nodes, but focus mainly on the video capture, image processing and data visualisation techniques used to indicate these events to a user monitoring the network.
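The frame-level task implied above is to find the lit LEDs in each camera frame. A common minimal approach, sketched here under assumed conditions (dark background, bright LEDs), is to threshold the frame and report the centroid of each connected bright region; this is an illustration of the technique, not the paper's implementation.

```python
# Minimal LED-blob detector: threshold a greyscale frame (2D list of
# intensities 0-255) and return centroids of 4-connected bright regions.
# A real system would work on a camera feed with OpenCV/NumPy.

def detect_leds(frame, threshold=200):
    """Return (row, col) centroids of 4-connected regions above threshold."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] >= threshold and not seen[y][x]:
                # flood-fill this bright region
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and frame[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                mean_y = sum(p[0] for p in pixels) / len(pixels)
                mean_x = sum(p[1] for p in pixels) / len(pixels)
                centroids.append((mean_y, mean_x))
    return centroids
```

Tracking these centroids over successive frames then yields the on/off pattern each node is emitting, which is the channel the chemically coated LEDs use to signal an event.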

    Autonomous monitoring of cliff nesting seabirds using computer vision

    In this paper we describe a proposed system for automatic visual monitoring of seabird populations. Image sequences of cliff-face nesting sites are captured using time-lapse digital photography. We are developing image processing software designed to automatically interpret these images, determine the number of birds present, and monitor activity. We focus primarily on the development of low-level image processing techniques to support this goal. We first describe our existing work in video processing and show how it is suitable for this problem domain. Image samples from a particular nest site are presented and used to describe the associated challenges. We conclude by showing how we intend to develop our work to construct a distributed system capable of simultaneously monitoring a number of sites in the same locality.

    WSN and RFID integration to support intelligent monitoring in smart buildings using hybrid intelligent decision support systems

    The real-time monitoring of context-aware environmental activities is becoming a standard in service delivery across a wide range of domains (child and elderly care and supervision, logistics, circulation, and others). The safety of people, goods and premises depends on prompt reaction to potential hazards identified at an early stage, so that appropriate control actions can be engaged. This requires capturing real-time data to process locally at the device level or to communicate to backend systems for real-time decision making. This research examines the integration of wireless sensor network and radio frequency identification technologies in smart homes to support advanced safety systems deployed upstream of safety and emergency response. These systems are based on hybrid intelligent decision support systems configured in a multi-distributed architecture, enabled by the wireless communication of detection and tracking data, to support intelligent real-time monitoring in smart buildings. This paper first introduces the concept of wireless sensor network and radio frequency identification technology integration, showing the various options for task distribution between radio frequency identification and hybrid intelligent decision support systems. This integration is then illustrated in a multi-distributed system architecture that identifies motion and controls access in a smart building, using a room capacity model for occupancy and evacuation, access rights and a navigation map automatically generated by the system. The solution shown in the case study is based on a virtual layout of the smart building, implemented using the capabilities of the building information model and the hybrid intelligent decision support system. Funded by the Saudi Higher Education Ministry and Brunel University (UK).
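The room capacity model mentioned above can be pictured as a small state machine driven by RFID entry/exit reads: each room tracks its current occupants and refuses (or flags) an entry that would exceed capacity. The class below is an assumed minimal design for illustration, not the paper's system.

```python
# Hedged sketch: a room-capacity occupancy model driven by RFID reads.
# Class and method names are illustrative assumptions.

class Room:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.occupants = set()      # RFID tag IDs currently inside

    def on_entry(self, tag_id):
        """Register a tag read at the entry reader; False if at capacity."""
        if len(self.occupants) >= self.capacity:
            return False            # access denied / alert raised upstream
        self.occupants.add(tag_id)
        return True

    def on_exit(self, tag_id):
        """Register a tag read at the exit reader."""
        self.occupants.discard(tag_id)

    def occupancy(self):
        return len(self.occupants)
```

In the multi-distributed architecture the paper describes, such per-room state could live on the local reader node, with only capacity violations and evacuation counts escalated to the backend decision support system.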

    Autonomous real-time surveillance system with distributed IP cameras

    An autonomous Internet Protocol (IP) camera based object tracking and behaviour identification system, capable of running in real time on an embedded system with limited memory and processing power, is presented in this paper. The main contribution of this work is the integration of processor-intensive image processing algorithms on an embedded platform capable of running in real time to monitor the behaviour of pedestrians. The Algorithm Based Object Recognition and Tracking (ABORAT) system architecture presented here was developed on an Intel PXA270-based development board clocked at 520 MHz. The platform was connected to a commercial stationary IP-based camera in a remote monitoring station for intelligent image processing. The system is capable of detecting moving objects and their shadows in a complex environment with varying lighting intensity and moving foliage. Objects moving close to each other are also detected so that their trajectories can be extracted and fed into an unsupervised neural network for autonomous classification. The novel intelligent video system presented is also capable of performing simple analytic functions such as tracking and generating alerts when objects enter or leave regions or cross tripwires superimposed on live video by the operator.
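One of the analytic primitives mentioned above, tripwire alerting, reduces to a 2D segment-intersection test: does the object's step from its previous to its current position cross the operator-drawn line? The sketch below shows the standard orientation-based test; the function names are illustrative, not the ABORAT implementation.

```python
# Hedged sketch: does an object's movement segment cross a tripwire segment?
# Standard cross-product orientation test for strict segment intersection.

def _orient(a, b, c):
    """Sign of cross product (b-a) x (c-a): >0 left turn, <0 right turn."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def crosses_tripwire(prev_pos, curr_pos, wire_a, wire_b):
    """True if the step prev_pos->curr_pos strictly crosses the tripwire."""
    d1 = _orient(wire_a, wire_b, prev_pos)
    d2 = _orient(wire_a, wire_b, curr_pos)
    d3 = _orient(prev_pos, curr_pos, wire_a)
    d4 = _orient(prev_pos, curr_pos, wire_b)
    # endpoints of each segment must lie on opposite sides of the other
    return d1 * d2 < 0 and d3 * d4 < 0
```

Running this check once per tracked object per frame is cheap enough for an embedded platform like the PXA270, since it needs only a handful of multiplications per tripwire.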