117,071 research outputs found

    Field test of multi-hop image sensing network prototype on a city-wide scale

    Open Access funded by Chongqing University of Posts and Telecommunications, under a Creative Commons license (https://creativecommons.org/licenses/by-nc-nd/4.0/).
    Wireless multimedia sensor networks drastically stretch the horizon of traditional monitoring and surveillance systems, and most existing research has used Zigbee or WiFi as the communication technology. Both technologies operate at ultra-high frequencies (mainly 2.4 GHz) and suffer from a relatively short transmission range (about 100 m line-of-sight). The objective of this paper is to assess the feasibility and potential of transmitting image information using RF modules at lower frequencies (e.g. 433 MHz) in order to achieve larger-scale deployments such as a city scenario. The Arduino platform is used for its low cost and simplicity. The hardware properties are detailed in the article, followed by an investigation of the optimum configuration for the system. After initial range testing achieved a line-of-sight transmission distance of over 2000 m, the prototype network was installed in a real-life city plot for further examination of its performance. A range of suitable applications is proposed, along with suggestions for future research. Peer reviewed.
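
    The abstract does not describe the transmission protocol, but a minimal sketch of the kind of packetisation such a system needs makes the constraint concrete: a low-rate 433 MHz module carries only a few dozen bytes per frame, so an image must be split into numbered, checksummed chunks and reassembled at the sink. The payload size, header layout, and function names below are illustrative assumptions, not the paper's design.

```python
# Hypothetical sketch: split an image into small numbered packets sized for a
# low-rate 433 MHz RF module, with a CRC so corrupted frames can be dropped.
# Packet layout, payload size, and field widths are assumptions for illustration.
import struct
import zlib

PAYLOAD_SIZE = 48  # assumed usable payload per RF frame (bytes)

def packetize(image_bytes: bytes, image_id: int):
    """Yield (header + chunk) frames ready to hand to an RF driver."""
    chunks = [image_bytes[i:i + PAYLOAD_SIZE]
              for i in range(0, len(image_bytes), PAYLOAD_SIZE)]
    total = len(chunks)
    for seq, chunk in enumerate(chunks):
        crc = zlib.crc32(chunk) & 0xFFFF
        # Header: image id, sequence number, total packets, 16-bit CRC.
        header = struct.pack(">BHHH", image_id & 0xFF, seq, total, crc)
        yield header + chunk

def reassemble(frames):
    """Rebuild the image at the sink, ignoring frames whose CRC fails."""
    parts, total = {}, None
    for frame in frames:
        _img, seq, total, crc = struct.unpack(">BHHH", frame[:7])
        chunk = frame[7:]
        if zlib.crc32(chunk) & 0xFFFF == crc:
            parts[seq] = chunk
    if total is None or len(parts) < total:
        return None  # incomplete image; a real node would request retransmission
    return b"".join(parts[i] for i in range(total))

if __name__ == "__main__":
    data = bytes(range(256)) * 16          # stand-in for a small JPEG
    frames = list(packetize(data, image_id=1))
    assert reassemble(frames) == data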

    Autonomous real-time surveillance system with distributed IP cameras

    An autonomous Internet Protocol (IP) camera-based object tracking and behaviour identification system, capable of running in real time on an embedded system with limited memory and processing power, is presented in this paper. The main contribution of this work is the integration of processor-intensive image processing algorithms on an embedded platform capable of running in real time to monitor the behaviour of pedestrians. The Algorithm Based Object Recognition and Tracking (ABORAT) system architecture presented here was developed on an Intel PXA270-based development board clocked at 520 MHz. The platform was connected to a commercial stationary IP-based camera in a remote monitoring station for intelligent image processing. The system is capable of detecting moving objects and their shadows in a complex environment with varying lighting intensity and moving foliage. Objects moving close to each other are also detected so that their trajectories can be extracted and fed into an unsupervised neural network for autonomous classification. The novel intelligent video system is also capable of performing simple analytic functions, such as tracking and generating alerts when objects enter or leave regions or cross tripwires superimposed on the live video by the operator.
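
    As a rough illustration of the tripwire-alert idea only (not the ABORAT pipeline itself, whose shadow handling and unsupervised neural-network classification are more elaborate), the sketch below uses OpenCV background subtraction to find moving blobs and flags any object whose centroid crosses an assumed horizontal tripwire. The stream URL, blob-area threshold, and tripwire position are placeholders.

```python
# Minimal sketch, assuming OpenCV 4: background subtraction plus a tripwire
# alert. Not the paper's ABORAT pipeline; parameters below are placeholders.
import cv2

TRIPWIRE_Y = 240  # assumed horizontal tripwire at this pixel row

def run(stream_url: str = "rtsp://camera.example/stream"):
    cap = cv2.VideoCapture(stream_url)
    backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    previous_centroids = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = backsub.apply(frame)
        # Drop shadow pixels (MOG2 marks them as 127) and keep strong foreground.
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        centroids = []
        for c in contours:
            if cv2.contourArea(c) < 500:   # ignore small blobs (assumed threshold)
                continue
            x, y, w, h = cv2.boundingRect(c)
            centroids.append((x + w // 2, y + h // 2))
        # Alert when any object appears on the other side of the tripwire
        # relative to the previous frame.
        for (_, py) in previous_centroids:
            for (_, cy) in centroids:
                if (py - TRIPWIRE_Y) * (cy - TRIPWIRE_Y) < 0:
                    print("ALERT: object crossed tripwire")
        previous_centroids = centroids
    cap.release()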

    Pando: Personal Volunteer Computing in Browsers

    The large penetration and continued growth in ownership of personal electronic devices represent a freely available and largely untapped source of computing power. To leverage it, we present Pando, a new volunteer computing tool based on a declarative concurrent programming model and implemented using JavaScript, WebRTC, and WebSockets. This tool enables a dynamically varying number of failure-prone personal devices, contributed by volunteers, to parallelize the application of a function on a stream of values by using the devices' browsers. We show that Pando can provide throughput improvements compared to a single personal device on a variety of compute-bound applications, including animation rendering and image processing. We also show the flexibility of our approach by deploying Pando on personal devices connected over a local network, on Grid5000, a France-wide computing grid in a virtual private network, and on seven PlanetLab nodes distributed in a wide-area network over Europe.
    Comment: 14 pages, 12 figures, 2 tables
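
    Pando itself is a JavaScript tool, so the following is only a Python sketch of the declarative pattern the abstract describes: applying one function to a stream of values across a pool of workers that may fail at any time, with failed items re-queued. The worker count, retry limit, and the render() stand-in are assumptions for illustration.

```python
# Sketch of the "map a function over a stream across unreliable workers" idea.
# This is not Pando's API; names and limits below are illustrative.
from concurrent.futures import ThreadPoolExecutor, as_completed

def render(value):
    # Stand-in for a compute-bound task (e.g. rendering one animation frame).
    return value * value

def volunteer_map(func, values, max_workers=4, max_retries=3):
    """Yield (input, result) pairs, re-submitting items whose worker failed."""
    pending = [(v, 0) for v in values]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        while pending:
            futures = {pool.submit(func, v): (v, tries) for v, tries in pending}
            pending = []
            for fut in as_completed(futures):
                v, tries = futures[fut]
                try:
                    yield v, fut.result()
                except Exception:
                    if tries + 1 < max_retries:
                        pending.append((v, tries + 1))  # worker dropped out; retry

if __name__ == "__main__":
    for value, result in volunteer_map(render, range(10)):
        print(value, result)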

    Distant Vehicle Detection Using Radar and Vision

    For autonomous vehicles to operate successfully, they need to be aware of other vehicles with sufficient time to make safe, stable plans. Given the possible closing speeds between two vehicles, this necessitates the ability to accurately detect distant vehicles. Many current image-based object detectors using convolutional neural networks exhibit excellent performance on existing datasets such as KITTI. However, the performance of these networks falls when detecting small (distant) objects. We demonstrate that incorporating radar data can boost performance in these difficult situations. We also introduce an efficient automated method for generating training data using cameras of different focal lengths.
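
    One plausible way radar can help with small, distant objects (sketched here under assumed calibration, not the paper's actual fusion method) is to project each radar return into the image with a pinhole model and turn it into a region proposal whose size shrinks with range. The intrinsic matrix, extrinsics, and proposal sizing below are made up for illustration.

```python
# Illustrative sketch: project radar returns into the image plane and emit
# range-scaled region proposals for a downstream detector. Calibration values
# and proposal sizing are assumptions, not the paper's method.
import numpy as np

K = np.array([[1000.0, 0.0, 640.0],      # assumed camera intrinsics
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
T_CAM_RADAR = np.eye(4)                   # assumed radar-to-camera extrinsics

def radar_to_proposals(radar_points_xyz, vehicle_size_m=2.0):
    """Map radar points (N x 3, metres, radar frame) to image-space boxes."""
    n = radar_points_xyz.shape[0]
    homog = np.hstack([radar_points_xyz, np.ones((n, 1))])
    cam = (T_CAM_RADAR @ homog.T).T[:, :3]      # points in the camera frame
    cam = cam[cam[:, 2] > 0]                    # keep points in front of the camera
    pixels = (K @ cam.T).T
    pixels = pixels[:, :2] / pixels[:, 2:3]     # perspective divide
    boxes = []
    for (u, v), depth in zip(pixels, cam[:, 2]):
        half = 0.5 * vehicle_size_m * K[0, 0] / depth  # pixels per metre at range
        boxes.append((u - half, v - half, u + half, v + half))
    return boxes

if __name__ == "__main__":
    points = np.array([[2.0, 0.0, 60.0],     # a return ~60 m ahead
                       [-3.0, 0.5, 120.0]])  # a return ~120 m ahead
    for box in radar_to_proposals(points):
        print([round(c, 1) for c in box])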