Exploration of Local and Central Processing for a Wireless Camera Based Sensor Node

Abstract

Wireless vision sensor networks are an emerging field that combines image sensors, on-board computation, and communication links. Compared to traditional wireless sensor networks, which operate on one-dimensional data, wireless vision sensor networks operate on two-dimensional data, which requires both higher processing power and higher communication bandwidth. Research in this field has generally been based on one of two assumptions: either all data is sent to the central base station without local processing, or all processing is performed locally at the sensor node and only the final results are transmitted. In this paper we focus on determining an optimal point for partitioning intelligence between the sensor node and the central base station, and on exploring compression methods. The lifetime of the visual sensor node is predicted by evaluating the energy consumption for different levels of intelligence partitioning at the sensor node. Our results show that sending compressed images after segmentation results in a longer lifetime for the sensor node.
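To make the lifetime comparison concrete, the following is a minimal sketch (not taken from the paper) of how node lifetime can be estimated from per-frame energy costs. All numbers are hypothetical placeholders: the battery capacity, the per-operation energies (E_CAPTURE, E_SEGMENT, E_COMPRESS, E_TX_PER_BYTE), and the frame sizes are illustrative assumptions, not measured values from this work.

# Minimal sketch: node lifetime under two intelligence-partitioning strategies,
# using hypothetical energy parameters (illustrative only).

BATTERY_J = 2 * 1.5 * 2600 * 3.6        # two assumed AA cells, ~2600 mAh at 1.5 V, converted to joules

# Hypothetical per-frame energy costs in joules.
E_CAPTURE     = 0.05                     # image capture
E_SEGMENT     = 0.10                     # local segmentation
E_COMPRESS    = 0.08                     # compressing the segmented image
E_TX_PER_BYTE = 2e-6                     # radio transmission cost per byte

RAW_BYTES        = 640 * 480             # uncompressed grayscale frame
COMPRESSED_BYTES = 15_000                # assumed size after segmentation + compression

def frames_until_depletion(energy_per_frame_j: float) -> int:
    """Number of frames the node can process before the battery is exhausted."""
    return int(BATTERY_J // energy_per_frame_j)

# Strategy 1: no local processing, transmit the raw frame to the base station.
e_raw = E_CAPTURE + RAW_BYTES * E_TX_PER_BYTE

# Strategy 2: segment and compress locally, transmit the much smaller result.
e_local = E_CAPTURE + E_SEGMENT + E_COMPRESS + COMPRESSED_BYTES * E_TX_PER_BYTE

print(f"raw transmission : {frames_until_depletion(e_raw):,} frames")
print(f"segment+compress : {frames_until_depletion(e_local):,} frames")

Under these assumed parameters the locally processed strategy sustains roughly 2.5 times as many frames, which is the kind of trade-off the paper evaluates when choosing a partitioning point.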