2 research outputs found

    Distributed optimisation framework for in-network data processing

    In an information network consisting of different types of communication devices equipped with various types of sensors, it is inevitable that a huge amount of data will be generated. Given practical network constraints such as bandwidth and energy limitations, storing, processing and transmitting this very large volume of data is very challenging, if not impossible. In-Network Processing (INP) has opened a new door to possible solutions for optimising the utilisation of network resources. INP methods primarily aim to aggregate (e.g., compress, fuse or average) data from different sources so as to reduce the data volume for further transfer, thereby reducing energy consumption and increasing the network lifetime. However, processing data often degrades its quality, introducing, for example, irrelevancy or incompleteness. Therefore, besides characterising the Quality of Information (QoI) in these systems, which is important, it is also crucial to consider the effect of further data processing on the measured QoI associated with each specific piece of information. Typically, the greater the degree of data aggregation, the higher the computation energy cost incurred; however, as the volume of data is reduced after aggregation, less energy is needed for subsequent data transmission and reception. Furthermore, aggregation can degrade QoI. There is therefore a trade-off between the QoI requirement and the energy consumed by computation and communication. We define the optimal data reduction rate parameter as the degree to which data can be efficiently reduced while still guaranteeing the QoI required by the end user. Using wireless sensor networks for illustration, we concentrate on designing a distributed framework for controlling the INP process at each node while satisfying the end user's QoI requirements. We formulate the INP problem as a non-linear optimisation problem whose objective is to minimise the total energy consumption across the network subject to a given QoI requirement for the end user. The problem is intrinsically non-convex and, in general, hard to solve. Given its non-convexity and hardness, we propose a novel approach that reduces its computational complexity. Specifically, we prove that under uniform parameter settings the complexity of the problem can be reduced significantly, making it feasible for nodes with a limited energy supply to carry out the computation. Moreover, we derive an optimal solution by transforming the original problem into an equivalent one. Using duality theory, we prove that under a set of reasonable cost and topology assumptions the optimal solution can be obtained efficiently despite the non-convexity of the problem. Furthermore, we propose an effective and efficient distributed iterative algorithm that converges to the optimal solution. We evaluate the proposed complexity-reduction framework under different parameter settings and show that a problem with N variables can be reduced to one with log N variables, a significant reduction in complexity. The validity and performance of the proposed distributed optimisation framework have been evaluated through extensive simulation, which shows that the proposed distributed algorithm converges to the optimal solution very quickly.
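    The abstract does not reproduce the formulation itself. As a rough illustration of the kind of problem described, one might write it as follows, where the per-node reduction rates r_i, the computation and transmission energy functions E^c_i and E^t_i, and the QoI threshold Q_min are hypothetical notation introduced here, not the authors' own:

        % Hypothetical sketch of the described energy-minimisation problem;
        % all symbols are illustrative, not the authors' notation (needs amsmath).
        \begin{align*}
          \min_{r_1,\dots,r_N} \quad & \sum_{i=1}^{N} \big( E^{\mathrm{c}}_i(r_i) + E^{\mathrm{t}}_i(r_i) \big) \\
          \text{subject to} \quad    & Q(r_1,\dots,r_N) \ge Q_{\min}, \\
                                     & 0 \le r_i \le 1, \quad i = 1,\dots,N.
        \end{align*}

    In such a formulation E^c_i would typically grow and E^t_i shrink with the reduction rate r_i, which is the trade-off the abstract refers to, while the non-convexity mentioned above would come from how the rates are coupled inside the QoI constraint Q.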
    The behaviour of the proposed framework has been examined under different parameter settings and checked against the optimal solution obtained via an exhaustive search algorithm. The results show quick and efficient convergence of the proposed algorithm.
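    The paper's own algorithm is not given in the abstract. The following toy Python sketch, with entirely assumed energy and QoI models, only illustrates the general dual-decomposition pattern such a distributed, price-based iteration could follow, each node updating its local reduction rate against a shared dual price on the QoI constraint:

        import numpy as np

        # Toy sketch of a distributed, price-based iteration; the quadratic/linear
        # energy model and the averaged QoI model are assumptions for illustration,
        # not the models used in the thesis.
        N = 8                       # number of sensor nodes (assumed)
        alpha = np.full(N, 2.0)     # computation-energy coefficients (assumed)
        beta = np.full(N, 5.0)      # transmission-energy coefficients (assumed)
        q_min = 0.6                 # end-user QoI requirement (assumed)

        def local_update(i, lam):
            """Node i minimises its own Lagrangian term for the current price lam:
            alpha*r^2 (computation) + beta*(1 - r) (transmission) + lam*r/N (QoI price)."""
            r = (beta[i] - lam / N) / (2.0 * alpha[i])    # stationary point of the local term
            return float(np.clip(r, 0.0, 1.0))

        lam, step = 0.0, 2.0
        for _ in range(200):
            r = np.array([local_update(i, lam) for i in range(N)])
            qoi = 1.0 - r.mean()                          # toy QoI: degrades with average reduction
            lam = max(0.0, lam + step * (q_min - qoi))    # dual (price) ascent on the QoI constraint

        print("reduction rates:", np.round(r, 3), "achieved QoI:", round(qoi, 3))

    In this toy setting the price settles where the QoI constraint holds with equality; the actual algorithm, its convergence proof and the N-to-log N complexity reduction are those developed in the work itself.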

    Trading transport timeliness and reliability for efficiency in wireless sensor networks

    A key task in wireless sensor networks is to deliver information from sensor nodes to the sink. Many applications require this delivery to be reliable and timely. However, increasing reliability or timeliness comes at the cost of higher energy consumption, as in both cases additional messages have to be sent: retransmissions to increase reliability, and information delivery via a second, faster path to ensure timeliness. Existing transport protocols either over- or under-provide reliability and/or timeliness and are not optimised for efficiency. This work aims at tuning reliability and timeliness jointly for maximised efficiency. Our approach takes the reliability and timeliness requirements as input and achieves a message efficiency that optimally meets the user's requirements. Information transport proceeds in two steps, in a fully distributed way: (i) finding the optimal number of retransmissions on a per-hop basis with delay compensation, and (ii) path split and/or replication if reliability or timeliness requirements are violated. We validate the viability of the approach through extensive simulations for a wide range of requirements and network conditions. © 2013 IEEE
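    As a rough, hypothetical illustration of the two steps described above, the following Python sketch, assuming uniform per-hop loss probability and per-attempt delay (parameters p_hop, hop_delay_ms and the independence of hops are assumptions, not the paper's model), picks the smallest per-hop retransmission count that meets an end-to-end reliability target and then flags when the delay budget would call for a path split or replication:

        import math

        def plan_transport(p_hop, hops, reliability, deadline_ms, hop_delay_ms):
            # (i) per-hop success must reach reliability**(1/hops) so that the
            # product over independent hops meets the end-to-end target
            per_hop_target = reliability ** (1.0 / hops)
            # smallest k with 1 - (1 - p_hop)**k >= per_hop_target
            tx = math.ceil(math.log(1.0 - per_hop_target) / math.log(1.0 - p_hop))
            # worst-case delay if every hop uses all tx attempts
            worst_delay = tx * hops * hop_delay_ms
            # (ii) if timeliness is violated, the abstract's remedy is to split the
            # path and/or replicate the message over additional paths
            needs_replication = worst_delay > deadline_ms
            return tx, worst_delay, needs_replication

        # e.g. 6 hops, 80% per-hop delivery, 95% end-to-end reliability, 300 ms deadline
        print(plan_transport(p_hop=0.8, hops=6, reliability=0.95, deadline_ms=300, hop_delay_ms=12))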