4 research outputs found

    Flow updating: fault-tolerant aggregation for dynamic networks

    Document submitted for peer review. To be published in the Journal of Parallel and Distributed Computing. ISSN 0743-7315.

    Data aggregation is a fundamental building block of modern distributed systems. Averaging-based approaches, commonly designated gossip-based, are an important class of aggregation algorithms, as they allow all nodes to produce a result, converge to any required accuracy, and work independently of the network topology. However, existing approaches exhibit many dependability issues when used in faulty and dynamic environments. This paper describes and evaluates a fault-tolerant distributed aggregation technique, Flow Updating, which overcomes the problems of previous averaging approaches and is able to operate on faulty, dynamic networks. Experimental results show that this novel approach outperforms previous averaging algorithms: it self-adapts to churn and input value changes without requiring any periodic restart, tolerates node crashes and high levels of message loss, and works in asynchronous networks. Realistic concerns, such as the use of unreliable failure detectors and asynchrony, have been taken into account in evaluating Flow Updating, targeting its application to realistic environments.

    This work was partially funded by FCT PhD grant SFRH/BD/33232/2007 and by project Norte-01-0124-FEDER-000058, co-financed by the North Portugal Regional Operational Programme (ON.2 O Novo Norte), under the National Strategic Reference Framework (NSRF), through the European Regional Development Fund (ERDF).
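    The averaging-by-flows idea described above can be illustrated with a minimal Python sketch. It is a simplified approximation for intuition only, not the paper's exact Flow Updating protocol; the class, message format, and initialisation below are illustrative assumptions.

```python
# Simplified, single-aggregate sketch in the spirit of Flow Updating (illustrative
# only). Each node keeps a flow F[i][j] toward every neighbour j; its estimate of
# the global average is its own input minus the flows it has "sent away".

class Node:
    def __init__(self, node_id, input_value, neighbours):
        self.id = node_id
        self.value = input_value                                # local input x_i
        self.flows = {j: 0.0 for j in neighbours}               # F[i][j] (ideally F[i][j] = -F[j][i])
        self.estimates = {j: input_value for j in neighbours}   # last estimate heard from each j

    def estimate(self):
        # e_i = x_i - sum_j F[i][j]; with symmetric flows (F[i][j] = -F[j][i]),
        # summing e_i over all nodes always yields sum_i x_i (mass conservation).
        return self.value - sum(self.flows.values())

    def round(self):
        # Average own estimate with the latest estimates reported by neighbours,
        # then adjust each flow so that every neighbour would also report that average.
        avg = (self.estimate() + sum(self.estimates.values())) / (len(self.flows) + 1)
        msgs = {}
        for j in self.flows:
            self.flows[j] += avg - self.estimates[j]
            msgs[j] = (self.flows[j], avg)                      # send updated flow and estimate to j
        return msgs

    def receive(self, sender, flow_from_sender, sender_estimate):
        # Overwrite (idempotent) rather than accumulate: a lost or duplicated
        # message never breaks the mass-conservation invariant.
        self.flows[sender] = -flow_from_sender
        self.estimates[sender] = sender_estimate
```

    In a full protocol each node would periodically broadcast its round() messages to its neighbours; because receive() overwrites state instead of accumulating it, retransmissions and message loss leave the conserved total intact, which is the property that gives this family of algorithms its fault tolerance.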

    Distributed optimisation framework for in-network data processing

    No full text
    In an information network consisting of different types of communication devices equipped with various types of sensors, it is inevitable that a huge amount of data will be generated. Considering practical network constraints such as bandwidth and energy limitations, storing, processing and transmitting this very large volume of data is very challenging, if not impossible. In-Network Processing (INP) has opened a new door to possible solutions for optimising the utilisation of network resources. INP methods primarily aim to aggregate (e.g., compress, fuse and average) data from different sources with the objective of reducing the data volume for further transfer, thus reducing energy consumption and increasing the network lifetime. However, processing data often results in an imprecise outcome, such as irrelevancy or incompleteness. Therefore, besides characterising the Quality of Information (QoI) in these systems, which is important, it is also crucial to consider the effect of further data processing on the measured QoI associated with each specific piece of information. Typically, the greater the degree of data aggregation, the higher the computation energy cost that is incurred. However, as the volume of data is reduced after aggregation, less energy is needed for subsequent data transmission and reception. Furthermore, aggregation of data can cause deterioration of QoI. Therefore, there is a trade-off between the QoI requirement and the energy consumed by computation and communication. We define the optimal data reduction rate parameter as the degree to which data can be efficiently reduced while guaranteeing the required QoI for the end user.

    Using wireless sensor networks for illustration, we concentrate on designing a distributed framework to facilitate control of the INP process at each node while satisfying the end user's QoI requirements. We formulate the INP problem as a non-linear optimisation problem with the objective of minimising the total energy consumption across the network subject to a given QoI requirement for the end user. The proposed problem is intrinsically non-convex and, in general, hard to solve. Given the non-convexity and hardness of the problem, we propose a novel approach that reduces its computational complexity. Specifically, we prove that under the assumption of uniform parameter settings, the complexity of the proposed problem can be reduced significantly, making it feasible for nodes with a limited energy supply to carry out the computation. Moreover, we propose an optimal solution by transforming the original problem into an equivalent one. Using duality theory, we prove that under a set of reasonable cost and topology assumptions, the optimal solution can be efficiently obtained despite the non-convexity of the problem. Furthermore, we propose an effective and efficient distributed, iterative algorithm that converges to the optimal solution.

    We evaluate our proposed complexity reduction framework under different parameter settings, and show that a problem with N variables can be reduced to one with log N variables, a significant reduction in the complexity of the problem. The validity and performance of the proposed distributed optimisation framework have been evaluated through extensive simulation. We show that the proposed distributed algorithm converges to the optimal solution very quickly. The behaviour of the proposed framework has been examined under different parameter settings and checked against the optimal solution obtained via an exhaustive search algorithm. The results show quick and efficient convergence of the proposed algorithm.
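    The abstract does not spell out its cost or QoI models, but the structure it describes, a priced trade-off between computation and transmission energy solved by a distributed, duality-based iteration, can be sketched as follows. Everything in this snippet (the per-node energy terms, the QoI measure, the parameter values, and the update rule) is an assumed stand-in used only to show the shape of such an algorithm, not the thesis' actual formulation.

```python
import numpy as np

# Assumed-form sketch of the energy/QoI trade-off: node i chooses a data-reduction
# rate r_i in (0, 1]; compression cost is modelled as alpha_i / r_i, transmission
# cost as beta_i * r_i, and QoI as the mean retained fraction. A single Lagrange
# multiplier prices the QoI constraint and is updated by dual (sub)gradient ascent.
# All coefficients, models, and names here are illustrative, not the thesis' own.

N = 8
alpha = np.linspace(0.5, 2.0, N)     # assumed computation-energy coefficients
beta = np.linspace(3.0, 6.0, N)      # assumed transmission-energy coefficients
Q_MIN = 0.6                          # required end-user QoI (assumed scalar target)

def local_rate(price, a, b, r_min=0.05):
    """Node-local minimiser of a/r + b*r - price*r over r in [r_min, 1]."""
    if b <= price:                   # price so high that keeping all data is best
        return 1.0
    r = np.sqrt(a / (b - price))     # stationary point of the convex local Lagrangian
    return float(np.clip(r, r_min, 1.0))

lam, step = 0.0, 0.5
for _ in range(500):                                     # dual ascent on the multiplier
    rates = np.array([local_rate(lam / N, a, b) for a, b in zip(alpha, beta)])
    qoi = rates.mean()                                   # assumed QoI measure
    lam = max(0.0, lam + step * (Q_MIN - qoi))           # raise the price if QoI is too low

energy = np.sum(alpha / rates + beta * rates)
print(f"rates={np.round(rates, 3)} QoI={qoi:.3f} energy={energy:.2f}")
```

    Here each node solves a one-variable local problem for its reduction rate given the current price, and the multiplier rises whenever the delivered QoI falls below the requirement; in a real deployment the price update itself would be computed in-network rather than at a single point.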

    Dynamic approaches to in-network aggregation

    Collaboration between small-scale wireless devices hinges on their ability to infer properties shared across multiple nearby nodes. Wireless-enabled mobile devices in particular create a highly dynamic environment that is not conducive to distributed reasoning about such global properties. This paper addresses a specific instance of this problem: distributed aggregation. We present extensions to existing unstructured aggregation protocols that enable estimation of count, sum, and average aggregates in highly dynamic environments. With the modified protocols, devices with only limited connectivity can maintain estimates of the aggregate despite unexpected peer departures and arrivals. Our analysis of these aggregate maintenance extensions demonstrates their effectiveness in unstructured environments despite high levels of node mobility.
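    The count/sum/average estimation this abstract refers to builds on unstructured gossip aggregation; a compact way to see how one protocol can deliver all three aggregates is the classic push-sum scheme sketched below. This is generic push-sum on a static, fully connected toy network, not the paper's mobility-hardened extensions; the function name and parameters are illustrative.

```python
import random

# Classic push-sum gossip sketch (not the paper's modified protocols): every node
# holds a (sum, weight) pair and repeatedly pushes half of its pair to a random
# peer. Each node's ratio sum/weight converges to
# (total initial sum) / (total initial weight), so the initial pairs select the
# aggregate: AVERAGE, SUM, or COUNT of the inputs.

def push_sum(values, mode="average", rounds=60, seed=0):
    rng = random.Random(seed)
    n = len(values)
    if mode == "average":                    # s_i = x_i, w_i = 1        ->  mean(x)
        s, w = list(values), [1.0] * n
    elif mode == "sum":                      # weight only at node 0     ->  sum(x)
        s, w = list(values), [1.0] + [0.0] * (n - 1)
    elif mode == "count":                    # s_i = 1, weight at node 0 ->  n
        s, w = [1.0] * n, [1.0] + [0.0] * (n - 1)
    else:
        raise ValueError(mode)

    for _ in range(rounds):
        for i in range(n):                   # node i keeps half and pushes half
            j = rng.randrange(n)             # random peer (complete graph assumed)
            s[i], w[i] = s[i] / 2, w[i] / 2
            s[j], w[j] = s[j] + s[i], w[j] + w[i]
    return [si / wi if wi > 1e-12 else float("nan") for si, wi in zip(s, w)]

inputs = [3.0, 7.0, 4.0, 6.0]
print(push_sum(inputs, mode="average"))      # every node's estimate is close to 5.0
print(push_sum(inputs, mode="sum"))          # close to 20.0
print(push_sum(inputs, mode="count"))        # close to 4.0
```

    Only the initial (sum, weight) pairs differ between the three aggregates; the paper's extensions address what this sketch ignores: peers that join, leave, or crash while the ratios are still converging.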