23,302 research outputs found

    Dependability in Aggregation by Averaging

    Aggregation is an important building block of modern distributed applications, allowing the determination of meaningful properties (e.g. network size, total storage capacity, average load, majorities, etc.) that are used to direct the execution of the system. However, the majority of existing aggregation algorithms exhibit relevant dependability issues when considering their use in real application environments. In this paper, we reveal some dependability issues of aggregation algorithms based on iterative averaging techniques, giving some directions to solve them. This class of algorithms is considered robust (when compared to common tree-based approaches), being independent of the routing topology used and providing an aggregation result at all nodes. However, their robustness is strongly challenged and their correctness often compromised when the assumptions of their working environment are changed to more realistic ones. The correctness of this class of algorithms relies on the maintenance of a fundamental invariant, commonly designated as "mass conservation". We argue that this main invariant is often broken in practical settings, and that additional mechanisms and modifications are required to maintain it, incurring some degradation of the algorithms' performance. In particular, we discuss the behavior of three representative algorithms (Push-Sum Protocol, Push-Pull Gossip protocol and Distributed Random Grouping) under asynchronous and faulty (with message loss and node crashes) environments. More specifically, we propose and evaluate two new versions of the Push-Pull Gossip protocol, which solve its message interleaving problem (evidenced even in a synchronous operation mode).
    Comment: 14 pages. Presented in Inforum 200
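
    The mass conservation issue is easiest to see in the Push-Sum Protocol itself. The sketch below is a minimal synchronous Push-Sum simulation (not the modified protocols proposed in the paper); the ring network, the loss model and all parameter names are illustrative assumptions. Each node keeps a value/weight pair whose totals stay constant as long as every message is delivered, and a nonzero loss probability shows how dropped messages break the invariant and bias the estimates.

```python
import random

def push_sum(values, neighbors, rounds=50, loss_prob=0.0, rng=None):
    """Minimal synchronous Push-Sum sketch (illustrative only).

    values:    list of initial node values x_i
    neighbors: neighbors[i] is the list of nodes reachable from node i
    loss_prob: probability that a message is dropped, which breaks
               mass conservation and biases the estimates
    """
    rng = rng or random.Random(0)
    n = len(values)
    s = list(values)          # "mass" of the values
    w = [1.0] * n             # "mass" of the weights
    for _ in range(rounds):
        inbox_s = [0.0] * n
        inbox_w = [0.0] * n
        for i in range(n):
            target = rng.choice(neighbors[i])
            # keep half, send half: total (s, w) mass is conserved ...
            half_s, half_w = s[i] / 2.0, w[i] / 2.0
            inbox_s[i] += half_s
            inbox_w[i] += half_w
            if rng.random() >= loss_prob:      # ... unless the message is lost
                inbox_s[target] += half_s
                inbox_w[target] += half_w
        s, w = inbox_s, inbox_w
    return [si / wi for si, wi in zip(s, w)]   # per-node estimates of the average

# Example: a ring of 6 nodes; with loss_prob > 0 the estimates drift away
# from the true average (3.5) because mass is no longer conserved.
nbrs = [[(i - 1) % 6, (i + 1) % 6] for i in range(6)]
print(push_sum([1, 2, 3, 4, 5, 6], nbrs, loss_prob=0.0))
print(push_sum([1, 2, 3, 4, 5, 6], nbrs, loss_prob=0.2))
```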

    A Message Passing Strategy for Decentralized Connectivity Maintenance in Agent Removal

    In a multi-agent system, agents coordinate to achieve global tasks through local communications. Coordination usually requires sufficient information flow, which is typically captured by the connectivity of the communication network. In a networked system, the removal of some agents may cause a disconnection. In order to maintain connectivity under agent removal, one can design a robust network topology that tolerates a finite number of agent losses, and/or develop a control strategy that recovers connectivity. This paper proposes a decentralized control scheme based on a sequence of replacements, each of which occurs between an agent and one of its immediate neighbors. The replacements always end with an agent whose relocation does not cause a disconnection. We show that such an agent can be reached by a local rule utilizing only information available in agents' immediate neighborhoods. As such, the proposed message passing strategy guarantees connectivity maintenance under arbitrary agent removal. Furthermore, we significantly improve the optimality of the proposed scheme by incorporating δ-criticality (i.e. the criticality of an agent in its δ-neighborhood).
    Comment: 9 pages, 9 figure
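
    As a rough illustration of the replacement idea, the sketch below walks a chain of neighbor-to-neighbor handovers until it reaches an agent whose removal keeps the network connected. It is a deliberately simplified assumption-laden version: it checks connectivity globally with networkx, whereas the paper's message passing rule makes the same decision from local neighborhood information only, and the tie-breaking among candidate neighbors is hypothetical.

```python
import networkx as nx  # assumed available; used only to check connectivity

def is_safe_to_remove(graph, node):
    """A node is safe to remove if the remaining graph stays connected."""
    rest = graph.copy()
    rest.remove_node(node)
    return rest.number_of_nodes() == 0 or nx.is_connected(rest)

def replacement_chain(graph, leaving_node):
    """Follow a chain of replacements until a safely removable agent is found.

    Each step hands the "leave" role to an immediate neighbor, mirroring the
    neighbor-to-neighbor replacements described in the abstract. This sketch
    checks connectivity globally for brevity; the paper's contribution is a
    local rule that reaches the same decision without global information.
    """
    chain = [leaving_node]
    current = leaving_node
    while not is_safe_to_remove(graph, current):
        # Hand over to a neighbor not already in the chain (hypothetical tie-break).
        candidates = [n for n in graph.neighbors(current) if n not in chain]
        current = candidates[0]
        chain.append(current)
    return chain  # the last element can leave without disconnecting the network

# Example: a path graph 0-1-2-3; removing node 1 would disconnect it,
# so the replacement chain walks toward an end node.
g = nx.path_graph(4)
print(replacement_chain(g, 1))  # e.g. [1, 0]: the end node can leave safely
```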

    Efficient calculation of sensor utility and sensor removal in wireless sensor networks for adaptive signal estimation and beamforming

    Wireless sensor networks are often deployed over a large area of interest, and therefore the quality of the sensor signals may vary significantly across the different sensors. In this case, it is useful to have a measure for the importance, or so-called "utility", of each sensor, e.g., for sensor subset selection, resource allocation or topology selection. In this paper, we consider the efficient calculation of sensor utility measures for four different signal estimation or beamforming algorithms in an adaptive context. We use the definition of sensor utility as the increase in cost (e.g., mean-squared error) when the sensor is removed from the estimation procedure. Since each possible sensor removal corresponds to a new estimation problem (involving fewer sensors), calculating the sensor utilities would require a continuous updating of M different signal estimators (where M is the number of sensors), increasing computational complexity and memory usage by a factor M. However, we derive formulas to efficiently calculate all sensor utilities with hardly any increase in memory usage and computational complexity compared to the signal estimation algorithm already in place. When applied in adaptive signal estimation algorithms, this allows for on-line tracking of all the sensor utilities at almost no additional cost. Furthermore, we derive efficient formulas for sensor removal, i.e., for updating the signal estimator coefficients when a sensor is removed, e.g., due to a failure in the wireless link or when its utility is too low. We provide a complexity evaluation of the derived formulas, and demonstrate the significant reduction in computational complexity compared to straightforward implementations.
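
    The abstract does not spell out the derived formulas, so the sketch below only illustrates the utility definition for a plain linear MMSE estimator: the brute-force loop recomputes the estimator with each sensor removed, while the shortcut obtains the same utilities from the estimator coefficients and the diagonal of the inverse correlation matrix, a closed form that holds for this simplified case. The scenario and variable names are assumptions, not the paper's algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scenario (assumed for illustration): M sensors observing a
# desired signal d through random gains plus noise.
M, T = 6, 20000
d = rng.standard_normal(T)
A = rng.standard_normal((M, 1))
y = A @ d[None, :] + 0.5 * rng.standard_normal((M, T))

Ryy = y @ y.T / T                 # sensor correlation matrix
ryd = y @ d / T                   # cross-correlation with the desired signal
w = np.linalg.solve(Ryy, ryd)     # LMMSE estimator coefficients
mmse = np.var(d) - ryd @ w        # minimum mean-squared error with all sensors

# Brute force: utility of sensor k = increase in MMSE when sensor k is
# removed and the estimator is re-optimized for the remaining sensors.
util_brute = np.empty(M)
for k in range(M):
    keep = [i for i in range(M) if i != k]
    w_k = np.linalg.solve(Ryy[np.ix_(keep, keep)], ryd[keep])
    util_brute[k] = (np.var(d) - ryd[keep] @ w_k) - mmse

# Shortcut for this linear MMSE case: the same utilities follow from the
# estimator coefficients and the diagonal of the inverse correlation matrix,
# so no per-sensor re-estimation is needed.
Rinv = np.linalg.inv(Ryy)
util_fast = np.abs(w) ** 2 / np.diag(Rinv)

print(np.allclose(util_brute, util_fast))   # True
```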

    Sustaining the Internet with Hyperbolic Mapping

    The Internet infrastructure is severely stressed. Rapidly growing overheads associated with the primary function of the Internet (routing information packets between any two computers in the world) cause concerns among Internet experts that the existing Internet routing architecture may not sustain even another decade. Here we present a method to map the Internet to a hyperbolic space. Guided by the constructed map, which we release with this paper, Internet routing exhibits scaling properties close to the theoretically best possible, thus resolving serious scaling limitations that the Internet faces today. Besides this immediate practical viability, our network mapping method can provide a different perspective on the community structure in complex networks.
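
    The practical payoff of such a map is greedy forwarding: a node hands a packet to whichever neighbor is hyperbolically closest to the destination, with no routing tables. The sketch below shows this greedy rule with the standard hyperbolic distance for points in native polar coordinates; the coordinates and the toy graph are made up for illustration and are not the released Internet map.

```python
import math

def hyperbolic_distance(p, q):
    """Distance between two points given in native polar coordinates
    (r, theta) of the hyperbolic plane with curvature -1."""
    r1, t1 = p
    r2, t2 = q
    dtheta = math.pi - abs(math.pi - abs(t1 - t2))   # angle difference in [0, pi]
    arg = math.cosh(r1) * math.cosh(r2) - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta)
    return math.acosh(max(arg, 1.0))

def greedy_route(coords, neighbors, src, dst, max_hops=100):
    """Greedy forwarding: each hop goes to the neighbor closest to the
    destination in hyperbolic distance; no routing tables are needed."""
    path, current = [src], src
    for _ in range(max_hops):
        if current == dst:
            return path
        nxt = min(neighbors[current],
                  key=lambda n: hyperbolic_distance(coords[n], coords[dst]))
        if hyperbolic_distance(coords[nxt], coords[dst]) >= hyperbolic_distance(coords[current], coords[dst]):
            return None   # stuck in a local minimum: greedy delivery failed
        path.append(nxt)
        current = nxt
    return None

# Toy example with made-up coordinates (purely illustrative).
coords = {0: (0.5, 0.0), 1: (2.0, 1.0), 2: (2.5, 2.8), 3: (3.0, 1.2)}
neighbors = {0: [1, 2, 3], 1: [0, 3], 2: [0], 3: [0, 1]}
print(greedy_route(coords, neighbors, 2, 3))   # a delivered path such as [2, 0, 3]
```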

    Route Swarm: Wireless Network Optimization through Mobility

    In this paper, we demonstrate a novel hybrid architecture for coordinating networked robots in sensing and information routing applications. The proposed INformation and Sensing driven PhysIcally REconfigurable robotic network (INSPIRE) consists of a Physical Control Plane (PCP), which commands agent position, and an Information Control Plane (ICP), which regulates information flow towards communication/sensing objectives. We describe an instantiation where a mobile robotic network is dynamically reconfigured to ensure high-quality routes between static wireless nodes, which act as source/destination pairs for information flow. The ICP commands the robots towards evenly distributed inter-flow allocations, with intra-flow configurations that maximize route quality. The PCP then guides the robots via potential-based control to reconfigure according to ICP commands. This formulation, termed Route Swarm, decouples information flow and physical control, generating a feedback loop between routing and sensing needs and robotic configuration. We demonstrate our propositions through simulation under a realistic wireless network regime.
    Comment: 9 pages, 4 figures, submitted to the IEEE International Conference on Intelligent Robots and Systems (IROS) 201
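
    The abstract does not give the PCP control law, so the following is only a generic potential-field sketch of the idea: each robot is attracted toward a waypoint standing in for an ICP command and repelled from robots that get too close. The gains, radii and example waypoints are arbitrary assumptions.

```python
import numpy as np

def potential_step(positions, targets, step=0.05, repulse_radius=0.5, repulse_gain=0.1):
    """One update of a generic potential-based controller: each robot is
    attracted toward the waypoint assigned to it (here standing in for an
    ICP command) and repelled from robots that come too close."""
    positions = np.asarray(positions, dtype=float)
    targets = np.asarray(targets, dtype=float)
    grad = targets - positions                      # attractive term
    for i in range(len(positions)):
        for j in range(len(positions)):
            if i == j:
                continue
            diff = positions[i] - positions[j]
            dist = np.linalg.norm(diff)
            if 0 < dist < repulse_radius:
                # repulsive term grows as robots approach each other
                grad[i] += repulse_gain * diff / dist**2
    return positions + step * grad

# Example: three robots converging on waypoints along a line between
# a (hypothetical) source and destination node.
pos = [[0.0, 0.0], [0.1, 0.2], [1.0, 1.0]]
goals = [[1.0, 0.0], [2.0, 0.0], [3.0, 0.0]]
for _ in range(200):
    pos = potential_step(pos, goals)
print(np.round(pos, 2))
```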

    Generating Representative ISP Topologies From First-Principles

    Understanding and modeling the factors that underlie the growth and evolution of network topologies are basic questions that impact capacity planning, forecasting, and protocol research. Early topology generation work focused on generating network-wide connectivity maps, either at the AS-level or the router-level, typically with an eye towards reproducing abstract properties of observed topologies. More recently, advocates of an alternative "first-principles" approach have questioned the feasibility of realizing representative topologies with simple generative models that do not explicitly incorporate real-world constraints, such as the relative costs of router configurations, into the model. Our work synthesizes these two lines of work by designing a topology generation mechanism that incorporates first-principles constraints. Our goal is more modest than constructing an Internet-wide topology: we aim to generate representative topologies for single ISPs. However, our methods also go well beyond previous work, as we annotate these topologies with representative capacity and latency information. Taking only demand for network services over a given region as input, we propose a natural cost model for building and interconnecting PoPs and formulate the resulting optimization problem faced by an ISP. We devise hill-climbing heuristics for this problem and demonstrate that the solutions we obtain are quantitatively similar to those in measured router-level ISP topologies, with respect to both topological properties and fault-tolerance.
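
    To make the hill-climbing idea concrete, the sketch below searches over candidate PoP interconnections under a toy cost that charges for total link length and penalizes average path length, rejecting disconnected designs. The cost model, the starting star topology and the toggle-one-link move are illustrative assumptions, not the paper's formulation.

```python
import math
import random

import networkx as nx  # assumed available; used for connectivity/path checks

def cost(graph, coords, link_cost=1.0, latency_weight=5.0):
    """Toy cost model (illustrative, not the paper's): pay for the total
    length of links built plus an average hop-count latency term;
    disconnected designs are infeasible."""
    if not nx.is_connected(graph):
        return math.inf
    build = sum(math.dist(coords[u], coords[v]) for u, v in graph.edges)
    latency = nx.average_shortest_path_length(graph)
    return link_cost * build + latency_weight * latency

def hill_climb(coords, iters=2000, seed=0):
    """Hill climbing over PoP interconnections: start from a star topology
    and repeatedly toggle one randomly chosen candidate link, keeping the
    change only when it lowers the cost."""
    rng = random.Random(seed)
    nodes = sorted(coords)
    g = nx.star_graph(len(nodes) - 1)        # nodes 0..n-1, hub at node 0
    best = cost(g, coords)
    for _ in range(iters):
        u, v = rng.sample(nodes, 2)
        trial = g.copy()
        if trial.has_edge(u, v):
            trial.remove_edge(u, v)
        else:
            trial.add_edge(u, v)
        c = cost(trial, coords)
        if c < best:
            g, best = trial, c
    return g, best

# Example: a handful of PoP locations with made-up coordinates.
coord_rng = random.Random(42)
pops = {i: (coord_rng.random() * 10, coord_rng.random() * 10) for i in range(6)}
topology, total_cost = hill_climb(pops)
print(sorted(topology.edges), round(total_cost, 2))
```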