Dependability in Aggregation by Averaging
Aggregation is an important building block of modern distributed
applications, allowing the determination of meaningful properties (e.g. network
size, total storage capacity, average load, majorities, etc.) that are used to
direct the execution of the system. However, the majority of existing aggregation algorithms exhibit significant dependability issues when their use in real application environments is considered. In this paper, we reveal some
dependability issues of aggregation algorithms based on iterative averaging
techniques, giving some directions to solve them. This class of algorithms is considered robust (compared to common tree-based approaches), as it is independent of the routing topology used and provides an aggregation result at all nodes. However, its robustness is strongly challenged, and its correctness often compromised, when the assumptions about its working environment are changed to more realistic ones. The correctness of this class of algorithms
relies on the maintenance of a fundamental invariant, commonly designated as
"mass conservation". We argue that this main invariant is often broken in practical settings, and that additional mechanisms and modifications are required to maintain it, incurring some degradation of the algorithms' performance. In particular, we discuss the behavior of three representative algorithms (the Push-Sum Protocol, the Push-Pull Gossip protocol, and Distributed Random Grouping) in asynchronous and faulty environments (with message loss and node crashes). More specifically, we propose and evaluate two new versions of the Push-Pull Gossip protocol, which solve its message interleaving problem (evidenced even in synchronous operation mode).

Comment: 14 pages. Presented in Inforum 200
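The mass-conservation invariant at the heart of this class of algorithms can be illustrated with a minimal push-sum averaging sketch. This is only an illustrative rendering in the spirit of the Push-Sum Protocol, not the paper's code; the graph, round count, and helper names are our own choices:

```python
import random

def push_sum(values, neighbors, rounds):
    """Minimal synchronous push-sum averaging sketch.

    Each node keeps a (sum, weight) pair; every round it halves both
    components, keeps one half, and sends the other half to a random
    neighbor. Node i's estimate of the global average is s[i] / w[i].
    The "mass conservation" invariant: sum(s) and sum(w) never change --
    which is exactly what message loss would break.
    """
    n = len(values)
    s = [float(v) for v in values]  # sum components (total mass = sum(values))
    w = [1.0] * n                   # weight components (total mass = n)
    for _ in range(rounds):
        outbox = []
        for i in range(n):
            s[i] /= 2.0
            w[i] /= 2.0
            target = random.choice(neighbors[i])
            outbox.append((target, s[i], w[i]))
        for target, ds, dw in outbox:  # deliver every message (no loss here)
            s[target] += ds
            w[target] += dw
    return [s[i] / w[i] for i in range(n)]

random.seed(0)  # deterministic run for the example
# Complete graph on 4 nodes; the true average of [1, 2, 3, 6] is 3.0.
nbrs = {i: [j for j in range(4) if j != i] for i in range(4)}
est = push_sum([1, 2, 3, 6], nbrs, rounds=100)
```

Dropping even a single message in the delivery loop would remove (sum, weight) mass permanently and bias every node's estimate, which is precisely the dependability problem examined above.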
Security Policy Consistency
With the advent of wide security platforms able to express simultaneously all
the policies comprising an organization's global security policy, the problem
of inconsistencies within security policies becomes harder and more relevant.
We have developed a tool, based on the CHR (Constraint Handling Rules) language, which is able to detect several types of inconsistencies within and between security policies and other specifications, namely workflow specifications.
Although the problem of security conflicts has been addressed by several authors, to our knowledge none has addressed the general problem of security inconsistencies, in its several definitions and target specifications.

Comment: To appear in the first CL2000 workshop on Rule-Based Constraint Reasoning and Programming
Spectra: Robust Estimation of Distribution Functions in Networks
Distributed aggregation allows the derivation of a given global aggregate
property from many individual local values in nodes of an interconnected
network system. Simple aggregates such as minima/maxima, counts, sums and
averages have been thoroughly studied in the past and are important tools for
distributed algorithms and network coordination. Nonetheless, such aggregates may not be expressive enough to characterize skewed data distributions or the presence of outliers, making the case for richer estimates of the values in the network. This work presents Spectra, a
distributed algorithm for the estimation of distribution functions over large-scale networks. The estimate is available at all nodes, and the technique exhibits important properties: it is robust when exposed to high levels of message loss, converges quickly, and attains fine precision in the estimate. It can also dynamically cope with changes in the sampled local property, without requiring algorithm restarts, and is highly resilient to node churn. The proposed approach is experimentally evaluated and contrasted with a competing state-of-the-art distribution aggregation technique.

Comment: Full version of the paper published at the 12th IFIP International Conference on Distributed Applications and Interoperable Systems (DAIS), Stockholm (Sweden), June 201
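The core idea behind estimating a distribution function in a network can be sketched by gossip-averaging per-node indicator vectors. This is only an illustrative reading of the problem, not the Spectra protocol itself; the cut points, pairing scheme, and interaction count below are arbitrary assumptions:

```python
import random

def gossip_cdf(values, cut_points, interactions, seed=0):
    """Estimate an empirical CDF at every node via pairwise averaging gossip.

    Each node starts with an indicator vector ("is my local value <= c?"
    for each cut point c). Averaging these vectors across the network makes
    every node converge to the fraction of nodes at or below each cut
    point, i.e. the empirical distribution function.
    """
    random.seed(seed)
    n = len(values)
    state = [[1.0 if v <= c else 0.0 for c in cut_points] for v in values]
    for _ in range(interactions):
        i, j = random.sample(range(n), 2)  # uniform pair on a complete graph
        avg = [(a + b) / 2.0 for a, b in zip(state[i], state[j])]
        state[i] = list(avg)
        state[j] = list(avg)
    return state  # state[i][k] ~ fraction of values <= cut_points[k]

vals = [1, 2, 2, 3, 5, 8, 8, 9]
cdf = gossip_cdf(vals, cut_points=[2, 5, 9], interactions=2000)
# True fractions: <=2 -> 3/8, <=5 -> 5/8, <=9 -> 8/8
```

A vector of averages like this is a richer object than a single mean, which is what motivates distribution-function estimation over the simple aggregates discussed above.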
The Optimized Model of Multiple Invasion Percolation
We study the optimized version of the multiple invasion percolation model.
Some topological aspects, such as the behavior of the acceptance profile, the coordination number, and the vertex-type abundance, were investigated and compared to those of the ordinary invasion model. Our results indicate that the clusters show a very high degree of connectivity, spoiling the usual nodes-links-blobs geometrical picture.

Comment: LaTeX file, 6 pages, 2 ps figures
Oil prices assumptions in macroeconomic forecasts: should we follow futures market expectations?
In macroeconomic forecasting, in spite of their important role in price and activity developments, oil prices are usually taken as an exogenous variable for which assumptions have to be made. This paper evaluates the forecasting performance of futures market prices against another popular technical procedure, the carry-over assumption. The results suggest that it makes almost no difference whether one opts for futures market prices or for the carry-over assumption at short-term forecasting horizons (up to 12 months), while, for longer horizons, they favour the use of futures market prices. However, as futures market prices reflect the market's expectations for world economic activity, futures oil prices should be adjusted whenever the market's expectations for world economic growth differ from the values underlying the macroeconomic scenarios, in order to ensure full internal consistency of those scenarios.
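The two assumption schemes being compared are simple to state; the sketch below, with purely hypothetical prices, just makes the contrast explicit (a flat path at the latest observation versus the futures curve):

```python
def carry_over_forecast(last_observed, horizon):
    """Carry-over assumption: hold the latest observed oil price flat
    over the entire forecast horizon."""
    return [last_observed] * horizon

def futures_based_forecast(futures_curve):
    """Futures-based assumption: take the quoted futures curve as the
    assumed price path."""
    return list(futures_curve)

# Hypothetical numbers: spot price of 80.0 USD/bbl, futures in contango.
flat = carry_over_forecast(80.0, horizon=4)
curve = futures_based_forecast([81.0, 82.5, 83.5, 84.0])
```

At short horizons the two paths barely differ, consistent with the finding above that the choice matters mainly for longer-term forecasts.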
Fast Distributed Computation of Distances in Networks
This paper presents a distributed algorithm to simultaneously compute the
diameter, radius and node eccentricity in all nodes of a synchronous network.
Such topological information may be useful as input to configure other
algorithms. Previous approaches have been modular, progressing in sequential
phases using building blocks such as BFS tree construction, thus incurring
longer executions than strictly required. We present an algorithm that, by
timely propagation of available estimations, achieves a faster convergence to
the correct values. We show local criteria for detecting convergence in each
node. The algorithm avoids the creation of BFS trees and simply manipulates
sets of node ids and hop counts. For the worst scenario of variable start
times, each node i with eccentricity ecc(i) can compute: the node eccentricity
in diam(G)+ecc(i)+2 rounds; the diameter in 2*diam(G)+ecc(i)+2 rounds; and the
radius in diam(G)+ecc(i)+2*radius(G) rounds.

Comment: 12 pages
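A centralized simulation of this flooding idea, where each node repeatedly merges its neighbours' (node id, hop count) estimates until nothing changes, can be sketched as follows. This is our own simplified rendering of the general approach, not the paper's exact protocol or its local convergence-detection criteria:

```python
def eccentricities(adj):
    """Simulate synchronous flooding of (node id, hop count) sets.

    Every node starts knowing only itself at distance 0; each round it
    merges its neighbours' tables, shifted by one hop. On convergence,
    each node's eccentricity is the largest hop count it knows, and the
    diameter / radius are the max / min eccentricity over all nodes.
    """
    n = len(adj)
    INF = float('inf')
    dist = [{i: 0} for i in range(n)]  # dist[i][k] = best-known hops i -> k
    changed = True
    while changed:
        changed = False
        new = [dict(d) for d in dist]
        for i in range(n):
            for j in adj[i]:
                for k, d in dist[j].items():
                    if d + 1 < new[i].get(k, INF):
                        new[i][k] = d + 1
                        changed = True
        dist = new
    ecc = [max(d.values()) for d in dist]
    return ecc, max(ecc), min(ecc)  # eccentricities, diameter, radius

# Path graph 0-1-2-3: eccentricities [3, 2, 2, 3], diameter 3, radius 2.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
ecc, diam, radius = eccentricities(adj)
```

Note that this sketch avoids BFS tree construction entirely and manipulates only sets of node ids and hop counts, mirroring the data the algorithm described above works with.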