177,288 research outputs found

    Mathematical study of perturbed asynchronous iterations designed for distributed termination

    This paper deals with the mathematical study of perturbed fixed point asynchronous iterations designed for distributed termination. The distributed termination of asynchronous iterations is considered by using a perturbed fixed point mapping, which is an approximation of an exact fixed point mapping. In the general framework of '-approximate contraction, it is shown that the perturbed asynchronous iteration converges in finite time and that the limit of the perturbed asynchronous iteration belongs to a ball of finite radius centered at ũ*, the solution of the exact problem. The value of the radius is given in the cases of linear and quadratic convergence, respectively.
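
    A minimal synchronous sketch of the idea above, not the paper's algorithm: iterate a contraction f under a bounded perturbation and stop once the update falls below a perturbation-level threshold, so the iteration terminates in finite time inside the ball of radius epsilon/(1 - q) around the exact fixed point (the linear-convergence case). All names (f, q, epsilon) and the stopping threshold are illustrative assumptions.

        import numpy as np

        # Perturbed fixed-point iteration u <- f(u) + e_k, where f is a
        # contraction with rate q < 1 and the perturbation satisfies
        # ||e_k|| <= epsilon. For linear convergence the limit ball around
        # the exact fixed point u* has radius epsilon / (1 - q).

        def f(u):
            # Example contraction, rate q = 0.5, exact fixed point u* = (2, 2).
            return 0.5 * u + np.array([1.0, 1.0])

        q, epsilon = 0.5, 1e-3
        u = np.zeros(2)
        for k in range(1000):
            e_k = epsilon * (2 * np.random.rand(2) - 1)   # bounded perturbation
            u_next = f(u) + e_k
            # Heuristic finite-time stop: the update is dominated by the
            # perturbation level, so the iterate has entered the limit ball.
            if np.linalg.norm(u_next - u) <= 2 * epsilon / (1 - q):
                u = u_next
                break
            u = u_next

        print(k, u)   # within roughly epsilon / (1 - q) of u* = (2, 2)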

    Distributed robust optimization for multi-agent systems with guaranteed finite-time convergence

    A novel distributed algorithm is proposed for converging in finite time to a feasible consensus solution, satisfying global optimality to a certain accuracy, of the distributed robust convex optimization problem (DRCO) subject to bounded uncertainty under a uniformly strongly connected network. Firstly, a distributed lower bounding procedure is developed, based on an outer iterative approximation of the DRCO obtained by discretizing the compact uncertainty set into a finite number of points. Secondly, a distributed upper bounding procedure is proposed, based on iteratively approximating the DRCO by restricting the constraints' right-hand sides with a proper positive parameter and enforcing the constraints at finitely many points of the compact uncertainty set. Lower and upper bounds on the global optimal objective of the DRCO are obtained from these two procedures. Thirdly, two distributed termination methods are proposed to make all agents stop updating simultaneously by checking whether the gap between the upper and lower bounds has reached a certain accuracy. Fourthly, it is proved that all agents converge in finite time to a feasible consensus solution that satisfies global optimality within a certain accuracy. Finally, a numerical case study illustrates the effectiveness of the distributed algorithm. (Comment: submitted for publication in Automatica.)
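
    The lower-bounding side of such a scheme can be illustrated with a single-agent, centralized sketch (the paper's algorithm is distributed across agents): solve a relaxation over finitely many points of the uncertainty set, add the worst-case point, and terminate once the maximum constraint violation is within the accuracy eps. The toy problem and all names are assumptions for illustration.

        import numpy as np

        # Scenario-based outer approximation, centralized sketch.
        # Illustrative problem: maximize x subject to g(x, d) = d*x - 1 <= 0
        # for all d in the compact set [1, 2]; the robust optimum is x = 0.5.

        def solve_relaxation(scenarios):
            # max x s.t. d*x <= 1 for each sampled d  ->  x = min(1/d)
            return min(1.0 / d for d in scenarios)

        def worst_case(x, grid):
            # Discretize the uncertainty set and find the most violated point.
            violations = [d * x - 1.0 for d in grid]
            i = int(np.argmax(violations))
            return grid[i], violations[i]

        grid = np.linspace(1.0, 2.0, 101)   # discretized uncertainty set
        scenarios, eps = [grid[0]], 1e-6
        while True:
            x = solve_relaxation(scenarios)   # relaxation bound on the optimum
            d, viol = worst_case(x, grid)
            if viol <= eps:                   # accuracy reached: terminate
                break
            scenarios.append(d)               # add the worst-case point

        print(x)   # ~0.5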

    Coverage and Connectivity Improvement Algorithms for the Wireless Sensor Networks

    In this paper we study algorithms for increasing the coverage and connectivity of a wireless sensor network while preserving the network's existing coverage. We also examine the impact on the related problem of coverage-boundary detection. We reduce both problems to the computation of Voronoi diagrams and an intersection-point method, prove and achieve lower bounds on the solution of these problems, and present efficient distributed algorithms for computing and maintaining solutions in cases of sensor failures or insertion of new sensors. We prove the correctness and termination properties of our distributed algorithms, and analytically characterize the time complexity and the traffic generated by our algorithms. Our results show that the algorithms increase coverage and connectivity in wireless sensor networks.
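
    The Voronoi reduction can be made concrete with a small sketch, assuming circular sensing regions of a common radius: a field point is covered exactly when the owner of its Voronoi cell, i.e. its nearest sensor, is within sensing range, so coverage can be estimated with nearest-neighbour queries alone. Field size, sensor count and radius below are illustrative.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(0)
        sensors = rng.uniform(0, 100, size=(50, 2))   # 50 sensors in a 100x100 field
        radius = 15.0                                  # assumed common sensing radius

        tree = cKDTree(sensors)
        samples = rng.uniform(0, 100, size=(20000, 2)) # Monte-Carlo test points
        dist, _ = tree.query(samples)                  # distance to nearest sensor
        coverage = float(np.mean(dist <= radius))      # covered iff nearest sensor in range

        print(f"estimated coverage fraction: {coverage:.3f}")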

    Wait-Free and Obstruction-Free Snapshot

    The snapshot problem was first proposed over a decade ago and has since been well studied in the distributed algorithms community. The challenge is to design a data structure consisting of m components, shared by up to n concurrent processes, that supports two operations. The first, Update(i,v), atomically writes v to the i-th component. The second, Scan(), returns an atomic snapshot of all m components. We consider two termination properties: wait-freedom, which requires a process to always terminate in a bounded number of its own steps, and the weaker obstruction-freedom, which requires such termination only for processes that eventually execute uninterrupted. First, we present a simple, time- and space-optimal, obstruction-free solution to the single-writer, multi-scanner version of the snapshot problem (wherein concurrent Updates never occur on the same component). Second, we assume hardware support for compare-and-swap (CAS) to give a time-optimal, wait-free solution to the multi-writer, single-scanner snapshot problem (wherein concurrent Scans never occur). This algorithm uses only O(mn) space and has optimal CAS, write and remote-reference complexities. Additionally, it can be augmented to implement a general snapshot object with the same time and space bounds, thus improving on the O(mn^2) space complexity of the only previously known time-optimal solution.
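
    For intuition, here is a plain sketch of the classic double-collect technique behind obstruction-free Scans (not the paper's time- and space-optimal construction): a Scan collects all components twice and returns when nothing changed in between, which is guaranteed to happen once the scanner runs uninterrupted.

        # Single writer per component assumed; relies on CPython's atomic
        # list-slot assignment purely for the sake of the sketch.

        class Snapshot:
            def __init__(self, m):
                self.cells = [(0, None) for _ in range(m)]  # (seq, value) per component

            def update(self, i, v):
                seq, _ = self.cells[i]
                self.cells[i] = (seq + 1, v)   # bump sequence number on every write

            def scan(self):
                while True:                    # obstruction-free: terminates if undisturbed
                    first = list(self.cells)   # first collect
                    second = list(self.cells)  # second collect
                    if first == second:        # no Update slipped between the collects
                        return [v for (_, v) in first]

        snap = Snapshot(3)
        snap.update(0, "a"); snap.update(2, "b")
        print(snap.scan())   # ['a', None, 'b']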

    Privacy-Preserving and Outsourced Multi-User k-Means Clustering

    Many techniques for privacy-preserving data mining (PPDM) have been investigated over the past decade. Often, the entities involved in the data mining process are end-users or organizations with limited computing and storage resources. As a result, such entities may want to refrain from participating in the PPDM process. To overcome this issue and to reap the many other benefits of cloud computing, outsourcing PPDM tasks to the cloud environment has recently gained special attention. We consider the scenario where n entities outsource their databases (in encrypted form) to the cloud and ask the cloud to perform the clustering task on their combined data in a privacy-preserving manner. We term such a process privacy-preserving and outsourced distributed clustering (PPODC). In this paper, we propose a novel and efficient solution to the PPODC problem based on the k-means clustering algorithm. The main novelty of our solution lies in avoiding the secure division operations required in computing cluster centers altogether, through an efficient transformation technique. Our solution builds the clusters securely in an iterative fashion and returns the final cluster centers to all entities when a pre-determined termination condition holds. The proposed solution protects the data confidentiality of all participating entities under the standard semi-honest model. To the best of our knowledge, ours is the first work to discuss and propose a comprehensive solution to the PPODC problem that incurs negligible cost on the participating entities. We theoretically estimate both the computation and communication costs of the proposed protocol and also demonstrate its practical value through experiments on a real dataset. (Comment: 16 pages, 2 figures, 5 tables.)
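
    The division-avoidance idea can be illustrated in plaintext with one standard transformation (the paper performs the computation over encrypted data, and its exact transformation may differ): keep each cluster center as a (sum, count) pair and compare distances by cross-multiplication, so cluster assignment never divides.

        import numpy as np

        # Comparing ||x - S_j/c_j||^2 across clusters is equivalent to comparing
        # score_j = (P/c_j)^2 * ||c_j*x - S_j||^2 with P = prod(counts), since
        # score_j = P^2 * ||x - S_j/c_j||^2 and P^2 is common to all clusters.

        def nearest_cluster(x, sums, counts):
            P = int(np.prod(counts))
            scores = [
                (P // c) ** 2 * np.sum((c * x - s) ** 2)   # integer-only score
                for s, c in zip(sums, counts)
            ]
            return int(np.argmin(scores))

        sums   = [np.array([10, 10]), np.array([40, 0])]   # per-cluster coordinate sums
        counts = [5, 4]                                    # per-cluster sizes
        print(nearest_cluster(np.array([2, 2]), sums, counts))   # -> 0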

    All Byzantine Agreement Problems are Expensive

    Byzantine agreement, arguably the most fundamental problem in distributed computing, operates among n processes, out of which t < n can exhibit arbitrary failures. The problem requires that all correct (non-faulty) processes eventually decide (termination) the same value (agreement) from a set of admissible values defined by the proposals of the processes (validity). Depending on the exact version of the validity property, Byzantine agreement comes in different forms, from Byzantine broadcast to strong and weak consensus, to modern variants of the problem introduced in today's blockchain systems. Regardless of the specific flavor of the agreement problem, its communication cost is a fundamental metric whose improvement has been the focus of decades of research. The Dolev-Reischuk bound, one of the most celebrated results in distributed computing, established 40 years ago that, at least for Byzantine broadcast, no deterministic solution can do better than Omega(t^2) exchanged messages in the worst case. Since then, it has remained unknown whether the quadratic lower bound extends to seemingly weaker variants of Byzantine agreement. This paper answers the question in the affirmative, closing this long-standing open problem. Namely, we prove that any non-trivial agreement problem requires Omega(t^2) messages to be exchanged in the worst case. To prove the general lower bound, we determine the weakest Byzantine agreement problem and show, via a novel indistinguishability argument, that it incurs Omega(t^2) exchanged messages.

    A Distributed Surrogate Methodology for Inverse Most Probable Point Searches in Reliability Based Design Optimization

    Surrogate models are commonly used in place of prohibitively expensive computational models to drive iterative procedures necessary for engineering design and analysis, such as global optimization. Surrogate modeling has also been applied to reliability-based design optimization, which constrains designs to those that provide satisfactory reliability against failure in the presence of system parameter uncertainties. Through surrogate modeling, the analysis time is significantly reduced whenever the total number of evaluated samples upon which the final model is built is smaller than the number that would otherwise have been required by using the expensive model directly within the analysis algorithm. Too few samples provide an inaccurate approximation, while too many add redundant information to an already sufficiently accurate region. Since the prediction error affects the overall uncertainty in the optimal solution, care must be taken to evaluate only samples that decrease solution uncertainty, rather than prediction uncertainty over the entire design domain. This work proposes a numerical approach to the surrogate-based optimization and reliability assessment problem that uses solution confidence as the primary algorithm termination criterion. The surrogate's uncertainty information is used to construct multiple distributed surrogates, which represent individual realizations of a larger surrogate population designated by the initial approximation. When globally optimized upon, these distributed surrogates yield a solution distribution quantifying the confidence one can have in the optimal solution given the current surrogate uncertainty. Furthermore, the solution distribution provides insight into the placement of supplemental sample evaluations when solution confidence is insufficient. Numerical case studies are presented to compare the proposed methodology with existing methods for surrogate-based optimization, such as expected improvement from the Efficient Global Optimization algorithm.
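
    A compact sketch of the distributed-surrogates idea, under stated assumptions (a Gaussian process surrogate, a 1-D toy model, illustrative names): draw several realizations from the surrogate's predictive distribution, minimize each one, and use the spread of the resulting minimizers as the solution confidence that drives termination.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def expensive_model(x):            # stand-in for the costly simulation
            return np.sin(3 * x) + 0.5 * x

        X_train = np.array([[0.0], [0.7], [1.5], [2.3], [3.0]])
        y_train = expensive_model(X_train).ravel()

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X_train, y_train)

        grid = np.linspace(0, 3, 301).reshape(-1, 1)
        realizations = gp.sample_y(grid, n_samples=50, random_state=0)   # (301, 50)

        # One global optimum per surrogate realization; their spread is the
        # solution confidence used as the termination criterion.
        minimizers = grid[np.argmin(realizations, axis=0)].ravel()
        print("solution spread:", minimizers.std())
        # Terminate when the spread is small enough; otherwise add a sample
        # where the minimizers disagree and refit.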

    Distributed constraint satisfaction for coordinating and integrating a large-scale, heterogeneous enterprise

    Market forces are continuously driving public and private organisations towards higher productivity, shorter process and production times, and fewer labour hours. To cope with these changes, organisations are adopting new organisational models of coordination and cooperation that increase their flexibility, consistency, efficiency, productivity and profit margins. In this thesis such an organisational model of coordination and cooperation is examined using a real-life example: the technical integration of a distributed large-scale project of an international physics collaboration. The distributed resource-constrained project scheduling problem is modelled and solved with the methods of distributed constraint satisfaction.

    A distributed local search method, the distributed breakout algorithm (DisBO), is used as the basis for the coordination scheme. The efficiency of the local search method is improved by extending it with an incremental problem solving scheme with variable ordering. The scheme is implemented as a central algorithm, the incremental breakout algorithm (IncBO), and as a distributed algorithm, the distributed incremental breakout algorithm (DisIncBO). In both cases, strong performance gains are observed when solving underconstrained problems.

    Distributed local search algorithms are incomplete and lack a termination guarantee. When problems contain hard or unsolvable subproblems and are tightly constrained or overconstrained, local search falls into infinite cycles without explanation. A scheme is therefore developed that identifies hard or unsolvable subproblems and orders them by size. The scheme is based on the constraint weight information generated by the breakout algorithm during search. This information, combined with the graph structure, is used to derive a fail-first variable order. Empirical results show that the derived variable order is 'perfect': when it guides simple backtracking, exceptionally hard problems do not occur, and, when problems are unsolvable, the fail depth is always the shortest.

    Two hybrid algorithms, BOBT and BOBT-SUSP, are developed. When the problem is unsolvable, BOBT returns the minimal subproblem within the search scope, and BOBT-SUSP returns the smallest unsolvable subproblem using a powerful weight sum constraint. A distributed hybrid algorithm (DisBOBT) is developed that combines DisBO with DisBT. The distributed hybrid algorithm first attempts to solve the problem with DisBO. If no solution is available after a bounded number of breakouts, DisBO is terminated and DisBT solves the problem. DisBT is guided by a distributed variable order derived from the constraint weight information and the graph structure. The variable order is established incrementally: every time the partial solution needs to be extended, the next variable within the order is identified. Empirical results show strong performance gains, especially when problems are overconstrained and contain small unsolvable subproblems.
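
    As a concrete reference point, here is a compact, centralized sketch of the breakout algorithm that DisBO distributes (an illustrative assumption-laden sketch, not the thesis implementation): weighted min-conflicts hill climbing that, at a local minimum, raises the weights of the violated constraints; these accumulated weights are the information later reused to derive the fail-first variable order.

        import random

        def breakout(variables, domains, constraints, max_steps=10000):
            assign = {v: random.choice(domains[v]) for v in variables}
            weight = {c: 1 for c in constraints}      # one weight per constraint

            def violated(c, a):
                (u, v) = c                            # inequality constraint u != v
                return a[u] == a[v]

            def cost(a):
                return sum(weight[c] for c in constraints if violated(c, a))

            for _ in range(max_steps):
                if cost(assign) == 0:
                    return assign, weight
                # Best single-variable change under the current weights.
                var, val = min(
                    ((v, d) for v in variables for d in domains[v]),
                    key=lambda vd: cost({**assign, vd[0]: vd[1]}),
                )
                if cost({**assign, var: val}) < cost(assign):
                    assign[var] = val                 # downhill move
                else:                                 # local minimum: breakout
                    for c in constraints:
                        if violated(c, assign):
                            weight[c] += 1            # penalize violated constraints
            return None, weight

        # Tiny graph-colouring instance: a triangle with three colours.
        vars_ = ["x", "y", "z"]
        doms = {v: [0, 1, 2] for v in vars_}
        cons = [("x", "y"), ("y", "z"), ("x", "z")]
        solution, weights = breakout(vars_, doms, cons)
        print(solution, weights)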