23,824 research outputs found
Accelerating Consensus by Spectral Clustering and Polynomial Filters
It is known that polynomial filtering can accelerate convergence towards
average consensus on an undirected network. In this paper, the gain of
second-order filtering is investigated. A set of graphs is determined for which
consensus can be attained in finite time, and a preconditioner is proposed to
adapt the undirected weights of any given graph to achieve the fastest
convergence with the polynomial filter. The corresponding cost function differs
from the traditional spectral gap, as it favors grouping the eigenvalues into
two clusters. A possible loss of robustness of the polynomial filter is also
highlighted.
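The mechanism can be sketched numerically. In the toy example below (a hand-built 4-node path graph with Metropolis weights and illustrative, unoptimized filter coefficients, not the preconditioner from the paper), each filtered step applies a second-order polynomial p(W) = a0·I + a1·W + a2·W² with p(1) = 1, so the average is preserved while the other eigenvalues are damped more aggressively than by W alone:

```python
import numpy as np

# Hypothetical 4-node path graph with Metropolis weights; any symmetric,
# doubly stochastic W would do for average consensus.
W = np.array([
    [2/3, 1/3, 0.0, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 1/3, 1/3, 1/3],
    [0.0, 0.0, 1/3, 2/3],
])

def consensus_error(x):
    # Maximum deviation from the average across nodes.
    return np.abs(x - x.mean()).max()

# Second-order filter p(W) = a0*I + a1*W + a2*W^2 with a0 + a1 + a2 = 1,
# i.e. p(1) = 1, so the network average is preserved at every step.
a0, a1, a2 = -0.5, 0.5, 1.0   # illustrative coefficients, not optimized
P = a0 * np.eye(4) + a1 * W + a2 * (W @ W)

x_plain = np.array([1.0, 0.0, 0.0, 0.0])   # initial node values
x_filt = x_plain.copy()
for _ in range(20):
    x_plain = W @ x_plain     # plain consensus iteration
    x_filt = P @ x_filt       # polynomial-filtered iteration

print(consensus_error(x_plain), consensus_error(x_filt))
```

For this graph the filtered iteration contracts the disagreement markedly faster, at the cost of one extra matrix-vector product per step.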
Optimizing the robustness of electrical power systems against cascading failures
Electrical power systems are one of the most important infrastructures that
support our society. However, their vulnerabilities have raised great concern
recently due to several large-scale blackouts around the world. In this paper,
we investigate the robustness of power systems against cascading failures
initiated by a random attack. This is done under a simple yet useful model
based on global and equal redistribution of load upon failures. We provide a
complete understanding of system robustness by i) deriving an expression for
the final system size as a function of the size of initial attacks; ii)
deriving the critical attack size after which system breaks down completely;
iii) showing that complete system breakdown takes place through a first-order
(i.e., discontinuous) transition in terms of the attack size; and iv)
establishing the optimal load-capacity distribution that maximizes robustness.
In particular, we show that robustness is maximized when the difference between
the capacity and initial load is the same for all lines; i.e., when all lines
have the same redundant space regardless of their initial load. This is in
contrast with the intuitive and commonly used setting where capacity of a line
is a fixed factor of its initial load.

Comment: 18 pages including 2 pages of supplementary material, 5 figures
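The contrast between the two capacity rules can be illustrated with a minimal simulation of the global equal-redistribution model (the loads, spare-capacity budget, and attacked line below are illustrative choices, not data from the paper):

```python
def cascade(loads, caps, attacked):
    # Global equal redistribution: when lines fail, their total load is
    # split equally over every line still in service; any line pushed
    # over its capacity fails in turn, and the process repeats.
    load = {i: loads[i] for i in range(len(loads)) if i not in attacked}
    shed = sum(loads[i] for i in attacked)
    while shed > 0 and load:
        extra = shed / len(load)
        shed = 0.0
        for i in list(load):
            load[i] += extra
            if load[i] > caps[i]:
                shed += load.pop(i)   # failed line sheds its whole load
    return len(load)                  # number of surviving lines

loads = [float(k) for k in range(1, 11)]   # hypothetical line loads
budget = 10.0                              # total spare capacity to allocate

# (a) equal free space on every line (the optimal rule in the abstract)
caps_equal = [L + budget / len(loads) for L in loads]
# (b) capacity as a fixed factor of initial load (the common rule)
factor = 1 + budget / sum(loads)
caps_prop = [L * factor for L in loads]

survivors_equal = cascade(loads, caps_equal, attacked={4})
survivors_prop = cascade(loads, caps_prop, attacked={4})
print(survivors_equal, survivors_prop)
```

With the same total spare capacity, the equal-free-space allocation absorbs this attack with no secondary failures, while the fixed-factor allocation lets lightly loaded lines fail first and then cascades to complete breakdown.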
Robustness-Driven Resilience Evaluation of Self-Adaptive Software Systems
An increasingly important requirement for certain classes of software-intensive systems is the ability to self-adapt their structure and behavior at run-time in reaction to changes that may occur in the system, its environment, or its goals. A major challenge for self-adaptive software systems is providing assurances about their resilience in the face of such changes. Since the components that act as controllers of a target system in these settings incorporate highly complex software, there is a need to analyze the impact that controller failures might have on the services delivered by the system. In this paper, we present a novel approach for evaluating the resilience of self-adaptive software systems by applying robustness testing techniques to the controller to uncover failures that can affect system resilience. The approach, which is based on probabilistic model checking, quantifies the probability that system properties are satisfied when the target system is subject to controller failures. The feasibility of the proposed approach is evaluated in the context of an industrial middleware system used to monitor and manage highly populated networks of devices, implemented using the Rainbow framework for architecture-based self-adaptation.
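The quantification step can be illustrated on a deliberately tiny abstraction (an assumption for illustration only, not the Rainbow case study or an actual probabilistic model checker such as PRISM): if the controller attempts an adaptation at each step and each attempt fails independently with probability f, the probability that the property "system adapted within k steps" holds is that of at least one success in k attempts:

```python
def prob_property_holds(k, f):
    # P(at least one successful adaptation in k independent attempts,
    # each failing with probability f) = 1 - f^k.
    return 1.0 - f ** k

# e.g. a controller failing half the time still adapts within
# three steps with probability 0.875
print(prob_property_holds(3, 0.5))
```

A real evaluation would pose such reachability and reward properties over a full probabilistic model of the controller and target system rather than this closed form.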
rDLB: A Novel Approach for Robust Dynamic Load Balancing of Scientific Applications with Parallel Independent Tasks
Scientific applications often contain large and computationally intensive
parallel loops. Dynamic loop self-scheduling (DLS) is used to achieve a
balanced load execution of such applications on high-performance computing
(HPC) systems. Large HPC systems are vulnerable to processor or node failures
and to perturbations in the availability of resources. Most self-scheduling
approaches do not consider fault-tolerant scheduling, or they depend on failure
or perturbation detection and react by rescheduling failed tasks. In this work,
a robust dynamic load balancing (rDLB) approach is proposed for the robust
self-scheduling of independent tasks. The proposed approach is proactive and
does not depend on failure or perturbation detection. The theoretical analysis
shows that the approach is linearly scalable and that its cost decreases
quadratically with system size. rDLB is integrated into an MPI DLS library to
evaluate its performance experimentally with two computationally intensive
scientific applications. Results show that rDLB tolerates up to (P - 1)
processor failures, where P is the number of processors executing an
application. In the presence of perturbations, rDLB boosted the robustness of
DLS techniques by up to a factor of 30 and decreased application execution time
by up to a factor of 7 compared to their counterparts without rDLB.
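The proactive idea behind such an approach can be sketched with a simplified simulation (an illustrative model, not the authors' MPI library): processors repeatedly self-schedule tasks from the shared pool of unfinished work, and because a task is only marked complete when a live processor finishes it, work lost to a silently failed processor is simply picked up again later, with no failure detection at all:

```python
def self_schedule(n_tasks, n_procs, failed):
    # Toy proactive self-scheduling: each round, every processor grabs an
    # unfinished task. A failed processor silently drops its task, so the
    # task stays in the pool and is re-executed by an idle live processor.
    remaining = set(range(n_tasks))
    for _ in range(10 * n_tasks):       # bounded number of scheduling rounds
        for p in range(n_procs):
            if not remaining:
                return True             # every task completed
            task = next(iter(remaining))
            if p not in failed:
                remaining.discard(task) # live processor finishes the task
    return not remaining

print(self_schedule(20, 4, failed={0, 1, 2}))    # P-1 failures: completes
print(self_schedule(20, 4, failed={0, 1, 2, 3})) # all P failed: cannot
```

In this toy model the application completes as long as a single processor survives, mirroring the (P - 1) tolerance reported in the abstract, at the price of redundant task executions.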