4 research outputs found

    Parallel Load Balancing on Constrained Client-Server Topologies

    We study parallel \emph{Load Balancing} protocols for a client-server distributed model defined as follows. There is a set $\sC$ of $n$ clients and a set $\sS$ of $n$ servers, where each client has (at most) a constant number $d \geq 1$ of requests that must be assigned to some server. The client set and the server set are connected to each other via a fixed bipartite graph: the requests of client $v$ can only be sent to the servers in its neighborhood $N(v)$. The goal is to assign every client request so as to minimize the maximum load of the servers. In this setting, efficient parallel protocols are available only for dense topologies. In particular, a simple symmetric, non-adaptive protocol achieving constant maximum load has been recently introduced by Becchetti et al.~\cite{BCNPT18} for regular dense bipartite graphs. The parallel completion time is $\bigO(\log n)$ and the overall work is $\bigO(n)$, w.h.p. Motivated by proximity constraints arising in some client-server systems, we devise a simple variant of Becchetti et al.'s protocol \cite{BCNPT18} and we analyse it over almost-regular bipartite graphs where nodes may have neighborhoods of small size. In detail, we prove that, w.h.p., this new version has a cost equivalent to that of Becchetti et al.'s protocol (in terms of maximum load, completion time, and work complexity, respectively) on every almost-regular bipartite graph with degree $\Omega(\log^2 n)$. Our analysis significantly departs from that in \cite{BCNPT18} for the original protocol, and requires coping with non-trivial stochastic-dependence issues in the random choices of the algorithmic process which are due to the worst-case, sparse topology of the underlying graph.
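    The abstract describes the protocol only at a high level. Below is a minimal simulation sketch of a RAES-style round structure, assuming the following simple dynamics: in each round every client resubmits each still-unplaced request to a uniformly random neighboring server, and each server accepts incoming requests only while its load is below a constant capacity. The function name, the capacity parameter, and the exact accept/reject rule are illustrative assumptions, not details taken from the paper.

        import random

        def raes_style_simulation(neighbors, d=1, capacity=4, max_rounds=10_000):
            """Toy simulation of a RAES-style parallel load-balancing process.

            neighbors : dict mapping each client to the list of servers it may use
                        (the fixed bipartite topology).
            d         : number of requests per client.
            capacity  : illustrative constant cap on accepted requests per server.
            Returns (loads, rounds_used).
            """
            pending = {v: d for v in neighbors}   # unplaced requests per client
            loads = {}                            # accepted requests per server
            for r in range(1, max_rounds + 1):
                if not any(pending.values()):
                    return loads, r - 1
                submissions = {}                  # server -> clients submitting this round
                for v, k in pending.items():
                    for _ in range(k):
                        u = random.choice(neighbors[v])
                        submissions.setdefault(u, []).append(v)
                for u, reqs in submissions.items():
                    random.shuffle(reqs)          # break ties uniformly at random
                    for v in reqs:
                        if loads.get(u, 0) < capacity:
                            loads[u] = loads.get(u, 0) + 1
                            pending[v] -= 1       # this request is now placed for good
                        # otherwise the request is rejected and retried next round
            return loads, max_rounds

    On a dense, nearly regular topology this toy process typically empties the pending queue within a small number of rounds with maximum load bounded by the capacity constant, mirroring qualitatively the guarantees the paper proves rigorously.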

    Parallel Load Balancing on constrained client-server topologies

    We study parallel Load Balancing protocols for the client-server distributed model defined as follows. There is a set of n clients and a set of n servers, where each client has (at most) a constant number of requests that must be assigned to some server. The client set and the server set are connected to each other via a fixed bipartite graph: the requests of client v can only be sent to the servers in its neighborhood. The goal is to assign every client request so as to minimize the maximum load of the servers. In this setting, efficient parallel protocols are available only for dense topologies. In particular, a simple protocol, named raes, has been recently introduced by Becchetti et al. [1] for regular dense bipartite graphs. They show that this symmetric, non-adaptive protocol achieves constant maximum load with O(log n) parallel completion time and O(n) overall work, w.h.p. Motivated by proximity constraints arising in some client-server systems, we analyze raes over almost-regular bipartite graphs where nodes may have neighborhoods of small size. In detail, we prove that, w.h.p., the raes protocol keeps the same performance as above (in terms of maximum load, completion time, and work complexity, respectively) on any almost-regular bipartite graph with degree Ω(log^2 n). Our analysis significantly departs from that in [1] since it requires coping with non-trivial stochastic-dependence issues in the random choices of the algorithmic process which are due to the worst-case, sparse topology of the underlying graph.

    Data center resilience assessment: storage, networking and security.

    Data centers (DC) are the core of the national cyber infrastructure. With the incredible growth of critical data volumes in financial institutions, government organizations, and global companies, data centers are becoming larger and more distributed, posing more challenges for operational continuity in the presence of experienced cyber attackers and occasional natural disasters. The main objective of this research work is to present a new methodology for data center resilience assessment. This methodology consists of:
    • Defining data center resilience requirements.
    • Devising a high-level metric for data center resilience.
    • Designing and developing a tool to validate the metric.
    Since computer networks are an important component of the data center architecture, this research work was extended to investigate computer network resilience enhancement opportunities within the areas of routing protocols, redundancy, and server load, in order to minimize network downtime and increase the time period of resisting attacks. Data center resilience assessment is a complex process as it involves several aspects such as policies for emergencies, recovery plans, variation in data center operational roles, hosted/processed data types, and data center architectures. In this dissertation, however, storage, networking, and security are emphasized. The need for resilience assessment emerged due to the gap in existing reliability, availability, and serviceability (RAS) measures. Resilience as an evaluation metric leads to a better proactive perspective in system design and management. The proposed Data Center Resilience Assessment Portal (DC-RAP) is designed to easily integrate various operational scenarios. DC-RAP features a user-friendly interface to assess resilience in terms of performance analysis and speed of recovery by collecting the following information: time to detect attacks, time to resist, time to fail, and recovery time. Several sets of experiments were performed. Results obtained from investigating the impact of routing protocols and server load balancing algorithms on network resilience showed that using a particular routing protocol or server load balancing algorithm can enhance the network resilience level by minimizing downtime and ensuring speedy recovery. Experimental results from investigating the use of social network analysis (SNA) for identifying important routers in a computer network showed that SNA was successful in identifying such routers; this list of important routers can be used to add redundancy for those routers and so ensure a high level of resilience. Finally, experimental results for testing and validating the data center resilience assessment methodology using DC-RAP showed the ability of the methodology to quantify data center resilience in terms of providing steady performance, minimal recovery time, and maximum attack-resistance time. The main contributions of this work can be summarized as follows:
    • A methodology for evaluating data center resilience has been developed.
    • A Data Center Resilience Assessment Portal (DC-RAP) has been implemented for resilience evaluations.
    • The use of Social Network Analysis to improve computer network resilience has been investigated.
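    The abstract names the four timing measurements DC-RAP collects per incident but does not spell out how they are combined. The sketch below shows one hypothetical way to aggregate them into a single score; the dataclass fields mirror the measurements named above, while the formula, the function name resilience_score, and the normalization are illustrative assumptions rather than the dissertation's actual metric.

        from dataclasses import dataclass

        @dataclass
        class IncidentTimings:
            """Per-incident timings of the kind DC-RAP collects (all in minutes)."""
            time_to_detect: float   # elapsed time until the attack is detected
            time_to_resist: float   # time the system keeps serving while under attack
            time_to_fail: float     # time until service is lost (0 if no outage)
            recovery_time: float    # time to restore full service after failure

        def resilience_score(t: IncidentTimings) -> float:
            """Hypothetical score in (0, 1]: longer resistance raises the score,
            longer detection and recovery lower it. Illustration only, not the
            dissertation's metric."""
            downtime = t.time_to_fail + t.recovery_time
            return t.time_to_resist / (t.time_to_resist + t.time_to_detect + downtime + 1e-9)

        # Example: fast detection, long resistance, quick recovery -> score near 1.
        print(resilience_score(IncidentTimings(2.0, 120.0, 5.0, 10.0)))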

    Online client-server load balancing without global information

    We consider distributed online algorithms for maximizing throughput in a network of clients and servers, modeled as a bipartite graph. Unlike most prior work on online load balancing, we do not assume centralized control and seek algorithms and lower bounds for decentralized algorithms in which each participant has only local knowledge about the state of itself and its neighbors. Our problem can be seen as analogous to the recent work on oblivious routing in [8, 14, 19], but with the objective of maximizing throughput rather than minimizing congestion. In contrast to that work, we prove a strong lower bound (polynomial in n, the size of the graph) on the competitive ratio of any oblivious algorithm. This is accompanied by simple algorithms achieving upper bounds which are tight in terms of k, the maximum throughput achievable by an omniscient algorithm. Finally, we examine a restricted model in which clients, upon becoming active, must remain so for at least log(n) time steps. In contrast to the primarily negative results in the oblivious case, here we present an algorithm which is constant-competitive. Our lower bounds justify the intuition, implicit in earlier work on the subject [2], that some such restriction (i.e. requiring some stability in the demand pattern over time) is necessary in order to achieve a constant, or even polylogarithmic, competitive ratio.
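    To make the local-knowledge model concrete, the sketch below shows one synchronous step of a simple decentralized heuristic in which each active client looks only at the current loads of its own neighboring servers and routes its request to the least-loaded one with spare capacity. This is a baseline illustration of locally informed assignment under assumed dictionary-based state; it is not the algorithm analyzed in the paper, and the names local_greedy_step, server_load, and server_capacity are placeholders.

        def local_greedy_step(active_clients, neighbors, server_load, server_capacity):
            """One synchronous step: every active client routes a single request
            using only the loads of its own neighboring servers (local knowledge).
            Returns the number of requests served in this step."""
            served = 0
            for v in active_clients:
                # Servers this client can see, restricted to those with spare capacity.
                candidates = [u for u in neighbors[v] if server_load[u] < server_capacity[u]]
                if not candidates:
                    continue              # no local capacity: the request is lost
                u = min(candidates, key=lambda s: server_load[s])
                server_load[u] += 1       # commit to the least-loaded visible server
                served += 1
            return served

    In the abstract's terminology, the competitive ratio would compare the throughput accumulated by such local steps against k, the maximum throughput achievable by an omniscient algorithm.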