5,825 research outputs found
Load Balancing via Random Local Search in Closed and Open Systems
In this paper, we analyze the performance of random load resampling and
migration strategies in parallel server systems. Clients initially attach to an
arbitrary server, but may switch server independently at random instants of
time in an attempt to improve their service rate. This approach to load
balancing contrasts with traditional approaches where clients make smart server
selections upon arrival (e.g., Join-the-Shortest-Queue policy and variants
thereof). Load resampling is particularly relevant in scenarios where clients
cannot predict the load of a server before actually being attached to it. An
important example is in wireless spectrum sharing where clients try to share a
set of frequency bands in a distributed manner.
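The abstract only sketches the resampling dynamic, so a minimal single-run simulation may help fix ideas. The sketch below assumes a closed system (a fixed client population), equal-capacity servers, and a greedy migration rule in which a client moves only if the blindly probed server would give it a strictly better service rate; all function and parameter names are illustrative, not taken from the paper.

```python
import random

def simulate_resampling(num_clients=100, num_servers=10, steps=10_000, seed=0):
    """Toy closed-system simulation of random load resampling.

    At each step a uniformly chosen client probes a uniformly chosen
    server and migrates only if doing so would strictly improve its
    service rate (i.e., the destination would hold fewer clients than
    its current server does).
    """
    rng = random.Random(seed)
    assignment = [rng.randrange(num_servers) for _ in range(num_clients)]
    load = [0] * num_servers
    for s in assignment:
        load[s] += 1

    for _ in range(steps):
        c = rng.randrange(num_clients)           # client that wakes up
        candidate = rng.randrange(num_servers)   # blindly resampled server
        current = assignment[c]
        if load[candidate] + 1 < load[current]:  # migrate only if it helps
            load[current] -= 1
            load[candidate] += 1
            assignment[c] = candidate
    return max(load) - min(load)                 # residual imbalance

if __name__ == "__main__":
    print("final imbalance:", simulate_resampling())
```

Under this greedy rule the dynamic can only stop once no server exceeds another by more than one client, which is the kind of balance property the paper analyzes rigorously.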
A Statistical Mechanical Load Balancer for the Web
The maximum entropy principle from statistical mechanics states that a closed
system attains an equilibrium distribution that maximizes its entropy. We first
show that for graphs with a fixed number of edges one can define a stochastic
edge dynamic that can serve as an effective thermalization scheme, and hence,
the underlying graphs are expected to attain their maximum-entropy states,
which turn out to be Erdos-Renyi (ER) random graphs. We next show that (i) a
rate-equation based analysis of node degree distribution does indeed confirm
the maximum-entropy principle, and (ii) the edge dynamic can be effectively
implemented using short random walks on the underlying graphs, leading to a
local algorithm for the generation of ER random graphs. The resulting
statistical mechanical system can be adapted to provide a distributed and local
(i.e., without any centralized monitoring) mechanism for load balancing, which
can have a significant impact in increasing the efficiency and utilization of
both the Internet (e.g., efficient web mirroring), and large-scale computing
infrastructure (e.g., cluster and grid computing).
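As a rough illustration of the edge dynamic described above, the sketch below rewires one endpoint of a randomly chosen edge to the endpoint of a short random walk, keeping the total number of edges fixed; the walk length, the starting graph, and all names are assumptions for illustration rather than the paper's exact protocol.

```python
import random

def rewire_via_random_walks(adj, steps=10_000, walk_len=3, seed=0):
    """Edge-conserving stochastic dynamic driven by short random walks.

    Repeatedly pick a random edge (u, v), walk a few steps from v to
    some node w, and rewire the edge to (u, w) when that keeps the
    graph simple.  Conserving the edge count while randomizing the
    endpoints pushes the graph toward a maximum-entropy (ER-like) state.
    """
    rng = random.Random(seed)
    edges = [(u, v) for u in adj for v in adj[u] if u < v]

    for _ in range(steps):
        i = rng.randrange(len(edges))
        u, v = edges[i]
        w = v
        for _ in range(walk_len):                 # short random walk from v
            w = rng.choice(sorted(adj[w]))
        if w != u and w != v and w not in adj[u]:
            adj[u].discard(v); adj[v].discard(u)  # drop the old edge
            adj[u].add(w); adj[w].add(u)          # add the rewired edge
            edges[i] = (min(u, w), max(u, w))
    return adj

# example: start from a ring of 50 nodes and let the dynamic mix it
n = 50
ring = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
mixed = rewire_via_random_walks(ring)
```

Because only an endpoint moves at each step, the edge count is invariant, which is exactly the constraint under which the maximum-entropy state is an ER random graph.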
Clustering Algorithms for Scale-free Networks and Applications to Cloud Resource Management
In this paper we introduce algorithms for the construction of scale-free
networks and for clustering around nerve centers, i.e., nodes with high
connectivity in a scale-free network. We argue that such overlay networks
could support self-organization in a complex system like a cloud computing
infrastructure and allow the implementation of optimal resource management
policies.
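The abstract does not spell out the construction or the clustering rule, so the sketch below pairs a generic preferential-attachment builder with a simple hub-centred clustering pass; both functions, their parameters, and the tie-breaking rule are illustrative assumptions, not the paper's algorithms.

```python
import random

def preferential_attachment(n=200, m=2, seed=0):
    """Generic preferential-attachment (Barabasi-Albert style) builder;
    returns an adjacency dict whose degree distribution is scale-free."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    targets = list(range(m))            # node ids, repeated by degree
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for t in chosen:
            adj[new].add(t); adj[t].add(new)
            targets.extend([new, t])
    return adj

def cluster_around_hubs(adj, num_hubs=5):
    """Treat the highest-degree nodes as 'nerve centers' and attach every
    other node to the hub it shares the most neighbors with."""
    hubs = sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:num_hubs]
    clusters = {h: {h} for h in hubs}
    for v in adj:
        if v in hubs:
            continue
        best = max(hubs, key=lambda h: len(adj[v] & adj[h]) + (h in adj[v]))
        clusters[best].add(v)
    return clusters

clusters = cluster_around_hubs(preferential_attachment())
```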
Distributed Graph Embedding with Information-Oriented Random Walks
Graph embedding maps graph nodes to low-dimensional vectors, and is widely
adopted in machine learning tasks. The increasing availability of billion-edge
graphs underscores the importance of learning efficient and effective
embeddings on large graphs, such as link prediction on Twitter with over one
billion edges. Most existing graph embedding methods fall short of reaching
high data scalability. In this paper, we present a general-purpose,
distributed, information-centric random walk-based graph embedding framework,
DistGER, which can scale to embed billion-edge graphs. DistGER incrementally
computes information-centric random walks. It further leverages a
multi-proximity-aware, streaming, parallel graph partitioning strategy,
simultaneously achieving high local partition quality and excellent workload
balancing across machines. DistGER also improves the distributed Skip-Gram
learning model to generate node embeddings by optimizing the access locality,
CPU throughput, and synchronization efficiency. Experiments on real-world
graphs demonstrate that compared to state-of-the-art distributed graph
embedding frameworks, including KnightKing, DistDGL, and Pytorch-BigGraph,
DistGER exhibits 2.33x-129x acceleration, 45% reduction in cross-machine
communication, and >10% effectiveness improvement in downstream tasks.
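DistGER's information-centric walks, streaming partitioner, and distributed Skip-Gram are beyond a short excerpt, but a single-machine DeepWalk-style baseline shows the underlying walk-then-Skip-Gram pipeline the framework scales up. The gensim dependency, the uniform walks, and all parameter values below are assumptions for illustration, not DistGER's implementation.

```python
import random
from gensim.models import Word2Vec   # pip install gensim

def random_walks(adj, walks_per_node=10, walk_len=40, seed=0):
    """Uniform random walks over an adjacency dict (DistGER adapts the
    amount of walking per node via an information measure; that
    refinement is omitted here)."""
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk, node = [str(start)], start
            for _ in range(walk_len - 1):
                if not adj[node]:
                    break
                node = rng.choice(sorted(adj[node]))
                walk.append(str(node))
            walks.append(walk)
    return walks

def embed(adj, dim=128):
    """Feed the walks to a Skip-Gram model to obtain one vector per node."""
    model = Word2Vec(random_walks(adj), vector_size=dim, window=10,
                     min_count=0, sg=1, workers=4, epochs=5)
    return {node: model.wv[node] for node in model.wv.index_to_key}
```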
Load Balancing Techniques in Cloud Computing
As cloud computing grows rapidly and clients demand more services and better results, load balancing for the cloud has become an important research area. Among the top challenges and issues faced by cloud computing are security, availability, and performance. Availability in particular depends on efficient load balancing, resource utilization, and live migration of data in the server. In clouds, load balancing is applied across different data centres to ensure network availability by minimizing the use of computer hardware, reducing software failures, and mitigating resource limitations. Load balancing is essential for efficient operation in distributed environments. Hence this paper presents the various existing load balancing techniques in cloud computing based on different parameters.
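The abstract does not name the surveyed techniques, so as a point of reference the sketch below contrasts two baselines that most cloud load-balancing surveys cover: static round-robin dispatch versus dynamic least-connections dispatch. Class and method names are illustrative.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Static policy: requests go to servers in a fixed rotation,
    regardless of how busy each server currently is."""
    def __init__(self, servers):
        self._ring = cycle(servers)

    def pick(self):
        return next(self._ring)

class LeastConnectionsBalancer:
    """Dynamic policy: each request goes to the server with the fewest
    active requests, so observed load feeds back into the decision."""
    def __init__(self, servers):
        self.active = {s: 0 for s in servers}

    def pick(self):
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server):
        self.active[server] -= 1
```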
Dynamic Load Balancing Algorithms For Cloud Computing
In cloud computing, load balancing is one of the major requirements. Load is simply the amount of work that a system performs, and can be classified as CPU load, memory load, and network load. Load balancing is the process of dividing tasks among the various nodes of a distributed system to improve both resource utilization and job response time, while avoiding situations where some nodes are heavily loaded and others are idle. Load balancing ensures that every node in the network has an equal amount of work (as per its capacity) at any instant of time. In this paper we survey the existing load balancing algorithms for a cloud-based environment.
DOI: 10.17762/ijritcc2321-8169.150612
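A small sketch of the "equal work as per capacity" idea in the abstract: compute each node's fair share from its capacity and plan migrations from overloaded to underloaded nodes. The threshold, the proportional-share rule, and all names are assumptions for illustration; the surveyed algorithms differ precisely in such choices.

```python
def rebalance(loads, capacities, threshold=0.1):
    """Threshold-based dynamic rebalancing sketch.

    loads[i] is the work currently on node i and capacities[i] its
    capacity.  Nodes whose load exceeds their fair share by more than
    `threshold` donate work to nodes below theirs, so every node ends
    up with roughly equal work relative to its capacity.
    """
    total_load, total_cap = sum(loads), sum(capacities)
    target = [total_load * c / total_cap for c in capacities]
    surplus = [loads[i] - t for i, t in enumerate(target)]

    donors = [(i, s) for i, s in enumerate(surplus) if s > threshold * target[i]]
    takers = [(i, -s) for i, s in enumerate(surplus) if -s > threshold * target[i]]

    moves = []                              # (from_node, to_node, amount)
    for d_i, d_amt in donors:
        for k, (t_i, t_amt) in enumerate(takers):
            if d_amt <= 0:
                break
            amount = min(d_amt, t_amt)
            if amount > 0:
                moves.append((d_i, t_i, amount))
                d_amt -= amount
                takers[k] = (t_i, t_amt - amount)
    return moves

# example: three equal-capacity nodes, one heavily loaded and one idle
print(rebalance(loads=[9, 3, 0], capacities=[1, 1, 1]))
```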
Cache Serializability: Reducing Inconsistency in Edge Transactions
Read-only caches are widely used in cloud infrastructures to reduce access
latency and load on backend databases. Operators view coherent caches as
impractical at genuinely large scale, and many client-facing caches are updated
in an asynchronous manner with best-effort pipelines. Existing solutions that
support cache consistency are inapplicable to this scenario since they require
a round trip to the database on every cache transaction.
Existing incoherent cache technologies are oblivious to transactional data
access, even if the backend database supports transactions. We propose T-Cache,
a novel caching policy for read-only transactions in which inconsistency is
tolerable (won't cause safety violations) but undesirable (has a cost). T-Cache
improves cache consistency despite asynchronous and unreliable communication
between the cache and the database. We define cache-serializability, a variant
of serializability that is suitable for incoherent caches, and prove that with
unbounded resources T-Cache implements this new specification. With limited
resources, T-Cache allows the system manager to choose a trade-off between
performance and consistency.
Our evaluation shows that T-Cache detects many inconsistencies with only
nominal overhead. We use synthetic workloads to demonstrate the efficacy of
T-Cache when data accesses are clustered and its adaptive reaction to workload
changes. With workloads based on real-world topologies, T-Cache detects
43-70% of the inconsistencies and increases the rate of consistent transactions
by 33-58%.
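To make "tolerable but undesirable inconsistency" concrete, the sketch below shows one way a cache could tag each entry with the versions its writing transaction depended on and flag read-only transactions whose read set violates those dependencies; the data layout and names are assumptions for illustration rather than T-Cache's actual design.

```python
class CacheEntry:
    """A cached value, its version, and the versions of other objects
    that the transaction which wrote it depended on."""
    def __init__(self, value, version, deps=None):
        self.value = value
        self.version = version        # version of this object
        self.deps = dict(deps or {})  # key -> minimum consistent version

def is_consistent(reads):
    """Check a read-only transaction's read set (key -> CacheEntry).

    The read set is flagged as inconsistent when some entry depends on
    a newer version of another object than the one actually read, i.e.
    the reads could not have coexisted in any single database snapshot.
    """
    for entry in reads.values():
        for dep_key, min_version in entry.deps.items():
            if dep_key in reads and reads[dep_key].version < min_version:
                return False          # stale companion read detected
    return True

# example: y was written alongside x's version 7, but the cache still
# returns x at version 6 -- the read-only transaction is inconsistent
reads = {"x": CacheEntry("old", 6),
         "y": CacheEntry("new", 3, deps={"x": 7})}
print(is_consistent(reads))   # False
```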