9 research outputs found
Efficient Load Balancing for Cloud Computing by Using Content Analysis
Nowadays, computer networks have grown rapidly due to the demand for information technology management and greater functionality. A service based on a single machine cannot accommodate large databases, so single servers must be combined into server-group services. The problem with grouped server services is that it is very hard to manage many devices with heterogeneous hardware. Cloud computing is a highly scalable computing infrastructure that shares existing resources, and it is a popular option for individuals and businesses for a number of reasons, including cost savings and security. This paper proposes an efficient load-balancing technique using HAProxy in cloud computing, with the objective of receiving and distributing the workload across computer servers to share processing resources. The proposed technique applies round-robin scheduling for efficient resource management of cloud storage systems, focusing on effective workload balancing and a dynamic replication strategy. The evaluation was based on benchmark data for requests per second and failed requests. The results showed that the proposed technique could improve load-balancing performance to 1,000 requests per 6.31 seconds in cloud computing and generate fewer false alarms.
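The core of the technique described above is HAProxy's round-robin dispatch. A minimal sketch of that policy, with illustrative server names (weights and health checks, which HAProxy also supports, are omitted):

```python
from itertools import cycle

# Hypothetical server pool; the names are illustrative, not from the paper.
servers = ["web-1", "web-2", "web-3"]

def round_robin(pool):
    """Yield servers in round-robin order, as HAProxy's
    `balance roundrobin` mode does at a high level."""
    return cycle(pool)

# Assign six incoming requests across the pool in turn.
rr = round_robin(servers)
assignments = [next(rr) for _ in range(6)]
print(assignments)  # -> each server receives every third request
```

In an actual HAProxy deployment this policy is configured in the backend section rather than coded by hand; the sketch only shows why the load spreads evenly under uniform request cost.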
User Scheduling for Precoded Satellite Systems with Individual Quality of Service Constraints
Multibeam high throughput satellite (MB-HTS) systems will play a key role in
delivering broadband services to a large number of users with diverse Quality
of Service (QoS) requirements. This paper focuses on MB-HTS where the same
spectrum is re-used by all user links and, in particular, we propose a novel
user scheduling design capable of providing guarantees in terms of individual QoS
requirements while maximizing the system throughput. This is achieved by
precoding to mitigate mutual interference. The combinatorial structure of the
problem makes obtaining the global optimum extremely costly, even
with a reduced number of users. We, therefore, propose a heuristic algorithm
yielding a good local solution and tolerable computational complexity,
applicable for large-scale networks. Numerical results demonstrate the
effectiveness of our proposed algorithm in scheduling many users with higher
sum throughput than the benchmark schemes, while the QoS requirements of all
scheduled users are guaranteed.
Comment: 6 pages, 2 figures, accepted for presentation at PIMRC 202
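The heuristic the abstract describes admits users one at a time while enforcing each user's individual QoS constraint. A much-simplified sketch of that greedy idea (the paper's actual algorithm also accounts for precoding and mutual interference; here achievable rates are taken as given):

```python
def greedy_schedule(rates, qos_min, max_users):
    """Greedy heuristic, an illustrative simplification of the paper's
    algorithm: visit candidate users in order of achievable rate and
    admit a user only if their individual minimum-rate (QoS) constraint
    holds, until the scheduling budget is exhausted."""
    order = sorted(range(len(rates)), key=lambda u: rates[u], reverse=True)
    scheduled = []
    for u in order:
        if len(scheduled) >= max_users:
            break
        if rates[u] >= qos_min[u]:  # individual QoS constraint
            scheduled.append(u)
    return scheduled

# Example: 4 candidate users, at most 2 scheduled per frame.
print(greedy_schedule([3.0, 1.0, 2.5, 0.5],
                      qos_min=[1.0, 2.0, 1.0, 0.2],
                      max_users=2))  # -> [0, 2]
```

The greedy pass runs in O(N log N) for N users, which is what makes this style of heuristic "applicable for large-scale networks" in contrast to the combinatorial search for the global optimum.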
A Unified Framework for SINR Analysis in Poisson Networks with Traffic Dynamics
We study the performance of wireless links for a class of Poisson networks,
in which packets arrive at the transmitters following Bernoulli processes. By
combining stochastic geometry with queueing theory, two fundamental measures
are analyzed, namely the transmission success probability and the meta
distribution of signal-to-interference-plus-noise ratio (SINR). Different from
the conventional approaches that assume independent active states across the
nodes and use homogeneous point processes to model the locations of
interferers, our analysis accounts for the interdependency amongst active
states of the transmitters in space and arrives at a non-homogeneous point
process for the modeling of interferers' positions, which leads to a more
accurate characterization of the SINR. The accuracy of the theoretical results
is verified by simulations, and the developed framework is then used to devise
design guidelines for the deployment strategies of wireless networks.
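One of the two measures analyzed above, the transmission success probability P(SINR > θ), can be estimated by Monte Carlo simulation. The sketch below uses the conventional homogeneous-Poisson baseline that the paper improves upon (Rayleigh fading, path-loss exponent 4); all parameter values are illustrative:

```python
import math
import random

def poisson_sample(rng, mean):
    """Draw a Poisson random variate via Knuth's method (small means)."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def success_probability(lam, theta, d, n_trials=2000, region=10.0,
                        noise=1e-6, seed=1):
    """Monte Carlo estimate of P(SINR > theta) for a link of length d whose
    interferers form a homogeneous Poisson process of density lam on a
    [-region, region]^2 window, with Rayleigh fading and path-loss
    exponent 4. A didactic baseline: the paper's contribution is that,
    with traffic dynamics, the interferer process is non-homogeneous."""
    rng = random.Random(seed)
    alpha = 4.0
    area = (2.0 * region) ** 2
    wins = 0
    for _ in range(n_trials):
        signal = rng.expovariate(1.0) * d ** (-alpha)  # Rayleigh fading
        interference = 0.0
        for _ in range(poisson_sample(rng, lam * area)):
            x = rng.uniform(-region, region)
            y = rng.uniform(-region, region)
            r = max(math.hypot(x, y), 1e-3)  # keep path loss finite
            interference += rng.expovariate(1.0) * r ** (-alpha)
        if signal / (noise + interference) > theta:
            wins += 1
    return wins / n_trials
```

As expected, the estimate decreases in the SINR threshold θ and in the interferer density λ; comparing it against the non-homogeneous model is exactly the kind of accuracy check the abstract refers to.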
Optimizing Information Freshness in Wireless Networks: A Stochastic Geometry Approach
Optimization of information freshness in wireless networks has usually been
performed based on queueing analysis that captures only the temporal traffic
dynamics associated with the transmitters and receivers. However, the effect of
interference, which is mainly dominated by the interferers' geographic
locations, is not well understood. In this paper, we leverage a spatiotemporal
model, which allows one to characterize the age of information (AoI) from a
joint queueing-geometry perspective, for the design of a decentralized
scheduling policy that exploits local observation to make transmission
decisions that minimize the AoI. To quantify the performance, we also derive
accurate and tractable expressions for the peak AoI. Numerical results reveal
that: i) the packet arrival rate directly affects the service process due to
queueing interactions, ii) the proposed scheme can adapt to traffic variations
and largely reduce the peak AoI, and iii) the proposed scheme scales well as
the network grows in size. The latter is achieved by adaptively adjusting the
radio access probability at each transmitter in response to changes in the
ambient environment.
Comment: arXiv admin note: substantial text overlap with arXiv:1907.0967
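The queueing side of the peak AoI metric used above can be illustrated in a much simpler setting than the paper's spatiotemporal model. For an FCFS M/M/1 queue, the AoI peaks just before update i is delivered, at Y_i + T_i (interarrival gap plus system time), and the known closed form is E[peak AoI] = 1/λ + 1/(μ − λ). A short event-driven check of that formula:

```python
import random

def sim_peak_aoi(lam, mu, n_pkts=40000, seed=7):
    """Simulate average peak AoI for an FCFS M/M/1 queue with arrival
    rate lam and service rate mu: accumulate Y_i + T_i per packet.
    Closed form for comparison: 1/lam + 1/(mu - lam)."""
    rng = random.Random(seed)
    t_arrival = 0.0       # generation time of the current packet
    last_departure = 0.0  # departure time of the previous packet
    total = 0.0
    for _ in range(n_pkts):
        gap = rng.expovariate(lam)             # interarrival gap Y_i
        t_arrival += gap
        start = max(t_arrival, last_departure)  # FCFS: wait for the server
        last_departure = start + rng.expovariate(mu)
        total += gap + (last_departure - t_arrival)  # Y_i + T_i
    return total / n_pkts

# With lam = 0.5 and mu = 1.0 the closed form gives 1/0.5 + 1/0.5 = 4.0,
# and the simulated average converges to it.
print(sim_peak_aoi(0.5, 1.0))
```

The paper's setting is harder precisely because the service process is not an independent M/M/1 server: it is coupled to the interferers' locations and activity, which is what the joint queueing-geometry analysis captures.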
Scheduling Policies for Federated Learning in Wireless Networks
Motivated by the increasing computational capacity of wireless user
equipments (UEs), e.g., smart phones, tablets, or vehicles, as well as the
increasing concerns about sharing private data, a new machine learning model
has emerged, namely federated learning (FL), that allows a decoupling of data
acquisition and computation at the central unit. Unlike centralized learning
taking place in a data center, FL usually operates in a wireless edge network
where the communication medium is resource-constrained and unreliable. Due to
limited bandwidth, only a portion of UEs can be scheduled for updates at each
iteration. Due to the shared nature of the wireless medium, transmissions are
subject to interference and are not guaranteed to succeed. The performance of
an FL system in such a setting is not well understood. In this paper, an
analytical model is
developed to characterize the performance of FL in wireless networks.
Particularly, tractable expressions are derived for the convergence rate of FL
in a wireless setting, accounting for effects from both scheduling schemes and
inter-cell interference. Using the developed analysis, the effectiveness of
three different scheduling policies, i.e., random scheduling (RS), round robin
(RR), and proportional fair (PF), are compared in terms of FL convergence rate.
It is shown that running FL with PF outperforms RS and RR if the network is
operating under a high signal-to-interference-plus-noise ratio (SINR)
threshold, while RR is preferable when the SINR threshold is low.
Moreover, the FL convergence rate decreases rapidly as the SINR threshold
increases, thus confirming the importance of compression and quantization of
the update parameters. The analysis also reveals a trade-off between the number
of scheduled UEs and subchannel bandwidth under a fixed amount of available
spectrum.
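The three scheduling policies compared above differ only in how k of the N UEs are selected each round. A minimal sketch of the selection step (the PF metric shown, instantaneous rate over historical average rate, is the classic form; the paper's exact metric may differ):

```python
import math
import random

def select_ues(policy, round_idx, sinrs, avg_rates, k, rng=None):
    """Pick k of the N UEs for one FL communication round under random
    scheduling (RS), round robin (RR), or proportional fair (PF)."""
    n = len(sinrs)
    if policy == "RS":
        # Uniformly random subset of k UEs.
        return sorted((rng or random).sample(range(n), k))
    if policy == "RR":
        # Deterministic rotation: each UE is visited once every ceil(n/k) rounds.
        return [(round_idx * k + j) % n for j in range(k)]
    if policy == "PF":
        # Rank by instantaneous rate normalized by historical average rate.
        metric = [math.log2(1.0 + sinrs[u]) / max(avg_rates[u], 1e-9)
                  for u in range(n)]
        return sorted(range(n), key=lambda u: metric[u], reverse=True)[:k]
    raise ValueError(f"unknown policy: {policy}")
```

Plugging these selections into an FL training loop (and marking an update as lost when the scheduled UE's SINR falls below the decoding threshold) reproduces the qualitative trade-off the abstract reports: PF wins at high SINR thresholds, RR at low ones.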