Scheduling for next generation WLANs: filling the gap between offered and observed data rates
In wireless networks, opportunistic scheduling is used to increase system throughput by exploiting multi-user diversity. Although recent advances have increased the physical layer data rates supported in wireless local area networks (WLANs), the actual throughput realized is significantly lower due to overhead. Accordingly, the frame aggregation concept is used in next generation WLANs to improve efficiency. However, with frame aggregation, traditional opportunistic schemes are no longer optimal. In this paper, we propose schedulers that jointly take queue and channel conditions into account to maximize the throughput observed at the users in next generation WLANs. We also extend this work to design two schedulers that perform block scheduling to maximize network throughput over multiple transmission sequences. For these schedulers, which make decisions over long time durations, we model the system using queueing theory and determine users' temporal access proportions according to this model. Through detailed simulations, we show that all our proposed algorithms offer significant throughput improvement, better fairness, and much lower delay compared with traditional opportunistic schedulers, facilitating the practical use of the evolving standard for next generation wireless networks.
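The joint queue-and-channel idea above can be illustrated with a minimal sketch (the scoring rule and field names are illustrative assumptions, not the paper's exact schedulers): with frame aggregation, the payload deliverable in one transmission opportunity is capped by both the PHY rate and the queue backlog, so a scheduler can rank users by deliverable payload rather than by channel rate alone.

```python
def select_user(users, airtime):
    """Pick the user maximizing the payload deliverable in one
    aggregated frame: min(queue_bits, phy_rate * airtime).
    A channel-only opportunistic rule would rank by phy_rate alone
    and could waste airtime on a user with little queued data."""
    return max(users, key=lambda u: min(u["queue_bits"],
                                        u["phy_rate"] * airtime))

users = [
    {"name": "A", "queue_bits": 2_000_000, "phy_rate": 54e6},   # good channel, deep queue
    {"name": "B", "queue_bits": 10_000,    "phy_rate": 150e6},  # best channel, near-empty queue
]
# With a 2 ms opportunity, A can deliver 108,000 bits but B only 10,000,
# so the joint rule picks A even though B has the higher PHY rate.
best = select_user(users, airtime=2e-3)
```

A rate-only scheduler would have chosen B here, illustrating why traditional opportunistic schemes stop being optimal once aggregation makes queue depth matter.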
DEPAS: A Decentralized Probabilistic Algorithm for Auto-Scaling
The dynamic provisioning of virtualized resources offered by cloud computing
infrastructures allows applications deployed in a cloud environment to
automatically increase and decrease the amount of used resources. This
capability is called auto-scaling and its main purpose is to automatically
adjust the scale of the system that is running the application to satisfy the
varying workload with minimum resource utilization. The need for auto-scaling
is particularly important during workload peaks, in which applications may need
to scale up to extremely large-scale systems.
Both the research community and the main cloud providers have already
developed auto-scaling solutions. However, most research solutions are
centralized and not suitable for managing large-scale systems; moreover, cloud
providers' solutions are bound to the limitations of a specific provider in
terms of resource prices, availability, reliability, and connectivity.
In this paper we propose DEPAS, a decentralized probabilistic auto-scaling
algorithm integrated into a P2P architecture that is cloud provider
independent, thus allowing the auto-scaling of services over multiple cloud
infrastructures at the same time. Our simulations, which are based on real
service traces, show that our approach is capable of: (i) keeping the overall
utilization of all the instantiated cloud resources in a target range, (ii)
maintaining service response times close to the ones obtained using optimal
centralized auto-scaling approaches.
Comment: Submitted to Springer Computing
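The decentralized probabilistic step can be sketched as follows (a minimal illustration; the thresholds and the exact probability law are assumptions, not the published DEPAS rule): each node acts only on its local utilization estimate, and acts merely with a probability proportional to the deviation from the target range, so the expected number of additions or removals across many independent nodes tracks the actual capacity gap without any central coordinator.

```python
import random

def depas_decision(local_util, target_lo=0.5, target_hi=0.8, rng=random.random):
    """One node's probabilistic auto-scaling step (illustrative sketch).
    Returns "add", "remove", or "keep". Acting with probability
    proportional to the deviation from the target range keeps the
    aggregate reaction of many independent nodes proportional to the
    real overload/underload, with no global coordination."""
    mid = (target_lo + target_hi) / 2
    if local_util > target_hi:
        p = (local_util - mid) / mid        # overload -> chance to spawn an instance
        return "add" if rng() < p else "keep"
    if local_util < target_lo:
        p = (mid - local_util) / mid        # underload -> chance to shut down
        return "remove" if rng() < p else "keep"
    return "keep"
```

Injecting `rng` makes the rule testable; in a deployment each node would call this periodically with its locally measured utilization.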
Lifetime-aware cloud data centers: models and performance evaluation
We present a model to evaluate the server lifetime in cloud data centers (DCs). In particular, when the server power level is decreased, the failure rate tends to be reduced as a consequence of the limited number of components powered on. However, the variation between the different power states triggers a failure rate increase. We therefore consider these two effects in a server lifetime model, subject to an energy-aware management policy. We then evaluate our model in a realistic case study. Our results show that the impact on the server lifetime is far from negligible. As a consequence, we argue that a lifetime-aware approach should be pursued to decide how and when to apply a power state change to a server.
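The two competing effects described above can be captured in a toy accumulator (the rates and the additive transition penalty are illustrative assumptions, not the paper's calibrated model): time in each power state contributes that state's failure rate, while every state change adds a fixed penalty, so aggressive power cycling can erase the lifetime benefit of running at low power.

```python
def failure_exposure(schedule, lam_full=1.0, lam_low=0.4, lam_switch=0.05):
    """Accumulate a server's failure exposure over a power schedule,
    given as a list of (state, hours) pairs. Rates are illustrative.
    Each hour in a state contributes that state's failure rate;
    each power-state transition adds a fixed penalty lam_switch."""
    rates = {"full": lam_full, "low": lam_low}
    total, prev = 0.0, None
    for state, hours in schedule:
        total += rates[state] * hours
        if prev is not None and state != prev:
            total += lam_switch          # wear caused by the transition itself
        prev = state
    return total
```

Under these toy numbers, spending half the time at low power helps (7.05 vs 10.0 exposure over 10 hours), but a policy that toggles states every hour would pay the switch penalty nine times, which is exactly the trade-off a lifetime-aware policy must weigh.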
Traffic-Driven Spectrum Allocation in Heterogeneous Networks
Next generation cellular networks will be heterogeneous with dense deployment
of small cells in order to deliver high data rate per unit area. Traffic
variations are more pronounced in a small cell, which in turn lead to more
dynamic interference to other cells. It is crucial to adapt radio resource
management to traffic conditions in such a heterogeneous network (HetNet). This
paper studies the optimization of spectrum allocation in HetNets on a
relatively slow timescale based on average traffic and channel conditions
(typically over seconds or minutes). Specifically, in a cluster of base
transceiver stations (BTSs), the optimal partition of the spectrum into
segments is determined, corresponding to all possible spectrum reuse patterns
in the downlink. Each BTS's traffic is modeled using a queue with Poisson
arrivals, the service rate of which is a linear function of the combined
bandwidth of all assigned spectrum segments. With the system average packet
sojourn time as the objective, a convex optimization problem is first
formulated, and it is shown that the optimal allocation divides the spectrum
into a bounded number of segments. A second, refined model is then proposed to address
queue interactions due to interference, where the corresponding optimal
allocation problem admits an efficient suboptimal solution. Both allocation
schemes attain the entire throughput region of a given network. Simulation
results show the two schemes perform similarly in the heavy-traffic regime, in
which case they significantly outperform both the orthogonal allocation and the
full-frequency-reuse allocation. The refined allocation shows the best
performance under all traffic conditions.
Comment: 13 pages, 11 figures, accepted for publication by JSAC-HC
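The queueing model in this abstract can be sketched directly (an illustrative evaluation only; the function name, the unit service-rate constant, and the example numbers are assumptions): each BTS is a queue whose service rate grows linearly with its assigned bandwidth, and the objective is the arrival-rate-weighted average sojourn time, here written with the standard M/M/1 formula 1/(mu - lambda).

```python
def avg_sojourn(lams, bands, c=1.0):
    """System-average packet sojourn time when each BTS i is an M/M/1
    queue with arrival rate lams[i] and service rate c * bands[i]
    (service rate linear in assigned bandwidth, as in the abstract).
    Returns None if any queue is unstable (lambda >= mu)."""
    if any(lam >= c * b for lam, b in zip(lams, bands)):
        return None
    total = sum(lams)
    # Per-queue mean sojourn time is 1 / (mu - lambda); weight by arrivals.
    return sum(lam / (c * b - lam) for lam, b in zip(lams, bands)) / total
```

For an asymmetric load of (3, 1) packets/s over 10 units of bandwidth, a traffic-proportional split (7.5, 2.5) yields a lower average sojourn time than an equal split (5, 5), which is the kind of gap a traffic-driven allocation exploits.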
CA-AQM: Channel-Aware Active Queue Management for Wireless Networks
In a wireless network, data transmission suffers from varied signal strengths and channel bit error rates. To ensure successful packet reception under different channel conditions, automatic bit rate control schemes are implemented to adjust the transmission bit rates based on the perceived channel conditions. This leads to a wireless network with diverse bit rates. On the other hand, TCP is unaware of such rate diversity when it performs flow rate control in wireless networks. Experiments show that the throughputs of flows in a wireless network are driven by the flow with the lowest bit rate (i.e., the one with the worst channel condition). This not only leads to low channel utilization but also to fluctuating performance for all flows, independent of their individual channel conditions.
To address this problem, we conduct an optimization-based analytical study of this behavior of TCP. Based on this optimization framework, we present a joint flow control and active queue management solution. The presented channel-aware active queue management (CA-AQM) provides congestion signals for flow control based not only on the queue length but also on the channel condition and the transmission bit rate. Theoretical analysis shows that our solution isolates the performance of individual flows with diverse bit rates. Further, it stabilizes the queue lengths and provides a time-fair channel allocation. Experiments validate our theoretical claims on a multi-rate wireless network testbed.
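The channel-aware congestion signal can be sketched as a marking probability (a minimal illustration; the parameter names, reference values, and the multiplicative form are assumptions, not the paper's exact controller): for the same backlog, a flow on a slower link receives a stronger signal, because each of its queued packets costs more airtime, which nudges flow control toward time-fair rather than throughput-fair sharing.

```python
def mark_probability(queue_len, bit_rate, q_ref=50, rate_ref=54e6, k=0.01):
    """Illustrative channel-aware AQM marking rule. The congestion
    signal scales with queue length (as in a plain AQM) and with the
    airtime cost of the link: a link at rate_ref/n gets an n-times
    stronger signal for the same backlog. Result is clipped to [0, 1]."""
    airtime_weight = rate_ref / bit_rate       # slower link -> larger weight
    p = k * (queue_len / q_ref) * airtime_weight
    return min(1.0, max(0.0, p))
```

A queue-length-only AQM corresponds to `airtime_weight = 1` for everyone, which is exactly the rate-blindness the abstract identifies as the cause of all flows collapsing to the slowest flow's throughput.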
Upstream traffic capacity of a WDM EPON under online GATE-driven scheduling
Passive optical networks are increasingly used for access to the Internet and
it is important to understand the performance of future long-reach,
multi-channel variants. In this paper we discuss requirements on the dynamic
bandwidth allocation (DBA) algorithm used to manage the upstream resource in a
WDM EPON and propose a simple novel DBA algorithm that is considerably more
efficient than classical approaches. We demonstrate that the algorithm emulates
a multi-server polling system and derive capacity formulas that are valid for
general traffic processes. We evaluate delay performance by simulation,
demonstrating the superiority of the proposed scheduler, which offers
considerable flexibility and is particularly efficient in long-reach
access networks where propagation times are high.
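The multi-server polling behavior that the DBA algorithm emulates can be sketched with a greedy online assignment (an illustrative simplification; guard times, propagation delays, and the GATE/REPORT message exchange are omitted, and the function name is an assumption): each granted upstream transmission goes to the wavelength that frees up earliest, so the WDM channels serve the ONUs like a bank of servers in a polling system.

```python
import heapq

def schedule_grants(grants, n_wavelengths):
    """Assign each upstream grant (a transmission duration) to the
    wavelength that becomes free earliest, tracked with a min-heap of
    (free_time, wavelength) pairs. Returns per-grant (wavelength,
    start_time) in grant order."""
    free = [(0.0, w) for w in range(n_wavelengths)]
    heapq.heapify(free)
    plan = []
    for dur in grants:
        t, w = heapq.heappop(free)       # earliest-available wavelength
        plan.append((w, t))
        heapq.heappush(free, (t + dur, w))
    return plan

# Three 2 ms grants on two wavelengths: the first two start immediately
# in parallel and the third starts as soon as a channel frees up.
plan = schedule_grants([2e-3, 2e-3, 2e-3], n_wavelengths=2)
```

Because grants are placed as soon as any channel is free, no wavelength idles while work is queued, which is the work-conserving property behind the capacity formulas in the abstract.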