Selective Fair Scheduling over Fading Channels
Imposing fairness in resource allocation incurs a loss of system throughput,
known as the Price of Fairness (PoF). In wireless scheduling, the PoF increases
when serving users with very poor channel quality because the scheduler wastes
resources trying to be fair. This paper proposes a novel resource allocation
framework to rigorously address this issue. We introduce selective fairness:
being fair only to selected users, and improving the PoF by momentarily blocking
the rest. We study the associated admission control problem of finding the user
selection that minimizes the PoF subject to selective fairness, and show that
this combinatorial problem can be solved efficiently if the feasibility set
satisfies a condition; in our model it suffices that the wireless channels are
stochastically dominated. Exploiting selective fairness, we design a stochastic
framework where we minimize the PoF subject to an SLA, which ensures that an
ergodic subscriber is served frequently enough. In this context, we propose an
online policy that combines the drift-plus-penalty technique with
Gradient-Based Scheduling experts, and we prove it achieves the optimal PoF.
Simulations show that our intelligent blocking outperforms, by 40% in
throughput, previous approaches that satisfy the SLA by blocking low-SNR users.
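The online policy described in the abstract pairs drift-plus-penalty virtual queues with opportunistic max-weight user selection. A minimal sketch of that combination follows; the fading model, parameter values, and function names are illustrative assumptions rather than the paper's actual construction, and for simplicity no user is blocked:

```python
import random

def drift_plus_penalty_schedule(rates, queues, selected, V):
    """Serve the user maximizing V*rate + virtual queue among the selected set."""
    return max(selected, key=lambda i: V * rates[i] + queues[i])

def simulate(n_users=4, slots=10_000, sla=0.2, V=10.0, seed=0):
    rng = random.Random(seed)
    # Hypothetical mean channel qualities: user 3 has a very poor channel.
    means = [1.0, 0.9, 0.8, 0.1]
    queues = [0.0] * n_users            # virtual queues tracking SLA deficit
    selected = list(range(n_users))     # serve everyone (no blocking here)
    served = [0] * n_users
    throughput = 0.0
    for _ in range(slots):
        # i.i.d. fading: each slot draws a fresh rate per user.
        rates = [rng.expovariate(1.0 / m) for m in means]
        i = drift_plus_penalty_schedule(rates, queues, selected, V)
        served[i] += 1
        throughput += rates[i]
        # SLA: each user should be served in a fraction `sla` of slots;
        # the queue grows while a user waits and shrinks when served.
        for j in range(n_users):
            queues[j] = max(queues[j] + sla - (1.0 if j == i else 0.0), 0.0)
    return throughput / slots, [s / slots for s in served]
```

Blocking a low-SNR user would correspond to removing its index from `selected`: the remaining users keep their service guarantees while the aggregate throughput rises.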
Cross-layer design of multi-hop wireless networks
Multi-hop wireless networks are usually defined as a collection of nodes
equipped with radio transmitters, which not only have the capability to
communicate with each other in a multi-hop fashion, but also to route each other's data
packets. The distributed nature of such networks makes them suitable for a variety of
applications where there are no assumed reliable central entities, or controllers, and
may significantly improve the scalability issues of conventional single-hop wireless
networks.
This Ph.D. dissertation mainly investigates two aspects of the research issues
related to efficient multi-hop wireless network design, namely: (a) network
protocols and (b) network management, both in cross-layer design paradigms to
ensure the notion of service quality, such as quality of service (QoS) in wireless mesh
networks (WMNs) for backhaul applications and quality of information (QoI) in
wireless sensor networks (WSNs) for sensing tasks. Throughout the presentation of
this Ph.D. dissertation, different network settings are used as illustrative examples;
however, the proposed algorithms, methodologies, protocols, and models are not
restricted to the considered networks, but rather have wide applicability.
First, this dissertation proposes a cross-layer design framework integrating
a distributed proportional-fair scheduler and a QoS routing algorithm, while using
WMNs as an illustrative example. The proposed approach has significant performance
gain compared with other network protocols. Second, this dissertation proposes
a generic admission control methodology for any packet network, wired and
wireless, by modeling the network as a black box, and using a generic mathematical
function and Taylor expansion to capture the admission impact. Third, this dissertation
further enhances the previous designs by proposing a negotiation process,
to bridge the applications’ service quality demands and the resource management,
while using WSNs as an illustrative example. This approach allows the negotiation
among different service classes and WSN resource allocations to reach the optimal
operational status. Finally, the guarantees of the service quality are extended to
the environment of multiple, disconnected, mobile subnetworks, where the question
of how to maintain communications using dynamically controlled, unmanned data
ferries is investigated.
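The black-box admission-control idea in the second contribution, treating a QoS metric as an unknown function of load and using a Taylor expansion to predict the impact of admitting new traffic, can be sketched as follows. All names here, and the M/M/1-style delay curve used in the example, are illustrative assumptions, not the dissertation's actual formulation:

```python
def admit(measure_qos, current_load, extra_load, qos_limit, eps=1e-3):
    """Decide admission from a black-box QoS measurement.

    measure_qos: load -> QoS metric (e.g. mean delay); treated as a black
    box, probed only at (and just above) the current operating point.
    """
    q0 = measure_qos(current_load)
    # Numerical first derivative of the QoS metric w.r.t. offered load.
    dq = (measure_qos(current_load + eps) - q0) / eps
    # First-order Taylor expansion around the current operating point.
    predicted = q0 + dq * extra_load
    return predicted <= qos_limit, predicted
```

For an M/M/1-like delay curve `1/(10 - load)` at load 5, admitting 2 extra units is predicted to raise mean delay to about 0.28. Since delay curves are convex, the first-order estimate underestimates the true post-admission delay (1/3 here), so a safety margin on `qos_limit` would be prudent in practice.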
A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing
Edge computing is promoted to meet increasing performance needs of
data-driven services using computational and storage resources close to the end
devices, at the edge of the current network. To achieve higher performance in
this new paradigm, one has to consider how to use resources efficiently at all
three layers of the architecture: end devices, edge devices, and the
cloud. While cloud capacity is elastically extendable, end devices and edge
devices are resource-constrained to various degrees. Hence, efficient
resource management is essential to make edge computing a reality. In this
work, we first present terminology and architectures to characterize current
works within the field of edge computing. Then, we review a wide range of
recent articles and categorize relevant aspects in terms of 4 perspectives:
resource type, resource management objective, resource location, and resource
use. This taxonomy and the ensuing analysis are used to identify some gaps in
the existing research. Among several research gaps, we found that research is
less prevalent on data, storage, and energy as resources, and less extensive
on the estimation, discovery, and sharing objectives. As for resource
types, the most well-studied resources are computation and communication
resources. Our analysis shows that resource management at the edge requires a
deeper understanding of how methods applied at different levels and geared
towards different resource types interact. Specifically, the impact of mobility
and collaboration schemes requiring incentives are expected to be different in
edge architectures compared to the classic cloud solutions. Finally, we find
that fewer works are dedicated to the study of non-functional properties or to
quantifying the footprint of resource management techniques, including
edge-specific means of migrating data and services.
Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal
InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services
Cloud computing providers have set up several data centers at different
geographical locations over the Internet in order to optimally serve the needs of
their customers around the world. However, existing systems do not support
mechanisms and policies for dynamically coordinating load distribution among
different Cloud-based data centers in order to determine optimal location for
hosting application services to achieve reasonable QoS levels. Further, the
Cloud computing providers are unable to predict geographic distribution of
users consuming their services, hence the load coordination must happen
automatically, and distribution of services must change in response to changes
in the load. To counter this problem, we advocate the creation of a federated
Cloud computing environment (InterCloud) that facilitates just-in-time,
opportunistic, and scalable provisioning of application services, consistently
achieving QoS targets under variable workload, resource and network conditions.
The overall goal is to create a computing environment that supports dynamic
expansion or contraction of capabilities (VMs, services, storage, and database)
for handling sudden variations in service demands.
This paper presents vision, challenges, and architectural elements of
InterCloud for utility-oriented federation of Cloud computing environments. The
proposed InterCloud environment supports scaling of applications across
multiple vendor clouds. We have validated our approach by conducting a set of
rigorous performance evaluation studies using the CloudSim toolkit. The results
demonstrate that federated Cloud computing model has immense potential as it
offers significant performance gains in response time and cost savings under
dynamic workload scenarios.
Comment: 20 pages, 4 figures, 3 tables, conference paper
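The load-coordination decision InterCloud must make, choosing where to host a service so that QoS targets hold as demand shifts, can be illustrated with a toy placement rule. The data-center fields and the greedy cheapest-feasible policy are assumptions for illustration, not the architecture proposed in the paper:

```python
def place_service(datacenters, demand, latency_sla):
    """Pick the cheapest data center that can absorb `demand` within the SLA.

    `datacenters`: list of dicts with illustrative fields (name, capacity,
    load, latency, cost); not an actual InterCloud interface.
    """
    feasible = [d for d in datacenters
                if d["load"] + demand <= d["capacity"]
                and d["latency"] <= latency_sla]
    if not feasible:
        return None  # no local fit: a federation would provision elsewhere
    best = min(feasible, key=lambda d: d["cost"])
    best["load"] += demand  # commit the placement
    return best["name"]
```

Returning `None` is where federation enters: instead of rejecting the request, a federated environment would lease capacity from another provider and re-run the placement against the expanded pool.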