OSCAR: A Collaborative Bandwidth Aggregation System
The exponential increase in mobile data demand, coupled with the growing user
expectation to be connected in all places at all times, has introduced novel
challenges for researchers to address. Fortunately, the widespread deployment
of various network technologies and the increased adoption of multi-interface-enabled
devices have enabled researchers to develop solutions to these
challenges. Such solutions aim to exploit the available interfaces on such devices
in both solitary and collaborative forms. These solutions, however, have faced
a steep deployment barrier.
In this paper, we present OSCAR, a multi-objective, incentive-based,
collaborative, and deployable bandwidth aggregation system. We present the
OSCAR architecture, which introduces no intermediate hardware and requires no
changes to current applications or legacy servers. The OSCAR
architecture is designed to automatically estimate the system's context,
dynamically schedule various connections and/or packets to different
interfaces, be backwards compatible with the current Internet architecture, and
provide the user with incentives for collaboration. We also formulate the OSCAR
scheduler as a multi-objective, multi-modal scheduler that maximizes system
throughput while minimizing energy consumption or financial cost. We evaluate
OSCAR via implementation on Linux, as well as via simulation, and compare our
results to the current optimal achievable throughput, cost, and energy
consumption. Our evaluation shows that, in the throughput maximization mode, we
provide up to 150% enhancement in throughput compared to current operating
systems, without any changes to legacy servers. Moreover, this performance gain
further increases with the availability of connection resume-supporting, or
OSCAR-enabled servers, reaching the maximum achievable upper-bound throughput.
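The multi-objective scheduling idea described above can be illustrated as a weighted scoring rule that assigns connections to interfaces. The following is a minimal sketch, not OSCAR's actual formulation: the interface attributes, the mode names, and the weight `alpha` are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Interface:
    name: str
    bandwidth_mbps: float   # estimated available throughput (assumed context estimate)
    energy_per_mb: float    # Joules per megabyte transferred (assumed)
    cost_per_mb: float      # monetary cost per megabyte (assumed)

def schedule(conn_sizes_mb, interfaces, mode="throughput", alpha=0.5):
    """Assign each connection to the interface with the best score.

    mode selects the objective: pure throughput maximization, or
    throughput traded against energy or financial cost via alpha.
    """
    assignment = {}
    for i, size in enumerate(conn_sizes_mb):
        def score(iface):
            if mode == "throughput":
                return iface.bandwidth_mbps
            penalty = iface.energy_per_mb if mode == "energy" else iface.cost_per_mb
            return alpha * iface.bandwidth_mbps - (1 - alpha) * penalty * size
        assignment[i] = max(interfaces, key=score).name
    return assignment
```

In throughput mode every connection simply goes to the fastest interface; the energy and cost modes penalize expensive interfaces more heavily for larger transfers.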
Game theory for dynamic spectrum sharing cognitive radio
This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University on 21 June 2010. "Game Theory" is the formal study of conflict and cooperation. The theory is based on a set of tools developed to assist with the modelling and analysis of individual, independent decision makers, whose actions potentially affect the decisions made by other competitors. It is therefore well suited to addressing the various issues linked to wireless communications. This work presents a Green Game-Based Hybrid Vertical Handover Model for heterogeneous wireless networks, which combines both dynamic (Received Signal Strength and Node Mobility) and static (Cost, Power Consumption and Bandwidth) factors. These factors control the handover decision process: the mechanism successfully eliminates unnecessary handovers, reduces delay, cuts the overall number of handovers by 50%, drops 70% fewer packets, and saves 50% more energy in comparison to other mechanisms. A novel Game-Based Multi-Interface Fast-Handover MIPv6 protocol is introduced in this thesis as an extension to the Multi-Interface Fast-Handover MIPv6 protocol. The protocol applies when the mobile node has more than one wireless interface. It controls the handover decision process by deciding whether a handover is necessary and helps the node choose the right access point at the right time. In addition, the protocol switches the mobile node's interfaces "ON" and "OFF" when needed to control the mobile node's energy consumption and eliminate the power lost by adding another interface. The protocol successfully reduces the number of handovers by 70%, with 90% fewer dropped packets, 40% more received packets and acknowledgments, and 85% less end-to-end delay in comparison to other protocols.
Furthermore, the thesis adopts a novel combination of game and auction theory for dynamic resource allocation and price-power-based routing in wireless ad-hoc networks. Under the auction schemes, destination nodes bid for access to the data stored in the server node. The server allocates the data to the winner who values it most. Once the data has been allocated to the winner, another mechanism for dynamic routing is adopted. The routing mechanism is based on source-destination cooperation, power consumption, and source compensation to the intermediate nodes. The mechanism increases the seller's revenue by up to 50% compared to a random allocation scheme, and the thesis briefly evaluates the reliability of the predefined route with respect to data prices and source and destination cooperation for different network settings. Last but not least, this thesis develops an adaptive competitive second-price pay-to-bid sealed auction game and a reputation-based game. These solve the fairness problems associated with spectrum sharing among one primary user and a large number of secondary users in a cognitive radio environment. The proposed games create a competition between the bidders and offer better revenue to the players, improving fairness by more than 60% in certain scenarios. The proposed game can reach the maximum total profit for both primary and secondary users with better fairness; this is illustrated through numerical results.
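The hybrid handover model above combines dynamic and static factors into a single decision, suppressing unnecessary handovers. A toy sketch of such a decision rule follows; the weights, normalization ranges, and hysteresis margin are illustrative assumptions, not the thesis's actual model.

```python
def handover_utility(rss_dbm, speed_mps, cost, power_w, bandwidth_mbps, weights):
    """Combine normalized dynamic (RSS, mobility) and static (cost, power,
    bandwidth) factors into one utility in [0, 1].
    All normalization bounds below are illustrative assumptions."""
    rss_n = (rss_dbm + 100) / 70           # map roughly -100..-30 dBm to 0..1
    speed_n = 1 - min(speed_mps / 30, 1)   # favor low mobility
    cost_n = 1 - min(cost, 1)              # lower cost is better
    power_n = 1 - min(power_w / 5, 1)      # lower power draw is better
    bw_n = min(bandwidth_mbps / 100, 1)    # higher bandwidth is better
    w = weights
    return (w["rss"] * rss_n + w["mobility"] * speed_n
            + w["cost"] * cost_n + w["power"] * power_n
            + w["bandwidth"] * bw_n)

def should_handover(current_u, candidate_u, hysteresis=0.1):
    # Hand over only when the gain clearly exceeds a margin,
    # which suppresses ping-pong (unnecessary) handovers.
    return candidate_u > current_u + hysteresis
```

The hysteresis margin is what eliminates marginal, back-and-forth handovers in this kind of scheme.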
Optimal configuration of active and backup servers for augmented reality cooperative games
Interactive applications such as online games and mobile devices have become more and more popular in recent years. From their combination, new and interesting cooperative services can be generated. For instance, gamers endowed with Augmented Reality (AR) visors connected as wireless nodes in an ad-hoc network can interact with each other while immersed in the game. To enable this vision, we discuss here a hybrid architecture enabling game play in ad-hoc mode instead of the traditional client-server setting. In our architecture, one of the player nodes also acts as the server of the game, whereas other backup server nodes are ready to become active servers in case of disconnection of the network, e.g. due to the low energy level of the currently active server. This allows for a longer gaming session before disconnections or energy exhaustion occur. In this context, the server election strategy aimed at maximizing network lifetime is not straightforward. To this end, we have analyzed this issue through a Mixed Integer Linear Programming (MILP) model, and both numerical and simulation-based analyses show that the backup-servers solution fulfills its design objective.
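The server-election problem above (which node should host the game so the session lasts longest?) can be illustrated without the full MILP machinery. The sketch below assumes, purely for illustration, constant server and client power draws and defines network lifetime as the time until the first node's battery is exhausted.

```python
def elect_server(battery_j, server_drain_w, client_drain_w):
    """Pick the node that, if made the active server, maximizes the time
    until the first node in the ad-hoc session runs out of energy.

    battery_j: residual energy per node in Joules (assumed known).
    The constant drain rates are simplifying assumptions; the paper's
    MILP model captures the problem far more precisely.
    """
    def session_lifetime(server):
        times = []
        for node, energy in enumerate(battery_j):
            drain = server_drain_w if node == server else client_drain_w
            times.append(energy / drain)
        return min(times)  # the session ends when the first node dies

    return max(range(len(battery_j)), key=session_lifetime)
```

Even this toy version shows why the choice is not obvious: the best server is not necessarily the node with the most energy in absolute terms, but the one whose extra drain hurts the bottleneck node least.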
A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing
Edge computing is promoted to meet increasing performance needs of
data-driven services using computational and storage resources close to the end
devices, at the edge of the current network. To achieve higher performance in
this new paradigm one has to consider how to combine the efficiency of resource
usage at all three layers of architecture: end devices, edge devices, and the
cloud. While cloud capacity is elastically extendable, end devices and edge
devices are to various degrees resource-constrained. Hence, an efficient
resource management is essential to make edge computing a reality. In this
work, we first present terminology and architectures to characterize current
works within the field of edge computing. Then, we review a wide range of
recent articles and categorize relevant aspects in terms of 4 perspectives:
resource type, resource management objective, resource location, and resource
use. This taxonomy and the ensuing analysis is used to identify some gaps in
the existing research. Among several research gaps, we found that research is
less prevalent on data, storage, and energy as a resource, and less extensive
towards the estimation, discovery and sharing objectives. As for resource
types, the most well-studied resources are computation and communication
resources. Our analysis shows that resource management at the edge requires a
deeper understanding of how methods applied at different levels and geared
towards different resource types interact. Specifically, the impact of mobility
and collaboration schemes requiring incentives are expected to be different in
edge architectures compared to the classic cloud solutions. Finally, we find
that fewer works are dedicated to the study of non-functional properties or to
quantifying the footprint of resource management techniques, including
edge-specific means of migrating data and services.
Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal.
Efficient Scheduling Algorithms for Wireless Resource Allocation and Virtualization in Wireless Networks
The continuing growth in demand for better mobile broadband experiences has motivated rapid development of radio-access technologies to support high data rates and improve quality of service (QoS) and quality of experience (QoE) for mobile users. However, the modern radio-access technologies pose new challenges to mobile network operators (MNO) and wireless device designers such as reducing the total cost of ownership while supporting high data throughput per user, and extending battery life-per-charge of the mobile devices. In this thesis, a variety of optimization techniques aimed at providing innovative solutions for such challenges are explored.
The thesis is divided into two parts. In the first part, the challenge of extending battery life-per-charge is addressed. Optimal and suboptimal power-efficient schedulers that minimize the total transmit power while meeting the QoS requirements of the users are presented. The second part outlines the benefits and challenges of deploying the wireless resource virtualization (WRV) concept as a promising solution for satisfying the growing demand for mobile data and reducing capital and operational costs. First, a WRV framework is proposed for a single-cell zone that is able to centralize and share the spectrum resources between multiple MNOs. Subsequently, several WRV frameworks are proposed that virtualize the spectrum resources of the entire network for the cloud radio access network (C-RAN), one of the front runners for the next-generation network architecture.
The main contributions of this thesis are in designing optimal and suboptimal solutions for the aforementioned challenges. In most cases, the optimal solutions suffer from high complexity, and therefore low-complexity suboptimal solutions are provided for practical systems. The optimal solutions are used as benchmarks for evaluating the suboptimal solutions. The results show that the proposed solutions contribute effectively to addressing the challenges posed by the demand for high data rates and power transmission in mobile networks.
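The power-efficient scheduling objective above (minimize total transmit power subject to per-user QoS rate targets) has a simple closed-form core when each user is served on its own resource block: invert the Shannon capacity formula to get the minimum power per user. This is a textbook sketch under assumed bandwidth and noise values, not the thesis's optimal or suboptimal schedulers.

```python
import math

def min_power_allocation(rate_targets_bps, gains, bandwidth_hz=180e3, noise_w=1e-13):
    """For each user, find the minimum transmit power that meets its QoS
    rate target on one resource block, by inverting
        rate = B * log2(1 + p * g / (N0 * B)).
    The 180 kHz block bandwidth and noise level are illustrative assumptions.
    """
    powers = []
    for rate, g in zip(rate_targets_bps, gains):
        snr_needed = 2 ** (rate / bandwidth_hz) - 1   # required SNR
        powers.append(snr_needed * noise_w * bandwidth_hz / g)  # p = SNR * N / g
    return powers
```

The exponential dependence of power on the rate target is exactly why power-efficient schedulers prefer spreading traffic over more resources rather than pushing a high rate through one block.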
Towards Viable Large Scale Heterogeneous Wireless Networks
We explore radio resource allocation and management issues related to a large-scale heterogeneous (hetnet) wireless system made up of several Radio Access Technologies (RATs) that collectively provide a unified wireless network to a diverse set of users through coordination managed by a centralized Global Resource Controller (GRC). We incorporate the 3G cellular technologies HSPA and EVDO, the 4G cellular technologies WiMAX and LTE, and the WLAN technology Wi-Fi as the RATs in our hetnet wireless system. We assume that the user devices are either multi-modal or have one or more reconfigurable radios, which makes it possible for each device to use any available RAT at any given time subject to resource-sharing agreements. For such a hetnet system where resource allocation is coordinated at a global level, characterizing the network performance in terms of various conflicting network efficiency objectives, while taking into account the costs associated with network re-association operations, largely remains an open problem. Also, the studies to date that try to characterize the network performance of a hetnet system do not account for RAT-specific implementation details or the management overhead associated with setting up centralized control. We study the radio resource allocation problem and the implementation/management overhead issues associated with a hetnet system in two research phases. In the first phase, we develop cost models for network re-association in terms of increased power consumption and communication downtime, taking into account various user device assumptions. Using these cost models in our problem formulations, the first phase focuses on resource allocation strategies where we use a high-level system modeling approach to study the achievable performance in terms of the conflicting network efficiency measures of spectral efficiency, overall power consumption, and instantaneous and long-term fairness for each user in the hetnet system.
Our main result from this phase of the study suggests that the gain in spectral efficiency due to multi-access network diversity comes with a tremendous increase in overall power consumption due to the frequent re-associations required of user devices. We then develop a utility function-based optimization algorithm to characterize and achieve a desired tradeoff among all four network efficiency measures: spectral efficiency, overall power consumption, and instantaneous and long-term fairness. We show an increase in a multi-attribute system utility measure of up to 56.7% for our algorithm compared to other widely studied resource allocation algorithms, including max-sum rate, proportional fairness, max-min fairness, and min power. The second phase of our research focuses on practical implementation issues, including the overhead required to implement a centralized GRC solution in a hetnet system. Through detailed protocol-level simulations performed in ns-2, we show an increase in spectral efficiency of up to 99% and an increase in instantaneous fairness of up to 28.5% for two sort-based user device-to-Access Point (AP)/Base Station (BS) association algorithms implemented at the GRC, which aim to maximize the system's spectral efficiency and instantaneous fairness metrics respectively, compared to a distributed solution where each user makes its own association decision. The efficiency increase for each respective attribute again comes with a tremendous increase in power consumption, of up to 650% and 794% for the respective algorithms implemented at the GRC compared to a distributed solution, because of frequent re-associations.
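The multi-attribute utility idea above (trading spectral efficiency against power and fairness, with a cost per re-association) can be sketched as a weighted sum. The weights, normalization constants, and per-re-association penalty below are illustrative assumptions, not the dissertation's actual utility function.

```python
def system_utility(spectral_eff, power_w, fairness, reassoc_count,
                   weights=(0.4, 0.3, 0.2), reassoc_cost=0.05):
    """Multi-attribute utility combining spectral efficiency (bits/s/Hz),
    overall power, and a fairness index in [0, 1], minus a penalty per
    network re-association. All constants are illustrative assumptions."""
    se_n = min(spectral_eff / 10, 1)    # normalize, capping at 10 bits/s/Hz
    pw_n = 1 - min(power_w / 100, 1)    # lower power is better
    w_se, w_pw, w_f = weights
    return (w_se * se_n + w_pw * pw_n + w_f * fairness
            - reassoc_cost * reassoc_count)
```

The re-association penalty term is what lets such a utility capture the paper's central finding: spectral-efficiency gains from network diversity can be eaten up by the power cost of frequent re-associations.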
Energy sustainable paradigms and methods for future mobile networks: A survey
In this survey, we discuss the role of energy in the design of future mobile
networks and, in particular, we advocate and elaborate on the use of energy
harvesting (EH) hardware as a means to decrease the environmental footprint of
5G technology. To take full advantage of the harvested (renewable) energy,
while still meeting the quality of service required by dense 5G deployments,
suitable management techniques are here reviewed, highlighting the open issues
that are still to be solved to provide eco-friendly and cost-effective mobile
architectures. Several solutions have recently been proposed to tackle
capacity, coverage and efficiency problems, including: C-RAN, Software Defined
Networking (SDN) and fog computing, among others. However, these are not
explicitly tailored to increase the energy efficiency of networks featuring
renewable energy sources, and have the following limitations: (i) their energy
savings are in many cases still insufficient and (ii) they do not consider
network elements possessing energy harvesting capabilities. In this paper, we
systematically review existing energy sustainable paradigms and methods to
address points (i) and (ii), discussing how these can be exploited to obtain
highly efficient, energy self-sufficient and high capacity networks. Several
open issues have emerged from our review, ranging from the need for accurate
energy, transmission and consumption models, to the lack of accurate data
traffic profiles, to the use of power transfer, energy cooperation and energy
trading techniques. These challenges are here discussed along with some
research directions to follow for achieving sustainable 5G systems.
Comment: Accepted by Elsevier Computer Communications, 21 pages, 9 figures.
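The energy-harvesting management techniques surveyed above ultimately rest on a harvest-store-consume energy budget. A toy battery-evolution model makes the constraint concrete; the values and the simple clipping behavior are illustrative assumptions, not any model from the survey.

```python
def battery_trace(harvest_w, load_w, capacity_j, b0_j, dt_s=1.0):
    """Evolve a harvest-store-consume battery: at each step, add harvested
    (renewable) energy, subtract the load, and clip to [0, capacity].
    A toy model of the EH base-station energy budget; real models in the
    surveyed literature account for leakage, conversion losses, etc."""
    b = b0_j
    trace = [b]
    for h, c in zip(harvest_w, load_w):
        b = min(max(b + (h - c) * dt_s, 0.0), capacity_j)
        trace.append(b)
    return trace
```

Even this simple model exposes the survey's two pain points: energy wasted when the battery saturates at capacity, and outages when it hits zero, which is what harvest-aware management techniques try to avoid.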
Spectral, Energy and Computation Efficiency in Future 5G Wireless Networks
Wireless technology has revolutionized the way people communicate. From the first generation (1G) in the 1980s to the currently, widely deployed 4G of the 2010s, we have witnessed not only a technological leap but also the transformation of associated applications. 5G is expected to become commercially available in 2020, driven by ever-increasing demands for high mobile traffic, low transmission delay, and massive numbers of connected devices. Today, with the popularity of smart phones, intelligent appliances, autonomous cars, and tablets, communication demands are higher than ever, especially when it comes to low-cost and easy-access solutions.
Existing communication architectures cannot fulfill 5G's needs. For example, 5G requires connection speeds up to 1,000 times faster than current technology can provide. Also, from the transmitter side to the receiver side, 5G delays should be less than 1 ms, while 4G targets a 5 ms delay. To meet these requirements, 5G will apply several disruptive techniques. We focus on two of them: a new radio and a new scheme. For the former, we study non-orthogonal multiple access (NOMA); for the latter, we use mobile edge computing (MEC).
Traditional communication systems let users transmit in alternation, which clearly avoids inter-user interference but also caps the connection speed. NOMA, on the other hand, allows multiple users to transmit simultaneously. While NOMA inevitably causes additional interference, we prove that such interference can be mitigated by an advanced receiver-side technique. NOMA has been on the research frontier since 2013, and since then both academics and industry professionals have extensively studied its performance. In this dissertation, our contribution is to incorporate NOMA into several potential schemes, such as relay, IoT, and cognitive radio networks. Furthermore, we review various limitations of NOMA and propose a more practical model.
In the second part, MEC is considered. MEC is an evolution of the earlier cloud computing paradigm. In particular, MEC leverages powerful devices nearby: instead of sending information to distant cloud servers, the transmission occurs at closer range, which can effectively reduce communication delay. In this work, we propose a new evaluation metric for MEC that can more effectively capture the trade-off between the amount of computation and the energy it consumes.
A practical communication system for wearable devices is proposed in the last part, which combines all the techniques discussed above. The challenges for wearable communication are inherent in its diverse needs, as some devices may require low speed but high reliability (factory sensors), while others may need low delay (medical devices). We have addressed these challenges and validated our findings through simulations.
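The advanced receiver-side technique alluded to above is successive interference cancellation (SIC). A standard two-user downlink NOMA rate calculation shows how it mitigates the superposition interference; the power split, channel gains, and noise level below are illustrative textbook values, not results from the dissertation.

```python
import math

def noma_rates(p_total, alpha, g_near, g_far, noise=1.0):
    """Two-user downlink NOMA with successive interference cancellation.

    alpha is the fraction of power given to the far (weak) user. The far
    user decodes its signal treating the near user's as noise; the near
    user first decodes and cancels the far user's signal (SIC), then
    decodes its own interference-free. Returns (rate_near, rate_far) in
    bits/s/Hz via the Shannon formula."""
    p_far, p_near = alpha * p_total, (1 - alpha) * p_total
    # Far user: own signal vs. the near user's signal plus noise.
    sinr_far = p_far * g_far / (p_near * g_far + noise)
    # Near user: after SIC, only noise remains.
    sinr_near = p_near * g_near / noise
    return math.log2(1 + sinr_near), math.log2(1 + sinr_far)
```

Note the characteristic NOMA power split: most of the power (alpha close to 1) goes to the weak user, yet the strong user can still achieve a high rate because SIC removes the dominant interfering signal entirely.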