
    A baseband wireless spectrum hypervisor for multiplexing concurrent OFDM signals

    The next generation of wireless and mobile networks will have to handle a significant increase in traffic load compared to current ones. This situation calls for novel ways to increase spectral efficiency. Therefore, in this paper, we propose a wireless spectrum hypervisor architecture that abstracts a radio frequency (RF) front-end into a configurable number of virtual RF front-ends. The proposed architecture enables flexible spectrum access in existing wireless and mobile networks, which is a challenging task due to limited spectrum programmability, i.e., the capability a system has to change the spectral properties of a given signal to fit an arbitrary frequency allocation. The proposed architecture is a non-intrusive and highly optimized wireless hypervisor that multiplexes the signals of several different and concurrent multi-carrier-based radio access technologies with numerologies that are integer multiples of one another, which we also refer to as radio access technologies with correlated numerology. For example, the proposed architecture can multiplex the signals of several Wi-Fi access points, several LTE base stations, several WiMAX base stations, etc. Because it can multiplex the signals of radio access technologies with correlated numerology, it can also, for instance, multiplex the signals of LTE, 5G-NR and NB-IoT base stations. By allowing such different technologies to share the same RF front-end, the architecture reduces costs and increases spectral efficiency, either through densification, since several networks share the same infrastructure, or by dynamically accessing free chunks of spectrum. Therefore, the main goal of the proposed approach is to improve spectral efficiency by efficiently using vacant gaps in congested spectrum bands or by adopting network densification through infrastructure sharing. We demonstrate mathematically how the proposed approach works and present several simulation results demonstrating its functionality and efficiency. Additionally, we designed and implemented a free and open-source proof-of-concept prototype of the proposed architecture, which researchers and developers can use to run experiments or extend the concept to other applications. We present several experimental results used to validate the prototype and demonstrate that it can easily handle up to 12 concurrent physical layers.
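
    Since the abstract describes the hypervisor only at a high level, the sketch below shows one plausible reading of the multiplexing step for technologies with correlated numerology: each virtual RF front-end's frequency-domain symbols are mapped into a disjoint subcarrier region of a single wideband IFFT, so one physical front-end transmits all of them at once. All names and parameters here (N_WIDE, the subcarrier offsets) are illustrative assumptions, not the paper's implementation.

        import numpy as np

        N_WIDE = 2048  # assumed size of the shared wideband IFFT

        def hypervisor_mux(vfronts):
            """Map each virtual front-end's frequency-domain symbols into its
            own disjoint subcarrier range of one centered wideband grid, then
            produce a single wideband time-domain OFDM symbol with one IFFT."""
            grid = np.zeros(N_WIDE, dtype=complex)
            for offset, symbols in vfronts:
                # allocations are assumed non-overlapping (disjoint spectrum chunks)
                grid[offset:offset + len(symbols)] = symbols
            # ifftshift converts the centered grid to natural FFT ordering
            return np.fft.ifft(np.fft.ifftshift(grid)) * N_WIDE

        # Example: two virtual front-ends with random QPSK payloads placed in
        # different parts of the shared band.
        qpsk = lambda n: (1 - 2 * np.random.randint(0, 2, (n, 2))) @ np.array([1, 1j]) / np.sqrt(2)
        tx = hypervisor_mux([(256, qpsk(128)), (1024, qpsk(256))])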

    Multi-capacity bin packing with dependent items and its application to the packing of brokered workloads in virtualized environments

    Providing resource allocation with performance predictability guarantees is increasingly important in cloud platforms, especially for data-intensive applications, in which performance depends greatly on the available rates of data transfer between the various computing/storage hosts underlying the virtualized resources assigned to the application. Existing resource allocation solutions assume either that applications manage the data transfer between their virtualized resources or that cloud providers manage their internal networking resources. With the increased prevalence of brokerage services in cloud platforms, there is a need for resource allocation solutions that provide predictability guarantees in settings in which neither application scheduling nor cloud provider resources can be managed or controlled by the broker. This paper addresses this problem: we define the Network-Constrained Packing (NCP) problem of finding the optimal mapping of brokered resources to applications with guaranteed performance predictability. We prove that NCP is NP-hard, and we define two special instances of the problem for which exact solutions can be found efficiently. We develop a greedy heuristic to solve the general instance of the NCP problem, and we evaluate its efficiency using simulations on various application workloads and network models. This work was done while the author was at Boston University. It was partially supported by NSF CISE awards #1430145, #1414119, #1239021 and #1012798.
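
    The abstract does not spell out the heuristic, so the following is a generic first-fit-decreasing sketch of multi-capacity (vector) bin packing, the problem family NCP builds on, and not the paper's actual NCP heuristic. Items and bins are demand/capacity vectors, e.g. (CPU, RAM, network bandwidth).

        import numpy as np

        def pack(items, capacity):
            """Greedily place each item into the first bin with room along
            every dimension, opening a new bin when none fits. Items are
            sorted by total demand, largest first (first-fit decreasing)."""
            bins = []  # each bin: [remaining_capacity_vector, placed_items]
            for item in sorted(items, key=lambda d: -sum(d)):
                d = np.asarray(item, dtype=float)
                for b in bins:
                    if np.all(b[0] >= d):      # fits in every dimension?
                        b[0] -= d
                        b[1].append(item)
                        break
                else:                          # no existing bin fits: open one
                    bins.append([np.asarray(capacity, float) - d, [item]])
            return [b[1] for b in bins]

        # Example: pack (CPU cores, GB RAM, Gbps) demands onto identical hosts.
        hosts = pack([(4, 8, 1), (2, 16, 2), (8, 4, 1), (1, 2, 0.5)],
                     capacity=(16, 32, 4))

    A real NCP heuristic would additionally score placements against the network constraints between hosts; this sketch only illustrates the multi-capacity packing core.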

    Resource provisioning in Science Clouds: Requirements and challenges

    Cloud computing has permeated the information technology industry in the last few years and is nowadays emerging in scientific environments. Science user communities are demanding a broad range of computing power to satisfy the needs of high-performance applications, such as that offered by local clusters, high-performance computing systems, and computing grids. Different computational models call for different workloads, and the cloud is already considered a promising paradigm for them. The scheduling and allocation of resources is always a challenging matter in any form of computation, and clouds are no exception. Science applications have unique features that differentiate their workloads; hence, their requirements have to be taken into consideration when building a Science Cloud. This paper discusses the main scheduling and resource allocation challenges for any Infrastructure as a Service provider supporting scientific applications.

    Achievable Rate and Optimal Physical Layer Rate Allocation in Interference-Free Wireless Networks

    We analyze the achievable rate in interference-free wireless networks with physical layer fading channels and orthogonal multiple access. As a starting point, the point-to-point channel is considered. We find the optimal physical and network layer rate trade-off which maximizes the achievable overall rate, both for a fixed rate transmission scheme and for an improved scheme based on multiple virtual users and superposition coding. These initial results are extended to the network setting, where, based on a cut-set formulation, the achievable rate at each node and its upper bound are derived. We propose a distributed optimization algorithm which allows us to jointly determine the maximum achievable rate, the optimal physical layer rates on each network link, and an opportunistic back-pressure-type routing strategy on the network layer. This inherently justifies the layered architecture of existing wireless networks. Finally, we show that the proposed layered optimization approach can achieve almost all of the ergodic network capacity at high SNR.
    Comment: 5 pages, to appear in Proc. IEEE ISIT, July 200
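
    The abstract gives no formulas, so here is one concrete, hypothetical instance of the fixed-rate trade-off it describes: on a Rayleigh block-fading point-to-point link with average SNR snr, a packet sent at fixed rate R (bit/s/Hz) is decoded only if log2(1 + snr*g) >= R, where the gain g is Exp(1)-distributed; the outage probability is then 1 - exp(-(2^R - 1)/snr), and the long-run throughput R*(1 - P_out) can be maximized over R. This is an assumed worked example, not the paper's derivation.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def throughput(R, snr):
            # R * P(no outage) for Rayleigh fading with average SNR `snr`
            return R * np.exp(-(2.0**R - 1.0) / snr)

        snr = 10.0  # average SNR (linear), i.e. 10 dB
        res = minimize_scalar(lambda R: -throughput(R, snr),
                              bounds=(0.01, 10.0), method="bounded")
        print(f"optimal fixed rate R* = {res.x:.2f} bit/s/Hz, "
              f"throughput = {throughput(res.x, snr):.2f} bit/s/Hz")

    Transmitting faster raises the payload per success but also the outage probability; the optimum balances the two, which is the point-to-point version of the physical/network layer rate trade-off the paper studies.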

    Network-constrained packing of brokered workloads in virtualized environments

    Providing resource allocation with performance predictability guarantees is increasingly important in cloud platforms, especially for data-intensive applications, in which performance depends greatly on the available rates of data transfer between the various computing/storage hosts underlying the virtualized resources assigned to the application. Existing resource allocation solutions assume either that applications manage the data transfer between their virtualized resources or that cloud providers manage their internal networking resources. With the increased prevalence of brokerage services in cloud platforms, there is a need for resource allocation solutions that provide predictability guarantees in settings in which neither application scheduling nor cloud provider resources can be managed or controlled by the broker. This paper addresses this problem: we define the Network-Constrained Packing (NCP) problem of finding the optimal mapping of brokered resources to applications with guaranteed performance predictability. We prove that NCP is NP-hard, and we define two special instances of the problem for which exact solutions can be found efficiently. We develop a greedy heuristic to solve the general instance of the NCP problem, and we evaluate its efficiency using simulations on various application workloads and network models. This work is supported by NSF CISE CNS Award #1347522, #1239021, #1012798.

    Efficient Resource Management Mechanism for 802.16 Wireless Networks Based on Weighted Fair Queuing

    Wireless networking continues on its path to being one of the most commonly used means of communication. The evolution of this technology has taken place through the design of various protocols. Some common wireless protocols are WLAN, 802.16 or WiMAX, and the emerging 802.20, which specializes in high-speed vehicular networks, taking the concepts of 802.16 to higher levels of performance. As with any large network, congestion becomes an important issue, and it gains importance as more hosts join a wireless network. In most cases, congestion is caused by the lack of an efficient mechanism to deal with exponential increases in host devices, which can lead to severe bottlenecks and sluggish performance that may eventually reduce the usable speed of the network. With continuous advancement being the trend in this technology, an efficient scheme for wireless resource allocation is an important answer to the problem of congestion. The primary area of focus is the emerging standard for wireless networks, 802.16 or "WiMAX". This project proposes an effective resource management mechanism, based on weighted fair queuing, between subscriber stations and the corresponding base station.
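
    As a reference point for the scheduling discipline named in the title, below is a minimal sketch of weighted fair queuing itself, using per-packet virtual finish times; it is a generic illustration, not the project's specific 802.16 mechanism, and the flow names are hypothetical.

        import heapq

        # Each flow (e.g. a subscriber station's service flow) has a weight; a
        # packet's finish tag is start_time + size/weight, and packets are sent
        # in increasing finish-tag order, approximating weighted fair shares.

        class WFQScheduler:
            def __init__(self):
                self.queue = []          # min-heap of (finish_tag, seq, flow, size)
                self.last_finish = {}    # per-flow last finish tag
                self.vtime = 0.0         # system virtual time
                self.seq = 0             # tie-breaker for heap ordering

            def enqueue(self, flow, size, weight):
                start = max(self.vtime, self.last_finish.get(flow, 0.0))
                finish = start + size / weight
                self.last_finish[flow] = finish
                heapq.heappush(self.queue, (finish, self.seq, flow, size))
                self.seq += 1

            def dequeue(self):
                finish, _, flow, size = heapq.heappop(self.queue)
                self.vtime = finish      # advance virtual time (simplified)
                return flow, size

        sched = WFQScheduler()
        sched.enqueue("ss1", size=1500, weight=2)  # higher weight -> earlier finish tag
        sched.enqueue("ss2", size=1500, weight=1)
        print(sched.dequeue())  # ('ss1', 1500): the weight-2 flow is served first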

    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Edge computing is promoted to meet the increasing performance needs of data-driven services, using computational and storage resources close to the end devices at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to combine the efficiency of resource usage at all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are to various degrees resource-constrained. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current works within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects in terms of four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify gaps in the existing research. Among several research gaps, we found that research is less prevalent on data, storage, and energy as resources, and less extensive towards the estimation, discovery, and sharing objectives. As for resource types, the most well-studied resources are computation and communication. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to differ in edge architectures compared to classic cloud solutions. Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services.
    Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal.