
    Value-aware resource allocation for service guarantees in networks

    The traditional formulation of the total value of information transfer is a multi-commodity flow problem. Each data source is seen as generating a commodity along a fixed route, and the objective is to maximize the total system throughput, under some notion of fairness, subject to the capacity constraints of the links used. This problem is well studied under the framework of network utility maximization and has led to several distributed congestion control schemes. However, this view of value does not capture the fact that flows may associate value not just with throughput, but also with link-quality metrics such as packet delay and jitter. In this work, the congestion control problem is redefined to include individual source preferences. It is assumed that the quality degradation seen by a flow accumulates over the links it traverses, and the total utility is maximized in such a way that the end-to-end quality degradation seen by each source is bounded by a value that the source declares. By decoupling source dissatisfaction and link degradation through an effective-capacity variable, a distributed and provably optimal resource allocation algorithm is designed that maximizes system utility subject to these quality constraints. The applicability of the controller in different situations is supported by numerical simulations, and a protocol built on the controller is simulated in ns-2 to illustrate its performance.
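    The optimization described in this abstract can be stated compactly. The formulation below is a sketch reconstructed from the abstract alone; the symbols (x_s for the rate of source s along its fixed route r(s), U_s for its utility, c_l for the capacity of link l, d_l for the quality degradation contributed by link l, and D_s for the budget declared by source s) are assumptions, not the paper's own notation.

```latex
% Sketch of the quality-constrained utility maximization described in the
% abstract; the notation is assumed, not taken from the paper.
\begin{aligned}
\max_{x \ge 0} \quad & \sum_{s} U_s(x_s) \\
\text{subject to} \quad & \sum_{s:\, l \in r(s)} x_s \le c_l
    \quad \text{for every link } l \quad \text{(capacity)} \\
& \sum_{l \in r(s)} d_l \le D_s
    \quad \text{for every source } s \quad \text{(declared quality budget)}
\end{aligned}
```

    Per the abstract, the paper's contribution is a distributed algorithm that handles the two constraint families jointly by introducing an effective-capacity variable to decouple source dissatisfaction from per-link degradation.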

    Distributed QoS Guarantees for Realtime Traffic in Ad Hoc Networks

    In this paper, we propose a new cross-layer framework, named QPART (QoS Protocol for Ad hoc Real-time Traffic), which provides QoS guarantees to real-time multimedia applications in wireless ad hoc networks. By adapting the contention window sizes at the MAC layer, QPART schedules packets of flows according to their unique QoS requirements. QPART implements priority-based admission control and conflict resolution to ensure that the aggregate requirements of admitted real-time flows remain within the network capacity. The novelty of QPART is that it is robust to mobility and variations in channel capacity and imposes no control message overhead on the network.
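    As a rough illustration of scheduling through MAC-layer contention-window adaptation, the sketch below shrinks a flow's contention window when its measured delay violates its declared target and grows it otherwise. The function names, bounds, and update rule are assumptions made for illustration; they are not QPART's actual algorithm.

```python
# Illustrative sketch (not QPART's actual algorithm): adapt a flow's MAC
# contention window so that delay-violating flows contend more aggressively
# than flows already meeting their QoS targets.

CW_MIN, CW_MAX = 15, 1023  # typical 802.11-style bounds

def adapt_contention_window(cw, measured_delay_ms, target_delay_ms):
    """Shrink the contention window when the delay target is violated,
    grow it (back off) when there is slack."""
    if measured_delay_ms > target_delay_ms:
        cw = max(CW_MIN, cw // 2)      # contend harder: QoS is being missed
    else:
        cw = min(CW_MAX, cw * 2 + 1)   # yield to tighter flows
    return cw

# Example: a real-time flow with a 50 ms delay bound currently seeing 80 ms.
cw = 127
cw = adapt_contention_window(cw, measured_delay_ms=80, target_delay_ms=50)
print(cw)  # 63 -> the flow will win channel access more often
```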

    Interference-Aware Downlink Resource Management for OFDMA Femtocell Networks

    Femtocells are an economical solution for providing high-speed indoor communication as an alternative to conventional macro-cellular networks. In particular, OFDMA femtocells are being considered for next-generation cellular systems such as 3GPP LTE and mobile WiMAX. Although femtocells offer great advantages in accommodating indoor users, interference management is a critical issue in operating a femtocell network. Existing OFDMA resource management algorithms only consider optimizing system-centric metrics and cannot manage the co-channel interference. Moreover, because of the self-configuring nature of femtocells, it is hard for a femtocell to cooperate with its neighbors to control the interference. This paper proposes a novel interference-aware resource allocation algorithm for OFDMA femtocell networks. The proposed algorithm allocates resources according to a new objective function that reflects the effect of interference, and a heuristic algorithm is also introduced to reduce the complexity of the original problem. Monte Carlo simulations are performed to evaluate the performance of the proposed algorithm against existing solutions.
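    To make "interference-aware" concrete in an OFDMA setting, the sketch below greedily assigns each subchannel to the user maximizing an objective that trades own-cell rate against the interference caused to neighbors. The objective, the weight alpha, and all inputs are illustrative assumptions; the abstract does not disclose the paper's actual objective function.

```python
import math

def allocate_subchannels(gains, interference, n_subchannels, alpha=1.0):
    """Greedy illustration: assign each subchannel k to the user u that
    maximizes log2(1 + SINR[u][k]) - alpha * interference_caused[u][k].

    gains[u][k]        : own-link SINR of user u on subchannel k
    interference[u][k] : estimated interference user u would cause on k
    """
    assignment = {}
    for k in range(n_subchannels):
        best_user, best_score = None, -math.inf
        for u in range(len(gains)):
            score = math.log2(1 + gains[u][k]) - alpha * interference[u][k]
            if score > best_score:
                best_user, best_score = u, score
        assignment[k] = best_user
    return assignment

# Two users, three subchannels (toy numbers).
gains = [[10.0, 2.0, 5.0], [3.0, 8.0, 4.0]]
interference = [[0.1, 0.5, 0.2], [0.4, 0.1, 0.3]]
print(allocate_subchannels(gains, interference, n_subchannels=3))
```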

    Multi-capacity bin packing with dependent items and its application to the packing of brokered workloads in virtualized environments

    Providing resource allocation with performance predictability guarantees is increasingly important in cloud platforms, especially for data-intensive applications, in which performance depends greatly on the available rates of data transfer between the various computing/storage hosts underlying the virtualized resources assigned to the application. Existing resource allocation solutions either assume that applications manage their data transfer between their virtualized resources, or that cloud providers manage their internal networking resources. With the increased prevalence of brokerage services in cloud platforms, there is a need for resource allocation solutions that provide predictability guarantees in settings in which neither application scheduling nor cloud provider resources can be managed or controlled by the broker. This paper addresses this problem: we define the Network-Constrained Packing (NCP) problem of finding the optimal mapping of brokered resources to applications with guaranteed performance predictability. We prove that NCP is NP-hard, and we define two special instances of the problem for which exact solutions can be found efficiently. We develop a greedy heuristic to solve the general instance of the NCP problem, and we evaluate its efficiency using simulations on various application workloads and network models. This work was done while the author was at Boston University. It was partially supported by NSF CISE awards #1430145, #1414119, #1239021, and #1012798.
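    The abstract does not spell out the greedy heuristic, so the following is only a plausible baseline sketch of multi-capacity (vector) bin packing: each item carries a multi-dimensional demand and is placed into the first bin with enough remaining capacity in every dimension. The network (inter-item bandwidth) constraint that makes NCP harder than plain vector packing is deliberately omitted here, and all names and numbers are assumptions.

```python
# Illustrative first-fit greedy for multi-capacity (vector) bin packing.
# This is NOT the paper's NCP heuristic; it ignores the dependencies between
# items and only shows the baseline packing step.

def first_fit_vector_packing(items, bin_capacity):
    """items: list of demand vectors, e.g. (cpu, ram); bin_capacity: vector."""
    bins = []        # each bin is a list of remaining capacities per dimension
    placement = []   # (item index, bin index) pairs
    for idx, demand in enumerate(items):
        for b, remaining in enumerate(bins):
            if all(d <= r for d, r in zip(demand, remaining)):
                bins[b] = [r - d for d, r in zip(demand, remaining)]
                placement.append((idx, b))
                break
        else:  # no existing bin fits: open a new one
            bins.append([c - d for d, c in zip(demand, bin_capacity)])
            placement.append((idx, len(bins) - 1))
    return placement, len(bins)

items = [(4, 8), (2, 2), (3, 6), (1, 1)]          # (CPU cores, GB RAM)
print(first_fit_vector_packing(items, bin_capacity=(8, 16)))
```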

    Network-constrained packing of brokered workloads in virtualized environments

    Providing resource allocation with performance predictability guarantees is increasingly important in cloud platforms, especially for data-intensive applications, in which performance depends greatly on the available rates of data transfer between the various computing/storage hosts underlying the virtualized resources assigned to the application. Existing resource allocation solutions either assume that applications manage their data transfer between their virtualized resources, or that cloud providers manage their internal networking resources. With the increased prevalence of brokerage services in cloud platforms, there is a need for resource allocation solutions that provide predictability guarantees in settings in which neither application scheduling nor cloud provider resources can be managed or controlled by the broker. This paper addresses this problem: we define the Network-Constrained Packing (NCP) problem of finding the optimal mapping of brokered resources to applications with guaranteed performance predictability. We prove that NCP is NP-hard, and we define two special instances of the problem for which exact solutions can be found efficiently. We develop a greedy heuristic to solve the general instance of the NCP problem, and we evaluate its efficiency using simulations on various application workloads and network models. This work is supported by NSF CISE CNS awards #1347522, #1239021, and #1012798.

    Unified clustering and communication protocol for wireless sensor networks

    In this paper we present an energy-efficient cross-layer protocol for providing application-specific reservations in wireless sensor networks, called the Unified Clustering and Communication Protocol (UCCP). Our modular cross-layered framework satisfies three wireless sensor network requirements, namely the QoS requirements of heterogeneous applications, energy-aware clustering, and data forwarding by relay sensor nodes. Our unified design approach is motivated by providing an integrated and viable solution for self-organization and end-to-end communication in wireless sensor networks. Dynamic QoS-based reservation guarantees are provided using a reservation-based TDMA approach. Our novel energy-efficient clustering approach employs a multi-objective optimization technique based on operations research (OR) practices. We adopt a simple hierarchy in which relay nodes forward data messages from cluster heads to the sink, thus eliminating the overhead needed to maintain a routing protocol. Simulation results demonstrate that UCCP provides an energy-efficient and scalable solution that meets application-specific QoS demands on resource-constrained sensor nodes. Index Terms: wireless sensor networks, unified communication, optimization, clustering, quality of service.
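    As a minimal illustration of reservation-based TDMA slot assignment of the kind the abstract mentions, the sketch below has a cluster head hand out slots in proportion to each node's declared demand, scaling requests down when the frame is oversubscribed. The proportional rule and all names are assumptions for illustration, not UCCP's actual scheme.

```python
# Minimal sketch of reservation-based TDMA slot assignment by a cluster head.
# The proportional rule below is an illustrative assumption, not UCCP itself.

def assign_tdma_slots(demands, frame_slots):
    """demands: {node_id: requested slots per frame}. Scales requests down
    proportionally if the frame cannot satisfy all of them."""
    total = sum(demands.values())
    scale = min(1.0, frame_slots / total) if total else 0.0
    schedule, cursor = {}, 0
    for node, requested in demands.items():
        n = int(requested * scale)
        schedule[node] = list(range(cursor, cursor + n))  # contiguous slots
        cursor += n
    return schedule

# Three sensor nodes sharing a 10-slot frame.
print(assign_tdma_slots({"n1": 4, "n2": 2, "n3": 6}, frame_slots=10))
```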

    Scheduling Policies in Time and Frequency Domains for LTE Downlink Channel: A Performance Comparison

    A key feature of the Long-Term Evolution (LTE) system is that the packet scheduler can make use of channel quality information (CQI), which is periodically reported by user equipment either in an aggregate form for the whole downlink channel or separately for each available subchannel. This mechanism allows for wide discretion in resource allocation and has promoted the flourishing of several scheduling algorithms with different purposes. It is therefore of great interest to compare the performance of such algorithms under different scenarios. Here, we carry out a thorough performance analysis of different scheduling algorithms for saturated User Datagram Protocol (UDP) and Transmission Control Protocol (TCP) traffic sources, considering both the time- and frequency-domain versions of the schedulers, over both flat and frequency-selective channels. The analysis makes it possible to appreciate the differences among the scheduling algorithms and to assess the performance gain, in terms of cell capacity, user fairness, and packet service time, obtained by exploiting the richer, but heavier, information carried by per-subchannel CQI. An important part of this analysis is a throughput-guarantee scheduler, which we propose in this paper. The analysis reveals that the proposed scheduler provides a good tradeoff between cell capacity and fairness for both TCP and UDP traffic sources.
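    To make the role of per-subchannel CQI concrete, here is a hedged sketch of a frequency-domain proportional-fair rule: on each subchannel the scheduler picks the user with the largest ratio of instantaneous achievable rate (derived from the reported CQI) to average past throughput. This is the textbook PF metric, not the throughput-guarantee scheduler proposed in the paper, and the rate numbers below are invented for the example.

```python
# Sketch of frequency-domain proportional-fair scheduling driven by
# per-subchannel CQI. A textbook PF rule, not the paper's proposed
# throughput-guarantee scheduler.

def pf_schedule(rates, avg_tput, beta=0.1):
    """rates[u][k]: achievable rate of user u on subchannel k (from CQI);
    avg_tput[u]: exponentially averaged past throughput of user u."""
    n_users, n_sub = len(rates), len(rates[0])
    allocation = {}
    served = [0.0] * n_users
    for k in range(n_sub):
        u = max(range(n_users), key=lambda i: rates[i][k] / avg_tput[i])
        allocation[k] = u
        served[u] += rates[u][k]
    # Update the throughput averages for the next scheduling interval.
    new_avg = [(1 - beta) * avg_tput[i] + beta * served[i] for i in range(n_users)]
    return allocation, new_avg

rates = [[2.0, 0.5, 1.0], [1.0, 1.5, 0.8]]   # Mbit/s per subchannel, from CQI
print(pf_schedule(rates, avg_tput=[1.0, 1.0]))
```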