
    The Blacklisting Memory Scheduler: Balancing Performance, Fairness and Complexity

    In a multicore system, applications running on different cores interfere at main memory. This inter-application interference degrades overall system performance and unfairly slows down applications. Prior works have developed application-aware memory schedulers to tackle this problem. State-of-the-art application-aware memory schedulers prioritize requests of applications that are vulnerable to interference, by ranking individual applications based on their memory access characteristics and enforcing a total rank order. In this paper, we observe that state-of-the-art application-aware memory schedulers have two major shortcomings. First, such schedulers trade off hardware complexity to achieve high performance or fairness, since ranking applications with a total order leads to high hardware complexity. Second, ranking can unfairly slow down applications that are at the bottom of the ranking stack. To overcome these shortcomings, we propose the Blacklisting Memory Scheduler (BLISS), which achieves high system performance and fairness while incurring low hardware complexity, based on two observations. First, we find that, to mitigate interference, it is sufficient to separate applications into only two groups. Second, we show that this grouping can be efficiently performed by simply counting the number of consecutive requests served from each application. We evaluate BLISS across a wide variety of workloads and system configurations and compare its performance and hardware complexity with those of five state-of-the-art memory schedulers. Our evaluations show that BLISS achieves 5% better system performance and 25% better fairness than the best-performing previous scheduler while greatly reducing the critical path latency and hardware area cost of the memory scheduler (by 79% and 43%, respectively), thereby achieving a good trade-off between performance, fairness, and hardware complexity.
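
    The grouping idea described in the abstract — blacklist an application once it has had too many requests served back to back — can be sketched in software (a hypothetical simplification; the paper describes a hardware design, and `threshold` and `clear_interval` here are illustrative parameters, not the paper's values):

```python
class BlissScheduler:
    """Software sketch of the blacklisting idea: an application is
    blacklisted once `threshold` of its requests are served consecutively,
    and blacklisted applications lose priority until the blacklist is
    periodically cleared. Hypothetical simplification, not the hardware
    design evaluated in the paper."""

    def __init__(self, threshold=4, clear_interval=10000):
        self.threshold = threshold
        self.clear_interval = clear_interval
        self.blacklist = set()
        self.last_app = None
        self.streak = 0
        self.served = 0

    def pick(self, requests):
        """requests: list of (app_id, payload) pairs in arrival order.
        Non-blacklisted requests are preferred; ties fall back to FCFS."""
        candidates = [r for r in requests if r[0] not in self.blacklist] or requests
        app_id, payload = candidates[0]
        # Count consecutive requests served from the same application.
        self.streak = self.streak + 1 if app_id == self.last_app else 1
        self.last_app = app_id
        if self.streak >= self.threshold:
            self.blacklist.add(app_id)
            self.streak = 0
        # Periodically clear the blacklist so the grouping adapts over time.
        self.served += 1
        if self.served % self.clear_interval == 0:
            self.blacklist.clear()
        requests.remove((app_id, payload))
        return app_id, payload
```

    Note that only a per-application counter and a set membership bit are needed, which is the source of the complexity reduction relative to maintaining a total rank order.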

    CA-AQM: Channel-Aware Active Queue Management for Wireless Networks

    In a wireless network, data transmission suffers from varying signal strengths and channel bit error rates. To ensure successful packet reception under different channel conditions, automatic bit rate control schemes adjust the transmission bit rates based on the perceived channel conditions. This leads to a wireless network with diverse bit rates. On the other hand, TCP is unaware of such rate diversity when it performs flow rate control in wireless networks. Experiments show that the throughput of all flows in a wireless network is driven by the flow with the lowest bit rate (i.e., the one with the worst channel condition). This not only leads to low channel utilization, but also to fluctuating performance for all flows independent of their individual channel conditions. To address this problem, we conduct an optimization-based analytical study of this behavior of TCP. Based on this optimization framework, we present a joint flow control and active queue management solution. The presented channel-aware active queue management (CA-AQM) scheme provides congestion signals for flow control based not only on the queue length but also on the channel condition and the transmission bit rate. Theoretical analysis shows that our solution isolates the performance of individual flows with diverse bit rates. Further, it stabilizes the queue lengths and provides a time-fair channel allocation. Test-bed experiments validate our theoretical claims over a multi-rate wireless network testbed.
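
    To picture what a channel-aware congestion signal looks like, here is an illustrative marking rule (the parameters and the exact form are made up for illustration, not the paper's control law): the signal grows with queue backlog and is scaled up for low-bit-rate flows, so slow flows back off harder and channel *time*, rather than throughput, is shared fairly:

```python
def mark_probability(queue_len, bit_rate, q_ref=50.0, max_rate=54.0, gain=0.02):
    """Illustrative channel-aware marking probability (hypothetical
    parameters, not CA-AQM's actual equations). `q_ref` is a target
    backlog in packets, `bit_rate`/`max_rate` are in Mb/s."""
    backlog_term = max(queue_len - q_ref, 0.0) * gain
    rate_term = max_rate / bit_rate  # lower bit rate -> stronger signal
    return min(1.0, backlog_term * rate_term)
```

    With the same backlog, a 6 Mb/s flow sees a much stronger congestion signal than a 54 Mb/s flow, which is the qualitative behavior needed for time-fair allocation.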

    Performance analysis of carrier aggregation for various mobile network implementation scenarios based on allocated spectrum

    Carrier Aggregation (CA) is one of the Long Term Evolution Advanced (LTE-A) features that allow mobile network operators (MNOs) to combine multiple component carriers (CCs) across the available spectrum into a wider bandwidth channel, increasing the network data throughput and overall capacity. CA has the potential to enhance data rates and network performance in the downlink, the uplink, or both, and it supports aggregation of frequency division duplexing (FDD) as well as time division duplexing (TDD) carriers. The technique enables an MNO to exploit fragmented spectrum allocations and can also be used to aggregate licensed and unlicensed carrier spectrum. This paper analyzes the performance gains and the complexity that arise from aggregating three inter-band component carriers (3CC) as compared to aggregating 2CC, using the Vienna LTE System Level simulator. The results show a considerable growth in average cell throughput when 3CC aggregation is used instead of 2CC aggregation, at the expense of a reduction in the fairness index. The reduced fairness index implies that the scheduler faces a harder resource allocation task due to the added component carrier, and compensating for the decrease could add to scheduler design complexity. The proposed scheme can be adopted to combine various component carriers, increasing the bandwidth and hence the data rates.
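
    The raw capacity effect of adding a third component carrier is simple arithmetic (the carrier widths below are hypothetical; the 100 physical resource blocks per 20 MHz carrier is the standard LTE mapping, i.e., 5 PRBs per MHz):

```python
def aggregate_capacity(carriers_mhz, prbs_per_mhz=5):
    """Total aggregated bandwidth (MHz) and PRB budget for a set of
    component carriers. 5 PRBs/MHz matches LTE's 100 PRBs per 20 MHz."""
    total_mhz = sum(carriers_mhz)
    return total_mhz, total_mhz * prbs_per_mhz

# 2CC vs. 3CC aggregation with hypothetical 20 MHz carriers: the third
# carrier widens the channel, but also gives the scheduler 50% more PRBs
# to allocate each subframe, which is where the added complexity comes from.
two_cc = aggregate_capacity([20, 20])        # (40, 200)
three_cc = aggregate_capacity([20, 20, 20])  # (60, 300)
```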

    Defining and Measuring The Creation of Quality Jobs

    Our research is intended to support our peers in the Community Development Financial Institution (CDFI) industry who, through their financing, have served low-income and other disadvantaged communities for two decades. While the CDFI industry has been instrumental in supporting job creation across the U.S., we believe that now is the time to focus greater attention on the quality of the jobs created in order to combat rising income and wealth inequality. Through a better understanding of what defines a quality job and a set of practical methods for measuring the quality of jobs created, we believe CDFIs and others in the impact investing community will be better positioned to make more effective investments that support good jobs for workers, businesses, and communities.

    Transform-domain analysis of packet delay in network nodes with QoS-aware scheduling

    In order to differentiate the perceived QoS between traffic classes in heterogeneous packet networks, equipment discriminates incoming packets based on their class, particularly in the way queued packets are scheduled for further transmission. We first review a common stochastic modelling framework in which scheduling mechanisms can be evaluated, especially with regard to the resulting per-class delay distribution. For this, a discrete-time single-server queue is considered with two classes of packet arrivals, either delay-sensitive (class 1) or delay-tolerant (class 2). The steady-state analysis relies on the use of well-chosen supplementary variables and is mainly done in the transform domain. Second, we propose and analyse a new type of scheduling mechanism that allows precise control over the amount of delay differentiation between the classes. The idea is to introduce N reserved places in the queue, intended for future arrivals of class 1.
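
    The reserved-places idea can be sketched as a queue holding both packets and N reservation markers (a simplified software model with assumed behavior for unclaimed reservations, not the discipline as analysed in the paper): a class-2 arrival joins at the tail behind every marker, while a class-1 arrival claims the front-most marker and a fresh marker is appended so that N reservations always remain.

```python
class ReservedPlacesQueue:
    """Simplified sketch of an N-reserved-places queue. Unclaimed
    reservations reaching the head are recycled to the tail (a
    simplifying assumption made here for illustration)."""

    R = object()  # reservation-marker sentinel

    def __init__(self, n_reserved):
        self.slots = [self.R] * n_reserved

    def arrive(self, pkt, cls):
        if cls == 1:
            # Claim the earliest reserved place, then re-reserve at the tail.
            i = self.slots.index(self.R)
            self.slots[i] = pkt
            self.slots.append(self.R)
        else:
            # Class-2 packets queue behind all existing reservations.
            self.slots.append(pkt)

    def serve(self):
        # Serve the head-of-line packet; recycle unclaimed markers.
        if all(s is self.R for s in self.slots):
            return None
        while self.slots[0] is self.R:
            self.slots.append(self.slots.pop(0))
        return self.slots.pop(0)
```

    With N = 2, a class-1 packet arriving behind two queued class-2 packets is served first, which is exactly the delay differentiation the parameter N controls: larger N lets class-1 arrivals jump further ahead.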