
    When Queueing Meets Coding: Optimal-Latency Data Retrieving Scheme in Storage Clouds

    In this paper, we study the problem of reducing the delay of downloading data from cloud storage systems by leveraging multiple parallel threads, assuming that the data has been encoded and stored in the clouds using fixed-rate forward error correction (FEC) codes with parameters (n, k). That is, each file is divided into k equal-sized chunks, which are then expanded into n chunks such that any k chunks out of the n are sufficient to restore the original file. The model can be depicted as a multiple-server queue with arrivals of data retrieving requests and a server corresponding to each thread. However, this is not a typical queueing model, because a server can terminate its operation depending on when other servers complete their service (due to the redundancy that is spread across the threads). Hence, to the best of our knowledge, the analysis of this queueing model remains largely uncharted. Recent traces from Amazon S3 show that the time to retrieve a fixed-size chunk is random and can be approximated as a constant delay plus an i.i.d. exponentially distributed random variable. For the tractability of the theoretical analysis, we assume that the chunk downloading time is i.i.d. exponentially distributed. Under this assumption, we show that any work-conserving scheme is delay-optimal among all on-line scheduling schemes when k = 1. When k > 1, we find that a simple greedy scheme, which allocates all available threads to the head-of-line request, is delay-optimal among all on-line scheduling schemes. We also provide numerical results that point to the limitations of the exponential assumption and suggest further research directions.
    Comment: Originally accepted by IEEE Infocom 2014, 9 pages. Some statements in the Infocom paper are corrected.
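    To make the greedy scheme concrete, here is a minimal Monte Carlo sketch in Python; all names and parameters are illustrative, not from the paper. It assumes, as in the paper's analysis, i.i.d. exponential chunk download times, and additionally that enough distinct coded chunks remain for every freed thread, so the time to collect k chunks is Erlang-distributed.

```python
import random

def greedy_download_time(k, num_threads, mu=1.0):
    """Download time for one request under the greedy scheme, where
    all num_threads threads serve the head-of-line request.

    Assumes i.i.d. Exp(mu) chunk download times and n large enough
    (n >= k + num_threads - 1) that every freed thread can start a
    new, distinct chunk; by memorylessness, chunk completions then
    form a Poisson process of rate num_threads * mu, so the time to
    collect k chunks is Erlang(k, num_threads * mu).
    """
    return sum(random.expovariate(num_threads * mu) for _ in range(k))

samples = [greedy_download_time(k=4, num_threads=8) for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to k / (L * mu) = 4 / 8 = 0.5
```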

    Predictable Real-Time Wireless Networking For Sensing And Control

    Towards the end goal of providing predictable real-time wireless networking for sensing and control, we have developed MTA, a real-time routing protocol that predictably delivers data by their deadlines; PRKS, a scheduling protocol that ensures a target link reliability based on the physical-ratio-K (PRK) interference model, which is both realistic and amenable to distributed implementation; and a greedy scheduling algorithm that delivers as many packets as possible to the sink by a deadline in lossy multi-hop wireless sensor networks.

    Real-time routing is a basic element of closed-loop, real-time sensing and control, but it is challenging due to dynamic, uncertain link/path delays. The probabilistic nature of link/path delays makes even the basic problem of computing the distribution of path delays NP-hard, yet quantifying probabilistic path delays is a basic element of real-time routing and may well have to be executed by resource-constrained devices in a distributed manner. Moreover, the highly varying nature of link/path delays makes it necessary to adapt to in-situ delay conditions, but delay-based routing has been observed to suffer from instability, estimation error, and low data delivery performance in general. To address these challenges, we propose the Multi-Timescale Estimation (MTE) method: by accurately estimating the mean and variance of per-packet transmission time and by adapting to fast-varying queueing in an accurate, agile manner, MTE enables accurate, agile, and efficient estimation of probabilistic path delay bounds in a distributed manner. Based on MTE, we propose the Multi-Timescale Adaptation (MTA) routing protocol, which integrates the stability of an ETX-based directed acyclic graph (DAG) with the agility of spatiotemporal data flow control within the DAG to ensure real-time data delivery in the presence of dynamics and uncertainties. We also address the challenges of implementing MTE and MTA on resource-constrained devices such as TelosB motes. We evaluate MTA on the NetEye and Indriya sensor network testbeds and find that it significantly outperforms existing protocols, e.g., improving deadline success ratio by 89% and reducing transmission cost by a factor of 9.7.

    Predictable wireless communication is another basic enabler for networked sensing and control in many cyber-physical systems, yet co-channel interference remains a major source of uncertainty in wireless communication. Integrating the protocol model's locality and the physical model's high fidelity, the physical-ratio-K (PRK) interference model bridges the gap between suitability for distributed implementation and the enabled scheduling performance, and it is expected to serve as a foundation for distributed, predictable interference control. To realize the potential of the PRK model and to address the challenges of distributed PRK-based scheduling, we design the protocol PRKS. PRKS uses a control-theoretic approach to instantiating the PRK model according to in-situ network and environmental conditions, and, through purely local coordination, its distributed controllers converge to a state where the desired link reliability is guaranteed. PRKS uses local signal maps to address the challenges of anisotropic, asymmetric wireless communication and large interference ranges, and it leverages the different timescales of PRK model adaptation and data transmission to decouple protocol signaling from data transmission. Through a sensor network testbed-based measurement study, we observe that, unlike existing scheduling protocols, where link reliability is unpredictable and the reliability requirement satisfaction ratio can be as low as 0%, PRKS enables predictably high link reliability (e.g., 95%) under different network and environmental conditions without a priori knowledge of those conditions; moreover, through local distributed coordination, PRKS achieves channel spatial reuse very close to that of the state-of-the-art centralized scheduler while ensuring the required link reliability. Ensuring the required link reliability in PRKS also reduces communication delay and improves network throughput.

    Finally, we study the problem of scheduling packet transmissions to maximize the expected number of packets collected at the sink by a deadline in a multi-hop wireless sensor network with lossy links. Most existing work assumes error-free transmissions when interference constraints are satisfied, yet in practice links can be unreliable due to external interference, shadowing, and fading in harsh environments. We formulate the problem as a Markov decision process, which yields an optimal solution; however, this solution is computationally intractable due to the curse of dimensionality. We therefore propose the efficient, greedy Best Link First (BLF) scheduling protocol, prove it is optimal for the single-hop case, and provide an approach for distributed implementation. Extensive simulations show that BLF greatly enhances real-time data delivery, increasing deadline catch ratio by up to 50% compared with existing scheduling protocols across a wide range of network and traffic settings.
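    As a toy illustration of the greedy best-link-first idea, the following Python sketch simulates the single-hop case, where all links contend at the sink so only one transmission can proceed per slot. The link set and success probabilities are made-up placeholders, and this is only a sketch of the greedy rule, not the paper's full protocol.

```python
import random

def simulate_blf(success_probs, deadline):
    """One run of Best-Link-First in a single-hop network: each slot,
    transmit on the pending link with the highest success probability;
    return the number of packets delivered by the deadline."""
    pending = sorted(success_probs, reverse=True)  # best link first
    delivered = 0
    for _ in range(deadline):
        if not pending:
            break
        if random.random() < pending[0]:  # transmission succeeded
            delivered += 1
            pending.pop(0)
    return delivered

probs = [0.9, 0.7, 0.5, 0.3]  # hypothetical link reliabilities
runs = 50_000
avg = sum(simulate_blf(probs, deadline=6) for _ in range(runs)) / runs
print(f"average packets delivered by the deadline: {avg:.2f}")
```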

    A Novel Workload Allocation Strategy for Batch Jobs

    The distribution of computational tasks across a diverse set of geographically distributed heterogeneous resources is a critical issue in the realisation of true computational grids. Conventionally, workload allocation algorithms are divided into static and dynamic approaches. Whilst dynamic approaches frequently outperform static schemes, they usually require the collection and processing of detailed system information at frequent intervals - a task that can be both time-consuming and unreliable in the real world. This paper introduces a novel workload allocation algorithm for optimally distributing the workload produced by the arrival of batches of jobs. Results show that, for the arrival of batches of jobs, this workload allocation algorithm outperforms other commonly used algorithms in the static case. A hybrid scheduling approach (using this workload allocation algorithm), in which information about the speed of computational resources is inferred from previously completed jobs, is then introduced, and its efficiency is demonstrated on a real-world computational grid. Comparing these results with the same workload allocation algorithm used in the static case shows that the hybrid approach comprehensively outperforms the static approach.
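    The hybrid idea, a static allocation formula fed by speeds inferred from completed jobs, can be sketched as follows in Python. The node names, history format, and proportional-split rule are illustrative assumptions, since the abstract does not specify the algorithm's exact form.

```python
def allocate_batch(num_jobs, completed_history):
    """Split a batch across nodes in proportion to inferred speed.

    completed_history maps node -> list of (jobs_done, elapsed_s)
    records from earlier batches; inferred speed is jobs per second.
    A sketch of the hybrid scheme: a static split computed from
    dynamically inferred speeds.
    """
    speeds = {
        node: sum(j for j, _ in recs) / sum(t for _, t in recs)
        for node, recs in completed_history.items()
    }
    total = sum(speeds.values())
    # Proportional split, rounding while preserving the batch size.
    alloc, assigned = {}, 0
    nodes = list(speeds)
    for node in nodes[:-1]:
        share = round(num_jobs * speeds[node] / total)
        alloc[node] = share
        assigned += share
    alloc[nodes[-1]] = num_jobs - assigned
    return alloc

history = {"nodeA": [(40, 100.0)], "nodeB": [(10, 100.0)]}
print(allocate_batch(100, history))  # -> {'nodeA': 80, 'nodeB': 20}
```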

    Meeting Real-Time Constraint of Spectrum Management in TV Black-Space Access

    The TV set feedback feature standardized in the next-generation TV system, ATSC 3.0, will enable opportunistic access to active TV channels in future Cognitive Radio Networks. This new dynamic spectrum access approach is named black-space access, as it is complementary to current TV white space, which refers to inactive TV channels. TV black-space access can significantly increase the available spectrum of Cognitive Radio Networks in populated urban markets, where spectrum shortage is most severe while TV white space is very limited. However, to enable TV black-space access, a secondary user has to evacuate a TV channel in a timely manner when a TV user arrives. This strict real-time constraint is a unique challenge for the spectrum management infrastructure of Cognitive Radio Networks. In this paper, the real-time performance of spectrum management with regard to the degree of centralization of the infrastructure is modeled and tested. Based on collected empirical network latency and database response times, we analyze the average evacuation time under four structures of spectrum management infrastructure: full distribution, city-wide centralization, nationwide centralization, and semi-national centralization. The results show that nationwide centralization may not meet the real-time requirement, while semi-national centralization, which uses multiple co-located independent spectrum managers, can achieve real-time performance while keeping most of the operational advantages of a fully centralized structure.
    Comment: 9 pages, 7 figures, Technical Report
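    A back-of-the-envelope Python sketch of the evacuation-time comparison: evacuation time is modeled additively as feedback latency plus network round trip plus database response, then checked against a real-time deadline. All numbers below are made-up placeholders, not the paper's measurements.

```python
def mean_evacuation_time(feedback_ms, rtt_ms, db_ms):
    """Mean channel-evacuation time: TV-set feedback reaching the
    spectrum manager, plus the network round trip, plus the database
    response (a simplified additive model)."""
    return feedback_ms + rtt_ms + db_ms

DEADLINE_MS = 100  # hypothetical real-time evacuation deadline

layouts = {  # placeholder latencies per infrastructure structure
    "full distribution": mean_evacuation_time(5, 2, 5),
    "city-wide centralization": mean_evacuation_time(5, 10, 20),
    "semi-national centralization": mean_evacuation_time(5, 40, 30),
    "nationwide centralization": mean_evacuation_time(5, 80, 120),
}
for name, t in layouts.items():
    verdict = "meets" if t <= DEADLINE_MS else "misses"
    print(f"{name}: {t} ms ({verdict} the {DEADLINE_MS} ms deadline)")
```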

    Scheduling with Predictions and the Price of Misprediction

    In many traditional job scheduling settings, it is assumed that one knows the time it will take for a job to complete service. In such cases, strategies such as shortest job first can be used to improve performance in terms of measures such as the average time a job waits in the system. We consider the setting where the service time is not known but is predicted, for example, by a machine learning algorithm. Our main result is the derivation, under natural assumptions, of formulae for the performance of several strategies for queueing systems that use predicted service times to schedule jobs. As part of our analysis, we suggest the framework of the "price of misprediction," which offers a measure of the cost of using predicted information.
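    One way to see the "price of misprediction" concretely is a toy batch experiment: run shortest-job-first on noisy predictions and compare the mean time in system against the clairvoyant ordering. The Python sketch below uses a made-up lognormal noise model and a batch of jobs rather than the paper's queueing setting, so the ratio it prints only illustrates the concept, not the paper's formulae.

```python
import random

def mean_time_in_system(jobs, key):
    """Mean time in system when all jobs are present at t=0 and are
    run one at a time in the order given by `key`."""
    clock, total = 0.0, 0.0
    for job in sorted(jobs, key=key):
        clock += job[0]  # the true service time is what elapses
        total += clock
    return total / len(jobs)

random.seed(1)
# (true service time, noisy prediction) pairs; noise model is illustrative.
jobs = [(s, s * random.lognormvariate(0.0, 0.5))
        for s in (random.expovariate(1.0) for _ in range(10_000))]

w_true = mean_time_in_system(jobs, key=lambda j: j[0])  # clairvoyant SJF
w_pred = mean_time_in_system(jobs, key=lambda j: j[1])  # SJF on predictions
print(f"price of misprediction (toy): {w_pred / w_true:.3f}")
```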