1,930 research outputs found

    TOFEC: Achieving Optimal Throughput-Delay Trade-off of Cloud Storage Using Erasure Codes

    Full text link
    Our paper presents solutions that combine erasure coding, parallel connections to the storage cloud, and limited chunking (i.e., dividing an object into a few smaller segments) to significantly improve the delay performance of uploading and downloading data in and out of cloud storage. TOFEC is a strategy that helps a front-end proxy adapt to the level of workload by treating scalable cloud storage (e.g., Amazon S3) as a shared resource requiring admission control. Under light workloads, TOFEC creates more, smaller chunks and uses more parallel connections per file, minimizing service delay. Under heavy workloads, TOFEC automatically reduces the level of chunking (fewer chunks of larger size) and uses fewer parallel connections to reduce overhead, resulting in higher throughput and avoiding queueing delay. Our trace-driven simulation results show that TOFEC's adaptation mechanism converges to an appropriate code that provides the optimal delay-throughput trade-off without reducing system capacity. Compared to a non-adaptive strategy optimized for throughput, TOFEC delivers 2.5x lower latency under light workloads; compared to a non-adaptive strategy optimized for latency, TOFEC can scale to support over 3x as many requests.
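The load-adaptive chunking described above can be sketched as a simple policy: pick a chunking level from the observed backlog, with many small chunks under light load and few large chunks under heavy load. This is an illustrative sketch only, not TOFEC's actual algorithm; the function name and thresholds are hypothetical.

```python
def choose_chunking(queue_length, max_chunks=8, min_chunks=1):
    """Map proxy backlog to a chunking level: lighter load -> more chunks.

    Hypothetical policy in the spirit of the abstract: an idle queue gets
    the maximum chunking (and hence the most parallel connections per
    file); the chunk count is roughly halved each time the backlog
    doubles, down to a single chunk under heavy load.
    """
    if queue_length == 0:
        return max_chunks
    # bit_length() grows by one each time the backlog doubles; cap the
    # shift so we never go below a single chunk for max_chunks == 8.
    level = max_chunks >> min(queue_length.bit_length(), 3)
    return max(level, min_chunks)
```

In a real system the proxy would also scale the number of parallel connections with the chunk count and re-evaluate the level as requests arrive and complete.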

    Verification-Based Decoding for Rateless Codes in the Presence of Errors and Erasures

    Get PDF
    In this paper, verification-based decoding is proposed for correcting and filling in lost or erased packets in multicast services over data networks that employ rateless codes. Patterns of preferred parity-check equations are presented to reduce the average number of parity-check symbols required. Since the locations of unverified symbols are known, erasures and errors have the same effect in terms of the overhead required for successful decoding. Simulation results show that, for errors only, erasures only, or a combination of both at a 10% error/erasure probability, 78% of the messages can be recovered with a 50% overhead, whereas 99% of the messages can be recovered with a 100% overhead.
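The core mechanism behind filling in erased packets with parity-check equations can be illustrated by a single "peeling" step, common to rateless and LDPC-style erasure decoding: when every symbol in a parity-check equation is known except one, the missing symbol is the XOR of the rest. This sketch shows only that generic step, not the paper's verification rules.

```python
def recover_erasure(check_symbols):
    """Fill in one erased symbol from a parity-check equation.

    check_symbols: list of ints, with exactly one entry None (the erasure).
    The constraint assumed here is XOR of all symbols == 0, so the erased
    symbol equals the XOR of the known ones.
    """
    acc = 0
    missing = 0
    for s in check_symbols:
        if s is None:
            missing += 1
        else:
            acc ^= s
    if missing != 1:
        raise ValueError("peeling needs exactly one unknown symbol")
    return acc
```

A full decoder iterates this step: each recovered symbol may reduce other parity checks to a single unknown, letting recovery cascade until all erasures are filled or no such check remains.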

    Mesh-based content routing using XML

    Get PDF

    Exploring the data needs and sources for severe weather impact forecasts and warnings : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Emergency Management at Massey University, Wellington, New Zealand

    Get PDF
    Figures 2.4 & 2.5 are re-used with permission. The journal articles in Appendices J, L & M are republished under their respective Creative Commons licenses. Appendix K has been removed from the thesis until 1 July 2022 in accordance with the American Meteorological Society Copyright Policy, but is available open access at https://doi.org/10.1175/WCAS-D-21-0093.1
    Early warning systems offer an essential, timely, and cost-effective approach for mitigating the impacts of severe weather hazards. Yet notable historic severe weather events have exposed major communication gaps between warning services and target audiences, resulting in widespread losses. The World Meteorological Organization (WMO) has proposed Impact Forecasts and Warnings (IFWs) to address these communication gaps by bringing in knowledge of exposure, vulnerability, and impacts, thus leading to warnings that may better align with the position, needs, and capabilities of target audiences. A gap was identified in the literature around implementing IFWs: that of accessing the required knowledge and data on impacts, vulnerability, and exposure. This research aims to address this gap by exploring the data needs of IFWs and identifying existing and potential data sources to support those needs. Using Grounded Theory (GT), 39 interviews were conducted with users and creators of hazard, impact, vulnerability, and exposure (HIVE) data within and outside of Aotearoa New Zealand. Additionally, three virtual workshops provided triangulation with practitioners. In total, 59 people participated in this research. The resulting qualitative data were analysed using GT coding techniques, memo-writing, and diagramming. Findings indicate a growing need for gathering and using impact, vulnerability, and exposure data for IFWs. New insights highlight a growing need to model and warn for social and health impacts.
    Findings further show that many sources of HIVE data are collected for emergency response and other uses with relevant application to IFWs. Partnerships and collaboration lie at the heart of using HIVE data both for IFWs and for disaster risk reduction. This thesis contributes to the global understanding of how hydrometeorological and emergency management services can implement IFWs by advancing the discussion around implementing IFWs as per the WMO's guidelines, and around building up disaster risk data in accordance with the Sendai Framework Priorities. An important outcome of this research is the provision of a pathway for stakeholders to identify the data sources and partnerships required for implementing a hydrometeorological IFW system.

    A digital fountain retrospective

    Full text link
    We introduced the concept of a digital fountain as a scalable approach to reliable multicast, realized with fast and practical erasure codes, in a paper published in ACM SIGCOMM '98. This invited editorial, on the occasion of the 50th anniversary of the SIG, reflects on the trajectory of work leading up to our approach and the numerous developments in the field in the subsequent 21 years. We discuss advances in rateless codes, efficient implementations, applications of digital fountains in distributed storage systems, and connections to invertible Bloom lookup tables. Published version.

    QuickCast: Fast and Efficient Inter-Datacenter Transfers using Forwarding Tree Cohorts

    Full text link
    Large inter-datacenter transfers are crucial for cloud service efficiency and are increasingly used by organizations that have dedicated wide area networks between datacenters. A recent work uses multicast forwarding trees to reduce the bandwidth needs and improve completion times of point-to-multipoint transfers. Using a single forwarding tree per transfer, however, leads to poor performance because the slowest receiver dictates the completion time for all receivers. Using multiple forwarding trees per transfer alleviates this concern--the average receiver could finish early; however, if done naively, bandwidth usage would also increase, and it is a priori unclear how best to partition receivers, how to construct the multiple trees, and how to determine the rate and schedule of flows on these trees. This paper presents QuickCast, a first solution to these problems. Using simulations on real-world network topologies, we see that QuickCast can speed up the average receiver's completion time by as much as 10x while using only 1.04x more bandwidth; further, the completion time for all receivers also improves by as much as 1.6x at high loads. Comment: [Extended Version] Accepted for presentation in IEEE INFOCOM 2018, Honolulu, H
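The receiver-partitioning idea above can be sketched minimally: group receivers into rate-similar cohorts so that one slow receiver no longer dictates everyone's completion time. QuickCast's actual tree construction and flow scheduling are more involved; this hypothetical helper only shows the cohort split.

```python
def partition_receivers(rates, num_trees=2):
    """Split receivers into rate-similar cohorts, one per forwarding tree.

    rates: dict mapping receiver id -> achievable rate. Sorting by rate
    before splitting keeps fast receivers together, so they can finish
    early on their own tree instead of waiting for the slowest receiver.
    """
    ordered = sorted(rates, key=rates.get, reverse=True)
    size = -(-len(ordered) // num_trees)  # ceiling division
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]
```

Sending the same data over separate trees for each cohort is what trades a small amount of extra bandwidth for much better average completion time.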

    NASA Tech Briefs, August 2013

    Get PDF
    Topics covered include: Radial Internal Material Handling System (RIMS) for Circular Habitat Volumes; Conical Seat Shut-Off Valve; Impact-Actuated Digging Tool for Lunar Excavation; Flexible Mechanical Conveyors for Regolith Extraction and Transport; Remote Memory Access Protocol Target Node Intellectual Property; Soft Decision Analyzer; Distributed Prognostics and Health Management with a Wireless Network Architecture; Minimal Power Latch for Single-Slope ADCs; Bismuth Passivation Technique for High-Resolution X-Ray Detectors; High-Strength, Super-elastic Compounds; Cu-Cr-Nb-Zr Alloy for Rocket Engines and Other High-Heat- Flux Applications; Microgravity Storage Vessels and Conveying-Line Feeders for Cohesive Regolith; CRUQS: A Miniature Fine Sun Sensor for Nanosatellites; On-Chip Microfluidic Components for In Situ Analysis, Separation, and Detection of Amino Acids; Spectroscopic Determination of Trace Contaminants in High-Purity Oxygen; Method of Separating Oxygen From Spacecraft Cabin Air to Enable Extravehicular Activities; Atomic Force Microscope Mediated Chromatography; Sample Analysis at Mars Instrument Simulator; Access Control of Web- and Java-Based Applications; Tool for Automated Retrieval of Generic Event Tracks (TARGET); Bilayer Protograph Codes for Half-Duplex Relay Channels; Influence of Computational Drop Representation in LES of a Droplet-Laden Mixing Layer