4,316 research outputs found

    Accuracy-Aware Adaptive Traffic Monitoring for Software Dataplanes

    Network operators have recently been developing multi-Gbps traffic monitoring tools on commodity hardware, as part of the packet-processing pipelines that realize software dataplanes. These solutions allow sophisticated per-packet monitoring to be executed with the processing power available on servers. Although advances in packet capture have enabled the interception of packets at high rates, bottlenecks can still arise in the monitoring process as a result of concurrent access to shared processor resources, variations in traffic skew, and unbalanced packet-rate spikes. In this paper we present an adaptive monitoring framework that is resilient to bottlenecks while keeping the accuracy of monitoring reports above a user-specified threshold. The framework dynamically reduces the set of measurement tasks under adverse conditions, and reconfigures them to recover potential accuracy degradation. To quantify monitoring accuracy at run time, it adopts a novel task-independent technique that generates accuracy estimates from recently observed traffic characteristics. With a prototype implementation based on a generic packet-processing pipeline, and using well-known measurement tasks, we show that the framework achieves lossless traffic monitoring over a wide range of conditions, significantly raises the level of monitoring accuracy, and performs adaptations at the timescale of milliseconds with limited overhead.
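    The abstract does not detail the adaptation loop, so the following is only a rough Python sketch of the general idea under stated assumptions: the class and method names, the load-based accuracy model, and the priority ordering of tasks are all hypothetical, not the paper's design.

```python
# Hypothetical sketch of accuracy-aware adaptive monitoring. The names and the
# accuracy model below are illustrative stand-ins, not the paper's API.
class AdaptiveMonitor:
    def __init__(self, tasks, accuracy_target=0.95):
        self.tasks = list(tasks)      # measurement tasks, most important first
        self.active = list(tasks)     # currently running subset (a prefix)
        self.accuracy_target = accuracy_target

    def estimate_accuracy(self, traffic_stats):
        # Task-independent accuracy estimate from recent traffic conditions;
        # a real model would be calibrated against observed characteristics.
        load = traffic_stats["pkt_rate"] / traffic_stats["capacity"]
        return max(0.0, 1.0 - 0.5 * max(0.0, load - 1.0))

    def adapt(self, traffic_stats):
        acc = self.estimate_accuracy(traffic_stats)
        if acc < self.accuracy_target and len(self.active) > 1:
            self.active.pop()         # shed the lowest-priority task
        elif acc > self.accuracy_target and len(self.active) < len(self.tasks):
            self.active.append(self.tasks[len(self.active)])  # restore one

    def process(self, packet, traffic_stats):
        self.adapt(traffic_stats)
        for task in self.active:
            task(packet)
```

    Shedding the lowest-priority task first and restoring tasks when headroom returns mirrors the stated goal of degrading the task set gracefully rather than dropping packets.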

    A survey of distributed data aggregation algorithms

    Distributed data aggregation is an important task, allowing the decentralized determination of meaningful global properties, which can then be used to direct the execution of other applications. The resulting values are derived by the distributed computation of functions like COUNT, SUM, and AVERAGE. Example applications include determining the network size, total storage capacity, average load, and majorities, among many others. In the last decade, many different approaches have been proposed, with different trade-offs in terms of accuracy, reliability, and message and time complexity. Due to the considerable amount and variety of aggregation algorithms, it can be difficult and time-consuming to determine which techniques are most appropriate for a specific setting, justifying the existence of a survey to aid in this task. This work reviews the state of the art on distributed data aggregation algorithms, providing three main contributions. First, it formally defines the concept of aggregation, characterizing the different types of aggregation functions. Second, it succinctly describes the main aggregation techniques, organizing them into a taxonomy. Finally, it provides some guidelines toward the selection and use of the most relevant techniques, summarizing their principal characteristics.
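    As a concrete example of the averaging family such surveys cover, here is a minimal simulation of push-sum gossip (Kempe et al.); the uniform random peer choice and the round count are illustrative choices, not prescribed by the survey.

```python
# Minimal simulation of push-sum gossip averaging: every node ends up with an
# estimate of the global AVERAGE of `values`.
import random

def push_sum(values, rounds=50):
    n = len(values)
    s = list(values)              # per-node running sums
    w = [1.0] * n                 # per-node running weights
    for _ in range(rounds):
        inbox = [(0.0, 0.0)] * n
        for i in range(n):
            j = random.randrange(n)      # pick a random gossip target
            s[i] *= 0.5                  # keep half of (s, w) locally...
            w[i] *= 0.5
            inbox[j] = (inbox[j][0] + s[i], inbox[j][1] + w[i])  # ...send half
        for i in range(n):               # deliver messages at end of round
            s[i] += inbox[i][0]
            w[i] += inbox[i][1]
    return [si / wi for si, wi in zip(s, w)]
```

    Running push_sum([10.0, 20.0, 30.0, 40.0]) leaves every node with an estimate close to the true average 25.0: the total sum and total weight are invariant across rounds, and the per-node ratio s/w converges everywhere.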

    Faster and More Accurate Measurement through Additive-Error Counters

    Counters are a fundamental building block for networking applications such as load balancing, traffic engineering, and intrusion detection, which require estimating flow sizes and identifying heavy-hitter flows. Existing works suggest replacing counters with shorter multiplicative-error estimators, which improve accuracy by fitting more of them within a given space. However, such estimators impose a computational overhead that degrades the measurement throughput. Instead, we propose additive-error estimators, which are simpler, faster, and more accurate when used for network measurement. Our solution is rigorously analyzed and empirically evaluated against several other measurement algorithms on real Internet traces. For a given error target, we improve the speed of the uncompressed solutions by 5×-30× and the space by up to 4×. Compared with existing state-of-the-art estimators, our solution is 9×-35× faster while being considerably more accurate.
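    The abstract does not reproduce the estimator construction itself; the sketch below shows the generic sampling idea behind additive-error counting, where the sampling probability p is an illustrative parameter rather than the paper's configuration.

```python
# Illustrative additive-error counter: increment with probability p, scale by
# 1/p on reads. This is the generic sampling trick, not necessarily the exact
# estimator from the paper.
import random

class SampledCounter:
    def __init__(self, p=1 / 16):
        self.p = p      # sampling probability; smaller p -> shorter counter
        self.c = 0      # stored (short) counter

    def increment(self):
        if random.random() < self.p:
            self.c += 1

    def estimate(self):
        # Unbiased: after n true increments, c ~ Binomial(n, p), so c/p has
        # mean n and standard deviation about sqrt(n/p) -- an additive error
        # rather than one proportional to the count.
        return self.c / self.p
```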

    Fractional Hitting Sets for Efficient and Lightweight Genomic Data Sketching

    The exponential increase in publicly available sequencing data and genomic resources necessitates the development of highly efficient methods for data processing and analysis. Locality-sensitive hashing techniques have successfully transformed large datasets into smaller, more manageable sketches while maintaining comparability using metrics such as the Jaccard and containment indices. However, fixed-size sketches encounter difficulties when applied to divergent datasets. Scalable sketching methods, such as Sourmash, provide valuable solutions but still lack resource-efficient, tailored indexing. Our objective is to create lighter sketches with comparable results while enhancing efficiency. We introduce the concept of Fractional Hitting Sets, a generalization of Universal Hitting Sets, which uniformly cover a specified fraction of the k-mer space. In theory and practice, we demonstrate the feasibility of achieving such coverage with simple but highly efficient schemes. By encoding the covered k-mers as super-k-mers, we provide a space-efficient exact representation that also enables optimized comparisons. Our novel tool, SuperSampler, implements this scheme, and experimental results on real bacterial collections closely match our theoretical findings. Compared with Sourmash, SuperSampler achieves similar outcomes while using an order of magnitude less space and memory and running several times faster. This highlights the potential of our approach in addressing the challenges presented by the ever-expanding landscape of genomic data.
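    As a rough illustration of fractional k-mer sampling, the sketch below keeps the k-mers whose hash lands in the bottom fraction f of the hash space and compares the resulting sets; the hash function, k, and f are illustrative, and the super-k-mer encoding that gives SuperSampler its space savings is omitted.

```python
# Simplified fractional k-mer sampling: keep k-mers whose hash falls in the
# bottom fraction f of the hash range, then estimate similarity from sketches.
import hashlib

def kmer_sketch(seq, k=21, f=0.01):
    sketch = set()
    max_hash = 2 ** 64
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        h = int.from_bytes(
            hashlib.blake2b(kmer.encode(), digest_size=8).digest(), "big")
        if h < f * max_hash:          # uniform coverage of a fraction f
            sketch.add(h)
    return sketch

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0
```

    Comparing two sketches with jaccard(kmer_sketch(a), kmer_sketch(b)) then approximates the Jaccard index of the full k-mer sets while storing only roughly a fraction f of the k-mers.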

    Accurate and Resource-Efficient Monitoring for Future Networks

    Monitoring functionality is a key component of any network management system. It is essential for profiling network resource usage, detecting attacks, and capturing the performance of the multitude of services using the network. Traditional monitoring solutions operate on long timescales, producing periodic reports that are mostly used for manual and infrequent network management tasks. However, these practices have recently been questioned by the advent of Software Defined Networking (SDN). By empowering management applications with the right tools to perform automatic, frequent, and fine-grained network reconfigurations, SDN has made these applications more dependent than before on the accuracy and timeliness of monitoring reports. As a result, monitoring systems are required to collect considerable amounts of heterogeneous measurement data, process them in real time, and expose the resulting knowledge on short timescales to network decision-making processes. Satisfying these requirements is extremely challenging given today's larger network scales, massive and dynamic traffic volumes, and stringent constraints on time availability and hardware resources. This PhD thesis tackles this important challenge by investigating how an accurate and resource-efficient monitoring function can be realised in the context of future, software-defined networks. The thesis provides novel monitoring methodologies, designs, and frameworks that scale with increasing network sizes and automatically adjust to changes in operating conditions, achieving efficient measurement collection and reporting, lightweight measurement-data processing, and timely delivery of monitoring knowledge.