    ClassBench: A Packet Classification Benchmark

    Due to the importance and complexity of the packet classification problem, a myriad of algorithms and resulting implementations exist. The performance and capacity of many algorithms and classification devices, including TCAMs, depend upon properties of the filter set and query patterns. Unlike microprocessors in the field of computer architecture, there are no standard performance evaluation tools or techniques available to evaluate packet classification algorithms and products. Network service providers are reluctant to distribute copies of real filter sets for security and confidentiality reasons, hence realistic test vectors are a scarce commodity. The small subset of the research community that obtains real filter sets either limits performance evaluation to the small sample space or employs ad hoc methods of modifying those filter sets. In response to this problem, we present ClassBench, a suite of tools for benchmarking packet classification algorithms and devices. ClassBench includes a Filter Set Generator that produces synthetic filter sets that accurately model the characteristics of real filter sets. Along with varying the size of the filter sets, we provide high-level control over the composition of the filters in the resulting filter set. The tool suite also includes a Trace Generator that produces a sequence of packet headers to exercise the synthetic filter set. Along with specifying the relative size of the trace, we provide a simple mechanism for controlling locality of reference in the trace. While we have already found ClassBench to be very useful in our own research, we seek to initiate a broader discussion and solicit input from the community to guide the refinement of the tools and codification of a formal benchmarking methodology.
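
    As an aside for readers unfamiliar with the problem, the kind of filter set and header trace that such a benchmark exercises can be sketched as a first-matching-rule search over 5-tuple rules. The Python sketch below is purely illustrative and is not part of the ClassBench tools; the rules, field layout and actions are made up.

        import ipaddress

        # Hypothetical 5-tuple filters: (src prefix, dst prefix, protocol, dst port range, action).
        # A ClassBench-style generator emits thousands of such rules with realistic structure.
        FILTERS = [
            (ipaddress.ip_network("10.0.0.0/8"), ipaddress.ip_network("0.0.0.0/0"), 6, (80, 80), "permit"),
            (ipaddress.ip_network("0.0.0.0/0"), ipaddress.ip_network("192.168.1.0/24"), 17, (53, 53), "permit"),
            (ipaddress.ip_network("0.0.0.0/0"), ipaddress.ip_network("0.0.0.0/0"), 0, (0, 65535), "deny"),
        ]

        def classify(src, dst, proto, dport):
            """Return the action of the first (highest-priority) matching filter."""
            src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
            for f_src, f_dst, f_proto, (lo, hi), action in FILTERS:
                # protocol 0 is used here as a wildcard; real rule sets encode this differently
                if src in f_src and dst in f_dst and f_proto in (0, proto) and lo <= dport <= hi:
                    return action
            return "deny"

        # A trace generator produces sequences of headers like these to exercise the filter set.
        print(classify("10.1.2.3", "8.8.8.8", 6, 80))         # permit (rule 1)
        print(classify("172.16.0.1", "192.168.1.5", 17, 53))  # permit (rule 2)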

    Models, Algorithms, and Architectures for Scalable Packet Classification

    The growth and diversification of the Internet imposes increasing demands on the performance and functionality of network infrastructure. Routers, the devices responsible for the switching and directing of traffic in the Internet, are being called upon to not only handle increased volumes of traffic at higher speeds, but also impose tighter security policies and provide support for a richer set of network services. This dissertation addresses the searching tasks performed by Internet routers in order to forward packets and apply network services to packets belonging to defined traffic flows. As these searching tasks must be performed for each packet traversing the router, the speed and scalability of the solutions to the route lookup and packet classification problems largely determine the realizable performance of the router, and hence the Internet as a whole. Despite the energetic attention of the academic and corporate research communities, there remains a need for search engines that scale to support faster communication links, larger route tables and filter sets and increasingly complex filters. The major contributions of this work include the design and analysis of a scalable hardware implementation of a Longest Prefix Matching (LPM) search engine for route lookup, a survey and taxonomy of packet classification techniques, a thorough analysis of packet classification filter sets, the design and analysis of a suite of performance evaluation tools for packet classification algorithms and devices, and a new packet classification algorithm that scales to support high-speed links and large filter sets classifying on additional packet fields.
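
    As background for the route lookup problem mentioned above, longest prefix matching can be sketched in software with a binary trie. The example below illustrates only the LPM semantics, not the dissertation's hardware search engine; the prefixes and next hops are hypothetical.

        # Minimal longest-prefix-match lookup over a binary trie (illustrative software sketch).
        class TrieNode:
            __slots__ = ("children", "next_hop")
            def __init__(self):
                self.children = [None, None]  # child for bit 0 and bit 1
                self.next_hop = None          # set if a prefix ends at this node

        def insert(root, prefix_bits, next_hop):
            node = root
            for bit in prefix_bits:
                if node.children[bit] is None:
                    node.children[bit] = TrieNode()
                node = node.children[bit]
            node.next_hop = next_hop

        def lookup(root, addr_bits):
            node, best = root, None
            for bit in addr_bits:
                if node.next_hop is not None:
                    best = node.next_hop      # remember the longest prefix seen so far
                node = node.children[bit]
                if node is None:
                    break
            else:
                if node.next_hop is not None:
                    best = node.next_hop
            return best

        root = TrieNode()
        insert(root, [1, 0], "port A")        # prefix 10*
        insert(root, [1, 0, 1, 1], "port B")  # prefix 1011*
        print(lookup(root, [1, 0, 1, 1, 0, 0, 1, 0]))  # port B: the longest match wins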

    Viewpoint-based testing of concurrent components

    The use of multiple partial viewpoints is recommended for specification. We believe they can also be useful for devising testing strategies. In this paper, we use Object-Z to formally specify concurrent Java components from viewpoints based on the separation of application and synchronisation concerns inherent in Java monitors. We then use the Test-Template Framework on the Object-Z viewpoints to devise a strategy for testing the components. When combining the test templates for the different viewpoints we focus on the observable behaviour of the application to systematically derive a practical testing strategy. The Producer-Consumer and Readers-Writers problems are considered as case studies.
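
    For readers unfamiliar with the case study, the separation of concerns that the viewpoints capture can be illustrated with a monitor-style bounded buffer. The sketch below is a Python analogue of such a Producer-Consumer component (the paper itself specifies Java components in Object-Z); the class name and its details are illustrative only.

        import threading

        # Python analogue of a monitor-based bounded buffer. The synchronisation viewpoint is
        # the lock/condition logic; the application viewpoint is the underlying queue behaviour.
        class BoundedBuffer:
            def __init__(self, capacity):
                self.items = []
                self.capacity = capacity
                self.cond = threading.Condition()

            def put(self, item):
                with self.cond:
                    while len(self.items) >= self.capacity:   # synchronisation concern
                        self.cond.wait()
                    self.items.append(item)                   # application concern
                    self.cond.notify_all()

            def take(self):
                with self.cond:
                    while not self.items:                     # synchronisation concern
                        self.cond.wait()
                    item = self.items.pop(0)                  # application concern
                    self.cond.notify_all()
                    return item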

    Finding the needles in the haystack: Generating legal test inputs for object-oriented programs

    A test input for an object-oriented program typically consists of a sequence of method calls that use the API defined by the program under test. Generating legal test inputs can be challenging because, for some programs, the set of legal method sequences is much smaller than the set of all possible sequences; without a formal specification of legal sequences, an input generator is bound to produce mostly illegal sequences. We propose a scalable technique that combines dynamic analysis with random testing to help an input generator create legal test inputs without a formal specification, even for programs in which most sequences are illegal. The technique uses an example execution of the program to infer a model of legal call sequences, and uses the model to guide a random input generator towards legal but behaviorally-diverse sequences. We have implemented our technique for Java, in a tool called Palulu, and evaluated its effectiveness in creating legal inputs for real programs. Our experimental results indicate that the technique is effective and scalable. Our preliminary evaluation indicates that the technique can quickly generate legal sequences for complex inputs: in a case study, Palulu created legal test inputs in seconds for a set of complex classes, for which it took an expert thirty minutes to generate a single legal input.
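
    The core idea, inferring a model of legal call sequences and using it to steer random generation, can be sketched as follows. This is an illustration only, not the Palulu implementation: the method names and the follower map are hypothetical, standing in for a model mined from an example execution.

        import random

        OBSERVED_FOLLOWERS = {              # hypothetical model inferred from a sample run:
            "<init>":  ["connect"],         # method -> methods observed to legally follow it
            "connect": ["login", "close"],
            "login":   ["query", "close"],
            "query":   ["query", "close"],
        }

        def generate_sequence(max_len=6, seed=0):
            rng = random.Random(seed)
            seq, state = [], "<init>"
            for _ in range(max_len):
                choices = OBSERVED_FOLLOWERS.get(state, [])
                if not choices:
                    break
                state = rng.choice(choices)  # random, but only along observed legal edges
                seq.append(state)
                if state == "close":
                    break
            return seq

        print(generate_sequence())  # e.g. ['connect', 'login', 'query', 'close']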

    Energy Efficient Hardware Accelerators for Packet Classification and String Matching

    This thesis focuses on the design of new algorithms and energy efficient high throughput hardware accelerators that implement packet classification and fixed string matching. These computationally heavy and memory intensive tasks are used by networking equipment to inspect all packets at wire speed. The constant growth in Internet usage has made them increasingly difficult to implement at core network line speeds. Packet classification is used to sort packets into different flows by comparing their headers to a list of rules. A flow is used to decide a packet’s priority and the manner in which it is processed. Fixed string matching is used to inspect a packet’s payload to check if it contains any strings associated with known viruses, attacks or other harmful activities. The contributions of this thesis towards the area of packet classification are hardware accelerators that allow packet classification to be implemented at core network line speeds when classifying packets using rulesets containing tens of thousands of rules. The hardware accelerators use modified versions of the HyperCuts packet classification algorithm. An adaptive clocking unit is also presented that dynamically adjusts the clock speed of a packet classification hardware accelerator so that its processing capacity matches the processing needs of the network traffic. This keeps dynamic power consumption to a minimum. Contributions made towards the area of fixed string matching include a new algorithm that builds a state machine that is used to search for strings with the aid of default transition pointers. The use of default transition pointers keeps memory consumption low, allowing state machines capable of searching for thousands of strings to be small enough to fit in the on-chip memory of devices such as FPGAs. A hardware accelerator is also presented that uses these state machines to search through the payloads of packets for strings at core network line speeds.
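
    The idea of default transition pointers can be illustrated with a small software state machine in the spirit of Aho-Corasick: on a mismatch, a state falls back along a default (failure) pointer instead of storing a full transition table per state. The Python sketch below is illustrative only and does not reproduce the thesis's algorithm or its FPGA memory layout; the patterns are made up.

        from collections import deque

        def build_machine(patterns):
            """Trie of patterns plus default (failure) transitions and output sets."""
            goto, fail, out = [{}], [0], [set()]
            for p in patterns:                       # 1) build the pattern trie
                s = 0
                for ch in p:
                    if ch not in goto[s]:
                        goto.append({}); fail.append(0); out.append(set())
                        goto[s][ch] = len(goto) - 1
                    s = goto[s][ch]
                out[s].add(p)
            queue = deque(goto[0].values())
            while queue:                             # 2) breadth-first: set default pointers
                s = queue.popleft()
                for ch, t in goto[s].items():
                    queue.append(t)
                    f = fail[s]
                    while f and ch not in goto[f]:
                        f = fail[f]
                    fail[t] = goto[f].get(ch, 0)
                    out[t] |= out[fail[t]]
            return goto, fail, out

        def search(text, machine):
            goto, fail, out = machine
            s, hits = 0, []
            for i, ch in enumerate(text):
                while s and ch not in goto[s]:       # mismatch: follow default pointers
                    s = fail[s]
                s = goto[s].get(ch, 0)
                for p in out[s]:
                    hits.append((i - len(p) + 1, p))
            return hits

        m = build_machine(["he", "she", "hers"])
        print(search("xhersheyy", m))  # finds 'he', 'hers', 'she', 'he' with their start offsets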

    Melhoria da testabilidade de classes usando o conceito de autoteste (Improving the testability of classes using the self-test concept)

    Advisor: Eliane Martins. Dissertation (master's) - Universidade Estadual de Campinas, Instituto de Computação. Abstract: In general, the additional effort and cost required by the testing process depend on the testability of the software or component being tested. Testability is even more important for reusable components, because they need to be tested during their development, every time they are reused in a new context, and after each change (either in the class or in its environment). An approach comparing classes with hardware integrated circuits has been proposed for constructing classes with higher testability; the resulting class is called a built-in self-testing class because it contains built-in testing mechanisms. Class testability is further increased if reuse can be extended to the test cases as well. The incremental hierarchical technique takes the inheritance hierarchy into account, allowing test cases generated for a base class to be reused, in some cases, when testing its derived classes. This work presents a methodology for constructing built-in self-testing classes, as well as a prototype tool, ConCAT, that automates part of this methodology, thereby showing its feasibility. Master's degree in Computer Science.
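
    The built-in self-test analogy can be illustrated with a class that carries its own test mechanism. The Python sketch below is a conceptual example only; it is not the ConCAT methodology, and the class, its invariants and the test cases are made up.

        # A "built-in self-testing" class: the test mechanism ships inside the class itself,
        # by analogy with built-in self-test in integrated circuits.
        class BoundedCounter:
            def __init__(self, limit):
                self.limit = limit
                self.value = 0

            def increment(self):
                if self.value < self.limit:
                    self.value += 1
                return self.value

            def self_test(self):
                """Built-in test: exercise the class against its own invariants."""
                probe = BoundedCounter(limit=2)
                assert probe.increment() == 1, "first increment should reach 1"
                assert probe.increment() == 2, "second increment should reach the limit"
                assert probe.increment() == 2, "increments beyond the limit must saturate"
                return True

        # The component can be asked to test itself in any new reuse context:
        print(BoundedCounter(limit=5).self_test())  # True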

    Rethinking Software Network Data Planes in the Era of Microservices

    The abstract is provided in the attachment.

    Workshop Notes of the Seventh International Workshop "What can FCA do for Artificial Intelligence?"

    These are the proceedings of the seventh edition of the FCA4AI workshop (http://www.fca4ai.hse.ru/), co-located with the IJCAI 2019 conference in Macao (China). Formal Concept Analysis (FCA) is a mathematically well-founded theory aimed at classification and knowledge discovery that can be used for many purposes in Artificial Intelligence (AI). The objective of the FCA4AI workshop is to investigate two main issues: how FCA can support various AI activities (knowledge discovery, knowledge engineering, machine learning, data mining, information retrieval, recommendation, etc.), and how FCA can be extended in order to help AI researchers solve new and complex problems in their domains.
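
    For readers new to FCA, its core construction, the pair of derivation operators over a binary object/attribute context, can be sketched in a few lines. The example below is illustrative only and is not drawn from the proceedings; the toy context is made up.

        # Derivation operators of Formal Concept Analysis over a small object/attribute context,
        # used to check that a pair (extent, intent) is a formal concept.
        CONTEXT = {                       # object -> set of attributes it has
            "duck":   {"flies", "swims", "lays_eggs"},
            "goose":  {"flies", "swims", "lays_eggs"},
            "beaver": {"swims"},
        }

        def common_attributes(objects):   # derivation A -> A'
            sets = [CONTEXT[o] for o in objects]
            return set.intersection(*sets) if sets else set.union(*CONTEXT.values())

        def common_objects(attributes):   # derivation B -> B'
            return {o for o, attrs in CONTEXT.items() if attributes <= attrs}

        extent = {"duck", "goose"}
        intent = common_attributes(extent)        # {'flies', 'swims', 'lays_eggs'}
        print(intent)
        print(common_objects(intent) == extent)   # True: (extent, intent) is a formal concept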

    Doctor of Philosophy

    As the base of the software stack, system-level software is expected to provide efficient and scalable storage, communication, security and resource management functionalities. However, there are many computationally expensive functionalities at the system level, such as encryption, packet inspection, and error correction. All of these require substantial computing power. What's more, today's application workloads have entered gigabyte and terabyte scales, which demand even more computing power. To meet the rapidly increasing computing power demand at the system level, this dissertation proposes using parallel graphics processing units (GPUs) in system software. GPUs excel at parallel computing, and also have a much faster development trend in parallel performance than central processing units (CPUs). However, system-level software was originally designed to be latency-oriented, whereas GPUs are designed for long-running computation and large-scale data processing, which are throughput-oriented. This mismatch makes it difficult to fit system-level software to GPUs. This dissertation presents generic principles of system-level GPU computing developed during the process of creating our two general frameworks for integrating GPU computing in storage and network packet processing. The principles are generic design techniques and abstractions for dealing with common system-level GPU computing challenges. These principles have been evaluated in concrete cases, including storage and network packet processing applications that have been augmented with GPU computing. The significant performance improvement found in the evaluation shows the effectiveness and efficiency of the proposed techniques and abstractions. This dissertation also presents a literature survey of the relatively young system-level GPU computing area, to introduce the state of the art in both applications and techniques, and also their future potentials.
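
    One typical way to bridge the latency/throughput mismatch described above is to batch many small system-level requests into a larger unit of work before offloading it; treating batching as one of the dissertation's principles is an assumption here. The sketch below is illustrative only: gpu_checksum_batch is a stand-in Python function, not real GPU code and not one of the dissertation's frameworks.

        def gpu_checksum_batch(buffers):
            # Stand-in for a data-parallel kernel that processes many buffers in one launch.
            return [sum(buf) % 256 for buf in buffers]

        def process_requests(request_stream, batch_size=4):
            batch, results = [], []
            for buf in request_stream:
                batch.append(buf)
                if len(batch) == batch_size:      # amortize launch cost over the whole batch
                    results.extend(gpu_checksum_batch(batch))
                    batch.clear()
            if batch:                             # flush the final partial batch
                results.extend(gpu_checksum_batch(batch))
            return results

        print(process_requests([[1, 2, 3], [4, 5], [6], [7, 8], [9]]))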