34 research outputs found

    Design and Evaluation of Packet Classification Systems, Doctoral Dissertation, December 2006

    Although many algorithms and architectures have been proposed, the design of efficient packet classification systems remains a challenging problem. The diversity of filter specifications, the scale of filter sets, and the throughput requirements of high-speed networks all contribute to the difficulty. We need to review the algorithms from a high-level point of view in order to advance the study; this level of understanding can lead to significant performance improvements. In this dissertation, we evaluate several existing algorithms and present several new ones. Previous evaluation results for existing algorithms are not convincing because they were not obtained in a consistent way. To resolve this issue, an objective evaluation platform needs to be developed. We implement and evaluate several representative algorithms with uniform criteria. The source code and the evaluation results are both published on a website to provide the research community with a benchmark for impartial and thorough algorithm evaluations. We propose several new algorithms to deal with different variations of the packet classification problem: (1) the Shape Shifting Trie algorithm for longest prefix matching, used in IP lookups or as a building block for general packet classification algorithms; (2) the Fast Hash Table lookup algorithm, used for exact flow match; (3) the longest prefix matching algorithm using hash tables and tries, used in IP lookups or packet classification algorithms; (4) the 2D coarse-grained tuple-space search algorithm with controlled filter expansion, used for two-dimensional packet classification or as a building block for general packet classification algorithms; (5) the Adaptive Binary Cutting algorithm, used for general multi-dimensional packet classification. In addition to the algorithmic solutions, we also consider the TCAM hardware solution. In particular, we address the TCAM filter update problem for general packet classification and provide an efficient algorithm. Building upon previous work, these algorithms significantly improve the performance of packet classification systems and set a solid foundation for further study.
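
    The longest prefix matching that underlies several of the algorithms listed above can be illustrated with a minimal binary trie. This is a generic sketch for orientation only, not the dissertation's Shape Shifting Trie; all names are illustrative.

```python
class TrieNode:
    def __init__(self):
        self.children = {}    # bit ('0' or '1') -> child node
        self.next_hop = None  # non-None if a stored prefix ends here

class BinaryTrie:
    """Longest prefix match over bit-string prefixes (illustrative)."""
    def __init__(self):
        self.root = TrieNode()

    def insert(self, prefix, next_hop):
        node = self.root
        for bit in prefix:
            node = node.children.setdefault(bit, TrieNode())
        node.next_hop = next_hop

    def lookup(self, addr_bits):
        node, best = self.root, None
        for bit in addr_bits:
            if node.next_hop is not None:
                best = node.next_hop          # remember longest match so far
            node = node.children.get(bit)
            if node is None:
                break                         # no deeper prefix to try
        else:
            if node.next_hop is not None:
                best = node.next_hop          # the full path is itself a prefix
        return best

t = BinaryTrie()
t.insert("10", "A")        # shorter, less specific prefix
t.insert("1011", "B")      # longer, more specific prefix
print(t.lookup("101100"))  # "B": the longest matching prefix wins
```

    A real IP lookup walks address bits the same way; the engineering challenge the dissertation addresses is compressing and traversing this structure fast enough for line-rate forwarding.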

    An algorithmic approach to OpenFlow ruleset transformation

    In an ideal development cycle for an OpenFlow application, a developer designs a pipeline to suit their application's needs and installs rules to that pipeline. Their application will run on any OpenFlow switch, whether software- or hardware-based. A network operator deploying this application would assess their network's requirements, such as bandwidth, port density, and flow table size, and purchase OpenFlow hardware to meet them. In reality, this level of interoperability does not exist, as many OpenFlow switches are built on a fixed-function pipeline. Fixed-function pipelines limit the matches and actions available to rules depending on the table, but in doing so make more efficient use of expensive hardware resources such as TCAM. This thesis investigates improving OpenFlow device interoperability by developing a method to rewrite existing rulesets to new complex fixed-function pipelines. Additionally, this thesis developed the tools to assess and verify the interoperability and equivalence of OpenFlow rulesets and pipelines. This thesis developed a library and tools for working with descriptions of fixed-function pipelines, specifically the Table Type Pattern description. This library provides a method to check whether an existing ruleset is compatible with a new pipeline. Additionally, this thesis designed and implemented a pragmatic approach to compare whether the forwarding behaviour of two OpenFlow 1.3 rulesets is equivalent. Equivalence checking provides a tool to verify that an OpenFlow application rewritten to program a new pipeline maintains the correct forwarding behaviour. Finally, this thesis investigates the problem of algorithmically rewriting an existing OpenFlow ruleset, programmed by an existing application, to fit a different fixed-function pipeline. Solving this problem allows an OpenFlow application to be written once and run on any OpenFlow switch. This research aimed to solve this problem in a comprehensive manner that did not rely on the target pipeline supporting features such as OpenFlow metadata. This thesis developed and implemented a general method to convert an OpenFlow 1.3 ruleset to a complex, constrained fixed-function pipeline.
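
    The core compatibility question described above can be sketched in a few lines: a fixed-function table supports only certain match fields, so a rule fits a table only if every field it matches on is supported there. The table and field names below are hypothetical, not the thesis's Table Type Pattern library.

```python
# Hypothetical fixed-function tables and the match fields each supports.
TABLES = {
    "mac_table": {"eth_dst", "vlan_vid"},
    "acl_table": {"eth_type", "ipv4_src", "ipv4_dst", "ip_proto", "tcp_dst"},
}

def compatible_tables(rule_fields, tables=TABLES):
    """Return the tables whose supported match set covers the rule's fields."""
    return [name for name, supported in tables.items()
            if rule_fields <= supported]

acl_rule = {"ipv4_dst", "tcp_dst"}
print(compatible_tables(acl_rule))  # ['acl_table']
```

    A full transformation must also respect per-table action constraints and table ordering, which is what makes rewriting a whole ruleset to a constrained pipeline a hard algorithmic problem.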

    A Practical Hardware Implementation of Systemic Computation

    It is widely accepted that natural computation, such as brain computation, is far superior to typical computational approaches addressing tasks such as learning and parallel processing. As conventional silicon-based technologies are about to reach their physical limits, researchers have drawn inspiration from nature to establish new computational paradigms. One such newly conceived paradigm is Systemic Computation (SC). SC is a bio-inspired model of computation: it incorporates natural characteristics and defines a massively parallel non-von Neumann computer architecture that can model natural systems efficiently. This thesis investigates the viability and utility of a Systemic Computation hardware implementation, since prior software-based approaches have proved inadequate in terms of performance and flexibility. This is achieved by addressing three main research challenges regarding the level of support for the natural properties of SC, the design of its implied architecture, and methods to make the implementation practical and efficient. Various hardware-based approaches to Natural Computation are reviewed and their compatibility and suitability with respect to the SC paradigm are investigated. FPGAs are identified as the most appropriate implementation platform through critical evaluation, and the first prototype Hardware Architecture of Systemic Computation (HAoS) is presented. HAoS is a novel custom digital design, which takes advantage of the inbuilt parallelism of an FPGA and the highly efficient matching capability of a Ternary Content Addressable Memory. It provides basic processing capabilities in order to minimize time-demanding data transfers, while the optional use of a CPU provides high-level processing support. It is optimized and extended to a practical hardware platform accompanied by a software framework to provide an efficient SC programming solution. The suggested platform is evaluated using three bio-inspired models, and analysis shows that it satisfies the research challenges and provides an effective solution in terms of the efficiency-versus-flexibility trade-off.
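
    The Ternary Content Addressable Memory matching that HAoS exploits can be modelled in software: each entry carries a value and a care-mask, bits where the mask is 0 are "don't care", and the highest-priority (first) matching entry wins. This is a generic illustration of TCAM semantics, not the HAoS design.

```python
def tcam_lookup(entries, key):
    """entries: list of (value, mask, result) in priority order.
    A key matches an entry if it equals the value on all cared-for bits."""
    for value, mask, result in entries:
        if (key & mask) == (value & mask):
            return result
    return None  # no entry matched

entries = [
    (0b1010, 0b1111, "exact 1010"),  # all four bits must match
    (0b1000, 0b1100, "10**"),        # only the top two bits are cared for
]
print(tcam_lookup(entries, 0b1011))  # "10**": the wildcard entry matches
```

    Hardware TCAMs evaluate all entries in parallel in one cycle, which is what makes them attractive both for SC's systemic matching and for packet classification.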

    Fast Packet Processing on High Performance Architectures

    The rapid growth of the Internet and the fast emergence of new network applications have brought great challenges and complex issues in deploying high-speed, QoS-guaranteed IP networks. For this reason, packet classification and network intrusion detection have assumed a key role in modern communication networks in order to provide QoS and security. In this thesis we describe a number of the most advanced solutions to these tasks. We introduce the NetFPGA and Network Processors as reference platforms both for the design and the implementation of the solutions and algorithms described in this thesis. The rise in link capacity reduces the time available to network devices for packet processing. For this reason, we show different solutions which, either by heuristics and randomization or by smart construction of state machines, allow IP lookup, packet classification, and deep packet inspection to be fast in real devices based on high-speed platforms such as the NetFPGA or Network Processors.
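
    The simplest of the fast-path operations mentioned above is exact flow matching: hash the packet's 5-tuple and look it up in a flow table, falling back to a slow path on a miss. A minimal sketch, with illustrative field names and actions:

```python
flow_table = {}

def flow_key(src_ip, dst_ip, proto, src_port, dst_port):
    """The classic 5-tuple that identifies a flow."""
    return (src_ip, dst_ip, proto, src_port, dst_port)

# Install an action for one TCP flow (proto 6, dst port 80).
flow_table[flow_key("10.0.0.1", "10.0.0.2", 6, 1234, 80)] = "forward:port2"

key = flow_key("10.0.0.1", "10.0.0.2", 6, 1234, 80)
print(flow_table.get(key, "slow_path"))  # "forward:port2"
```

    Hardware platforms accelerate exactly this kind of lookup; the harder cases the thesis addresses, wildcard classification and deep packet inspection, need tries, decision trees, or state machines instead of a single hash probe.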

    Software-Defined Networking: A Comprehensive Survey

    The Internet has led to the creation of a digital society, where (almost) everything is connected and is accessible from anywhere. However, despite their widespread adoption, traditional IP networks are complex and very hard to manage. It is difficult both to configure the network according to predefined policies and to reconfigure it to respond to faults, load, and changes. To make matters even more difficult, current networks are also vertically integrated: the control and data planes are bundled together. Software-defined networking (SDN) is an emerging paradigm that promises to change this state of affairs by breaking vertical integration, separating the network's control logic from the underlying routers and switches, promoting (logical) centralization of network control, and introducing the ability to program the network. The separation of concerns introduced between the definition of network policies, their implementation in switching hardware, and the forwarding of traffic is key to the desired flexibility: by breaking the network control problem into tractable pieces, SDN makes it easier to create and introduce new abstractions in networking, simplifying network management and facilitating network evolution. In this paper, we present a comprehensive survey on SDN. We start by introducing the motivation for SDN, explain its main concepts and how it differs from traditional networking, its roots, and the standardization activities regarding this novel paradigm. Next, we present the key building blocks of an SDN infrastructure using a bottom-up, layered approach. We provide an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications. We also look at cross-layer problems such as debugging and troubleshooting. In an effort to anticipate the future evolution of this new paradigm, we discuss the main ongoing research efforts and challenges of SDN. In particular, we address the design of switches and control platforms (with a focus on aspects such as resiliency, scalability, performance, security, and dependability) as well as new opportunities for carrier transport networks and cloud providers. Last but not least, we analyze the position of SDN as a key enabler of a software-defined environment.

    The Whitworthian 2005-2006

    The Whitworthian student newspaper, September 2005-May 2006.

    Traditional herbal medicine in Mesoamerica: toward its evidence base for improving universal health coverage

    The quality of health care in Mesoamerica is influenced by its rich cultural diversity and characterized by social inequalities. Indigenous and rural communities in particular confront diverse barriers to accessing formal health services, leading to often conflicting plurimedical systems. Fostering integrative medicine is a fundamental pillar for achieving universal health coverage (UHC) for marginalized populations. Recent developments toward health sovereignty in the region are concerned with assessing the role of traditional medicines, and particularly herbal medicines, in fostering accessible and culturally pertinent healthcare provision models. In Mesoamerica, as in most regions of the world, a wealth of information on traditional and complementary medicine has been recorded. Yet these data are often scattered, making it difficult for policy makers to regulate and integrate traditionally used botanical products into primary health care. This critical review is based on a quantitative analysis of 28 survey papers focusing on the traditional use of botanical drugs in Mesoamerica, used for the compilation of the "Mesoamerican Medicinal Plant Database" (MAMPDB), which includes a total of 12,537 use-records for 2,188 plant taxa. Our approach presents a fundamental step toward UHC by providing a pharmacological and toxicological review of the cross-culturally salient plant taxa and associated botanical drugs used in traditional medicine in Mesoamerica. Especially for native herbal drugs, data about safety and effectiveness are limited. Commonly used, cross-culturally salient botanical drugs that are considered safe but for which data on effectiveness are lacking constitute ideal candidates for treatment outcome studies.

    A Survey on Security and Privacy of 5G Technologies: Potential Solutions, Recent Advancements, and Future Directions

    Security has become a primary concern in many telecommunications industries today, as security risks can have high consequences. In particular, as the core and enabling technologies become associated with the 5G network, confidential information will move across all layers in future wireless systems. Several incidents have revealed that the hazard posed by an infected wireless network not only affects security and privacy, but also impedes the complex dynamics of the communications ecosystem. Consequently, the complexity and strength of security attacks have increased in the recent past, making the detection or prevention of sabotage a global challenge. From the security and privacy perspectives, this paper presents comprehensive detail on the core and enabling technologies used to build the 5G security model: network softwarization security, PHY (physical) layer security, and 5G privacy concerns, among others. Additionally, the paper includes a discussion of security monitoring and management of 5G networks. This paper also evaluates the related security measures and standards of core 5G technologies by resorting to different standardization bodies, and provides a brief overview of 5G standardization security forces. Furthermore, the key projects of international significance, in line with the security concerns of 5G and beyond, are also presented. Finally, a section on future directions and open challenges has been included to encourage future research.