11 research outputs found

    Towards a Packet Classification Benchmark

    Packet classification is the enabling technology for next-generation network services and often the primary bottleneck in high-performance routers. Due to the importance and complexity of the problem, a myriad of algorithms and resulting implementations exist. The performance and capacity of many algorithms and classification devices, including TCAMs, depend upon properties of the filter set and query patterns. Unlike in the field of computer architecture, where microprocessors are evaluated with standard benchmarks, there are no standard performance evaluation tools or techniques available to evaluate packet classification algorithms and products. Network service providers are reluctant to distribute copies of real filter databases for security and confidentiality reasons, hence realistic test vectors are a scarce commodity. The small subset of the research community that obtains real databases either limits performance evaluation to that small sample space or employs ad hoc methods of modifying those databases. We present a tool for creating synthetic filter databases that retain the characteristics of a seed database and provide systematic mechanisms for varying the number and composition of the filters. We propose a benchmarking methodology based on this tool that provides a mechanism for evaluating packet classification performance on a uniform scale. We seek to initiate a broader discussion within the community that will result in a standard packet classification benchmark.
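    To make concrete what these filter databases contain, the following is a minimal sketch of five-tuple packet classification by linear search, the naive baseline that the algorithms surveyed above aim to beat. The filter format and field names here are illustrative assumptions, not the format used by the tool in the abstract.

```python
import ipaddress

class Filter:
    """One classification rule; illustrative fields only."""
    def __init__(self, src, dst, proto, dport_lo, dport_hi, action):
        self.src = ipaddress.ip_network(src)      # source address prefix
        self.dst = ipaddress.ip_network(dst)      # destination address prefix
        self.proto = proto                        # IP protocol, None = wildcard
        self.dport = (dport_lo, dport_hi)         # destination port range
        self.action = action

    def matches(self, pkt):
        return (ipaddress.ip_address(pkt["src"]) in self.src
                and ipaddress.ip_address(pkt["dst"]) in self.dst
                and self.proto in (None, pkt["proto"])
                and self.dport[0] <= pkt["dport"] <= self.dport[1])

def classify(filters, pkt):
    """Linear search: return the action of the first (highest-priority) match."""
    for f in filters:
        if f.matches(pkt):
            return f.action
    return "default"

filters = [
    Filter("10.0.0.0/8", "0.0.0.0/0", 6, 80, 80, "permit"),    # TCP to port 80
    Filter("0.0.0.0/0", "0.0.0.0/0", None, 0, 65535, "deny"),  # catch-all
]
print(classify(filters, {"src": "10.1.2.3", "dst": "192.0.2.1",
                         "proto": 6, "dport": 80}))            # -> permit
```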

    ClassBench: A Packet Classification Benchmark

    Due to the importance and complexity of the packet classification problem, a myriad of algorithms and resulting implementations exist. The performance and capacity of many algorithms and classification devices, including TCAMs, depend upon properties of the filter set and query patterns. Unlike in the field of computer architecture, where microprocessors are evaluated with standard benchmarks, there are no standard performance evaluation tools or techniques available to evaluate packet classification algorithms and products. Network service providers are reluctant to distribute copies of real filter sets for security and confidentiality reasons, hence realistic test vectors are a scarce commodity. The small subset of the research community that obtains real filter sets either limits performance evaluation to that small sample space or employs ad hoc methods of modifying those filter sets. In response to this problem, we present ClassBench, a suite of tools for benchmarking packet classification algorithms and devices. ClassBench includes a Filter Set Generator that produces synthetic filter sets that accurately model the characteristics of real filter sets. Along with varying the size of the filter sets, we provide high-level control over the composition of the filters in the resulting filter set. The tool suite also includes a Trace Generator that produces a sequence of packet headers to exercise the synthetic filter set. Along with specifying the relative size of the trace, we provide a simple mechanism for controlling locality of reference in the trace. While we have already found ClassBench to be very useful in our own research, we seek to initiate a broader discussion and solicit input from the community to guide the refinement of the tools and the codification of a formal benchmarking methodology.
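    The locality-of-reference control mentioned above can be illustrated with a toy model: with some probability, the next header in the trace revisits a recently emitted flow rather than starting a new one. This is only a sketch of the general idea; the probability model and the `locality` and `window` parameters are assumptions for illustration, not ClassBench's actual mechanism.

```python
import random

def generate_trace(headers, length, locality=0.7, window=16):
    """Emit a trace of `length` headers; with probability `locality`,
    repeat one of the last `window` headers instead of drawing a fresh
    one, producing temporal locality of reference. (Toy model only.)"""
    trace, recent = [], []
    for _ in range(length):
        if recent and random.random() < locality:
            h = random.choice(recent)      # revisit a recent flow
        else:
            h = random.choice(headers)     # start a "new" flow
        trace.append(h)
        recent.append(h)
        recent = recent[-window:]          # sliding window of recency
    return trace

headers = [f"hdr{i}" for i in range(1000)]
trace = generate_trace(headers, 10_000, locality=0.8)
```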

    Comparison of routing software in Linux

    The Linux operating system is becoming more and more popular. Network connections are getting faster and their number grows all the time. Today's networks need routing so that messages can be forwarded through the Internet towards their destinations, and Linux systems can act as routers. This thesis covers both the Linux operating system and routing functionality. Our routing software is based on the FreeBSD operating system, and the thesis studies how well this software works on Linux. The first step is to port the software to Linux. We then examine its functionality on Linux by comparing the routing daemon with two commercial routing solutions and one open-source solution. The comparison consists of performance and software complexity measurements. The results of these measurements not only show that the software can run on Linux, but also provide further insight into how different routing software packages perform routing tasks. The output of the software complexity measurements shows the quality of the source code in the compared routing solutions; software complexity relates to how easy the software is to maintain.
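    To make concrete the core routing task the compared daemons ultimately perform, the following is a minimal longest-prefix-match lookup sketch. The linear scan and the names (`fib`, `lpm_lookup`) are illustrative assumptions; production routers use trie-based or hardware lookup structures rather than this approach.

```python
import ipaddress

def lpm_lookup(fib, dst):
    """Longest-prefix match: the forwarding decision a routing daemon
    programs into the FIB. Linear scan for clarity only."""
    addr = ipaddress.ip_address(dst)
    best = None
    for prefix, next_hop in fib:
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, next_hop)        # keep the most specific match
    return best[1] if best else None

fib = [("0.0.0.0/0", "gw-default"),
       ("10.0.0.0/8", "gw-a"),
       ("10.1.0.0/16", "gw-b")]
print(lpm_lookup(fib, "10.1.2.3"))        # -> gw-b (most specific wins)
```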

    Terminology for Forwarding Information Base (FIB) based Router Performance

    No full text

    Despliegue de una infraestructura de red definida por software

    Since the beginnings of computing, resources of all kinds (compute, storage, and network) have been kept in physical hardware, intentionally separated from one another. It was only after the introduction (and demand) of low-priced computing power, storage, and networking in data-center environments that organizations were forced to pool these resources together. Thus the first hypervisors, or virtual machine monitors, appeared. This technology allows a host running one operating system to execute one or more guests with the same or different operating systems, saving on costs. As the Internet reached more and more homes, demand increased exponentially, and IT departments had to devise ways to meet their needs. Companies like Amazon, whose growth followed an exponential curve, saw the resources they required double every six to nine months. Their strategy was therefore to buy far more capacity than they needed and rent the surplus temporarily to other companies, giving rise to the first multi-tenant data centers. This in turn created a new problem: how to separate thousands of potential tenants, whose resources were spread across different physical machines, so that only machines belonging to the same company could communicate with each other.

    Improving the performance of software-defined networks using dynamic flow installation and management techniques

    As computer networks evolve, they become more complex, introducing several challenges in the areas of performance and management. Such problems can lead to stagnation in network innovation. The Software-Defined Networking (SDN) framework is one of the best candidates for improving and revolutionising networking, because it gives network administrators full control to implement new management and performance optimisation techniques. This thesis examines performance issues faced in SDN due to the introduction of the SDN controller, including the extra delay caused by the round-trip time between the switch and the controller, as well as the fact that some packets arrive at the destination out of order. We propose a novel dynamic flow installation and management algorithm (OFPE) using the SDN protocol OpenFlow, which keeps the controller's CPU from being overloaded and allows it to dynamically add and adjust flow table rules to reduce packet delay and out-of-order packets. In addition, we propose OFPEX, an extension to the OFPE algorithm that includes techniques for managing multi-switch environments as well as methods that use packet inter-arrival times to categorise and serve packet flows. Such techniques provide topology awareness, helping the controller install flow table rules that form optimal routes for high-priority flows, thus increasing network performance. To evaluate the proposed algorithms, both hardware testbed and emulation experiments were conducted. The performance results indicate that the OFPE algorithm achieves a significant enhancement in performance in the form of reduced delay by up to 92.56% (depending on the scenario), reduced packet loss by up to 55.32%, and reduced out-of-order packets by up to 69.44%. Furthermore, we propose a novel placement algorithm for distributed Mininet implementations that uses weights to distribute the experiment components across the available distributed machines (see the sketch below). The proposed algorithm uses static code analysis to examine the experimental code, and it measures the capabilities of the physical components in order to create a weights table, which is then used to distribute the experiment components properly. The evaluation of the proposed algorithm indicates reductions in delay and packet loss of up to 65.51% and 86.35% respectively, as well as a decrease in the standard deviation of CPU usage of up to 88.63%. These results indicate that the proposed algorithm distributes the experiment components evenly across the available resources. Finally, we propose a series of benchmarking tests that can be used to rate all the available SDN experimental platforms. These tests allow the selection of the appropriate experimental platform according to the scenario's needs, and they indicate the resources needed by each platform.
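    The following is a minimal sketch of the kind of weight-based placement the abstract describes: greedily assign the heaviest experiment component to the machine with the lowest capacity-normalised load, so that load spreads evenly. The greedy heuristic and all names here are assumptions for illustration, not the thesis's actual algorithm.

```python
import heapq

def place(components, machines):
    """Assign each component (name -> weight, e.g. from static code
    analysis) to a machine (name -> relative capacity), heaviest first,
    always choosing the currently least-loaded machine."""
    heap = [(0.0, name) for name in machines]   # (normalised load, machine)
    heapq.heapify(heap)
    assignment = {}
    for comp, weight in sorted(components.items(), key=lambda kv: -kv[1]):
        load, name = heapq.heappop(heap)
        assignment[comp] = name
        heapq.heappush(heap, (load + weight / machines[name], name))
    return assignment

components = {"switch1": 3.0, "host1": 1.0, "host2": 1.0, "ctrl": 5.0}
machines = {"pm1": 2.0, "pm2": 1.0}             # relative CPU capacities
print(place(components, machines))
```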