
    Efficient binary cutting packet classification

    Packet classification is the process of distributing packets into 'flows' in an Internet router. A router processes all packets that belong to predefined rule sets in a similar manner and classifies them to decide which services each packet should receive. Classification plays an important role in both edge and core routers in providing advanced network services such as quality of service, firewalls, and intrusion detection. These services require the ability to categorize and isolate packet traffic into different flows for proper processing. Packet classification remains a classical problem, even though many researchers have worked on it. Existing algorithms such as HyperCuts, boundary cutting, and HiCuts achieve efficient performance by representing the rules of a classifier geometrically and searching for the geometric subspace to which each input packet belongs. Fixed interval-based cutting that does not relate to the actual space each rule covers is ineffective and results in a huge storage requirement, and the memory consumption of these algorithms remains high when high throughput is required. Hence, in this paper we propose a new splitting criterion that is more memory- and time-efficient than the techniques mentioned above. Our approach, Adaptive Binary Cutting (ABC), produces a set of different-sized cuts at each decision step, with the goal of balancing the distribution of filters and reducing the filter duplication effect; no equal-sized cut is made at a decision node. The proposed algorithm uses stronger and more straightforward criteria for decision tree construction. Experimental results show the effectiveness of the proposed algorithm compared with existing algorithms in terms of parameters such as time and memory.
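
    The abstract describes cutting-based decision trees only at a high level. The Python sketch below is a minimal, hedged illustration of that general family (HiCuts-style recursive cutting with a simple binary split), not the ABC algorithm itself: the Rule and Node classes, the leaf-size threshold, and the "always cut the widest field in half" policy are assumptions made for the example.

```python
# Illustrative cutting-based classification tree (HiCuts-style), simplified:
# the space is recursively cut in two along the widest field until each leaf
# holds only a few rules, which are then searched linearly.

class Rule:
    def __init__(self, name, ranges):
        self.name = name        # e.g. "deny-telnet"
        self.ranges = ranges    # per-field (lo, hi) tuples

class Node:
    def __init__(self, rules, space):
        self.rules = rules      # rules overlapping this subspace
        self.space = space      # per-field (lo, hi) of the subspace
        self.children = []
        self.split = None       # (field, cut_point) once the node is split

def overlaps(rule, space):
    return all(r_lo <= s_hi and s_lo <= r_hi
               for (r_lo, r_hi), (s_lo, s_hi) in zip(rule.ranges, space))

def build(node, leaf_size=2):
    if len(node.rules) <= leaf_size:
        return node
    field = max(range(len(node.space)),
                key=lambda f: node.space[f][1] - node.space[f][0])
    lo, hi = node.space[field]
    if hi - lo < 1:
        return node             # cannot cut any further
    mid = (lo + hi) // 2
    node.split = (field, mid)
    for sub in ((lo, mid), (mid + 1, hi)):
        space = list(node.space)
        space[field] = sub
        child_rules = [r for r in node.rules if overlaps(r, space)]
        node.children.append(build(Node(child_rules, space), leaf_size))
    return node

def classify(node, packet):
    while node.split is not None:
        field, mid = node.split
        node = node.children[0] if packet[field] <= mid else node.children[1]
    return next((r for r in node.rules
                 if all(lo <= v <= hi for v, (lo, hi) in zip(packet, r.ranges))), None)

rules = [Rule("r1", [(0, 127), (80, 80)]),
         Rule("r2", [(128, 255), (0, 1023)]),
         Rule("r3", [(0, 255), (53, 53)])]
tree = build(Node(rules, [(0, 255), (0, 1023)]))
print(classify(tree, (10, 80)).name)   # -> r1
```

    Real classifiers add heuristics for choosing the cut field and the number of cuts per node; the paper's contribution is precisely an unequal-sized splitting criterion that balances the filters and limits duplication.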

    Packet Filtering Module For PFQ Packet Capturing Engine.

    The evolution of commodity hardware is pushing parallelism forward as the key factor that can allow software to attain hardware-class performance while still retaining its advantages. On one side, commodity CPUs provide more and more cores (the next-generation Intel Xeon E7500 CPUs will soon make 10-core processors a commodity product), with a complex cache hierarchy that makes careful data placement crucial to good performance. On the other side, server NICs are adapting to these trends by increasing their own level of parallelism. While traditional 1 Gbps NICs exchanged data with the CPU through a single ring of shared memory buffers, modern 10 Gbps cards support multiple queues: multiple cores can therefore receive and transmit packets in parallel. In particular, incoming packets can be demultiplexed across CPUs based on a hash function (the so-called RSS technology) or on the MAC address (the VMDq technology, designed for servers hosting multiple virtual machines). The Linux kernel has recently begun to support these technologies. Although there is a lot of network monitoring software, most of it has not yet been designed with high parallelism in mind. Therefore a novel packet capturing engine, named PFQ, was designed; it allows efficient capturing and in-kernel aggregation, as well as connection-aware load balancing. The engine is based on a novel lockless queue and allows parallel packet capturing, letting the user-space application arbitrarily define its degree of parallelism. Both legacy applications and natively parallel ones can therefore benefit from the capturing engine. In addition, PFQ outperforms its competitors both in terms of captured packets and CPU consumption. In this thesis, a new packet filtering block is designed, implemented, and added to the existing PFQ capture engine; it drops unnecessary packets before they are copied into kernel space and thus considerably improves the overall performance of the engine. Because network monitors often want only a small subset of network traffic, a dramatic performance gain is realized by filtering out unwanted packets in interrupt context.
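
    PFQ itself is an in-kernel engine written in C; the short Python sketch below only models the behavior described above, namely an RSS-like hash spreading flows across per-core queues and a filtering predicate that discards unwanted packets before they are ever enqueued. The Packet tuple, the CRC-based hash, and the "TCP to port 80 only" policy are illustrative assumptions, not part of PFQ.

```python
# Toy model of RSS-style demultiplexing plus early filtering: unwanted packets
# are dropped before any copy into a capture queue. User-space illustration
# only; the real PFQ filtering block runs in kernel/interrupt context.
from collections import namedtuple
import zlib

Packet = namedtuple("Packet", "src dst proto sport dport payload")

NUM_QUEUES = 4
queues = [[] for _ in range(NUM_QUEUES)]        # one queue per "core"

def rss_hash(pkt):
    """Hash the 5-tuple so all packets of a flow land on the same queue."""
    key = f"{pkt.src}|{pkt.dst}|{pkt.proto}|{pkt.sport}|{pkt.dport}".encode()
    return zlib.crc32(key) % NUM_QUEUES

def wanted(pkt):
    """Example filtering policy: the monitor only cares about TCP to port 80."""
    return pkt.proto == "tcp" and pkt.dport == 80

def receive(pkt):
    if not wanted(pkt):
        return False                            # dropped early, never enqueued
    queues[rss_hash(pkt)].append(pkt)           # delivered to a per-core queue
    return True

receive(Packet("10.0.0.1", "10.0.0.2", "tcp", 1234, 80, b"GET /"))
receive(Packet("10.0.0.3", "10.0.0.2", "udp", 5353, 53, b"dns"))   # filtered out
print([len(q) for q in queues])
```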

    Low Latency Stochastic Filtering Software Firewall Architecture

    Firewalls are an integral part of network security. They are pervasive throughout networks and can be found in mobile phones, workstations, servers, switches, routers, and standalone network devices. Their primary responsibility is to track and discard unauthorized network traffic, and they may be implemented in anything from costly special-purpose hardware to flexible, inexpensive software running on commodity hardware. The most basic action of a firewall is to match packets against a set of rules in an Access Control List (ACL) to determine whether they should be allowed or denied access to a network or resource. By design, traditional firewalls must sequentially search through the ACL table, leading to increasing latencies as the number of entries in the table increases. This is particularly true for software firewalls implemented on commodity server hardware. Reducing latency in software firewalls may enable them to replace hardware firewalls in certain applications. In this thesis, we propose a software firewall architecture that removes the sequential ACL lookup from the critical path and thus decreases the per-packet latency in the common case. To accomplish this we implement a Bloom filter-based, stochastic pre-classification stage, enabling the bifurcation of the predicted-good and predicted-bad packet code paths and greatly improving performance. Our proposed architecture improves firewall performance by 67% to 92% under anonymized trace-based workloads from CAIDA servers. While our approach has the possibility of incorrectly classifying a small subset of bad packets as good, we show that these holes are neither predictable nor permanent, leading to a vanishingly small probability of firewall penetration.
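
    As an illustration of the bifurcated design described above, the hedged Python sketch below pre-classifies packets with a small Bloom filter and falls back to a sequential ACL scan only for packets the filter flags as suspect. The filter size, hash construction, and the two-entry ACL are assumptions made for the example, not the thesis implementation.

```python
# Bloom-filter pre-classification in front of a sequential ACL: packets not
# flagged by the filter take the fast path; flagged packets take the slow path.
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions per key over an m-bit array."""
    def __init__(self, m_bits=1 << 16, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, key):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, key):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))

def matches(ip, prefix):
    """Does dotted-quad `ip` fall inside `prefix` written as a.b.c.d/len?"""
    net, plen = prefix.split("/")
    if plen == "0":
        return True
    to_int = lambda a: int.from_bytes(bytes(map(int, a.split("."))), "big")
    shift = 32 - int(plen)
    return (to_int(ip) >> shift) == (to_int(net) >> shift)

# ACL: ordered (prefix, action) pairs; the slow path scans it sequentially.
acl = [("10.0.0.0/8", "deny"), ("0.0.0.0/0", "allow")]

suspects = BloomFilter()
suspects.add("10.1.2.3")              # pre-classified as "predicted bad"

def handle(src_ip):
    if src_ip not in suspects:        # common case: skip the ACL entirely
        return "allow (fast path)"
    for prefix, action in acl:        # rare case: full sequential lookup
        if matches(src_ip, prefix):
            return f"{action} (slow path)"
    return "allow (slow path)"

print(handle("8.8.8.8"))              # almost certainly the fast path
print(handle("10.1.2.3"))             # deny (slow path)
```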

    Two Challenges of Software Networks: Name-Based Forwarding and Table Verification

    The Internet changed the lives of network users: not only does it affect users' habits, but it is also increasingly shaped by network users' behavior. Several new services have been introduced during the past decades (e.g. file sharing, video streaming, cloud computing) to meet users' expectations. As a consequence, although the Internet infrastructure provides a good best-effort service for exchanging information in a point-to-point fashion, this is not the principal need that today's users express. Current networks require some major architectural changes in order to follow the upcoming requirements, but the experience of the past decades shows that bringing new features to the existing infrastructure may be slow. In this thesis work, we identify two main aspects of the Internet evolution: a "behavioral" aspect, which refers to a change in the way users interact with the network, and a "structural" aspect, related to the evolution problem from an architectural point of view. The behavioral perspective states that there is a mismatch between the usage of the network and the actual functions it provides. While network devices implement the simple primitives of sending and receiving generic packets, users are really interested in different primitives, such as retrieving or consuming content. The structural perspective suggests that the problem of the slow evolution of the Internet infrastructure lies in its architectural design, which has been shown to be hard to upgrade. On the one hand, to address the new network usage, the research community proposed the Named-Data Networking (NDN) paradigm, which brings content-based functionalities to network devices. On the other hand, Software-Defined Networking (SDN) can be adopted to simplify the architectural evolution and shorten the upgrade time thanks to its centralized software control plane, at the cost of a higher network complexity that can easily introduce bugs. SDN verification is a novel research direction aiming to check the consistency and safety of network configurations by providing formal or empirical validation. The thesis consists of two parts. In the first part, we focus on the behavioral aspect by presenting the design and evaluation of "Caesar", a content router that advances the state of the art by implementing content-based functionalities that can coexist with real network environments. In the second part, we target network misconfiguration diagnosis, and we present a framework for the analysis of the network topology and forwarding tables, which can be used to detect the presence of a loop in real time and in real network environments.
    This thesis addresses problems related to two major aspects of the evolution of the Internet: the "behavioral" aspect, corresponding to the new interactions between users and the network, and the "structural" aspect, related to the changes of the Internet from an architectural point of view. The manuscript opens with an introductory chapter that outlines the main research directions of this thesis work, followed by a chapter describing the state of the art on the two aspects mentioned above. Among the solutions proposed by the scientific community to adapt to the evolution of the Internet, two new network paradigms are described in particular: Information-Centric Networking (ICN) and Software-Defined Networking (SDN). The thesis continues with the proposal of "Caesar", a network device, inspired by ICN, capable of managing content distribution using routing primitives based on data names rather than server addresses. Caesar is presented in two chapters, which describe the architecture and two of its main modules: forwarding and request traceability management. The remainder of the manuscript describes a mathematical tool for the efficient detection of loops in an SDN network from a theoretical point of view. The improvements of the proposed algorithm over the state of the art are discussed. The thesis concludes with a summary of the main results obtained and a presentation of ongoing and future work.
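
    As a rough illustration of the loop-detection problem mentioned in both summaries, the sketch below follows per-destination next hops through forwarding tables and reports a cycle if a switch is revisited. The topology, table contents, and function name are invented for the example; the thesis's actual framework is a more general mathematical tool.

```python
# Minimal illustration of per-destination loop detection: each switch's
# forwarding table gives one next hop for the destination; following next hops
# must reach the destination without revisiting a node.

def find_loop(fib, start, destination):
    """Follow next hops from `start`; return the loop as a list if one exists."""
    seen, path = set(), []
    node = start
    while node != destination:
        if node in seen:
            return path[path.index(node):]      # forwarding loop detected
        seen.add(node)
        path.append(node)
        node = fib.get(node)
        if node is None:
            return None                         # traffic is dropped, not looped
    return None                                 # destination reached: no loop

# Forwarding tables for destination "D": switch -> next hop (example data).
fib_for_D = {"A": "B", "B": "C", "C": "A"}      # misconfigured: A -> B -> C -> A
print(find_loop(fib_for_D, "A", "D"))           # ['A', 'B', 'C']
```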

    Retouched Bloom Filters: Allowing Networked Applications to Flexibly Trade Off False Positives Against False Negatives

    Where distributed agents must share voluminous set membership information, Bloom filters provide a compact, though lossy, way for them to do so. Numerous recent networking papers have examined the trade-offs between the bandwidth consumed by the transmission of Bloom filters and the error rate, which takes the form of false positives and which rises the more the filters are compressed. In this paper, we introduce the retouched Bloom filter (RBF), an extension that makes the Bloom filter more flexible by permitting the removal of selected false positives at the expense of generating random false negatives. We analytically show that RBFs created through a random process maintain an overall error rate, expressed as a combination of the false positive rate and the false negative rate, that is equal to the false positive rate of the corresponding Bloom filters. We further provide some simple heuristics and improved algorithms that, when creating RBFs, decrease the false positive rate by more than the corresponding increase in the false negative rate. Finally, we demonstrate the advantages of an RBF over a Bloom filter in a distributed network topology measurement application, where information about large stop sets must be shared among route tracing monitors. Comment: This is a new version of the technical report with improved algorithms and a theoretical analysis of the algorithms.
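
    The core "retouching" operation can be illustrated in a few lines of Python: clearing a bit that a troublesome false positive maps to removes that false positive, while possibly converting some inserted keys into false negatives. The filter parameters and the simple clear-first-set-bit policy below are assumptions; the paper's improved selective-clearing algorithms are more refined.

```python
# Sketch of retouching a Bloom filter: remove a known false positive by
# clearing one of the bits it hashes to, accepting possible false negatives.
import hashlib

M, K = 1 << 12, 3
bits = bytearray(M)               # one byte per bit, for simplicity

def positions(key):
    return [int.from_bytes(hashlib.sha256(f"{i}:{key}".encode()).digest()[:8],
                           "big") % M for i in range(K)]

def add(key):
    for p in positions(key):
        bits[p] = 1

def query(key):
    return all(bits[p] for p in positions(key))

def retouch(false_positive_key):
    """Clear one set bit of a known false positive so it no longer matches."""
    for p in positions(false_positive_key):
        if bits[p]:
            bits[p] = 0
            return p              # keys sharing this bit may become false negatives

for key in ("10.0.0.1", "10.0.0.2", "10.0.0.3"):
    add(key)

# Suppose measurement reveals that "192.168.7.9" is a troublesome false positive:
if query("192.168.7.9"):
    retouch("192.168.7.9")
assert not query("192.168.7.9")   # the selected false positive is gone...
# ...but any inserted key that shared the cleared bit may now be a false negative.
```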

    Fast Packet Processing on High Performance Architectures

    The rapid growth of the Internet and the fast emergence of new network applications have brought great challenges and complex issues in deploying high-speed and QoS-guaranteed IP networks. For this reason packet classification and network intrusion detection have assumed a key role in modern communication networks in order to provide QoS and security. In this thesis we describe a number of the most advanced solutions to these tasks. We introduce NetFPGA and Network Processors as reference platforms both for the design and the implementation of the solutions and algorithms described in this thesis. The rise in link capacity reduces the time available to network devices for packet processing. For this reason, we show different solutions which, either through heuristics and randomization or through the smart construction of state machines, allow IP lookup, packet classification, and deep packet inspection to be performed fast in real devices based on high-speed platforms such as NetFPGA or Network Processors.
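
    As one concrete example of the IP lookup task named above, the sketch below implements longest-prefix matching with a binary trie, a classical software approach; it is illustrative only and not taken from the thesis, whose solutions target NetFPGA and Network Processor platforms.

```python
# Illustrative binary-trie longest-prefix match for IPv4 lookup.

class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]
        self.next_hop = None

def ip_bits(ip):
    return int.from_bytes(bytes(map(int, ip.split("."))), "big")

def insert(root, prefix, next_hop):
    net, plen = prefix.split("/")
    value, node = ip_bits(net), root
    for i in range(int(plen)):
        bit = (value >> (31 - i)) & 1
        if node.children[bit] is None:
            node.children[bit] = TrieNode()
        node = node.children[bit]
    node.next_hop = next_hop

def lookup(root, ip):
    value, node, best = ip_bits(ip), root, None
    for i in range(32):
        if node.next_hop is not None:
            best = node.next_hop              # remember the longest match so far
        node = node.children[(value >> (31 - i)) & 1]
        if node is None:
            break
    else:
        best = node.next_hop or best
    return best

root = TrieNode()
insert(root, "10.0.0.0/8", "eth1")
insert(root, "10.1.0.0/16", "eth2")
print(lookup(root, "10.1.2.3"))   # eth2 (longest matching prefix wins)
print(lookup(root, "10.9.9.9"))   # eth1
```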

    Computer Aided Verification

    The open access two-volume set LNCS 12224 and 12225 constitutes the refereed proceedings of the 32nd International Conference on Computer Aided Verification, CAV 2020, held in Los Angeles, CA, USA, in July 2020.* The 43 full papers presented together with 18 tool papers and 4 case studies were carefully reviewed and selected from 240 submissions. The papers were organized in the following topical sections: Part I: AI verification; blockchain and security; concurrency; hardware verification and decision procedures; and hybrid and dynamic systems. Part II: model checking; software verification; stochastic systems; and synthesis. *The conference was held virtually due to the COVID-19 pandemic.

    On Scalable Attack Detection in the Network
