5 research outputs found

    Packet Classification via Improved Space Decomposition Techniques

    Packet Classification is a common task in modern Internet routers. In a nutshell, the goal is to classify packets into "classes" or "flows" according to some ruleset that looks at multiple fields of each packet. Differentiated actions can then be applied to the traffic depending on the result of the classification. One way to approach the task is to model it as a point location problem in a multidimensional space partitioned into a large number of regions (up to 10^6 or more, generated by the number of possible paths in the decision tree resulting from the specification of the ruleset). Many solutions proposed in the literature do not scale well with the size of the problem, with the exception of one based on a Fat Inverted Segment Tree. In this paper we propose a new geometric filtering technique, called g-filter, which is competitive with the best result in the literature and is based on an improved space decomposition technique. A theoretical worst-case asymptotic analysis shows that classification in g-filter has O(1) time complexity and space complexity close to linear in the number of rules. Additionally, thorough experiments show that the constants involved are extremely small on a wide range of problem sizes and improve on the best results in the literature. Finally, the g-filter method is not limited to 2-dimensional rules, but can handle any number of attributes with only a moderate increase in overhead per additional dimension.
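
    To make the abstract's framing concrete, the sketch below shows the naive linear-scan formulation of multi-field packet classification: each rule is a hyper-rectangle over the header fields, and classifying a packet means locating the highest-priority rectangle that contains the packet's point. This is only an illustrative baseline under assumed names (Rule, classify, and the example ruleset); it is not the g-filter decomposition itself.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Rule:
    priority: int                      # lower value = higher priority
    ranges: List[Tuple[int, int]]      # inclusive (lo, hi) interval per packet field
    action: str                        # e.g. "accept", "drop", or a flow class

def classify(packet: Tuple[int, ...], rules: List[Rule]) -> Optional[str]:
    """Return the action of the best-matching rule, or None if nothing matches."""
    best: Optional[Rule] = None
    for rule in rules:
        # The packet "point" matches a rule iff it lies inside the rule's
        # hyper-rectangle, i.e. every field falls in the corresponding range.
        if all(lo <= f <= hi for f, (lo, hi) in zip(packet, rule.ranges)):
            if best is None or rule.priority < best.priority:
                best = rule
    return best.action if best is not None else None

# Example: 2-dimensional rules over (source port, destination port).
rules = [
    Rule(priority=1, ranges=[(0, 1023), (80, 80)],    action="web-class"),
    Rule(priority=2, ranges=[(0, 65535), (0, 65535)], action="default"),
]
print(classify((443, 80), rules))   # -> web-class
print(classify((5000, 22), rules))  # -> default
```

    The cost of this scan grows linearly with the number of rules; the point of geometric schemes such as the one in the paper is to replace it with a space decomposition whose lookup time does not.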

    Fast firewall implementations for software-based and hardware-based routers

    No full text

    Dynamic Traffic Driven Architectures and Algorithms for Securing Networks

    The continuous growth in the Internet's size, the amount of data traffic, and the complexity of processing this traffic gives rise to new challenges in building high-performance network devices. This exponential growth, coupled with the increasing sophistication of attacks, is placing stringent demands on the performance of networked systems such as firewalls. These challenges require new designs, architectures and algorithms for the optimization of such systems. The current, or classical, security of the present-day Internet is "static" and "oblivious" to traffic dynamics in the network. Hence, there are tremendous efforts towards the design and development of techniques and strategies to deal with these shortcomings. Unfortunately, current solutions have been successful in addressing only some aspects of security; as a whole, security remains a major issue. This is primarily due to the lack of adaptation and dynamics in the design of such intrusion detection and mitigation systems. This thesis focuses on the design of architectures and algorithms for the optimization of such networked systems, to support not only adaptive, real-time "packet filtering" but also fast "content-based routing (differentiated services)" in today's data-driven networks. The approach proposed involves a unique combination of algorithmic and architectural techniques that aims to outperform current solutions in terms of adaptiveness, speed of operation (under attack or heavily loaded conditions) and overall operational cost-effectiveness. The tools proposed in this thesis also aim to offer the flexibility to include new approaches, and to provide the ability to migrate or deploy additional entities for attack detection and defense.
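
    As a purely illustrative sketch of what "dynamic, traffic-driven" filtering can mean (an assumed example, not the architecture developed in the thesis), the snippet below adapts its blocklist at run time: a source whose packet rate exceeds a threshold inside a sliding window is temporarily blocked. All class names, thresholds and parameters are hypothetical.

```python
import time
from collections import defaultdict, deque
from typing import Optional

class DynamicBlocklist:
    """Traffic-driven filter: block a source once its recent packet rate gets too high."""

    def __init__(self, max_pkts: int = 1000, window_s: float = 1.0, block_s: float = 60.0):
        self.max_pkts = max_pkts           # packets allowed per window before blocking
        self.window_s = window_s           # sliding-window length in seconds
        self.block_s = block_s             # how long a misbehaving source stays blocked
        self.recent = defaultdict(deque)   # src -> timestamps of its recent packets
        self.blocked_until = {}            # src -> time at which its block expires

    def allow(self, src: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        if self.blocked_until.get(src, 0.0) > now:
            return False                   # source is currently blocked
        stamps = self.recent[src]
        stamps.append(now)
        while stamps and stamps[0] < now - self.window_s:
            stamps.popleft()               # forget packets outside the window
        if len(stamps) > self.max_pkts:
            # The ruleset adapts to observed traffic: a temporary block entry
            # is installed without any operator intervention.
            self.blocked_until[src] = now + self.block_s
            return False
        return True

# Example: a source sending 1500 packets in well under a second gets cut off.
fw = DynamicBlocklist(max_pkts=1000, window_s=1.0, block_s=60.0)
verdicts = [fw.allow("10.0.0.7", now=0.0001 * i) for i in range(1500)]
print(verdicts.count(True), verdicts.count(False))   # -> 1000 500
```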