
    Packet Classification via Improved Space Decomposition Techniques

    Packet Classification is a common task in modern Internet routers. In a nutshell, the goal is to classify packets into "classes" or "flows" according to some ruleset that looks at multiple fields of each packet. Differentiated actions can then be applied to the traffic depending on the result of the classification. One way to approach the task is to model it as a point location problem in a multidimensional space, partitioned into a large number of regions (up to 10^6 or more, generated by the number of possible paths in the decision tree resulting from the specification of the ruleset). Many solutions proposed in the literature do not scale well with the size of the problem, with the exception of one based on a Fat Inverted Segment Tree. In this paper we propose a new geometric filtering technique, called g-filter, which is competitive with the best results in the literature and is based on an improved space decomposition technique. A theoretical worst-case asymptotic analysis shows that classification in g-filter has O(1) time complexity and space complexity close to linear in the number of rules. Additionally, thorough experiments show that the constants involved are extremely small on a wide range of problem sizes, and improve on the best results in the literature. Finally, the g-filter method is not limited to 2-dimensional rules, but can handle any number of attributes with only a moderately increased overhead per additional dimension.
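The geometric view described above can be made concrete with a minimal sketch: each 2-dimensional rule covers a rectangle in (source, destination) space, and classifying a packet means locating the first rectangle that contains its point. The ruleset, field ranges, and linear scan below are illustrative assumptions, not the g-filter decomposition itself, which replaces this scan with a recursive space-partitioning structure.

```python
def classify(packet, rules):
    """Return the action of the first rule whose ranges contain the packet's point."""
    src, dst = packet
    for lo_s, hi_s, lo_d, hi_d, action in rules:
        if lo_s <= src <= hi_s and lo_d <= dst <= hi_d:
            return action
    return "default"  # no rule matched

# Hypothetical two-rule set: a specific deny rectangle, then a catch-all.
rules = [
    (0, 127, 0, 255, "deny"),
    (0, 255, 0, 255, "accept"),
]

print(classify((10, 42), rules))   # point falls in the first rectangle
print(classify((200, 42), rules))  # only the catch-all contains it
```

The linear scan costs O(n) per packet; the point of space-decomposition techniques is to precompute a partition of the search space so the containing region, and hence the action, is found in near-constant time.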

    Efficient algorithms and abstract data types for local inconsistency isolation in firewall ACLs

    Writing and managing firewall ACLs are hard, tedious, time-consuming and error-prone tasks, for a wide range of reasons. During these tasks, inconsistent rules can be introduced. An inconsistent firewall ACL generally implies a design fault, and indicates that the firewall is accepting traffic that should be denied, or vice versa. This can result in severe problems such as unwanted access to services, denial of service, overflows, etc. However, it is the administrator who ultimately decides whether an inconsistent rule is a fault or not. Although many algorithms to detect and manage inconsistencies in firewall ACLs have been proposed, they have drawbacks regarding different aspects of the consistency diagnosis problem, which can prevent their use in a wide range of real-life situations. In this paper, we review these algorithms along with their drawbacks, and propose a new divide-and-conquer algorithm which uses specialized abstract data types. The proposed algorithm returns consistency results over the original ACL. Its computational complexity is better than that of the current best algorithm for inconsistency isolation, as experimental results also show.

    Ministerio de Educación y Ciencia DIP2006-15476-C02-0
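The core notion of inconsistency above can be sketched simply: two rules conflict when their match sets intersect but their actions differ. The rule representation (closed integer ranges per field) and the pairwise check below are illustrative assumptions; the paper's contribution is an efficient divide-and-conquer isolation over the whole ACL, not this naive pairwise test.

```python
def overlaps(a, b):
    """Closed integer ranges (lo, hi) overlap iff neither ends before the other starts."""
    return a[0] <= b[1] and b[0] <= a[1]

def inconsistent(r1, r2):
    """A rule is (src_range, dst_range, action); rules conflict when their
    match rectangles intersect but the actions disagree."""
    return (overlaps(r1[0], r2[0]) and overlaps(r1[1], r2[1])
            and r1[2] != r2[2])

# Hypothetical two-rule ACL: allow web traffic, but also deny a source block
# whose match set intersects the first rule.
acl = [
    ((0, 255), (80, 80), "accept"),
    ((10, 20), (0, 65535), "deny"),
]

print(inconsistent(acl[0], acl[1]))  # the rules overlap with different actions
```

Checking all pairs this way costs O(n^2); the abstract data types proposed in the paper exist precisely to bring isolation below that bound.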

    Models, Algorithms, and Architectures for Scalable Packet Classification

    The growth and diversification of the Internet imposes increasing demands on the performance and functionality of network infrastructure. Routers, the devices responsible for the switching and directing of traffic in the Internet, are being called upon not only to handle increased volumes of traffic at higher speeds, but also to impose tighter security policies and provide support for a richer set of network services. This dissertation addresses the searching tasks performed by Internet routers in order to forward packets and apply network services to packets belonging to defined traffic flows. As these searching tasks must be performed for each packet traversing the router, the speed and scalability of the solutions to the route lookup and packet classification problems largely determine the realizable performance of the router, and hence the Internet as a whole. Despite the energetic attention of the academic and corporate research communities, there remains a need for search engines that scale to support faster communication links, larger route tables and filter sets, and increasingly complex filters. The major contributions of this work include the design and analysis of a scalable hardware implementation of a Longest Prefix Matching (LPM) search engine for route lookup, a survey and taxonomy of packet classification techniques, a thorough analysis of packet classification filter sets, the design and analysis of a suite of performance evaluation tools for packet classification algorithms and devices, and a new packet classification algorithm that scales to support high-speed links and large filter sets classifying on additional packet fields.
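The Longest Prefix Matching problem mentioned above has a standard software formulation that a short sketch makes concrete: walk a binary trie of route prefixes, remembering the deepest prefix seen. This is an assumed illustrative baseline, not the dissertation's hardware engine, which targets the same lookup semantics at line rate.

```python
class TrieNode:
    def __init__(self):
        self.children = [None, None]  # indexed by the next address bit
        self.next_hop = None          # set if a prefix ends at this node

def insert(root, prefix, length, next_hop):
    """Install a /length IPv4 prefix (given as a 32-bit integer) in the trie."""
    node = root
    for i in range(length):
        bit = (prefix >> (31 - i)) & 1
        if node.children[bit] is None:
            node.children[bit] = TrieNode()
        node = node.children[bit]
    node.next_hop = next_hop

def lookup(root, addr):
    """Descend bit by bit, keeping the next hop of the longest prefix seen."""
    node, best = root, None
    for i in range(32):
        if node.next_hop is not None:
            best = node.next_hop
        bit = (addr >> (31 - i)) & 1
        if node.children[bit] is None:
            break
        node = node.children[bit]
    else:
        if node.next_hop is not None:
            best = node.next_hop
    return best

root = TrieNode()
insert(root, 0x0A000000, 8, "A")    # 10.0.0.0/8
insert(root, 0x0A010000, 16, "B")   # 10.1.0.0/16
print(lookup(root, 0x0A010203))     # 10.1.2.3 matches both; the /16 wins
print(lookup(root, 0x0A020304))     # 10.2.3.4 matches only the /8
```

The trie makes the "longest match wins" semantics explicit: every prefix on the path to the packet's address matches, and the deepest one is returned.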

    Network architecture for large-scale distributed virtual environments

    Distributed Virtual Environments (DVEs) provide 3D graphical computer-generated environments with stereo sound, supporting real-time collaboration between potentially large numbers of users distributed around the world. Early DVEs were used over local area networks (LANs). Recently, with the Internet's development into the most common embedding for DVEs, these distributed applications have moved towards exploiting IP networks. This has brought scalability challenges into the evolution of DVEs. Network bandwidth is the most limited resource of a DVE system, and to improve a DVE's scalability it is necessary to manage this resource carefully. To achieve savings in network bandwidth, the different types of network traffic produced by DVEs have to be considered. DVE applications demand the exchange of data that forms different types of traffic, such as computer data, video and audio, and 3D data, in order to keep the application's state consistent. The problem is that meeting the QoS requirements of both control and continuous media traffic has already been covered by existing research, but QoS for the transfer of 3D information has not really been considered. 3D DVE geometry traffic is very bursty in nature and places high demands on the network for short intervals of time, due to the quite large size of 3D models and the DVE application's requirement to transmit 3D data as quickly as possible. The main motivation for the work presented in this thesis is to find a solution that improves the scalability of DVE applications by considering the QoS requirements of the 3D DVE geometrical data type. In this work we investigate the possibility of decreasing the network bandwidth utilized by 3D DVE traffic using the level of detail (LOD) concept and the active networking approach.
The background work of the thesis surveys DVE applications and the scalability requirements of DVE systems. It also discusses active networks and the multiresolution representation and progressive transmission of 3D data. A new active networking approach to the transmission of 3D geometry data within DVE systems is proposed in this thesis. This approach enhances the currently applied peer-to-peer DVE architecture by adding, on top of the peer-to-peer multicast network-layer filtering of the 3D flows, application-level filtering on the active intermediate nodes. The active router keeps application-level information about the placement of users. This information is used by active routers to prune the more detailed 3D data flows (higher LODs) in the multicast tree branches that lead to distant DVE participants. The exploration of the possible benefits of the proposed active approach, through comparison with the non-active approach, is carried out using simulation-based performance modelling. The complex interactions between participants in a DVE application and the large number of analyzed variables indicate that flexible simulation is more appropriate than mathematical modelling, and building a test bed would not be feasible. Results from the evaluation demonstrate that the proposed active approach shows potential benefits for improving the DVE's scalability, but the degree of improvement depends on the users' movement pattern. Therefore, other active networking methods to support 3D DVE geometry transmission may also be required.
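The distance-based pruning described above can be sketched as a simple policy function: an active router forwards only the LOD flows justified by a participant's virtual distance from the object. The distance thresholds, LOD numbering (0 = coarsest, always forwarded), and function name below are assumptions for illustration, not the thesis's actual filtering rules.

```python
def lods_to_forward(distance, thresholds=(200.0, 50.0, 10.0)):
    """Return which LOD flows to forward on a branch serving a participant
    at the given virtual distance. thresholds[i] is the assumed maximum
    distance at which LOD i+1 is still worth sending; nearer participants
    receive progressively finer levels of detail."""
    levels = [0]  # the coarsest flow is always forwarded
    for lod, limit in enumerate(thresholds, start=1):
        if distance <= limit:
            levels.append(lod)
    return levels

print(lods_to_forward(500.0))  # a distant participant gets only the base mesh
print(lods_to_forward(30.0))   # a mid-range participant gets two refinements
print(lods_to_forward(5.0))    # a nearby participant gets every flow
```

Pruning high-LOD flows per branch is what converts the LOD concept into a bandwidth saving: the multicast tree carries the full-detail flows only where a receiver is close enough to notice them.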