
    Multi-engine packet classification hardware accelerator

    As line rates increase, designing high-performance, low-power architectures for processing router traffic remains an important task. In this paper, we present a multi-engine packet classification hardware accelerator that offers increased performance and reduced power consumption. It follows the basic idea of decision-tree-based packet classification algorithms such as HiCuts and HyperCuts, in which the hyperspace represented by the ruleset is recursively divided into smaller subspaces according to some heuristics. Each classification engine consists of a Trie Traverser, which is responsible for finding the leaf node corresponding to the incoming packet, and a Leaf Node Searcher, which reports the matching rule within that leaf node. The engine exploits the ultra-wide memory words provided by FPGA block RAM to store the decision-tree data structure, in an attempt to reduce the number of memory accesses needed for classification. Since the clock rate of an individual engine cannot match that of the internal memory, multiple classification engines are used to increase throughput. Implementations on two different FPGAs show that this architecture can reach a search speed of 169 million packets per second (Mpps) with synthesized ACL, FW and IPC rulesets. Further analysis reveals that, compared to state-of-the-art TCAM solutions, power savings of up to 72% and a throughput increase of up to 27% can be achieved.
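    The decision-tree lookup described above can be illustrated with a small software model. The sketch below is a hypothetical, simplified Python version of a HiCuts-style classifier: internal nodes cut the space along one field, the traverser walks down to a leaf, and the leaf searcher linearly scans the short rule list stored there. The field names, cut choices, and data layout are assumptions for illustration, not the paper's actual hardware data structure.

        # Minimal sketch of decision-tree (HiCuts/HyperCuts-style) packet classification.
        # Field names, cut strategy, and data layout are illustrative assumptions only.

        class Rule:
            def __init__(self, rule_id, ranges):
                self.rule_id = rule_id          # lower id = higher priority
                self.ranges = ranges            # {field: (lo, hi)} inclusive ranges

            def matches(self, packet):
                return all(lo <= packet[f] <= hi for f, (lo, hi) in self.ranges.items())

        class Node:
            def __init__(self, rules=None, field=None, boundaries=None, children=None):
                self.rules = rules              # rule list, used only at leaves
                self.field = field              # field this internal node cuts on
                self.boundaries = boundaries    # sorted cut points along that field
                self.children = children        # one child per resulting interval

        def classify(root, packet):
            node = root
            while node.children:                # "Trie Traverser": walk down to a leaf
                value = packet[node.field]
                idx = sum(1 for b in node.boundaries if value >= b)
                node = node.children[idx]
            for rule in node.rules:             # "Leaf Node Searcher": scan the leaf
                if rule.matches(packet):
                    return rule.rule_id
            return None                         # no match -> default action

        # Tiny example: one cut on the destination port field.
        r1 = Rule(1, {"dst_port": (0, 1023), "proto": (6, 6)})
        r2 = Rule(2, {"dst_port": (1024, 65535), "proto": (0, 255)})
        root = Node(field="dst_port", boundaries=[1024],
                    children=[Node(rules=[r1]), Node(rules=[r2])])
        print(classify(root, {"dst_port": 80, "proto": 6}))   # -> 1

    The paper's hardware stores the tree in ultra-wide FPGA block-RAM words so that fewer memory accesses are needed per lookup; the sketch ignores memory layout entirely.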

    Study and simulation of low rate video coding schemes

    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color-mapped images, a robust coding scheme for packet video, recursively indexed differential pulse code modulation, an image compression technique for use on token ring networks, and joint source/channel coder design.

    Compressed Data Structures for Recursive Flow Classification

    High-speed packet classification is crucial to the implementation of several advanced network services and protocols; many QoS implementations, active networking platforms, and security devices (such as firewalls and intrusion-detection systems) require it. But performing classification on multiple fields, at the speed of modern networks, is known to be a difficult problem. The Recursive Flow Classification (RFC) algorithm described by Gupta and McKeown performs classification very quickly, but can require excessive storage when using thousands of rules. This paper studies a compressed representation for the tables used in RFC, trading some memory accesses for space. The compression's efficiency can be improved by rearranging rows and columns of the tables. Finding a near-optimal rearrangement can be transformed into a Traveling Salesman Problem to which certain approximation algorithms can be applied. Also, in evaluating the compressed representation of tables, we study the effects of choosing different reduction trees in RFC. We evaluate these methods using a real-world filter database with 159 rules. Results show a reduction in the size of the cross-product tables by 61.6% in the median case; in some cases their size is reduced by 87% or more. Furthermore, experimental evidence suggests larger databases may be more compressible.
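    For context on where the compressed tables sit, an RFC lookup is a chain of table indexings: each header chunk is first mapped to an equivalence-class ID, and pairs of IDs are then combined through precomputed cross-product tables, following the reduction tree, until a single ID identifies the matching rule. The sketch below is a hypothetical illustration of that lookup phase only; all table contents are invented, and the compressed representation studied in the paper is not shown.

        # Hypothetical illustration of the RFC lookup phase: per-chunk tables map
        # header chunks to equivalence-class IDs, then cross-product tables combine
        # the IDs pairwise until one ID remains. Real RFC uses a separate table for
        # each combination step; here one table per phase keeps the example short.

        def rfc_lookup(chunks, chunk_tables, combine_tables):
            ids = [table[c] for table, c in zip(chunk_tables, chunks)]
            for table in combine_tables:        # one pass per phase of the reduction tree
                ids = [table[(a, b)] for a, b in zip(ids[0::2], ids[1::2])]
            return ids[0]                       # final ID names the matching rule/action

        # Two 4-bit chunks, two equivalence classes each, one combining table.
        chunk_tables = [
            {c: 0 if c < 8 else 1 for c in range(16)},
            {c: 0 if c == 0 else 1 for c in range(16)},
        ]
        combine_tables = [{(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 2}]
        print(rfc_lookup([3, 0], chunk_tables, combine_tables))   # -> 0

    It is cross-product tables of this kind, which grow quickly with the number of rules, that the paper compresses and rearranges.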

    dReDBox: Materializing a full-stack rack-scale system prototype of a next-generation disaggregated datacenter

    Current datacenters are based on server machines, whose mainboard and hardware components form the baseline, monolithic building block that the rest of the system software, middleware and application stack are built upon. This leads to the following limitations: (a) resource proportionality of a multi-tray system is bounded by the basic building block (mainboard), (b) resource allocation to processes or virtual machines (VMs) is bounded by the resources available within the boundary of the mainboard, leading to spare resource fragmentation and inefficiencies, and (c) upgrades must be applied to each and every server even when only a specific component needs to be upgraded. The dReDBox project (Disaggregated Recursive Datacentre-in-a-Box) addresses the above limitations and proposes the next generation of low-power, across-form-factor datacenters, departing from the paradigm of the mainboard-as-a-unit and enabling the creation of the function-block-as-a-unit. Hardware-level disaggregation and software-defined wiring of resources are supported by a full-fledged Type-1 hypervisor that can execute commodity virtual machines, which communicate over a low-latency, high-throughput software-defined optical network. To evaluate its novel approach, dReDBox will demonstrate application execution in the domains of network functions virtualization, infrastructure analytics, and real-time video surveillance. This work has been supported in part by the EU H2020 ICT project dReDBox, contract #687632.

    Software-Driven and Virtualized Architectures for Scalable 5G Networks

    In this dissertation, we argue that it is essential to rearchitect 4G cellular core networks, sitting between the Internet and the radio access network, to meet the scalability, performance, and flexibility requirements of 5G networks. Today, there is a growing consensus among operators and the research community that software-defined networking (SDN), network function virtualization (NFV), and mobile edge computing (MEC) paradigms will be the key ingredients of the next-generation cellular networks. Motivated by these trends, we design and optimize three core network architectures, SoftMoW, SoftBox, and SkyCore, for different network scales, objectives, and conditions. SoftMoW provides global control over nationwide core networks with the ultimate goal of enabling new routing and mobility optimizations. SoftBox attempts to enhance policy enforcement in statewide core networks to enable low-latency, signaling-efficient, and customized services for mobile devices. SkyCore is aimed at realizing a compact core network for citywide UAV-based radio networks that are going to serve first responders in the future. Network slicing techniques make it possible to deploy these solutions on the same infrastructure in parallel. To better support mobility and provide verifiable security, these architectures can use an addressing scheme that separates network locations and identities with self-certifying, flat and non-aggregatable address components. To benefit the proposed architectures, we designed a high-speed and memory-efficient router, called Caesar, for this type of addressing scheme.
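    As a rough illustration of the self-certifying addressing idea mentioned above, an endpoint's identity can be derived as a cryptographic hash of its public key, so the binding between identity and key can be verified without any trusted mapping service, while the location component is carried separately. The snippet below is a generic sketch of that principle under assumed formats; it is not Caesar's actual address layout.

        # Generic sketch of a self-certifying, flat identifier: the identity is a hash
        # of the endpoint's public key, so any peer holding the key can verify it.
        # Formats and field names are assumptions; this is not Caesar's address layout.

        import hashlib

        def make_identity(public_key_bytes):
            # flat and non-aggregatable: the hash carries no topological structure
            return hashlib.sha256(public_key_bytes).hexdigest()[:40]

        def verify_identity(identity, public_key_bytes):
            return identity == make_identity(public_key_bytes)

        pubkey = b"...example public key bytes..."
        address = {
            "identity": make_identity(pubkey),   # who the endpoint is (stable)
            "locator": "10.0.42.7",              # where it attaches (changes on mobility)
        }
        print(verify_identity(address["identity"], pubkey))   # -> True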

    Models, Algorithms, and Architectures for Scalable Packet Classification

    The growth and diversification of the Internet imposes increasing demands on the performance and functionality of network infrastructure. Routers, the devices responsible for the switching and directing of traffic in the Internet, are being called upon to not only handle increased volumes of traffic at higher speeds, but also impose tighter security policies and provide support for a richer set of network services. This dissertation addresses the searching tasks performed by Internet routers in order to forward packets and apply network services to packets belonging to defined traffic flows. As these searching tasks must be performed for each packet traversing the router, the speed and scalability of the solutions to the route lookup and packet classification problems largely determine the realizable performance of the router, and hence the Internet as a whole. Despite the energetic attention of the academic and corporate research communities, there remains a need for search engines that scale to support faster communication links, larger route tables and filter sets, and increasingly complex filters. The major contributions of this work include the design and analysis of a scalable hardware implementation of a Longest Prefix Matching (LPM) search engine for route lookup, a survey and taxonomy of packet classification techniques, a thorough analysis of packet classification filter sets, the design and analysis of a suite of performance evaluation tools for packet classification algorithms and devices, and a new packet classification algorithm that scales to support high-speed links and large filter sets classifying on additional packet fields.
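    Since route lookup by Longest Prefix Matching is the first contribution listed, a minimal software reference may help fix the terminology: the sketch below answers LPM queries with a plain binary trie in Python. It is a software stand-in chosen here for illustration; the scalable hardware engine in the dissertation is not claimed to be organized this way.

        # Minimal binary-trie Longest Prefix Match, as a plain software reference for
        # the route-lookup problem; the dissertation's hardware engine differs.

        class TrieNode:
            def __init__(self):
                self.children = [None, None]
                self.next_hop = None            # set when a stored prefix ends here

        def insert(root, prefix_bits, next_hop):
            node = root
            for bit in prefix_bits:             # prefix given as a bit string, e.g. "1010"
                i = int(bit)
                if node.children[i] is None:
                    node.children[i] = TrieNode()
                node = node.children[i]
            node.next_hop = next_hop

        def lookup(root, addr_bits):
            node, best = root, None
            for bit in addr_bits:
                if node.next_hop is not None:   # remember the longest prefix seen so far
                    best = node.next_hop
                node = node.children[int(bit)]
                if node is None:
                    return best
            return node.next_hop if node.next_hop is not None else best

        root = TrieNode()
        insert(root, "1010", "port A")          # shorter prefix
        insert(root, "101000", "port B")        # longer, more specific prefix
        print(lookup(root, "10100011"))         # -> port B (longest match wins)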

    Packet Classification via Improved Space Decomposition Techniques

    Packet classification is a common task in modern Internet routers. The goal is to classify packets into "classes" or "flows" according to some ruleset that looks at multiple fields of each packet. Differentiated actions can then be applied to the traffic depending on the result of the classification. Even though rulesets can be expressed in a relatively compact way by using high-level languages, the resulting decision trees can partition the search space (the set of possible attribute values) into a potentially very large number of regions. This calls for methods that scale to such large problem sizes, though the only scalable proposal in the literature so far is the one based on a Fat Inverted Segment Tree [1]. In this paper we propose a new geometric technique called G-filter for multi-dimensional packet classification. G-filter is based on an improved space decomposition technique. In addition to a theoretical analysis of G-filter's classification time complexity and its slightly super-linear space in the number of rules, we provide thorough experiments showing that the constants involved are extremely small over a wide range of problem sizes, and that G-filter improves on the best results in the literature for large problem sizes, and is competitive for small sizes as well.
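    To make the space-decomposition idea concrete, the sketch below partitions a two-dimensional rule space into a coarse uniform grid and stores, for each cell, the rules whose rectangles intersect it; classification then indexes one cell and scans its short candidate list. This is a generic illustration of geometric decomposition under assumed field ranges, not the G-filter structure itself, whose improved decomposition is the subject of the paper.

        # Generic 2-D space-decomposition classifier: a coarse uniform grid over two
        # packet fields, each cell listing the rules whose rectangles intersect it.
        # Illustrates the geometric idea only; this is not the G-filter structure.

        GRID = 4                     # 4 x 4 cells over an assumed 0..255 x 0..255 space
        CELL = 256 // GRID

        def build(rules):
            # rules: list of (priority, (x_lo, x_hi), (y_lo, y_hi)) rectangles
            cells = [[[] for _ in range(GRID)] for _ in range(GRID)]
            for prio, (xl, xh), (yl, yh) in rules:
                for gx in range(xl // CELL, xh // CELL + 1):
                    for gy in range(yl // CELL, yh // CELL + 1):
                        cells[gx][gy].append((prio, (xl, xh), (yl, yh)))
            return cells

        def classify(cells, x, y):
            best = None                          # lowest priority number wins
            for prio, (xl, xh), (yl, yh) in cells[x // CELL][y // CELL]:
                if xl <= x <= xh and yl <= y <= yh and (best is None or prio < best):
                    best = prio
            return best

        rules = [(1, (0, 63), (0, 255)),         # two overlapping rectangular rules
                 (2, (0, 255), (128, 255))]
        cells = build(rules)
        print(classify(cells, 10, 200))          # both match; rule 1 wins -> 1

    A uniform grid like this degrades when rules cluster in a few cells; the paper's contribution is an improved decomposition that keeps both space and lookup cost small even for large rulesets.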