
    Modeling Data-Plane Power Consumption of Future Internet Architectures

    With current efforts to design Future Internet Architectures (FIAs), the evaluation and comparison of different proposals is an interesting research challenge. Previously, metrics such as bandwidth or latency have commonly been used to compare FIAs to IP networks. We suggest the use of power consumption as a metric to compare FIAs. While low power consumption is an important goal in its own right (as lower energy use translates to smaller environmental impact as well as lower operating costs), power consumption can also serve as a proxy for other metrics such as bandwidth and processor load. Lacking power consumption statistics about either commodity FIA routers or widely deployed FIA testbeds, we propose models for the power consumption of FIA routers. Based on our models, we simulate scenarios for measuring the power consumption of content delivery in different FIAs. Specifically, we address two questions: 1) which of the proposed FIA candidates achieves the lowest energy footprint; and 2) which set of design choices yields a power-efficient network architecture? Although the lack of real-world data makes numerous assumptions necessary for our analysis, we explore the uncertainty of our calculations through sensitivity analysis of the input parameters.
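    As a concrete illustration of the kind of model and sensitivity analysis described above, the sketch below evaluates a simple linear per-router power model and perturbs each input one at a time. All parameter names and numeric values are illustrative assumptions, not figures from the paper.

```python
# Minimal sketch of a per-router data-plane power model and a one-at-a-time
# sensitivity analysis. All parameter names and values are illustrative
# assumptions, not figures from the paper.

def router_power(idle_w, per_packet_j, packet_rate_pps, per_bit_j, throughput_bps):
    """Total power = idle power + per-packet processing energy * packet rate
    + per-bit transmission energy * throughput."""
    return idle_w + per_packet_j * packet_rate_pps + per_bit_j * throughput_bps

baseline = dict(idle_w=150.0, per_packet_j=50e-9,
                packet_rate_pps=1e6, per_bit_j=10e-12, throughput_bps=8e9)

nominal = router_power(**baseline)
print(f"nominal power: {nominal:.1f} W")

# Vary each input by +/-20% and report the relative change in the output.
for name in baseline:
    for factor in (0.8, 1.2):
        perturbed = dict(baseline, **{name: baseline[name] * factor})
        p = router_power(**perturbed)
        print(f"{name} x{factor:.1f}: {p:.1f} W ({(p - nominal) / nominal:+.1%})")
```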

    A scalable multi-core architecture with heterogeneous memory structures for Dynamic Neuromorphic Asynchronous Processors (DYNAPs)

    Neuromorphic computing systems comprise networks of neurons that use asynchronous events for both computation and communication. This type of representation offers several advantages in terms of bandwidth and power consumption in neuromorphic electronic systems. However, managing the traffic of asynchronous events in large-scale systems is a daunting task, both in terms of circuit complexity and memory requirements. Here we present a novel routing methodology that employs both hierarchical and mesh routing strategies and combines heterogeneous memory structures to minimize both memory requirements and latency, while maximizing programming flexibility to support a wide range of event-based neural network architectures through parameter configuration. We validated the proposed scheme in a prototype multi-core neuromorphic processor chip that employs hybrid analog/digital circuits for emulating synapse and neuron dynamics together with asynchronous digital circuits for managing the address-event traffic. We present a theoretical analysis of the proposed connectivity scheme, describe the methods and circuits used to implement such a scheme, and characterize the prototype chip. Finally, we demonstrate the use of the neuromorphic processor with a convolutional neural network for the real-time classification of visual symbols being flashed to a dynamic vision sensor (DVS) at high speed.
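    To make the combined routing strategy more concrete, the following toy sketch routes an address event across a 2D mesh of chips using dimension-order routing and then delivers it hierarchically to a core, where a local CAM would match the event tag. The packet fields and routing rules are simplified assumptions for illustration, not the DYNAP circuit implementation.

```python
# Toy sketch of combining hierarchical (within-chip, tag-based) and mesh
# (between-chip, dimension-order) routing of address events. The packet
# fields and routing rules are simplified assumptions for illustration.

from dataclasses import dataclass

@dataclass
class AddressEvent:
    dest_chip: tuple      # (x, y) position of the target chip in the 2D mesh
    dest_core: int        # core index inside the target chip (hierarchical level)
    tag: int              # source tag matched against per-core CAM entries

def route(event: AddressEvent, here: tuple) -> str:
    """Return the next hop: a mesh direction while the event is in transit,
    then the target core once it has reached the destination chip."""
    x, y = here
    dx, dy = event.dest_chip
    if (x, y) != (dx, dy):
        # Dimension-order (X then Y) mesh routing between chips.
        if x != dx:
            return "EAST" if dx > x else "WEST"
        return "NORTH" if dy > y else "SOUTH"
    # On the destination chip: hierarchical delivery to the addressed core,
    # where a local CAM would match `tag` against programmed synapse entries.
    return f"core {event.dest_core}"

ev = AddressEvent(dest_chip=(2, 1), dest_core=3, tag=42)
print(route(ev, here=(0, 1)))  # EAST
```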

    FISE: A Forwarding Table Structure for Enterprise Networks

    With increasing demands for more flexible services, the routing policies in enterprise networks are becoming much richer. This places a heavy burden on the current router forwarding plane in supporting the increasing number of policies, primarily due to the limited capacity of TCAM, which further hinders the development of new network services and applications. Scalable forwarding table structures for enterprise networks have therefore attracted considerable attention from both academia and industry. To tackle this challenge, in this paper we present the design and implementation of a new forwarding table structure. It separates the functions of TCAM and SRAM, and maximally utilizes the large and flexible SRAM. A set of schemes is progressively designed to compress the storage of forwarding rules while maintaining correctness and achieving line-card packet forwarding speeds. We further design an incremental update algorithm that requires fewer memory accesses. The proposed scheme is validated and evaluated through a realistic implementation on a commercial router using real datasets. Our proposal can be easily implemented in existing devices. The evaluation results show that the performance of forwarding tables under the proposed scheme is promising. Funding: National Key R&D Program of China; National Natural Science Foundation of China (NSFC); Scientific Research Foundation for Young Teachers of Shenzhen University.
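    The general idea of keeping a compact matching stage while pushing bulk state into large, flexible SRAM can be sketched as follows: prefixes map to indices into a deduplicated action table, so identical next-hop or policy entries are stored only once. This is a simplified software analogue under assumed data structures, not the actual FISE design.

```python
# Hedged sketch of separating a small prefix-matching stage from a large,
# shared action table (the SRAM-resident part). Identical actions are stored
# once and referenced by index, compressing the rule storage. This is an
# illustration only, not the FISE scheme from the paper.

import ipaddress

class ForwardingTable:
    def __init__(self):
        self.prefixes = {}        # prefix -> index into the shared action table
        self.actions = []         # deduplicated next-hop / policy entries
        self._action_index = {}   # action -> index, for compression

    def add_rule(self, prefix: str, action: str):
        if action not in self._action_index:
            self._action_index[action] = len(self.actions)
            self.actions.append(action)
        self.prefixes[ipaddress.ip_network(prefix)] = self._action_index[action]

    def lookup(self, addr: str) -> str:
        ip = ipaddress.ip_address(addr)
        best = max((p for p in self.prefixes if ip in p),
                   key=lambda p: p.prefixlen, default=None)
        return self.actions[self.prefixes[best]] if best else "default"

fib = ForwardingTable()
fib.add_rule("10.0.0.0/8", "port 1")
fib.add_rule("10.1.0.0/16", "port 2")
fib.add_rule("192.168.0.0/16", "port 1")   # shares the stored action with 10.0.0.0/8
print(fib.lookup("10.1.2.3"))              # port 2
```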

    ADVANCED HASHING SCHEMES FOR PACKET FORWARDING USING SET ASSOCIATIVE MEMORY ARCHITECTURES

    Building a high-performance IP packet forwarding (PF) engine remains a challenge due to increasingly stringent throughput requirements and the growing sizes of IP forwarding tables. The router has to match the incoming packet's IP address against the forwarding table. The matching process has to be done at wire speed, which is why scalability and low power consumption are features that PF engines must maintain. It is common for PF engines to use hash tables; however, the classic hashing downsides have to be dealt with (e.g., collisions, worst-case memory access time, etc.). While open addressing hash tables, in general, provide good average-case search performance, their memory utilization and worst-case performance can degrade quickly due to collisions that lead to bucket overflows. Set associative memory can be used for hardware implementations of hash tables with the property that each bucket of a hash table can be searched in one memory cycle. Hence, PF engine architectures based on associative memory will outperform those based on conventional Ternary Content Addressable Memory (TCAM) in terms of power and scalability. The two standard solutions to the overflow problem are either to use some sort of predefined probing (e.g., linear or quadratic) or to use multiple hash functions. This work presents two new hash schemes that extend both aforementioned solutions to tackle the overflow problem efficiently. The first scheme is a hash probing scheme called Content-based HAsh Probing, or CHAP. CHAP is a probing scheme based on the content of the hash table that avoids the classical side effects of predefined hash probing methods (i.e., the primary and secondary clustering phenomena) while reducing the overflow. The second scheme, called Progressive Hashing, or PH, is a general multiple-hash scheme that also reduces the overflow. PH splits the prefixes into groups, assigns each group one hash function, and then reuses some hash functions in a progressive fashion to reduce the overflow. We show by experimenting with real IP lookup tables that both schemes outperform other hashing schemes.
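    A rough software sketch in the spirit of Progressive Hashing is shown below: keys are split into groups, each group starts from its own hash function, and an item that overflows its bucket progressively tries the remaining functions before falling into an overflow area. The bucket size, hash family, and grouping rule are illustrative assumptions only, not the schemes evaluated in this work.

```python
# Rough sketch of a multiple-hash scheme in the spirit of Progressive Hashing:
# keys are grouped, each group starts with its own hash function, and an item
# that overflows its bucket progressively tries the remaining functions.
# Bucket size, hash choice, and grouping are illustrative assumptions only.

import hashlib

NUM_BUCKETS = 8
BUCKET_SIZE = 4          # one set-associative bucket = one memory cycle to search
NUM_HASHES = 3

def h(key: str, i: int) -> int:
    """Family of hash functions indexed by i."""
    digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_BUCKETS

def group_of(key: str) -> int:
    """Assign each key (e.g. a prefix) a starting hash function."""
    return len(key) % NUM_HASHES

table = [[] for _ in range(NUM_BUCKETS)]
overflow = []

def insert(key: str):
    # Try the group's own hash first, then later functions progressively.
    for i in range(group_of(key), NUM_HASHES):
        b = h(key, i)
        if len(table[b]) < BUCKET_SIZE:
            table[b].append(key)
            return
    overflow.append(key)   # still overflowing after all functions were tried

for prefix in ("10.0.0.0/8", "10.1.0.0/16", "172.16.0.0/12", "192.168.0.0/16"):
    insert(prefix)
print(table, overflow)
```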

    A Multifunctional Processing Board for the Fast Track Trigger of the H1 Experiment

    The electron-proton collider HERA is being upgraded to provide higher luminosity from the end of the year 2001. In order to enhance the selectivity on exclusive processes, a Fast Track Trigger (FTT) with high momentum resolution is being built for the H1 Collaboration. The FTT will perform a 3-dimensional reconstruction of curved tracks in a magnetic field of 1.1 Tesla down to 100 MeV in transverse momentum. It is able to reconstruct up to 48 tracks within 23 μs in a high track-multiplicity environment. The FTT consists of two hardware levels, L1 and L2, and a third software level. Analog signals of 450 wires are digitized at the first level, followed by a quick lookup of valid track segment patterns. For the main processing tasks at the second level, such as linking, fitting, and deciding, a multifunctional processing board has been developed by ETH Zurich in collaboration with Supercomputing Systems (Zurich). It integrates a high-density FPGA (Altera APEX 20K600E) and four floating-point DSPs (Texas Instruments TMS320C6701). This presentation mainly concentrates on second trigger level hardware aspects and on the implementation of the algorithms used for linking and fitting. Emphasis is especially put on the integrated CAM (content addressable memory) functionality of the FPGA, which is ideally suited for implementing fast search tasks like track segment linking.
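    As a loose software analogue of the CAM-based linking step, the sketch below stores track segments keyed by a quantized (curvature, phi) word per layer, so candidate links are found with a single associative lookup instead of a scan. The binning and key layout are invented for illustration and do not reflect the FPGA implementation.

```python
# Very rough software analogue of CAM-based segment linking: track segments
# from each layer are stored keyed by a quantized (curvature, phi) word, so
# compatible segments in other layers are found with one associative lookup.
# Binning and key layout are illustrative assumptions only.

from collections import defaultdict

def key(curvature: float, phi: float) -> tuple:
    # Quantize to coarse bins; segments belonging to one track share a key.
    return (round(curvature * 100), round(phi * 10))

# cam[layer][key] -> list of segment ids, mimicking a per-layer CAM.
cam = defaultdict(lambda: defaultdict(list))

def store(layer: int, seg_id: int, curvature: float, phi: float):
    cam[layer][key(curvature, phi)].append(seg_id)

def link(curvature: float, phi: float, layers=(1, 2, 3)):
    """Return the segment ids in each layer that match the search key."""
    k = key(curvature, phi)
    return {layer: cam[layer][k] for layer in layers if cam[layer][k]}

store(1, 11, 0.052, 1.23)
store(2, 27, 0.051, 1.24)
store(3, 35, 0.049, 1.21)
print(link(0.051, 1.23))   # segments compatible with one candidate track
```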