2,074 research outputs found

    Virtual lines, a deadlock-free and real-time routing mechanism for ATM networks

    In this paper, we present a routing mechanism and buffer allocation mechanism for an ATM switching fabric. Since the fabric will be used to transfer multimedia traffic, it should provide a guaranteed throughput and a bounded latency. We focus on the design of a suitable routing mechanism that is capable of fulfilling these requirements and is free of deadlocks. We describe two basic concepts that can be used to implement deadlock-free routing. Routing of messages is closely related to buffering. We have organized the buffers into parallel FIFOs, each representing a virtual line. In this way, we have not only solved the problem of head-of-line blocking, but we can also give real-time guarantees. We show that for local high-speed networks it is more advantageous to have proper flow control than to have large buffers. Although the virtual line concept can have a low buffer utilization, the transfer efficiency can be higher. The virtual line concept allows adaptive routing. The total throughput of the network can be improved by using alternative routes. Adaptive routing is attractive in networks where alternative routes are not much longer than the initial route(s). The network of the switching fabric is built up from switching elements interconnected in a Kautz topology.
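
    The per-connection buffering described above can be illustrated with a small sketch. The Python code below uses hypothetical names and is not the fabric's actual implementation; it models an output port that keeps one FIFO per virtual line, so a cell blocked on one line never holds up cells of other lines, and a full FIFO simply exerts back-pressure through flow control instead of requiring a large shared buffer.

        from collections import deque

        class VirtualLinePort:
            """One FIFO per virtual line: no head-of-line blocking and
            bounded queueing per connection (illustrative sketch)."""

            def __init__(self, num_lines, fifo_depth):
                self.fifos = [deque() for _ in range(num_lines)]
                self.depth = fifo_depth

            def enqueue(self, line_id, cell):
                fifo = self.fifos[line_id]
                if len(fifo) >= self.depth:
                    return False        # FIFO full: back-pressure the sender (flow control)
                fifo.append(cell)
                return True

            def dequeue(self, start):
                # Serve the virtual lines round-robin; empty lines are skipped,
                # so one stalled connection cannot block the others.
                n = len(self.fifos)
                for i in range(n):
                    line = (start + i) % n
                    if self.fifos[line]:
                        return line, self.fifos[line].popleft()
                return None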

    Design and implementation of the Quarc network on-chip

    Networks-on-Chip (NoCs) have emerged as an alternative to buses, providing a packet-switched communication medium for the modular development of large Systems-on-Chip. However, to successfully replace its predecessor, the NoC has to be able to exchange all types of traffic efficiently, including collective communications. The latter are especially important, e.g. for cache updates in multicore systems. The Quarc NoC architecture has been introduced as a Network-on-Chip that is highly efficient in exchanging all types of traffic, including broadcast and multicast. In this paper, we present the hardware implementation of the switch architecture and the network adapter (transceiver) of the Quarc NoC. Moreover, the paper presents an analysis and comparison of the cost and performance of the Quarc and Spidergon NoCs, implemented in Verilog and targeting the Xilinx Virtex FPGA family. We demonstrate a dramatic improvement in performance over the Spidergon, especially for broadcast traffic, at no additional hardware cost.

    Deadlock avoidance with virtual channels

    High Performance Computing is a rapidly evolving area of computer science that attempts to solve complicated computational problems by combining computational nodes connected through high-speed networks. This work concentrates on the problems that appear in such networks and focuses especially on the deadlock problem, which can decrease the efficiency of communication or even destroy the balance and paralyze the network. The goal of this work is deadlock avoidance through the use of virtual channels in the switches of the network where the problem appears. Deadlock avoidance ensures that no data will be lost inside the network, at the cost of increased latency for the served packets, due to the extra computation that the switches have to perform to apply the policy.
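
    As a concrete illustration of how virtual channels remove deadlock, the sketch below (Python, illustrative only; not the network evaluated in this work) applies the classic dateline rule to a unidirectional ring: packets are injected on virtual channel 0 and move to virtual channel 1 once they cross the wrap-around link, so the channel dependence graph contains no cycle.

        def select_virtual_channel(cur_node, next_node, cur_vc, ring_size=8):
            """Dateline rule for a unidirectional ring with two virtual channels.
            Packets start on VC0 and switch to VC1 after crossing the wrap-around
            (dateline) link, which breaks the single cycle of channel dependencies."""
            crosses_dateline = (cur_node == ring_size - 1 and next_node == 0)
            return 1 if (cur_vc == 1 or crosses_dateline) else 0

    For example, a packet travelling from node 6 to node 2 on an 8-node ring uses VC0 on link 6->7 and VC1 on links 7->0, 0->1 and 1->2.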

    Application-Aware Deadlock-Free Oblivious Routing

    Conventional oblivious routing algorithms are either not application-aware or assume that each flow has its own private channel to ensure deadlock avoidance. We present a framework for application-aware routing that assures deadlock-freedom under one or more channels by forcing routes to conform to an acyclic channel dependence graph. Arbitrary minimal routes can be made deadlock-free through appropriate static channel allocation when two or more channels are available. Given bandwidth estimates for flows, we present a mixed integer-linear programming (MILP) approach and a heuristic approach for producing deadlock-free routes that minimize maximum channel load. The heuristic algorithm is calibrated using the MILP algorithm and evaluated on a number of benchmarks through detailed network simulation. Our framework can be used to produce application-aware routes that target the minimization of latency, number of flows through a link, bandwidth, or any combination thereof.
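
    The deadlock-freedom test at the heart of this framework reduces to checking that the chosen routes induce an acyclic channel dependence graph. A minimal sketch (Python; function and variable names are illustrative, not the authors' tool) is shown below: each route contributes an edge between every pair of consecutive channels, and a depth-first search looks for a cycle.

        from collections import defaultdict

        def channel_dependence_graph(routes):
            """routes: iterable of channel-id sequences, one per flow."""
            deps = defaultdict(set)
            for route in routes:
                for a, b in zip(route, route[1:]):
                    deps[a].add(b)      # using channel a then b creates a dependence a -> b
            return deps

        def routes_are_deadlock_free(routes):
            """Sufficient condition: the channel dependence graph is acyclic."""
            deps = channel_dependence_graph(routes)
            visiting, done = set(), set()

            def has_cycle(c):
                if c in done:
                    return False
                if c in visiting:
                    return True         # back edge: cyclic dependence found
                visiting.add(c)
                if any(has_cycle(n) for n in deps.get(c, ())):
                    return True
                visiting.remove(c)
                done.add(c)
                return False

            return not any(has_cycle(c) for c in list(deps))

    For instance, the routes (1, 2, 3) and (3, 4, 1) create the dependence cycle 1 -> 2 -> 3 -> 4 -> 1, so the check returns False and one of the flows would have to be moved to a different channel.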

    Static virtual channel allocation in oblivious routing

    Most virtual channel routers have multiple virtual channels to mitigate the effects of head-of-line blocking. When there are more flows than virtual channels at a link, packets or flows must compete for channels, either in a dynamic way at each link or by static assignment computed before transmission starts. In this paper, we present methods that statically allocate channels to flows at each link when oblivious routing is used, and ensure deadlock freedom for arbitrary minimal routes when two or more virtual channels are available. We then experimentally explore the performance trade-offs of static and dynamic virtual channel allocation for various oblivious routing methods, including DOR, ROMM, Valiant and a novel bandwidth-sensitive oblivious routing scheme (BSORM). Through judicious separation of flows, static allocation schemes often exceed the performance of dynamic allocation schemes.
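
    One simple way to realize the static allocation step, given bandwidth estimates for the flows crossing a link, is a greedy balance of flows across the available virtual channels. The sketch below (Python, with illustrative names; the paper's allocator additionally has to respect the deadlock-freedom constraints discussed above) assigns the largest flows first to the currently least-loaded channel.

        def allocate_static_vcs(link_flows, num_vcs):
            """link_flows: dict flow_id -> estimated bandwidth on this link.
            Returns a per-flow VC assignment and the resulting per-VC load."""
            loads = [0.0] * num_vcs
            assignment = {}
            # Placing the largest flows first keeps the greedy balance tight.
            for flow_id, bw in sorted(link_flows.items(), key=lambda kv: -kv[1]):
                vc = min(range(num_vcs), key=loads.__getitem__)
                assignment[flow_id] = vc
                loads[vc] += bw
            return assignment, loads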

    Round Robin based Arbitration Mechanism for Signaling Approach based Router Architecture

    In a Network-on-Chip, the effectiveness of network resource allocation is determined by the flow control mechanism. There are two types of flow control mechanisms: buffered and bufferless. Compared to buffered flow control methods, bufferless mechanisms are simpler to use, need less power, and take up less area, but they suffer higher packet loss and packet misrouting inside the network when congestion and resource conflicts occur. A good buffered control mechanism is therefore useful, as it overcomes the limitations of the bufferless mechanism. Numerous buffered and bufferless flow control methods are available. In this paper, a signaling-based Virtual Output Queue router arbiter mechanism is used to explore credit-based flow control. The mechanism is built on a new concept, the "stress value". Whenever an input buffer has free space, this information is generated in the form of a credit, and the node's stress value is determined from this credit data. If the free buffer space is larger, it takes precedence over the stress value; the stress value increases as the available buffer space shrinks. To handle congestion, the signaling block sends this stress value to the neighboring router. The crediting system operates continuously in tandem with arbitration, helping the arbiter make more accurate decisions.
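
    The interplay of round-robin arbitration, credit-based flow control, and the stress value can be sketched as follows (Python; class and signal names are hypothetical, not the paper's RTL interface). The arbiter grants an input port only while downstream credits remain, and the stress value, here simply the occupied portion of the downstream buffer, is what the signaling block would forward to a neighboring router.

        class CreditRoundRobinArbiter:
            def __init__(self, num_inputs, downstream_capacity):
                self.num_inputs = num_inputs
                self.pointer = 0                      # next input to consider (fairness)
                self.capacity = downstream_capacity
                self.credits = downstream_capacity    # free buffer slots downstream

            def stress_value(self):
                # Less free space -> higher stress; signaled to neighbors as congestion info.
                return self.capacity - self.credits

            def credit_return(self):
                # The downstream router freed one buffer slot.
                self.credits = min(self.capacity, self.credits + 1)

            def grant(self, requests):
                """requests: list of booleans, one per input port; returns the
                granted port, or None if no request or no credit is available."""
                if self.credits == 0:
                    return None
                for i in range(self.num_inputs):
                    port = (self.pointer + i) % self.num_inputs
                    if requests[port]:
                        self.pointer = (port + 1) % self.num_inputs
                        self.credits -= 1
                        return port
                return None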

    Design and Performance Analysis of Low Latency Routing Algorithm based NoC for MPSoC

    The Network on Chip (NoC) is appropriate where System-on-Chip technology has to be scalable and adaptable. The NoC is a new communication architecture with a number of benefits, including scalability, flexibility, and reusability, for applications built on a Multiprocessor System on Chip (MPSoC). However, designing an efficient, high-performance NoC fabric is critically complex because of its architectural parameters. Identifying a suitable scheduling algorithm to resolve arbitration among ports and obtain high-speed data transfer in the router is one of the most significant phases in designing a NoC-based MPSoC. The router determines the latency, throughput, area utilization, energy consumption, and reliability of the NoC fabric. The performance of the NoC system is hampered by the deadlock issues that plague conventional routing algorithms, and this work develops a novel routing algorithm to address the deadlock problem. A deterministic, shortest-path, deadlock-free routing method is developed based on an analysis of the turn model. In the 2D-mesh structure, the algorithm uses separate routing methods for the odd and even columns, which minimizes the number of paths per channel, congestion, and latency. Two test scenarios, one with and one without a load test, were used to evaluate the proposed model. For a zero-load network, three clock cycles are needed to transfer the packets; for the loaded network, five clock cycles are needed. The latency measured without load and with load is 3 ns and 7 ns respectively. The proposed method has a throughput of 18.57 Mbps. The area and power utilization of the proposed method are 69% (IO utilization) and 0.128 W respectively. To validate the proposed method, its latency is compared with existing work; latency is reduced by 50% both with and without congestion load.
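
    The turn-model analysis underlying this kind of algorithm can be illustrated with the classic Odd-Even routing function, which permits EN/ES turns only in odd columns and NW/SW turns only in even columns. The sketch below (Python; coordinates are (column, row) with 'N' meaning increasing row) returns the admissible output directions; a deterministic router such as the one proposed here would then pick one of them by a fixed rule. This is an illustration of the general turn model, not the paper's exact algorithm.

        def odd_even_outputs(cur, src, dst):
            """Admissible output directions under the Odd-Even turn model."""
            c0, c1 = cur
            s0, _ = src
            d0, d1 = dst
            e0, e1 = d0 - c0, d1 - c1
            avail = set()
            if e0 == 0 and e1 == 0:
                return avail                           # packet has arrived
            if e0 == 0:                                # same column as destination
                avail.add('N' if e1 > 0 else 'S')
            elif e0 > 0:                               # destination lies to the east
                if e1 == 0:
                    avail.add('E')
                else:
                    if c0 % 2 == 1 or c0 == s0:        # EN/ES turns only in odd columns
                        avail.add('N' if e1 > 0 else 'S')
                    if d0 % 2 == 1 or e0 != 1:         # avoid being forced into an illegal
                        avail.add('E')                 # turn in an even destination column
            else:                                      # destination lies to the west
                avail.add('W')
                if c0 % 2 == 0:                        # detour N/S only from even columns, so
                    avail.add('N' if e1 > 0 else 'S')  # the later NW/SW turn back west is legal
            return avail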