
    High capacity photonic integrated switching circuits

    As the demand for high-capacity data transfer keeps increasing in high-performance computing and in a broader range of system-area networking environments, reconfiguring the strained networks at ever faster speeds and with larger volumes of traffic has become a huge challenge. Formidable bottlenecks appear at the physical layer of these switched interconnects because of their energy consumption and footprint. The energy consumption of the highly sophisticated but increasingly unwieldy electronic switching systems grows rapidly with line rate, and their designs are already constrained by heat and power management issues. The routing of multi-Terabit/second data using optical techniques has been targeted by leading international industrial and academic research labs. So far the work has relied largely on discrete components, which are bulky and incur considerable networking complexity. The most promising architectures must be integrated in a way that fully leverages the advantages of photonic technologies. Photonic integration technologies offer the promise of low power consumption and reduced footprint. In particular, photonic integrated semiconductor optical amplifier (SOA) gate-based circuits have received much attention as a potential solution. SOA gates exhibit multi-terahertz bandwidths and can be switched from a high-gain state to a high-loss state within a nanosecond using low-voltage electronics. In addition, in contrast to electronic switching systems, their energy consumption does not rise with line rate. This dissertation shows, through the use of different kinds of materials and integration technologies, that photonic integrated SOA-based optoelectronic switches can be scaled in either connectivity or data capacity and are poised to become a key technology for very high-speed applications. Chapter 2 reviews the optical switching background and the drawbacks of optical switches built around electronic cores, and surveys current optical switching technologies with special attention to SOA-based switches. Chapter 3 discusses the first demonstrations using quantum dot (QD) material to develop scalable and compact switching matrices operating in the 1.55 µm telecommunication window. In Chapter 4, the capacity limitations of scalable quantum well (QW) SOA-based multistage switches are assessed through experimental studies for the first time. Chapter 5 presents a theoretical analysis of how data integrity depends on ultrahigh line rates and on the number of monolithically integrated SOA stages. Chapter 6 presents designs for the next generation of large-scale photonic integrated interconnects; a 16x16 switch architecture is described, from its blocking properties to the new miniaturized elements proposed. Finally, Chapter 7 presents several recommendations for future work, along with some concluding remarks.
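
    A well-known physical driver of the degradation analysed in Chapter 5 is that every SOA gate adds amplified spontaneous emission noise, so the cascaded noise figure, and hence the penalty on optical signal-to-noise ratio, grows with the number of stages. The short sketch below is not taken from the dissertation; it merely applies the standard Friis cascade formula with illustrative gain, noise-figure, and loss values to show the trend.

```python
import math

def db_to_lin(x_db):
    return 10 ** (x_db / 10.0)

def cascaded_nf_db(chain):
    """Friis cascade formula; chain is a list of (gain_dB, noise_figure_dB) elements."""
    f_total = 0.0
    g_accum = 1.0
    for i, (g_db, nf_db) in enumerate(chain):
        f = db_to_lin(nf_db)
        f_total += f if i == 0 else (f - 1.0) / g_accum
        g_accum *= db_to_lin(g_db)
    return 10 * math.log10(f_total)

# Illustrative numbers (assumptions, not values from the thesis): each switching stage
# is modelled as an SOA gate with 10 dB gain and 7 dB noise figure, followed by 10 dB
# of passive splitting/combining loss (whose noise figure equals its loss).
for n in (1, 2, 4, 8):
    chain = [(10.0, 7.0), (-10.0, 10.0)] * n
    print(f"{n} stage(s): cascaded NF = {cascaded_nf_db(chain):.1f} dB")
```

    With these assumed numbers the cascaded noise figure grows by roughly 3 dB per doubling of the stage count once several stages are cascaded, which is the qualitative behaviour the dissertation examines in detail.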

    SDT: A Low-cost and Topology-reconfigurable Testbed for Network Research

    Network experiments are essential to network-related scientific research (e.g., congestion control, QoS, network topology design, and traffic engineering). However, (re)configuring various topologies on a real testbed is expensive, time-consuming, and error-prone. In this paper, we propose Software Defined Topology Testbed (SDT), a method for constructing a user-defined network topology using a few commodity switches. SDT is low-cost, deployment-friendly, and reconfigurable: it can run multiple sets of experiments under different topologies simply by loading different topology configuration files at the controller we designed. We implement a prototype of SDT and conduct numerous experiments. Evaluations show that SDT introduces at most 2% extra overhead in multi-hop latency compared with full testbeds and is far more efficient than software simulators (reducing evaluation time by up to 2899x). SDT is also more cost-effective and scalable than existing Topology Projection (TP) solutions. Further experiments show that SDT can support various network research experiments at low cost, on topics including but not limited to topology design, congestion control, and traffic engineering. (To appear in IEEE CLUSTER 2023.)
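
    The core idea behind SDT, as described above, is that a controller projects a user-defined logical topology onto the ports of one or a few commodity switches. The sketch below only illustrates that projection step; the data layout, function names, and port-assignment strategy are assumptions made for exposition, not SDT's actual configuration format or controller API.

```python
from itertools import count

# Logical topology the experimenter wants to emulate (a 4-node ring as an example).
logical_links = [("n0", "n1"), ("n1", "n2"), ("n2", "n3"), ("n3", "n0")]

def project(links, ports_per_switch=32):
    """Assign each logical link a pair of physical ports; wiring those ports
    back-to-back (or isolating them per link) lets the switch emulate the link."""
    port = count(start=1)
    plan = []
    for a, b in links:
        pa, pb = next(port), next(port)
        if pb > ports_per_switch:
            raise ValueError("topology needs more ports than one switch provides")
        plan.append({"link": (a, b), "ports": (pa, pb)})
    return plan

for entry in project(logical_links):
    print(entry)
```

    A real controller would additionally install the forwarding rules that keep traffic on the emulated links; the sketch stops at the port-assignment plan.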

    Automated Debugging Methodology for FPGA-based Systems

    Electronic devices make up a vital part of our lives, from mobile phones, laptops, and computers to home automation systems. Modern designs contain billions of transistors. With this evolution, however, ensuring that devices meet the designer's expectations under variable conditions has become a great challenge, demanding considerable design time and effort. Whenever an error is encountered, the process is restarted; it is therefore desirable to minimize the number of spins required to achieve an error-free product, as each spin costs time and effort. Software-based simulation is the main technique for verifying a design before fabrication, but a few design errors (bugs) are likely to escape the simulation process and appear later, in the post-silicon phase. Finding such bugs is time-consuming because of the limited internal visibility of the hardware. Instead of simulating the design in software during the pre-silicon phase, post-silicon techniques let designers verify functionality on physical implementations of the design. The main benefit of this methodology is that the implemented design runs many orders of magnitude faster in the post-silicon phase than its pre-silicon counterpart, allowing designers to validate their design more exhaustively. This thesis presents five main contributions towards a fast and automated debugging solution for reconfigurable hardware. Throughout the work, an obstacle avoidance system for robotic vehicles is used as a case study to illustrate how the proposed debugging solution can be applied in practice. The first contribution is a debugging system capable of providing a lossless trace of debugging data that permits cycle-accurate replay, ensuring that both permanent and intermittent errors in the implemented design are captured. This contribution also describes a solution for enhancing hardware observability: processor-configurable concentration networks, debug data compression to transmit the data more efficiently, and partial run-time reconfiguration of the debugging system to avoid design recompilation and preserve timing closure. The second contribution presents a solution for communication-centric designs, together with solutions for designs with multiple clock domains. The third contribution is a priority-based signal selection methodology that identifies the signals most helpful during debugging, along with a connectivity generation tool that maps the identified signals to the debugging system. The fourth contribution is an automated error detection solution that captures permanent as well as intermittent errors without continuous monitoring of debugging data, and works even in the absence of a golden reference. The fifth contribution proposes the use of artificial intelligence for post-silicon debugging: a recurrent neural network is used for debugging when a golden reference is available for training, and the idea is extended to designs where no golden reference is present.
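
    One way to picture the golden-reference-based error detection described above is to compare several cycle-accurate replay traces against the golden trace: signals that mismatch in every replay point to permanent errors, while signals that mismatch only in some replays point to intermittent ones. The following minimal sketch, with an assumed trace representation, illustrates that classification idea; it is not the thesis implementation.

```python
# Traces are assumed to be dicts mapping signal name -> list of per-cycle values.
def classify_errors(golden, replays):
    permanent, intermittent = set(), set()
    for signal, expected in golden.items():
        mismatch_runs = sum(
            any(seen != exp for seen, exp in zip(replay[signal], expected))
            for replay in replays
        )
        if mismatch_runs == len(replays):
            permanent.add(signal)       # wrong in every replay
        elif mismatch_runs > 0:
            intermittent.add(signal)    # wrong only in some replays
    return permanent, intermittent

golden = {"ack": [0, 1, 1, 0], "busy": [1, 1, 0, 0]}
replays = [
    {"ack": [0, 1, 1, 0], "busy": [1, 0, 0, 0]},   # busy glitches in this run only
    {"ack": [0, 1, 1, 0], "busy": [1, 1, 0, 0]},
]
print(classify_errors(golden, replays))   # -> (set(), {'busy'})
```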

    Energy-Efficient Interconnection Networks for High-Performance Computing

    In recent years, energy has become one of the most important factors for designing and operating large-scale computing systems. This is particularly true in high-performance computing, where systems often consist of thousands of nodes. Especially after the end of Dennard scaling, the demand for energy proportionality in components, where energy consumption depends linearly on utilization, continues to grow. As the main contributor to overall power consumption, processors have received most of the attention so far. Their increasing energy proportionality, however, shifts the focus to other components such as interconnection networks, whose share of the overall power consumption is expected to rise to 20% or more as other components continue to improve their efficiency. Hence, it is crucial to improve energy proportionality in interconnection networks as well, in order to reduce overall power and energy consumption. To facilitate these efforts, this work provides comprehensive studies of energy saving in interconnection networks at different levels. First, interconnection networks differ fundamentally from other components in their underlying technology; to gain a deeper understanding of these differences and to identify targets for energy savings, this work provides a detailed power analysis of current network hardware. Furthermore, various applications at different scales are analyzed with respect to their communication patterns and locality properties. The findings show that communication makes up only a small fraction of the execution time and that networks are actually idle most of the time. Another observation is that point-to-point communication often occurs only within small subsets of all participants, which indicates that a coordinated mapping could further decrease network traffic. Based on these studies, three different energy-saving policies are designed, differing in their implementation and focus, and evaluated in an event-based, power-aware network simulator. The two policies that operate entirely locally at the link level enable significant energy savings of more than 90% in most analyses, whereas the hybrid policy does not provide further benefits despite significant additional design effort. These studies also cover network design parameters, such as the transition time between different link configurations, and the three most common topologies in supercomputing systems. The final part of this work addresses the interaction of congestion management and energy-saving policies. Although the two network management strategies pursue different goals and use opposite approaches, they complement each other: in all studies they increase energy efficiency and reduce the performance overhead compared with plain energy saving.
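
    The two purely local policies mentioned above act at the link level: a link that has been idle long enough is powered down, and the next message pays the transition time needed to bring it back up. The toy sketch below, with assumed threshold and transition values, illustrates that trade-off between powered-on time and added wake-up latency; it is not the event-based, power-aware simulator used in the study.

```python
IDLE_THRESHOLD = 50     # cycles of idleness before the link is powered down (assumption)
TRANSITION_TIME = 20    # cycles needed to bring the link back up (assumption)

def simulate(arrival_times, horizon):
    """Return (fraction of cycles the link is powered on, total added wake-up latency)."""
    arrivals = set(arrival_times)
    powered_on = True
    idle_for = 0
    wake_countdown = 0              # remaining cycles of an ongoing wake-up transition
    on_cycles = added_latency = 0
    for t in range(horizon):
        if t in arrivals:
            idle_for = 0
            if not powered_on:      # message hits a sleeping link: wake it, pay the delay
                powered_on = True
                wake_countdown = TRANSITION_TIME
                added_latency += TRANSITION_TIME
        else:
            idle_for += 1
        if powered_on and wake_countdown == 0 and idle_for >= IDLE_THRESHOLD:
            powered_on = False      # idle long enough: power the link down
        if wake_countdown > 0:
            wake_countdown -= 1
        if powered_on:
            on_cycles += 1
    return on_cycles / horizon, added_latency

print(simulate(arrival_times=[10, 400, 410, 900], horizon=1000))
```

    In such a model, a longer idle threshold protects latency at the cost of smaller savings, which is the basic tension any link-level energy-saving policy has to balance.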

    Hosting Capacity Optimization in Modern Distribution Grids

    The availability of distributed renewable energy resources and the anticipated increase in new types of loads are changing the way electricity is produced and supplied to consumers. The grid is moving away from a network delivering power solely from centralized power plants towards a decentralized network that supplements its power production with local distributed generators (DGs). However, the increased integration of DGs into existing distribution networks affects their behavior in terms of voltage profile, reliability, and power quality. To determine the maximum amount of DG that distribution grids can accommodate, the concept of hosting capacity is introduced. The hosting capacity of a distribution grid is defined as the amount of new production or consumption that can be added without adversely impacting the reliability or voltage quality for other customers. Hosting capacity studies are commonly carried out by simulating power flow for each potential placement of DG while enforcing operating limits (e.g., voltage limits and line thermal limits). Traditionally, power flow is simulated by solving the full nonlinear AC power flow equations for each potential configuration. Existing methods for computing hosting capacity therefore require extensive iteration, which is computationally expensive and offers no guarantee of solution optimality. In this dissertation, several approaches for determining the optimal hosting capacity are introduced. First, an optimization-based method for determining the hosting capacity in distribution grids is proposed. The method is built on a set of linear power flow equations that enable a linear programming formulation of the hosting capacity model. The optimization-based hosting capacity method is then extended to investigate further increases in hosting capacity through network reconfiguration, which uses existing switches in the system to raise the allowable hosting capacity without upgrading the network infrastructure. Finally, a sensitivity-based method is described which obtains the optimal hosting capacity for larger distribution systems more efficiently. The proposed methods are evaluated on several radial test distribution grids to demonstrate their effectiveness and performance, and are further compared against existing iterative hosting capacity calculation methods. The results show that the proposed methods outperform traditional methods in computation time while delivering comparable results.
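
    The optimization-based method rests on the observation that, once the power flow is linearized, hosting capacity becomes a linear program: maximize total DG injection subject to linear voltage (and thermal) constraints. The sketch below illustrates such a formulation on a toy three-bus feeder using a generic linear voltage-sensitivity model; the sensitivity matrix, limits, and solver choice are illustrative assumptions, not the equations developed in the dissertation.

```python
import numpy as np
from scipy.optimize import linprog

# Assumed linearized model: voltage rise dV_i = sum_j R[i, j] * p_j (p.u. per MW injected).
R = np.array([[0.010, 0.005, 0.003],
              [0.005, 0.015, 0.008],
              [0.003, 0.008, 0.020]])
v0, v_max = 1.00, 1.05          # nominal voltage and upper limit (p.u., assumptions)
p_max_per_bus = 5.0             # per-bus DG size limit in MW (assumption)

# Maximize sum(p)  <=>  minimize -sum(p), subject to v0 + R @ p <= v_max and 0 <= p <= p_max.
res = linprog(
    c=-np.ones(3),
    A_ub=R,
    b_ub=np.full(3, v_max - v0),
    bounds=[(0, p_max_per_bus)] * 3,
    method="highs",
)
print("hosting capacity (MW):", round(-res.fun, 2), "per-bus DG:", np.round(res.x, 2))
```

    Network reconfiguration fits naturally into this picture: switching actions change the sensitivity matrix, so the same linear program can be re-solved per configuration to find the topology that admits the most DG.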

    Implementation of Bus-Based and NoC-Based MP3 Decoders on FPGA

    The trend in modern System-on-Chip (SoC) design is towards ever larger designs with more Processing Elements (PEs) for both specialized and general-purpose tasks. The emergence of the Field Programmable Gate Array (FPGA) has lowered the barriers associated with Application Specific Integrated Circuit (ASIC) design. FPGAs offer a shorter time-to-market and are ideal candidates for prototyping because of the design flexibility they provide, which is the key feature of the technology; reconfiguration concepts introduced by recent technology advancements increase this flexibility even further. One way to improve an SoC's performance is to adopt a sophisticated communication medium between PEs to achieve high throughput. Bus architectures have been improved to meet the requirements of high-performance SoCs, but their inherently poor scalability limits further enhancement. The Network-on-Chip (NoC) design paradigm has emerged to overcome the scalability limitations of point-to-point and bus communication. This thesis presents an investigation of NoC-based versus bus-based implementation of an SoC, using an MP3 decoder as the application implemented on the proposed designs. The final design demonstrates that the NoC-based MP3 decoder achieves a 14% faster clock frequency and real-time operation, decoding an MP3 frame on average in 10% less time than the bus-based MP3 decoder.