
    Queue Management in Network Processors

    Abstract: One of the main bottlenecks when designing a network processing system is very often its memory subsystem. This is mainly due to state-of-the-art network links operating at very high speeds and to the fact that supporting advanced Quality of Service (QoS) calls for a large number of independent queues. In this paper we analyze the performance bottlenecks of various data memory managers integrated in typical Network Processing Units (NPUs). We expose the performance limitations of software implementations running on the RISC processing cores typically found in most NPU architectures, and we identify the requirements for hardware-assisted memory management in order to achieve wire-speed operation at gigabit-per-second rates. Furthermore, we describe the architecture and performance of a hardware memory manager that fulfills those requirements. Although implemented in reconfigurable technology, this memory manager provides up to 6.2 Gbps of aggregate throughput while handling 32K independent queues.
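
    The hardware manager described above essentially offloads per-packet data-structure maintenance from the NPU's RISC cores. As a rough illustration of the kind of work involved, the C sketch below maintains per-queue linked lists over a shared buffer pool with a free list; the pool size, the one-buffer-per-packet assumption, and all names are illustrative and not taken from the paper.

    /* Minimal sketch of linked-list queue management over a shared buffer
     * pool, the per-packet bookkeeping that a hardware memory manager can
     * offload. Sizes and names are illustrative, not from the paper. */
    #include <stdint.h>

    #define NUM_BUFFERS 65536u    /* shared packet-buffer pool (assumed size)   */
    #define NUM_QUEUES  32768u    /* 32K independent queues, as in the paper    */
    #define NIL         0xFFFFFFFFu

    static uint32_t next_buf[NUM_BUFFERS];              /* per-buffer link      */
    static uint32_t q_head[NUM_QUEUES], q_tail[NUM_QUEUES];
    static uint32_t free_head;                          /* head of free list    */

    void init_pool(void) {
        for (uint32_t i = 0; i < NUM_BUFFERS - 1; i++) next_buf[i] = i + 1;
        next_buf[NUM_BUFFERS - 1] = NIL;
        free_head = 0;
        for (uint32_t q = 0; q < NUM_QUEUES; q++) q_head[q] = q_tail[q] = NIL;
    }

    /* Allocate a buffer from the free list and append it to queue q.
     * Returns the buffer index, or NIL if the pool is exhausted. */
    uint32_t enqueue(uint32_t q) {
        uint32_t buf = free_head;
        if (buf == NIL) return NIL;
        free_head = next_buf[buf];
        next_buf[buf] = NIL;
        if (q_tail[q] == NIL) q_head[q] = buf;          /* queue was empty      */
        else                  next_buf[q_tail[q]] = buf;
        q_tail[q] = buf;
        return buf;
    }

    /* Detach the head buffer of queue q and return it to the free list. */
    uint32_t dequeue(uint32_t q) {
        uint32_t buf = q_head[q];
        if (buf == NIL) return NIL;
        q_head[q] = next_buf[buf];
        if (q_head[q] == NIL) q_tail[q] = NIL;          /* queue became empty   */
        next_buf[buf] = free_head;                      /* recycle the buffer   */
        free_head = buf;
        return buf;
    }

    Every enqueue and dequeue is a handful of dependent memory accesses per packet, which is exactly the sequential pointer-chasing that limits software implementations at gigabit rates and motivates a dedicated hardware engine.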

    BrainFrame: A node-level heterogeneous accelerator platform for neuron simulations

    Objective. The advent of high-performance computing (HPC) in recent years has led to its increasing use in brain studies through computational models. The scale and complexity of such models are constantly increasing, leading to challenging computational requirements. Even though modern HPC platforms can often deal with such challenges, the vast diversity of the modeling field does not permit a single homogeneous acceleration platform to effectively address the complete array of modeling requirements. Approach. In this paper we propose and build BrainFrame, a heterogeneous acceleration platform that incorporates three distinct acceleration technologies, an Intel Xeon-Phi CPU

    A Study of Reconfigurable Accelerators for Cloud Computing

    Due to the exponential increase in network traffic in data centers, thousands of servers interconnected with high-bandwidth switches are required. Field Programmable Gate Arrays (FPGAs) in a cloud ecosystem offer high performance and energy efficiency, making them active resources that are easy to program and reconfigure. This paper looks at FPGAs as reconfigurable accelerators for cloud computing and presents the main hardware accelerators that have been proposed for widely used cloud computing applications such as MapReduce, Spark, Memcached, and databases

    Reconfigurable network processing platforms

    This dissertation presents our investigation of how to efficiently exploit reconfigurable hardware to design flexible, high-performance, and power-efficient network devices capable of adapting to the varying processing requirements of network applications and traffic. The proposed reconfigurable network processing platform mainly targets access, edge, and enterprise devices. These devices have to sustain less bandwidth than those used in core networks; however, their per-packet processing requirements are much higher (e.g., payload processing). Furthermore, devices in these networks have to be flexible in order to support emerging network applications. A promising technology for implementing these devices is Field-Programmable Gate Arrays (FPGAs). FPGAs combine flexibility (through reconfiguration) and performance (through their inherent hardware nature, which can exploit parallelism); therefore, they can efficiently address the requirements of edge and access network devices. A reconfigurable network processing platform is presented that includes reconfigurable hardware accelerators, a reconfigurable queue scheduler, and a configurable transactional memory controller. Furthermore, the performance and constraints of the platform are formulated as an integer optimization problem, and an integrated design flow is presented for the platform. Both static and dynamic reconfiguration are explored in this dissertation. Static reconfiguration is utilized to address the different processing requirements of network applications, while dynamic reconfiguration is utilized to adapt to network traffic fluctuations. Two representative devices were implemented and evaluated on the proposed platform: a multi-service edge router and a content-based (web) switch. In the former device, dynamic reconfiguration is utilized to deal with network traffic fluctuations; the device monitors the traffic and adapts to fluctuations while taking the reconfiguration overhead into account. In the latter device, a reconfigurable architecture for a content-based switch is utilized and compared to a mainstream network processor in terms of performance and power. The device accommodates several co-processors that can be interchanged to perform a specific type of switching (e.g., URL-based or cookie-based switching). Moreover, the exploitation of reconfigurable logic is investigated for queue scheduling in network devices. A reconfigurable queue scheduler is presented that adapts to the network traffic requirements (number of active queues) and can be used both in edge routers and in web switches. Finally, configurable transactional memories are proposed, which can be used to efficiently deploy multi-processing platforms for network processing applications. The proposed configurable transactional memory controller can be configured based on application and device features (e.g., number of processors), offers an easier programming framework for multi-processor reconfigurable platforms, and provides increased performance compared to traditional locking schemes. The results of the research presented in this dissertation show that FPGAs can be an efficient alternative to network processors and can be used not only for the lower network layers but also as a complete platform for emerging network processing applications.
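
    As a rough illustration of the traffic-driven adaptation described above (not the dissertation's actual algorithm), the C sketch below decides whether a partial reconfiguration is worthwhile by weighing the expected throughput gain of a better-matched configuration against the service lost during the reconfiguration dead time; the 10% shift threshold and all parameter names are assumptions.

    /* Illustrative policy sketch: reconfigure the accelerator mix only when
     * the expected benefit outweighs the downtime cost. All thresholds and
     * parameter names are assumed, not taken from the dissertation. */
    #include <stdbool.h>

    bool should_reconfigure(double observed_payload_share,   /* traffic share needing payload processing   */
                            double current_payload_share,    /* share the loaded configuration is tuned for */
                            double throughput_gain_gbps,     /* gain of the better-matched configuration    */
                            double current_throughput_gbps,  /* throughput lost while reconfiguring         */
                            double reconfig_time_s,          /* partial-reconfiguration dead time           */
                            double expected_hold_s)          /* predicted duration of the new traffic mix   */
    {
        double shift = observed_payload_share - current_payload_share;
        if (shift < 0) shift = -shift;
        if (shift < 0.10)                 /* ignore small fluctuations (assumed threshold) */
            return false;

        double benefit = throughput_gain_gbps   * expected_hold_s;   /* Gbit gained over the hold period   */
        double penalty = current_throughput_gbps * reconfig_time_s;  /* Gbit lost during reconfiguration   */
        return benefit > penalty;
    }

    The point of such a policy is that the reconfiguration overhead is only amortized if the new traffic mix persists long enough, which is why the monitoring step has to account for it explicitly.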

    Feasibility Study of a Self-healing Hardware Platform


    A new SOI monolithic capacitive sensor for absolute and differential pressure measurements

    In the present work, a new monolithic capacitive pressure sensor is introduced. The sensor is manufactured according to a custom, 15-step SOI process. The process offers great flexibility in sensor design. Absolute or differential pressure sensing is possible simply by arranging the proper sensor packaging. Measurement sensitivity and span are easily regulated over a wide range of values by setting a single design parameter. Care is taken to avoid p-n junction formation in order to improve the sensor's robustness against temperature increases and to allow high-temperature post-processing without doping-profile degradation. The presented design allows the implementation of an ordinary p-well CMOS post-process. A sensitivity of 2 mV/kPa, within a span of 180 kPa (2%) and a bandwidth of 25 kHz, is achievable by means of a CMOS switched-capacitor ASIC that is developed and presented here. Significant care has been taken to make the ASIC performance depend as little as possible on CMOS process and transistor-parameter variations, which increase due to the poor uniformity of the transistor substrate. Moreover, a state-of-the-art design makes the circuit robust against parasitic capacitances connected in parallel with the sensing capacitors. Implementation of additional analog signal processing improves the aforementioned accuracy to a significant extent. The sensor's main applications include medical devices, such as sphygmomanometers and respirators, that require high reliability and biocompatibility.
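
    As a quick sanity check of the figures quoted above, the C sketch below computes the full-scale output swing implied by a 2 mV/kPa sensitivity over a 180 kPa span (360 mV) and converts an output voltage back to pressure. The ideal linear transfer function is an assumed first-order model, not the sensor's characterized response.

    /* Back-of-the-envelope check of the quoted sensitivity and span.
     * The linear response below is an assumed first-order model. */
    #include <stdio.h>

    #define SENSITIVITY_V_PER_KPA 0.002   /* 2 mV/kPa          */
    #define SPAN_KPA              180.0   /* rated span        */

    /* Convert ASIC output voltage (relative to the zero-pressure output)
     * back to pressure, assuming an ideal linear response. */
    static double pressure_from_vout(double vout_v) {
        return vout_v / SENSITIVITY_V_PER_KPA;
    }

    int main(void) {
        double full_scale_v = SENSITIVITY_V_PER_KPA * SPAN_KPA;     /* 0.36 V */
        printf("Full-scale output swing: %.0f mV\n", full_scale_v * 1e3);
        printf("Vout = 100 mV  ->  %.0f kPa\n", pressure_from_vout(0.100));
        return 0;
    }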