    Hybrid switching : converging packet and TDM flows in a single platform

    Optical fibers have brought fast and reliable data transmission to today’s network. The immense fiber build-out over the last few years has generated a wide array of new access technologies, transport and network protocols, and next-generation services in the Local Area Network (LAN), Metropolitan Area Network (MAN), and Wide Area Network (WAN). All of these technologies, protocols, and services were introduced to address particular telecommunication needs. To remain competitive in the market, service providers must offer most of these services while maintaining their own profitability. However, offering a large variety of equipment, protocols, and services poses a significant challenge for carriers because it requires heavy investment in different technology platforms, extensive staff training, and the management of all of these networks. In today’s network, service providers use SONET (Synchronous Optical NETwork) as the basic TDM (Time Division Multiplexing) transport network. SONET was primarily designed to carry voice traffic from telephone networks. However, with the explosion of Internet traffic, the same SONET-based TDM network is being optimized to support the increasing demand for packet-based Internet services (data, voice, video, teleconferencing, etc.) at access networks and LANs. Service providers therefore need to support their Internet Protocol (IP) infrastructure as well as their legacy telephony infrastructure. Supporting both TDM and packet services today requires multilayer operations, which are complex, expensive, and difficult to manage.

    A hybrid switch is a novel architecture that combines packet (IP) and TDM switching in a unified access platform and provides seamless integration of access networks and LANs with MAN/WAN networks. The ability to fully integrate these two capabilities in a single chassis allows service providers to deploy a more cost-effective and flexible architecture that can support a variety of different services. This thesis develops a hybrid switch capable of offering bundled services for TDM switching and packet routing. This is done by dividing the switch’s bandwidth into VT1.5 (Virtual Tributary 1.5) channels and providing SONET-based signaling for routing the data and controlling the switch’s resources. The switch is a TDM-based architecture that allows each port to be independently configured for any mixture of packet and TDM traffic, including 100% packet and 100% TDM. It allows service providers to simplify their edge networks by consolidating the number of separate boxes needed to provide fast and reliable access. It also reduces the number of network management systems needed, and decreases the resources required to install, provision, and maintain the network because of its ability to “collapse” two network layers into one platform.

    The scope of this thesis includes the system architecture, logic implementation, verification testing, and performance evaluation of the hybrid switch. The architecture consists of ingress/egress ports, an arbiter, and a crossbar. Data from the ingress ports is carried to the egress ports via VT1.5 channels, which are switched at the crosspoints of the crossbar. The crossbar setup and the channel assignments at the ingress ports are handled by the arbiter. The design was tested by simulation and the hardware cost was estimated. The performance results show that the switch is non-blocking, provides differentiated service, and has an overall effective throughput of 80%. This result is a significant step towards the goal of building a switch that supports multiple protocols and provides different network capabilities in one platform. The long-term goal of this project is to develop a prototype of the hybrid switch with broadband capability.
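    To make the channel-assignment idea concrete, the sketch below shows one way an arbiter could hand out VT1.5 channels and set crossbar crosspoints. It is not taken from the thesis: the class and field names, the 28-channel-per-port figure (the VT1.5 count of one STS-1 payload), and the simplification of reusing the same channel number on both sides of a crosspoint are illustrative assumptions.

```python
# Minimal sketch of the arbiter/crossbar idea described above. The names,
# the 28-channel-per-port figure, and the policy of using the same channel
# number on both sides of a crosspoint are illustrative assumptions,
# not the thesis design.
from dataclasses import dataclass, field

VT15_PER_PORT = 28  # assumed VT1.5 channels available on each port

@dataclass
class ChannelRequest:
    ingress: int   # requesting ingress port
    egress: int    # destination egress port
    is_tdm: bool   # TDM circuit or packet flow; both draw from the same pool here

@dataclass
class HybridArbiter:
    num_ports: int
    free: dict = field(default_factory=dict)      # egress port -> free VT1.5 channel numbers
    crossbar: dict = field(default_factory=dict)  # (ingress, channel) -> (egress, channel)

    def __post_init__(self):
        self.free = {p: list(range(VT15_PER_PORT)) for p in range(self.num_ports)}

    def grant(self, req: ChannelRequest):
        """Assign one VT1.5 channel on the egress port and set the crosspoint."""
        if not self.free[req.egress]:
            return None                       # destination port fully booked
        vt = self.free[req.egress].pop(0)     # lowest-numbered free channel
        self.crossbar[(req.ingress, vt)] = (req.egress, vt)
        return vt

    def release(self, egress: int, vt: int):
        """Return a VT1.5 channel to the free pool when a flow ends."""
        self.free[egress].append(vt)
        self.crossbar = {k: v for k, v in self.crossbar.items() if v != (egress, vt)}

# Example: a 4-port switch carrying a mix of TDM and packet flows
arb = HybridArbiter(num_ports=4)
print(arb.grant(ChannelRequest(ingress=0, egress=2, is_tdm=True)))   # -> 0
print(arb.grant(ChannelRequest(ingress=1, egress=2, is_tdm=False)))  # -> 1
```

    In this simplified model a request is refused only when the destination port has no free VT1.5 channels; contention inside the fabric itself is not modeled.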

    Scalable network virtualization using FPGAs


    A processor-sharing scheduling strategy for NFV nodes

    The introduction of the two paradigms SDN and NFV to "softwarize" the current Internet is making management and resource allocation two key challenges in the evolution towards the Future Internet. In this context, this paper proposes Network-Aware Round Robin (NARR), a processor-sharing strategy, to reduce delays in traversing SDN/NFV nodes. The application of NARR alleviates the job of the Orchestrator by automatically working at the intra-node level, dynamically assigning processor slices to the virtual network functions (VNFs) according to the state of the queues associated with the output links of the network interface cards (NICs). An extensive simulation set is presented to show the improvements achieved with respect to two other processor-sharing strategies chosen as references.
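    A rough illustration of the kind of queue-driven processor sharing described above is sketched below. The proportional weighting rule, the function name, and the slice granularity are assumptions for illustration; the actual NARR assignment rule is defined in the paper.

```python
# Illustrative only: shares processor time among VNFs in proportion to the
# occupancy of the output-link queues they feed, which is the general idea
# behind a network-aware round robin. The proportional rule and names below
# are assumptions; the exact NARR weighting is defined in the paper.

def assign_processor_slices(queue_occupancy, total_slices):
    """queue_occupancy: VNF name -> packets waiting on its output link.
    Returns VNF name -> processor slices granted in the next round."""
    backlog = sum(queue_occupancy.values())
    if backlog == 0:
        # nothing queued anywhere: fall back to plain round robin (equal shares)
        equal = total_slices // max(len(queue_occupancy), 1)
        return {vnf: equal for vnf in queue_occupancy}
    # proportional sharing; rounding may leave a slice or two unassigned
    return {vnf: round(total_slices * occ / backlog)
            for vnf, occ in queue_occupancy.items()}

# Example: the firewall feeds the most congested output link, so it receives
# the largest share of the processor in the next scheduling round.
print(assign_processor_slices({"firewall": 120, "nat": 30, "dpi": 50}, total_slices=100))
```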

    Arbiter, September 20


    Neural network computing using on-chip accelerators

    The use of neural networks, machine learning, or artificial intelligence, in its broadest and most controversial sense, has been a tumultuous journey involving three distinct hype cycles and a history dating back to the 1960s. Resurgent, enthusiastic interest in machine learning and its applications bolsters the case for machine learning as a fundamental computational kernel. Furthermore, researchers have demonstrated that machine learning can be utilized as an auxiliary component of applications to enhance or enable new types of computation, such as approximate computing or automatic parallelization. In our view, machine learning becomes not the underlying application, but a ubiquitous component of applications. This view necessitates a different approach to the deployment of machine learning computation, one that spans not only the hardware design of accelerator architectures, but also the user and supervisor software that enables the safe, simultaneous use of machine learning accelerator resources. In this dissertation, we propose a multi-transaction model of neural network computation to meet the needs of future machine learning applications. We demonstrate that this model, which encompasses a decoupled backend accelerator for inference and learning together with hardware and software for managing neural network transactions, can be achieved with low overhead and integrated with a modern RISC-V microprocessor. Our extensions span user and supervisor software and data structures and, coupled with our hardware, enable multiple transactions from different address spaces to execute simultaneously, yet safely. Together, our system demonstrates the utility of a multi-transaction model in improving energy efficiency and overall accelerator throughput for machine learning applications.
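    The sketch below shows one way the multi-transaction idea could look in software: each neural-network request becomes a descriptor tagged with the address space that issued it, and a supervisor-level queue multiplexes descriptors onto the shared accelerator. All names, fields, and the FIFO dispatch policy are illustrative assumptions rather than the dissertation's actual interfaces.

```python
# Rough sketch of the multi-transaction idea: each request is wrapped in a
# descriptor tagged with the address space (ASID) that issued it, and a
# supervisor-level queue multiplexes descriptors onto the shared accelerator.
# All names, fields, and the FIFO dispatch policy are illustrative assumptions.
from collections import deque
from dataclasses import dataclass

@dataclass
class NNTransaction:
    asid: int          # address-space ID of the issuing process
    model_id: int      # which neural-network configuration to run
    input_paddr: int   # physical address of the input buffer
    output_paddr: int  # physical address of the result buffer
    is_learning: bool  # learning transaction, otherwise inference

class SupervisorQueue:
    """Supervisor-managed queue that multiplexes transactions onto the accelerator."""
    def __init__(self):
        self.pending = deque()

    def submit(self, txn: NNTransaction):
        self.pending.append(txn)

    def dispatch(self):
        """Hand the oldest pending transaction to the accelerator backend.
        Every descriptor carries its own ASID and buffer addresses, so
        transactions from different address spaces can be in flight together
        without one process reading another's memory."""
        return self.pending.popleft() if self.pending else None

q = SupervisorQueue()
q.submit(NNTransaction(asid=3, model_id=1, input_paddr=0x8000_0000,
                       output_paddr=0x8000_4000, is_learning=False))
print(q.dispatch())
```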

    Self-timed field programmable gate array architectures


    Analysis of Microcontroller Embedded SRAMs for Applications in Physical Unclonable Functions

    The growth of the Internet of Things (IoT) market has motivated widespread proliferation of microcontroller (MCU) based embedded systems, which are suitable due to their abundance, low cost, low power consumption, and small footprint. Their memory architecture typically consists of volatile memory, such as block(s) of SRAM, and non-volatile memory (NVM) for code storage. Authentication and encryption safeguard these endpoints within an IoT framework, which requires storage of a secure key. Keys stored within integrated circuits (ICs) are susceptible to attack via reverse engineering of the NVM. Newer approaches use Physical Unclonable Functions (PUFs), which produce unique identifiers that take advantage of device-level randomness induced by manufacturing process variation in silicon. The unclonable property of PUFs is demonstrated with an analytical model. The unpredictable yet repeatable start-up values (SUVs) of SRAM bit-cells form the basis of an SRAM PUF. Performance measures, such as reliability, randomness, symmetry, and stability, dictate the quality of a PUF. Two commercial off-the-shelf (COTS) ARM Cortex-based MCU products, the STM32F429ZIT6U and ATSAMR21G18A, underwent automated and manual power-cycling experiments that examined their embedded SRAM SUVs. The characterization framework acquires data via debug software and a developed C program, performs power cycling using a USB-controlled relay, and carries out post-processing in Python. Applications of PUFs include cryptographic key generation, device identification, and true random number hardware generation. Statistical results and a comparative analysis are presented. Of the total bit-cell count of the embedded SRAM in the STM and ATSAM MCUs, 36.86% and 28.86%, respectively, are classified as non- or partially-skewed across N = 10,000 samples. The Atmel MCU outperforms the STM MCU in reliability by 1.42%, randomness by 0.65%, and stability by 8.00%, with a 4.74% SUV bias towards logic '1'. The maximum error per 128-bit data item is 22 and 38 bits for MCU #1 and MCU #2, respectively. The STM MCU exhibits column-wise correlation, illustrated in a heatmap, whereas the Atmel MCU shows a random signature. The embedded SRAM in the Atmel MCU outperforms the STM MCU's and is thereby considered the more suitable PUF.
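    As an illustration of the kind of post-processing such a characterization framework performs, the sketch below estimates per-cell bias from a matrix of start-up values, classifies skewed cells, and computes simple bias and reliability figures. The 0.9 skew threshold, the majority-vote reference response, and the metric definitions are assumptions, not the thesis's exact criteria, and the data is synthetic.

```python
# Illustrative post-processing of SRAM start-up values (SUVs): estimate each
# bit-cell's bias, classify skewed cells, and compute simple bias/reliability
# figures. The 0.9 skew threshold, the majority-vote reference response, and
# the metric definitions are assumptions; the data below is synthetic.
import numpy as np

def analyze_suvs(suvs, skew_threshold=0.9):
    """suvs: array of shape (num_power_cycles, num_bits) holding 0/1 start-up values."""
    p_one = suvs.mean(axis=0)                 # per-cell probability of powering up as '1'
    skewed = (p_one >= skew_threshold) | (p_one <= 1 - skew_threshold)
    golden = (p_one >= 0.5).astype(np.uint8)  # majority-vote reference response
    return {
        "bias_toward_1": float(golden.mean()),
        "pct_non_or_partially_skewed": float(100 * (~skewed).mean()),
        "reliability": float((suvs == golden).mean()),  # reads matching the reference
    }

# Synthetic stand-in for 10,000 power cycles of a small SRAM block
rng = np.random.default_rng(0)
suvs = (rng.random((10_000, 1024)) < rng.random(1024)).astype(np.uint8)
print(analyze_suvs(suvs))
```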