    Efficient Simulation of Structural Faults for the Reliability Evaluation at System-Level

    In recent technology nodes, reliability is considered part of the standard design flow at all levels of embedded system design. While techniques that use only low-level models at gate and register-transfer level offer high accuracy, they are too inefficient to consider the overall application of the embedded system. Multi-level models with high abstraction are essential to efficiently evaluate the impact of physical defects on the system. This paper provides a methodology that combines state-of-the-art techniques for efficient fault simulation of structural faults with transaction-level modeling. This makes it possible to accurately evaluate the impact of the faults on the entire hardware/software system. A case study of a system consisting of hardware and software for image compression and data encryption is presented, and the method is compared to a standard gate/RT mixed-level approach.
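
    The abstract does not include the authors' tool flow, so the following is only a rough, hypothetical sketch of the mixed-level idea: one block (an 8-bit adder) is simulated at bit level with an injected stuck-at fault, while the rest of the system runs as a fast functional model. All names and the fault model are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical mixed-level fault simulation sketch (not the paper's tool).
def adder_bit_level(a, b, stuck_at_bit=None, stuck_value=0):
    """Bit-level 8-bit ripple-carry adder with an optional stuck-at fault."""
    carry, result = 0, 0
    for i in range(8):
        x, y = (a >> i) & 1, (b >> i) & 1
        s = x ^ y ^ carry
        carry = (x & y) | (carry & (x ^ y))
        if i == stuck_at_bit:        # fault injection point (illustrative)
            s = stuck_value          # force sum bit i to the stuck value
        result |= s << i
    return result

def system_tlm(pixels, fault=(None, 0)):
    """Fast functional model of the system; only the adder is low-level."""
    acc = 0
    for p in pixels:
        acc = adder_bit_level(acc, p, *fault)
    return acc

golden = system_tlm([10, 20, 30, 40])
for bit in range(8):
    faulty = system_tlm([10, 20, 30, 40], fault=(bit, 0))
    print(f"sum bit {bit} stuck-at-0: {'detected' if faulty != golden else 'masked'}")
```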

    From FPGA to ASIC: A RISC-V processor experience

    This work documents a design flow using these tools on the Lagarto RISC-V processor, together with the RTL design considerations that must be taken into account to move from an FPGA design to an ASIC design.
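
    The abstract does not enumerate those considerations; as one hedged example of the kind of RTL issue such a flow must catch, the hypothetical checker below flags Verilog constructs that are benign on an FPGA but have no direct ASIC equivalent. The pattern list is an illustrative assumption, not taken from the paper.

```python
# Hypothetical FPGA-to-ASIC portability linter: registers with initial
# values and vendor RAM primitives are free on FPGAs (loaded from the
# bitstream) but must become explicit resets and foundry memory macros
# in an ASIC flow. Patterns are illustrative, not exhaustive.
import re

ASIC_PORTABILITY_PATTERNS = {
    r"\binitial\b":    "initial block: replace with an explicit reset",
    r"=\s*\d+'[bhd]":  "register initial value: loaded from the FPGA bitstream, absent in ASIC",
    r"RAMB|altsyncram": "vendor RAM primitive: swap for a foundry memory macro",
}

def check_rtl(verilog_text: str):
    for lineno, line in enumerate(verilog_text.splitlines(), 1):
        for pattern, advice in ASIC_PORTABILITY_PATTERNS.items():
            if re.search(pattern, line):
                print(f"line {lineno}: {advice}")

check_rtl("reg [7:0] state = 8'h00;\ninitial state = 0;\nRAMB36E1 u_ram ();")
```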

    FPGA ARCHITECTURE AND VERIFICATION OF BUILT IN SELF-TEST (BIST) FOR 32-BIT ADDER/SUBTRACTER USING DE0-NANO FPGA AND ANALOG DISCOVERY 2 HARDWARE

    The integrated circuit (IC) is an integral part of everyday modern technology, and its application is very attractive to hardware and software design engineers because of its versatility, integration, power consumption, cost, and board area reduction. ICs are available in various types such as the Field Programmable Gate Array (FPGA), Application Specific Integrated Circuit (ASIC), System on Chip (SoC) architecture, Digital Signal Processor (DSP), microcontroller (μC), and many more. With technology demand focused on faster, lower-power, more efficient IC applications, design engineers face tremendous challenges in developing and testing integrated circuits that guarantee functionality, high fault coverage, and reliability, as transistor technology shrinks to the point where manufacturing defects affect yield, which in turn increases the cost of the part. The competitive IC market is pressuring IC manufacturers to develop and market ICs with a relatively quick turnaround, which in turn requires design and verification engineers to develop integrated self-test structures that ensure a fault-free, high-quality product is delivered to the market. 70-80% of IC design effort is spent on verification and testing to ensure high quality and reliability for the end user. To test complex and sophisticated IC designs, verification engineers must produce laborious and costly test fixtures, which affect the cost of the part on the competitive market. To avoid passing the cost of yield loss and test time on to the end user, and to keep up with the competitive market, many IC design engineers are moving away from the complex external test fixture approach and are focusing on integrating Built-in Self-Test (BIST) or Design for Test (DFT) techniques into ICs, which reduces time to market while still guaranteeing high coverage for the product. The BIST architecture, as well as the application of the IC, must be understood before development. This paper elaborates the architecture of the FPGA, followed by several BIST techniques and applications of those BISTs relative to FPGA, SoC, analog-to-digital (ADC), or digital-to-analog (DAC) converters integrated on ICs. The paper concludes with verification of BIST for a 32-bit adder/subtracter designed in Quartus II software, using the Analog Discovery 2 module as stimulus and the DE0-NANO FPGA board for verification.
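
    As a hedged illustration of the BIST principle discussed above (not the thesis' actual Quartus II design), the sketch below pairs an LFSR pseudo-random pattern generator with a MISR-style signature register around a Python model of a 32-bit adder/subtracter. Polynomial taps, seed and pattern count are arbitrary illustrative choices.

```python
# Classic BIST structure in miniature: LFSR generates patterns, the DUT
# responses are compacted into a signature, and the signature is compared
# against a fault-free golden run. All parameters are illustrative.
def lfsr_stream(seed, taps=(31, 21, 1, 0), width=32):
    """Fibonacci LFSR: yields pseudo-random test patterns forever."""
    state, mask = seed, (1 << width) - 1
    while True:
        yield state
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & mask

def adder_subtracter(a, b, sub):
    """Reference model of the 32-bit adder/subtracter under test."""
    return (a - b if sub else a + b) & 0xFFFFFFFF

def run_bist(n_patterns=1000, fault=0):
    """Drive the DUT with LFSR patterns; compact responses MISR-style."""
    gen, signature = lfsr_stream(seed=0xACE1), 0
    for i in range(n_patterns):
        a, b = next(gen), next(gen)
        result = adder_subtracter(a, b, sub=i & 1) ^ fault  # optional injected fault
        # rotate-left-by-1 then fold in the response
        signature = ((signature << 1) ^ (signature >> 31) ^ result) & 0xFFFFFFFF
    return signature

golden = run_bist()
print("pass" if run_bist() == golden else "fail")            # fault-free run
print("pass" if run_bist(fault=1 << 5) == golden else "fail")  # stuck bit changes signature
```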

    Platform-based design, test and fast verification flow for mixed-signal systems on chip

    This research provides methodologies to enhance the design phase, from architectural space exploration and system study to verification of the whole mixed-signal system. At the beginning of the work, some innovative digital IPs were designed to provide efficient signal conditioning for sensor systems on-chip; these have been included in commercial products. After this phase, the main focus was the creation of a re-usable and versatile test of the device after tape-out, which is close to becoming one of the major cost factors for IC companies, strongly linking it to the models' test-benches to avoid re-design phases and multi-environment scenarios, and producing a very effective approach to a single, fast and reliable multi-level verification environment. These works generated several publications in the scientific literature. The compound scenario concerning the development of sensor systems is presented in Chapter 1, together with an overview of the related market with a particular focus on the latest MEMS and MOEMS technology devices and their applications in various segments. Chapter 2 introduces the state of the art for sensor interfaces: the generic sensor interface concept (based on sharing the same electronics among similar applications, achieving cost savings at the expense of area and performance) versus the Platform Based Design methodology, which overcomes the drawbacks of the classic solution by keeping generality at the highest design layers and customizing the platform for a target sensor, achieving optimized performance. An evolution of Platform Based Design, realized by implementing the ISIF (Intelligent Sensor InterFace) platform in silicon, is then presented. ISIF is a highly configurable mixed-signal chip which allows designers to perform an effective design space exploration and to evaluate system performance directly on silicon, avoiding the critical and time-consuming analysis required by the standard platform-based approach. Chapter 3 describes the design of a smart sensor interface for conditioning next-generation MOEMS. The adoption of a new high-performance, highly integrated technology allows us to integrate not only a versatile platform but also a powerful ARM processor and various IPs, providing the possibility of using the platform not only for conditioning but also as a processing unit for the application. This chapter describes the various blocks, with particular emphasis on the IPs developed to grant the highest degree of flexibility with minimum area occupation. The architectural space evaluation and the application prototyping with ISIF have enabled an effective, rapid and low-risk development of a new high-performance platform, achieving a flexible sensor system for MEMS and MOEMS monitoring and conditioning. The platform has been designed to cover very challenging test-benches, such as a laser-based projector device. In this way the platform can effectively handle not only the sensor but also the whole system built around it, reducing the need for further electronics and resulting in an efficient test bench for the algorithms developed to drive the system. The high costs in ASIC development are mainly related to re-design phases caused by missing complete top-level tests, since the analog and digital design flows are verified separately. Starting from these considerations, the last chapter presents a complete test environment for complex mixed-signal chips. A semi-automatic VHDL-AMS flow to provide a fully matching top level is described, and an evolution for fast self-checking test development for both model and real-chip verification is proposed. By the introduction of a Python interface, the designer can easily perform interactive tests to cover verification of all features (e.g. calibration and trimming) during the design phase, and check them all with the same environment on the real chip after tape-out. This strategy has been tested on a 3D gyroscope for consumer applications, in collaboration with SensorDynamics AG.
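
    As a hedged sketch of the Python-interface idea described above (class, method and register names are hypothetical, not the thesis' actual API), the same self-checking test can target either the model or the taped-out chip through interchangeable backends:

```python
# Hypothetical sketch: one self-checking test, two execution targets.
class ModelBackend:
    """Talks to the simulator (e.g. a VHDL-AMS testbench via files/sockets)."""
    def write_reg(self, addr, value): ...
    def read_reg(self, addr): return 0x5A   # placeholder model response

class ChipBackend:
    """Talks to the taped-out device (e.g. over SPI/JTAG through lab equipment)."""
    def write_reg(self, addr, value): ...
    def read_reg(self, addr): return 0x5A   # placeholder chip response

def trimming_test(dut):
    """Self-checking test: program a trim code, read it back, report pass/fail."""
    TRIM_REG = 0x10                          # hypothetical register address
    dut.write_reg(TRIM_REG, 0x5A)
    ok = dut.read_reg(TRIM_REG) == 0x5A
    print("trimming test:", "PASS" if ok else "FAIL")
    return ok

# The same test runs unchanged in both environments:
trimming_test(ModelBackend())   # during the design phase
trimming_test(ChipBackend())    # after tape-out
```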

    System-on-chip Computing and Interconnection Architectures for Telecommunications and Signal Processing

    This dissertation proposes novel architectures and design techniques targeting SoC building blocks for telecommunications and signal processing applications. Hardware implementation of Low-Density Parity-Check decoders is approached at both the algorithmic and the architecture level. Low-Density Parity-Check codes are a promising coding scheme for future communication standards due to their outstanding error correction performance. This work proposes a methodology for analyzing the effects of finite precision arithmetic on error correction performance and hardware complexity, and the methodology is employed throughout the co-design of the decoder. First, a low-complexity check node based on the P-output decoding principle is designed and characterized on a CMOS standard-cell library. Results demonstrate an implementation loss below 0.2 dB down to a BER of 10^-8 and complexity savings of up to 59% with respect to other works in the recent literature. High-throughput and low-latency issues are addressed with modified single-phase decoding schedules. A new "memory-aware" schedule is proposed, requiring as little as 20% of the memory of traditional two-phase flooding decoding. Additionally, throughput is doubled and logic complexity reduced by 12%. These advantages are traded off against error correction performance, making the solution attractive only for long codes, such as those adopted in the DVB-S2 standard. The "layered decoding" principle is extended to codes not specifically conceived for this technique. The proposed architectures exhibit complexity savings on the order of 40% in both area and power consumption, while the implementation loss is smaller than 0.05 dB. Most modern communication standards employ Orthogonal Frequency Division Multiplexing as part of their physical layer. The core of OFDM is the Fast Fourier Transform and its inverse, in charge of symbol (de)modulation. Requirements on throughput and energy efficiency call for hardware FFT implementation, while the ubiquity of the FFT suggests the design of parametric, re-configurable and re-usable IP hardware macrocells. In this context, this thesis describes an FFT/IFFT core compiler particularly suited for the implementation of OFDM communication systems. The tool employs an accuracy-driven configuration engine which automatically profiles the internal arithmetic and generates a core with minimum operand bit-widths and thus minimum circuit complexity. The engine performs a closed-loop optimization over three different internal arithmetic models (fixed-point, block floating-point and convergent block floating-point) using the numerical accuracy budget given by the user as a reference point. The flexibility and re-usability of the proposed macrocell are illustrated through several case studies which encompass all current state-of-the-art OFDM communication standards (WLAN, WMAN, xDSL, DVB-T/H, DAB and UWB). Implementation results are presented for two deep sub-micron standard-cell libraries (65 and 90 nm) and commercially available FPGA devices. Compared with other FFT core compilers, the proposed environment produces macrocells with lower circuit complexity and the same system-level performance (throughput, transform size and numerical accuracy). The final part of this dissertation focuses on the Network-on-Chip design paradigm, whose goal is building scalable communication infrastructures connecting hundreds of cores. A low-complexity link architecture for mesochronous on-chip communication is discussed. The link enables looser skew constraints in clock tree synthesis, frequency speed-up, power consumption reduction and faster back-end turnaround. The proposed architecture reaches a maximum clock frequency of 1 GHz on a 65 nm low-leakage CMOS standard-cell library. In a complex test case with a full-blown NoC infrastructure, the link overhead is only 3% of chip area and 0.5% of leakage power consumption. Finally, a new methodology, named metacoding, is proposed. Metacoding generates correct-by-construction, technology-independent RTL codebases for NoC building blocks. The RTL coding phase is abstracted and modeled with an Object Oriented framework, integrated within a commercial tool for IP packaging (Synopsys CoreTools suite). Compared with traditional coding styles based on pre-processor directives, metacoding produces 65% smaller codebases and reduces the number of configurations to verify by up to three orders of magnitude.
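
    As a hedged illustration of the accuracy-driven engine described above (the actual core compiler is not shown in the abstract), the sketch below emulates fixed-point arithmetic by quantization, measures the SNR against a double-precision reference, and searches for the smallest bit-width that meets a user-supplied accuracy budget:

```python
# Illustrative accuracy-driven bit-width search; the quantization model
# and the 60 dB budget are invented for the example.
import numpy as np

def quantize(x, bits):
    """Round complex data to 'bits' fractional bits (fixed-point emulation)."""
    scale = 2 ** (bits - 1)
    return (np.round(x.real * scale) + 1j * np.round(x.imag * scale)) / scale

def fixed_point_fft(x, bits):
    """Crude model: quantize the input and the output of a reference FFT."""
    return quantize(np.fft.fft(quantize(x, bits)), bits)

def snr_db(ref, test):
    return 10 * np.log10(np.sum(np.abs(ref) ** 2) / np.sum(np.abs(ref - test) ** 2))

rng = np.random.default_rng(0)
x = (rng.standard_normal(1024) + 1j * rng.standard_normal(1024)) / 4
reference = np.fft.fft(x)

budget_db = 60.0                       # user-supplied accuracy budget
for bits in range(8, 25):              # closed-loop search for the minimum width
    if snr_db(reference, fixed_point_fft(x, bits)) >= budget_db:
        print(f"minimum operand width meeting {budget_db} dB budget: {bits} bits")
        break
```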

    Co-simulation techniques based on virtual platforms for SoC design and verification in power electronics applications

    Investment in the energy sector has increased considerably over recent decades. Numerous companies are currently developing equipment such as power converters and electrical machines with state-of-the-art control systems. The current trend is to use System-on-Chips and Field Programmable Gate Arrays to implement the entire control system. These devices facilitate the use of more complex and efficient control algorithms, improving equipment efficiency and enabling the integration of renewable systems into the electrical grid. However, the complexity of control systems has also grown considerably, and with it the difficulty of their verification. Hardware-in-the-loop (HIL) systems have emerged as a solution for the non-destructive verification of energy equipment, avoiding accidents and costly test-bench trials. HIL systems simulate the behaviour of the power plant and its interface in real time so that tests with the control board can be run in a safe environment. This thesis focuses on improving the verification process for control systems in power electronics applications. Its overall contribution is an alternative to the use of HIL for verifying the control board's hardware/software. The alternative is based on the Software-in-the-loop (SIL) technique and seeks to overcome the limitations found in SIL to date. To improve SIL, a software tool called COSIL has been developed which allows co-simulation of the final implementation and integration of the control system, whether software (CPU), hardware (FPGA), or a mix of software and hardware, together with its interaction with the power plant. The platform can work at multiple abstraction levels and includes support for mixed co-simulation in different languages such as C and VHDL. Throughout the thesis, emphasis is placed on mitigating one of SIL's main limitations, its low simulation speed. Several solutions are proposed, such as the use of software emulators, different abstraction levels for software and hardware, and local clocks in the FPGA modules. In particular, an external synchronization mechanism is contributed for the QEMU software emulator, enabling its multi-core emulation. This contribution enables the use of QEMU in virtual co-simulation platforms such as COSIL. The whole COSIL platform, including the use of QEMU, has been analyzed across different types of applications and in a real industrial project. Its use was critical in developing and verifying the hardware and software of the control system of a 400 kVA converter.
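
    As a hedged illustration of the SIL principle (this is not COSIL code; the plant, controller and all parameters are invented for the example), the loop below runs a discrete PI controller in lock step with a simple plant model, entirely in software and with no real hardware involved:

```python
# Software-in-the-loop toy: a PI current controller regulates a
# first-order R-L plant model in a time-stepped co-simulation loop.
DT = 1e-5                 # simulation step: 10 us
R, L = 0.5, 1e-3          # plant: series resistance and inductance
KP, KI = 2.0, 500.0       # PI gains (hypothetical tuning)

def plant_step(i, v):
    """One Euler step of di/dt = (v - R*i) / L."""
    return i + DT * (v - R * i) / L

def make_controller(setpoint):
    integ = 0.0
    def step(i_meas):
        nonlocal integ
        err = setpoint - i_meas
        integ += err * DT
        return KP * err + KI * integ   # commanded voltage
    return step

controller = make_controller(setpoint=10.0)   # regulate current to 10 A
current = 0.0
for n in range(20000):                        # 200 ms of simulated time
    voltage = controller(current)             # control-software side (SIL)
    current = plant_step(current, voltage)    # co-simulated plant side
print(f"steady-state current: {current:.3f} A")
```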

    Energy efficient mining on a quantum-enabled blockchain using light

    We outline a quantum-enabled blockchain architecture based on a consortium of quantum servers. The network is hybridised, utilising digital systems for sharing and processing classical information combined with a fibre-optic infrastructure and quantum devices for transmitting and processing quantum information. We deliver an energy-efficient interactive mining protocol enacted between clients and servers which uses quantum information encoded in light and removes the need for trust in network infrastructure. Instead, clients on the network need only trust the transparent network code and that their devices adhere to the rules of quantum physics. To demonstrate the energy efficiency of the mining protocol, we elaborate upon the results of two previous experiments (one performed over 1 km of optical fibre) as applied to this work. Finally, we address some key vulnerabilities, explore open questions, and observe forward-compatibility with the quantum internet and quantum computing technologies.