
    Event-Driven Simulation Methodology for Analog/Mixed-Signal Systems

    Doctoral dissertation, Seoul National University, Department of Electrical and Computer Engineering, August 2015. Advisor: Jaeha Kim. Recent systems-on-chip (SoCs) are composed of tightly coupled analog and digital components. The resulting mixed-signal systems call for efficient system-level behavioral simulators that enable fast and systematic verification. Since system-level verification relies heavily on digital verification tools, it is desirable to build the mixed-signal simulator on top of a digital simulator. However, the existing solutions in digital simulators suffer from a trade-off between simulation speed and accuracy. This work breaks down that trade-off and realizes fast and accurate analog/mixed-signal behavioral simulation in SystemVerilog on a digital simulator. The main difference of the proposed methodology from existing ones is its way of representing continuous-time signals: a clock signal expresses accurate timing information by carrying an additional real-valued time offset, and an analog signal represents its continuous-time waveform in functional form by carrying a set of coefficients. With these signal representations, the proposed method accurately simulates mixed-signal behaviors independently of the simulator's time-step and achieves a purely event-driven simulation without any numerical iteration. The speed and accuracy of the proposed methodology are examined on various types of analog/mixed-signal systems. First, timing-sensitive circuits (a phase-locked loop and a clock-and-data-recovery loop) and linear analog circuits (a channel and linear equalizers) are simulated in a high-speed I/O interface example. Second, switched-linear behavior simulation is demonstrated on switching power supplies, namely a boost converter and a switched-capacitor converter. Additionally, the proposed method is applied to weakly nonlinear behaviors modeled with a Volterra series, for an RF power amplifier and a high-speed I/O linear equalizer. Furthermore, the nonlinear behavior simulation is extended to three different types of injection-locked oscillators exhibiting time-varying nonlinear behaviors. The experimental results show that the proposed simulation methodology achieves speed-ups of tens to hundreds of times while maintaining the same accuracy as commercial analog simulators.
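
    The two signal representations lend themselves to compact SystemVerilog type definitions. The following is a minimal sketch of the idea only; the field names and the cubic basis are invented for illustration rather than taken from the thesis:

```systemverilog
// Hypothetical signal types for event-driven AMS simulation.
// A clock event carries a real-valued offset that pinpoints the true
// edge time inside the simulator's discrete time-step.
typedef struct {
    logic value;     // logical clock level at the event
    real  t_offset;  // sub-time-step offset of the actual edge
} event_clk_t;

// An analog signal is a functional segment: coefficients valid from t0
// until the next event replaces them. A cubic polynomial basis is
// assumed here purely for illustration.
typedef struct {
    real t0;         // start time of this segment
    real c[4];       // c[0] + c[1]*dt + c[2]*dt^2 + c[3]*dt^3
} analog_sig_t;

// Reconstruct the waveform analytically at any time t >= t0, so
// accuracy does not depend on the simulator's time-step.
function automatic real analog_eval(analog_sig_t s, real t);
    real dt = t - s.t0;
    return s.c[0] + dt*(s.c[1] + dt*(s.c[2] + dt*s.c[3]));
endfunction
```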

    Modern methods of mixed-signal integrated circuit verification

    This master's thesis deals with verification methods for mixed-signal integrated circuits. The theoretical part surveys modern verification methods, with emphasis on assertion-based methodology. The practical part analyzes the description languages used in this methodology, and verification code is then developed for a control-circuit block of a switched-mode power supply.
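
    To give a flavor of the assertion-based approach, the sketch below shows a SystemVerilog assertion (SVA) one might write for a switched-mode power supply controller; the signal names and the two-cycle bound are invented for the example, not taken from the thesis:

```systemverilog
// Illustrative assertion for a hypothetical SMPS control block:
// once overcurrent protection trips, the gate drive must go low
// within two clock cycles.
module smps_assertions (
    input logic clk,
    input logic ocp_trip,   // overcurrent comparator output
    input logic gate_en     // power-switch gate enable
);
    property p_ocp_shutdown;
        @(posedge clk) ocp_trip |-> ##[1:2] !gate_en;
    endproperty

    a_ocp: assert property (p_ocp_shutdown)
        else $error("gate drive not disabled after OCP trip");
endmodule
```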

    Design for testability of a latch-based design

    Abstract. The purpose of this thesis was to decrease the area of digital logic in a power management integrated circuit (PMIC) by replacing selected flip-flops with latches. The thesis consists of a theory part that provides the background for the work, and a practical part that presents a latch-register design together with a design-for-testability (DFT) method for achieving an acceptable level of manufacturing fault coverage. The total area was decreased by replacing the flip-flops of read-write and one-time-programmable registers with latches. One set of negative-level-active primary latches was shared among all the positive-level-active latch registers in the same register bank; clock gating selected which latch register the write data was loaded into from the primary latches. The latches were made transparent during the shift operation of partial-scan testing. The observability of the latch-register clock-gating logic was improved by leaving the first bit of each latch register as a flip-flop, and controllability was improved by inserting control points. The latch-register design developed in this thesis decreased total area by 5% and register-bank area by 15% compared to a flip-flop-based reference design, while maintaining the same stuck-at fault coverage as the reference design.
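
    The shared-primary-latch scheme can be pictured with a small SystemVerilog sketch. Names and the simplified gating are illustrative only; a real design would use a glitch-free integrated clock-gating (ICG) cell:

```systemverilog
// Sketch of one latch register sharing negative-level-active primary
// latches with the rest of its bank. Signal names are invented.
module latch_reg #(parameter int W = 8) (
    input  logic         clk,     // register-bank clock
    input  logic         wr_sel,  // selects this register for writing
    input  logic [W-1:0] wdata,
    output logic [W-1:0] q
);
    // Primary latches: transparent while clk is low; in the real design
    // one set of these is shared by all registers in the bank.
    logic [W-1:0] primary;
    always_latch if (!clk) primary = wdata;

    // Clock gating decides which latch register captures the data.
    logic gclk;
    assign gclk = clk & wr_sel;  // simplified; use an ICG cell in practice
    always_latch if (gclk) q = primary;
endmodule
```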

    Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems

    Embedded systems are steadily extending their application areas, dealing with increasing requirements in performance, power consumption, and area (PPA). Whenever embedded systems are used in safety-critical applications, they must also meet rigorous dependability requirements to guarantee their correct operation over extended periods of time. Meeting these requirements is especially challenging for systems based on Field Programmable Gate Arrays (FPGAs), since these devices are very susceptible to single-event upsets, which raises dependability threats, especially in harsh environments. Dependability should therefore be considered one of the primary criteria for decision making throughout the whole design flow, complemented by several dependability-driven processes. First, dependability assessment quantifies the robustness of hardware designs against faults and identifies their weak points. Second, dependability-driven verification ensures the correctness and efficiency of fault-mitigation mechanisms. Third, dependability benchmarking allows designers to select, from a dependability perspective, the most suitable IP cores, implementation technologies, and electronic design automation (EDA) tools. Finally, dependability-aware design space exploration (DSE) optimally configures the selected IP cores and EDA tools to improve as much as possible the dependability and PPA features of the resulting implementations.
    The aforementioned processes rely on fault injection testing to quantify the robustness of the designed systems. Although a wide variety of fault injection solutions exists today, several important problems must still be addressed to cover the needs of a dependability-driven design flow. In particular, simulation-based fault injection (SBFI) should be adapted to implementation-level HDL models to take into account the architecture of diverse logic primitives, while keeping the injection procedures generic and low-intrusive. Likewise, the granularity of FPGA-based fault injection (FFI) should be refined to enable accurate identification of weak points in FPGA-based designs. Another important practical challenge is the reduction of SBFI and FFI experimental effort: the high complexity of modern designs raises this effort beyond the available time budgets even in simple dependability assessment scenarios, and it becomes prohibitive in the presence of alternative design configurations. Finally, dependability-driven processes lack instrumental support covering the semicustom design flow in all its variety of description languages, implementation technologies, and EDA tools; existing fault injection tools only partially cover individual stages of the design flow, being usually specific to a particular design representation level and implementation technology.
    This work addresses the aforementioned challenges by efficiently integrating dependability-driven processes into the design flow. First, it proposes new SBFI and FFI approaches that enable an accurate and detailed dependability assessment at different levels of the design flow. Second, it improves the performance of dependability-driven processes by defining new techniques for accelerating SBFI and FFI experiments. Third, it defines two DSE strategies that enable the optimal dependability-aware tuning of IP cores and EDA tools while reducing the robustness-evaluation effort as much as possible. Fourth, it proposes a new toolkit (DAVOS) that automates and seamlessly integrates the aforementioned dependability-driven processes into the semicustom design flow. Finally, it illustrates the usefulness and efficiency of these proposals through a case study consisting of three soft-core embedded processors implemented on a Xilinx 7-series SoC FPGA.
    Tuzov, I. (2020). Dependability-driven Strategies to Improve the Design and Verification of Safety-Critical HDL-based Embedded Systems [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/159883
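
    As a minimal illustration of the simulation-based side (SBFI), a testbench can emulate a single-event upset by forcing an internal register through its hierarchical path. The toy DUT and injection site below are invented; a production flow like the one proposed here would drive this from the simulator's scripting interface:

```systemverilog
// Toy DUT: an 8-bit counter standing in for arbitrary sequential logic.
module toy_dut (input logic clk, output logic [7:0] state_reg);
    initial state_reg = '0;
    always @(posedge clk) state_reg <= state_reg + 8'd1;
endmodule

// SBFI testbench: flip one bit of the DUT state at a random time.
module sbfi_tb;
    logic clk = 0;
    always #5 clk = ~clk;

    toy_dut u_dut (.clk(clk));

    initial begin
        logic [7:0] flipped;
        #($urandom_range(100, 500));          // random injection time
        flipped = u_dut.state_reg ^ 8'h08;    // bit-flip mask (bit 3)
        force   u_dut.state_reg = flipped;    // emulate the upset
        #1 release u_dut.state_reg;           // let the DUT resume
        #200 $finish;                         // observe the fault effect
    end
endmodule
```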

    Energy Efficient True Random Number Generator

    For modern cryptography, the availability of true random numbers is indispensable. While recent technology trends require secure communication, they combine this requirement with the need for energy-efficient solutions. As a result, true random number generators (TRNGs) that satisfy both aspects have to be developed. Motivated by this, the project presented here focused on the realization of a TRNG for a mixed-signal microcontroller unit (MCU) environment. Such MCUs generally contain an analog-to-digital converter (ADC), which is well known to be influenced by random noise processes such as thermal noise. To avoid unnecessary design and prototype costs, it is therefore reasonable to implement the entropy source of a TRNG based on the existing ADC design. Possible non-random imperfections in the output of the source can be masked by deterministic post-processing algorithms. Two possible post-processors are the von Neumann corrector (VNC) and an extractor based on pairwise-independent hash functions (IHF). To evaluate the proposed concept, the ADC of a typical MCU was set up to function as an entropy source during this project. The generated data served as the basis for further simulations and analyses of the statistical characteristics of different TRNG designs. To ease these analyses, a novel test was developed that checks a bit stream for the special statistical characteristics required by the VNC. In addition, both the VNC and the IHF were analyzed with regard to their complexity and implemented in SystemVerilog. To find an energy-efficient implementation of the IHF, two different algorithmic solutions were considered, and the chosen design was kept generic so that it remains tunable. For the VNC, different clock-gating approaches were explored to reduce unnecessary dynamic power consumption. After verification of the proposed designs, both post-processors were synthesized in a standard 65 nm technology to estimate their power performance. Finally, in connection with the ADC-based source, both post-processor designs were evaluated with regard to randomness and energy performance. While the output of the VNC-based approach is classified as not random, the IHF-based design passes the NIST test suite for random numbers and can therefore be considered random. Hence, the combination of the ADC-based entropy source and the IHF constitutes a functional TRNG solution. By tuning the IHF, an approximate minimum energy consumption of 5.9 nJ was reached for this approach.
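
    The von Neumann corrector itself is simple enough to sketch in a few lines of SystemVerilog; the interface below is illustrative, not the thesis's actual design. It consumes raw bits in pairs, emits the first bit of an unequal pair ("01" yields 0, "10" yields 1), and discards equal pairs:

```systemverilog
// Minimal von Neumann corrector: one raw entropy bit per cycle in,
// an unbiased bit out whenever a valid (unequal) pair completes.
module vnc (
    input  logic clk, rst_n,
    input  logic raw_bit,    // raw entropy bit from the source
    output logic out_bit,
    output logic out_valid   // high when a corrected bit is produced
);
    logic first_bit, have_first;
    always_ff @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            have_first <= 1'b0;
            out_valid  <= 1'b0;
        end else if (!have_first) begin
            first_bit  <= raw_bit;                // buffer first of pair
            have_first <= 1'b1;
            out_valid  <= 1'b0;
        end else begin
            out_bit    <= first_bit;              // "01"->0, "10"->1
            out_valid  <= (first_bit != raw_bit); // drop "00"/"11" pairs
            have_first <= 1'b0;
        end
    end
endmodule
```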

    Implementation and Characterization of Mixed-Signal Neuromorphic ASICs

    Accelerated neuromorphic hardware allows the emulation of spiking neural networks with a high speed-up factor compared to classical computer simulation approaches. However, realizing a high degree of versatility and configurability in the implemented models is challenging. In this thesis, we present two mixed-signal ASICs that improve upon previous architectures by augmenting the versatility of the modeled synapses and neurons. In the first part, we present the integration of an analog multi-compartment neuron model into the Multi-Compartment Chip. We characterize the properties of this neuron model and describe methods to compensate for deviations from ideal behavior introduced by the physical implementation. The implemented features of the multi-compartment neurons are demonstrated with a compact prototype setup. In the second part, the integration of a general-purpose microprocessor with analog models of neurons and synapses is described. This allows learning rules that go beyond spike-timing-dependent plasticity to be defined in software without decreasing the speed-up of the underlying network emulation. In the third part, the importance of testability and pre-tapeout verification is discussed and exemplified by the design process of both chips.
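
    For orientation, a textbook form of the dynamics that such a multi-compartment model emulates (not the chip's exact circuit equations) couples neighboring compartments through conductances:

```latex
% Membrane dynamics of compartment i, coupled to its neighbors j:
C_m \frac{dV_i}{dt} = -g_\mathrm{L}\,(V_i - E_\mathrm{L})
    + \sum_{j \in \mathcal{N}(i)} g_{ij}\,(V_j - V_i)
    + I_{\mathrm{syn},i}(t)
```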

    Modeling and Design of High-Performance DC-DC Converters

    The goal of the research pursued during this PhD is to facilitate the development of high-performance, fast-switching DC-DC converters. A high switching frequency in switching-mode power supplies (SMPS) can be exploited to reduce the output voltage ripple for the same size of passives (mainly inductors and capacitors) and to improve overall system performance by providing a supply voltage with fewer unwanted harmonics to the subsystems being supplied. The opposite side of the trade-off is also attractive to designers, as the same amount of ripple can be achieved with smaller values of inductance and/or capacitance, resulting in a physically smaller and potentially cheaper end product. Another benefit is that the spectrum of the resulting switching noise is shifted to higher frequencies, which allows designers to push the corner frequency of the system's control loop higher without the switching noise affecting its behavior. This translates into a system capable of responding faster to the strong transients that are common in modern systems, which may contain microprocessors or other electronics that consume power in bursts and may even require features like dynamic voltage scaling to minimize overall consumption.
    While the analysis of the open-loop behavior of a DC-DC converter is relatively straightforward, it is of limited usefulness, as converters almost always operate in closed loop and can therefore suffer from degraded stability. It is thus important to be able to simulate their closed-loop behavior as efficiently as possible. The first chapter is dedicated to a library of technology-agnostic, high-level models that improve the efficiency of transient simulations without sacrificing the ability to model and localize the different losses. This work also focuses on fixed-frequency converters that employ peak current mode control (PCM). PCM schemes are frequently used due to their simple implementation and their ability to respond quickly to line transients, since any change of the battery voltage is reflected in the slope of the rising inductor current, which is monitored by a fast internal control loop closed with the help of a current sensor.
    Most existing models for current sensors assume ideal behavior with infinite bandwidth and constant gain. These assumptions introduce significant error as the minimum on-time, and therefore the settling-time requirement of the sensor, is reduced. Some sensing architectures, such as those that approximate the inductor current with the high-side switch current, can be even more complex to analyze, as they require an extended masking time to prevent current spikes caused by switch commutation from being injected into the output of the sensor and hence into the signal-processing blocks of the control loop. To address this issue, this work also proposes a current-sensor model that is compatible with time-averaged models of DC-DC converters and is able to predict the effects of static and transient non-idealities of the block on the behavior of a PCM DC-DC converter. Lastly, this work proposes a new 40 V, 6 A, fully integrated, high-side current-sensing circuit with a response time of 51 . The proposed sensor achieves this performance with the help of a feedback-resistance emulation technique that prevents the sensor from debiasing during its masking phase, an effect that tends to extend the response time of similar fully integrated sensors.
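
    The ripple argument above can be made concrete with the textbook buck-converter relations (used here purely as an example; the work itself covers more general converters). For duty cycle D, switching frequency f_sw, inductance L, and output capacitance C:

```latex
\Delta I_L = \frac{V_\mathrm{out}\,(1 - D)}{L\, f_\mathrm{sw}}, \qquad
\Delta V_\mathrm{out} \approx \frac{\Delta I_L}{8\, C\, f_\mathrm{sw}}
            = \frac{V_\mathrm{out}\,(1 - D)}{8\, L\, C\, f_\mathrm{sw}^{2}}
```

    Doubling f_sw thus cuts the output voltage ripple by a factor of four for the same L and C, or, equivalently, allows proportionally smaller passives for the same ripple.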

    A New Approach to Learning in Neuromorphic Hardware

    This thesis presents a novel, highly flexible approach to plasticity and learning in brain-inspired computing systems. A classical digital processor is combined with local analog processing to achieve flexibility and efficiency; in particular, this allows the implementation of modulated spike-timing-dependent plasticity. The approach was formalized into an abstract hybrid hardware model, which was used to simulate a reward-based learning task to estimate the effect of hardware constraints. To investigate the feasibility of the proposed architecture, a synthesizable plasticity processor was designed and tested using the CoreMark general-purpose benchmark (best score: 1.89 per MHz). The processor was also produced as part of a 65 nm prototype chip, requiring 0.14 mm² of die area and reaching a maximum clock frequency of 769 MHz. In a preparatory step, a non-programmable plasticity implementation was developed, which is now part of the operational BrainScaleS wafer-scale system. This design was later extended with the plasticity processor to implement the proposed hybrid architecture. Simulations show a speed improvement of 42% over the non-programmable variant. In preparation for production, the area requirement of the digital part is estimated at 6.2% of the total area.
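
    The plasticity rules in question build on the standard pairwise STDP window; in its textbook form (not the thesis's exact rule), the modulated variant scales the weight update by a reward or neuromodulator signal m(t), which is precisely the kind of rule the programmable processor makes possible:

```latex
\Delta w(\Delta t) =
\begin{cases}
  +A_+ \, e^{-\Delta t/\tau_+}, & \Delta t > 0 \quad \text{(pre before post)}\\[2pt]
  -A_- \, e^{+\Delta t/\tau_-}, & \Delta t < 0 \quad \text{(post before pre)}
\end{cases}
\qquad
w \leftarrow w + m(t)\,\Delta w
```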