47 research outputs found

    Circuit Design

    Get PDF
    Circuit Design = Science + Art! Designers need a skilled "gut feeling" for circuits and the related analytical techniques, plus creativity, to solve all problems and to meet the specifications, both the written and the unwritten ones. You must anticipate a large number of influences, such as temperature effects, supply-voltage changes, offset voltages, layout parasitics, and numerous kinds of technology variation, to end up with a circuit that works. This is challenging for analog, custom-digital, mixed-signal, and RF circuits, and researching new design methods in the relevant journals, conference proceedings, and design tools often gives the unfortunate impression that only a "wild bunch" of "advanced techniques" exists. On the other hand, state-of-the-art tools nowadays offer a good cockpit from which to steer the design flow, including clever statistical methods and optimization techniques. This almost amounts to a second breakthrough, comparable to the introduction of circuit simulators 40 years ago! Users can now conveniently analyse all these problems (discover, quantify, verify) and even exploit them, for example for optimization purposes. Most designers are caught up in everyday problems, so we fit that "wild bunch" into a systematic approach to variation-aware design, a designer's field guide and more. That is where this book can help! Circuit Design: Anticipate, Analyze, Exploit Variations starts with best-practice manual methods and links them tightly to up-to-date automation algorithms. We provide many tractable examples and explain the key techniques you have to know. We then enable you to select and set up suitable methods for each design task, knowing their prerequisites, advantages and, as is too often overlooked, their limitations as well. The good thing about computers is that you can often verify amazing things yourself with little effort, and you can use software not only to your direct advantage in solving a specific problem, but also to become a better skilled, more experienced engineer. Unfortunately, EDA design environments are not well suited to learning about advanced numerics. With this book we therefore also provide two apps for learning about statistics and optimization directly with circuit-related examples, in real time and without long simulation times. This helps to develop a healthy statistical gut feeling for circuit design. The book is written for engineers, students of engineering, and CAD/methodology experts. Readers should have some background in standard design techniques, such as entering a design in a schematic capture tool and simulating it, and should also know about the major technology aspects.
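    As a rough illustration of the kind of statistical "gut feeling" the book targets, the Python sketch below runs a toy Monte Carlo analysis of an RC low-pass corner frequency under component tolerances and a temperature coefficient. The component values, tolerances, and yield spec are invented for illustration and are not taken from the book.

    ```python
    # Toy Monte Carlo variation analysis (illustrative only; values are invented).
    # Estimates the spread of an RC low-pass corner frequency under component
    # tolerances and a simple temperature coefficient, then reports yield
    # against a hypothetical spec window.
    import numpy as np

    rng = np.random.default_rng(seed=1)
    N = 100_000                                  # Monte Carlo samples

    R_nom, C_nom = 10e3, 1.59e-9                 # 10 kOhm, 1.59 nF -> ~10 kHz corner
    R = R_nom * (1 + rng.normal(0, 0.01, N))     # 1% resistor tolerance (1 sigma)
    C = C_nom * (1 + rng.normal(0, 0.05, N))     # 5% capacitor tolerance (1 sigma)
    T = rng.uniform(-40, 125, N)                 # operating temperature in degC
    R *= 1 + 100e-6 * (T - 27)                   # 100 ppm/degC resistor tempco

    fc = 1.0 / (2 * np.pi * R * C)               # corner frequency per sample

    spec_lo, spec_hi = 8.5e3, 11.5e3             # hypothetical spec window [Hz]
    yield_est = np.mean((fc > spec_lo) & (fc < spec_hi))

    print(f"mean fc = {fc.mean()/1e3:.2f} kHz, sigma = {fc.std()/1e3:.2f} kHz")
    print(f"estimated yield vs. spec: {100*yield_est:.1f} %")
    ```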

    Ultrashort laser pulses from optical parametric amplifiers and oscillators

    Get PDF
    [no abstract]

    Degradation Models and Optimizations for CMOS Circuits

    Get PDF
    Ensuring the reliability of CMOS circuits is currently one of the greatest challenges in chip and circuit design. With the end of Dennard scaling, every new semiconductor technology generation increases the electric fields inside the transistors. These stronger fields stimulate the degradation phenomena (transistor aging, self-heating, noise, etc.), so that the transistors of each new technology generation suffer ever larger shifts in their electrical parameters. To preserve the functionality and reliability of a circuit, it therefore becomes essential to determine precisely how the weakened transistors affect the circuit. The two most important effects of degradation are slower switching and increased power consumption. If these effects are ignored, the reduced switching speed can cause timing violations (i.e. the circuit cannot complete its computation before the next operation begins) and compromise the functionality of the circuit (incorrect outputs, corrupted data, etc.). To account for this gradual degradation of the transistor parameters, safety margins are introduced. For example, the clock period of the circuit is artificially lengthened to tolerate slower switching and thus avoid errors. This, however, costs performance, since a longer clock period means a lower clock frequency. Choosing the right safety margin is crucial: a margin that is too small leads to errors in the circuit, while one that is too large causes unnecessary performance loss. Today, industry bases its reliability estimation on the worst case (maximally aged circuit, maximum operating temperature at minimum voltage, worst-case fabrication, etc.). This worst-case assumption guarantees that the chip (or integrated circuit) remains functional under all operating conditions that can occur. It also allows many simplifications; for example, the actual operating temperature need not be determined, since the worst possible (very high) temperature can simply be assumed. Unfortunately, this established worst-case practice (experimental or simulation-based) can no longer be sustained. It implies such harsh operating conditions (maximum temperature, etc.) and requirements (e.g. 25 years of operation) that the transistors, under the ever stronger electric fields, suffer enormous degradation: the combination of high temperature, high voltage, and the fields that grow with every generation steadily intensifies the degradation phenomena. The safety margin determined under the worst case is therefore extremely pessimistic and far too large. This level of pessimism causes substantial performance losses that are unnecessary and thus avoidable.
While military circuits, for example, must operate for 25 years under harsh conditions, consumer electronics run at lower temperatures and only have to remain functional for the duration of a two-year warranty. For the latter, the safety margins can be made considerably smaller, recovering performance that was previously given up in the name of reliability. This thesis aims to provide safety margins tailored to the individual application scenarios of a circuit. For demanding environments such as space applications (where repair is impossible), the worst case remains relevant. In most applications, however, the operating conditions are less harsh (e.g. cooling systems keep temperatures low). Here, safety margins can be determined in a tailored, application-specific way, so that degradation is tolerated exactly and reliability is maintained at minimal cost (performance, etc.). Unfortunately, today's standard design tools are not well equipped for such application-specific margin determination. This thesis aims to enable standard design tools to perform reliability estimation for arbitrary circuits under arbitrary operating conditions. To this end, we present our research contributions as four steps towards application-specific safety margins: Step 1 improves the modelling of the degradation phenomena (transistor aging, self-heating, noise, etc.). Its goal is a comprehensive, unified model of these phenomena. Using defect modelling from materials science, the underlying physical processes are modelled so that their interactions can be captured (e.g. phenomenon A can accelerate phenomenon B) and a unified model for the simultaneous treatment of different phenomena is obtained. Recently discovered phenomena are modelled and considered as well. In total, this enables accurate degradation modelling of transistors while all essential phenomena are considered simultaneously. Step 2 accelerates these degradation models from several minutes per transistor (the physicists' models target accuracy rather than performance) to a few milliseconds per transistor. The contributions of this dissertation speed up the models by a large factor, first by simplifying the computations as far as possible (e.g. only the peak degradation values are required, not every value along a waveform) and then by exploiting the parallelism of today's computer hardware. Both approaches increase evaluation speed without affecting the accuracy of the computation. In Step 3, these accelerated degradation models are integrated into the standard tools. The standard tools currently consider only best-case, typical, and worst-case standard cells (digital) or transistors (analog). These three types of cells/transistors are determined experimentally, at great expense, by the foundry (semiconductor manufacturer). Since only these three types are determined, the tools cannot perform reliability estimation for a specific application (temperature, voltage, activity).
Simulations with degradation models make application-specific estimation possible, but this capability first has to be integrated; this integration is one of the contributions of this dissertation. Step 4 accelerates the standard tools themselves. Digital circuit designs that are not based on standard cells, as well as complex analog circuits, currently cannot be evaluated with analog circuit simulators; their performance is insufficient for simulations of that scale. This dissertation presents techniques to accelerate these tools so that such large circuits can be simulated. Together, these research contributions, each spanning several publications, enable standard tools to determine the safety margin for customer-specific application scenarios. For a given circuit lifetime, temperature, voltage, and activity (switching behaviour induced by software applications), the effects of transistor degradation can be evaluated and the required safety margin, neither under- nor overestimated, can be determined. This application-specific safety margin guarantees the reliability and functionality of the circuit for exactly that application at minimal performance cost.
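    As a minimal illustration of how an application-specific timing guard band might compare with a worst-case one (this is not the unified physics-based model or tool flow of the dissertation), the Python sketch below scales a hypothetical aging-induced delay shift with temperature, voltage, and target lifetime. All model constants and scenario numbers are invented for illustration.

    ```python
    # Minimal sketch of application-specific vs. worst-case timing guard bands.
    # The degradation model is a toy with invented constants, used only to show
    # how the margin shrinks under milder, application-specific conditions.
    import math

    def delay_shift(temp_c: float, vdd: float, years: float) -> float:
        """Toy aging model: fractional increase of the critical-path delay."""
        k_t = math.exp((temp_c - 25.0) / 60.0)   # faster aging when hotter
        k_v = (vdd / 0.8) ** 3                   # faster aging at higher voltage
        k_l = (years / 10.0) ** 0.25             # sub-linear growth over lifetime
        return 0.05 * k_t * k_v * k_l            # 5% reference shift

    nominal_period_ns = 1.0                      # 1 GHz nominal clock

    # Worst case: 125 degC, elevated voltage, 25-year lifetime.
    worst = delay_shift(125.0, 0.9, 25.0)
    # Consumer scenario: cooled to 60 degC, nominal voltage, 2-year warranty.
    consumer = delay_shift(60.0, 0.8, 2.0)

    for name, shift in [("worst-case", worst), ("consumer", consumer)]:
        period = nominal_period_ns * (1.0 + shift)
        print(f"{name:10s}: guard band {100*shift:4.1f} %, "
              f"clock {1.0/period:.2f} GHz instead of 1.00 GHz")
    ```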

    Development of the Beam Position Monitors for the Diagnostics of the Test Beam Line in the CTF3 at CERN

    Get PDF
    The work for this thesis lies within the field of instrumentation for particle accelerators, so-called beam diagnostics. It presents the development of a series of electro-mechanical devices called Inductive Pick-Ups (IPU) for Beam Position Monitoring (BPM). A full set of 17 BPM units (16 plus 1 spare), named BPS units, was built and installed in the Test Beam Line (TBL), an electron beam decelerator, of the 3rd CLIC Test Facility (CTF3) at CERN, the European Organization for Nuclear Research. The CTF3, built at CERN by an international collaboration, was meant to demonstrate the technical feasibility of the key concepts of CLIC (Compact Linear Collider), a future linear collider based on the novel two-beam acceleration scheme and intended to reach the next energy frontier for a lepton collider in the multi-TeV scale. Modern particle accelerators, and in particular future colliders like CLIC, require extreme alignment and stabilization of the beam in order to enhance its quality, which relies heavily on beam-based alignment techniques. Here BPMs like the BPS-IPU play an important role, providing the beam position along the beam lines with precision and high resolution, as well as a beam current measurement in the case of the BPS. The BPS project carried out at IFIC was developed mainly in two phases: prototyping, and series production and testing for the TBL. In the first phase, two fully functional BPS prototypes were constructed; this thesis focuses on the electronic design of the BPS on-board PCBs (Printed Circuit Boards), which are based on transformers for current sensing and beam position measurement. The monitor's mechanical design is also described, with emphasis on all the parts directly involved in its electromagnetic functioning, which results from the coupling of the EM fields generated by the beam with those parts. For this, the operational parameters were studied according to the TBL specifications, and a new circuit model reproducing the BPS monitor's frequency response over its operational bandwidth (1 kHz-100 MHz) was simulated. These prototypes were initially tested in the laboratories of the BI-PI section (Beam Instrumentation - Position and Intensity) at CERN. In the second phase, which covered the BPS monitor series built on the experience acquired during prototyping, the work focused on the characterization tests used to measure the main operational parameters of each series monitor; for this purpose, two test benches with different aims and frequency regions were designed and constructed. The first is designed to work in the low-frequency region, between 1 kHz and 100 MHz, on the time scale of the electron beam pulse, which has a repetition period of 1 s and an approximate duration of 140 ns. Test setups of this kind, called wire test benches, are commonly used in accelerator instrumentation to determine the characteristic parameters of a BPM (or pick-up), such as its linearity and precision in the position measurement, as well as its frequency response (bandwidth). This is done by emulating a low-intensity beam with a stretched wire carrying a current signal, which can be positioned precisely with respect to the device under test. This test bench was made specifically for the BPS monitor and was conceived to perform the measurement data acquisition in an automated way, managing the measurement equipment and the wire-positioning motor controller from a PC workstation.
Each of the BPS series monitors was characterized with this system at the IFIC labs, and the test results and analysis are presented in this work. The high-frequency tests, above the X band in the microwave spectrum and on the time scale of the micro-bunch pulses, with a bunching period of 83 ps (12 GHz) inside a long 140 ns pulse, were performed to measure the longitudinal impedance of the BPS monitor. This must be low enough to minimize the perturbations produced on the beam as it crosses the monitor, which affect its stability during propagation along the line. For this purpose, a high-frequency test bench was built as a coaxial waveguide structure of 24 mm diameter, matched at 50 Ω, with a bandwidth from 18 MHz to 30 GHz, which was simulated beforehand and has room in the middle to place the BPS as the device under test. This bench is able to reproduce the TEM (Transverse Electro-Magnetic) propagating modes corresponding to an ultra-relativistic electron beam with a 12 GHz bunching frequency, so that the scattering parameters can be measured to obtain the longitudinal impedance of the BPS in the frequency range of interest. Finally, the results of the beam tests made in the TBL line are presented, with beam currents from 3.5 A to 13 A (the maximum available at the time of the test), in order to determine the minimum resolution attainable by a BPS monitor in the measurement of the beam position, the device's figure of merit, with a resolution goal of 5 µm at the maximum beam current of 28 A according to the TBL specifications. García Garrigós, JJ. (2013). Development of the Beam Position Monitors for the Diagnostics of the Test Beam Line in the CTF3 at CERN [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/34327
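    For readers unfamiliar with how a pick-up turns electrode signals into a position reading, the Python sketch below shows the standard difference-over-sum estimate with a sensitivity calibration constant. The electrode amplitudes and the sensitivity value are hypothetical and are not taken from the BPS characterization data.

    ```python
    # Standard difference-over-sum beam position estimate from two opposing
    # pick-up electrodes (hypothetical numbers; not BPS calibration data).
    def beam_position(v_plus: float, v_minus: float, sensitivity_mm: float) -> float:
        """Return the transverse beam offset in mm from opposite electrode signals.

        sensitivity_mm is the calibration constant k in
        x = k * (V+ - V-) / (V+ + V-), obtained in practice from a wire
        test-bench scan of the monitor.
        """
        total = v_plus + v_minus
        if total == 0.0:
            raise ValueError("no beam signal on the electrodes")
        return sensitivity_mm * (v_plus - v_minus) / total

    # Example: a 2% normalized signal imbalance with a (hypothetical) 12 mm
    # sensitivity corresponds to an offset of about 0.24 mm.
    print(f"x = {beam_position(1.02, 0.98, 12.0):.3f} mm")
    ```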

    Statistical analysis of Total Ionizing Dose response in 25-nm NAND Flash memory

    Get PDF
    Variability of the errors (bit flips) caused by Total Ionizing Dose (TID) in 25-nm SLC NAND Flash memories. More than 1 Terabit of cells was exposed to Co-60 gamma rays, and the errors induced by the ionizing radiation were measured. The objective of the thesis was to study the behaviour of Flash memories in space and to predict their reliability.
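    As a small illustration of the kind of statistics involved (not the actual measured data), the Python sketch below estimates a raw bit-flip probability and a simple normal-approximation confidence interval from a hypothetical count of flipped bits in roughly 1 Tbit of exposed cells.

    ```python
    # Toy estimate of a post-irradiation raw bit-flip probability with a
    # normal-approximation 95% confidence interval (hypothetical counts,
    # not the thesis data).
    import math

    bits_exposed = 1_000_000_000_000      # ~1 Tbit of cells under test
    bit_flips = 2_500_000                 # hypothetical number of observed flips

    p_hat = bit_flips / bits_exposed
    stderr = math.sqrt(p_hat * (1 - p_hat) / bits_exposed)
    ci_lo, ci_hi = p_hat - 1.96 * stderr, p_hat + 1.96 * stderr

    print(f"estimated bit-flip probability: {p_hat:.3e}")
    print(f"95% CI: [{ci_lo:.3e}, {ci_hi:.3e}]")
    ```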

    Circuit design and technological limitations of silicon RFICs for wireless applications

    Get PDF
    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002. Includes bibliographical references (p. 201-206). Semiconductor technologies have been a key to the growth in wireless communication over the past decade, bringing added convenience and accessibility through advantages in cost, size, and power dissipation. A better understanding of how an IC technology affects critical RF signal-chain components will greatly aid the design of wireless systems and the development of process technologies for the increasingly complex applications that lie on the horizon. Many of the evolving applications will embody the concept of adaptive performance to extract the maximum capability from the RF link in terms of bandwidth, dynamic range, and power consumption, further engaging the interplay of circuits and devices in this design space and making it even more difficult to discern a clear guide upon which to base technology decisions. Rooted in these observations, this research focuses on two key themes: 1) devising methods of implementing RF circuits which allow the performance to be dynamically tuned to match real-time conditions in a power-efficient manner, and 2) refining approaches for thinking about the optimization of RF circuits at the device level. Working toward a 5.8 GHz receiver consistent with 1 Gbit/s operation, signal-path topologies and adjustable biasing circuits are developed for low-noise amplifiers (LNAs) and voltage-controlled oscillators (VCOs) to provide a facility by which power can be conserved when the demand for sensitivity is low. As an integral component in this effort, tools for exploring device-level issues are illustrated with both circuit types, helping to identify physical limitations and design techniques through which they can be mitigated. The design of two LNAs and four VCOs is described, each realized to provide a fully integrated solution in a 0.5 µm SiGe BiCMOS process, and each incorporating all biasing and impedance matching on chip. Measured results for these 5-6 GHz circuits shed light on a number of pertinent technology issues, including an exhibition of the importance of terminal resistances and capacitances, a demonstration of where the transistor fT is relevant and where it is not, and the most direct comparison of bipolar and CMOS solutions offered to date in this frequency range. In addition to covering a number of new circuit techniques, this work concludes with some new views regarding IC technologies for RF applications. By Donald A. Hitko. Ph.D.
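    To illustrate the sensitivity/power trade-off that the adjustable biasing exploits, the Python sketch below evaluates the textbook receiver sensitivity formula, -174 dBm/Hz + NF + 10·log10(BW) + SNR_min, for two hypothetical noise-figure settings. The bandwidth, SNR requirement, and noise figures are illustrative assumptions, not measured results from the thesis.

    ```python
    # Textbook receiver sensitivity:
    #   P_min = -174 dBm/Hz + NF + 10*log10(BW) + SNR_min
    # Illustrates why a lower LNA bias (higher NF) can be acceptable when the
    # required sensitivity is relaxed. Numbers are illustrative only.
    import math

    def sensitivity_dbm(nf_db: float, bandwidth_hz: float, snr_min_db: float) -> float:
        return -174.0 + nf_db + 10.0 * math.log10(bandwidth_hz) + snr_min_db

    bw = 100e6          # hypothetical 100 MHz channel for ~1 Gbit/s operation
    snr_min = 20.0      # hypothetical required SNR in dB

    for nf, bias in [(2.5, "high-bias LNA"), (5.0, "low-power LNA setting")]:
        print(f"{bias:22s}: NF = {nf:.1f} dB -> sensitivity "
              f"{sensitivity_dbm(nf, bw, snr_min):.1f} dBm")
    ```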

    Polarization behaviour of on-body communication channels at 2.45 GHz

    Get PDF
    The advent of body-worn devices and their use for a wide range of applications, from entertainment to military purposes, indicates the need to investigate the behaviour of antennas and wave propagation on the body in depth. Knowledge and understanding of the on-body channel can lead to the design of efficient antennas and systems for wearable devices. The objective of this work is to identify the propagation mechanism on the body for different polarisation states at 2.45 GHz. In particular, the effect of the body on antenna performance with normal and parallel polarisation is studied, and the antennas' capability to launch surface waves is evaluated. It is shown that both vertically and horizontally polarised antennas can launch a transverse magnetic (TM) Norton surface wave mode regardless of their polarisation state; however, horizontally polarised antennas do not launch the wave as strongly as vertically polarised ones. The change in the far-field and near-field behaviour of antennas such as a dipole in proximity to the body is also investigated, and the observations lead to the design of a novel surface-wave parasitic array. This new antenna is directive and can increase the path gain by almost 10 dB compared to other planar antennas. In addition, the effect of the antenna's polarization on channel path gain is studied and the channel cross-polarization discrimination is quantified, using both simulation and measurement. EThOS - Electronic Theses Online Service. United Kingdom.
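    As a small illustration of the cross-polarization discrimination (XPD) metric quantified in the work, the Python sketch below computes XPD as the difference (in dB) between co-polarized and cross-polarized path gains. The path-gain values are hypothetical, not the measured on-body data.

    ```python
    # Cross-polarization discrimination (XPD) from co- and cross-polarized
    # path gains: XPD_dB = G_co_dB - G_cross_dB (hypothetical values).
    def xpd_db(co_pol_gain_db: float, cross_pol_gain_db: float) -> float:
        """Return XPD in dB; larger values mean the channel preserves polarization."""
        return co_pol_gain_db - cross_pol_gain_db

    # Example: a co-polarized path gain of -45 dB and a cross-polarized path
    # gain of -57 dB give an XPD of 12 dB.
    print(f"XPD = {xpd_db(-45.0, -57.0):.1f} dB")
    ```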