70 research outputs found

    A statistical study of time dependent reliability degradation of nanoscale MOSFET devices

    Charge trapping at the channel interface is a fundamental issue that adversely affects the reliability of metal-oxide-semiconductor field-effect transistor (MOSFET) devices. This effect represents a new source of statistical variability as these devices enter the nano-scale era. Recently, charge trapping has been identified as the dominant phenomenon leading to both random telegraph noise (RTN) and bias temperature instabilities (BTI). Thus, understanding the interplay between reliability and statistical variability in scaled transistors is essential to the implementation of a ‘reliability-aware’ complementary metal-oxide-semiconductor (CMOS) circuit design. In order to investigate statistical reliability issues, a methodology based on a simulation flow has been developed in this thesis that allows a comprehensive and multi-scale study of charge-trapping phenomena and their impact on transistor and circuit performance. The proposed methodology is accomplished by using the Gold Standard Simulations (GSS) technology computer-aided design (TCAD)-based design-technology co-optimization (DTCO) tool chain. The 70 nm bulk IMEC MOSFET and the 22 nm Intel fin-shaped field-effect transistor (FinFET) have been selected as target devices. The simulation flow starts by calibrating the device TCAD simulation decks against experimental measurements. This initial phase allows the identification of the physical structure and the doping distributions in the vertical and lateral directions, based on the modulation of the inversion layer’s depth as well as the modulation of short-channel effects. The calibration is further refined by taking into account statistical variability in order to match the statistical distributions of the transistors’ figures of merit obtained by measurements. The TCAD simulation investigation of RTN and BTI phenomena is then carried out in the presence of several sources of statistical variability.
The study extends further to the circuit-simulation level by extracting compact models from the statistical TCAD simulation results. These compact models are collected in libraries, which are then utilised to investigate the impact of the BTI phenomenon, and its interaction with statistical variability, in a six-transistor static random access memory (6T-SRAM) cell. At the circuit level, figures of merit such as the static noise margin (SNM), together with their statistical distributions, are evaluated. The focus of this thesis is to highlight the importance of accounting for the interaction between statistical variability and statistical reliability in the simulation of advanced CMOS devices and circuits, in order to maintain predictivity and obtain quantitative agreement with measured data. The main findings of this thesis can be summarised by the following points: Based on the analysis of the results, the dispersions of VT and ΔVT indicate that a change in device technology must be considered, from the planar MOSFET platform to a new device architecture such as FinFET or SOI. This result is due to the interplay between a single trapped charge and statistical variability, which has a significant impact on device operation and intrinsic parameters as transistor dimensions shrink further. The ageing process of transistors can be captured by using the trapped-charge density at the interface and observing the VT shift. Moreover, using statistical analysis one can highlight the extreme transistors and their probable effect on the circuit or system operation. The influence of the pass-gate (PG) transistor in a 6T-SRAM cell gives a different trend of the mean static noise margin
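The interplay between time-zero statistical variability and single-trap-induced ΔVT described above can be sketched with a small Monte Carlo experiment. The numbers below (nominal VT, variability sigma, exponential ΔVT scale) are illustrative assumptions, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100_000          # simulated transistor ensemble
VT_NOM = 0.45        # nominal threshold voltage, V (illustrative)
SIGMA_VT = 0.03      # time-zero variability sigma, V (assumed)
ETA = 0.01           # mean single-trap Delta-VT, V (assumed)

# Time-zero statistical variability (e.g. random dopant fluctuations).
vt_fresh = rng.normal(VT_NOM, SIGMA_VT, N)

# Single-trap Delta-VT: exponentially distributed, capturing the heavy tail
# produced when a trap happens to sit over a current percolation path.
delta_vt = rng.exponential(ETA, N)
vt_aged = vt_fresh + delta_vt

print(f"sigma(VT) fresh: {vt_fresh.std() * 1e3:.2f} mV")
print(f"sigma(VT) aged : {vt_aged.std() * 1e3:.2f} mV")
print(f"99.9th percentile Delta-VT: {np.percentile(delta_vt, 99.9) * 1e3:.1f} mV")
```

Both the dispersion and the tail of the aged distribution grow, which is why extreme transistors become visible only through statistical analysis of the whole ensemble.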

    High-Density Solid-State Memory Devices and Technologies

    This Special Issue aims to examine high-density solid-state memory devices and technologies from various standpoints in an attempt to foster their continued success in the future. Considering that a broadening range of applications will likely offer different types of solid-state memories their chance in the spotlight, the Special Issue is not focused on a specific storage solution but rather embraces all the most relevant solid-state memory devices and technologies currently on stage. The subjects dealt with are accordingly widespread, ranging from process and design issues/innovations to the experimental and theoretical analysis of device operation, and from the performance and reliability of memory devices and arrays to the exploitation of solid-state memories in pursuit of new computing paradigms.

    Flash Memory Devices

    Flash memory devices have represented a breakthrough in storage since their inception in the mid-1980s, and innovation is still ongoing. The peculiarity of this technology is an inherent flexibility in terms of performance and integration density, according to the architecture devised for integration. NOR Flash technology is still the workhorse of many code-storage applications in the embedded world, ranging from microcontrollers for the automotive environment to IoT smart devices, and its usage is also forecast to be fundamental in emerging edge-AI scenarios. By contrast, when massive data storage is required, NAND Flash memories are a necessity: they can be found in USB sticks and memory cards, but above all in Solid-State Drives (SSDs). Since SSDs are extremely demanding in terms of storage capacity, they have fueled a new wave of innovation, namely the 3D architecture. Today “3D” means that multiple layers of memory cells are manufactured within the same piece of silicon, easily reaching a terabit capacity. So far, Flash architectures have always been based on the "floating gate" concept, where the information is stored by injecting electrons into a piece of polysilicon surrounded by oxide, whereas emerging concepts are based on "charge trap" cells. In summary, flash memory devices represent the largest landscape of storage devices, and more advancements are expected in the coming years. This will require a great deal of innovation in process technology, materials, circuit design, flash-management algorithms, Error Correction Codes and, finally, system co-design for new applications such as AI and security enforcement.

    Miniaturized Transistors, Volume II

    In this book, we aim to address the ever-advancing progress in microelectronic device scaling. Complementary Metal-Oxide-Semiconductor (CMOS) devices continue to undergo miniaturization, irrespective of seeming physical limitations, helped by advancing fabrication techniques. We observe that miniaturization does not always refer to the latest technology node for digital transistors. Rather, by applying novel materials and device geometries, a significant reduction in the size of microelectronic devices for a broad set of applications can be achieved. The achievements made in the scaling of devices for applications beyond digital logic (e.g., high power, optoelectronics, and sensors) are taking the forefront in microelectronic miniaturization. Furthermore, all these achievements are assisted by improvements in the simulation and modeling of the involved materials and device structures. In particular, process and device technology computer-aided design (TCAD) has become indispensable in the design cycle of novel devices and technologies. It is our sincere hope that the results provided in this Special Issue prove useful to scientists and engineers who find themselves at the forefront of this rapidly evolving and broadening field. Now, more than ever, it is essential to look for solutions to find the next disruptive technologies which will allow for transistor miniaturization well beyond silicon’s physical limits and the current state of the art. This requires a broad attack, including studies of novel and innovative designs as well as emerging materials which are becoming more application-specific than ever before.

    Silicon Photomultipliers for Scintillation Detection Systems

    Sensitive photodetectors find application in radiation detection systems where light from scintillating crystals must be measured to determine the energy, position, and/or time of radiation interactions. The use of traditional glass photomultiplier tubes has recently been challenged by the advent of solid-state photosensors capable of resolving single optical photons. The silicon photomultiplier is an array of such sensors and presents unique detection and noise characteristics which must be thoroughly understood for optimal detection. This dissertation exposes the underlying statistical detection processes in order to indicate the device parameters of greatest interest. Several unique fabrication methods are employed and analyzed which attempt to tailor the large electric fields required for acceleration and amplification of single photoelectrons. Transparent thin-film processes are developed for the integration of antireflection coatings and large-value avalanche quench resistors.
    Ph.D., Nuclear Engineering & Radiological Sciences, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/86277/1/pbarton_1.pd
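The statistical detection process of a silicon photomultiplier can be illustrated with the standard microcell-saturation model, in which each cell of the array fires at most once per light pulse, so the response bends away from linearity as cells run out. The cell count and photon-detection efficiency below are assumed, illustrative values:

```python
import math

def fired_cells(n_photons, pde, n_cells):
    """Expected number of fired SiPM microcells for a short light pulse,
    using the standard saturation model: N_fired = M * (1 - exp(-n*PDE/M))."""
    return n_cells * (1.0 - math.exp(-n_photons * pde / n_cells))

# Weak light: response is nearly linear in photon number.
weak = fired_cells(100, 0.25, 3600)
# Strong light: response saturates toward the total cell count.
strong = fired_cells(1_000_000, 0.25, 3600)
print(f"{weak:.1f} cells fired (weak), {strong:.1f} cells fired (strong)")
```

At low intensity the expected count is close to n_photons × PDE; at high intensity it approaches the total number of microcells, which is one of the detection characteristics the dissertation argues must be understood for optimal use.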

    High Performance CMOS Range Imaging

    This work is dedicated to CMOS-based imaging, with emphasis on noise modeling, characterization and optimization, in order to contribute to the design of high-performance imagers in general and range imagers in particular. CMOS is known to be superior to CCD in terms of integration flexibility, but typically has to be enhanced to compete on parameters such as noise, dynamic range or spectral response. Temporal noise is an important topic, since it is one of the most crucial parameters that ultimately limits performance and cannot be corrected. This thesis gathers the widespread theory on noise and extends it by a non-rigorous but potentially computationally efficient algorithm to estimate noise in time-sampled systems. The available devices of the 0.35 µm 2P4M CMOS process were characterized for their low-frequency noise performance and mutually compared through heuristic observations and a comparison to the state of research. 
These investigations set the foundation for a more rigorous treatment of noise and are thus believed to improve the predictability of the performance of, e.g., image sensors. Many noise sources of CMOS active pixel sensors (APS) have been investigated in the past, and most of them can be minimized by using a pinned photodiode (PPD) as the photodetector. The remaining dominant noise sources are typically the reset noise and the noise of the readout circuitry. To improve the latter, an alternative JFET-based readout structure is proposed, which was designed, manufactured and measured, demonstrating low-frequency noise performance approximately a factor of 100 better than that of standard MOSFETs. Time-of-flight (ToF) is a key technology enabling new applications in, e.g., machine vision, automotive, surveillance or entertainment. The competing continuous-wave (CW) principle is known to be prone to errors introduced by, e.g., high ambient illuminance levels. The pulse-modulation (PM) ToF principle is considered a promising method to supply the need for depth-map perception in harsh environmental conditions, but it requires a high-speed photodetector. This work contributed to two generations of LDPD-based ToF range image sensors and proposed a new approach to implementing the MSI PM ToF principle, which was verified to yield significantly faster charge transfer as well as better linearity, dark-current and matching performance. A non-linear and time-variant model is provided that takes into account undesired phenomena such as the finite charge-transfer speed and a parasitic sensitivity to light when the shutters should remain OFF, allowing investigations of large-signal characteristics, sensitivity and precision. It was demonstrated that the model converges to a standard photodetector model and reproduces the measurements well. Finally, the impact of these undesired phenomena on the range-measurement performance is demonstrated.
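The pulse-modulated ToF principle can be illustrated with the generic two-shutter range estimator (a simplified sketch: the returning light pulse is integrated in two consecutive shutter windows, and the fraction of the echo spilling into the second window encodes the round-trip delay). This is a textbook illustration, not the MSI implementation of the sensors described above:

```python
C = 299_792_458.0  # speed of light, m/s

def pm_tof_distance(q1, q2, pulse_width):
    """Distance from charges integrated in two consecutive shutter windows of
    width pulse_width: q1 spans the emitted-pulse interval, q2 the interval
    immediately after. ratio = q2/(q1+q2) is the normalized echo delay."""
    total = q1 + q2
    if total <= 0:
        raise ValueError("no signal charge collected")
    return 0.5 * C * pulse_width * (q2 / total)

# Echo split equally between the shutters -> delay of half the pulse width.
d = pm_tof_distance(q1=500, q2=500, pulse_width=30e-9)
print(f"{d:.3f} m")
```

The ratio-based estimate cancels the unknown echo amplitude; effects such as finite charge-transfer speed and parasitic shutter sensitivity, which the thesis models explicitly, distort q1 and q2 and hence the recovered distance.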

    Reliability-aware memory design using advanced reconfiguration mechanisms

    Fast and complex data-memory systems have become a necessity in the computational units of today's integrated circuits. These memory systems are integrated in the form of large embedded memories for data manipulation and storage. This goal has been achieved by the aggressive scaling of transistor dimensions to few-nanometer (nm) sizes; such progress, however, comes with a drawback, making it critical to obtain high chip yields. Process variability, due to manufacturing imperfections, along with temporal aging, mainly induced by higher electric fields and temperatures, are two of the most significant threats that can no longer be ignored in nano-scale embedded memory circuits and can have a high impact on their robustness. Static Random Access Memory (SRAM) is one of the most used embedded memories; it is generally implemented with the smallest device dimensions, and therefore its robustness is highly important in the nanometer-scale design paradigm. Reliable operation needs to be considered and achieved both at the cell level and in the architectural design of SRAM arrays. Recently, with the approach of near/below-10 nm design generations, novel non-FET devices such as memristors are attracting great attention as possible candidates to replace conventional memory technologies. In spite of favorable characteristics such as low power consumption and high scalability, they also suffer from reliability challenges, such as process variability and endurance degradation, which need to be mitigated at the device and architectural levels. This thesis tackles the problem of reliability concerns in memories by utilizing advanced reconfiguration techniques. In both SRAM arrays and memristive crossbar memories, novel reconfiguration strategies are considered and analyzed that can extend the memory lifetime. 
These techniques include monitoring circuits to check the reliability status of the memory units, and architectural implementations that reconfigure the memory system into a more reliable configuration before a fail happens.
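The monitor-then-remap strategy can be sketched as a toy model: a hypothetical controller tracks a per-bank degradation metric and remaps a logical bank onto a spare once the metric reaches a predicted-failure threshold, i.e. before a fail happens. All names, metrics and thresholds below are illustrative assumptions, not the thesis's actual architecture:

```python
class ReconfigurableMemory:
    """Toy reliability-aware memory: a monitor accumulates per-bank wear and
    the controller remaps a logical bank to a spare at the threshold."""

    def __init__(self, n_banks, n_spares, threshold):
        self.wear = [0.0] * (n_banks + n_spares)   # monitored degradation
        self.mapping = list(range(n_banks))        # logical -> physical bank
        self.spares = list(range(n_banks, n_banks + n_spares))
        self.threshold = threshold

    def access(self, logical, stress=1.0):
        """Each access ages the physical bank; reconfigure when it nears failure."""
        phys = self.mapping[logical]
        self.wear[phys] += stress
        if self.wear[phys] >= self.threshold and self.spares:
            self.mapping[logical] = self.spares.pop(0)  # remap to a fresh spare

mem = ReconfigurableMemory(n_banks=4, n_spares=1, threshold=100.0)
for _ in range(100):
    mem.access(0)   # heavy traffic ages bank 0 until it is remapped
print("bank 0 now mapped to physical bank", mem.mapping[0])
```

In a real design the wear metric would come from on-chip monitors (e.g. aging sensors), but the control flow — monitor, compare against a reliability threshold, reconfigure proactively — is the same.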

    Advances in quantum tunneling models for semiconductor optoelectronic device simulation

    The undisputed role of solid-state optoelectronics nowadays covers a wide range of applications. Within this scenario, infrared (IR) detection is becoming crucial from the technological point of view, as well as for scientific purposes, from biology to aerospace. Its commercial and strategic role is confirmed by its spreading use for surveillance, clinical diagnostics, environmental analysis, national/private security, military purposes and quality control, as in the food industry. At the same time, solid-state lighting is emerging among the most efficient electronic applications of the modern era, with a billion-dollar business that is destined to increase in the next decades. The ongoing development of such technologies must be accompanied by sufficiently fast scientific progress, able to meet the growing demand for high-quality production standards and, as an immediate but not obvious consequence, the need for the highest possible performance. One issue affecting both kinds of applications is the quantum efficiency, no matter whether the signal they produce comes from absorbed or emitted photons. At any rate, the balance between the stimulus coming from the surrounding environment and the generated electrical current is absolutely crucial in every modern optoelectronic device. More specifically, since IR detectors are asked to convert photons into electrons, device designers must ensure that the mechanisms contributing to this conversion are dominant with respect to any competing phenomenon. Symmetrically, light-emitting diodes realize the inverse process, where electrons are converted into photons. In real life this mechanism never takes place in a one-to-one electron-photon correspondence. Indeed tunneling, a quantum effect related to the probabilistic nature of particles and, thus, also of charges, contributes to unbalancing this correspondence by degrading the signal produced within the device active region. 
In IR photodetectors this translates into a current that flows even in the absence of light (known, by virtue of this fact, as the "dark current"), while in light emitters tunneling is responsible for leakages that may undermine the quantum efficiency and the power consumption even below the optical turn-on. The present dissertation falls within this framework, being the result of studying and modeling different tunneling mechanisms occurring in narrow-gap infrared photodetectors (IRPDs) for mid-wavelength IR (MWIR) applications (3 to 5 µm) and in wide-gap blue LEDs (around 450 nm) based on the nitride material system. This study has been possible thanks to the collaboration with several academic institutions (Boston University and the Universities of Padua and of Modena and Reggio Emilia) and two major German companies, AIM Infrarot Module and OSRAM Opto Semiconductors, which provided the case-study devices analyzed here. After reviewing basic concepts of solid-state physics, the first part of this work deals with the description of the above-cited optoelectronic devices, along with their constituent materials: the HgCdTe alloy, in the case of photodetectors, and GaN and its ternary alloys with In and Al, for what concerns blue LEDs. Since the literature focusing on this research area is still not mature enough, in the second part different tunneling mechanisms and models are proposed, described in detail and then tested for the first time, as in the case of a novel formulation intended for direct tunneling in IRPDs, or the description of defect-assisted tunneling in LEDs, which also includes elements coming from the microscopic theory of multiphonon emission (MPE) in solids. Simulations are carried out by means of several numerical approaches, using both commercial TCAD (Technology Computer-Aided Design) tools and codes developed ad hoc for this purpose. 
The encouraging and fully satisfying results of the numerical modeling proposed here confirm, on the one hand, the widely accepted relevance of tunneling in modern electronics and, on the other hand, propose a new perspective on possible tunneling mechanisms in optoelectronic devices and on the appropriate physical, mathematical and numerical tools for their investigation. Furthermore, the role of device modeling does not end here, because many physical details and much technological information can be inferred from simulations, with enormous benefits for the electronic industry and for the quality improvement of its fabrication processes, such as those invoked above.
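As a minimal numerical illustration of why tunneling produces a current even without light, the textbook WKB transmission probability through a triangular barrier (the exponent behind Fowler-Nordheim-type direct tunneling) can be evaluated directly. The barrier height and effective mass below are assumed illustrative values, and this is not the novel formulation developed in the dissertation:

```python
import math

HBAR = 1.054_571_817e-34    # reduced Planck constant, J*s
Q    = 1.602_176_634e-19    # elementary charge, C
M0   = 9.109_383_7015e-31   # electron rest mass, kg

def wkb_triangular(phi_ev, field, m_rel=0.25):
    """WKB transmission probability through a triangular barrier of height
    phi_ev (eV) under an electric field `field` (V/m):
        T = exp(-4*sqrt(2*m)*phi^{3/2} / (3*hbar*q*F)),
    with m = m_rel * M0 the effective tunneling mass (assumed value)."""
    phi = phi_ev * Q                      # barrier height in joules
    m = m_rel * M0
    exponent = -4.0 * math.sqrt(2.0 * m) * phi**1.5 / (3.0 * HBAR * Q * field)
    return math.exp(exponent)

low  = wkb_triangular(3.1, 5e8)   # moderate field
high = wkb_triangular(3.1, 1e9)   # doubled field -> much higher probability
print(f"T(5e8 V/m) = {low:.3e},  T(1e9 V/m) = {high:.3e}")
```

The exponential sensitivity to the field explains why tunneling leakage grows so steeply with bias, both as dark current in IRPDs and as below-turn-on leakage in LEDs.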

    Impact of modulation instability on distributed optical fiber sensors

    Modulation instability (MI) as the main limit to the sensing distance of distributed fiber sensors is thoroughly investigated in this thesis in order to obtain a model for predicting its characteristics and alleviating its effects. Starting from Maxwell's equations in optical fibers, the nonlinear Schrödinger equation (NLSE) describing the propagation of wave envelope in nonlinear dispersive media is derived. As the main tool for analyzing modulation instability, the NLSE is numerically evaluated using the split-step Fourier method and its analytical closed-form solutions such as solitons are utilized to validate the numerical algorithms. As the direct consequence of the NLSE, self-phase modulation is utilized to measure the nonlinear coefficient of optical fibers via a self-aligned interferometer. The modulation instability gain is obtained by applying a linear stability analysis to the NLSE assuming a white background noise as the seeding for the nonlinear interaction. The MI gain spectrum is expressed by hyperbolic functions for lossless fibers and by Bessel functions with complex orders for fibers with attenuation. An approximate gain spectrum is presented for lossy fibers based on the gain in lossless optical fibers. The accuracy of the analytical results and approximate formulas is evaluated by performing Monte Carlo simulations on the NLSE. The impact of background noise on the onset and evolution of modulation instability is analytically investigated and experimentally demonstrated. Power depletion due to the nonlinear process of modulation instability is modeled by integrating its gain spectrum using Laplace's method. Based on that, a critical power for MI is proposed by introducing the notion of depletion ratio. The model is verified by numerical simulation and experimental measurement. An optimal input power for distributed fiber sensors is proposed to maximize the output optical power and thus, the far end signal-to-noise ratio. 
Furthermore, the recurrence phenomenon of Fermi-Pasta-Ulam is experimentally observed and numerically simulated, validating the utilized numerical techniques. A standard Brillouin optical time-domain analyser serves as the experimental test bench for the proposed model. As the physical phenomenon behind the experiment, stimulated Brillouin scattering is described based on a pump-probe interaction mediated by an acoustic wave. A 25 km single-mode fiber is employed as a nonlinear medium with anomalous dispersion at the pump wavelength of 1550 nm. The evolution of the pump power propagating along the fiber is mapped using the Brillouin interaction with the probe lightwave. The measured longitudinal power traces are processed to extract the impact of MI on the pump power. It is experimentally demonstrated that distributed fiber sensors with orthogonally-polarized pumps suffer less from modulation instability. As the scalar modulation instability of each pump is reduced, vector modulation instability occurs because of the interaction between the pumps; however, the overall performance improves. A version of the coupled nonlinear Schrödinger equations known as the Manakov system is shown to describe the behavior of two-pump distributed fiber sensors employing optical fibers with random birefringence. The excellent agreement between the experimental and numerical results indicates that the performance limit of two-pump distributed fiber sensors is determined by polarization modulation instability.
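The lossless-fiber MI gain spectrum obtained from the linear stability analysis can be evaluated directly. For anomalous dispersion (β2 < 0) the textbook result is g(Ω) = |β2·Ω|·sqrt(Ωc² − Ω²) with Ωc² = 4γP0/|β2|, whose peak value is 2γP0. The fiber parameters below are typical single-mode-fiber values near 1550 nm, not the thesis's measured ones:

```python
import numpy as np

def mi_gain(omega, beta2, gamma, power):
    """Scalar MI power-gain spectrum of a lossless fiber in the anomalous
    dispersion regime (beta2 < 0):
        g(Omega) = |beta2 * Omega| * sqrt(Omega_c^2 - Omega^2),
    with Omega_c^2 = 4*gamma*P0/|beta2|; zero outside the gain band."""
    oc2 = 4.0 * gamma * power / abs(beta2)
    band = np.clip(oc2 - omega**2, 0.0, None)   # clamp to zero outside the band
    return np.abs(beta2 * omega) * np.sqrt(band)

# Typical SMF parameters near 1550 nm (illustrative, not fitted values):
beta2, gamma, p0 = -21.7e-27, 1.3e-3, 0.1       # s^2/m, 1/(W*m), W
omega = np.linspace(0.0, 2.0e11, 200_001)       # angular frequency offset, rad/s
g = mi_gain(omega, beta2, gamma, p0)
print(f"peak gain {g.max():.3e} 1/m (theory: 2*gamma*P0 = {2 * gamma * p0:.3e} 1/m)")
```

The peak gain 2γP0 occurring at Ω = Ωc/√2 is what sets the pump power at which background noise is amplified into sidebands; the Bessel-function solution quoted above for lossy fibers generalizes this spectrum when attenuation is included.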