
    Analog CMOS Readout Channel for Time and Amplitude Measurements With Radiation Sensitivity Analysis for Gain-Boosting Amplifiers

    The front-end readout channel consists of a charge-sensitive amplifier (CSA) and two different unipolar shaping circuits that generate pulses suitable for time and energy measurement. The signal processing chain of the single channel is built from two parallel processing paths: a fast path with a peaking time of 30 ns to obtain the time of arrival of each particle impinging on the detector, and a slow path with a peaking time of 400 ns dedicated to low-noise amplitude measurements, formed by a pole-zero cancellation circuit and a 4th-order complex shaper based on a bridged-T architecture. The system is tuned through the discharge time constant of the CSA in order to accommodate various event rates. The readout system has been implemented in a 180 nm CMOS technology with a size of 525 μm × 290 μm. The building blocks use compact gain-boosting techniques based on quasi-floating-gate (QFG) transistors, achieving accurate energy measurement with good resolution. The high-impedance nodes of QFG transistors require a detailed study of sensitivity to single-event transients (SET). Building on this study, the paper proposes a method to select the value of the QFG capacitors that minimizes area occupancy while maintaining robustness to radiation. The nonlinearity of the CSA-slow-shaper path has been found to be less than 1% over a 10–70 fC input charge range. The power dissipation of the readout channel is 4.1 mW with a supply voltage of 1.8 V. Funding: Ministerio de Ciencia, Innovación y Universidades PGC2018-095640-B-I00; Consejería de Transformación Económica, Industria, Conocimiento y Universidades P18-FR-3852 and P18-FR-431.
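    As a rough numerical illustration of the dual-path shaping described above, the sketch below models each path as an idealized semi-Gaussian CR-RC^n shaper, whose peaking time equals the shaping order times the time constant. The orders, time constants, and the 40 fC input charge are assumptions chosen only to reproduce the quoted 30 ns and 400 ns peaking times; the actual slow path uses a pole-zero cancellation stage and a bridged-T complex shaper rather than this ideal form.

```python
import numpy as np

def crrc_output(t, q_in, tau, order):
    """Idealized semi-Gaussian CR-RC^n response to an impulse of charge q_in.

    The output is normalized so that it peaks at q_in; the peaking time of a
    CR-RC^n shaper is order * tau.
    """
    x = t / tau
    return q_in * (x / order) ** order * np.exp(order - x)

t = np.linspace(0.0, 2e-6, 4001)        # 0 .. 2 us time axis, 0.5 ns steps
q = 40e-15                              # 40 fC input charge (illustrative)

fast = crrc_output(t, q, tau=30e-9, order=1)    # ~30 ns peaking time
slow = crrc_output(t, q, tau=100e-9, order=4)   # ~400 ns peaking time

print("fast path peaks at %.0f ns" % (t[np.argmax(fast)] * 1e9))
print("slow path peaks at %.0f ns" % (t[np.argmax(slow)] * 1e9))
```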

    Single event upset hardened embedded domain specific reconfigurable architecture


    Analog-Digital System Modeling for Electromagnetic Susceptibility Prediction

    The thesis focuses on the noise susceptibility of communication networks. These analog mixed-signal systems operate in an electrically noisy environment, in the presence of multiple pieces of equipment connected by long wiring. Every module communicates through a transceiver that interfaces the local digital signaling with the data transmission over the network. Hence, the performance of the IC transceiver when affected by disturbances is one of the main factors that determine the EM immunity of the whole equipment. The susceptibility to RF and transient disturbances is addressed at the component level on a CAN transceiver as a test case, highlighting the IC features critical for noise immunity. A novel procedure is proposed for IC modeling in mixed-signal immunity simulations of communication networks. The procedure is based on a gray-box approach, modeling IC ports with a physical circuit and the internal links with a behavioural block. The parameters are estimated from time- and frequency-domain measurements, allowing accurate and efficient reproduction of non-linear device switching behaviours. The effectiveness of the modeling process is verified by applying the proposed technique to a CAN transceiver involved in a real immunity test on a data communication link. The resulting model is successfully implemented in a commercial solver to predict both the functional signals and the RF noise immunity at the component level. The noise immunity at the system level is then evaluated on a complete communication network by analyzing the results of several tests on a realistic CAN bus. After developing models for the wires and injection probes, a noise immunity test in an avionic environment is carried out in simulation, showing good overall accuracy and efficiency.
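    A minimal structural sketch of such a gray-box model is shown below, assuming a simple clamped port stage as the physical part and a differential threshold comparator as the behavioural link for the receive path. All element values, the 0.9 V differential threshold, and the delay are illustrative placeholders, not parameters extracted by the procedure described in the thesis.

```python
# Structural sketch of a gray-box transceiver model: a physical port network
# (series R, pin C, clamping) in front of a behavioural block (differential
# threshold comparator).  All values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PortNetwork:
    r_series: float = 40.0      # ohms, assumed series resistance of the pin
    c_pin: float = 20e-12       # farads, assumed pin capacitance
    v_clamp: float = 5.5        # volts, assumed protection clamp level

    def clamp(self, v: float) -> float:
        """Limit the port voltage to the assumed protection range."""
        return max(-self.v_clamp, min(self.v_clamp, v))

@dataclass
class BehaviouralLink:
    v_th_dominant: float = 0.9   # V differential threshold (typical CAN value)
    delay_s: float = 120e-9      # assumed propagation delay (unused in this static example)

    def rxd(self, v_canh: float, v_canl: float) -> int:
        """Map the differential bus voltage to the digital RXD level."""
        return 0 if (v_canh - v_canl) > self.v_th_dominant else 1

# Example: a dominant bus state (CANH = 3.5 V, CANL = 1.5 V) decodes to RXD = 0.
port = PortNetwork()
link = BehaviouralLink()
print(link.rxd(port.clamp(3.5), port.clamp(1.5)))   # -> 0
```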

    Simulation of charge-trapping in nano-scale MOSFETs in the presence of random-dopants-induced variability

    The growing variability of electrical characteristics is a major issue associated with the continuous downscaling of contemporary bulk MOSFETs. In addition, the operating conditions brought about by these same scaling trends have pushed MOSFET degradation mechanisms such as Bias Temperature Instability (BTI) to the forefront as a critical reliability threat. This thesis investigates the impact of this ageing phenomenon, in conjunction with device variability, on key MOSFET electrical parameters. A three-dimensional drift-diffusion approximation is adopted as the simulation approach, with random dopant fluctuations, the dominant source of statistical variability, included in the simulations. The testbed device is a realistic 35 nm physical gate length n-channel conventional bulk MOSFET. In total, 1000 microscopically different implementations of the transistor are simulated and subjected to charge trapping at the oxide interface. The statistical simulations reveal relatively rare but very large threshold voltage shifts, with magnitudes more than 3 times larger than that predicted by the conventional theoretical approach. The physical origin of this effect is investigated in terms of the electrostatic influence of the random dopants and trapped charges on the channel electron concentration. Simulations with progressively increased trapped-charge densities, emulating the characteristic condition of BTI degradation, result in further broadening of the threshold voltage distribution. Weak correlations of the order of 10⁻² are found between the pre-degradation threshold voltage and post-degradation threshold voltage shift distributions. The importance of accounting for random dopant fluctuations in the simulations is emphasised in order to obtain qualitative agreement between simulation results and published experimental measurements. Finally, the information gained from these device-level physical simulations is integrated into statistical compact models, making it available to circuit designers.
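    To make the reported weak correlation concrete, the sketch below shows how a correlation coefficient between the pre-degradation threshold voltage and the trapping-induced threshold voltage shift would be computed over an ensemble of device instances. The synthetic distributions stand in for the 1000 simulated transistors and are not the thesis results.

```python
import numpy as np

rng = np.random.default_rng(0)
n_devices = 1000                       # ensemble size, as in the thesis

# Synthetic stand-ins for the simulated ensemble (illustrative numbers only):
# pre-degradation Vth spread from random dopant fluctuations, and a
# heavy-tailed Delta-Vth from discrete charge trapping.
vth_fresh = 0.30 + rng.normal(0.0, 0.030, n_devices)   # volts
delta_vth = rng.exponential(0.005, n_devices)          # volts

# Pearson correlation between the fresh Vth and the BTI-induced shift.
r = np.corrcoef(vth_fresh, delta_vth)[0, 1]
print(f"correlation(Vth_fresh, dVth) = {r:+.3f}")

# Large but rare shifts show up in the tail of the Delta-Vth distribution.
print(f"mean dVth = {delta_vth.mean()*1e3:.1f} mV, "
      f"max dVth = {delta_vth.max()*1e3:.1f} mV")
```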

    Statistical Reliability Estimation of Microprocessor-Based Systems

    What is the probability that the execution state of a given microprocessor running a given application is correct, in a certain working environment with a given soft-error rate? Trying to answer this question using fault injection can be very expensive and time consuming. This paper proposes the baseline for a new methodology, based on microprocessor error-probability profiling, that aims at estimating fault injection results without the need for a typical fault injection setup. The proposed methodology is based on two main ideas: a one-time fault-injection analysis of the microprocessor architecture to characterize the probability of successful execution of each of its instructions in the presence of a soft error, and a static and very fast analysis of the control and data flow of the target software application to compute its probability of success. The presented work goes beyond the dependability evaluation problem; it also has the potential to become the backbone for new tools able to help engineers choose the best hardware and software architecture to structurally maximize the probability of a correct execution of the target software.
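    A toy numerical sketch of the idea follows: per-instruction-class probabilities of producing a correct result when hit by a soft error (hypothetical numbers, not the paper's profiling results) are combined with a static instruction mix of the application to estimate the probability that an execution struck by a single soft error still completes correctly. The single-error, uniform-strike assumption is an illustrative simplification.

```python
# Toy sketch of the two-step estimation described above: (1) a one-time
# per-instruction error-probability profile, (2) a static instruction mix
# of the target application.  All numbers are hypothetical placeholders.

# Probability that an instruction of each class masks a soft error that
# strikes it during execution (hypothetical profile of the microprocessor).
p_survive = {
    "alu":    0.62,
    "load":   0.35,
    "store":  0.48,
    "branch": 0.55,
}

# Static instruction counts of the target application (hypothetical mix
# obtained from control- and data-flow analysis).
mix = {"alu": 120_000, "load": 40_000, "store": 25_000, "branch": 30_000}
total = sum(mix.values())

# Assuming a single soft error strikes a uniformly random executed
# instruction, the probability that the run is still correct is the
# mix-weighted average of the per-class survival probabilities.
p_correct_run = sum(p_survive[c] * n for c, n in mix.items()) / total
print(f"P(correct execution | one soft error) = {p_correct_run:.3f}")
```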