93 research outputs found

    Phase Locked Loop Test Methodology

    Phase-locked loops (PLLs) are incorporated into almost every large-scale mixed-signal and digital system on chip (SOC). Various types of PLL architectures exist, including fully analogue, fully digital, semi-digital, and software-based. Currently the most commonly used PLL architecture for SOC environments and chipset applications is the charge-pump (CP) semi-digital type. This architecture is commonly used for clock synthesis applications, such as the supply of a high-frequency on-chip clock derived from a low-frequency board-level clock. In addition, CP-PLL architectures are now frequently used for demanding radio frequency (RF) synthesis and data synchronization applications. On-chip system blocks that rely on correct PLL operation may include third-party IP cores, ADCs, DACs and user-defined logic (UDL). Essentially, any on-chip function that requires a stable clock is reliant on correct PLL operation. As a direct consequence, it is essential that the PLL function is reliably verified both during the design and debug phase and through production testing. This chapter focuses on test approaches for embedded CP-PLLs used for clock generation in SOCs; however, the methods discussed generally apply to CP-PLLs used for other applications.
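
    As a concrete illustration of the clock-synthesis role described above, the short sketch below computes the feedback divider of a hypothetical integer-N CP-PLL that multiplies a low-frequency board-level reference up to a high-frequency on-chip clock. The function and parameter names are illustrative, not taken from the chapter.

```python
# Minimal sketch of integer-N PLL clock synthesis (illustrative only).
# In lock the phase detector forces f_ref == f_out / N, hence f_out = N * f_ref.

def synthesize(f_ref_hz: float, f_target_hz: float) -> tuple[int, float]:
    """Pick the integer feedback divider N that best approximates f_target."""
    n = max(1, round(f_target_hz / f_ref_hz))
    return n, n * f_ref_hz

# Example: derive a ~1.6 GHz on-chip clock from a 25 MHz board-level clock.
n, f_out = synthesize(25e6, 1.6e9)
print(f"divider N = {n}, f_out = {f_out / 1e6:.1f} MHz")  # N = 64, 1600.0 MHz
```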

    Design and debugging of multi-step analog to digital converters

    With the fast advancement of CMOS fabrication technology, more and more signal-processing functions are implemented in the digital domain for lower cost, lower power consumption, higher yield, and higher re-configurability. The trend of increasing integration level for integrated circuits has forced the A/D converter interface to reside on the same silicon in complex mixed-signal ICs containing mostly digital blocks for DSP and control. However, specifications of the converters in various applications emphasize high dynamic range and low spurious spectral performance. It is nontrivial to achieve this level of linearity in a monolithic environment where post-fabrication component trimming or calibration is cumbersome to implement for certain applications and/or for cost and manufacturability reasons. Additionally, as CMOS integrated circuits reach unprecedented integration levels, potential problems associated with device scaling (the short-channel effects) loom large as technology strides into the deep-submicron regime. The A/D conversion process involves sampling the applied analog input signal and quantizing it to its digital representation by comparing it to reference voltages before further signal processing in subsequent digital systems. Depending on how these functions are combined, different A/D converter architectures can be implemented, with different requirements on each function. Practical realizations show that, to first order, converter power is directly proportional to sampling rate; however, the required power dissipation grows nonlinearly as the speed capabilities of a process technology are pushed to the limit. Pipeline and two-step/multi-step converters tend to be the most efficient at achieving a given resolution and sampling-rate specification. This thesis is unusual in covering the whole spectrum of design, test, debugging and calibration of multi-step A/D converters; it incorporates the development of circuit techniques and algorithms to enhance the resolution and attainable sample rate of an A/D converter and to enhance testing and debugging potential to detect errors dynamically, to isolate and confine faults, and to recover from and compensate for the errors continuously. The power efficiency attainable at high resolution in a multi-step converter by combining parallelism and calibration and exploiting low-voltage circuit techniques is demonstrated with a 1.8 V, 12-bit, 80 MS/s, 100 mW analog-to-digital converter fabricated in a five-metal-layer 0.18-µm CMOS process. Lower power supply voltages significantly reduce noise margins and increase variations in process, device and design parameters. Consequently, it is steadily more difficult to control the fabrication process precisely enough to maintain uniformity. Microscopic particles present in the manufacturing environment and slight variations in the parameters of manufacturing steps can all cause the geometrical and electrical properties of an IC to deviate from those generated at the end of the design process. These defects can cause various types of malfunctioning, depending on the IC topology and the nature of the defect. To relieve the burden placed on IC design and manufacturing by the ever-increasing costs associated with testing and debugging complex mixed-signal electronic systems, several circuit techniques and algorithms are developed and incorporated in the proposed ATPG, DfT and BIST methodologies.
    Process variation cannot be solved by improving manufacturing tolerances; variability must be reduced by new device technology or managed by design in order for scaling to continue. Similarly, within-die performance variation imposes new challenges for test methods. With the use of dedicated sensors, which exploit knowledge of the circuit structure and the specific defect mechanisms, the method described in this thesis facilitates early and fast identification of excessive process-parameter-variation effects. The expectation-maximization algorithm makes the estimation problem more tractable and also yields good parameter estimates for small sample sizes. To guide testing with the information obtained through monitoring process variations, an adjusted support vector machine classifier is implemented that simultaneously minimizes the empirical classification error and maximizes the geometric margin. On a positive note, the use of digital enhancing calibration techniques reduces the need for expensive technologies with special fabrication steps. Indeed, the extra cost of digital processing is normally affordable, as the use of submicron mixed-signal technologies allows for efficient usage of silicon area even for relatively complex algorithms. The adaptive filtering algorithm employed for error estimation requires only a small number of operations per iteration and needs neither correlation function calculation nor matrix inversions. The presented foreground calibration algorithm does not need any dedicated test signal and does not consume any part of the conversion time; it works continuously and with every signal applied to the A/D converter. The feasibility of the method for on-line and off-line debugging and calibration has been verified by experimental measurements from the silicon prototype fabricated in a standard single-poly, six-metal 0.09-µm CMOS process.
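
    The abstract characterizes the error-estimation filter only by its cost profile: few operations per iteration, no correlation functions, no matrix inversions. The least-mean-squares (LMS) filter is a standard algorithm with exactly these properties, so the sketch below uses it as a stand-in; the thesis' actual filter may differ.

```python
import numpy as np

def lms(x, d, taps=8, mu=0.01):
    """LMS adaptation: O(taps) work per sample, no correlation matrix needed."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        window = x[n - taps + 1:n + 1][::-1]  # x[n], x[n-1], ..., x[n-taps+1]
        e = d[n] - w @ window                 # estimation error
        w += 2 * mu * e * window              # stochastic-gradient update
    return w

# Toy usage: identify an unknown 3-tap response from input/output samples.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
d = np.convolve(x, [0.5, -0.3, 0.1])[:len(x)]
print(np.round(lms(x, d)[:3], 2))  # converges towards [0.5, -0.3, 0.1]
```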

    Design for testability of implantable biomedical systems

    General architecture of implantable systems -- Principles of electrical stimulation -- Fields of application of implantable systems -- Particularities of implantable circuits -- Future trends -- Design for testability of the digital part of implantable circuits -- Design and realization of an accurate built-in current sensor for Iddq testing and power dissipation measurement -- Design for testability of the analog part of implantable circuits -- BIST for digital-to-analog and analog-to-digital converters -- Efficient and accurate testing of analog-to-digital converters using the oscillation test method -- Design for testability of embedded integrated operational amplifiers -- Verification of the bioelectronic interfaces of implantable systems -- Monitoring the electrode and lead failures in implanted microstimulators and sensors -- Integrated temperature sensors for verifying the thermal state of dedicated chips -- Built-in temperature sensors for on-line thermal monitoring of microelectronic structures -- A reliable communication protocol for the programming and telemetry of implantable systems -- A reliable communication protocol for externally controlled biomedical implanted devices

    Analog System-on-a-Chip with Application to Biosensors

    This dissertation facilitates the design and fabrication of analog systems-on-a-chip (SoCs). In this work an analog SoC is developed with application to organic fluid analysis. The device contains a built-in self-test method for performing on-chip analysis of analog macros. The analog system-on-a-chip developed in this dissertation can be used to evaluate the properties of fluids for medical diagnoses. The research described herein covers the development of analog SoC models, an improved set of chemical sensor arrays, a self-contained system-on-a-chip for the determination of fluid properties, and a method of performing on-chip testing of analog SoC sub-blocks.

    On-line health monitoring of passive electronic components using digitally controlled power converter

    This thesis presents System Identification based On-Line Health Monitoring to analyse the dynamic behaviour of the switch-mode power converter (SMPC) and to detect and diagnose anomalies in passive electronic components. Anomaly detection in this research is based on examining the change in passive component values due to degradation. Degradation, which is a long-term process, is characterised here by inserting components of different values into the power converter. The novel health-monitoring capability enables accurate detection of degraded passive electronic components despite component variations and uncertainties, and is valid for different topologies of the switch-mode power converter. The need for a novel on-line health-monitoring capability is driven by the need to reduce unscheduled in-service, logistics, and engineering costs, as well as by the requirement for Integrated Vehicle Health Management (IVHM) of electronic systems and components. The detection and diagnosis of degradation and failures within power converters is of great importance for aircraft electronics manufacturers, such as Thales, where component failures result in equipment downtime and large maintenance costs. The fact that existing techniques, including built-in self-test, dedicated sensors, physics-of-failure, and data-driven health monitoring, have yet to see extensive application in IVHM provides the motivation for this research ... [cont.]
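
    The thesis body is not reproduced here, but the core idea, fitting a model to converter input/output data and flagging parameter drift, can be sketched as below. The first-order ARX model, the pole threshold and all signal names are illustrative assumptions, not the author's actual implementation.

```python
import numpy as np

# Hedged sketch of system-identification health monitoring: fit the ARX model
# y[n] = a*y[n-1] + b*u[n-1] to converter samples, then flag a drift in the
# estimated pole 'a' as a possible passive-component change.

def fit_arx1(u, y):
    """Least-squares estimate of (a, b) in y[n] = a*y[n-1] + b*u[n-1]."""
    phi = np.column_stack([y[:-1], u[:-1]])            # regressor matrix
    theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
    return theta                                       # [a, b]

def health_check(a_est, a_nominal, tol=0.05):
    """Declare an anomaly if the pole moved more than 'tol' from nominal."""
    return abs(a_est - a_nominal) > tol

# Toy usage: a healthy plant with pole 0.90 vs. a degraded one with pole 0.80
# (e.g. a reduced output capacitance shortens the time constant).
rng = np.random.default_rng(1)
u = rng.standard_normal(2000)
for true_a in (0.90, 0.80):
    y = np.zeros_like(u)
    for n in range(1, len(u)):
        y[n] = true_a * y[n - 1] + 0.5 * u[n - 1]
    a, b = fit_arx1(u, y)
    print(f"pole = {a:.3f}, anomaly: {health_check(a, a_nominal=0.90)}")
```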

    Fault-tolerant fpga for mission-critical applications.

    Field Programmable Gate Arrays (FPGAs) play a great role in electronic circuit design, specifically in safety-critical applications, because of their high performance, re-configurability and low development cost. FPGAs are used in many applications such as data processing, networking, automotive, space and industrial applications. Moving to smaller feature sizes in the latest FPGA architectures negatively impacts the reliability of such applications. This increases the need for fault-tolerant techniques to improve reliability and extend the system lifetime of FPGA-based applications. In this thesis, two fault-tolerant techniques for FPGA-based applications are proposed, each with a built-in fault detection region. A low-cost fault detection scheme is proposed for detecting faults using the fault detection region shared by both schemes. The fault detection scheme primarily detects open faults in the programmable interconnect resources of the FPGA; in addition, stuck-at faults and single event upset (SEU) faults can be detected. For fault recovery, each scheme has its own approach. The first approach uses a spare module and a 2-to-1 multiplexer to recover from any detected fault. The second approach recovers from any detected fault using the partial reconfiguration (PR) capability of the FPGA. It relies on identifying a partially reconfigurable block (P_b) in the FPGA that is used in the recovery process after the first faulty module is identified in the system. This technique uses only one location to recover from faults in any of the FPGA's modules and the FPGA interconnects. Simulation results show that both techniques can detect and recover from open faults, and can also detect stuck-at faults and single event upsets. Finally, both techniques require low area overhead.
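
    A behavioural sketch of the first recovery scheme may help: a detection region flags a mismatch on the active module's output and a 2-to-1 multiplexer then steers the circuit to the spare. This is a software toy under invented behaviour, not the thesis' FPGA implementation, and it simplifies detection to a golden-reference comparison rather than interconnect open-fault sensing.

```python
# Toy model: primary/spare modules plus a detection region and a 2-to-1 mux.

def primary(x):                # primary module with an injected fault
    return x + 1 if x < 5 else 0

def spare(x):                  # identical spare module, assumed fault-free
    return x + 1

def reference(x):              # expected behaviour used by the detection region
    return x + 1

fault_detected = False
for x in range(10):
    out = spare(x) if fault_detected else primary(x)  # 2-to-1 multiplexer
    if not fault_detected and out != reference(x):    # detection region fires
        fault_detected = True                         # latch the fault flag
        out = spare(x)                                # recover this sample too
    print(x, out)                                     # output stays correct
```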

    Algorithms for Verification of Analog and Mixed-Signal Integrated Circuits

    Over the past few decades, the tremendous growth in the complexity of analog and mixed-signal (AMS) systems has posed great challenges to AMS verification, resulting in a rapidly growing verification gap. Existing formal methods provide appealing completeness and reliability, yet they suffer from limited efficiency and scalability. Data-oriented, machine-learning-based methods offer efficient and scalable solutions but do not guarantee completeness or full coverage. Additionally, the trend towards shorter time-to-market for AMS chips urges the development of efficient verification algorithms to accelerate the joint design and testing phases. This dissertation envisions a hierarchical and hybrid AMS verification framework that consolidates assorted algorithms to embrace efficiency, scalability and completeness in a statistical sense. Leveraging the advantages of various verification techniques, this dissertation develops algorithms in several categories. In the context of formal methods, it proposes a generic and comprehensive model abstraction paradigm to model AMS content with a unifying analog representation. Moreover, an algorithm is proposed to parallelize reachability analysis by decomposing AMS systems into subsystems of lower complexity and dividing the exploration of the circuit's reachable state space, formulated as a satisfiability problem, into subproblems with a reduced number of constraints. The proposed modeling method and the hierarchical parallelization enhance the efficiency and scalability of reachability analysis for AMS verification. On the subject of learning-based methods, the dissertation proposes to convert the verification problem into a binary classification problem solved using support vector machine (SVM) based learning algorithms. To reduce the number of simulations needed for training-sample collection, an active learning strategy based on probabilistic version-space reduction is proposed to perform adaptive sampling. An extension of the active learning strategy for conservative prediction is leveraged to minimize the occurrence of false negatives. Moreover, another learning-based method is proposed to characterize AMS systems with a sparse Bayesian learning regression model. An implicit feature-weighting mechanism based on the kernel method is embedded in the Bayesian learning model for concurrent quantification of the influence of circuit parameters on the targeted specification, which can be solved efficiently in an iterative manner similar to the expectation-maximization (EM) algorithm. In addition, the achieved sparse parameter weighting offers favorable assistance to design analysis and test optimization.
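
    As a hedged illustration of the classification view of verification, the sketch below trains an SVM on pass/fail labels over a two-dimensional parameter space and queries new samples near the decision boundary. The margin-based query rule is a simple stand-in for the dissertation's probabilistic version-space reduction, and the pass/fail "simulation" is invented.

```python
import numpy as np
from sklearn.svm import SVC

def spec_pass(p):              # invented pass/fail "simulation" of a spec
    return (p[:, 0] ** 2 + p[:, 1] ** 2 < 1.0).astype(int)

rng = np.random.default_rng(2)
g = np.linspace(-2, 2, 5)
seed = np.array([[a, b] for a in g for b in g])       # coarse seed grid
pool = np.vstack([seed, rng.uniform(-2, 2, (2000, 2))])
train = list(range(len(seed)))                        # start from the grid

for _ in range(15):                                   # active-learning rounds
    clf = SVC(kernel="rbf").fit(pool[train], spec_pass(pool[train]))
    margin = np.abs(clf.decision_function(pool))      # distance to boundary
    margin[train] = np.inf                            # skip labeled points
    train.append(int(np.argmin(margin)))              # query most uncertain

acc = (clf.predict(pool) == spec_pass(pool)).mean()
print(f"boundary recovered with {acc:.1%} agreement on the pool")
```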

    Enhancement and validation of a test technique for integrated circuits

    This thesis focuses on a scan-based delay testing technique recently developed at ETS. This approach, called Captureless Delay Testing (CDT), has been proposed as a complement to traditional test methods, ensuring that integrated circuits will function at their intended clock speed and further improving the coverage of this type of test. CDT incorporates sensors that detect the presence of transitions at strategic locations. The purpose of this project is to improve certain aspects of this novel technique. First, we analyze the delay distribution of the nodes not covered by traditional test methods in order to develop the best possible placement of the CDT sensors. We present the set of tools, developed in Perl, that were built for this purpose. The results confirm that the paths passing through the non-covered nodes are longer than those traversing the covered ones; the difference between the two types of paths exceeds 20% of the clock period when considering the shorter path delay values. Second, we propose a fully automated algorithm that enables, at the earliest stages of the test-vector generation process: 1) the identification of the non-covered nodes, 2) the identification of placements for the CDT sensors at the inputs of flip-flops to further improve test coverage, and 3) the minimization of the number of sensors with regard to requirements. Our results indicate that applying CDT on top of the transition-based fault model improves test coverage by 5%. Moreover, the CDT sensor minimization algorithm reduces the number of sensors by more than 85% with a minimal test coverage loss of 1.6% on average.
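
    The thesis' minimization algorithm is not reproduced in the abstract; one natural reading of step 3 is a set-cover problem, where each candidate sensor at a flip-flop input observes some set of non-covered nodes. The hedged sketch below solves that formulation greedily, with invented names and data.

```python
def minimize_sensors(candidates, targets):
    """Greedy set cover: candidates maps sensor -> set of nodes it observes."""
    chosen, uncovered = [], set(targets)
    while uncovered:
        best = max(candidates, key=lambda s: len(candidates[s] & uncovered))
        gain = candidates[best] & uncovered
        if not gain:             # remaining nodes observable by no sensor
            break
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered

# Toy usage with invented flip-flop sensor sites and node sets.
cands = {"ff1": {"n1", "n2", "n3"}, "ff2": {"n3", "n4"}, "ff3": {"n4", "n5", "n6"}}
print(minimize_sensors(cands, {"n1", "n2", "n3", "n4", "n5", "n6"}))
# -> (['ff1', 'ff3'], set()): two sensors instead of three, no coverage loss
```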

    Modelling methods for testability analysis of analog integrated circuits based on pole-zero analysis

    Testability analysis for analog circuits provides valuable information for designers and test engineers, including the number of testable and non-testable elements of a circuit, the ambiguity groups, and the nodes to be tested. This information is useful for solving the fault diagnosis problem. In order to verify the functionality of analog circuits, a large number of specifications have to be checked. However, checking all circuit specifications can result in prohibitive testing times on expensive automated test equipment. Therefore, the test engineer has to select a finite subset of specifications to be measured; this subset must reduce the test time while guaranteeing that no faulty chips are shipped. This research develops a novel methodology for testability analysis of linear analog circuits based on pole-zero analysis and pole-zero sensitivity analysis. Based on this methodology, a new interpretation of ambiguity groups is provided, grounded in circuit theory. The testability analysis methodology can be employed as a guideline for constructing fault diagnosis equations and for selecting test nodes. We also propose an algorithm for selecting the specifications that need to be measured, and introduce the concept of element testability, which quantifies the degree of difficulty in testing circuit elements. The value of the element testability can easily be obtained from the pole sensitivities, and the specifications to be measured can then be selected based on this concept. Consequently, the selected measurements can be used to reduce test time without sacrificing fault coverage while maximizing the information available for fault diagnosis.
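
    To make the pole-sensitivity idea concrete, the sketch below computes normalized pole sensitivities for a circuit small enough to check by hand: a first-order RC low-pass with the single pole p = -1/(RC). The example is ours, not from the thesis, but it shows how equal sensitivities signal an ambiguity group.

```python
import sympy as sp

R, C = sp.symbols("R C", positive=True)
p = -1 / (R * C)                       # pole of H(s) = 1 / (1 + s*R*C)

for name, x in (("R", R), ("C", C)):
    sens = sp.simplify((x / p) * sp.diff(p, x))   # normalized sensitivity S_x^p
    print(f"S_{name}^p = {sens}")                 # both print -1

# Equal pole sensitivities mean R and C form an ambiguity group: measuring
# this pole alone cannot distinguish a drift in R from a drift in C.
```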

    Commissioning Perspectives for the ATLAS Pixel Detector

    The ATLAS Pixel Detector, the innermost sub-detector of the ATLAS experiment at the Large Hadron Collider, CERN, is an 80-million-channel silicon pixel tracking detector designed for high-precision charged-particle tracking and secondary-vertex reconstruction. It has been installed in the ATLAS experiment, and commissioning for the first proton-proton collision data taking in 2008 has begun. Due to the complex layout and limited accessibility, quality assurance measurements were performed continuously during production and assembly to ensure that no problematic components were integrated. The assembly of the detector at CERN and the related quality assurance measurement results, including comparisons to previous production measurements, are presented. In order to verify that the integrated detector, its data acquisition readout chain, the ancillary services and cooling system, as well as the detector control and data acquisition software, perform together as expected, approximately 8% of the detector system was progressively assembled as close to the final layout as possible. This so-called System Test laboratory setup was operated for several months under experiment-like environmental conditions. The interplay between different detector components was studied with a focus on the performance and tunability of the optical data transmission system. Operation and optical tuning procedures were developed and qualified for the upcoming commissioning. The front-end electronics preamplifier threshold tuning and noise performance were studied, and the noise occupancy of the detector at low sensor bias voltages was investigated. Data taking with cosmic muons was performed to test the data acquisition and trigger system as well as the offline reconstruction and analysis software. The data quality was verified with an extended version of the pixel online monitoring package, which was implemented for the ATLAS Combined Testbeam. The detector raw data of the Combined Testbeam and of the System Test cosmic run were converted for offline data analysis with the Pixel bytestream converter, which was continuously extended and adapted according to the offline analysis software needs.