28 research outputs found

    Towards an embedded board-level tester: study of a configurable test processor

    The demand for electronic systems with more features, higher performance, and lower power consumption increases continuously. This is a real challenge for design and test engineers, who must deal with electronic systems of ever-increasing complexity while keeping production and test costs low and meeting critical time-to-market deadlines. For a test engineer working at the board level, this means that manufacturing defects must be detected as early as possible and at low cost. However, classical test techniques are not sufficient for testing modern printed circuit boards, and in the worst case they cannot be used at all. This is mainly due to modern packaging technologies, high device density, and the high operating frequencies of modern printed circuit boards, which lead to very long test times, low fault coverage, and high test costs. This dissertation addresses these issues and proposes an FPGA-based test approach for printed circuit boards. The concept is based on a configurable test processor that is temporarily implemented in the on-board FPGA and provides the corresponding mechanisms to communicate with external test equipment and with co-processors implemented in the FPGA. This embedded test approach provides the flexibility to implement test functions either in the external test equipment or in the FPGA. In this manner, tests are executed at speed, increasing fault coverage; test times are reduced; and the test system can be adapted automatically to the properties of the FPGA and the devices located on the board. An essential part of the FPGA-based test approach is the development of a test processor. This dissertation discusses the required properties of the processor and shows that adaptation to the specific test scenario plays a very important role in optimization. For this purpose, the test processor is equipped with configuration parameters at the instruction-set-architecture and microarchitecture levels. Additionally, an automatic generation process for the test system and for the computation of some of the processor's configuration parameters is proposed. The automatic generation process takes as input a model known as the device under test model (DUT-M). In order to evaluate the entire FPGA-based test approach and the viability of a processor for testing printed circuit boards, the developed test system is used to test interconnections to two different devices: a static random-access memory (SRAM) and a liquid crystal display (LCD). Experiments were conducted to determine the resource utilization of the processor and the FPGA-based test system, and to measure test time when different test functions are implemented in the external test equipment or in the FPGA. The results show that the introduced approach is suitable for testing printed circuit boards and that the test processor represents a realistic alternative for board-level testing.
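    The abstract does not spell out the DUT-M format or the processor's parameter set, so the following Python sketch is purely illustrative: it shows how a simple device model could drive the automatic computation of a few plausible configuration parameters (datapath width, address-register width, wait states). All field and function names are hypothetical.

```python
# Illustrative sketch only: the dissertation's DUT-M format and the processor's
# actual configuration parameters are not specified here, so every field below
# is a hypothetical stand-in.
import math
from dataclasses import dataclass

@dataclass
class DutModel:
    """Hypothetical device-under-test model (DUT-M) for one on-board device."""
    name: str
    data_width: int        # width of the device's data bus, in bits
    address_width: int     # width of the device's address bus, in bits
    access_time_ns: float  # worst-case access time of the device

@dataclass
class TestProcessorConfig:
    """Hypothetical ISA/microarchitecture parameters derived from the DUT-Ms."""
    datapath_width: int
    address_register_width: int
    wait_states: int

def derive_config(duts: list[DutModel], clk_period_ns: float) -> TestProcessorConfig:
    """Size the processor for the widest device and insert enough wait states
    to cover the slowest device at the chosen test clock."""
    return TestProcessorConfig(
        datapath_width=max(d.data_width for d in duts),
        address_register_width=max(d.address_width for d in duts),
        wait_states=max(math.ceil(d.access_time_ns / clk_period_ns) for d in duts),
    )

# The two board devices used in the evaluation were an SRAM and an LCD;
# the numbers here are invented for the example.
sram = DutModel("SRAM", data_width=16, address_width=18, access_time_ns=10)
lcd = DutModel("LCD", data_width=8, address_width=1, access_time_ns=100)
print(derive_config([sram, lcd], clk_period_ns=20))
```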

    High Quality Test Generation Targeting Power Supply Noise

    Delay test is an essential structural manufacturing test used to determine the maximal frequency at which a chip can run without incurring functional failures. The central unsolved challenge is achieving high delay correlation with the functional test, which is dominated by power supply noise (PSN). Differences in PSN between functional and structural tests can lead to differences in chip operating frequencies of 30% or more. Pseudo-functional test (PFT), based on a multiple-cycle clocking scheme, has better PSN correlation with functional test than traditional two-cycle at-speed test. However, PFT is vulnerable to under-testing when applied to delay test. This work aims to generate high-quality PFT patterns that achieve high PSN correlation with functional test. First, a simulation-based don't-care filling algorithm, Bit-Flip, is proposed to improve the PSN for PFT. It relies on randomly flipping a group of bits in the test pattern to explore the search space and find patterns that stress the circuits with worst-case, but close to functional, PSN. Experimental results on un-compacted patterns show Bit-Flip is able to improve PSN by as much as 38.7% compared with the best random fill. Second, techniques are developed to improve the efficiency of Bit-Flip. A set of partial patterns, which sensitize transitions on critical cells, are pre-computed and later used to guide the selection of bits to flip. Combining random and deterministic flipping, we achieve similar PSN control to Bit-Flip but with much less simulation time. Third, we address the problem of automatic test pattern generation for extracting circuit timing sensitivity to power supply noise during post-silicon validation. A layout-aware path-selection algorithm selects long paths that fully span the power delivery network. The selected patterns are intelligently filled to bring the PSN to a desired level. These patterns can be used to understand timing sensitivity in post-silicon validation by repeatedly applying the path delay test while sweeping the PSN experienced by the path from low to high. Finally, the impact of compression on power supply noise control is studied. Illinois Scan and embedded deterministic test (EDT) patterns are generated, and Bit-Flip is extended to incorporate the compression constraints and applied to compressible patterns. The experimental results show that EDT lowers the maximal PSN by 24.15% and Illinois Scan lowers it by 2.77% on un-compacted patterns.
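    As a rough illustration of the Bit-Flip idea, the sketch below (Python, not from the paper) repeatedly flips a random group of don't-care bits and keeps a candidate whenever the estimated PSN improves. The `estimate_psn` callable is a placeholder for the paper's circuit simulation, and the loop simply maximizes PSN rather than also bounding it near the functional level as the paper does.

```python
import random

def bit_flip_fill(pattern, dont_care_bits, estimate_psn,
                  iterations=1000, group_size=8):
    """Sketch of a Bit-Flip style don't-care filling loop.

    pattern        -- list of 0/1 bits (don't-cares already given some value)
    dont_care_bits -- indices that were X in the original test cube
    estimate_psn   -- placeholder for the paper's PSN simulation
    """
    best = list(pattern)
    best_psn = estimate_psn(best)
    for _ in range(iterations):
        candidate = list(best)
        # Flip a random group of don't-care bits to explore the search space.
        for idx in random.sample(dont_care_bits,
                                 min(group_size, len(dont_care_bits))):
            candidate[idx] ^= 1
        psn = estimate_psn(candidate)
        if psn > best_psn:  # keep flips that stress the circuit harder
            best, best_psn = candidate, psn
    return best, best_psn
```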

    Coupled Thermal-Hydraulic-Mechanical (THM) modelling of underground gas storage – A case study from the Molasse Basin, South Germany

    Thermal-hydraulic-mechanical (THM) models of gas storage in porous media provide valuable information for various applications, ranging from the prediction of ground-surface displacements and the determination of stress-path changes to the maximum reservoir pressure and storage capacity compatible with fault stability and overburden integrity. The study, conducted in collaboration with research institutes and storage companies in Germany, addresses the numerical modelling of geomechanical effects caused by the storage of methane in a depleted gas field. The geomechanical assessment focuses on a former gas reservoir in the Bavarian Molasse Basin east of Munich, for which a hypothetical conversion into underground gas storage (UGS) is considered. The target reservoir is of Late Oligocene age, i.e., the Chattian Hauptsand, with three gas-bearing layers having a total thickness of 85 m. The reservoir formation is highly porous, with an average porosity of 23% and permeability in the range of 20 mD to 80 mD. The reservoir produced natural gas from 1958 to 1978 and has been in a shut-in phase ever since. Storage operations require a precise understanding of reservoir mechanics and stresses; the selected methodology therefore helps to analyze these issues in detail. The geomechanical analysis is performed with the help of a state-of-the-art THM model with the following objectives: (1) analyze the variation of the principal stress field induced by the field activities; (2) analyze the effective stress changes with changing pore pressure in the short term as well as the long term using hypothetical injection-production schedule cases; (3) predict ground-surface displacements over the field; (4) analyze the possible reactivation of faults and fractures as well as the safe storage capacity of the reservoir; and (5) analyze thermal stress changes caused by the injection of colder foreign gas into the underground reservoir. The methodology comprises 1D mechanical earth modelling (MEM) to calculate elastic properties as well as a first estimate of the vertical and horizontal stresses at well locations using log data. This modelling phase provides a complete analysis of log, core, and laboratory data, leading to detailed 1D MEMs for all wells available for the case-study reservoir. This information is then used to populate a 3D finite-element MEM, which has been built from seismic data and comprises not only the reservoir but also the entire overburden up to the earth's surface as well as part of the underburden. The model measures 30 × 24 × 5 km³, and 3D property modelling has been done by applying a geostatistical approach for property inter-/extrapolation. The behavior of pore pressure in the field has been derived from dynamic fluid-flow simulation through history matching of the production and subsequent shut-down phases of the field. Subsequently, changes in the pore-pressure field during injection-production and subsequent shut-down phases are analyzed for weekly and seasonal loading and unloading scenario cases. The resulting pore-pressure changes are coupled with the 3D geomechanical model in order to obtain a complete understanding of stress changes during these operations. Two scenario cases consider the German surplus electricity from renewable energy sources such as solar and wind for the year 2017. The results show that this surplus electricity can be stored in underground gas storage facilities with a Power-to-Gas (PtG) concept and that the stored gas can be reused. Additionally, fault-reactivation and thermal stress analyses are performed on the THM model in order to evaluate the maximum threshold (injection) pressure as well as the safe storage capacity of the reservoir. Fault reactivation already occurs at 1.25 times the initial reservoir pressure, which corresponds to a safe storage rate of 100,000-150,000 m³/day for the case-study reservoir. The validated THM model is ready to be used for analyzing new wells for future field development and for testing further arbitrary injection-production schedules, among other applications. The methodology can be applied to any UGS facility, not only in the German Molasse Basin but anywhere in the world.
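    The 1.25-times threshold above is the kind of result a Mohr-Coulomb fault-stability screen produces. The Python sketch below is not the study's THM code: the friction coefficient, stresses, and initial pressure are invented placeholders, chosen only so that slip tendency reaches 1.0 at 1.25 times the initial pressure.

```python
def slip_tendency(sigma_n, tau, pore_pressure, mu=0.6, cohesion=0.0):
    """Mohr-Coulomb screen for fault reactivation (placeholder values only).

    A fault patch slips when the shear stress reaches the frictional
    resistance of the effective normal stress:
        tau >= cohesion + mu * (sigma_n - pore_pressure)
    Returns the ratio of shear stress to resistance; >= 1.0 means slip.
    """
    effective_normal = sigma_n - pore_pressure
    return tau / (cohesion + mu * effective_normal)

p_init = 20.0  # MPa, hypothetical initial reservoir pressure
for factor in (1.0, 1.1, 1.25, 1.4):
    st = slip_tendency(sigma_n=45.0, tau=12.0, pore_pressure=factor * p_init)
    print(f"{factor:.2f} x p_init: slip tendency = {st:.2f}")
```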

    Pressure and saturation estimation from PRM time-lapse seismic data for a compacting reservoir

    Observed 4D effects are influenced by a combination of changes in both pressure and saturation in the reservoir. Decomposition of pressure and saturation changes is crucial to explain the different physical variables that have contributed to the 4D seismic responses. This thesis addresses the challenges of pressure and saturation decomposition from such time-lapse seismic data in a compacting chalk reservoir. The technique employed integrates reservoir-engineering concepts and geophysical knowledge. The innovation in this methodology is the ability to capture the complicated water-weakening behaviour of the chalk as a non-linear proxy model controlled by only three constants. Changes in pressure and saturation are thus estimated via a Bayesian inversion by employing compaction curves derived from the laboratory, constraints from the simulation-model predictions, time-strain information, and the observed fractional seismic changes. The approach is tested on both synthetic and field data from the Ekofisk field in the North Sea. The results are in good agreement with well production data, and help explain strong localized anomalies in both the Ekofisk and Tor formations. These results also suggest updates to the reservoir simulation model. The second part of the thesis focuses on the geomechanics of the overburden, and the opportunity to use time-lapse time-shifts to estimate pore-pressure changes in the reservoir. To achieve this, a semi-analytical approach by Geertsma is used, which numerically integrates the displacements from a nucleus of strain. This model relates the overburden time-lapse time-shifts to reservoir pressure. The existing method by Hodgson (2009) is modified to estimate the reservoir pressure change and also the average dilation factor, or R-factor, for both the reservoir and the overburden. The R-factors can be quantified, and their uncertainty defined, when prior constraints are available from a well-history-matched simulation model. The results indicate that the magnitude of R is a function of strain-change polarity, and that this asymmetry is required to match the observed time-shifts. The recovered average R-factor is 16 using the permanent reservoir monitoring (PRM) data, while the streamer data recover average R-factors in the range of 7.2 to 18.4. Despite the limiting assumption of a homogeneous medium, the method is beneficial: it treats arbitrary subsurface geometries and, in contrast to complex numerical approaches, is simple to parameterise and computationally fast. Finally, the aims and objectives of this research have been met predominantly through the use of PRM data. These applications could not have been achieved without such highly repeatable, short-repeat-period acquisitions, which points to the value of using these data in reservoir characterisation, inversion, and history matching.
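    The dilation factor enters through the standard Hatchell-Bourne-style relation between fractional time-shift and vertical strain, Δt/t = (1 + R)·ε_zz. The short Python sketch below only illustrates the magnitude this implies for the abstract's PRM-derived average R of 16; the strain and travel-time values are invented.

```python
def fractional_timeshift(vertical_strain, r_factor):
    """Hatchell-Bourne style relation: dt/t = (1 + R) * epsilon_zz.

    Stretching of the overburden above a compacting reservoir (positive
    vertical strain) slows the waves and lengthens two-way travel time.
    """
    return (1.0 + r_factor) * vertical_strain

t0 = 2.0       # s, hypothetical two-way time to the reservoir
strain = 1e-4  # hypothetical overburden vertical strain
dt = fractional_timeshift(strain, r_factor=16) * t0
print(f"predicted time-shift = {dt * 1e3:.1f} ms")  # 3.4 ms
```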

    Low Cost Power and Supply Noise Estimation and Control in Scan Testing of VLSI Circuits

    Test power is an important issue in deep-submicron semiconductor testing. Too much power supply noise and too much power dissipation can result in excessive temperature rise, both leading to overkill during delay test. Scan-based test has been widely adopted as one of the most commonly used VLSI testing methods. The test power during scan testing comprises shift power and capture power, and the power consumed in the shift cycle dominates the total power dissipation. It is crucial for IC manufacturing companies to achieve near-constant power consumption for a given timing window in order to keep the chip under test (CUT) at a near-constant temperature, making it easy to characterize the circuit behavior and preventing delay-test overkill. To achieve constant test power, we first built a fast and accurate power model, which can estimate the shift power without logic simulation of the circuit. We also proposed an efficient and low-power X-bit filling process, which can potentially reduce both shift power and capture power. We then introduced an efficient test-pattern reordering algorithm, which achieves near-constant power between groups of patterns; the number of patterns in a group is determined by the thermal constant of the chip. Experimental results show that our proposed power model has very good correlation. Our proposed X-fill process achieved both minimum shift power and minimum capture power. The algorithm supports multiple scan chains and can achieve constant power within different regions of the chip. The greedy test-pattern reordering algorithm can reduce the power variation from 29-126 percent to 8-10 percent, or even lower if the power-variance threshold is reduced. Excessive noise can significantly affect the timing performance of deep-submicron (DSM) designs and cause non-trivial additional delay. In delay test generation, test compaction and test fill techniques can produce excessive power supply noise, which can result in delay-test overkill. Prior approaches to power-supply-noise-aware delay test compaction are too costly, due to many logic simulations, and are limited to static compaction. We proposed a realistic low-cost delay test compaction flow that guardbands the delay using a sequence of estimation metrics to keep the circuit-under-test supply noise closer to functional mode. This flow has been implemented in both static compaction and dynamic compaction. We analyzed the relationship between delay and voltage drop, and the relationship between effective weighted switching activity (WSA) and voltage drop. Based on these correlations, we introduce the low-cost delay test pattern compaction framework considering power supply noise. Experimental results on ISCAS89 circuits show that our low-cost framework is up to ten times faster than the prior high-cost framework. Simulation results also verify that the low-cost model can correctly guardband every path's extra noise-induced delay. We discuss the rules for setting different constraints in the levelized framework; the veto process used in the compaction can also be applied to other constraints, such as power and temperature.
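    The abstract does not give the reordering algorithm itself; the Python sketch below shows one plausible greedy scheme matching that description. Patterns, with per-pattern shift power already estimated by the power model, are grouped so that each group's total power is roughly equal, with the group size standing in for the chip's thermal time constant.

```python
def greedy_reorder(powers, group_size):
    """Greedy reordering sketch: near-constant power per group of patterns.

    powers     -- estimated shift power of each test pattern (from a power model)
    group_size -- patterns per group, set by the chip's thermal constant
    Pairs the hottest remaining pattern with the coolest ones so that
    group-to-group power variation is flattened.
    """
    remaining = sorted(range(len(powers)), key=lambda i: powers[i])
    order = []
    while remaining:
        group = [remaining.pop()]           # hottest remaining pattern
        while remaining and len(group) < group_size:
            group.append(remaining.pop(0))  # balance with the coolest ones
        order.extend(group)
    return order

powers = [5.0, 9.0, 1.0, 7.0, 3.0, 8.0, 2.0, 6.0]
print(greedy_reorder(powers, group_size=2))
# Groups by power: (9,1), (8,2), (7,3), (6,5) -- each summing to 10-11 units.
```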

    Diagenetic–Porosity Evolution and Reservoir Evaluation in Multiprovenance Tight Sandstones: Insight from the Lower Shihezi Formation in Hangjinqi Area, Northern Ordos Basin

    The reservoir properties of tight sandstones are closely related to provenance and diagenesis, and a multiprovenance system and complex diagenesis are developed in the Hangjinqi area. However, the relationship between provenance, diagenesis, and the physical characteristics of tight reservoirs in the Hangjinqi area has not yet been reported. The Middle Permian Lower Shihezi Formation is one of the most important tight gas sandstone reservoirs in the Hangjinqi area of the Ordos Basin. This research compared the diagenesis-porosity quantitative evolution mechanisms of Lower Shihezi Formation sandstones from various provenances in the Hangjinqi area using thin-section descriptions, cathodoluminescence imaging, X-ray diffraction (XRD), scanning electron microscopy (SEM), and homogenization temperatures of fluid inclusions, along with general physical data and high-pressure mercury intrusion (HPMI) data. The sandstones mainly comprise quartzarenite, sublitharenite, and litharenite with low porosity and low permeability, and they display obvious zonation in the content of detrital components as a result of the multiple provenances. The pore space of these sandstones mainly consists of primary pores, secondary pores, and microfractures, but their proportions vary between provenances. According to HPMI, the order of pore-throat radius from largest to smallest is central provenance, eastern provenance, and western provenance, which is consistent with the porosity trend (middle part > northern part > western part) in the Hangjinqi region. The diagenetic evolution paths of these sandstones are comparable, comprising compaction, cementation, dissolution, and fracture development. The central provenance has the best reservoir quality, followed by the eastern and western provenances; this variation is due to the diverse diagenesis (diagenetic stage and intensity) of the different provenances. These findings reveal that the variations in detrital composition and structure caused by different provenances are the material basis of reservoir differentiation, and that the main rationale for reservoir differentiation is the varying degree of diagenesis during the burial process.

    Design, Analysis and Test of Logic Circuits under Uncertainty.

    Integrated circuits are increasingly susceptible to uncertainty caused by soft errors, inherently probabilistic devices, and manufacturing variability. As device technologies scale, these effects become detrimental to circuit reliability. In order to address this, we develop methods for analyzing, designing, and testing circuits subject to probabilistic effects. Our main contributions are: 1) a fast soft-error rate (SER) analyzer that uses functional-simulation signatures to capture error effects, 2) novel design techniques that improve reliability with little area and performance overhead, 3) a matrix-based reliability-analysis framework that captures many types of probabilistic faults, and 4) test-generation/compaction methods aimed at probabilistic faults in logic circuits. SER analysis must account for the main error-masking mechanisms in ICs: logic, timing, and electrical masking. We relate logic masking to node testability of the circuit and utilize functional-simulation signatures, i.e., partial truth tables, to efficiently compute testability (signal probability and observability). To account for timing masking, we compute error-latching windows (ELWs) from timing-analysis information. Electrical masking is incorporated into our estimates through derating factors for gate error probabilities. The SER of a circuit is computed by combining the effects of all three masking mechanisms within our SER analyzer, called AnSER. Using AnSER, we develop several low-overhead techniques that increase reliability, including: 1) an SER-aware design method that uses redundancy already present within the circuit, 2) a technique that resynthesizes small logic windows to improve area and reliability, and 3) a post-placement gate-relocation technique that increases timing masking by decreasing ELWs. We develop the probabilistic transfer matrix (PTM) modeling framework to analyze effects beyond soft errors. PTMs are compressed into algebraic decision diagrams (ADDs) to improve computational efficiency, and several ADD algorithms are developed to extract reliability and error-susceptibility information from PTMs representing circuits. We propose new algorithms for circuit testing under probabilistic faults, which require a reformulation of existing test techniques. For instance, a test vector may need to be repeated many times to detect a fault, and different vectors detect the same fault with different probabilities. We develop test-generation methods that account for these differences, and integer linear programming (ILP) formulations to optimize test sets.
    Ph.D. Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/61584/1/smita_1.pd
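    As a concrete illustration of the PTM algebra (here with dense NumPy matrices rather than the ADD compression the thesis uses), each gate becomes a stochastic matrix mapping input combinations to output distributions; serial composition is a matrix product and parallel composition a Kronecker product. The 1% error probability below is an arbitrary example value.

```python
import numpy as np

def gate_ptm(truth_table, p_err):
    """PTM of a 1-output gate: rows index input combinations, columns outputs.

    With output-flip probability p_err, each row places 1 - p_err on the
    fault-free output value and p_err on the flipped one.
    """
    m = np.zeros((len(truth_table), 2))
    for row, out in enumerate(truth_table):
        m[row, out] = 1.0 - p_err
        m[row, 1 - out] = p_err
    return m

# Two-input NAND with a 1% chance of flipping its output.
nand = gate_ptm([1, 1, 1, 0], p_err=0.01)

# Parallel composition is a Kronecker product, serial composition a matrix
# product: PTM of the tree NAND(NAND(a, b), NAND(c, d)).
circuit = np.kron(nand, nand) @ nand

# Probability that the circuit outputs the correct value 1 for a=b=c=d=1.
print(circuit[0b1111, 1])  # ~0.9899
```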