    Transition Faults and Transition Path Delay Faults: Test Generation, Path Selection, and Built-In Generation of Functional Broadside Tests

    As the clock frequency and complexity of digital integrated circuits increase rapidly, delay testing is indispensable to guarantee the correct timing behavior of the circuits. In this dissertation, we describe methods developed for three aspects of delay testing in scan-based circuits: test generation, path selection and built-in test generation. We first describe a deterministic broadside test generation procedure for a path delay fault model named the transition path delay fault model, which captures both large and small delay defects. Under this fault model, a path delay fault is detected only if all the individual transition faults along the path are detected by the same test. To reduce the complexity of test generation, sub-procedures with low complexity are applied before a complete branch-and-bound procedure. Next, we describe a method based on static timing analysis to select critical paths for test generation. Logic conditions that are necessary for detecting a path delay fault are considered to refine the accuracy of static timing analysis, using input necessary assignments. Input necessary assignments are input values that must be assigned to detect a fault. The method calculates more accurate path delays, selects paths that are critical during test application, and identifies undetectable path delay faults. These two methods are applicable to off-line test generation. For large circuits with high complexity and frequency, built-in test generation is a cost-effective method for delay testing. For a circuit that is embedded in a larger design, we developed a method for built-in generation of functional broadside tests to avoid excessive power dissipation during test application and the overtesting of delay faults, taking the functional constraints on the primary input sequences of the circuit into consideration. 
Functional broadside tests are scan-based two-pattern tests for delay faults that create functional operation conditions during test application. To avoid the potential fault coverage loss due to the exclusive use of functional broadside tests, we also developed an optional DFT method based on state holding to improve fault coverage. The developed methods achieve high delay fault coverage for benchmark circuits using simple hardware.
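The detection condition stated above, that a transition path delay fault is detected only if all the individual transition faults along the path are detected by the same test, can be sketched as a set intersection. This is a minimal illustration, not the dissertation's procedure; the fault and test names are invented.

```python
# Hypothetical sketch: a transition path delay fault is detected by a test
# only when every transition fault along the path is detected by that same
# test. Fault/test names below are invented for illustration.

def detects_path_fault(path_faults, detected_by):
    """Return the set of tests that detect the whole transition path delay fault.

    path_faults: transition faults along the path
    detected_by: dict mapping each transition fault to the set of tests
                 that detect it
    """
    tests = None
    for fault in path_faults:
        hits = detected_by.get(fault, set())
        tests = hits if tests is None else tests & hits
    return tests or set()

# Three transition faults along one path, each detected by various tests:
detected = {
    "g1_rise": {"t1", "t2"},
    "g2_fall": {"t2", "t3"},
    "g3_rise": {"t2"},
}
print(detects_path_fault(["g1_rise", "g2_fall", "g3_rise"], detected))
# -> {'t2'}: only t2 detects every transition fault along the path
```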

    Pressure and saturation estimation from PRM time-lapse seismic data for a compacting reservoir

    Observed 4D effects are influenced by a combination of changes in both pressure and saturation in the reservoir. Decomposition of pressure and saturation changes is crucial to explain the different physical variables that have contributed to the 4D seismic responses. This thesis addresses the challenges of pressure and saturation decomposition from such time-lapse seismic data in a compacting chalk reservoir. The technique employed integrates reservoir engineering concepts and geophysical knowledge. The innovation in this methodology is the ability to capture the complicated water-weakening behaviour of the chalk as a non-linear proxy model controlled by only three constants. Changes in pressure and saturation are then estimated via a Bayesian inversion, employing compaction curves derived from the laboratory, constraints from the simulation model predictions, time strain information and the observed fractional changes. The approach is tested on both synthetic and field data from the Ekofisk field in the North Sea. The results are in good agreement with well production data, and help explain strong localized anomalies in both the Ekofisk and Tor formations. These results also suggest updates to the reservoir simulation model. The second part of the thesis focuses on the geomechanics of the overburden, and the opportunity to use time-lapse time-shifts to estimate pore pressure changes in the reservoir. To achieve this, a semi-analytical approach by Geertsma is used, which numerically integrates the displacements from a nucleus of strain. This model relates the overburden time-lapse time-shifts to reservoir pressure. The existing method by Hodgson (2009) is modified to estimate reservoir pressure change and also the average dilation factor, or R-factor, for both the reservoir and overburden. The R-factors can be quantified, and their uncertainty defined, when prior constraints are available from a well history-matched simulation model.
The results indicate that the magnitude of R is a function of strain change polarity, and that this asymmetry is required to match the observed time-shifts. The recovered average R-factor is 16, using the permanent reservoir monitoring (PRM) data. The streamer data yield average R-factors in the range of 7.2 to 18.4. Despite the limiting assumption of a homogeneous medium, the method is beneficial, as it treats arbitrary subsurface geometries and, in contrast to complex numerical approaches, it is simple to parameterise and computationally fast. Finally, the aims and objectives of this research have been met predominantly through the use of PRM data. These applications could not have been achieved without such highly repeatable, short-repeat-period acquisitions. This points to the value of using these data in reservoir characterisation, inversion and history matching.
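The link between overburden strain, the dilation factor R, and time-lapse time-shifts can be illustrated with a commonly used first-order relation, dt/t ≈ (1 + R)·εzz, where εzz is the vertical strain. This is a hedged sketch of that standard relation, not necessarily the exact formulation used in the thesis; the travel time and strain values below are illustrative, while R = 16 is the recovered PRM value quoted above.

```python
# First-order time-shift relation (Landro & Stammeijer style):
#   dt/t ~ (1 + R) * e_zz
# where e_zz is vertical strain and R is the dilation factor linking
# fractional velocity change to strain. Illustrative values only.

def timelapse_timeshift(t0_ms, e_zz, R):
    """Time-lapse time-shift (ms) over a layer with two-way time t0_ms,
    vertical strain e_zz and dilation factor R."""
    return t0_ms * (1.0 + R) * e_zz

# Overburden stretching (positive strain) above a compacting reservoir,
# using the average R-factor of 16 recovered from the PRM data:
dt = timelapse_timeshift(t0_ms=2000.0, e_zz=1e-4, R=16.0)
print(round(dt, 2))  # -> 3.4 (ms of slowdown)
```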

    The impact of shale pressure diffusion on 4D seismic interpretation

    Shale typically has a low but non-negligible permeability of the order of nanodarcys (recognized and appreciated in the production of unconventional resources), which can affect the magnitude and pattern of the pressure in conventional reservoirs over the lifetime of a producing field. The implications of this phenomenon for reservoir monitoring by 4D seismic can be significant, but depend on the geology of the field, the timelines for production and recovery, and the timing of the seismic surveys. In this PhD thesis I developed an integrated workflow to assess the process of shale pressure diffusion and its elastic implications for the 4D seismic interpretation of four conventional reservoirs (three North Sea case studies and one from West Africa), with different geological settings (shallow marine and turbidites) and production mechanisms. To accomplish this, a detailed petrophysical evaluation was first performed to characterize the overburden, intra-reservoir and underburden shales. Next, the simulation models were adjusted to activate the shale-related contributions, and then, applying simulator-to-seismic workflows, 3D and 4D synthetic seismic modelling was performed for comparison with the observed seismic data and to establish the impact of shale pressure diffusion on the elastic and dynamic behaviour of the reservoir. This work also includes a case study in which the evaluation of shale pressure diffusion was integrated with geomechanical simulations to assess the propagation of time-shifts and time strain in the overburden of a high-pressure/high-temperature reservoir under compaction, improving the understanding of the distribution and polarity of the observed seismic time strain. Fluid-flow simulation results of this work indicate that activation of the shale improves the overall reservoir connectivity, enhancing the match between model predictions and production history data.
The fit to observed 4D seismic data was improved in all the field applications, with a noticeable reduction (up to 6%) in the mismatch (hardening and softening signal distribution) for the models with active shales. In reservoirs where the saturation was very sensitive to changes in pressure, shale activation strongly impacted the breakout and distribution of gas liberated from solution. Overall, this work found that inclusion of shale in 3D and 4D reservoir seismic modelling can provide valuable insights for the interpretation of the reservoir's dynamic behaviour and that, under particular conditions such as strong reservoir compartmentalization, shale pressure diffusion can be a significant process in the interpretation of the 4D seismic signature.
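The physical process underlying this work, pressure diffusing slowly through low-permeability shale, is governed by a diffusion equation, dp/dt = D·d²p/dz², with hydraulic diffusivity D = k/(φ·μ·ct). The thesis workflow relies on full reservoir simulation; the following is only a toy 1D explicit finite-difference sketch with illustrative property values, to show the character of the process.

```python
# Minimal 1D sketch of pressure diffusion into a shale layer:
#   dp/dt = D * d2p/dz2,  D = k / (phi * mu * c_t)
# All values are illustrative placeholders, not field parameters.

def diffuse_pressure(p, D, dz, dt, steps):
    """Explicit finite-difference update; end nodes held at fixed pressure."""
    p = list(p)
    r = D * dt / dz**2          # explicit scheme stable for r <= 0.5
    for _ in range(steps):
        new = p[:]
        for i in range(1, len(p) - 1):
            new[i] = p[i] + r * (p[i + 1] - 2 * p[i] + p[i - 1])
        p = new
    return p

# Shale slab initially at 30 MPa, boundaries drawn down to 20 MPa by
# depletion of the adjacent reservoir sands:
p0 = [20.0] + [30.0] * 9 + [20.0]
p = diffuse_pressure(p0, D=1e-8, dz=1.0, dt=1e6, steps=200)  # r = 0.01
print(p[5] < 30.0)  # -> True: the shale interior has started to deplete
```

With nanodarcy permeability the diffusivity is tiny, which is why shale pressure diffusion only becomes visible over field-lifetime timescales, consistent with the dependence on production timelines noted above.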

    Test Vector Decomposition Based Static Compaction Algorithms for Combinational Circuits

    Testing system-on-chips involves applying huge amounts of test data, which is stored in the tester memory and then transferred to the chip under test during test application. Therefore, practical techniques, such as test compression and compaction, are required to reduce the amount of test data in order to reduce both the total testing time and the memory requirements of the tester. In this paper, a new approach to static compaction for combinational circuits, referred to as test vector decomposition (TVD), is proposed. In addition, two new TVD-based static compaction algorithms are presented. Experimental results for benchmark circuits demonstrate the effectiveness of the two new static compaction algorithms.
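The intuition behind decomposition-based static compaction can be sketched as follows: each detected fault keeps only the input assignments it actually needs (a test cube with 'x' don't-cares), and compatible cubes are merged back into fewer fully specified vectors. This is a hedched toy illustration of the idea, not the paper's algorithms; the cubes are invented.

```python
# Toy sketch of compaction via test cubes with 'x' don't-cares:
# cubes that agree on every specified bit can share one test vector.
# The cube values below are invented for illustration.

def compatible(a, b):
    """Two cubes are compatible if no specified bit conflicts."""
    return all(x == y or x == "x" or y == "x" for x, y in zip(a, b))

def merge(a, b):
    """Merge two compatible cubes, keeping every specified bit."""
    return "".join(y if x == "x" else x for x, y in zip(a, b))

def compact(cubes):
    """Greedy first-fit merging of compatible test cubes."""
    vectors = []
    for cube in cubes:
        for i, v in enumerate(vectors):
            if compatible(v, cube):
                vectors[i] = merge(v, cube)
                break
        else:
            vectors.append(cube)
    return vectors

cubes = ["1xx0", "10x1", "1x10", "x0x1"]
print(compact(cubes))  # -> ['1x10', '10x1']: four cubes fit in two vectors
```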

    Dynamic reservoir characterization from overburden time-lapse strains

    Accurate reservoir depletion or pressure change patterns are of great value when planning infill drilling programs for field development, as well as when monitoring injection wells and swept/unswept areas. In addition, a precise dynamic geomechanical description of the reservoir and overburden stress state could prevent costly undesired effects on the production infrastructure such as sea-floor subsidence, casing shear and well failure. Until recently, dynamic characterization of reservoirs had only well data to rely on, which, apart from its inherent uncertainties (e.g. due to formation damage), provides no direct information on what is taking place between the wells. The advent of time-lapse seismic at the end of the 1990s meant that this gap could be bridged, providing measurements of the changes taking place in the subsurface. Originally, time-lapse seismic was conceived as a tool to image intra-reservoir fluid movements via the dependency of reflection amplitudes on acoustic impedance, which is affected by fluid saturation changes in the porous reservoir rocks. However, depletion-induced velocity changes are also non-negligible. Furthermore, the reflectors may undergo deformation and displacement where compaction and subsidence are involved. As a consequence, analysis of amplitude changes is not straightforward, since in most cases amplitudes have been shifted by a non-negligible time difference, or time-shift, presenting not only challenges but also new possibilities. It is on the possibilities of these time-shifts that the present study is based. This research presents a novel method which numerically solves the static field problem in a multilayered heterogeneous medium, relating overburden strain to reservoir depletion. It builds on previous work based on Geertsma-type solutions requiring a homogeneous half-space.
This technique makes it possible to estimate the reservoir's stress state, strain and pressure changes from measured overburden strain by considering the earth as a linear filter with reservoir compaction and overburden strain as parameters. However, some a priori knowledge is needed, in the form of a rough subsurface model and a preliminary geomechanical simulation, in order to approximate the transfer functions as Wiener filters. In this thesis, the Wiener filter concept has been applied to three real North Sea fields: first to the Elgin field, an HP/HT shallow marine Upper Jurassic sandstone reservoir located in the UK sector of the North Sea, and then to the Ekofisk and South Arne fields, both compacting chalk reservoirs in the Norwegian and Danish sectors of the North Sea respectively. Additionally, using a synthetic example, the method has been validated and compared with a linear inversion approach using a Geertsma-type Green's function, achieving higher accuracy. The project involved not only the development and application of the method itself, but also the calculation of time-strains from the measured seismic data and the construction and implementation of full-field geomechanical models.
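The "earth as a linear filter" idea above can be sketched with a standard frequency-domain Wiener estimate, H = Sxy/Sxx, computed from a training pair (here standing in for the preliminary geomechanical simulation) and then applied to new data. This is a generic illustration of the Wiener-filter concept with synthetic 1D signals and an invented "earth response", not the thesis implementation.

```python
# Sketch of the Wiener-filter concept: estimate the transfer function that
# maps reservoir compaction to overburden strain from a training pair, then
# apply it to a new compaction scenario. Signals and the "true" earth
# response are synthetic; periodic (circular) convolution is assumed.
import numpy as np

def circ_conv(x, h):
    """Circular convolution via FFT (periodic signal assumption)."""
    n = len(x)
    return np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(h, n), n)

def wiener_transfer(x, y, eps=1e-12):
    """Frequency-domain Wiener estimate H = Sxy / (Sxx + eps)."""
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    return (np.conj(X) * Y) / (np.conj(X) * X + eps)

rng = np.random.default_rng(0)
x_train = rng.standard_normal(256)      # simulated reservoir compaction
h_true = np.array([0.5, 0.3, 0.2])      # "unknown" earth response (invented)
y_train = circ_conv(x_train, h_true)    # simulated overburden strain

H = wiener_transfer(x_train, y_train)   # estimated from the training pair

# Apply the estimated filter to a new compaction scenario:
x_new = rng.standard_normal(256)
y_pred = np.fft.irfft(H * np.fft.rfft(x_new), n=256)
y_ref = circ_conv(x_new, h_true)
print(np.allclose(y_pred, y_ref, atol=1e-6))  # -> True
```

The small `eps` regularises frequencies where the training signal has little energy, a toy analogue of the prior knowledge the thesis obtains from the rough subsurface model.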

    Geomechanical study of the Tarfaya basin, West African coast, using 3D/2D static models and 2D evolutionary models

    This thesis uses different variants of geomechanical modelling to investigate the distribution and evolution through time of stress, strain and geometry in the Tarfaya salt basin, located on the West African coast. This work was conducted by geomechanically simulating a sector of the Tarfaya basin containing key features such as diapirs, faults and encasing sediments, using 3D and 2D static models and 2D evolutionary models. The 3D and 2D static geomechanical models of the Tarfaya basin system allowed prediction of the present-day stresses and strains. Both models are based on present-day basin geometries extracted from seismic data and use a poro-elastic description for the sediments, based on calibrated log data, and a visco-plastic description for the salt, based on values from Avery Island. The models predict a significant horizontal stress reduction in the sediments located at the top of the principal salt structure, the Sandia diapir, consistent with wellbore data. However, the 2D static geomechanical model shows broader areas affected by the stress reduction compared to the 3D model and overestimates its magnitude by less than 1.5 MPa. These results highlight the possibility of using 2D static modelling as a valid approximation to the more complex and time-consuming 3D static models. A more in-depth study of the 2D static model using sensitivity analysis yielded a series of interesting observations: (1) the salt bodies and their geometry have the strongest impact on the final model results; (2) the elastic properties of the sediments do not impact the model results. In other words, the correct definition of the sediments with the highest material contrasts, such as salt, should be a priority when building static models, ranked ahead of the precise determination of the rheologic parameters of the sediments present in the basin.
In this thesis, we also present the results of introducing an evolutionary geomechanical modelling approach to the Tarfaya basin. This study incorporates information on burial history, sea-floor geometry and tectonic loads from a sequential kinematic restoration model to geologically constrain the 2D evolutionary geomechanical model. The sediments in the model follow a poro-elastoplastic description and the salt follows a visco-plastic description. The 2D evolutionary model predicts a Sandia diapir evolution similar to that of the kinematic restoration. This shows that the approach can offer a significant advance in the study of the basin, not only providing the stress and strain distribution and salt geometry at the present day, but also reproducing their evolution during the Tarfaya basin's history. Sensitivity analysis on the evolutionary model indicates that temporal and spatial variation in sedimentation rate is a key control on the kinematic structural evolution of the salt system. The variation of sedimentation rates in the model controls whether the modelled salt body is buried by Tertiary sediments (after continuous growth during the Jurassic and Cretaceous periods) or remains active until the present day. The imposed shortening also affects the final stress distribution in the sediments at the present day. To conclude, the results obtained during this study allowed us to understand the formation and evolution of the diapirs in the Tarfaya basin using carefully built geomechanical models. The study demonstrates that carefully built 2D static models can provide information comparable to that of 3D models, but without their time and computational power requirements, which makes the 2D approach very appropriate for the exploration stages of a particular prospect. If carefully built, such 2D models can approximate and yield useful information on even complex 3D structures such as the Tarfaya basin salt structures.
This thesis also concludes that incorporating kinematic restoration data into 2D evolutionary models provides insights into the key parameters controlling the evolution of the studied system. Furthermore, it enables more realistic geomechanical models, which, in turn, provide more insight into sediment stress and porosity.
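The visco-plastic salt description mentioned above is, in many geomechanical models, a steady-state power-law (Norton) creep of the form dε/dt = A·σⁿ. The following is a generic sketch of that kind of law, with illustrative placeholder constants rather than the Avery Island values used in the thesis.

```python
# Generic sketch of a Norton power-law creep rate for salt:
#   de/dt = A * sigma**n
# The constants A and n below are illustrative placeholders, not the
# Avery Island parameters used in the thesis models.

def norton_creep_rate(sigma_mpa, A=1e-7, n=3.0):
    """Steady-state creep strain rate (1/s) for deviatoric stress in MPa."""
    return A * sigma_mpa ** n

# The non-linearity is the point: doubling the deviatoric stress
# multiplies the creep rate by 2**n = 8 for n = 3:
r1 = norton_creep_rate(5.0)
r2 = norton_creep_rate(10.0)
print(round(r2 / r1, 6))  # -> 8.0
```

This strong stress sensitivity is one reason salt geometry dominates the static model results, as observation (1) of the sensitivity analysis above notes.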

    Efficient Test Compaction for Combinational Circuits Based on Fault Detection Count-Directed Clustering

    Test compaction is an effective technique for reducing test data volume and test application time. In this paper, we present a new static test compaction algorithm based on test vector decomposition and clustering. Test vectors are decomposed and clustered in increasing order of fault detection count. This clustering order gives a greater degree of freedom and results in better compaction. Experimental results demonstrate the effectiveness of the proposed approach in achieving higher compaction with much lower CPU time than previous clustering-based test compaction approaches.
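The ordering step described above can be sketched simply: faults detected by fewer test vectors are the most constrained, so their decomposed components are clustered first, leaving the flexible, easily detected faults for last. This is only an illustration of the ordering idea; the fault detection table is invented.

```python
# Sketch of detection-count-directed ordering: process faults in increasing
# order of how many test vectors detect them, so the hardest-to-detect
# faults are clustered first. The detection table below is invented.

def cluster_order(detections):
    """Return faults sorted by increasing detection count (ties by name).

    detections: dict mapping fault -> set of test vectors that detect it
    """
    return sorted(detections, key=lambda f: (len(detections[f]), f))

detections = {
    "f1": {"t1", "t2", "t3"},   # easy fault, detected by many vectors
    "f2": {"t2"},               # hard fault, detected by a single vector
    "f3": {"t1", "t3"},
}
print(cluster_order(detections))  # -> ['f2', 'f3', 'f1']
```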