
    Advances in high-gradient magnetic fishing for bioprocessing


    Towards an embedded board-level tester: study of a configurable test processor

    The demand for electronic systems with more features, higher performance, and lower power consumption increases continuously. This is a real challenge for design and test engineers, because they have to deal with electronic systems of ever-increasing complexity while keeping production and test costs low and meeting critical time-to-market deadlines. For a test engineer working at board level, this means that manufacturing defects must be detected as soon as possible and at low cost. However, classical test techniques are not sufficient for testing modern printed circuit boards, and in the worst case they cannot be used at all. This is mainly due to modern packaging technologies, the high device density, and the high operating frequencies of modern printed circuit boards, which lead to very long test times, low fault coverage, and high test costs. This dissertation addresses these issues and proposes an FPGA-based test approach for printed circuit boards. The concept is based on a configurable test processor that is temporarily implemented in the on-board FPGA and provides the corresponding mechanisms to communicate with external test equipment and with co-processors implemented in the FPGA. This embedded test approach provides the flexibility to implement test functions either in the external test equipment or in the FPGA. In this manner, tests are executed at speed, increasing fault coverage; test times are reduced; and the test system can be adapted automatically to the properties of the FPGA and the devices located on the board. An essential part of the FPGA-based test approach deals with the development of a test processor. In this dissertation the required properties of the processor are discussed, and it is shown that adaptation to the specific test scenario plays a very important role in the optimization.
For this purpose, the test processor is equipped with configuration parameters at the instruction-set-architecture and microarchitecture levels. Additionally, an automatic generation process for the test system and for the computation of some of the processor's configuration parameters is proposed. The automatic generation process takes as input a model known as the device-under-test model (DUT-M). In order to evaluate the entire FPGA-based test approach and the viability of a processor for testing printed circuit boards, the developed test system is used to test interconnections to two different devices: a static random-access memory (SRAM) and a liquid crystal display (LCD). Experiments were conducted to determine the resource utilization of the processor and the FPGA-based test system and to measure test time when different test functions are implemented in the external test equipment or the FPGA. It has been shown that the introduced approach is suitable for testing printed circuit boards and that the test processor represents a realistic alternative for testing at board level.
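The automatic generation flow described above, which takes a device-under-test model and derives processor configuration parameters, can be sketched roughly as follows. Everything here (the `DutModel` fields, `derive_config`, and the example bus widths and frequencies) is a hypothetical illustration of the idea, not the dissertation's actual tooling or values.

```python
from dataclasses import dataclass

@dataclass
class DutModel:
    """Hypothetical device-under-test model (DUT-M): interface properties
    of a board-level device the FPGA-based test processor must drive."""
    name: str
    data_width: int      # width of the data bus in bits
    address_width: int   # width of the address bus in bits
    max_freq_mhz: float  # maximum interface frequency of the device

def derive_config(duts):
    """Derive test-processor configuration parameters from the DUT models:
    the datapath must be wide enough for the widest bus, and the test
    clock must not exceed the slowest device's interface frequency."""
    datapath_width = max(max(d.data_width, d.address_width) for d in duts)
    test_clock_mhz = min(d.max_freq_mhz for d in duts)
    return {"datapath_width": datapath_width, "test_clock_mhz": test_clock_mhz}

# Illustrative DUTs mirroring the two devices tested in the dissertation:
sram = DutModel("SRAM", data_width=16, address_width=19, max_freq_mhz=100.0)
lcd  = DutModel("LCD",  data_width=8,  address_width=1,  max_freq_mhz=25.0)
print(derive_config([sram, lcd]))  # {'datapath_width': 19, 'test_clock_mhz': 25.0}
```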

    Seismicity in a model governed by competing frictional weakening and healing mechanisms

    Observations from laboratory, field and numerical work spanning a wide range of space and time scales suggest a strain-dependent progressive evolution of the material properties that control the stability of earthquake faults. The associated weakening mechanisms are counterbalanced by a variety of restrengthening mechanisms. The efficiency of the healing processes depends on local material properties and on rheologic, temperature, and hydraulic conditions. We investigate the relative effects of these competing non-linear feedbacks on seismogenesis in the context of evolving frictional properties, using a mechanical earthquake model governed by slip-weakening friction. Weakening and strengthening mechanisms are parametrized by the evolution of the frictional control variable, the slip weakening rate R, using empirical relationships obtained from laboratory experiments. In our model, weakening depends on the slip of an earthquake and tends to increase R, following the behaviour of real and simulated frictional interfaces. Healing causes R to decrease and depends on the time passed since the last slip. Results from models with these competing feedbacks are compared with simulations using non-evolving friction. Compared to fixed-R conditions, evolving properties result in significantly increased variability in the system dynamics. We find that for a given set of weakening parameters the resulting seismicity patterns are sensitive to details of the restrengthening process, such as the healing rate b and a lower cutoff time, tc, up to which no significant change in the friction parameter is observed. For relatively large and small cutoff times, the statistics are typical of fixed large and small R values, respectively. However, a wide range of intermediate values leads to significant fluctuations in the internal energy levels.
The frequency-size statistics of earthquake occurrence show corresponding non-stationary characteristics on time scales over which negligible fluctuations are observed in the fixed-R case. The progressive evolution implies that, except for extreme weakening and healing rates, faults and fault networks are possibly not well characterized by steady states on typical catalogue time scales, highlighting the essential role of memory and history dependence in seismogenesis. The results suggest that an extrapolation to future seismicity occurrence based on temporally limited data may be misleading due to variability in seismicity patterns associated with competing mechanisms that affect fault stability.
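The competing feedbacks can be illustrated with a toy update rule for the slip weakening rate R: weakening raises R in proportion to coseismic slip, while healing lowers R logarithmically with the time elapsed since the last slip, with no change before the cutoff time tc. The functional forms and constants below are hypothetical stand-ins for the empirical laboratory relationships used in the paper.

```python
import math

def weaken(R, slip, w=0.5):
    """Weakening: R increases with the coseismic slip of an event
    (linear form assumed here for illustration)."""
    return R + w * slip

def heal(R, t_since_slip, b=0.1, t_c=1.0):
    """Healing: R decreases logarithmically with time since the last slip,
    but only after the lower cutoff time t_c, before which no significant
    change in the friction parameter occurs."""
    if t_since_slip <= t_c:
        return R
    return R - b * math.log(t_since_slip / t_c)

# One weaken/heal cycle on a fault element with initial R = 1.0:
R = weaken(1.0, slip=0.2)        # earthquake slip raises R
R = heal(R, t_since_slip=10.0)   # interseismic healing lowers R again
print(round(R, 3))               # 0.87
```

Whether R recovers fully between events depends on the healing rate b and the recurrence time, which is exactly the competition that produces the non-stationary statistics described above.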

    Pressure and saturation estimation from PRM time-lapse seismic data for a compacting reservoir

    Observed 4D effects are influenced by a combination of changes in both pressure and saturation in the reservoir. Decomposing pressure and saturation changes is crucial to explaining the different physical variables that have contributed to the 4D seismic responses. This thesis addresses the challenges of pressure and saturation decomposition from such time-lapse seismic data in a compacting chalk reservoir. The technique employed integrates reservoir engineering concepts and geophysical knowledge. The innovation in this methodology is the ability to capture the complicated water-weakening behaviour of the chalk as a non-linear proxy model controlled by only three constants. Changes in pressure and saturation are thus estimated via a Bayesian inversion, employing compaction curves derived from the laboratory, constraints from the simulation model predictions, time-strain information and the observed fractional changes in the seismic attributes. The approach is tested on both synthetic and field data from the Ekofisk field in the North Sea. The results are in good agreement with well production data, and help explain strong localized anomalies in both the Ekofisk and Tor formations. These results also suggest updates to the reservoir simulation model. The second part of the thesis focuses on the geomechanics of the overburden, and the opportunity to use time-lapse time-shifts to estimate pore pressure changes in the reservoir. To achieve this, a semi-analytical approach by Geertsma is used, which numerically integrates the displacements from a nucleus of strain. This model relates the overburden time-lapse time-shifts to reservoir pressure. The existing method by Hodgson (2009) is modified to estimate the reservoir pressure change and also the average dilation factor, or R-factor, for both the reservoir and the overburden. The R-factors can be quantified, and their uncertainty defined, when prior constraints are available from a well-history-matched simulation model.
The results indicate that the magnitude of R is a function of strain-change polarity, and that this asymmetry is required to match the observed time-shifts. The recovered average R-factor is 16 using the permanent reservoir monitoring (PRM) data; the streamer data yielded average R-factors in the range of 7.2 to 18.4. Despite the limiting assumption of a homogeneous medium, the method is beneficial: it treats arbitrary subsurface geometries and, in contrast to complex numerical approaches, it is simple to parameterise and computationally fast. Finally, the aims and objectives of this research have been met predominantly through the use of PRM data. These applications could not have been achieved without such highly repeatable, short-repeat-period acquisitions, which points to the value of using these data in reservoir characterisation, inversion and history matching.
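The R-factor analysis rests on a standard linear relation between vertical strain and fractional travel-time change, dt/t = (1 + R)·ezz (the form often attributed to Hatchell and Bourne): overburden stretching and reservoir compaction carry different R values, matching the polarity asymmetry found here. A minimal numeric sketch, in which the strain value and the compaction-side R are illustrative rather than Ekofisk results:

```python
def timeshift_ms(two_way_time_ms, strain_zz, r_ext=16.0, r_comp=5.0):
    """Time-lapse time-shift from vertical strain via dt/t = (1 + R) * ezz.
    R depends on strain polarity (extension in the overburden vs. compaction
    in the reservoir); r_ext = 16 echoes the PRM result above, while r_comp
    is a purely illustrative value."""
    R = r_ext if strain_zz > 0 else r_comp
    return two_way_time_ms * (1.0 + R) * strain_zz

# Overburden stretching (positive vertical strain) above a compacting
# reservoir slows the waves and produces a positive time-shift:
print(round(timeshift_ms(2000.0, 1e-4), 3))  # 3.4 (ms)
```

The inversion runs this relation in reverse: observed time-shifts constrain the strain field, and hence the reservoir pressure change, once R is calibrated against a history-matched simulation model.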

    High Quality Test Generation Targeting Power Supply Noise

    Delay test is an essential structural manufacturing test used to determine the maximum frequency at which a chip can run without incurring any functional failures. The central unsolved challenge is achieving high delay correlation with the functional test, which is dominated by power supply noise (PSN). Differences in PSN between functional and structural tests can lead to differences in chip operating frequencies of 30% or more. Pseudo-functional test (PFT), based on a multiple-cycle clocking scheme, has better PSN correlation with functional test than traditional two-cycle at-speed test. However, PFT is vulnerable to under-testing when applied to delay test. This work aims to generate high-quality PFT patterns that achieve high PSN correlation with functional test. First, a simulation-based don't-care filling algorithm, Bit-Flip, is proposed to improve the PSN for PFT. It relies on randomly flipping a group of bits in the test pattern to explore the search space and find patterns that stress the circuit with worst-case, but close to functional, PSN. Experimental results on un-compacted patterns show that Bit-Flip improves PSN by as much as 38.7% compared with the best random fill. Second, techniques are developed to improve the efficiency of Bit-Flip. A set of partial patterns, which sensitize transitions on critical cells, is pre-computed and later used to guide the selection of bits to flip. Combining random and deterministic flipping, we achieve similar PSN control to Bit-Flip but with much less simulation time. Third, we address the problem of automatic test pattern generation for extracting circuit timing sensitivity to power supply noise during post-silicon validation. A layout-aware path selection algorithm selects long paths that fully span the power delivery network. The selected patterns are intelligently filled to bring the PSN to a desired level.
These patterns can be used to understand timing sensitivity in post-silicon validation by repeatedly applying the path delay test while sweeping the PSN experienced by the path from low to high. Finally, the impact of compression on power supply noise control is studied. Illinois Scan and embedded deterministic test (EDT) patterns are generated, and Bit-Flip is extended to incorporate the compression constraints and applied to compressible patterns. The experimental results show that, relative to un-compacted patterns, EDT lowers the maximal PSN by 24.15% and Illinois Scan lowers it by 2.77%.
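In outline, Bit-Flip is a simulation-guided random search over the don't-care bits of a test pattern. The sketch below captures that loop with a toy PSN metric (adjacent-transition count) standing in for the logic simulation used in the dissertation; all names and parameters are illustrative, not the actual implementation.

```python
import random

def bit_flip_fill(pattern, dont_care, psn_metric, iters=1000, group=4, seed=0):
    """Hypothetical sketch of the Bit-Flip idea: start from a random fill of
    the don't-care bits, then repeatedly flip a small random group of them,
    keeping a flip only if the (simulated) PSN metric increases toward the
    worst case. `pattern` uses None for don't-care positions."""
    rng = random.Random(seed)
    bits = [b if b is not None else rng.randint(0, 1) for b in pattern]
    best = psn_metric(bits)
    for _ in range(iters):
        idx = rng.sample(dont_care, min(group, len(dont_care)))
        for i in idx:
            bits[i] ^= 1              # flip a group of don't-care bits
        score = psn_metric(bits)
        if score > best:
            best = score              # keep: PSN moved toward worst case
        else:
            for i in idx:
                bits[i] ^= 1          # revert the flip
    return bits, best

# Toy PSN proxy: count of adjacent transitions (crude switching activity).
psn = lambda b: sum(x != y for x, y in zip(b, b[1:]))
pattern = [1, None, None, 0, None, 1, None, None]   # None = don't-care bit
dc = [i for i, b in enumerate(pattern) if b is None]
filled, score = bit_flip_fill(pattern, dc, psn)
print(score)
```

In the real algorithm the metric is a full logic simulation of switching activity on the power delivery network, and the acceptance test also enforces that the PSN stays close to functional levels rather than simply maximizing it.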

    The uplift of high voltage transmission tower foundations

    The in-service performance of transmission tower foundation systems is poorly understood. This knowledge deficiency is particularly acute with regard to the dynamic and transient loading of these foundations in uplift. There is also uncertainty surrounding the integrity of existing assets, as design practice appears to overestimate the capacity of the foundations when they are subject to testing. A significant component of the cost of constructing or uprating a high-voltage overhead line route involves the maintenance or reinforcement of the individual transmission tower foundation systems. A more developed understanding of foundation system behaviour is therefore required to facilitate these works in a cost-effective and timely manner. To gain this understanding, a series of full-scale rapid uplift tests was carried out in July 2012. The tests extended the understanding of load-displacement behaviour and loading-rate effects in soils from previous experimental research to field scale, with the associated construction and in situ soil nonlinearities. They made use of modern instrumentation and monitoring techniques in combination with rigorous numerical finite element back-analysis to update understanding of in situ failure mechanisms and to capture uplift capacity enhancements due to rapid loading. The field tests and numerical back-analysis highlighted significant limitations in current design practice, particularly the reliance on an outdated failure mechanism and ultimate limit state criterion. Compared with standard industry practice, the rapid uplift test results suggest that the latter may be unduly conservative, leading to an underestimation of in-service capacities. The results presented will lead to a better understanding of foundation system performance and to more appropriate technical specifications for design and testing practice.

    Coupled Thermal-Hydraulic-Mechanical (THM) modelling of underground gas storage – A case study from the Molasse Basin, South Germany

    Thermal-hydraulic-mechanical (THM) models of gas storage in porous media provide valuable information for a range of applications, from predicting ground surface displacements and determining stress-path changes to deriving the maximum reservoir pressure and storage capacity compatible with fault stability and overburden integrity. The study, conducted in collaboration with research institutes and storage companies in Germany, addresses the numerical modelling of geomechanical effects caused by the storage of methane in a depleted gas field. The geomechanical assessment focuses on a former gas reservoir in the Bavarian Molasse Basin east of Munich, for which a hypothetical conversion into underground gas storage (UGS) is considered. The target reservoir is of Late Oligocene age, i.e., the Chattian Hauptsand, with three gas-bearing layers having a total thickness of 85 m. The reservoir formation is highly porous, with an average porosity of 23% and permeability in the range of 20 mD to 80 mD. The reservoir produced natural gas from 1958 until 1978 and has been in a shut-in phase ever since. Storage operations require a precise understanding of reservoir mechanics and stresses; the selected methodology helps to analyze these issues in detail. The geomechanical analysis is performed with the help of a state-of-the-art THM model with the following objectives: (1) analyze the variation of the principal stress field induced by the field activities; (2) analyze the effective stress changes with changing pore pressure in the short term as well as the long term using hypothetical injection-production schedule cases; (3) predict ground surface displacements over the field; (4) analyze the possible reactivation of faults and fractures as well as the safe storage capacity of the reservoir; and (5) analyze thermal stress changes caused by the injection of colder foreign gas into the underground reservoir.
The methodology comprises 1D mechanical earth modelling (MEM) to calculate elastic properties as well as a first estimate of the vertical and horizontal stresses at well locations from log data. This modelling phase provides a complete analysis of log, core and laboratory data, leading to detailed 1D MEMs for all wells available in the case-study reservoir. This information is then used to populate a 3D finite element MEM, which has been built from seismic data and comprises not only the reservoir but the entire overburden up to the earth's surface as well as part of the underburden. The model measures 30 × 24 × 5 km³, and 3D property modelling has been carried out using a geostatistical approach for property inter-/extrapolation. The pore pressure behaviour in the field has been derived from dynamic fluid-flow simulation through history matching of the production and subsequent shut-in phases. Subsequently, changes in the pore pressure field during injection-production and subsequent shut-in phases are analyzed for weekly and seasonal loading and unloading scenarios. The resulting pore pressure changes are coupled with the 3D geomechanical model to obtain a complete understanding of stress changes during these operations. Two scenario cases consider Germany's surplus electricity from renewable energy sources such as solar and wind in the year 2017; they show that this surplus can be stored in underground gas storage facilities via a Power-to-Gas (PtG) concept and that the stored gas can be reused. Additionally, fault reactivation and thermal stress analyses are performed on the THM model to evaluate the maximum threshold (injection) pressure as well as the safe storage capacity of the reservoir. Fault reactivation already occurs at 1.25 times the initial reservoir pressure, which corresponds to a safe storage rate of 100,000-150,000 m³/day in the case-study reservoir.
The validated THM model is ready to be used for analyzing new wells for future field development and for testing further arbitrary injection-production schedules, among other applications. The methodology can be applied to any UGS facility, not only in the German Molasse Basin but anywhere in the world.
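The fault-reactivation threshold described above can be illustrated with a simple Mohr-Coulomb check: injection raises pore pressure, lowering the effective normal stress on the fault until the shear resistance is overcome. The stresses and friction coefficient below are illustrative values chosen so that slip occurs near 1.25 times the initial pressure, echoing the case-study result; they are not the field's actual state of stress.

```python
def coulomb_slip(sigma_n, tau, pore_pressure, mu=0.6, cohesion=0.0):
    """Mohr-Coulomb fault reactivation check (stresses in MPa): slip occurs
    when shear stress reaches the frictional resistance on the fault plane.
    Increasing pore pressure reduces the effective normal stress."""
    return tau >= cohesion + mu * (sigma_n - pore_pressure)

def max_injection_pressure(p_init, sigma_n, tau, mu=0.6, step_mpa=0.01):
    """Raise reservoir pressure from its initial value until the Coulomb
    criterion is first met; return the threshold as a multiple of p_init."""
    p = p_init
    while not coulomb_slip(sigma_n, tau, p, mu):
        p += step_mpa
    return p / p_init

# Illustrative stresses (MPa) tuned so reactivation occurs at 1.25 x p_init:
print(round(max_injection_pressure(p_init=20.0, sigma_n=45.0, tau=12.0), 2))  # 1.25
```

In the full THM workflow the same criterion is evaluated on every mapped fault element using the coupled stress and pore pressure fields, which is what translates the threshold pressure into a safe daily storage rate.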

    New Carbon Materials from Biomass and Their Applications

    Carbon-based materials, such as chars, activated carbons, one-dimensional carbon nanotubes, and two-dimensional graphene nanosheets, have shown great potential for a wide variety of applications. These materials can be synthesized from any precursor with a high proportion of carbon in its composition. Although fossil fuels have been extensively used as precursors, their unstable cost and supply have led to the synthesis of carbon materials from biomass. Biomass covers all forms of organic material, including living plants, plant waste, and animal waste products. It is an attractive renewable resource because it yields value-added products prepared using environmentally friendly processes. The applications of these biomass-derived carbon materials span electronic, electromagnetic, electrochemical, environmental and biomedical fields. Thus, novel carbon materials from biomass are a subject of intense research, with strong relevance to both science and technology. The main aim of this reprint is to present the most relevant and recent insights in the field of the synthesis of biomass-derived carbons for sustainable applications, including adsorption, catalysis and/or energy storage.