10 research outputs found

    Modeling and Analysis of Large-Scale On-Chip Interconnects

    As IC technologies scale to the nanometer regime, efficient and accurate modeling and analysis of VLSI systems with billions of transistors and interconnects becomes increasingly critical and difficult. VLSI systems subject to increasingly high-dimensional process-voltage-temperature (PVT) variations demand far more modeling and analysis effort than ever before, while the analysis of large-scale on-chip interconnects, which requires solving tens of millions of unknowns, poses great challenges for computer-aided design. This dissertation presents new methodologies for addressing these two challenges in large-scale on-chip interconnect modeling and analysis.

    In the past, standard statistical circuit modeling techniques have usually employed principal component analysis (PCA) and its variants to reduce parameter dimensionality. Although widely adopted, these techniques can be very limited, since parameter dimension reduction is achieved merely by considering the statistical distributions of the controlling parameters while neglecting the important correspondence between these parameters and the circuit performances (responses) under modeling. This dissertation presents a variety of performance-oriented parameter dimension reduction methods that can achieve more than one order of magnitude of parameter reduction for a variety of VLSI circuit modeling and analysis problems.

    The sheer size of present-day power/ground distribution networks makes their analysis and verification extremely runtime- and memory-inefficient and, at the same time, limits the extent to which these networks can be optimized. Given that today's commodity graphics processing units (GPUs) can deliver more than 500 GFLOPS (floating-point operations per second) of computing power and 100 GB/s of memory bandwidth, more than 10X greater than modern general-purpose quad-core microprocessors, it is very desirable to convert this impressive GPU computing power into usable design automation tools for VLSI verification. In this dissertation, for the first time, we show how to exploit recent massively parallel single-instruction multiple-thread (SIMT) GPU platforms to tackle power grid analysis with very promising performance. Our GPU-based network analyzer is capable of solving power grids with tens of millions of nodes in just a few seconds. Additionally, with the above GPU-based simulation framework, the more challenging problem of three-dimensional full-chip thermal analysis can be solved far more efficiently than ever before.
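
    The solver's internals are not described in the abstract; purely as a minimal NumPy sketch of the kind of computation involved (function names and sizes are hypothetical), the following solves a small resistive power grid, with its boundary clamped to vdd and a load current drawn at every node, by Jacobi iteration. Each node's update depends only on the previous sweep, which is exactly the data-parallel structure that maps well onto SIMT GPUs:

```python
import numpy as np

def jacobi_grid(i_load, g=1.0, vdd=1.0, iters=5000):
    """Solve an n x n resistive power grid by Jacobi iteration.

    Each interior node connects to its four neighbours with conductance g
    and draws the load current i_load; the boundary ring is clamped to vdd.
    Every node updates independently per sweep, so a GPU could assign one
    thread per node.
    """
    n = i_load.shape[0]
    v = np.full((n + 2, n + 2), vdd)           # pad with the vdd boundary
    for _ in range(iters):
        nbr = (v[:-2, 1:-1] + v[2:, 1:-1]      # neighbour sum (old sweep)
               + v[1:-1, :-2] + v[1:-1, 2:])
        v[1:-1, 1:-1] = (g * nbr - i_load) / (4 * g)
    return v[1:-1, 1:-1]
```

    With no load the grid floats at vdd; a uniform load produces the familiar IR-drop bowl with its minimum near the grid centre.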

    The design of hardware and signal processing for a stepped frequency continuous wave ground penetrating radar

    Includes bibliographical references.

    A Ground Penetrating Radar (GPR) sensor is required to provide information that allows the user to detect, classify and identify the target. This is an extremely tough requirement, especially when one considers the limited amount of information most GPRs provide to accomplish this task. One way of increasing this information is to capture the complete scattering matrix of the received radar waveform. The objective of this thesis is to develop a signal processing technique to extract polarimetric feature vectors from Stepped Frequency Continuous Wave (SFCW) GPR data. This was achieved by first developing an algorithm to extract the parameters from single-polarization SFCW GPR data and then extending this algorithm to extract target features from fully polarimetric data.

    A model is required to enable the extraction of target parameters from raw radar data. A single-polarization SFCW GPR model is developed based on the radar geometry and linear approximations to the wavenumber in a lossy medium. Assuming high operating frequencies and/or low conductive losses, the model is shown to be equivalent to the exponential model found in signal processing theory. A number of algorithms exist to extract the required target parameters from the measured data in a least-squares sense; in this thesis the Matrix Pencil-of-Function Method is used. Numerical simulations are presented to show the performance of this algorithm for increasing model error. Simulations are also provided to compare the standard Inverse Discrete Fourier Transform (IDFT) with the algorithm presented in this thesis. The processing is applied to two sets of measured radar data using the radar developed in the thesis. The technique was able to locate the position of the scatterers for both sets of data, thus demonstrating the success of the algorithm on practical measurements.

    The single-polarization model is extended to a fully polarimetric SFCW GPR model. The model is shown to relate to the multi-dimensional exponential signal processing model, given certain assumptions about the target scattering damping factor. The multi-snapshot Matrix Pencil-of-Function Method is used to extract the scattering matrix parameters from the raw polarimetric stepped frequency data. Those Huynen target parameters that are independent of the properties of the medium are extracted from the estimated scattering matrices. Simulations are performed to examine the performance of the algorithm for increasing conductive and dielectric losses. The algorithm is also applied to measured data for a number of targets buried a few centimeters below the ground surface, with promising results.

    Finally, the thesis describes the design and development of a low-cost, compact and low-power SFCW GPR system. It addresses both the philosophy and the technology used to develop a 200-1600 MHz and a 1-2 GHz system. The system is built around a dual-synthesizer heterodyne architecture with a single intermediate frequency stage and a novel coherent demodulator with a single reference source. Comparison of the radar system with a commercial impulse system shows that the results are of similar quality. Further measurements demonstrate the radar performance for different field test cases, including the mapping of the bottom of an outdoor test site down to 1.6 m.
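
    The Matrix Pencil-of-Function Method mentioned above has a compact linear-algebra core. As a rough noiseless sketch (a generic textbook variant, not the thesis's implementation), the poles z_i of the exponential model y[n] = Σ_i R_i z_i^n can be read off the dominant eigenvalues of a pencil of two shifted Hankel matrices:

```python
import numpy as np

def matrix_pencil(y, M, L=None):
    """Estimate the M poles z_i of y[n] = sum_i R_i * z_i**n.

    Noiseless sketch: build a Hankel matrix of the samples, split it into
    the two shifted pencils Y1, Y2, and read the signal poles off the
    dominant eigenvalues of pinv(Y1) @ Y2 (the rest are near zero).
    """
    N = len(y)
    L = L or N // 3                    # pencil parameter, M <= L <= N - M
    Y = np.array([[y[m + l] for l in range(L + 1)] for m in range(N - L)])
    Y1, Y2 = Y[:, :-1], Y[:, 1:]
    lam = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
    return lam[np.argsort(-np.abs(lam))][:M]   # keep the M dominant poles
```

    In the SFCW GPR setting each pole encodes a scatterer's range (phase) and loss (magnitude); the residues R_i would then follow from a least-squares fit once the poles are known.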

    Multi-level simulation of nano-electronic digital circuits on GPUs

    Simulation of circuits and faults is an essential part of design and test validation for contemporary nano-electronic digital integrated CMOS circuits. Shrinking technology processes with smaller feature sizes and strict performance and reliability requirements demand not only detailed validation of the functional properties of a design, but also accurate validation of non-functional aspects, including the timing behavior. However, due to the rising complexity of circuit behavior and the steady growth of designs in transistor count, timing-accurate simulation of current designs requires computational effort that can only be handled through proper abstraction and a high degree of parallelization.

    This work presents a simulation model for scalable and accurate timing simulation of digital circuits on data-parallel graphics processing unit (GPU) accelerators. By providing compact modeling and data structures, and by exploiting multiple dimensions of parallelism, the simulation model enables not only fast and timing-accurate simulation at logic level, but also massively parallel simulation with switch-level accuracy. The model facilitates extensions for fast and efficient fault simulation of small delay faults at logic level, as well as of first-order parametric and parasitic faults at switch level. With the parallelization on GPUs, detailed and scalable simulation is enabled that is applicable even to multi-million-gate designs. In this way, comprehensive analyses of realistic timing-related faults in the presence of process and parameter variations are enabled for the first time. Additional simulation efficiency is achieved by merging the presented methods into a unified simulation model that combines the unique advantages of the different levels of abstraction in a mixed-abstraction multi-level simulation flow to reach even higher speedups.

    Experimental results show that the implemented parallel approach achieves unprecedented simulation throughput as well as high speedup compared to conventional timing simulators. The underlying model scales to multi-million-gate designs and gives detailed insight into the timing behavior of digital CMOS circuits, thereby enabling large-scale applications to aid even highly complex design and test validation tasks.
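
    The simulator's internals are not given in the abstract; purely as an illustration of one of the dimensions of parallelism involved, the hypothetical NumPy sketch below evaluates a levelized gate netlist with pattern-parallelism: all test patterns are processed simultaneously along the second array axis, much as the threads of a GPU warp would process them:

```python
import numpy as np

AND, OR, NOT = 0, 1, 2  # hypothetical gate-type codes

def simulate(levels, num_signals, inputs):
    """Levelized logic simulation with pattern-parallelism.

    levels: list of (gate_type, out, in_a, in_b) index arrays per level;
    inputs: boolean array (num_primary_inputs, num_patterns).
    Returns the value of every signal for every pattern.
    """
    P = inputs.shape[1]
    v = np.zeros((num_signals, P), dtype=bool)
    v[:inputs.shape[0]] = inputs                 # load primary inputs
    for gtype, out, a, b in levels:              # one vectorized sweep per level
        va, vb = v[a], v[b]
        res = np.where(gtype[:, None] == AND, va & vb,
              np.where(gtype[:, None] == OR, va | vb, ~va))
        v[out] = res
    return v
```

    A real logic-level simulator would add timing annotation and event handling; here each sweep over a level is a single vectorized operation over all gates of that level and all patterns at once.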

    A Multidisciplinary Analysis of Frequency Domain Metal Detectors for Humanitarian Demining

    This thesis details an analysis of metal detectors (low-frequency electromagnetic induction devices), with emphasis on Frequency Domain (FD) systems and the operational conditions of interest to humanitarian demining. After an initial look at humanitarian demining and a review of the basic principles of metal detectors, we turn our attention to electromagnetic induction modelling and to analytical solutions of some basic FD direct (forward) problems. The second half of the thesis then focuses on the analysis of an extensive amount of experimental data. The possibility of target classification is discussed first on a qualitative basis, then quantitatively. Finally, we discuss shape and size determination via near-field imaging.
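
    The thesis's own forward models are not reproduced here, but a standard building block in FD electromagnetic induction work (used below only as a generic illustration, not as the thesis's model) is the single-relaxation induced-dipole response. Its in-phase and quadrature parts vary with the product of angular frequency and the target time constant, which is what makes frequency sweeps useful for classification:

```python
import numpy as np

def emi_response(freqs_hz, tau):
    """In-phase/quadrature parts of a single-relaxation EMI target model.

    h(w) = jw*tau / (1 + jw*tau): the quadrature part peaks at w*tau = 1,
    so sweeping frequency localizes the target's time constant tau.
    """
    s = 1j * 2 * np.pi * np.asarray(freqs_hz, dtype=float) * tau
    h = s / (1 + s)
    return h.real, h.imag
```

    At f = 1/(2*pi*tau) the in-phase and quadrature parts are both 0.5 and the quadrature part is at its peak; locating that peak estimates tau, a size- and conductivity-dependent target signature.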

    Groundborne vibrations caused by railway construction and operation in buildings : design, implementation and analysis of measurement for assessment of human exposure

    Environmental issues surrounding railway operation and construction have become more prominent in recent years, increasing the need for administrators and researchers to understand how residents living around railways respond to the noise and vibration they generate. Within this context, the University of Salford, as part of the Defra-funded project “Human response to vibration in residential environments” (NANR209), has derived exposure-response relationships for railway traffic and construction from a population sample of 1281 people: 931 for railway traffic and 350 for railway construction. Vibration measurements within residences have been used to assess human exposure to vibration, alongside a social study questionnaire based on face-to-face interviews to quantify the human response.

    The first part of this work is concerned with the exposure side of NANR209. The design and implementation of the measurement methodologies are presented and discussed; these provide exposure data suitable for building an exposure-response relationship for vibration caused by the sources mentioned above. In light of the large amount of vibration data gathered during the project, the analysis of vibration signals is considered in the second part of the dissertation. Two aspects connected with the assessment of human exposure to vibration are investigated: wave-field assessment and ground-to-building transmissibility analysis.
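
    The ground-to-building transmissibility analysis mentioned last can be sketched, under the usual linear-system assumptions, as a cross-spectral (H1-style) estimate between a ground-borne vibration signal and the corresponding in-building signal. This bare-bones NumPy version is illustrative only; a real analysis would add windowing, segment overlap and coherence checks:

```python
import numpy as np

def transmissibility(x, y, nseg=16):
    """H1 estimate |Sxy / Sxx| of ground-to-building transmissibility.

    x: ground signal, y: building signal. Spectra are averaged over
    nseg non-overlapping segments (a bare-bones Welch estimate).
    """
    n = len(x) // nseg
    X = np.fft.rfft(np.reshape(x[: n * nseg], (nseg, n)), axis=1)
    Y = np.fft.rfft(np.reshape(y[: n * nseg], (nseg, n)), axis=1)
    sxx = np.mean(np.conj(X) * X, axis=0).real   # averaged auto-spectrum
    sxy = np.mean(np.conj(X) * Y, axis=0)        # averaged cross-spectrum
    return np.abs(sxy / sxx)
```

    For a perfectly linear, noiseless transmission path the estimate reproduces the path's frequency-response magnitude at every bin.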

    Mining Safety and Sustainability I

    Safety and sustainability are becoming ever greater challenges for the mining industry as mining depth increases. It is of great significance to reduce the risk of mining accidents, enhance the safety of mining operations, and improve the efficiency and sustainability of mineral resource development. This book provides a platform for presenting new research and recent advances in mining safety and sustainability. More specifically, Mining Safety and Sustainability presents recent theoretical and experimental studies with a focus on safe mining, green mining, intelligent mining and mines, sustainable development, mine risk management, ecological restoration of mines, mining methods and technologies, and damage monitoring and prediction. It is further intended to provide theoretical and technical support for guiding the normative, green, safe and sustainable development of the mining industry.

    Methods for modeling degradation of electrical engineering components for lifetime prognosis

    The reliability of electrical components is studied to improve product quality and to plan maintenance in case of failure. Reliability is measured by studying the causes of failure and the mean time to failure. One of the methods applied in this field is the study of component aging, because failure often occurs after degradation. The objective of this thesis is to model the degradation of electrical engineering components in order to estimate their lifetime. More specifically, this thesis studies large-area organic white light sources (OLEDs). These sources offer several advantages in the world of lighting thanks to their thinness, their low energy consumption and their ability to adapt to a wide range of applications. The second component studied is the electrical insulation of twisted pairs of copper wires, which are commonly used in low-voltage electrical machines.

    First, the degradation and failure mechanisms of the various electrical components, including OLEDs and insulators, are studied in order to identify the operational stresses to include in the aging model. After identifying the main causes of aging, general physical models are studied to quantify the effects of operational stresses. Empirical models are also presented for cases where the physics of degradation is unknown or difficult to model. Next, methods for estimating the parameters of these models are presented, such as multilinear and nonlinear regression, as well as stochastic methods. Other methods based on artificial intelligence and online diagnosis are also presented, but they are not studied in this thesis.

    These methods are applied to degradation data from organic LEDs and twisted-pair insulators. For this purpose, accelerated, multifactor aging test benches are designed based on factorial experimental designs and response surface methods, in order to optimize the cost of the experiments. A measurement protocol is then described, in order to optimize the inspection time and collect periodic data. Unconstrained deterministic degradation models are first estimated from the measured data, and the best empirical model of the degradation trajectory is chosen based on model selection criteria. In a second step, the parameters of the degradation trajectories are modeled as functions of the operational stresses. The parameters of the aging factors and their interactions are estimated by multilinear regression on different learning sets, and the significance of the parameters is evaluated by statistical methods where possible. Finally, the lifetimes of the experiments in the validation sets are predicted from the parameters estimated on the different learning sets; the learning set with the best lifetime prediction rate is considered the best.
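
    As a toy illustration of the trajectory-fitting and lifetime-extrapolation steps described above (the model choice and all numbers are hypothetical, not taken from the thesis), a stretched-exponential luminance-decay trajectory can be fitted by log-log linearization and then extrapolated to a threshold lifetime such as LT70:

```python
import numpy as np

def fit_stretched_exp(t, y):
    """Fit y = exp(-(t/tau)**beta) by log-log linearization.

    ln(-ln y) = beta*ln t - beta*ln tau, so an ordinary least-squares
    line through (ln t, ln(-ln y)) yields both parameters.
    """
    beta, intercept = np.polyfit(np.log(t), np.log(-np.log(y)), 1)
    tau = np.exp(-intercept / beta)
    return beta, tau

def lt_lifetime(beta, tau, level=0.7):
    """Time at which the fitted trajectory crosses `level` (e.g. LT70)."""
    return tau * (-np.log(level)) ** (1 / beta)
```

    In the thesis's workflow such a fit would be one candidate among several empirical models, compared via model selection criteria before the lifetime is extrapolated from the chosen trajectory.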