
    Cramer-Rao bounds in the estimation of time of arrival in fading channels

    This paper computes the Cramer-Rao bounds for time-of-arrival estimation in a multipath Rice and Rayleigh fading scenario, conditioned on a previously estimated set of propagation channels, since these channel estimates (the correlation between the received signal and the pilot sequence) are sufficient statistics for the estimation of the delays. Furthermore, channel estimation is a constituent block of receivers, so we can take advantage of this information to improve timing estimation by using time and space diversity. The received signal is modeled as coming from a scattering environment that disperses the signal in both space and time. Spatial scattering is modeled with a Gaussian distribution and temporal dispersion as an exponential random variable. The impact of the sampling rate, the roll-off factor, the spatial and temporal correlation among channel estimates, the number of channel estimates, and the use of multiple sensors in the receiver antenna is studied and related to the problem of mobile subscriber positioning. To our knowledge, this model is unique in relating space-time diversity to the accuracy of timing estimation.
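    As a point of reference for the quantities this abstract highlights (roll-off factor, bandwidth, SNR), the sketch below evaluates the textbook single-path AWGN Cramer-Rao bound on time of arrival for a root-raised-cosine pulse. It is a minimal illustration of the bound's structure, not the paper's space-time Rice/Rayleigh model; the function name and all numbers are ours.

    ```python
    import numpy as np

    def toa_crb(e_n0_db, rolloff, T=1.0, nf=4096):
        """Classical single-path AWGN CRB on time of arrival:
        var(tau_hat) >= 1 / (2 * (E/N0) * B2), where B2 is the
        mean-square (Gabor) bandwidth of the pulse in (rad/s)^2.
        Pulse: root-raised-cosine, so |S(f)|^2 is the raised-cosine
        spectrum with symbol period T and roll-off factor `rolloff` > 0."""
        f = np.linspace(-(1 + rolloff) / (2 * T), (1 + rolloff) / (2 * T), nf)
        f1 = (1 - rolloff) / (2 * T)              # edge of the flat band
        S2 = np.where(np.abs(f) <= f1, T,
                      0.5 * T * (1 + np.cos(np.pi * T / rolloff
                                            * (np.abs(f) - f1))))
        # Grid spacing cancels in the ratio, so plain sums suffice.
        B2 = np.sum((2 * np.pi * f) ** 2 * S2) / np.sum(S2)
        return 1.0 / (2.0 * 10 ** (e_n0_db / 10.0) * B2)

    # Larger roll-off => larger mean-square bandwidth => tighter bound:
    for beta in (0.25, 0.5, 1.0):
        print(beta, np.sqrt(toa_crb(10.0, beta)))  # RMS bound, units of T
    ```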

    Physically justifiable die-level modeling of spatial variation in view of systematic across wafer variability


    Architectural level delay and leakage power modelling of manufacturing process variation

    PhD Thesis. The effect of manufacturing process variations has become a major issue in the estimation of circuit delay and power dissipation, and will gain more importance in the future as device scaling continues in order to satisfy marketplace demands for circuits with greater performance and functionality per unit area. Statistical modelling and analysis approaches have been widely used to reflect the effects of a variety of variational process parameters on system performance factors, which are described as probability density functions (PDFs). At present, most investigations into statistical models have been limited to small circuits such as a logic gate. However, the massive size of present-day electronic systems precludes the use of design techniques which consider a system to comprise these basic gates, as this level of design is very inefficient and error prone. This thesis proposes a methodology to bring the effects of process variation from the transistor level up to the architectural level in terms of circuit delay and leakage power dissipation. Using a first-order canonical model and a statistical analysis approach, a statistical cell library has been built which comprises not only the basic gate cell models, but also more complex functional blocks such as registers, FIFOs, counters, and ALUs. Furthermore, other factors to which overall system performance is sensitive, such as input signal slope, output load capacitance, different signal switching cases, and transition types, are also taken into account for each cell in the library, which makes it adaptive to incremental circuit design. The proposed methodology enables an efficient analysis of process variation effects on system performance with significantly reduced computation time compared to the Monte Carlo simulation approach. As a demonstration vehicle for this technique, the delay and leakage power distributions of a 2-stage asynchronous micropipeline circuit have been simulated using this cell library. The experimental results show that the proposed method can predict the delay and leakage power distributions with less than 5% error and with computation at least 50,000 times faster than a 5000-sample SPICE-based Monte Carlo simulation. The methodology presented here for modelling process variability plays a significant role in Design for Manufacturability (DFM) by quantifying the direct impact of process variations on system performance. The advantages of being able to undertake this analysis at a high level of abstraction, and thus early in the design cycle, are twofold. First, if the predicted effects of process variation render the circuit performance outwith specification, design modifications can be readily incorporated to rectify the situation. Second, knowing the acceptable limits of process variation for maintaining design performance within its specification, informed choices can be made regarding the implementation technology and the manufacturer selected to fabricate the design.
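    The "first-order canonical model" the thesis builds on is commonly written as a nominal value plus linear sensitivities to shared (globally correlated) process parameters plus an independent random term. Under that reading, the hedged sketch below shows why the form composes cheaply across cells, which is what makes a statistical cell library tractable; the class name and all numbers are illustrative, not taken from the thesis.

    ```python
    import numpy as np

    class CanonicalDelay:
        """First-order canonical form: d = mean + a . dX + b * dR,
        where dX are shared unit-normal process parameters and dR is
        an independent unit-normal random term."""
        def __init__(self, mean, a, b):
            self.mean = float(mean)
            self.a = np.asarray(a, dtype=float)  # shared-parameter sensitivities
            self.b = float(b)                    # independent-term sensitivity

        def __add__(self, other):
            # Series composition (path delay): shared terms add coherently,
            # independent terms add root-sum-square.
            return CanonicalDelay(self.mean + other.mean,
                                  self.a + other.a,
                                  np.hypot(self.b, other.b))

        @property
        def sigma(self):
            return float(np.sqrt(self.a @ self.a + self.b ** 2))

    # Two cells in series: the path delay stays in canonical (Gaussian) form.
    inv = CanonicalDelay(10e-12, [1.0e-12, 0.5e-12], 0.3e-12)
    nand = CanonicalDelay(15e-12, [1.2e-12, 0.4e-12], 0.5e-12)
    path = inv + nand
    print(path.mean, path.sigma)
    ```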

    Statistical Yield Analysis and Design for Nanometer VLSI

    Process variability is the pivotal factor impacting the design of high-yield integrated circuits and systems in deep sub-micron CMOS technologies. The electrical and physical properties of transistors and interconnects, the building blocks of integrated circuits, are prone to significant variations that directly impact performance and power consumption and severely degrade manufacturing yield. Moreover, the large number of transistors on a single chip adds further challenges to the analysis of variation effects, a critical task in diagnosing the cause of failure and designing for yield. Reliable and efficient statistical analysis methodologies in the various design phases are key to predicting yield before entering the expensive fabrication process. In this thesis, the impacts of process variations are examined at three different levels: device, circuit, and micro-architecture. Variation models are provided for each level of abstraction, and new methodologies are proposed for efficient statistical analysis and design under variation. At the circuit level, the variability analysis of three crucial sub-blocks of today's systems-on-chip, namely digital circuits, memory cells, and analog blocks, is targeted. Accurate and efficient yield analysis of circuits is recognized as an extremely challenging task within the electronic design automation community. The large scale of digital circuits, the extremely high yield requirement for memory cells, and the time-consuming simulation of analog circuits are major concerns in the development of any statistical analysis technique. In this thesis, several sampling-based methods are proposed for these three types of circuits to significantly improve the run-time of the traditional Monte Carlo method without compromising accuracy. The proposed sampling-based yield analysis methods retain the most appealing feature of the MC method, namely the capability to handle any complex circuit model, while advanced variance-reduction and sampling methods provide ultra-fast yield estimation for different types of VLSI circuits. Such methods include control variates, importance sampling, correlation-controlled Latin hypercube sampling, and quasi-Monte Carlo. At the device level, a methodology is proposed which introduces a variation-aware design perspective for MOS devices in aggressively scaled geometries. The method introduces a device-level yield measure targeting the saturation and leakage currents of an MOS transistor, and a statistical method is developed to optimize the advanced doping profiles and geometry features of a device for maximum device-level yield. Finally, a statistical thermal analysis framework is proposed which accounts for process and thermal variations simultaneously at the micro-architectural level. The analyzer is built on the fact that process variations lead to uncertain leakage power sources, so that the thermal profile itself is probabilistic; a combined process-thermal-leakage analysis therefore yields a more reliable full-chip statistical leakage power yield.
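    Among the variance-reduction techniques listed, importance sampling illustrates the speed-up mechanism most directly: sample from a density that actually hits the rare failure region, then reweight by the likelihood ratio. The sketch below uses a toy spherical failure region in place of a circuit simulator; it is illustrative only, not one of the thesis's methods verbatim.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d, n = 6, 100_000          # process parameters, samples

    def fails(x):
        """Toy pass/fail test standing in for a full circuit simulation:
        fail when the standardized parameter vector drifts far from nominal."""
        return np.linalg.norm(x, axis=1) > 5.0

    # Plain Monte Carlo: the failure event is rare, so the estimate is noisy.
    p_mc = fails(rng.standard_normal((n, d))).mean()

    # Importance sampling: draw from a widened density q = N(0, s^2 I),
    # then reweight each sample by the likelihood ratio p(y)/q(y).
    s = 2.0
    y = s * rng.standard_normal((n, d))
    log_w = d * np.log(s) - 0.5 * (y ** 2).sum(axis=1) * (1.0 - 1.0 / s ** 2)
    p_is = np.mean(fails(y) * np.exp(log_w))

    print(f"failure probability: MC {p_mc:.2e}, IS {p_is:.2e}")
    # Yield estimate = 1 - failure probability; IS concentrates samples
    # where failures occur, cutting the variance of the estimate.
    ```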

    Hybrid Gate-Level Leakage Model for Monte Carlo Analysis on Multiple GPUs

    This paper proposes a hybrid gate-level leakage model for use with the Monte Carlo (MC) analysis approach, which combines a lookup table (LUT) model with a first-order exponential-polynomial model (the first-order model, herein). For process parameters with strongly nonlinear relationships to the logarithm of leakage current, the proposed model uses the LUT approach for the sake of modeling accuracy; for the other process parameters, it uses the first-order model for efficiency. During library characterization for each type of logic gate, the proposed approach determines the process parameters for which the LUT model will be used, and it determines the number of LUT data points that maximizes analysis efficiency with acceptable accuracy, based on a user-defined threshold. The proposed model was implemented for gate-level MC leakage analysis using three graphics processing units. In experiments, the proposed approach exhibited average errors below 5% in both mean and standard deviation with reference to SPICE-level MC leakage analysis, whereas MC analysis with the first-order model alone exhibited errors of more than 90%. In CPU time, the proposed hybrid approach took only two to five times longer. Compared with the full LUT model, the proposed hybrid model was up to one hundred times faster while increasing the average errors by only 3%. Finally, the proposed approach completed a leakage analysis of an OpenSparc T2 core of 4.5 million gates in a runtime of less than 5 minutes.
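    On one common reading of such a hybrid model, log-leakage is table-interpolated along the strongly nonlinear parameters and extended linearly in the rest. The sketch below is a hedged, one-dimensional-LUT illustration of that split; the class name, parameter choices, LUT dimensionality, and characterization numbers are ours, not reproduced from the paper.

    ```python
    import numpy as np

    class HybridGateLeakage:
        """Hybrid leakage model sketch: log(I_leak) = LUT(x_nl) + c . x_lin.
        x_nl  : parameter with a strongly nonlinear effect on log-leakage
                (handled by table lookup with interpolation),
        x_lin : remaining parameters, handled by first-order sensitivities."""
        def __init__(self, grid, log_i_at_grid, lin_sens):
            self.grid = np.asarray(grid, float)         # LUT sample points
            self.tab = np.asarray(log_i_at_grid, float) # log-leakage at points
            self.c = np.asarray(lin_sens, float)        # linear sensitivities

        def leakage(self, x_nl, x_lin):
            log_i = np.interp(x_nl, self.grid, self.tab) \
                    + self.c @ np.asarray(x_lin, float)
            return np.exp(log_i)

    # Illustrative characterization data: log-leakage of one gate versus a
    # threshold-voltage shift, plus two linear sensitivities.
    gate = HybridGateLeakage(grid=np.linspace(-0.06, 0.06, 7),
                             log_i_at_grid=[-17.2, -18.0, -18.9, -19.8,
                                            -20.6, -21.3, -21.9],
                             lin_sens=[0.8, -0.3])

    rng = np.random.default_rng(0)
    samples = [gate.leakage(rng.normal(0, 0.02), rng.normal(0, 1, 2))
               for _ in range(10_000)]          # per-gate Monte Carlo sweep
    print(np.mean(samples), np.std(samples))
    ```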

    Non-invasive IC tomography using spatial correlations

    We introduce a new methodology for post-silicon characterization of gate-level variations in a manufactured Integrated Circuit (IC). The estimated characteristics are based on power and delay measurements, which are affected by the process variations. Because the power (delay) variations are spatially correlated, there exists a basis in which the variations are sparse; this sparse representation suggests using L1-regularization (compressive sensing theory). We show how to use compressive sensing theory to improve post-silicon characterization. We also address the problem by adding spatial constraints directly to the traditional L2-minimization. The proposed methodology is fast, inexpensive, non-invasive, and applicable to legacy designs. Non-invasive IC characterization has a range of emerging applications, including post-silicon optimization, IC identification, and variation modeling and simulation. Evaluation results on standard benchmark circuits show that, on average, the accuracy of gate-level characteristic estimation can be improved more than twofold using the proposed methods.
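    To make the L1 route concrete: if the per-gate variation map is sparse in a frequency-type basis (here a DCT, as a stand-in for whatever basis the spatial correlation induces), a handful of aggregate measurements can recover it by L1-regularized least squares. The following is a hedged toy instance with a random measurement matrix standing in for actual power/delay test configurations, and a regularization weight tuned by hand for this example.

    ```python
    import numpy as np
    from scipy.fft import idct

    rng = np.random.default_rng(1)
    n, m, k = 256, 64, 8                 # gates, measurements, sparsity

    # Gate-level variations assumed sparse in the DCT basis.
    c_true = np.zeros(n)
    c_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    psi = idct(np.eye(n), norm='ortho', axis=0)   # inverse-DCT basis matrix
    g = psi @ c_true                              # per-gate variation map

    A = rng.standard_normal((m, n)) / np.sqrt(m)  # measurement matrix
    y = A @ g                                     # aggregate measurements

    # ISTA for  min ||A psi c - y||^2 / 2 + lam * ||c||_1
    B = A @ psi
    lam, L = 0.05, np.linalg.norm(B, 2) ** 2      # L: gradient Lipschitz const.
    c = np.zeros(n)
    for _ in range(500):
        z = c - B.T @ (B @ c - y) / L             # gradient step
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold

    print(np.linalg.norm(psi @ c - g) / np.linalg.norm(g))  # relative error
    ```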

    Quantum metrology and its application in biology

    Quantum metrology provides a route to overcome practical limits in sensing devices. It holds particular relevance for biology, where sensitivity and resolution constraints restrict applications both in fundamental biophysics and in medicine. Here, we review quantum metrology from this biological context, focusing on optical techniques due to their particular relevance for biological imaging, sensing, and stimulation. Our understanding of quantum mechanics has already enabled important applications in biology, including positron emission tomography (PET) with entangled photons, magnetic resonance imaging (MRI) using nuclear magnetic resonance, and bio-magnetic imaging with superconducting quantum interference devices (SQUIDs). In quantum metrology an even greater range of applications arises from the ability not just to understand, but to engineer, coherence and correlations at the quantum level. In the past few years, dramatic progress has been made in applying these ideas to biological systems. Demonstrated capabilities include enhanced sensitivity and resolution, immunity to imaging artifacts and technical noise, and characterization of the biological response to light at the single-photon level. New quantum measurement techniques offer even greater promise, raising the prospect of improved multi-photon microscopy and magnetic imaging, among many other possible applications. Realizing this potential will require cross-disciplinary input from researchers in both biology and quantum physics. In this review we seek to communicate the developments of quantum metrology in a way that is accessible to biologists and biophysicists, while providing sufficient detail to allow the interested reader to obtain a solid understanding of the field. We further seek to introduce quantum physicists to some of the central challenges of optical measurements in biological science.
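    The quantitative core behind "enhanced sensitivity" is the scaling of phase-estimation precision with photon number N: classical (coherent) probes are bound by the shot-noise limit, while entangled probes can in principle approach the Heisenberg limit,

    ```latex
    \Delta\phi_{\mathrm{SNL}} \sim \frac{1}{\sqrt{N}}
    \qquad \text{vs.} \qquad
    \Delta\phi_{\mathrm{HL}} \sim \frac{1}{N},
    ```

    so for a photon budget fixed by photodamage constraints, which is the regime relevant to biology, the potential precision advantage grows with the square root of N. These are the standard textbook limits, stated here only to anchor the review's claims.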