Sub-Nyquist Sampling: Bridging Theory and Practice
Sampling theory encompasses all aspects related to the conversion of
continuous-time signals to discrete streams of numbers. The famous
Shannon-Nyquist theorem has become a landmark in the development of digital
signal processing. In modern applications, an increasing number of functions
is being pushed into sophisticated software algorithms, leaving only the
delicate, finely-tuned tasks to the circuit level.
In this paper, we review sampling strategies which target reduction of the
ADC rate below Nyquist. Our survey covers classic works from the early 1950s
through recent publications of the past several years.
The prime focus is bridging theory and practice, that is, to pinpoint the
potential of sub-Nyquist strategies to move from the mathematics to the hardware. In
that spirit, we integrate contemporary theoretical viewpoints, which study
signal modeling in a union of subspaces, together with a taste of practical
aspects, namely how the avant-garde modalities boil down to concrete signal
processing systems. We hope that this presentation style will attract the
interest of both researchers and engineers, promote the adoption of
sub-Nyquist strategies in practical applications, and encourage further
research into this exciting frontier.
Comment: 48 pages, 18 figures, to appear in IEEE Signal Processing Magazine
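As a toy illustration of the sub-Nyquist premise, and not a system drawn from the survey, the sketch below reconstructs a DCT-sparse signal from one eighth of its Nyquist-grid samples via compressed sensing; the sparsity model, the random sample selection, and the use of orthogonal matching pursuit are all assumptions made for illustration.

```python
# A toy, hedged illustration of the sub-Nyquist premise, not a system from
# this survey: a signal sparse in a known basis can be reconstructed from far
# fewer samples than the Nyquist grid provides. The DCT-sparse model, random
# sample selection, and orthogonal matching pursuit are illustrative choices.
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
N, M, K = 512, 64, 3             # Nyquist grid, sub-Nyquist samples, sparsity

c = np.zeros(N)                  # K-sparse coefficients in the DCT domain
c[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
Psi = idct(np.eye(N), axis=0, norm="ortho")  # DCT synthesis dictionary
x = Psi @ c                                  # time-domain signal

keep = np.sort(rng.choice(N, size=M, replace=False))  # keep M of N samples
y = x[keep]

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=K)    # sparse recovery
omp.fit(Psi[keep, :], y)
print("true support:     ", np.flatnonzero(c))
print("recovered support:", np.flatnonzero(omp.coef_))
```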
A Review of Bayesian Methods in Electronic Design Automation
The utilization of Bayesian methods has been widely acknowledged as a viable
solution for tackling various challenges in electronic integrated circuit (IC)
design under stochastic process variation, including circuit performance
modeling, yield/failure rate estimation, and circuit optimization. As the
post-Moore era brings about new technologies (such as silicon photonics and
quantum circuits), many of the associated issues are similar to those
encountered in electronic IC design and can be addressed using Bayesian
methods. Motivated by this observation, we present a comprehensive review of
Bayesian methods in electronic design automation (EDA). By doing so, we hope to
equip researchers and designers with the ability to apply Bayesian methods in
solving stochastic problems in electronic circuits and beyond.
Comment: 24 pages, a draft version. We welcome comments and feedback, which
can be sent to [email protected]
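As a concrete, hedged example of one such method, the sketch below fits a conjugate Bayesian linear regression to a toy circuit performance model under process variation; the quadratic "circuit", the polynomial features, and the prior and noise precisions are illustrative assumptions, not examples from the review.

```python
# A minimal, hedged example of one Bayesian method the review covers:
# conjugate Bayesian linear regression for circuit performance modeling under
# process variation. The toy model and precisions are assumptions.
import numpy as np

rng = np.random.default_rng(1)

def delay(p):
    # toy performance model: delay responds quadratically to two process params
    return 1.0 + 0.8 * p[:, 0] - 0.5 * p[:, 1] + 0.3 * p[:, 0] ** 2

P = rng.standard_normal((50, 2))                 # sampled process variations
y = delay(P) + 0.05 * rng.standard_normal(50)    # noisy "measurements"

# polynomial features: [1, p1, p2, p1^2, p1*p2, p2^2]
Phi = np.column_stack([np.ones(50), P[:, 0], P[:, 1],
                       P[:, 0] ** 2, P[:, 0] * P[:, 1], P[:, 1] ** 2])

alpha, beta = 1.0, 1.0 / 0.05 ** 2               # prior and noise precisions
S = np.linalg.inv(alpha * np.eye(6) + beta * Phi.T @ Phi)  # posterior cov
m = beta * S @ Phi.T @ y                                   # posterior mean

# predictive mean and variance at a new process corner (p1, p2) = (1.5, -1.0)
phi_new = np.array([1, 1.5, -1.0, 2.25, -1.5, 1.0])
mean, var = phi_new @ m, 1 / beta + phi_new @ S @ phi_new
print(f"predicted delay: {mean:.3f} +/- {np.sqrt(var):.3f}")
```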
Regression modeling for digital test of ΣΔ modulators
The cost of Analogue and Mixed-Signal circuit
testing is an important bottleneck in the industry, due to time-consuming
verification of specifications that require state-of-the-art
Automatic Test Equipment. In this paper, we apply
the concept of Alternate Test to achieve digital testing of
converters. By training an ensemble of regression models that
maps simple digital defect-oriented signatures onto Signal to
Noise and Distortion Ratio (SNDR), an average error of 1.7%
is achieved. Beyond the inference of functional metrics, we show
that the approach can provide useful diagnosis information.
Ministerio de Educación y Ciencia TEC2007-68072/MIC; Junta de Andalucía TIC 5386, CT 30
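A minimal sketch of this alternate-test flow, under assumptions: synthetic signature data stand in for real defect-oriented measurements, and a random forest is used as one possible instantiation of the ensemble of regression models.

```python
# A minimal sketch of the alternate-test flow above: learn a regression map
# from cheap digital defect-oriented signatures to SNDR. The synthetic data
# and the random-forest ensemble are assumptions, not the paper's exact setup.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# synthetic devices: 12 digital signatures each, plus a measured SNDR in dB
n_devices = 400
X = rng.standard_normal((n_devices, 12))
sndr = 70 + 4 * X[:, 0] - 3 * X[:, 1] + rng.normal(0, 0.5, n_devices)

X_tr, X_te, y_tr, y_te = train_test_split(X, sndr, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

rel_err = np.mean(np.abs(model.predict(X_te) - y_te) / y_te) * 100
print(f"mean relative SNDR error: {rel_err:.2f}%")
```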
Memristors for the Curious Outsiders
We present both an overview and a perspective of recent experimental advances
and proposed new approaches to performing computation using memristors. A
memristor is a 2-terminal passive component with a dynamic resistance depending
on an internal parameter. We provide a brief historical introduction, as well
as an overview of the physical mechanisms that lead to memristive behavior.
This review is meant to guide nonpractitioners in the field of memristive
circuits and their connection to machine learning and neural computation.
Comment: Perspective paper for MDPI Technologies; 43 pages
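To make that definition concrete, the sketch below simulates a memristor under the linear ionic drift model of Strukov et al. (2008); all parameter values are illustrative assumptions.

```python
# A minimal simulation of a memristor under the linear ionic drift model
# (Strukov et al., 2008). All parameter values are illustrative assumptions;
# real devices follow richer dynamics.
import numpy as np

R_on, R_off = 100.0, 16e3        # limiting resistances (ohm)
mu_v, D = 1e-14, 1e-8            # dopant mobility (m^2 V^-1 s^-1), length (m)
w = 0.1 * D                      # internal state: doped-region width (m)

dt = 1e-5
t = np.arange(0.0, 0.2, dt)
v = np.sin(2 * np.pi * 10 * t)   # 10 Hz sinusoidal drive (V)

i = np.zeros_like(t)
for k, vk in enumerate(v):
    R = R_on * (w / D) + R_off * (1 - w / D)  # state-dependent resistance
    i[k] = vk / R
    w += mu_v * R_on / D * i[k] * dt          # linear drift of the state
    w = min(max(w, 0.0), D)                   # keep the state physical

# Plotting i against v would trace the pinched hysteresis loop that is the
# signature of memristive behavior.
print(f"final doped-region fraction w/D = {w/D:.2f}")
```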
Stochastic Yield Analysis of Rare Failure Events in High-Dimensional Variation Space
As the semiconductor industry keeps shrinking feature sizes to the nanometer scale, circuit reliability has become an area of growing concern due to the uncertainty introduced by process variations. For highly replicated standard cells, the failure event for each individual component must be extremely rare in order to maintain a sufficiently high yield. Existing yield analysis approaches work well in low dimensions, but become less effective either when there are a large number of circuit parameters or when the failure samples are distributed across multiple regions. This thesis proposes four novel high-sigma analysis approaches.
First, we propose an adaptive importance sampling (AIS) algorithm. Whereas existing methods pre-decide a static sampling distribution, AIS adjusts its sampling regions over several iterations. At each iteration, AIS generates samples from the current proposal distribution, then carefully assigns each sample a weight based on its tilted occurrence probability between the failure region and the current proposal distribution. We design two adaptive frameworks, based on resampling and on population Metropolis-Hastings (MH), to iteratively search for failure regions.
Second, we develop an Adaptive Clustering and Sampling (ACS) method to estimate the failure rate of high-dimensional, multi-failure-region circuit cases. The basic idea of the algorithm is to cluster failure samples and build a global sampling distribution at each iteration. Specifically, in the clustering step, we propose a multi-cone clustering method, which partitions the parametric space and clusters the failure samples. A global sampling distribution is then constructed from a set of weighted Gaussian distributions. Next, we calculate an importance weight for each sample based on the discrepancy between the sampling distribution and the target distribution, and the failure probability is updated at the end of each iteration. This clustering and sampling procedure proceeds iteratively until all the failure regions are covered.
Moreover, two meta-model based approaches are proposed for high-sigma analysis. The Low-Rank Tensor Approximation (LRTA) formulates the meta-model in tensor space by representing a multi-way tensor as a finite sum of rank-one tensors. The polynomial degree of the LRTA model grows linearly with the circuit dimension, which makes it especially promising for high-dimensional circuit problems. We solve the LRTA model efficiently with a robust greedy algorithm and calibrate it iteratively with an adaptive sampling method. The meta-model based importance sampling (MIS) method uses a Gaussian process meta-model to construct a quasi-optimal importance sampling distribution, and performs Markov chain Monte Carlo (MCMC) simulation to generate new samples from the proposed distribution. By updating the global importance sampling estimator in an iterative framework, MIS achieves better efficiency and higher accuracy than traditional importance sampling methods.
Experimental results validate that the proposed approaches are three orders of magnitude faster than Monte Carlo, and more accurate than both academic solutions, such as importance sampling and classification-based methods, and industrial solutions, such as the mixture importance sampling used by Intel.
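A minimal sketch in the spirit of the AIS idea above, not the thesis's algorithm: a Gaussian proposal is iteratively re-centered on observed failures, and each sample is weighted by the target-to-proposal density ratio; the one-dimensional performance model, the wide initial proposal, and the failure threshold are assumptions.

```python
# A minimal sketch in the spirit of adaptive importance sampling: re-center a
# Gaussian proposal on observed failures and weight each sample by the
# target/proposal density ratio. The 1-D model and threshold are assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
fail = lambda x: x > 4.5          # rare failure event under the N(0,1) target

mu, sigma = 0.0, 2.0              # wide initial proposal so failures are seen
for it in range(4):
    x = rng.normal(mu, sigma, size=20_000)              # draw from proposal
    f = fail(x)
    w = norm.pdf(x) / norm.pdf(x, loc=mu, scale=sigma)  # importance weights
    p_hat = np.mean(w * f)                              # failure estimate
    if f.any():                                         # re-center on failures
        mu = np.average(x[f], weights=w[f])
    print(f"iter {it}: proposal mean {mu:.2f}, estimate {p_hat:.2e}")

print(f"exact tail probability: {norm.sf(4.5):.2e}")
```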
Algorithms for Verification of Analog and Mixed-Signal Integrated Circuits
Over the past few decades, the tremendous growth in the complexity of analog and mixed-signal (AMS) systems has posed great challenges to AMS verification, resulting in a rapidly growing verification gap. Existing formal methods provide appealing completeness and reliability, yet they suffer from limited efficiency and scalability. Data-oriented, machine-learning-based methods offer efficient and scalable solutions but do not guarantee completeness or full coverage. Additionally, the trend towards shorter time to market for AMS chips urges the development of efficient verification algorithms to accelerate the joint design and testing phases.
This dissertation envisions a hierarchical and hybrid AMS verification framework by consolidating assorted algorithms to embrace efficiency, scalability and completeness in a statistical sense. Leveraging diverse advantages from various verification techniques, this dissertation develops algorithms in different categories.
In the context of formal methods, this dissertation proposes a generic and comprehensive model abstraction paradigm to model AMS content with a unifying analog representation. Moreover, an algorithm is proposed to parallelize reachability analysis by decomposing AMS systems into subsystems with lower complexity, and dividing the circuit's reachable state space exploration, which is formulated as a satisfiability problem, into subproblems with a reduced number of constraints. The proposed modeling method and the hierarchical parallelization enhance the efficiency and scalability of reachability analysis for AMS verification.
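As a toy illustration of casting reachable-state exploration as a satisfiability problem, covering only one fragment of the approach above, the sketch below encodes a single forward-Euler step of an RC-style node and asks an SMT solver whether an over-voltage state is reachable; the dynamics, bounds, threshold, and the choice of z3 are assumptions.

```python
# A toy illustration of reachability as satisfiability, one fragment of the
# approach above rather than the dissertation's algorithm. The forward-Euler
# RC update, the bounds, the 1.2 V threshold, and z3 itself are assumptions.
from z3 import Real, Solver, sat

v0, v1, u = Real("v0"), Real("v1"), Real("u")
tau, dt = 1.0, 0.1

s = Solver()
s.add(v0 >= 0, v0 <= 1)                  # initial state set (V)
s.add(u >= 0, u <= 5)                    # admissible input range (V)
s.add(v1 == v0 + (dt / tau) * (u - v0))  # one forward-Euler step of an RC node
s.add(v1 > 1.2)                          # unsafe region: over-voltage

if s.check() == sat:
    print("unsafe state reachable in one step:", s.model())
else:
    print("safe for all admissible initial states and inputs")
```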
On the subject of learning-based methods, the dissertation proposes to convert the verification problem into a binary classification problem solved using support vector machine (SVM) based learning algorithms. To reduce the need for simulations when collecting training samples, an active learning strategy based on probabilistic version-space reduction is proposed to perform adaptive sampling. An extension of the active learning strategy to conservative prediction is leveraged to minimize the occurrence of false negatives.
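A minimal sketch of this verification-as-classification idea, under substitutions that should be read as assumptions: a toy pass/fail rule replaces the AMS simulator, and plain uncertainty sampling (querying the pool point nearest the SVM decision boundary) replaces the probabilistic version-space reduction strategy.

```python
# A minimal sketch of verification-as-classification with uncertainty-based
# active learning. A toy pass/fail rule stands in for the AMS simulator, and
# querying the point nearest the SVM boundary stands in for the probabilistic
# version-space reduction strategy; both substitutions are assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
simulate = lambda p: (p[:, 0] ** 2 + p[:, 1] ** 2 < 1.5).astype(int)  # toy spec

# candidate parameter settings; seed with one known pass and one known fail
pool = np.vstack([[0.0, 0.0], [1.9, 1.9], rng.uniform(-2, 2, size=(2000, 2))])
labeled = [0, 1]

for _ in range(30):
    clf = SVC(kernel="rbf", gamma="scale").fit(pool[labeled],
                                               simulate(pool[labeled]))
    margins = np.abs(clf.decision_function(pool))  # distance to boundary
    margins[labeled] = np.inf                      # skip already-simulated points
    labeled.append(int(np.argmin(margins)))        # query most uncertain point

acc = np.mean(clf.predict(pool) == simulate(pool))
print(f"pass/fail boundary accuracy after {len(labeled)} simulations: {acc:.3f}")
```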
Moreover, another learning-based method is proposed to characterize AMS systems with a sparse Bayesian learning regression model. An implicit feature weighting mechanism based on the kernel method is embedded in the Bayesian learning model for concurrent quantification of the influence of circuit parameters on the targeted specification, which can be solved efficiently with an iterative method similar to the expectation-maximization (EM) algorithm. In addition, the resulting sparse parameter weighting assists design analysis and test optimization.
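The sparse Bayesian learning regression can be illustrated with a minimal automatic-relevance-determination (ARD) loop in the style of Tipping's fixed-point updates; plain linear features replace the dissertation's kernel-based feature weighting, and the toy data, known noise precision, and iteration count are assumptions.

```python
# A minimal sparse Bayesian learning (ARD) regression sketch in the style of
# Tipping's fixed-point updates. Plain linear features replace the kernel-based
# feature weighting described above; the data and settings are assumptions.
import numpy as np

rng = np.random.default_rng(5)
n, d = 100, 10
Phi = rng.standard_normal((n, d))                 # design matrix (features)
w_true = np.zeros(d)
w_true[[1, 4]] = [2.0, -1.5]                      # only two relevant parameters
y = Phi @ w_true + 0.1 * rng.standard_normal(n)

alpha = np.ones(d)                                # per-weight prior precisions
beta = 100.0                                      # noise precision (known here)
for _ in range(50):
    S = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)  # posterior cov
    m = beta * S @ Phi.T @ y                                 # posterior mean
    gamma = 1 - alpha * np.diag(S)                # effective degrees of freedom
    alpha = gamma / (m ** 2 + 1e-12)              # ARD re-estimate; a large
                                                  # alpha prunes a weight to 0

print("posterior mean weights:", np.round(m, 2))  # near-zero except indices 1, 4
```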
AI/ML Algorithms and Applications in VLSI Design and Technology
An evident challenge ahead for the integrated circuit (IC) industry in the
nanometer regime is the investigation and development of methods that can
reduce the design complexity ensuing from growing process variations and
curtail the turnaround time of chip manufacturing. Conventional methodologies
employed for such tasks are largely manual; thus, time-consuming and
resource-intensive. In contrast, the unique learning strategies of artificial
intelligence (AI) provide numerous exciting automated approaches for handling
complex and data-intensive tasks in very-large-scale integration (VLSI) design
and testing. Employing AI and machine learning (ML) algorithms in VLSI design
and manufacturing reduces the time and effort for understanding and processing
the data within and across different abstraction levels via automated learning
algorithms. This, in turn, improves IC yield and reduces the manufacturing
turnaround time. The paper thoroughly reviews AI/ML-based automated
approaches introduced for VLSI design and manufacturing. Moreover, we
discuss the scope of future AI/ML applications at various abstraction
levels to revolutionize the field of VLSI design, aiming for high-speed, highly
intelligent, and efficient implementations.