
    Tensor Computation: A New Framework for High-Dimensional Problems in EDA

    Many critical EDA problems suffer from the curse of dimensionality, i.e. the very fast-scaling computational burden produced by a large number of parameters and/or unknown variables. This phenomenon may be caused by multiple spatial or temporal factors (e.g. discretizations in 3-D field solvers and multi-rate circuit simulation), nonlinearity of devices and circuits, a large number of design or optimization parameters (e.g. full-chip routing/placement and circuit sizing), or extensive process variations (e.g. variability/reliability analysis and design for manufacturability). The computational challenges generated by such high-dimensional problems are generally hard to handle efficiently with traditional EDA core algorithms based on matrix and vector computation. This paper presents "tensor computation" as an alternative general framework for the development of efficient EDA algorithms and tools. A tensor is a high-dimensional generalization of a matrix and a vector, and is a natural choice for both storing and efficiently solving high-dimensional EDA problems. This paper gives a basic tutorial on tensors, demonstrates some recent examples of EDA applications (e.g., nonlinear circuit modeling and high-dimensional uncertainty quantification), and suggests further open EDA problems where tensor computation could be advantageous.
    Comment: 14 figures. Accepted by IEEE Trans. CAD of Integrated Circuits and Systems.
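    As a minimal point of reference for the compression idea behind tensor formats (an illustrative sketch, not an algorithm from the paper; the sizes and the rank-1 model are assumptions), the snippet below approximates a 3-way NumPy array by a rank-1 outer product using alternating least-squares updates, so that 90 stored numbers stand in for 24,000 entries.

        # Illustrative sketch: rank-1 CP approximation of a 3-way tensor via
        # alternating least-squares updates (hypothetical sizes, not from the paper).
        import numpy as np

        rng = np.random.default_rng(0)

        # Build a 3-way tensor that is exactly rank-1 plus a little noise.
        a0, b0, c0 = rng.normal(size=20), rng.normal(size=30), rng.normal(size=40)
        T = np.einsum('i,j,k->ijk', a0, b0, c0) + 1e-3 * rng.normal(size=(20, 30, 40))

        # Alternating updates for the rank-1 model T ~ a (outer) b (outer) c.
        a, b, c = rng.normal(size=20), rng.normal(size=30), rng.normal(size=40)
        for _ in range(50):
            a = np.einsum('ijk,j,k->i', T, b, c) / ((b @ b) * (c @ c))
            b = np.einsum('ijk,i,k->j', T, a, c) / ((a @ a) * (c @ c))
            c = np.einsum('ijk,i,j->k', T, a, b) / ((a @ a) * (b @ b))

        approx = np.einsum('i,j,k->ijk', a, b, c)
        print(np.linalg.norm(T - approx) / np.linalg.norm(T))   # small relative error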

    Efficient DSP and Circuit Architectures for Massive MIMO: State-of-the-Art and Future Directions

    Massive MIMO is a compelling wireless access concept that relies on the use of an excess number of base-station antennas relative to the number of active terminals. This technology is a main component of 5G New Radio (NR) and addresses all important requirements of future wireless standards: a great capacity increase, the support of many simultaneous users, and improvement in energy efficiency. Massive MIMO requires the simultaneous processing of signals from many antenna chains and computational operations on large matrices. The complexity of this digital processing was long viewed as a fundamental obstacle to the feasibility of Massive MIMO. Recent advances in system-algorithm-hardware co-design have led to extremely energy-efficient implementations. These exploit opportunities in deeply-scaled silicon technologies and perform partly distributed processing to cope with the bottlenecks encountered in the interconnection of many signals. For example, prototype ASIC implementations have demonstrated zero-forcing precoding in real time at a power consumption of 55 mW (20 MHz bandwidth, 128 antennas, multiplexing of 8 terminals). Coarse and even error-prone digital processing in the antenna paths permits a reduction of consumption by a factor of 2 to 5. This article summarizes the fundamental technical contributions to efficient digital signal processing for Massive MIMO. The opportunities and constraints of operating with low-complexity RF and analog hardware chains are clarified, and it is illustrated how terminals can benefit from improved energy efficiency. The status of the technology and real-life prototypes is discussed, and open challenges and directions for future research are suggested.
    Comment: submitted to IEEE Transactions on Signal Processing.
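    Because the abstract quotes real-time zero-forcing (ZF) precoding as its flagship example, the following sketch shows the core linear-algebra step: the ZF precoder is the right pseudo-inverse of the downlink channel matrix, so inter-user interference cancels. The 128-antenna / 8-terminal dimensions follow the figures above; the i.i.d. Rayleigh channel, the noise-free check, and the omitted power normalization are simplifying assumptions.

        # Illustrative zero-forcing precoding sketch (assumed i.i.d. Rayleigh channel;
        # dimensions follow the 128-antenna / 8-terminal example in the abstract).
        import numpy as np

        rng = np.random.default_rng(1)
        M, K = 128, 8        # base-station antennas, single-antenna terminals

        # Downlink channel H: K x M, one row per terminal.
        H = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)

        # Zero-forcing precoder: right pseudo-inverse of H, so that H @ W = I_K.
        W = H.conj().T @ np.linalg.inv(H @ H.conj().T)

        s = rng.normal(size=K) + 1j * rng.normal(size=K)   # data symbols, one per terminal
        x = W @ s                                          # signal fed to the M antennas
        y = H @ x                                          # received signals (noise-free)

        print(np.allclose(y, s))   # True: each terminal sees only its own symbol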

    Photonic integrated circuit design in a foundry+fabless ecosystem

    A foundry-based photonic ecosystem is expected to become necessary with the increasing demand for and adoption of photonics in commercial products. To make foundry-enabled photonics a real success, the photonic circuit design flow should adopt known concepts from analog and mixed-signal electronics. Based on the similarities and differences between the existing photonic design flow and the standardized electronics design flow, we project the needs and evolution of the photonic design flow, such as schematic-driven design, accurate behavioral models, and yield prediction in the presence of fabrication variability.
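    Since yield prediction under fabrication variability is named as a key capability, a minimal Monte Carlo sketch may help illustrate the idea. All numbers (width variation, linearized resonance sensitivity, spec window) are hypothetical and not taken from the paper.

        # Hypothetical Monte Carlo yield sketch: propagate waveguide-width variation
        # through a linearized behavioral model and count how often the device meets spec.
        import numpy as np

        rng = np.random.default_rng(2)
        N = 100_000

        dw = rng.normal(0.0, 2e-9, size=N)   # assumed width deviation, sigma = 2 nm [m]
        dlam = 1.0 * dw                      # assumed sensitivity: 1 nm resonance shift per nm of width
        spec = 3e-9                          # assumed spec: resonance within +/-3 nm of target [m]

        print(f"estimated yield: {np.mean(np.abs(dlam) < spec):.1%}")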

    Cellular neural networks, Navier-Stokes equation and microarray image reconstruction

    Although the last decade has witnessed a great deal of improvement in microarray technology, major developments are still needed in all the main stages of this technology, including image processing. Some hardware implementations of microarray image processing have been proposed in the literature and have proved to be promising alternatives to the currently available software systems. However, the main drawback of those approaches is that they do not address quantification of the gene spot in a realistic way, i.e., without any assumption about the image surface. Our aim in this paper is to present a new image-reconstruction algorithm using a cellular neural network that solves the Navier–Stokes equation. This algorithm offers a robust method for estimating the background signal within the gene-spot region. The MATCNN toolbox for Matlab is used to test the proposed method. Quantitative comparisons, in terms of objective criteria, are carried out between our approach and other available methods. It is shown that the proposed algorithm gives highly accurate and realistic measurements in a fully automated manner within a remarkably efficient time.
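    The paper's reconstruction runs on cellular-neural-network hardware; purely as a software point of reference (not the authors' method), OpenCV ships a Navier–Stokes-based inpainting routine that estimates the background under a masked gene-spot region in the same spirit. The synthetic patch below is an assumption for illustration.

        # Software reference (not the paper's CNN hardware method): estimate the background
        # under a gene-spot region with OpenCV's Navier-Stokes-based inpainting.
        import numpy as np
        import cv2

        rng = np.random.default_rng(3)
        yy, xx = np.mgrid[0:64, 0:64]

        # Synthetic single-channel patch: smooth background plus one bright spot.
        background = 60 + 0.5 * xx + 0.3 * yy
        spot = 120 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 60.0)
        image = np.clip(background + spot + rng.normal(0, 2, (64, 64)), 0, 255).astype(np.uint8)

        # Mask covering the gene-spot region to be reconstructed from its surroundings.
        mask = (((xx - 32) ** 2 + (yy - 32) ** 2) < 15 ** 2).astype(np.uint8)
        reconstructed = cv2.inpaint(image, mask, 5, cv2.INPAINT_NS)

        # Spot intensity estimate = observed signal minus reconstructed background.
        signal = image.astype(float) - reconstructed.astype(float)
        print(f"mean spot signal: {signal[mask == 1].mean():.1f}")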

    Calculation of Generalized Polynomial-Chaos Basis Functions and Gauss Quadrature Rules in Hierarchical Uncertainty Quantification

    Stochastic spectral methods are efficient techniques for uncertainty quantification. Recently they have shown excellent performance in the statistical analysis of integrated circuits. In stochastic spectral methods, one needs to determine a set of orthonormal polynomials and a proper numerical quadrature rule. The former are used as the basis functions in a generalized polynomial-chaos expansion; the latter is used to compute the integrals involved in stochastic spectral methods. Obtaining such information requires knowing the density function of the random input a priori. However, individual system components are often described by surrogate models rather than density functions. In order to apply stochastic spectral methods in hierarchical uncertainty quantification, we first propose to construct physically consistent closed-form density functions by two monotone interpolation schemes. Then, by exploiting the special forms of the obtained density functions, we determine the generalized polynomial-chaos basis functions and the Gauss quadrature rules that are required by a stochastic spectral simulator. The effectiveness of the proposed algorithm is verified by both synthetic and practical circuit examples.
    Comment: Published by IEEE Trans. CAD in May 201
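    As general background for the Gauss quadrature step mentioned above (a textbook construction, not the paper's specific algorithm), the sketch below applies the Golub-Welsch procedure: once the three-term recurrence coefficients of the orthogonal polynomials for a density are known, the quadrature nodes and weights follow from a small symmetric eigenvalue problem. The standard Gaussian density is assumed here because its recurrence coefficients are available in closed form.

        # Background sketch: Golub-Welsch construction of a Gauss quadrature rule from
        # three-term recurrence coefficients (standard Gaussian density, i.e. the
        # probabilists' Hermite polynomials; not the paper's specific algorithm).
        import numpy as np

        def gauss_rule(alpha, beta, beta0):
            """Nodes/weights for the monic recurrence p_{k+1} = (x - alpha_k) p_k - beta_k p_{k-1}."""
            J = np.diag(alpha) + np.diag(np.sqrt(beta[1:]), 1) + np.diag(np.sqrt(beta[1:]), -1)
            nodes, vecs = np.linalg.eigh(J)        # eigenvalues of the Jacobi matrix = nodes
            weights = beta0 * vecs[0, :] ** 2      # weights from the first eigenvector components
            return nodes, weights

        n = 5
        alpha = np.zeros(n)                 # Gaussian density: alpha_k = 0
        beta = np.arange(n, dtype=float)    # beta_k = k; only beta_1..beta_{n-1} enter the Jacobi matrix
        nodes, weights = gauss_rule(alpha, beta, beta0=1.0)   # beta0 = total mass of the density

        # The 5-point rule integrates polynomials up to degree 9 exactly against the density:
        print(np.sum(weights * nodes ** 4))   # ~3.0 = E[x^4] for a standard normal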

    Aging-Aware Design Methods for Reliable Analog Integrated Circuits using Operating Point-Dependent Degradation

    The focus of this thesis is on the development and implementation of aging-aware design methods that are suitable to satisfy the current needs of analog circuit design. Based on the well-known gm/ID sizing methodology, an innovative tool-assisted aging-aware design approach is proposed, which is able to estimate shifts in circuit characteristics using mostly hand-calculation schemes. The developed concept of operating point-dependent degradation leads to the definition of an aging-aware sensitivity, which is compared to currently available degradation simulation flows and proves to be efficient in the estimation of circuit degradation. Using the aging-aware sensitivity, several analog circuits are investigated and optimized towards higher reliability. Finally, results are presented for numerous target specifications.
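    As an illustration of a sensitivity-based, operating-point-dependent estimate of this kind (a generic first-order sketch with hypothetical numbers, not the thesis's tool flow), a shift in a circuit characteristic can be approximated by summing its sensitivities to device parameters, evaluated at the nominal operating point, multiplied by the aging-induced parameter drifts.

        # Generic first-order sketch (hypothetical numbers, not the thesis's tool flow):
        # estimate the shift of a circuit characteristic from aging-induced parameter
        # drifts via sensitivities evaluated at the operating point.
        drift = {"Vth_M1": +30e-3,    # assumed threshold-voltage shift after stress [V]
                 "Vth_M2": +12e-3,
                 "beta_M1": -0.02}    # assumed relative transconductance-factor degradation

        # Assumed sensitivities of the DC gain, evaluated at the nominal operating point
        # (e.g. obtained from a gm/ID lookup table or an AC sweep).
        sensitivity = {"Vth_M1": -45.0,   # dB per volt
                       "Vth_M2": -8.0,
                       "beta_M1": +20.0}  # dB per relative change

        delta_gain_db = sum(sensitivity[p] * drift[p] for p in drift)
        print(f"estimated DC-gain shift after aging: {delta_gain_db:+.2f} dB")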