    Analysis and application of digital spectral warping in analog and mixed-signal testing

    Spectral warping is a digital signal processing transform that shifts the frequencies contained within a signal along the frequency axis. The Fourier transform coefficients of a warped signal correspond to frequency-domain 'samples' of the original signal that are unevenly spaced along the frequency axis. This property allows the technique to be used efficiently for DSP-based analog and mixed-signal testing. The analysis and application of spectral warping for test signal generation, response analysis, filter design, frequency response evaluation, and related tasks are discussed in this paper, along with examples of software and hardware implementations.
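
    As a rough illustration of the unevenly spaced frequency-domain 'samples' described above, the following sketch (not from the paper) maps uniformly spaced DFT bin frequencies in the warped domain back onto a non-uniform grid on the original frequency axis through a first-order allpass section, one common way of realizing spectral warping; the warping coefficient lam is an arbitrary assumption.

        # A minimal sketch, not from the paper: first-order allpass frequency warping.
        # Uniformly spaced DFT bins in the warped domain correspond to the
        # non-uniformly spaced original frequencies printed at the end.
        import numpy as np

        def warp_frequency(omega, lam):
            """Negated phase of the allpass A(z) = (z**-1 - lam) / (1 - lam*z**-1);
            for lam = 0 this reduces to the identity map omega -> omega."""
            z = np.exp(1j * omega)
            return -np.angle((1 / z - lam) / (1 - lam / z))

        lam = 0.5                                       # warping coefficient (assumed value)
        N = 16                                          # number of bins
        bins = np.linspace(0, np.pi, N, endpoint=False) # uniform bin frequencies over [0, pi)
        print(np.round(warp_frequency(bins, lam), 3))   # non-uniform original frequencies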

    Progress of analog-hybrid computation

    A review of fast analog/hybrid computer systems, integrated operational amplifiers, electronic mode-control switches, digital attenuators, and packaging techniques.

    A Novel (DDCC-SFG)-Based Systematic Design Technique of Active Filters

    This paper proposes a novel approach to the synthesis of active filters based on signal-flow graph (SFG) stamps of differential difference current conveyors (DDCCs). Starting from an RLC passive network or a symbolic filter transfer function, an equivalent SFG is constructed. DDCC SFG stamps are then identified inside the constructed ‘active’ graph, so the equivalent circuit can be synthesized directly. We show that the DDCC and its ‘derivatives’, i.e. differential voltage current conveyors and conventional current conveyors, are the main building blocks in such a design. The practicability of the proposed technique is demonstrated through three application examples, and SPICE simulations confirm its viability.
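
    The core step above, turning a signal-flow graph into a realizable transfer function, can be mimicked symbolically. The sketch below (not from the paper) solves the node equations of an assumed two-integrator-loop biquad SFG rather than a DDCC-based graph, purely to illustrate how transfer functions are read off an SFG.

        # A minimal sketch, not from the paper: recovering transfer functions
        # from the node equations of an assumed two-integrator-loop biquad SFG.
        import sympy as sp

        s, Q = sp.symbols('s Q')
        Vin, Vbp, Vlp = sp.symbols('Vin Vbp Vlp')

        # Each integrator node sums its incoming branches and multiplies by 1/s.
        eqs = [
            sp.Eq(Vbp, (Vin - Vbp / Q - Vlp) / s),
            sp.Eq(Vlp, Vbp / s),
        ]
        sol = sp.solve(eqs, [Vbp, Vlp], dict=True)[0]

        print(sp.simplify(sol[Vlp] / Vin))  # lowpass, equivalent to 1/(s**2 + s/Q + 1)
        print(sp.simplify(sol[Vbp] / Vin))  # bandpass, equivalent to s/(s**2 + s/Q + 1)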

    Tensor Computation: A New Framework for High-Dimensional Problems in EDA

    Many critical EDA problems suffer from the curse of dimensionality, i.e. the very fast-scaling computational burden produced by a large number of parameters and/or unknown variables. This phenomenon may be caused by multiple spatial or temporal factors (e.g. 3-D field-solver discretizations and multi-rate circuit simulation), nonlinearity of devices and circuits, a large number of design or optimization parameters (e.g. full-chip routing/placement and circuit sizing), or extensive process variations (e.g. variability/reliability analysis and design for manufacturability). The computational challenges generated by such high-dimensional problems are generally hard to handle efficiently with traditional EDA core algorithms based on matrix and vector computation. This paper presents "tensor computation" as an alternative general framework for the development of efficient EDA algorithms and tools. A tensor is a high-dimensional generalization of a matrix and a vector, and is a natural choice for efficiently storing and solving high-dimensional EDA problems. This paper gives a basic tutorial on tensors, demonstrates some recent examples of EDA applications (e.g., nonlinear circuit modeling and high-dimensional uncertainty quantification), and suggests further open EDA problems where tensor computation could be advantageous. (Comment: 14 figures. Accepted by IEEE Trans. CAD of Integrated Circuits and Systems.)
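
    To make the storage argument concrete, the sketch below (not from the paper) builds a 3-way array from a rank-R CP model and compares the entry count of the full grid with that of the factor matrices; the sizes and rank are arbitrary assumptions.

        # A minimal sketch, not from the paper: CP (rank-R) storage of a 3-way
        # data set versus the full grid. Sizes and rank are illustrative only.
        import numpy as np

        I, J, K, R = 50, 60, 70, 4               # grid sizes and CP rank (assumed)
        A = np.random.rand(I, R)                 # factor matrices of the CP model
        B = np.random.rand(J, R)
        C = np.random.rand(K, R)

        # Full tensor T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
        T = np.einsum('ir,jr,kr->ijk', A, B, C)

        print(T.size)                            # 210000 entries on the full grid
        print(A.size + B.size + C.size)          # 720 entries in CP form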

    Memory and information processing in neuromorphic systems

    A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. As Information and Communication Technologies continue to address the need for increased computational power by increasing the number of cores within a digital processor, neuromorphic engineers and scientists can complement this approach by building processor architectures in which memory is distributed with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems that implement more biologically realistic models of neurons and synapses together with a suite of adaptation and learning mechanisms analogous to those found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems. (Comment: Submitted to Proceedings of the IEEE; a review of recently proposed neuromorphic computing platforms and systems.)
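
    As an illustration of the kind of simple spiking-neuron model that such mixed analog/digital processors typically implement, the sketch below (not from the survey) simulates a discrete-time leaky integrate-and-fire neuron; all constants are illustrative assumptions.

        # A minimal sketch, not from the survey: a discrete-time leaky
        # integrate-and-fire neuron. All constants are illustrative assumptions.
        import numpy as np

        def lif(input_current, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
            """Return the membrane trace and spike times for an input-current array."""
            v, spikes, trace = 0.0, [], []
            for t, i_in in enumerate(input_current):
                v += dt / tau * (-v + i_in)   # leaky integration toward the input
                if v >= v_thresh:             # threshold crossing emits a spike
                    spikes.append(t * dt)
                    v = v_reset               # reset the membrane after the spike
                trace.append(v)
            return np.array(trace), spikes

        trace, spikes = lif(np.full(500, 1.5))  # constant input of 1.5 for 0.5 s
        print(len(spikes), "spikes")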

    Innovative teaching of IC design and manufacture using the Superchip platform

    In this paper we describe how an intelligent chip architecture has allowed a large cohort of undergraduate students to gain effective practical insight into IC design by designing and manufacturing their own ICs. To achieve this, an efficient chip architecture, the “Superchip”, has been developed, which allows multiple student designs to be fabricated on a single IC and encapsulated in a standard package without excessive cost in terms of time or resources. We demonstrate how the practical process has been tightly coupled with the theoretical aspects of the degree course and how transferable skills are incorporated into the design exercise. Furthermore, the students are introduced at an early stage to team working, real deadlines, and collaborative report writing. This paper provides details of the teaching rationale, design exercise overview, design process, chip architecture, and test regime.