
    REDUCING POWER DURING MANUFACTURING TEST USING DIFFERENT ARCHITECTURES

    Power during manufacturing test can be several times higher than power consumption in functional mode. Excessive power during test can cause IR drop, overheating, and early aging of the chips. In this dissertation, three architectures are introduced to reduce test power in general cases as well as in certain scenarios, including field test. In the first architecture, scan chains are divided into several segments. Each segment has a control bit that enables capture only when new faults are detectable on that segment for the current pattern; otherwise, the segment is disabled to reduce capture power. We group the control bits together into one or more control chains. To address the extra pin(s) required to shift data into the control chain(s) and the significant post-processing in the first architecture, we explored a second architecture, which stitches the control bits into the chains they control as EECBs (embedded enable capture bits) between the segments. This allows an ATPG software tool to automatically generate the appropriate EECB values for each pattern to maintain the fault coverage, and it also works in the presence of an on-chip decompressor. The last architecture focuses primarily on the self-test of a device in a 3D stacked IC when an existing FPGA in the stack can be programmed as a tester. We show that the energy expended during test is significantly less than would be required using low-power patterns fed by an on-chip decompressor for the same very short scan chains.
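
    As a minimal sketch of the per-pattern gating idea described above (the segment list, fault bookkeeping, and detects() predicate are hypothetical placeholders, not the dissertation's tool flow), the control-bit computation might look like this:

        # Sketch: compute one capture-enable bit per scan segment for a pattern.
        # A segment captures only if the pattern detects at least one
        # still-undetected fault observable on that segment.
        def capture_enables(pattern, segments, undetected, detects):
            enables = []
            for seg in segments:
                hits = any(detects(pattern, f) for f in undetected[seg])
                enables.append(1 if hits else 0)  # 0 disables capture, saving power
            return enables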

    KAVUAKA: a low-power application-specific processor architecture for digital hearing aids

    The power consumption of digital hearing aids is severely restricted by their small physical size, and the hardware resources available for signal processing are limited. However, there is a demand for more processing performance to make future hearing aids more useful and smarter. Future hearing aids should be able to detect, localize, and recognize target speakers in complex acoustic environments to further improve the speech intelligibility of the individual hearing aid user. Computationally intensive algorithms are required for this task. To maintain acceptable battery life, the hearing aid processing architecture must be highly optimized for extremely low power consumption and high processing performance. The integration of application-specific instruction-set processors (ASIPs) into hearing aids enables a wide range of architectural customizations to meet the stringent power consumption and performance requirements. In this thesis, the application-specific hearing aid processor KAVUAKA is presented, which is customized and optimized for state-of-the-art hearing aid algorithms such as speaker localization, noise reduction, beamforming, and speech recognition. Specialized and application-specific instructions are designed and added to the baseline instruction set architecture (ISA). Among the major contributions are a multiply-accumulate (MAC) unit for real- and complex-valued numbers, architectures for power reduction during register accesses, co-processors, and a low-latency audio interface. With the proposed MAC architecture, the KAVUAKA processor requires 16 % fewer cycles for the computation of a 128-point fast Fourier transform (FFT) compared to related programmable digital signal processors. The power consumption during register file accesses is decreased by 6 % to 17 % with isolation and bypass techniques. The hardware-induced audio latency is 34 % lower compared to related audio interfaces for a frame size of 64 samples. The final hearing aid system-on-chip (SoC) with four KAVUAKA processor cores and ten co-processors is integrated as an application-specific integrated circuit (ASIC) using a 40 nm low-power technology. The die size is 3.6 mm². Each of the processors and co-processors contains individual customizations and hardware features, with datapath widths ranging from 24 bit to 64 bit. The core area of the 64-bit processor configuration is 0.134 mm². The processors are organized in two clusters that share memory, an audio interface, co-processors, and serial interfaces. The average power consumption at a clock speed of 10 MHz is 2.4 mW for the SoC and 0.6 mW for the 64-bit processor. Case studies with four reference hearing aid algorithms are used to present and evaluate the proposed hardware architectures and optimizations. The program code for each processor and co-processor is generated and optimized with evolutionary algorithms for operation merging, instruction scheduling, and register allocation. The KAVUAKA processor architecture is compared to related processor architectures in terms of processing performance, average power consumption, and silicon area requirements.
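
    To illustrate the kind of operation such a complex-valued MAC unit fuses (the operand naming and this software model are assumptions for illustration, not the KAVUAKA ISA), the inner step of an FFT butterfly reduces to:

        # Sketch: complex multiply-accumulate, acc += a * b, on (re, im) pairs.
        # A dedicated MAC instruction performs the four partial products and
        # both accumulations in one issue slot, which is where the 128-point
        # FFT cycle savings come from.
        def cmac(acc, a, b):
            ar, ai = a
            br, bi = b
            return (acc[0] + ar * br - ai * bi,   # real part
                    acc[1] + ar * bi + ai * br)   # imaginary part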

    Integration of FAPEC as data compressor stage in a SpaceFibre link

    SpaceFibre is a new technology for use onboard spacecraft that provides point-to-point and networked interconnections at 3.125 Gbit/s in flight-qualified technology. SpaceFibre is a European Space Agency (ESA) initiative and will substitute the ubiquitous SpaceWire for high-speed applications in space. FAPEC is a lossless data compression algorithm that typically offers better ratios than the CCSDS 121.0 Lossless Data Compression Recommendation on realistic data sets. FAPEC was designed for space communications, where requirements are very strong in terms of energy consumption and efficiency. In this project we have demonstrated that FAPEC can be easily integrated on top of SpaceFibre to reduce the amount of information that the spacecraft network has to deal with. The integration of FAPEC with SpaceFibre has been successfully validated on a representative FPGA platform. In the developed design, FAPEC operated at ~12 Msamples/s (~200 Mbit/s) using a Xilinx Spartan-6, but it is expected to reach Gbit/s speeds with some additional work. The speed of the algorithm has been improved by a factor of 6 while the resource usage remains low, around 2 % of a Xilinx Virtex-5QV or a Microsemi RTG4. The combination of these two technologies can help to reduce the large amounts of data generated by some satellite instruments in a transparent way, without the need for user intervention, and to provide a solution to the increasing data volumes in spacecraft. Consequently, the combination of FAPEC with SpaceFibre can help to save mass and power consumption and to reduce system complexity.
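
    A quick sanity check relates the two quoted rates (the 16-bit sample width is an assumption; the abstract states only the sample and bit rates):

        # Sketch: relate the compressor's sample rate to the SpaceFibre link load.
        samples_per_s = 12e6            # ~12 Msamples/s from the FPGA design
        bits_per_sample = 16            # assumed sample width
        link_rate = 3.125e9             # SpaceFibre lane rate, bit/s
        raw_rate = samples_per_s * bits_per_sample   # ~192 Mbit/s, i.e. ~200 Mbit/s
        print(f"{raw_rate / 1e6:.0f} Mbit/s, "
              f"{raw_rate / link_rate:.1%} of one SpaceFibre lane")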

    Asynchronous Data Processing Platforms for Energy Efficiency, Performance, and Scalability

    The global technology revolution is changing the integrated circuit industry from one driven by performance to one driven by energy, scalability, and more balanced design goals. Free of clock-related issues, asynchronous circuits enable further design trade-offs and adaptive in-operation adjustments for energy efficiency. This dissertation presents a design methodology for asynchronous circuits using NULL Convention Logic (NCL) and multi-threshold CMOS techniques for energy efficiency and throughput optimization in digital signal processing circuits. Parallel homogeneous and heterogeneous platforms implementing adaptive dynamic voltage scaling (DVS), based on observation of system fullness and on workload prediction, are developed for balanced control of performance and energy efficiency. Datapath control logic with NULL Cycle Reduction (NCR) and an arbitration network are incorporated in the heterogeneous platform for large-scale cascading. The platforms have been integrated with the data processing units using the IBM 130 nm 8RF process and fabricated using the MITLL 90 nm FDSOI process. Simulation and physical testing results show the energy efficiency advantage of asynchronous designs and the effectiveness of the adaptive DVS mechanism in balancing energy and performance in both platforms.
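
    A minimal sketch of fullness-driven DVS in the spirit described above (the thresholds, voltage levels, and blended predictor are invented for illustration, not taken from the dissertation):

        # Sketch: pick a supply level from FIFO occupancy plus a workload forecast.
        # A filling FIFO (or a predicted burst) means the pipeline is falling
        # behind, so step the voltage up; an emptying FIFO lets it step down.
        def select_voltage(fifo_fill, predicted_load, levels=(0.6, 0.8, 1.0, 1.2)):
            pressure = 0.7 * fifo_fill + 0.3 * predicted_load  # both in [0, 1]
            idx = min(int(pressure * len(levels)), len(levels) - 1)
            return levels[idx]  # supply voltage in volts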

    Decomposition and encoding of finite state machines for FPGA implementation

    xii + 187 pages; 24 cm.

    Testability and redundancy techniques for improved yield and reliability of CMOS VLSI circuits

    The research presented in this thesis is concerned with the design of fault-tolerant integrated circuits as a contribution to the design of fault-tolerant systems. The economical manufacture of very large area ICs will necessitate the incorporation of fault-tolerance features of the kind routinely employed in current high-density dynamic random access memories. Furthermore, the growing use of ICs in safety-critical applications and/or hostile environments, in addition to the prospect of single-chip systems, will mandate the use of fault tolerance for improved reliability. A fault-tolerant IC must be able to detect and correct all possible faults that may affect its operation. The ability of a chip to detect its own faults is not only necessary for fault tolerance, but is also regarded as the ultimate solution to the problem of testing. Off-line periodic testing is selected for this research because it achieves better coverage of physical faults and requires less extra hardware than on-line error detection techniques. Tests for CMOS stuck-open faults are shown to detect all other faults. Simple test sequence generation procedures for the detection of all faults are derived. The test sequences generated by these procedures produce a trivial output, thereby greatly simplifying the task of test response analysis. A further advantage of the proposed test generation procedures is that they do not require the enumeration of faults. The implementation of built-in self-test is considered, and it is shown that the hardware overhead is comparable to that associated with pseudo-random and pseudo-exhaustive techniques while achieving a much higher fault coverage through the use of the proposed test generation procedures. Consideration of the problem of testing the test circuitry led to the conclusion that complete test coverage may be achieved if separate chips cooperate in testing each other's untested parts. An alternative approach towards complete test coverage would be to design the test circuitry so that it is as distributed as possible and is tested as it performs its function. Fault correction relies on the provision of spare units and a means of reconfiguring the circuit so that the faulty units are discarded. This raises the question of the optimum size of a unit. A mathematical model linking yield and reliability is therefore developed to answer this question and to study the effects of parameters such as the amount of redundancy, the size of the additional circuitry required for testing and reconfiguration, and the effect of periodic testing on reliability. The stringent requirement on the size of the reconfiguration logic is illustrated by applying the model to a typical example. Another important result concerns the effect of periodic testing on reliability: it is shown that periodic off-line testing can achieve approximately the same level of reliability as on-line testing, even when the time between tests is many hundreds of hours.
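
    As a textbook-style illustration of the yield/redundancy trade-off such a model captures (the thesis's actual equations are not reproduced here; the unit counts and yields below are invented), consider n identical units of which m must work:

        # Sketch: chip yield with redundant units and unreplicated overhead logic.
        from math import comb

        def chip_yield(n, m, y_unit, y_overhead):
            # P(at least m of n units good) * P(test/reconfiguration logic good)
            good = sum(comb(n, k) * y_unit**k * (1 - y_unit)**(n - k)
                       for k in range(m, n + 1))
            return good * y_overhead

        # One spare unit helps only while the test-and-reconfiguration overhead
        # stays small enough to keep y_overhead high.
        print(chip_yield(n=11, m=10, y_unit=0.90, y_overhead=0.98))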

    Combinational Circuit Obfuscation through Power Signature Manipulation

    Today's military systems are composed of hardware and software systems, many of which are critical technologies and must be protected so that adversaries cannot gain information through various analysis attacks. Side Channel Analysis (SCA) attacks allow an attacker to extract significant information from the measured signatures leaked by side channels such as power consumption and electromagnetic emission. This research focuses on detecting, characterizing, and manipulating the power signature by designing a power signature estimation and manipulation method. It demonstrates that the proposed method, capable of characterizing and altering the power signature, can provide protection against adversarial SCA attacks.
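
    A common first-order power model from the SCA literature gives a flavor of what such an estimator works with (a sketch under that standard assumption, not the thesis's method):

        # Sketch: Hamming-distance proxy for one cycle's dynamic power.
        # Signature manipulation aims to make this quantity, and hence the
        # measured trace, independent of the processed data (e.g. by adding
        # complementary switching).
        def estimated_power(prev_state: int, next_state: int) -> int:
            return bin(prev_state ^ next_state).count("1")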