
    Power Minimisation Techniques for Testing Low Power VLSI Circuits (PhD Dissertation)

    Testing low power very large scale integrated (VLSI) circuits has recently become an area of concern due to yield and reliability problems. This dissertation focuses on minimising power dissipation during test application at the logic level and register-transfer level (RTL) of abstraction of the VLSI design flow.

    The first part of this dissertation addresses power minimisation techniques in scan sequential circuits at the logic level of abstraction. A new best primary input change (BPIC) technique based on a novel test application strategy is proposed. The technique increases the correlation between successive states during shifting in test vectors and shifting out test responses by changing the primary inputs such that the smallest number of transitions is achieved. The new technique is test set dependent and is applicable to small and medium sized full and partial scan sequential circuits. Since the proposed test application strategy depends only on controlling the primary input change time, power is minimised with no penalty in test area, performance, test efficiency, test application time or volume of test data. Furthermore, it is shown that partial scan provides not only the commonly known benefits, such as lower test area overhead and shorter test application time, but also lower power dissipation during test application when compared to full scan. To achieve power savings in large scan sequential circuits, a new test set independent multiple scan chain-based technique, which employs a new design for test (DFT) architecture and a novel test application strategy, is presented. The technique has been validated using benchmark examples, and it has been shown that power is minimised with low computational time, low overhead in test area and volume of test data, and with no penalty in test application time, test efficiency or performance.

    The second part of this dissertation addresses power minimisation techniques for testing low power VLSI circuits using built-in self-test (BIST) at RTL. First, it is important to overcome the shortcomings associated with traditional BIST methodologies. It is shown how a new BIST methodology for RTL data paths, using a novel concept called test compatibility classes (TCC), overcomes high test application time, BIST area overhead, performance degradation, volume of test data, fault-escape probability, and the complexity of testable design space exploration. Second, power minimisation in BIST RTL data paths is achieved by analysing the effect of test synthesis and test scheduling on power dissipation during test application and by employing new power-conscious test synthesis and test scheduling algorithms. Third, the new BIST methodology has been validated using benchmark examples. Further, it is shown that when the proposed power-conscious test synthesis and test scheduling is combined with the novel test compatibility classes, a simultaneous reduction in test application time and power dissipation is achieved with low overhead in computational time.
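    The cost model underlying this kind of power-aware scan testing can be illustrated with a small sketch: count the bit transitions between successive scan states and pick the primary-input assignment that minimises them. This is only a minimal illustration of the transition-counting idea, assuming a simplified string-based state model; the helper names and example values are not taken from the dissertation.

        # Minimal sketch: count bit flips between successive scan states and pick the
        # primary-input (PI) assignment that produces the fewest transitions.
        # The string-based state model and helper names are illustrative assumptions.

        def transitions(state_a: str, state_b: str) -> int:
            """Hamming distance between two circuit states."""
            return sum(a != b for a, b in zip(state_a, state_b))

        def total_shift_transitions(scan_states: list[str]) -> int:
            """Transitions accumulated while shifting through a state sequence."""
            return sum(transitions(s0, s1) for s0, s1 in zip(scan_states, scan_states[1:]))

        def best_primary_input_change(candidates: dict[str, list[str]]) -> str:
            """Return the PI assignment whose scan-state sequence flips the fewest bits."""
            return min(candidates, key=lambda pi: total_shift_transitions(candidates[pi]))

        # Two candidate PI assignments for the same test vector (toy example).
        candidates = {
            "00": ["0000", "0001", "0011", "0111"],  # 3 transitions in total
            "11": ["0000", "1001", "0011", "1111"],  # 6 transitions in total
        }
        print(best_primary_input_change(candidates))  # -> "00"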

    Echo Cancellation - A Likelihood Ratio Test for Double-talk Versus Channel Change

    Echo cancellers are in wide use in both electrical (four-wire to two-wire mismatch) and acoustic (speaker-microphone coupling) applications. One of the main design problems is the control logic for adaptation. Basically, the algorithm weights should be frozen in the presence of double-talk and adapt quickly in the absence of double-talk. The control logic can be quite complicated since it is often not easy to discriminate between the echo signal and the near-end speaker. This paper derives a log likelihood ratio test (LRT) for deciding between double-talk (freeze weights) and a channel change (adapt quickly) using a stationary Gaussian stochastic input signal model. The probability density function of a sufficient statistic under each hypothesis is obtained, and the performance of the test is evaluated as a function of the system parameters. The receiver operating characteristics (ROCs) indicate that it is difficult to decide correctly between double-talk and a channel change based upon a single look. However, post-detection integration of approximately one hundred sufficient-statistic samples yields a detection probability close to unity (0.99) with a small false alarm probability (0.01).
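    As an illustration of the decision rule described above, the following sketch computes a per-sample log likelihood ratio under two zero-mean Gaussian hypotheses and sums it over an observation window (post-detection integration). The variances, zero threshold, and window length are illustrative assumptions, not the paper's exact sufficient statistic or parameters.

        import numpy as np

        def gaussian_llr(x: np.ndarray, var0: float, var1: float) -> float:
            """Summed log likelihood ratio of zero-mean Gaussian H1 (var1) vs H0 (var0).

            Post-detection integration is simply the sum of per-sample LLRs over
            the observation window.
            """
            per_sample = 0.5 * np.log(var0 / var1) + 0.5 * x**2 * (1.0 / var0 - 1.0 / var1)
            return float(per_sample.sum())

        # Toy decision for a 100-sample window of residual-error samples.
        rng = np.random.default_rng(0)
        window = rng.normal(0.0, np.sqrt(2.0), size=100)
        llr = gaussian_llr(window, var0=1.0, var1=2.0)
        decision = "freeze adaptation (double-talk)" if llr > 0.0 else "adapt quickly (channel change)"
        print(round(llr, 2), decision)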

    MEMS Accelerometers

    Micro-electro-mechanical system (MEMS) devices are widely used for inertial, pressure, and ultrasound sensing applications. Research on integrated MEMS technology has undergone extensive development, driven by the requirements of a compact footprint, low cost, and increased functionality. Accelerometers are among the most widely used sensors implemented in MEMS technology. MEMS accelerometers are showing a growing presence in almost all industries, ranging from automotive to medical. A traditional MEMS accelerometer employs a proof mass suspended by springs, which displaces in response to an external acceleration. A single proof mass can be used for one- or multi-axis sensing. A variety of transduction mechanisms have been used to detect the displacement, including capacitive, piezoelectric, thermal, tunneling, and optical mechanisms. Capacitive accelerometers are widely used due to their DC measurement interface, thermal stability, reliability, and low cost. However, they are sensitive to electromagnetic field interference and perform poorly in high-end applications (e.g., precise attitude control for satellites). Over the past three decades, steady progress has been made in the area of optical accelerometers for high-performance and high-sensitivity applications, but several challenges, such as chip-scale integration, scaling, and limited bandwidth, remain to be tackled by researchers and engineers to fully realize opto-mechanical accelerometers.
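    The proof-mass-on-springs principle can be made concrete with a back-of-the-envelope model: quasi-static displacement x = m*a/k and the resulting parallel-plate capacitance change. The numerical values below are illustrative assumptions, not parameters of any specific device.

        # Back-of-the-envelope capacitive accelerometer model; all numbers are
        # illustrative assumptions, not parameters of a specific device.
        EPS0 = 8.854e-12  # vacuum permittivity, F/m

        def proof_mass_displacement(mass_kg: float, spring_k: float, accel_ms2: float) -> float:
            """Quasi-static displacement of the proof mass: x = m*a/k."""
            return mass_kg * accel_ms2 / spring_k

        def capacitance_change(area_m2: float, gap_m: float, x_m: float) -> float:
            """Parallel-plate capacitance change when the sense gap shrinks by x."""
            return EPS0 * area_m2 / (gap_m - x_m) - EPS0 * area_m2 / gap_m

        # 1 ug proof mass, 1 N/m spring, 1 g input, 100 um x 100 um plates, 2 um gap.
        x = proof_mass_displacement(mass_kg=1e-9, spring_k=1.0, accel_ms2=9.81)
        dC = capacitance_change(area_m2=1e-8, gap_m=2e-6, x_m=x)
        print(f"displacement = {x * 1e9:.1f} nm, capacitance change = {dC * 1e15:.2f} fF")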

    Entropy analysis of acoustic signals recorded with a smartphone for detecting apneas and hypopneas: A comparison with a commercial system for home sleep apnea diagnosis

    Obstructive sleep apnea (OSA) is a prevalent disease, but most patients remain undiagnosed and untreated. Here we propose analyzing smartphone audio signals for screening OSA patients at home. Our objectives were to: (1) develop an algorithm for detecting silence events and classifying them into apneas or hypopneas; (2) evaluate the performance of this system; and (3) compare the information provided with a type 3 portable sleep monitor, based mainly on nasal airflow. Overnight signals were acquired simultaneously by both systems in 13 subjects (3 healthy subjects and 10 OSA patients). The sample entropy of the audio signals was used to identify apnea/hypopnea events. The apnea-hypopnea indices predicted by the two systems presented a very high degree of concordance, and the smartphone correctly detected and stratified all the OSA patients. An event-by-event comparison demonstrated good agreement between silence events and apnea/hypopnea events in the reference system (sensitivity = 76%, positive predictive value = 82%). Most apneas were detected (89%), but fewer hypopneas were (61%). We observed that many hypopneas were accompanied by snoring, so there was no sound reduction. The apnea/hypopnea classification accuracy was 70%, but most discrepancies resulted from the inability of the nasal cannula of the reference device to record oral breathing. We provided a spectral characterization of oral and nasal breathing to correct this effect, and the classification accuracy increased to 82%. This novel knowledge from acoustic signals may be of great interest in clinical practice for developing new non-invasive techniques for screening and monitoring OSA patients at home.
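    The sample-entropy measure used to flag silence events can be sketched as follows. This is a naive O(N^2) implementation with a fixed absolute tolerance; the window length, tolerance, and toy signals are illustrative assumptions rather than the parameters used in the study.

        import numpy as np

        def sample_entropy(x: np.ndarray, m: int = 2, r: float = 0.1) -> float:
            """Naive O(N^2) sample entropy: -ln(A/B) with absolute tolerance r.

            B counts length-m template matches, A counts length-(m+1) matches
            (self-matches excluded).  With a fixed tolerance, a near-silent
            window looks highly regular and scores a much lower entropy.
            """
            x = np.asarray(x, dtype=float)
            n = len(x)

            def count_matches(length: int) -> int:
                templates = np.array([x[i:i + length] for i in range(n - m)])
                count = 0
                for i in range(len(templates)):
                    dist = np.max(np.abs(templates - templates[i]), axis=1)
                    count += int(np.sum(dist <= r)) - 1  # exclude the self-match
                return count

            b, a = count_matches(m), count_matches(m + 1)
            return float("inf") if a == 0 or b == 0 else -float(np.log(a / b))

        # Toy contrast between a "breathing" window and a near-silent (apneic) pause.
        rng = np.random.default_rng(1)
        breathing = rng.normal(0.0, 1.0, 400)   # stand-in for breathing/snoring sound
        silence = rng.normal(0.0, 0.02, 400)    # stand-in for an apneic pause
        print(sample_entropy(breathing), sample_entropy(silence))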

    Fault and Defect Tolerant Computer Architectures: Reliable Computing With Unreliable Devices

    This research addresses the design of a reliable computer from unreliable device technologies. A system architecture is developed for a fault and defect tolerant (FDT) computer. Trade-offs between different techniques are studied, and yield and hardware cost models are developed. Fault and defect tolerant designs are created for the processor and the cache memory. Simulation results for the content-addressable memory (CAM)-based cache show 90% yield with device failure probabilities of 3 x 10^-6, three orders of magnitude better than non-fault-tolerant caches of the same size. The entire processor achieves 70% yield with device failure probabilities exceeding 10^-6. The required hardware redundancy is approximately 15 times that of a non-fault-tolerant design. While larger than current FT designs, this architecture allows the use of devices much more likely to fail than silicon CMOS. As part of model development, an improved model is derived for NAND multiplexing. The model is the first accurate model for small and medium amounts of redundancy. Previous models are extended to account for dependence between the inputs and produce more accurate results.
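    The trade-off between device failure probability, redundancy, and yield can be illustrated with a simple binomial sparing model. This is not the dissertation's CAM-cache or NAND-multiplexing model; the module sizes and spare counts below are assumed purely for illustration.

        from math import comb

        def module_yield(p_device_fail: float, devices_per_module: int) -> float:
            """Probability that a single module contains no failed devices."""
            return (1.0 - p_device_fail) ** devices_per_module

        def redundant_yield(p_module_ok: float, copies: int, needed: int) -> float:
            """Probability that at least `needed` of `copies` modules are fault-free."""
            return sum(
                comb(copies, k) * p_module_ok**k * (1.0 - p_module_ok) ** (copies - k)
                for k in range(needed, copies + 1)
            )

        # Assumed illustration: a 10^6-device memory with p_fail = 3e-6 per device,
        # built either monolithically or from 10^4-device blocks with 10 spare blocks.
        p_fail = 3e-6
        monolithic = module_yield(p_fail, 1_000_000)
        p_block = module_yield(p_fail, 10_000)
        with_spares = redundant_yield(p_block, copies=110, needed=100)
        print(f"no redundancy: {monolithic:.3f}, block sparing: {with_spares:.3f}")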

    Autonomous Recovery Of Reconfigurable Logic Devices Using Priority Escalation Of Slack

    Field Programmable Gate Array (FPGA) devices offer a suitable platform for survivable hardware architectures in mission-critical systems. In this dissertation, active dynamic redundancy-based fault-handling techniques are proposed which exploit the dynamic partial reconfiguration capability of SRAM-based FPGAs. Self-adaptation is realized by employing reconfiguration in the detection, diagnosis, and recovery phases. To extend these concepts to semiconductor aging and process variation in the deep submicron era, resilient adaptable processing systems are sought to maintain quality and throughput requirements despite the vulnerabilities of the underlying computational devices. A new approach to autonomous fault-handling which addresses these goals is developed using only a uniplex hardware arrangement. It operates by observing a health metric to achieve Fault Demotion using Reconfigurable Slack (FaDReS). Here an autonomous fault isolation scheme is employed which neither requires test vectors nor suspends the computational throughput, but instead observes the value of a health metric based on runtime input. The deterministic flow of the fault isolation scheme guarantees success in a bounded number of reconfigurations of the FPGA fabric. FaDReS is then extended to the Priority Using Resource Escalation (PURE) online redundancy scheme, which considers fault-isolation latency and throughput trade-offs under a dynamic spare arrangement. While deep-submicron designs introduce new challenges, the use of adaptive techniques is seen to provide several promising avenues for improving resilience. The scheme developed is demonstrated by hardware design of various signal processing circuits and their implementation on a Xilinx Virtex-4 FPGA device. These include a Discrete Cosine Transform (DCT) core, a Motion Estimation (ME) engine, a Finite Impulse Response (FIR) filter, a Support Vector Machine (SVM), and Advanced Encryption Standard (AES) blocks, in addition to MCNC benchmark circuits. A significant reduction in power consumption is achieved, ranging from 83% for low-motion-activity scenes to 12.5% for high-motion-activity video scenes in a novel ME engine configuration. For a typical benchmark video sequence, PURE is shown to maintain a PSNR baseline near 32 dB. The diagnosability, reconfiguration latency, and resource overhead of each approach are analyzed. Compared to previous alternatives, PURE maintains a PSNR within a difference of 4.02 dB to 6.67 dB from the fault-free baseline by escalating healthy resources to higher-priority signal processing functions. The results indicate the benefits of priority-aware resiliency over conventional redundancy approaches in terms of fault recovery, power consumption, and resource-area requirements. Together, these provide a broad range of strategies to achieve autonomous recovery of reconfigurable logic devices under a variety of constraints, operating conditions, and optimization criteria.
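    The priority-escalation idea can be sketched as a simple control loop: demote regions whose health metric degrades and greedily remap the highest-priority functions onto the healthiest remaining regions. The data structures, threshold, and greedy policy below are illustrative simplifications, not the FaDReS or PURE algorithms.

        # Simplified sketch of priority escalation: demote a degraded region to slack
        # and remap priority-ordered functions onto the healthiest regions (each
        # remap would trigger a partial reconfiguration on the real device).
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Region:
            name: str
            health: float                   # runtime health metric in [0, 1]
            function: Optional[str] = None  # hosted function, None if slack

        def escalate(regions: list[Region], functions: list[str], threshold: float = 0.8) -> list[Region]:
            """Greedily map priority-ordered functions onto the healthiest regions."""
            for r in regions:
                if r.health < threshold:
                    r.function = None       # suspect region becomes slack
            healthy = sorted((r for r in regions if r.health >= threshold),
                             key=lambda r: r.health, reverse=True)
            for func, region in zip(functions, healthy):
                region.function = func
            return regions

        # Example: the DCT core outranks the FIR filter; region B has degraded.
        regions = [Region("A", 0.95, "DCT"), Region("B", 0.60, "FIR"), Region("C", 0.90)]
        for r in escalate(regions, functions=["DCT", "FIR"]):
            print(r.name, r.function, r.health)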