
    Influence of parasitic capacitance variations on 65 nm and 32 nm predictive technology model SRAM core-cells

    The continuous improvement of CMOS technology allows the realization of digital circuits, and in particular static random access memories, that contain an impressive number of transistors compared with previous technologies. New production processes introduce a set of parasitic effects that gain more and more importance as the technology scales down. In particular, even small variations of parasitic capacitances in CMOS devices are expected to become an additional source of faulty behavior in future technologies. This paper analyzes and compares the effect of parasitic capacitance variations in an SRAM memory circuit realized with 65 nm and 32 nm predictive technology models.

    Using ER Models for Microprocessor Functional Test Coverage Evaluation

    Test coverage evaluation is one of the most critical issues in microprocessor software-based testing. Whenever the test is developed in the absence of a structural model of the microprocessor, the evaluation of the final test coverage may become a major issue. In this paper, we present a microprocessor modeling technique based on entity-relationship diagrams that allows the definition and computation of custom coverage functions. The proposed model is very flexible and particularly effective when a structural model of the microprocessor is not available.

    Statistical Reliability Estimation of Microprocessor-Based Systems

    What is the probability that the execution state of a given microprocessor running a given application is correct, in a certain working environment with a given soft-error rate? Trying to answer this question using fault injection can be very expensive and time consuming. This paper proposes the baseline for a new methodology, based on microprocessor error probability profiling, that aims at estimating fault injection results without the need for a typical fault-injection setup. The proposed methodology is based on two main ideas: a one-time fault-injection analysis of the microprocessor architecture to characterize the probability of successful execution of each of its instructions in the presence of a soft error, and a static and very fast analysis of the control and data flow of the target software application to compute its probability of success. The presented work goes beyond the dependability evaluation problem; it also has the potential to become the backbone for new tools able to help engineers choose the best hardware and software architecture to structurally maximize the probability of a correct execution of the target software.
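The second idea above composes per-instruction figures into a program-level estimate. A minimal sketch, assuming independent error events and a straight-line instruction trace (the opcode names and probabilities below are hypothetical placeholders, not the paper's characterization data):

```python
from math import prod

# Hypothetical per-opcode success probabilities, as would be obtained from a
# one-time fault-injection characterization under a given soft-error rate.
P_SUCCESS = {"add": 0.9999, "mul": 0.9997, "load": 0.9990, "store": 0.9992}

def program_success_probability(trace):
    """Probability that every instruction in the trace executes correctly,
    assuming independent per-instruction error events."""
    return prod(P_SUCCESS[op] for op in trace)

p = program_success_probability(["load", "add", "mul", "store"])
```

A real tool would weight these probabilities by the control- and data-flow structure of the application rather than treating the trace as a flat product.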

    GPU cards as a low cost solution for efficient and fast classification of high dimensional gene expression datasets

    The days when bioinformatics tools will be reliable enough to become a standard aid in routine clinical diagnostics are getting very close. However, it is important to remember that the more complex and advanced bioinformatics tools become, the more performance is required from the computing platforms. Unfortunately, the cost of High Performance Computing (HPC) platforms is still prohibitive for both public and private medical practices. Therefore, to promote and facilitate the use of bioinformatics tools, it is important to identify low-cost parallel computing solutions. This paper presents a successful experience in using the parallel processing capabilities of Graphical Processing Units (GPU) to speed up classification of gene expression profiles. Results show that using open source CUDA programming libraries makes it possible to obtain a significant increase in performance and therefore to shorten the gap between advanced bioinformatics tools and real medical practice.
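As an illustration of the kind of data-parallel workload that maps well to GPUs, a nearest-centroid classification of expression profiles reduces to a single vectorized distance computation. The sketch below uses NumPy on the CPU; the same array expression can run on a GPU by swapping in a CUDA-backed array library such as CuPy. The tiny dataset is invented for illustration and is unrelated to the paper's benchmarks:

```python
import numpy as np

def nearest_centroid_predict(X, centroids):
    """Assign each sample (row of X) to the class of its nearest centroid.
    X: (n_samples, n_genes); centroids: (n_classes, n_genes)."""
    # Broadcast to (n_samples, n_classes, n_genes), then reduce to distances.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Two toy "expression profiles" and two class centroids.
X = np.array([[0.0, 0.0], [10.0, 10.0]])
centroids = np.array([[0.0, 1.0], [9.0, 9.0]])
labels = nearest_centroid_predict(X, centroids)  # → array([0, 1])
```

Because the whole computation is expressed as array operations, the GPU port requires no algorithmic changes, which is exactly what makes commodity GPU cards attractive for this workload.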

    Guest Editorial: Special section on emerging trends and computing paradigms for testing, reliability and security in future VLSI systems

    With the rapid advancement of computing technologies in all domains (e.g., handheld devices, autonomous vehicles, medical devices, and massive supercomputers), the testability, reliability, and security of electronic systems are crucial to guaranteeing the safety of human life. Emerging technologies coupled with new computing paradigms (e.g., approximate computing, neuromorphic computing, in-memory computing) exacerbate these problems, posing significant challenges to researchers and designers. To address this increased complexity in the hardware testing/reliability/security domain, it is imperative to employ design and analysis methods working at all levels of abstraction, from the system level down to the gate level. In this context, the selected papers span from yield analysis and modeling, which is becoming fundamental for the manufacturing of modern technologies, to error detection, correction, and recovery once the new devices are operating in the field. At the same time, the papers recognize that fault tolerance can be achieved by a cross-layer approach to dependability that includes the analysis of the effects of faults as well as techniques and methodologies to deploy more resilient devices through design hardening. Finally, the dependability of systems is nowadays deeply linked with security aspects, including the impact on design trade-offs and on test and validation. The IEEE VLSI Test Symposium (VTS) invited the highest-ranked papers to be included in this special issue of the IEEE Transactions on Emerging Topics in Computing (TETC) in 2020. All aspects of design, manufacturing, test, monitoring, and securing of systems affected by defects and malicious attacks are covered by the accepted papers.
    It is our great pleasure to publish this special issue containing 12 high-quality papers covering all aspects of the emerging trends in testing and reliability:
    - FTxAC: Leveraging the Approximate Computing Paradigm in the Design of Fault-Tolerant Embedded Systems to Reduce Overheads, by Aponte-Moreno, Alexander; Restrepo-Calle, Felipe; Pedraza, Cesar: approximate computing techniques are exploited in the design of fault-tolerant systems to reduce the overhead implicit in common redundancy schemes.
    - A Statistical Gate Sizing Method for Timing Yield and Lifetime Reliability Optimization of Integrated Circuits, by Ghavami, Behnam; Ibrahimi, Milad; Raji, Mohsen: the reliability of CMOS devices is improved by tackling the joint effect of process variation and transistor aging.
    - 3D Ring Oscillator Based Test Structures to Detect a Trojan Die in a 3D Die Stack in the Presence of Process Variations, by Alhelaly, Soha; Dworak, Jennifer; Nepal, Kundan; Manikas, Theodore; Gui, Ping; Crouch, Alfred: the issue of Trojan insertion into 3D integrated circuits is explored from the point of view of in-stack circuitry and various testing procedures, showing their detection capability.
    - Defect Analysis and Parallel Testing for 3D Hybrid CMOS-Memristor Memory, by Liu, Peng; You, Zhiqiang; Wu, Jigang; Elimu, Michael; Wang, Weizheng; Cai, Shuo; Han, Yinhe: a new parallel March-like test is proposed to test CMOS-molecular architectures.
    - Attacks toward Wireless Network-on-Chip and Countermeasures, by Biswas, Arnab Kumar; Chatterjee, Navonil; Mondal, Hemanta; Gogniat, Guy; Diguet, Jean-Philippe: Wireless Network-on-Chip security vulnerabilities are described and countermeasures proposed.
    - A Novel TDMA-Based Fault Tolerance Technique for the TSVs in 3D-ICs Using Honeycomb Topology, by Ni, Tianming; Yang, Zhao; Chang, Hao; Zhang, Xiaoqiang; Lu, Lin; Yan, Aibin; Huang, Zhengfeng; Wen, Xiaoqing: a chain-type time division multiple access (TDMA)-based fault tolerance technique is proposed, showing a huge reduction in area overhead.
    - Design and Analysis of Secure Emerging Crypto-Hardware Using HyperFET Devices, by Delgado-Lozano, Ignacio María; Tena-Sánchez, Erica; Núñez, Juan; Acosta, Antonio J.: power analysis attacks against FinFET devices are tackled by incorporating HyperFET devices, delivering a 25x improvement in security level.
    - Detection, Location, and Concealment of Defective Pixels in Image Sensors, by Takam Tchendjou, Ghislain; Simeu, Emmanuel: image sensors are empowered with online diagnosis and self-healing methods to improve their dependability.
    - Defect and Fault Modeling Framework for STT-MRAM Testing, by Wu, Lizhou; Rao, Siddharth; Taouil, Mottaqiallah; Cardoso Medeiros, Guilherme; Fieback, Moritz; Marinissen, Erik Jan; Kar, Gouri Sankar; Hamdioui, Said: a framework to derive accurate STT-MRAM fault models is described, together with its use to model resistive defects in interconnects and pinhole defects in MTJ devices, enabling test solutions that detect those defects.
    - Online Safety Checking for Delay-Locked Loops via Embedded Phase Error Monitor, by Huang, Shi-Yu; Chu, Wei: the Automotive Safety Integrity Level (ASIL) is targeted by proposing a phase error monitoring scheme for Delay-Locked Loops (DLLs).
    - Protecting Memories against Soft Errors: The Case for Customizable Error Correction Codes, by Li, Jiaqiang; Reviriego, Pedro; Xiao, Li; Wu, Haotian: memory protection is supported by a tool able to automate the design of error correction codes.
    - Autonomous Scan Patterns for Laser Voltage Imaging, by Tyszer, Jerzy; Cheng, Wu-Tung; Milewski, Sylwester; Mrugalski, Grzegorz; Rajski, Janusz; Trawka, Maciej: the authors demonstrate how to reuse an on-chip EDT compression environment to generate and apply Laser Voltage Imaging-aware scan patterns for advanced contactless test procedures.
    We sincerely hope that you enjoy reading this special issue, and we would like to thank all authors and reviewers for their tremendous efforts and contributions in producing these high-quality articles. We also take this opportunity to thank the IEEE Transactions on Emerging Topics in Computing (TETC) Editor-in-Chief (EiC) Prof. Cecilia Metra, past Associate Editor Ramesh Karri, the editorial board, and the entire editorial staff for their guidance, encouragement, and assistance in delivering this special issue.

    Understanding protein complex formation: the role of charge distribution in the encounter complex

    Protein–protein complexes are formed via transient states called encounter complexes that greatly influence the formation of the stereospecific complex. Electrostatic charges on the protein surfaces play a major role in the encounter complexes of electron transfer proteins. The complex formed by cytochrome c (Cc) and cytochrome c peroxidase (CcP) has been studied intensively because it is an excellent model for exploring the properties of transient protein–protein interactions. Paramagnetic relaxation enhancement (PRE) experiments have previously described the encounter complex formed by the two proteins in detail. In this thesis we tested to what degree the electrostatic patch on CcP is optimized to enhance the rate of formation of the stereospecific complex. Using paramagnetic NMR in combination with Monte Carlo simulations and stopped-flow spectrophotometry, we investigated several CcP mutants with reengineered charged patches to create new encounter complexes and measured their effects on electron transfer from Cc to CcP. The results indicate that the interactions with Cc are affected more by the total charge of the CcP surface than by the specific distribution of the charges, calling into question the notion that electrostatic patches are highly optimized by evolution. This work was financially supported by the Netherlands Organisation for Scientific Research (NWO-CW grant 711.013.007).

    Efficient Neural Network Approximation via Bayesian Reasoning

    Approximate Computing (AxC) trades off the accuracy required by the user against the precision provided by the computing system to achieve several optimizations, such as performance improvement and energy and area reduction. Several AxC techniques have been proposed in the literature; they work at different abstraction levels and include both hardware and software implementations. The common shortcoming of all existing approaches is the lack of a methodology to estimate the impact of a given AxC technique on application-level accuracy. This paper proposes a probabilistic approach based on Bayesian networks to quickly estimate the impact of a given approximation technique on application-level accuracy. Moreover, we show how Bayesian networks allow a backtrack analysis that automatically identifies the most sensitive components. This influence analysis dramatically reduces the design-space exploration for approximation techniques. Preliminary results on a simple artificial neural network show the efficiency of the proposed approach.
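The core of such a probabilistic estimate is conditioning application-level error on component-level error. A minimal sketch of this propagation for a single component feeding the application output, assuming a two-node Bayesian chain (the conditional probabilities are illustrative, not taken from the paper):

```python
def application_error_probability(p_comp_err,
                                  p_app_err_given_comp_err,
                                  p_app_err_given_comp_ok):
    """Marginalize application-level error over the component's error state:
    P(app err) = P(app err | comp err) P(comp err)
               + P(app err | comp ok) P(comp ok)."""
    return (p_comp_err * p_app_err_given_comp_err
            + (1.0 - p_comp_err) * p_app_err_given_comp_ok)

# An approximation that corrupts the component 10% of the time, where a
# corrupted component flips the application output 80% of the time.
p = application_error_probability(0.10, 0.80, 0.05)  # ≈ 0.125
```

A full Bayesian network generalizes this to many components, and the backtrack analysis mentioned above amounts to asking which node's conditional table most influences the output marginal.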

    A low-cost approach for determining the impact of Functional Approximation

    Approximate Computing (AxC) trades off the level of accuracy required by the user against the actual precision provided by the computing system to achieve several optimizations, such as performance improvement and energy and area reduction. Several AxC techniques have been proposed in the literature; they work at different abstraction levels and include both hardware and software implementations. The common issue of all existing approaches is the lack of a methodology to estimate the impact of a given AxC technique on application-level accuracy. In this paper, we propose a probabilistic approach to predict the relation between component-level functional approximation and application-level accuracy. Experimental results on a set of benchmark applications show that the proposed approach is able to estimate the approximation error with good accuracy and very low computation time.
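The kind of component-level functional approximation being modeled can be illustrated with a toy approximate adder that truncates low-order bits; the exhaustive error measurement below is the brute-force baseline that a fast probabilistic prediction would replace (the component and bit widths are invented for illustration, not the paper's benchmarks):

```python
def approx_add(a, b, k):
    """Functionally approximate adder: drop the k least-significant bits
    of each operand before adding (a common truncation-style approximation)."""
    mask = ~((1 << k) - 1)
    return (a & mask) + (b & mask)

def mean_abs_error(k, bits=8):
    """Exhaustive mean absolute error of approx_add over all operand pairs."""
    n = 1 << bits
    total = sum(abs((a + b) - approx_add(a, b, k))
                for a in range(n) for b in range(n))
    return total / (n * n)

err = approx_add(7, 9, 2)        # → 12 (exact sum is 16)
mae = mean_abs_error(2, bits=4)  # → 3.0
```

The exhaustive loop is quadratic in the operand range, which is exactly why an analytical, component-to-application error model pays off.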

    Approximate computing design exploration through data lifetime metrics

    When designing an approximate computing system, the selection of the resources to modify is key. It is important that the error introduced in the system remains reasonable, but the size of the design exploration space can make this selection extremely difficult. In this paper, we propose to exploit a new metric for this selection: data lifetime. The concept comes from the field of reliability, where it can guide selective hardening: the more often a resource handles "live" data, the more critical it becomes, and the more important it is to protect it. In this paper, we propose to use the same metric in a new way: identify the least critical resources as approximation targets in order to minimize the impact on global system behavior, thereby decreasing the impact of approximation while increasing gains on other criteria.
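A minimal sketch of how a data-lifetime figure might be computed from a resource's access trace; here a value is considered live from its write until its last read, and the metric is the fraction of total cycles the resource holds live data (the event format and this exact definition are assumptions for illustration, not the paper's formulation):

```python
def live_fraction(accesses, total_cycles):
    """accesses: chronological list of (cycle, 'W' | 'R') events for one
    register-like resource. Returns the fraction of cycles it holds data
    that is still going to be read."""
    live = 0
    write_cycle = None
    last_read = None
    for cycle, kind in accesses:
        if kind == 'W':
            # A new write ends the previous value's life at its last read.
            if write_cycle is not None and last_read is not None:
                live += last_read - write_cycle
            write_cycle, last_read = cycle, None
        else:
            last_read = cycle
    if write_cycle is not None and last_read is not None:
        live += last_read - write_cycle
    return live / total_cycles

# Written at cycle 0, last read at 3; rewritten at 5, last read at 9:
# live for (3-0) + (9-5) = 7 of 10 cycles.
f = live_fraction([(0, 'W'), (3, 'R'), (5, 'W'), (6, 'R'), (9, 'R')], 10)
```

Resources with a low fraction spend most of their time holding dead data, making them natural first candidates for approximation.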

    ReNE: A Cytoscape Plugin for Regulatory Network Enhancement

    One of the biggest challenges in the study of biological regulatory mechanisms is the integration, modeling, and analysis of the complex interactions that take place in biological networks. Although post-transcriptional regulatory elements (e.g., miRNAs) are widely investigated in current research, their usage and visualization in biological networks are very limited, and regulatory networks are commonly limited to gene entities. To integrate networks with post-transcriptional regulatory data, researchers are therefore forced to manually resort to specific third-party databases. In this context, we introduce ReNE, a Cytoscape 3.x plugin designed to automatically enrich a standard gene-based regulatory network with more detailed transcriptional, post-transcriptional, and translational data, resulting in an enhanced network that more precisely models the actual biological regulatory mechanisms. ReNE can automatically import a network layout from the Reactome or KEGG repositories, or work with custom pathways described in a standard OWL/XML data format that the Cytoscape import procedure accepts. Moreover, ReNE allows researchers to merge multiple pathways coming from different sources. The merged network structure is normalized to guarantee a consistent and uniform description of the network nodes and edges and to enrich all integrated data with additional annotations retrieved from genome-wide databases like NCBI, thus producing a pathway fully manageable through the Cytoscape environment. The normalized network is then analyzed to include missing transcription factors, miRNAs, and proteins. The resulting enhanced network is still a fully functional Cytoscape network where each regulatory element (transcription factor, miRNA, gene, protein) and regulatory mechanism (up-regulation/down-regulation) is clearly visually identifiable, enabling a better visual understanding of its role and effect in the network behavior.
The enhanced network produced by ReNE can be exported in multiple formats for further analysis with third-party applications. ReNE can be freely installed from the Cytoscape App Store (http://apps.cytoscape.org/apps/rene), and the full source code is freely available for download through an SVN repository accessible at http://www.sysbio.polito.it/tools_svn/BioInformatics/Rene/releases/. ReNE enhances a network by only integrating data from public repositories, without any inference or prediction. The reliability of the introduced interactions therefore depends only on the reliability of the source data, which is outside the control of the ReNE developers.