Guest Editorial: Special section on emerging trends and computing paradigms for testing, reliability and security in future VLSI systems
With the rapid advancement of computing technologies across all domains (e.g., handheld devices, autonomous vehicles, medical devices, and massive supercomputers), the testability, reliability, and security of electronic systems have become crucial to guaranteeing the safety of human life. Emerging technologies coupled with new computing paradigms (e.g., approximate computing, neuromorphic computing, in-memory computing) further exacerbate these problems, posing significant challenges to researchers and designers. To address this increased complexity in the hardware testing/reliability/security domain, it is imperative to employ design and analysis methods that work at all levels of abstraction, from the system level down to the gate level.
In this context, the selected papers range from yield analysis and modeling, which is becoming fundamental for the manufacturing of modern technologies, to error detection, correction, and recovery once new devices are operating in the field. The papers also recognize that fault tolerance can be achieved through a cross-layer approach to dependability, encompassing the analysis of fault effects as well as techniques and methodologies to deploy more resilient devices through design hardening. Finally, the dependability of today's systems is deeply linked with security aspects, including their impact on design trade-offs, test, and validation.
The IEEE VLSI Test Symposium (VTS) 2020 invited its highest-ranked papers to be included in this special issue of the IEEE Transactions on Emerging Topics in Computing (TETC). The accepted papers cover all aspects of designing, manufacturing, testing, monitoring, and securing systems affected by defects and malicious attacks.
It is our great pleasure to publish this special issue containing 12 high-quality papers covering all aspects of emerging trends in testing and reliability:
- FTxAC: Leveraging the Approximate Computing Paradigm in the Design of Fault-Tolerant Embedded Systems to Reduce Overheads by Aponte-Moreno, Alexander; Restrepo-Calle, Felipe; Pedraza, Cesar, approximate computing techniques are exploited in the design of fault-tolerant systems to reduce the overhead implied by conventional redundancy.
- A Statistical Gate Sizing Method for Timing Yield and Lifetime Reliability Optimization of Integrated Circuits by Ghavami, Behnam; Ibrahimi, Milad; Raji, Mohsen, the reliability of CMOS devices is improved by tackling the joint effect of process variation and transistor aging.
- 3D Ring Oscillator based Test Structures to Detect a Trojan Die in a 3D Die Stack in the Presence of Process Variations by Alhelaly, Soha; Dworak, Jennifer; Nepal, Kundan; Manikas, Theodore; Gui, Ping; Crouch, Alfred, the issue of Trojan insertion into 3D integrated circuits is explored using in-stack circuitry and various testing procedures, demonstrating their detection capability.
- Defect Analysis and Parallel Testing for 3D Hybrid CMOS-Memristor Memory by Liu, Peng; You, Zhiqiang; Wu, Jigang; Elimu, Michael; Wang, Weizheng; Cai, Shuo; Han, Yinhe, a new parallel March-like test is proposed for 3D hybrid CMOS-memristor memory architectures.
- Attacks toward Wireless Network-on-Chip and Countermeasures by Biswas, Arnab Kumar; Chatterjee, Navonil; Mondal, Hemanta; Gogniat, Guy; DIGUET, Jean-Philippe, Wireless Network-on-Chip security vulnerabilities are described and corresponding countermeasures are proposed.
- A Novel TDMA-Based Fault Tolerance Technique for the TSVs in 3D-ICs Using Honeycomb Topology by Ni, Tianming; Yang, Zhao; Chang, Hao; Zhang, Xiaoqiang; Lu, Lin; Yan, Aibin; Huang, Zhengfeng; Wen, Xiaoqing, proposes a chain-type time-division multiple access (TDMA)-based fault tolerance technique that achieves a substantial reduction in area overhead.
- Design and analysis of secure emerging crypto-hardware using HyperFET devices by Delgado-Lozano, Ignacio María; Tena-Sánchez, Erica; Núñez, Juan; Acosta, Antonio J., power analysis attacks against FinFET devices are tackled by incorporating HyperFET devices, delivering a 25x improvement in security level.
- Detection, Location, and Concealment of Defective Pixels in Image Sensors by TAKAM TCHENDJOU, Ghislain; SIMEU, Emmanuel, image sensors are empowered with online diagnosis and self-healing methods to improve their dependability.
- Defect and Fault Modeling Framework for STT-MRAM Testing by Wu, Lizhou; Rao, Siddharth; Taouil, Mottaqiallah; Cardoso Medeiros, Guilherme; Fieback, Moritz; Marinissen, Erik Jan; Kar, Gouri Sankar; Hamdioui, Said, a framework to derive accurate STT-MRAM fault models is described, together with its use for modeling resistive defects in interconnects and pinhole defects in MTJ devices, enabling test solutions that detect those defects.
- Online Safety Checking for Delay Locked Loops via Embedded Phase Error Monitor by Huang, Shi-Yu; Chu, Wei, the Automotive Safety Integrity Level (ASIL) is targeted by proposing a phase error monitoring scheme for Delay-Locked Loops (DLLs).
- Protecting Memories against Soft Errors: The Case for Customizable Error Correction Codes by Li, Jiaqiang; Reviriego, Pedro; Xiao, Li; Wu, Haotian, memory protection is supported by a tool that automates the design of error correction codes.
- Autonomous Scan Patterns for Laser Voltage Imaging by Tyszer, Jerzy; Cheng, Wu-Tung; Milewski, Sylwester; Mrugalski, Grzegorz; Rajski, Janusz; Trawka, Maciej, the authors demonstrate how to reuse the on-chip EDT compression environment to generate and apply Laser Voltage Imaging-aware scan patterns for advanced contactless test procedures.
We sincerely hope that you enjoy reading this special issue, and we would like to thank all authors and reviewers for their tremendous efforts and contributions in producing these high-quality articles. We also take this opportunity to thank the IEEE Transactions on Emerging Topics in Computing (TETC) Editor-in-Chief (EIC) Prof. Cecilia Metra, past Associate Editor Ramesh Karri, the editorial board, and the entire editorial staff for their guidance, encouragement, and assistance in delivering this special issue.
Large fluctuations of the nonlinearities in isotropic turbulence. Anisotropic filtering analysis
Using a Navier–Stokes isotropic turbulent field numerically simulated in a box with a discretization of 1024^3 (Biferale et al., 2005), we show that the probability of having a stretching–tilting larger than a few times the local enstrophy is low. By using an anisotropic kind of filter in Fourier space, where wavenumbers that have at least one component below a threshold or inside a range are removed, we analyze these survival statistics when the large, the small inertial, or the small inertial and dissipation scales are filtered out. By considering a flow obtained by randomizing the phases of the Fourier modes, and applying our filtering techniques, we clearly identified the properties attributable to turbulence. It can be observed that, in the unfiltered isotropic Navier–Stokes field, the probability of the ratio |ω·∇U|/|ω|^2 being higher than a given threshold is higher than in the fields where the large scales were filtered out. At the same time, it is lower than in the fields where the small inertial and dissipation range of scales is filtered out. This is basically due to the suppression of compact structures in the ranges that have been filtered in different ways. The partial removal of the background of filaments and sheets does not have a first-order effect on these statistics. These results are discussed in the light of a hypothesized relation between vortical filaments, sheets, and blobs in physical space and in Fourier space. The study can in fact be viewed as a kind of test for this idea and tries to highlight its limits. We conclude that a qualitative relation between physical space and Fourier space can be supposed to exist for blobs only, that is, for the nearly isotropic structures that are sufficiently described by a single spatial scale and do not suffer from the disambiguation problem that filaments and sheets do.
Information is also given on the filtering effect on statistics concerning the inclination of the strain-rate tensor eigenvectors with respect to vorticity. In all filtered ranges, eigenvector 2 reduces its alignment, while eigenvector 3 reduces its misalignment. All filters increase the gap between the most extensional eigenvalue ⟨λ1⟩ and the intermediate one ⟨λ2⟩, and the gap between this last ⟨λ2⟩ and the contractile eigenvalue ⟨λ3⟩. When the large scales are missing, the modulus of eigenvalue 1 becomes nearly equal to that of eigenvalue 3, similarly to the modulus of the associated components of the enstrophy production.
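As a concrete illustration of the kind of anisotropic spectral filtering and survival statistic discussed above, the following Python/NumPy sketch removes every Fourier mode having at least one wavenumber component below a threshold (large-scale removal) and then estimates P(|ω·∇U|/|ω|^2 > s) over a periodic velocity field. This is only a minimal sketch under stated assumptions: the field shape, the threshold value, the finite-difference gradients, and the function names are illustrative and not the code used in the study.

```python
# Illustrative sketch (not the authors' code): anisotropic Fourier filter that removes
# all modes with at least one |k_i| below a threshold, plus the survival statistic
# P(|omega . grad U| / |omega|^2 > s). Shapes and parameter values are assumptions.
import numpy as np

def anisotropic_low_mode_filter(u, k_min=4):
    """Zero every Fourier mode with at least one |k_i| < k_min (large-scale removal)."""
    n = u.shape[0]                      # u: (n, n, n, 3) velocity field on a periodic box
    k = np.fft.fftfreq(n) * n           # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    keep = (np.abs(kx) >= k_min) & (np.abs(ky) >= k_min) & (np.abs(kz) >= k_min)
    u_hat = np.fft.fftn(u, axes=(0, 1, 2))
    return np.real(np.fft.ifftn(u_hat * keep[..., None], axes=(0, 1, 2)))

def survival_probability(u, thresholds, dx=1.0):
    """Estimate P(|omega . grad U| / |omega|^2 > s) over all grid points."""
    # grad[..., i, j] = dU_i/dx_j via central differences
    grad = np.stack([np.stack([np.gradient(u[..., i], dx, axis=j)
                               for j in range(3)], axis=-1) for i in range(3)], axis=-2)
    om = np.stack([grad[..., 2, 1] - grad[..., 1, 2],      # vorticity components
                   grad[..., 0, 2] - grad[..., 2, 0],
                   grad[..., 1, 0] - grad[..., 0, 1]], axis=-1)
    stretch = np.einsum("...j,...kj->...k", om, grad)      # (omega . grad) U
    ratio = np.linalg.norm(stretch, axis=-1) / (np.einsum("...i,...i", om, om) + 1e-30)
    return [(ratio > s).mean() for s in thresholds]
```

Comparing the survival probabilities returned for an unfiltered field and for its filtered counterparts mirrors, in spirit, the comparison described in the abstract.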
A low-cost approach for determining the impact of Functional Approximation
Approximate Computing (AxC) trades the level of accuracy required by the user against the actual precision provided by the computing system to achieve optimizations such as performance improvement and energy and area reduction. Several AxC techniques have been proposed in the literature; they work at different abstraction levels and offer both hardware and software implementations. The common issue of all existing approaches is the lack of a methodology to estimate the impact of a given AxC technique on application-level accuracy. In this paper, we propose a probabilistic approach to predict the relation between component-level functional approximation and application-level accuracy. Experimental results on a set of benchmark applications show that the proposed approach estimates the approximation error with good accuracy and very low computation time.
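To make the idea of relating a component-level error model to application-level accuracy more tangible, here is a minimal Monte Carlo sketch in Python. It is not the probabilistic method proposed in the paper: the approximate-adder error pattern, the toy accumulation "application", and all names are hypothetical; the sketch only illustrates how a component-level model can be propagated to an application-level error estimate.

```python
# Minimal sketch (not the paper's method): propagating a component-level error model
# to an application-level accuracy estimate by Monte Carlo sampling.
import numpy as np

rng = np.random.default_rng(0)

def approx_add(a, b):
    """Toy approximate adder: hypothetical error pattern in the low 4 bits."""
    return (a + b) - ((a & 0xF) & (b & 0xF))

def application(xs, add):
    """Toy application: accumulate a vector with the given adder."""
    acc = 0
    for x in xs:
        acc = add(acc, int(x))
    return acc

def estimate_app_error(n_trials=10_000, length=64):
    """Mean relative application-level error induced by the approximate adder."""
    errs = []
    for _ in range(n_trials):
        xs = rng.integers(0, 256, size=length)
        exact = application(xs, lambda a, b: a + b)
        approx = application(xs, approx_add)
        errs.append(abs(exact - approx) / max(exact, 1))
    return float(np.mean(errs))

if __name__ == "__main__":
    print(f"estimated mean relative error: {estimate_app_error():.4%}")
```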
Efficient Neural Network Approximation via Bayesian Reasoning
Approximate Computing (AxC) trades the accuracy required by the user against the precision provided by the computing system to achieve optimizations such as performance improvement and energy and area reduction. Several AxC techniques have been proposed in the literature; they work at different abstraction levels and offer both hardware and software implementations. The common issue of all existing approaches is the lack of a methodology to estimate the impact of a given AxC technique on application-level accuracy. This paper proposes a probabilistic approach based on Bayesian networks to quickly estimate the impact of a given approximation technique on application-level accuracy. Moreover, we show how Bayesian networks enable a backtrack analysis that automatically identifies the most sensitive components; this influence analysis dramatically reduces the design-space exploration for approximation techniques. Preliminary results on a simple artificial neural network show the efficiency of the proposed approach.
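The following self-contained Python sketch illustrates the flavor of such a Bayesian-network analysis on a made-up two-component example: exact enumeration predicts the probability that the application output is acceptable, and a simple sensitivity loop mimics the backtrack analysis by ranking components according to how much degrading each one hurts application-level accuracy. The network structure and all probabilities are placeholders, not the paper's model.

```python
# Minimal sketch (not the paper's model): a two-component discrete Bayesian network,
# with exact enumeration used to predict application-level accuracy and to rank
# components by sensitivity. All probabilities are made-up placeholders.
from itertools import product

# Prior probability that each approximated component produces an acceptable output.
p_ok = {"C1": 0.95, "C2": 0.80}

# P(application acceptable | component outcomes), True = acceptable.
p_app_ok = {(True, True): 0.99, (True, False): 0.60,
            (False, True): 0.75, (False, False): 0.10}

def app_accuracy(p_ok):
    """P(application acceptable) by enumerating all component outcomes."""
    total = 0.0
    for c1, c2 in product([True, False], repeat=2):
        p = (p_ok["C1"] if c1 else 1 - p_ok["C1"]) * \
            (p_ok["C2"] if c2 else 1 - p_ok["C2"])
        total += p * p_app_ok[(c1, c2)]
    return total

def sensitivity(p_ok, delta=0.05):
    """Rank components by how much degrading each one alone hurts application accuracy."""
    base = app_accuracy(p_ok)
    impact = {}
    for c in p_ok:
        degraded = dict(p_ok, **{c: p_ok[c] - delta})
        impact[c] = base - app_accuracy(degraded)
    return sorted(impact.items(), key=lambda kv: kv[1], reverse=True)

print("P(app acceptable) =", round(app_accuracy(p_ok), 4))
print("most sensitive components:", sensitivity(p_ok))
```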
Approximate computing design exploration through data lifetime metrics
When designing an approximate computing system, the selection of the resources to modify is key. It is important that the error introduced in the system remains reasonable, but the size of the design exploration space can make this extremely difficult. In this paper, we propose to exploit a new metric for this selection: data lifetime. The concept comes from the field of reliability, where it can guide selective hardening: the more often a resource handles "live" data, the more critical it becomes and the more important it is to protect it. In this paper, we propose to use this same metric in a new way: identifying the less critical resources as approximation targets in order to minimize the impact on global system behavior, thereby decreasing the impact of approximation while increasing gains on other criteria.
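A minimal Python sketch of how such a data-lifetime metric could be computed and used for target selection is shown below. The trace format, the lifetime definition (cycles from a write to the subsequent reads), and the ranking rule are assumptions made for illustration, not the tool described in the paper.

```python
# Illustrative sketch (assumed trace format, not the paper's tool): a simple data-lifetime
# metric per register from a write/read trace, used to rank the least critical registers
# as candidate approximation targets.
from collections import defaultdict

# Hypothetical trace: (cycle, op, register), where op is "W" (write) or "R" (read).
trace = [(0, "W", "r1"), (2, "R", "r1"), (3, "W", "r2"), (9, "R", "r2"),
         (10, "W", "r1"), (11, "R", "r1"), (12, "W", "r3"), (20, "R", "r3")]
total_cycles = 21

def data_lifetime(trace, total_cycles):
    """Fraction of cycles each register holds live data (from a write to its reads)."""
    live = defaultdict(int)
    last_write = {}
    for cycle, op, reg in trace:
        if op == "W":
            last_write[reg] = cycle
        elif op == "R" and reg in last_write:
            live[reg] += cycle - last_write[reg]   # interval during which the value was needed
            last_write[reg] = cycle                # value may stay live until the next access
    return {reg: live[reg] / total_cycles for reg in live}

lifetimes = data_lifetime(trace, total_cycles)
targets = sorted(lifetimes, key=lifetimes.get)     # least-live registers first
print("data lifetime per register:", lifetimes)
print("approximation candidates (least critical first):", targets)
```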
ReNE: A Cytoscape Plugin for Regulatory Network Enhancement
One of the biggest challenges in the study of biological regulatory mechanisms is the integration, modeling, and analysis of the complex interactions that take place in biological networks. Although post-transcriptional regulatory elements (i.e., miRNAs) are widely investigated in current research, their usage and visualization in biological networks are very limited, and regulatory networks are commonly limited to gene entities. To integrate networks with post-transcriptional regulatory data, researchers are therefore forced to manually resort to specific third-party databases. In this context, we introduce ReNE, a Cytoscape 3.x plugin designed to automatically enrich a standard gene-based regulatory network with more detailed transcriptional, post-transcriptional, and translational data, resulting in an enhanced network that more precisely models the actual biological regulatory mechanisms. ReNE can automatically import a network layout from the Reactome or KEGG repositories, or work with custom pathways described using a standard OWL/XML data format that the Cytoscape import procedure accepts. Moreover, ReNE allows researchers to merge multiple pathways coming from different sources. The merged network structure is normalized to guarantee a consistent and uniform description of the network nodes and edges and to enrich all integrated data with additional annotations retrieved from genome-wide databases like NCBI, thus producing a pathway fully manageable through the Cytoscape environment. The normalized network is then analyzed to include missing transcription factors, miRNAs, and proteins. The resulting enhanced network is still a fully functional Cytoscape network where each regulatory element (transcription factor, miRNA, gene, protein) and regulatory mechanism (up-regulation/down-regulation) is clearly visually identifiable, thus enabling a better visual understanding of its role and effect in the network behavior. The enhanced network produced by ReNE is exportable in multiple formats for further analysis via third-party applications. ReNE can be freely installed from the Cytoscape App Store (http://apps.cytoscape.org/apps/rene) and the full source code is freely available for download through an SVN repository accessible at http://www.sysbio.polito.it/tools_svn/BioInformatics/Rene/releases/. ReNE enhances a network by only integrating data from public repositories, without any inference or prediction; the reliability of the introduced interactions therefore depends only on the reliability of the source data, which is beyond the control of the ReNE developers.
A New miRNA Motif Protects Pathways’ Expression in Gene Regulatory Networks
The continuing discovery of new functions and classes of small non-coding RNAs suggests the presence of regulatory mechanisms far more complex than the ones identified so far. In our computational analysis of a large set of publicly available databases, we found statistical evidence of a previously undescribed inter-pathway regulatory motif that reveals a new protective role miRNAs may play in the successful activation of a pathway. This paper reports the main outcomes of this analysis.
Cross-layer soft-error resilience analysis of computing systems
In a world with computation at the epicenter of every activity, computing systems must be highly resilient to errors even as miniaturization makes the underlying hardware unreliable. Techniques able to guarantee high reliability are associated with high costs. Early resilience analysis has the potential to support informed design decisions that maximize system-level reliability while minimizing the associated costs. This tutorial focuses on early cross-layer (hardware and software) resilience analysis considering the full computing continuum (from IoT/CPS to HPC applications), with emphasis on soft errors.
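As a small illustration of one building block of such an analysis, the sketch below performs software-level fault injection: it flips a single bit in an intermediate value of a toy workload and classifies each run as masked, silent data corruption (SDC), or crash. The workload, fault model, and classification are assumptions for illustration and are not taken from the tutorial.

```python
# Minimal sketch (assumed workload and fault model, not the tutorial's framework):
# software-level fault injection with a single bit flip in an intermediate value,
# classified as masked, silent data corruption (SDC), or crash.
import random

def workload(data, flip_at=None, bit=0):
    """Toy workload: running maximum of a list, with an optional injected bit flip."""
    acc = 0
    for i, x in enumerate(data):
        if i == flip_at:
            x ^= 1 << bit            # injected soft error in an intermediate value
        acc = max(acc, x)
    return acc

def fi_campaign(n_injections=1000, seed=0):
    random.seed(seed)
    data = [random.randrange(256) for _ in range(128)]
    golden = workload(data)
    outcomes = {"masked": 0, "sdc": 0, "crash": 0}
    for _ in range(n_injections):
        where = random.randrange(len(data))
        bit = random.randrange(8)
        try:
            result = workload(data, flip_at=where, bit=bit)
            outcomes["sdc" if result != golden else "masked"] += 1
        except Exception:            # crashes counted for completeness of the classification
            outcomes["crash"] += 1
    return outcomes

print(fi_campaign())
```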
EXT-TAURUM P2T: an Extended Secure CAN-FD Architecture for Road Vehicles
The automotive industry no longer relies on purely mechanical systems; instead, it benefits from advanced Electronic Control Units (ECUs) to provide new and complex functionalities in the effort to move toward fully connected cars. However, connected cars provide a dangerous playground for hackers. Vehicles are becoming increasingly vulnerable to cyber attacks as they come equipped with more connected features and control systems. This situation may expose strategic assets in the automotive value chain. In this scenario, the Controller Area Network (CAN) is the most widely used communication protocol in the automotive domain. However, this protocol lacks encryption and authentication; consequently, any malicious or hijacked node can cause catastrophic accidents and financial loss. Starting from an analysis of the vulnerabilities of the CAN communication protocol in the automotive domain, this paper proposes EXT-TAURUM P2T, a new low-cost secure CAN-FD architecture for the automotive domain implementing secure communication among ECUs, a novel key provisioning strategy, intelligent throughput management, and hardware signature mechanisms. The proposed architecture has been implemented resorting to a commercial Multi-Protocol Vehicle Interface module, and the obtained results experimentally demonstrate the feasibility of the approach.
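The abstract does not detail the security mechanisms, so the sketch below shows one common way of retrofitting authentication onto CAN-FD payloads, a truncated HMAC plus a monotonic freshness counter, purely as an illustration of the problem EXT-TAURUM P2T addresses. The key handling, counter width, and MAC truncation length are assumptions and are not the paper's design.

```python
# Illustrative sketch (not the EXT-TAURUM P2T design): authenticating CAN-FD payloads
# with a truncated HMAC and a monotonic freshness counter.
import hmac, hashlib, struct

KEY = bytes.fromhex("000102030405060708090a0b0c0d0e0f")  # placeholder shared key
MAC_LEN = 8      # truncated MAC length in bytes (fits in a 64-byte CAN-FD payload)

def build_frame(can_id: int, data: bytes, counter: int) -> bytes:
    """Payload = data || counter(4B) || truncated HMAC over (id, counter, data)."""
    ctr = struct.pack(">I", counter)
    mac = hmac.new(KEY, struct.pack(">I", can_id) + ctr + data, hashlib.sha256).digest()[:MAC_LEN]
    return data + ctr + mac

def verify_frame(can_id: int, payload: bytes, last_counter: int):
    """Return (data, counter) if MAC and freshness checks pass, else raise ValueError."""
    data = payload[:-MAC_LEN - 4]
    ctr_bytes = payload[-MAC_LEN - 4:-MAC_LEN]
    mac = payload[-MAC_LEN:]
    expected = hmac.new(KEY, struct.pack(">I", can_id) + ctr_bytes + data, hashlib.sha256).digest()[:MAC_LEN]
    if not hmac.compare_digest(mac, expected):
        raise ValueError("authentication failed")
    counter = struct.unpack(">I", ctr_bytes)[0]
    if counter <= last_counter:
        raise ValueError("replayed or stale frame")
    return data, counter

frame = build_frame(0x123, b"\x01\x02\x03\x04", counter=42)
print(verify_frame(0x123, frame, last_counter=41))
```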