
    Guest Editorial: Special section on emerging trends and computing paradigms for testing, reliability and security in future VLSI systems

    With the rapid advancement of computing technologies in all domains (e.g., handheld devices, autonomous vehicles, medical devices, and massive supercomputers), the testability, reliability, and security of electronic systems are crucial to guaranteeing the safety of human life. Emerging technologies, coupled with new computing paradigms (e.g., approximate computing, neuromorphic computing, in-memory computing), exacerbate these problems and pose significant challenges to researchers and designers. To address this increased complexity in the hardware testing/reliability/security domain, it is imperative to employ design and analysis methods working at all levels of abstraction, from the system level down to the gate level. In this context, the selected papers span from yield analysis and modeling, which is becoming fundamental for the manufacturing of modern technologies, to error detection, correction, and recovery once new devices are operating in the field. At the same time, the papers recognize that fault tolerance can be achieved through a cross-layer approach to dependability, one that includes analyzing the effects of faults as well as techniques and methodologies for deploying more resilient devices by hardening the design. Finally, system dependability is nowadays deeply linked with security aspects, including the impact on design trade-offs, test, and validation. The IEEE VLSI Test Symposium (VTS) invited its highest-ranked papers to be included in this 2020 special issue of IEEE Transactions on Emerging Topics in Computing (TETC). All aspects of the design, manufacturing, test, monitoring, and securing of systems affected by defects and malicious attacks are covered by the accepted papers.
It is our great pleasure to publish this special issue containing 12 high-quality papers covering all aspects of the emerging trends in testing and reliability:
- FTxAC: Leveraging the Approximate Computing Paradigm in the Design of Fault-Tolerant Embedded Systems to Reduce Overheads, by Aponte-Moreno, Alexander; Restrepo-Calle, Felipe; Pedraza, Cesar: approximate computing techniques are exploited in the design of fault-tolerant systems to reduce the overhead implicit in conventional redundancy.
- A Statistical Gate Sizing Method for Timing Yield and Lifetime Reliability Optimization of Integrated Circuits, by Ghavami, Behnam; Ibrahimi, Milad; Raji, Mohsen: the reliability of CMOS devices is improved by tackling the joint effect of process variation and transistor aging.
- 3D Ring Oscillator Based Test Structures to Detect a Trojan Die in a 3D Die Stack in the Presence of Process Variations, by Alhelaly, Soha; Dworak, Jennifer; Nepal, Kundan; Manikas, Theodore; Gui, Ping; Crouch, Alfred: Trojan insertion into 3D integrated circuits is explored from the viewpoint of in-stack circuitry and various testing procedures, demonstrating their detection capability.
- Defect Analysis and Parallel Testing for 3D Hybrid CMOS-Memristor Memory, by Liu, Peng; You, Zhiqiang; Wu, Jigang; Elimu, Michael; Wang, Weizheng; Cai, Shuo; Han, Yinhe: a new parallel March-like test is proposed for CMOS-molecular architectures.
- Attacks toward Wireless Network-on-Chip and Countermeasures, by Biswas, Arnab Kumar; Chatterjee, Navonil; Mondal, Hemanta; Gogniat, Guy; Diguet, Jean-Philippe: security vulnerabilities of wireless Network-on-Chip designs are described and countermeasures are proposed.
- A Novel TDMA-Based Fault Tolerance Technique for the TSVs in 3D-ICs Using Honeycomb Topology, by Ni, Tianming; Yang, Zhao; Chang, Hao; Zhang, Xiaoqiang; Lu, Lin; Yan, Aibin; Huang, Zhengfeng; Wen, Xiaoqing: a chain-type time-division multiple access (TDMA)-based fault tolerance technique is proposed, showing a large reduction in area overhead.
- Design and Analysis of Secure Emerging Crypto-Hardware Using HyperFET Devices, by Delgado-Lozano, Ignacio María; Tena-Sánchez, Erica; Núñez, Juan; Acosta, Antonio J.: power analysis attacks against FinFET devices are tackled by incorporating HyperFET devices, delivering a 25x improvement in security level.
- Detection, Location, and Concealment of Defective Pixels in Image Sensors, by Takam Tchendjou, Ghislain; Simeu, Emmanuel: image sensors are equipped with online diagnosis and self-healing methods to improve their dependability.
- Defect and Fault Modeling Framework for STT-MRAM Testing, by Wu, Lizhou; Rao, Siddharth; Taouil, Mottaqiallah; Cardoso Medeiros, Guilherme; Fieback, Moritz; Marinissen, Erik Jan; Kar, Gouri Sankar; Hamdioui, Said: a framework for deriving accurate STT-MRAM fault models is described, together with its use in modeling resistive defects in interconnects and pinhole defects in MTJ devices, enabling test solutions that detect those defects.
- Online Safety Checking for Delay Locked Loops via Embedded Phase Error Monitor, by Huang, Shi-Yu; Chu, Wei: the Automotive Safety Integrity Level (ASIL) is targeted by a phase error monitoring scheme for delay-locked loops (DLLs).
- Protecting Memories against Soft Errors: The Case for Customizable Error Correction Codes, by Li, Jiaqiang; Reviriego, Pedro; Xiao, Li; Wu, Haotian: memory protection is supported by a tool that automates the design of error correction codes.
- Autonomous Scan Patterns for Laser Voltage Imaging, by Tyszer, Jerzy; Cheng, Wu-Tung; Milewski, Sylwester; Mrugalski, Grzegorz; Rajski, Janusz; Trawka, Maciej: the authors demonstrate how to reuse an on-chip EDT compression environment to generate and apply Laser Voltage Imaging-aware scan patterns for advanced contactless test procedures.
We sincerely hope that you enjoy reading this special issue, and we would like to thank all authors and reviewers for their tremendous efforts and contributions in producing these high-quality articles. We also take this opportunity to thank the IEEE Transactions on Emerging Topics in Computing (TETC) Editor-in-Chief (EIC) Prof. Cecilia Metra, past Associate Editor Ramesh Karri, the editorial board, and the entire editorial staff for their guidance, encouragement, and assistance in delivering this special issue.

    Introduction to the special section on dependable network computing

    Dependable network computing is becoming a key part of our daily economic and social life. Every day, millions of users and businesses use the Internet infrastructure for real-time electronic commerce transactions, scheduling important events, and building relationships. While network traffic and the number of users are growing rapidly, the mean time between failures (MTTF) is surprisingly short; according to recent studies, the MTTF on the majority of Internet backbone paths is 28 days. This leads to a strong requirement for highly dependable networks, servers, and software systems. The challenge is to build interconnected systems, based on available technology, that are inexpensive, accessible, scalable, and dependable. This special section provides insights into a number of these exciting challenges.
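As a rough illustration of why a 28-day MTTF matters, steady-state availability can be estimated with the standard formula A = MTTF / (MTTF + MTTR). The repair time below is a hypothetical figure chosen for the example, not a number from the studies cited above:

```python
def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: fraction of time the system is up."""
    return mttf_hours / (mttf_hours + mttr_hours)

# A backbone path failing every 28 days, with an assumed 4-hour repair time:
mttf = 28 * 24  # 672 hours
print(round(availability(mttf, 4.0), 4))  # 0.9941
```

Even with a fast 4-hour repair, a 28-day MTTF caps availability near "two nines", which motivates the redundancy and dependability techniques discussed in this section.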

    Lustre, Hadoop, Accumulo

    Data processing systems impose multiple views on data as it is processed by the system. These views include spreadsheets, databases, matrices, and graphs. There are a wide variety of technologies that can be used to store and process data through these different steps. The Lustre parallel file system, the Hadoop distributed file system, and the Accumulo database are all designed to address the largest and most challenging data storage problems. There have been many ad-hoc comparisons of these technologies. This paper describes the foundational principles of each technology, provides simple models for assessing their capabilities, and compares the various technologies on a hypothetical common cluster. These comparisons indicate that Lustre provides 2x more storage capacity, is less likely to lose data during 3 simultaneous drive failures, and provides higher bandwidth on general-purpose workloads. Hadoop can provide 4x greater read bandwidth on special-purpose workloads. Accumulo provides 10,000x lower latency on random lookups than either Lustre or Hadoop, but Accumulo's bulk bandwidth is 10x less. Significant recent work has been done to enable mix-and-match solutions that allow Lustre, Hadoop, and Accumulo to be combined in different ways. Comment: 6 pages; accepted to IEEE High Performance Extreme Computing conference, Waltham, MA, 201
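The three-drive-failure claim can be illustrated with a toy combinatorial model. The sketch below is not the paper's model: it hypothetically contrasts RAID-6-style parity groups (which survive any 2 failures within a group, as in a typical Lustre deployment) with 3-way block replication (as in HDFS), computing the probability of data loss conditioned on exactly 3 simultaneous drive failures; the cluster size and block count are made-up parameters:

```python
from math import comb

def p_loss_raid6(n_drives: int, group_size: int) -> float:
    """P(data loss | exactly 3 simultaneous failures) for disjoint RAID-6-style
    groups: loss requires all 3 failed drives to land in the same group."""
    groups = n_drives // group_size
    return groups * comb(group_size, 3) / comb(n_drives, 3)

def p_loss_3rep(n_drives: int, n_blocks: int) -> float:
    """Same conditional loss probability for 3-way replication with random,
    independent placements: a block is lost if the 3 failed drives happen
    to hold all 3 of its replicas."""
    p_block_safe = 1 - 1 / comb(n_drives, 3)
    return 1 - p_block_safe ** n_blocks

# Hypothetical 60-drive cluster: ten 6-drive RAID-6 groups vs. a million replicated blocks
print(p_loss_raid6(60, 6))         # small: only 10*C(6,3)=200 of C(60,3)=34220 triples are fatal
print(p_loss_3rep(60, 1_000_000))  # near 1: a million random placements cover most triples
```

Under these assumptions the parity layout confines fatal failure triples to a small set, whereas with enough replicated blocks almost any triple of failed drives destroys some block, which is consistent with the comparison reported in the abstract.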