
    Automated silicon debug data analysis techniques for a hardware data acquisition environment

    Abstract—Silicon debug poses a unique challenge to the engineer because of the limited access to internal signals of the chip. Embedded hardware such as trace buffers helps overcome this challenge by acquiring data in real time. However, trace buffers only provide access to a limited subset of pre-selected signals. To debug effectively, it is essential to configure the trace buffer to trace the relevant signals selected from the pre-defined set. This can be a labor-intensive and time-consuming process. This paper introduces a set of techniques to automate the configuration process for trace buffer-based hardware. First, the proposed approach utilizes UNSAT cores to identify signals that can provide valuable information for localizing the error. Next, it finds alternatives for signals that are not part of the traceable set so that their values can be implied. Integrating the proposed techniques with a debugging methodology, experiments show that the methodology can reduce the set of potential suspects by 30% while tracing as few as 8% of registers, demonstrating the effectiveness of the proposed procedures. Index Terms—Silicon debug, post-silicon diagnosis, data acquisition setup
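The UNSAT-core idea behind the signal selection above can be illustrated with a toy deletion-based core extraction: given a set of constraints known to be jointly unsatisfiable, repeatedly drop any constraint whose removal keeps the set unsatisfiable; the signals mentioned in the remaining core are the ones worth tracing. The constraint encoding and the `is_unsat` check below are stand-ins for the paper's real circuit constraints and SAT-solver query.

```python
# Toy deletion-based UNSAT-core extraction. In the paper's setting each
# constraint would encode circuit behavior over candidate trace signals;
# here a constraint is just a labeled set of signal assignments, and
# `is_unsat` is a placeholder for a real SAT-solver call.

def is_unsat(constraints):
    """Declare the set 'unsatisfiable' if two constraints force the
    same signal to different values -- a stand-in for a SAT query."""
    forced = {}
    for _, assignment in constraints:
        for sig, val in assignment.items():
            if sig in forced and forced[sig] != val:
                return True
            forced[sig] = val
    return False

def unsat_core(constraints):
    """Deletion-based minimization: drop any constraint not needed
    to preserve unsatisfiability."""
    core = list(constraints)
    for c in list(core):
        trial = [x for x in core if x is not c]
        if is_unsat(trial):
            core = trial
    return core

# Hypothetical constraints over registers r1..r3; c1 and c3 conflict on r1.
constraints = [
    ("c1", {"r1": 0, "r2": 1}),
    ("c2", {"r2": 1, "r3": 0}),
    ("c3", {"r1": 1}),
]
core = unsat_core(constraints)
# Signals mentioned in the core are the candidates worth tracing.
signals_to_trace = sorted({s for _, a in core for s in a})
print([name for name, _ in core], signals_to_trace)
```

Here the core shrinks to the two conflicting constraints, so only the registers they mention would be configured into the trace buffer.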

    Automated Debugging Methodology for FPGA-based Systems

    Electronic devices make up a vital part of our lives, from mobiles and laptops to computers and home-automation systems, to name a few. Modern designs comprise billions of transistors. With this evolution, however, ensuring that devices fulfill the designer's expectations under variable conditions has become a great challenge, requiring considerable design time and effort. Whenever an error is encountered, the process is restarted. Hence, it is desirable to minimize the number of spins required to achieve an error-free product, as each spin results in lost time and effort. Software-based simulation is the main technique for verifying a design before fabrication. However, a few design errors (bugs) are likely to escape the simulation process and subsequently appear during the post-silicon phase. Finding such bugs is time-consuming due to the inherently limited visibility into the hardware. Instead of software simulation of the design in the pre-silicon phase, post-silicon techniques permit designers to verify functionality through physical implementations of the design. The main benefit of this methodology is that the implemented design in the post-silicon phase runs many orders of magnitude faster than its pre-silicon counterpart, allowing designers to validate their design more exhaustively. This thesis presents five main contributions toward a fast and automated debugging solution for reconfigurable hardware. During the research work, we used an obstacle-avoidance system for robotic vehicles as a use case to illustrate how to apply the proposed debugging solution in practical environments. The first contribution presents a debugging system capable of providing a lossless trace of debugging data, which permits a cycle-accurate replay. This methodology ensures that both permanent and intermittent errors in the implemented design are captured.
The contribution also describes a solution to enhance hardware observability. It is proposed to utilize processor-configurable concentration networks, to employ debug-data compression to transmit the data more efficiently, and to partially reconfigure the debugging system at run-time, saving the time required for design re-compilation and preserving timing closure. The second contribution presents a solution for communication-centric designs; solutions for designs with multiple clock domains are also discussed. The third contribution presents a priority-based signal-selection methodology to identify the signals that can be most helpful during the debugging process. A connectivity-generation tool is also presented, which can map the identified signals to the debugging system. The fourth contribution presents an automated error-detection solution that can help capture permanent as well as intermittent errors without continuous monitoring of debugging data. The proposed solution works even for designs without a golden reference. The fifth contribution proposes the use of artificial intelligence for post-silicon debugging. We present a novel idea of using a recurrent neural network for debugging when a golden reference is available for training the network, and extend the idea to designs where a golden reference is not present.
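A priority-based signal selection of the kind described in the third contribution can be sketched as a simple budgeted ranking: each candidate signal carries a debug-value score, and the highest-priority signals are chosen until the trace buffer is full. The thesis derives priorities from design connectivity; the signal names and scores below are invented for illustration.

```python
# Minimal sketch of priority-based signal selection under a fixed
# trace-buffer width. Scores would come from a connectivity analysis
# in the thesis; here they are made-up values.

def select_signals(scores, buffer_width):
    """Pick up to `buffer_width` signals with the highest priority."""
    ranked = sorted(scores, key=lambda s: scores[s], reverse=True)
    return ranked[:buffer_width]

# Hypothetical priority scores for six candidate signals.
scores = {"fsm_state": 9.1, "fifo_full": 7.4, "ack": 2.0,
          "req": 3.5, "err_flag": 8.8, "cnt": 1.2}
selected = select_signals(scores, buffer_width=3)
print(selected)  # the three highest-scoring signals
```

A connectivity-generation tool, as the abstract notes, would then map the selected signals onto the physical debugging infrastructure.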

    Quality validation of PCB-mounted sensors to prevent counterfeit components

    Counterfeiting of electronic components is a growing problem, leading to lost revenue for companies as well as unreliable products delivered to customers. There is no general way for small hardware companies to deal with the problem. Addressing counterfeiting is usually not a high priority for these companies, though it is still imperative for them to deliver products with guaranteed functionality. This thesis, written at Minut AB in collaboration with Lund University, concerns the problem of counterfeit components in the tech industry. The approach is to test components during product manufacturing. A test system validating the components' performance is developed to guarantee the quality and functionality of the final product. The results show that the developed test system is capable of finding components with deviating performance. A malfunctioning counterfeit component can be discovered by the test system; counterfeiting, however, is a complex problem that is difficult to assess at the individual component level. Introducing test and production statistics can point to failing component batches. These statistics open the door to further investigation in the search for counterfeit components.

    Prevent counterfeit electronics

    Today's tech industry is struggling with a growing problem: the counterfeiting of components. In the semiconductor industry, an estimated $3 billion worth of components are counterfeit worldwide. But lost revenue is not all. Counterfeit components may compromise the performance and safety of electronics. If a component is not what it claims to be, the result can be unreliable and dangerous systems. The problem with counterfeit components is well known in the tech industry, but the solution is not obvious. Finding counterfeit components is a difficult task, as they appear in many different shapes and forms.
Old components sold as new, components relabeled to a higher grade, or components scrapped by the original manufacturer are just some examples. Today, large corporations and semiconductor manufacturers work with authentication and standardization to fight the problem. That means extra work, which takes time and costs money that not every company can spend, especially now that a multitude of small hardware companies are making their way into the market. They do not have the resources to justify additional development of anti-counterfeit techniques, and no predefined methods or standards exist for handling the problem. With the increasing number of hardware start-ups in the industry comes a growing number of smaller factories and component distributors. This results in larger and more complex supply chains, which means a greater risk of counterfeiting. While it is hard to justify implementing additional anti-counterfeit methods at a new company, it is still essential to deliver working products without malfunctioning or counterfeit components. Minut is one of those hardware start-ups, making a connected home-monitoring product called Point. They want to ensure that every shipped unit works as intended and contains no counterfeit components. How is the problem handled? Simply, by testing. Point uses a variety of sensors to sense the ambient environment. By developing a custom test system that validates the performance and quality of all sensors on every produced Point, it is possible to tell whether a unit works and is ready to be shipped to the customer. All tests are designed to validate that the output of the sensors conforms to their specifications. If a sensor performs within the specified accuracy, it is assumed to be working and not counterfeit. The power consumption of every Point is also measured to ensure that no component consumes more current than expected.
The combination of all these tests ensures the functionality of every unit. The test system makes it possible to validate the performance of all the critical components on Point. Test results from the system show that malfunctioning components can be distinguished during production. This ensures that no defective units leave the factory, fulfilling one purpose of the test system. The other purpose is to find counterfeit components, which is a more complex task. The work points out that it is hard to make a binary determination of whether a component is counterfeit. However, with data logging and statistics, the solution is one step closer. By storing the results from all tested units, Minut can gather statistics on the quality of every single component. If the statistics show a high failure rate or poor performance within specific component batches, this could indicate counterfeit components. With these statistics, a company can put pressure on the supply chain and component manufacturers. The work shows that it is possible for small hardware companies to test and quality-validate their products. The test system built here also introduces simple ways to keep statistics on testing and production. With every produced unit tested, the risk of delivering a product containing a counterfeit or malfunctioning component is reduced. With more companies testing their products, it becomes harder for counterfeit components to reach consumers, and the overall safety of delivered products increases.
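The per-unit spec check and the batch statistics described above can be sketched as follows. The spec limits, sensor readings, and the 20% batch-failure threshold are illustrative values only, not Minut's actual test limits.

```python
# Sketch of a per-unit sensor test plus batch failure statistics.
# Spec windows and the flagging threshold are invented for illustration.

SPECS = {  # sensor -> (min, max) acceptable reading
    "temperature_C": (19.0, 21.0),
    "humidity_pct": (38.0, 42.0),
}

def unit_passes(readings):
    """A unit passes if every sensor reading is within its spec window."""
    return all(lo <= readings[s] <= hi for s, (lo, hi) in SPECS.items())

def suspicious_batches(units, threshold=0.20):
    """Flag component batches whose failure rate exceeds `threshold`."""
    stats = {}
    for batch, readings in units:
        passed, total = stats.get(batch, (0, 0))
        stats[batch] = (passed + unit_passes(readings), total + 1)
    return sorted(b for b, (p, t) in stats.items() if 1 - p / t > threshold)

units = [
    ("batch_A", {"temperature_C": 20.1, "humidity_pct": 40.0}),
    ("batch_A", {"temperature_C": 19.8, "humidity_pct": 39.5}),
    ("batch_B", {"temperature_C": 25.3, "humidity_pct": 40.1}),  # out of spec
    ("batch_B", {"temperature_C": 20.0, "humidity_pct": 55.0}),  # out of spec
]
print(suspicious_batches(units))
```

A flagged batch is not proof of counterfeiting, matching the thesis's point that the statistics only motivate further investigation up the supply chain.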

    Advances in Architectures and Tools for FPGAs and their Impact on the Design of Complex Systems for Particle Physics

    The continual improvement of semiconductor technology has provided rapid advancements in device frequency and density. Designers of electronics systems for high-energy physics (HEP) have benefited from these advancements, transitioning many designs from fixed-function ASICs to more flexible FPGA-based platforms. Today's FPGA devices provide significantly more resources than those available during the initial Large Hadron Collider design phase. To take advantage of the capabilities of future FPGAs in the next generation of HEP experiments, designers must not only anticipate further improvements in FPGA hardware, but must also adopt design tools and methodologies that can scale along with that hardware. In this paper, we outline the major trends in FPGA hardware, describe the design challenges these trends will present to developers of HEP electronics, and discuss a range of techniques that can be adopted to overcome these challenges.

    Machine Learning for Microprocessor Performance Bug Localization

    The validation process for microprocessors is a very complex task that consumes substantial engineering time during the design process. Bugs that degrade overall system performance, without affecting its functional correctness, are particularly difficult to debug given the lack of a golden reference for bug-free performance. This work introduces two automated performance bug localization methodologies based on machine learning that aim to aid the debugging process. Our results show that, for the evaluated microprocessor core performance bugs whose average IPC impact is greater than 1%, our best-performing technique is able to localize the exact microarchitectural unit of the bug ~77% of the time, while achieving a top-3 unit accuracy (out of 11 possible locations) of over 90% for bugs with the same average IPC impact. The proposed system in our simulation setup requires only a few seconds to perform a bug-location inference, which reduces debugging time.
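The top-1 and top-3 localization accuracies quoted above can be computed as below. The score rows stand for a model's per-unit confidence over the candidate microarchitectural units; the score vectors and true labels are invented for illustration (and use 4 units rather than the paper's 11).

```python
# Compute top-k localization accuracy: the fraction of bugs whose true
# microarchitectural unit is among the model's k highest-scoring units.

def topk_accuracy(score_rows, labels, k):
    """Fraction of cases whose true unit is within the top-k scores."""
    hits = 0
    for scores, true_unit in zip(score_rows, labels):
        ranked = sorted(range(len(scores)), key=scores.__getitem__,
                        reverse=True)
        hits += true_unit in ranked[:k]
    return hits / len(labels)

# Three hypothetical bugs, scores over 4 candidate units.
score_rows = [[0.1, 0.7, 0.1, 0.1],   # true unit 1 -> top-1 hit
              [0.4, 0.3, 0.2, 0.1],   # true unit 2 -> only a top-3 hit
              [0.2, 0.2, 0.1, 0.5]]   # true unit 3 -> top-1 hit
labels = [1, 2, 3]
print(topk_accuracy(score_rows, labels, 1),
      topk_accuracy(score_rows, labels, 3))
```

This is the standard way such "exact unit" versus "top-3 unit" numbers are derived from a classifier's score vectors.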

    Use of Field Programmable Gate Array Technology in Future Space Avionics

    Fulfilling NASA's new vision for space exploration requires the development of sustainable, flexible and fault-tolerant spacecraft control systems. The traditional development paradigm consists of the purchase or fabrication of hardware boards with fixed processor and/or Digital Signal Processing (DSP) components interconnected via a standardized bus system, followed by the purchase and/or development of software. This paradigm has several disadvantages for the development of systems to support NASA's new vision. Building a system to be fault tolerant increases the complexity and decreases the performance of the included software. Standard bus design and conventional implementation produce natural bottlenecks. Configuring hardware components in systems containing common processors and DSPs is difficult initially and expensive or impossible to change later. The existence of Hardware Description Languages (HDLs), the recent increases in the performance, density and radiation tolerance of Field Programmable Gate Arrays (FPGAs), and Intellectual Property (IP) cores provide the technology for reprogrammable Systems on a Chip (SoC). This technology supports a paradigm better suited to NASA's vision. Hardware and software production are merged for more effective development; both can evolve together over time. Designers incorporating this technology into future avionics can benefit from its flexibility. Systems can be designed with improved fault isolation and tolerance using hardware instead of software, and these designs can be protected from obsolescence problems, where maintenance is compromised by component and vendor availability. To investigate the flexibility of this technology, the cores of the Central Processing Unit and Input/Output Processor of the Space Shuttle AP101S Computer were prototyped in Verilog HDL and synthesized into an Altera Stratix FPGA.

    Evolvable Smartphone-Based Point-of-Care Systems For In-Vitro Diagnostics

    Recent developments in the life-science omics disciplines, together with advances in micro- and nanoscale technologies, offer unprecedented opportunities to tackle some of the major healthcare challenges of our time. Lab-on-Chip technologies coupled with smart devices, in particular, constitute key enablers for the decentralization of many in-vitro medical diagnostics applications to the point of care, supporting the advent of preventive and personalized medicine. Although the technical feasibility and potential of Lab-on-Chip/smart-device systems have been repeatedly demonstrated, direct-to-consumer applications remain scarce. This thesis addresses this limitation. System evolvability is a key enabler of the adoption and long-lasting success of next-generation point-of-care systems, favoring the integration of new technologies, streamlining the re-engineering effort for system upgrades, and limiting the risk of premature system obsolescence. Among possible implementation strategies, platform-based design stands out as a particularly suitable entry point. One necessary condition is for change-absorbing and change-enabling mechanisms to be incorporated into the platform architecture at initial design time. Important considerations arise as to where in Lab-on-Chip/smart-device platforms these mechanisms can be integrated, and how to implement them. Our investigation revolves around the silicon-nanowire biological field-effect transistor, a promising biosensing technology for the detection of biological analytes at ultra-low concentrations. We discuss extensively the sensitivity and instrumentation requirements set by the technology before presenting the design and implementation of an evolvable smartphone-based platform capable of interfacing with lab-on-chips embedding such sensors.
We elaborate on the implementation of various architectural patterns throughout the platform and show how these facilitated the evolution of the system towards one accommodating electrochemical sensing. Model-based development was undertaken throughout the engineering process, and a formal SysML system model fed our evolvability assessment process. We introduce, in particular, a model-based methodology enabling the evaluation of modular scalability: the ability of a system to scale the current value of one of its specifications by successively re-engineering targeted system modules. The research work presented in this thesis provides a roadmap for the development of evolvable point-of-care systems, including those targeting direct-to-consumer applications. It extends from the early identification of anticipated change to the assessment of a system's ability to accommodate these changes. Our research should thus interest industry players eager not only to disrupt, but also to last in a shifting socio-technical paradigm.

    ELECTRON-BEAM DEBUG AND FAILURE ANALYSIS OF INTEGRATED CIRCUITS

    A current research project at the IMAG/TIM3 Laboratory aims at an integrated test system combining the use of the Scanning Electron Microscope (SEM), used in voltage-contrast mode, with a new high-level approach to fault location in complex VLSI circuits, in order to achieve a completely automated diagnosis process. Two research themes arise from this project: prototype validation of known circuits, for which CAD information is available, and failure analysis of unknown circuits, which are compared to reference circuits. For prototype validation, a knowledge-based approach to fault location is used. For failure analysis, automatic image comparison based on pattern recognition techniques is performed. The purpose of this paper is to present these two methodologies, focusing on the SEM-based data acquisition process.
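The reference-based image comparison can be sketched as a pixel-wise diff: a voltage-contrast image of the failing circuit is compared against the image of a known-good reference, and positions differing by more than a tolerance are reported. A real system would first align the images and apply proper pattern recognition; the 4x4 "images" and grey levels below are invented.

```python
# Minimal sketch of reference-vs-failing SEM image comparison.
# Real voltage-contrast analysis would align images and use pattern
# recognition; here we simply flag pixels that differ beyond `tol`.

def diff_regions(reference, failing, tol=10):
    """Return (row, col) positions where the images disagree beyond tol."""
    return [(r, c)
            for r, (ref_row, fail_row) in enumerate(zip(reference, failing))
            for c, (a, b) in enumerate(zip(ref_row, fail_row))
            if abs(a - b) > tol]

reference = [[100, 100, 30, 30],
             [100, 100, 30, 30],
             [200, 200, 30, 30],
             [200, 200, 30, 30]]
failing   = [[100, 100, 30, 30],
             [100, 100, 30, 30],
             [ 40,  45, 30, 30],   # node stuck at the wrong voltage level
             [200, 200, 30, 30]]
print(diff_regions(reference, failing))
```

The flagged positions would then be mapped back to circuit nodes to localize the fault.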

    3D Execution Monitor (3D-EM): Using 3D Circuits to Detect Hardware Malicious Inclusions in General Purpose Processors

    Best PhD Paper, Proceedings of the International Conference on Information Warfare and Security (ICIW), Washington, DC, USA, March 2011, Pages 289-298.