
    Generation and sampling of quantum states of light in a silicon chip

    Implementing large instances of quantum algorithms requires the processing of many quantum information carriers in a hardware platform that supports the integration of different components. While established semiconductor fabrication processes can integrate many photonic components, the generation and algorithmic processing of many photons has been a bottleneck in integrated photonics. Here we report the on-chip generation and processing of quantum states of light with up to eight photons in quantum sampling algorithms. Switching between different optical pumping regimes, we implement the Scattershot, Gaussian and standard boson sampling protocols in the same silicon chip, which integrates linear and nonlinear photonic circuitry. We use these results to benchmark a quantum algorithm for calculating molecular vibronic spectra. Our techniques can be readily scaled for the on-chip implementation of specialised quantum algorithms with tens of photons, pointing the way to efficiency advantages over conventional computers.
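
    As an illustration of the sampling task described above, the sketch below computes the probability of one collision-free outcome in standard boson sampling from the permanent of a submatrix of the interferometer unitary. The 4-mode unitary and the choice of input/output modes are illustrative placeholders, not the circuit implemented on the chip.

```python
import itertools
import numpy as np

def permanent(m):
    """Naive permanent via a sum over permutations; adequate for the small
    photon numbers considered here (up to ~8)."""
    n = m.shape[0]
    return sum(
        np.prod([m[i, p[i]] for i in range(n)])
        for p in itertools.permutations(range(n))
    )

def output_probability(u, in_modes, out_modes):
    """Probability of detecting single photons in `out_modes` when single
    photons enter `in_modes` of the linear interferometer `u`
    (collision-free standard boson sampling)."""
    sub = u[np.ix_(out_modes, in_modes)]   # rows = outputs, cols = inputs
    return abs(permanent(sub)) ** 2

# Hypothetical 4-mode interferometer: a random unitary obtained by QR
# decomposition of a random complex matrix (illustrative only).
rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
print(output_probability(q, in_modes=[0, 1], out_modes=[2, 3]))
```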

    E-QED: Electrical Bug Localization During Post-Silicon Validation Enabled by Quick Error Detection and Formal Methods

    During post-silicon validation, manufactured integrated circuits are extensively tested in actual system environments to detect design bugs. Bug localization involves identification of a bug trace (a sequence of inputs that activates and detects the bug) and a hardware design block where the bug is located. Existing bug localization practices during post-silicon validation are mostly manual and ad hoc, and, hence, extremely expensive and time consuming. This is particularly true for subtle electrical bugs caused by unexpected interactions between a design and its electrical state. We present E-QED, a new approach that automatically localizes electrical bugs during post-silicon validation. Our results on the OpenSPARC T2, an open-source 500-million-transistor multicore chip design, demonstrate the effectiveness and practicality of E-QED: starting with a failed post-silicon test, in a few hours (9 hours on average) we can automatically narrow the location of the bug to (the fan-in logic cone of) a handful of candidate flip-flops (18 flip-flops on average for a design with ~1 million flip-flops) and also obtain the corresponding bug trace. The area impact of E-QED is ~2.5%. In contrast, determining this same information might take weeks (or even months) of mostly manual work using traditional approaches.
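
    As a rough illustration of the localization step mentioned above, the sketch below walks a toy driver graph backwards from a candidate flip-flop to collect its fan-in logic cone. The netlist data structure and node names are hypothetical and not E-QED's actual representation.

```python
from collections import deque

def fan_in_cone(netlist, flop, max_depth=None):
    """Collect the fan-in logic cone of a candidate flip-flop by walking the
    driver graph backwards from the flop and gathering every node that can
    influence its value.  `netlist` maps each node to the list of nodes
    driving it (a hypothetical structure, not E-QED's data model)."""
    cone, frontier = set(), deque([(flop, 0)])
    while frontier:
        node, depth = frontier.popleft()
        for driver in netlist.get(node, []):
            if driver not in cone and (max_depth is None or depth < max_depth):
                cone.add(driver)
                frontier.append((driver, depth + 1))
    return cone

# Toy netlist: gates g1 and g2 drive flop_a; in1/in2 drive g1, in3 drives g2.
netlist = {"flop_a": ["g1", "g2"], "g1": ["in1", "in2"], "g2": ["in3"]}
print(fan_in_cone(netlist, "flop_a"))   # {'g1', 'g2', 'in1', 'in2', 'in3'}
```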

    Fault Isolation with ‘X’ Filter for Bogus Signals and Intensive Scan Cell Sequence Validation

    There are some concerns in silicon data collection using Design-For-Test (DFT). Bogus signals, which carry an ‘x’ value in simulation and result from complex logic synthesis and power-up floating states, can often mislead the fault isolation process with invalid failing conditions. In addition, scan cells within the scan chain architecture can show mismatched values between simulation data and silicon data due to a non-ideal mapping file passed down from the design team. Hence, it is important to develop an integrated tool that can filter all bogus signals online and validate the correlation between silicon data and simulation data with a minimum coverage of 90%. Data from an actual Intel 6th-generation microprocessor built on 14 nm process technology (Skylake) is imported to ensure the applicability of this thesis to the current industry market. The necessary tools are developed throughout the thesis: a “Differentiate and Display” feature to ease data analysis, an AND-logic operation to filter bogus signals, and an XOR-logic operation to handle the inverted characteristic of signals. Results show that the developed integrated filter of bogus signals is successful and that the validation tool achieves a minimum coverage of 96.5%. An actual failure analysis case from industry is imported and the results with and without the developed tools are compared. Without the tools, the optical test result from the sample is inconclusive; with the developed tools, a short-circuit defect between the via and the metal line is found. It is concluded that this thesis has achieved all the objectives set.
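
    A minimal sketch of the described comparison flow, assuming a hypothetical scan-dump layout: simulation cells holding an ‘x’ are filtered out before comparison (the AND-style bogus-signal filter), and cells flagged as inverted in the mapping file are flipped with an XOR before correlating silicon data against simulation data.

```python
def compare_scan_cells(sim_bits, silicon_bits, inverted):
    """Correlate silicon scan data against simulation data.
    Cells whose simulation value is 'x' (bogus signals) are filtered out,
    and cells marked as inverted in the mapping are XOR-flipped before the
    comparison.  The data layout here is hypothetical."""
    matches = total = 0
    for sim, sil, inv in zip(sim_bits, silicon_bits, inverted):
        if sim == "x":                       # bogus signal: skip, do not compare
            continue
        expected = int(sim) ^ int(inv)       # XOR handles inverted cells
        matches += (expected == int(sil))
        total += 1
    return matches / total if total else 1.0  # correlation coverage

# Toy scan dump: the 'x' cell is filtered out, the last cell is inverted.
print(compare_scan_cells("10x1", "1010", [0, 0, 0, 1]))   # 1.0
```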

    Automated Debugging Methodology for FPGA-based Systems

    Electronic devices make up a vital part of our lives, from mobiles, laptops, and computers to home automation, to name a few. Modern designs comprise billions of transistors. With this evolution, however, ensuring that devices fulfil the designer’s expectations under variable conditions has become a great challenge, demanding a lot of design time and effort. Whenever an error is encountered, the process is restarted; hence, it is desirable to minimize the number of spins required to achieve an error-free product, as each spin results in a loss of time and effort. Software-based simulation is the main technique for verifying a design before fabrication. However, a few design errors (bugs) are likely to escape the simulation process and subsequently appear during the post-silicon phase. Finding such bugs is time-consuming due to the inherent invisibility of the hardware. Instead of software simulation of the design in the pre-silicon phase, post-silicon techniques permit designers to verify the functionality through physical implementations of the design. The main benefit of this methodology is that the implemented design in the post-silicon phase runs many orders of magnitude faster than its pre-silicon counterpart, allowing designers to validate their design more exhaustively. This thesis presents five main contributions that enable a fast and automated debugging solution for reconfigurable hardware. Throughout the research work, an obstacle avoidance system for robotic vehicles is used as a case study to illustrate how to apply the proposed debugging solution in practical environments. The first contribution presents a debugging system capable of providing a lossless trace of debugging data, which permits a cycle-accurate replay. This methodology ensures capturing permanent as well as intermittent errors in the implemented design. The contribution also describes a solution to enhance hardware observability: it is proposed to utilize processor-configurable concentration networks, employ debug data compression to transmit the data more efficiently, and partially reconfigure the debugging system at run-time to save the time required for design re-compilation and to preserve timing closure. The second contribution presents a solution for communication-centric designs; solutions for designs with multiple clock domains are also discussed. The third contribution presents a priority-based signal selection methodology to identify the signals that are most helpful during the debugging process, together with a connectivity generation tool that can map the identified signals to the debugging system. The fourth contribution presents an automated error detection solution that can capture permanent as well as intermittent errors without continuous monitoring of debugging data; the proposed solution works even for designs without a golden reference. The fifth contribution proposes the use of artificial intelligence for post-silicon debugging: a recurrent neural network is used for debugging when a golden reference is available to train the network, and the idea is extended to designs where a golden reference is not present.
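
    As a rough illustration of the trace-based error detection described above, the sketch below compares a captured cycle-accurate trace against a golden reference and reports the first divergence. The trace format and signal names (loosely modelled on the obstacle-avoidance use case) are hypothetical, not the thesis’ actual trace infrastructure.

```python
def find_first_divergence(golden_trace, captured_trace):
    """Cycle-accurate comparison of a captured debug trace against a golden
    reference: return the first (cycle, signal) at which the implemented
    design diverges, or None if the traces agree."""
    for cycle, (gold, seen) in enumerate(zip(golden_trace, captured_trace)):
        for signal, value in gold.items():
            if seen.get(signal) != value:
                return cycle, signal
    return None

# Hypothetical per-cycle signal dumps for an obstacle-avoidance design.
golden   = [{"dist": 12, "turn": 0}, {"dist": 11, "turn": 0}, {"dist": 10, "turn": 1}]
captured = [{"dist": 12, "turn": 0}, {"dist": 11, "turn": 0}, {"dist": 10, "turn": 0}]
print(find_first_divergence(golden, captured))   # (2, 'turn')
```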

    Analysis and optimization of a debug post-silicon hardware architecture

    The goal of this thesis is to analyze the post-silicon validation hardware infrastructure implemented on multicore systems, taking as an example the Esperanto Technologies SoC, which has thousands of RISC-V processors and targets specific software applications. Based on the conclusions of the analysis, the project then proposes a new post-silicon debug architecture that can fit on any System-on-Chip regardless of its target application or complexity and that improves on the options available on the market for multicore systems.

    Very Large Scale Integration Cell Based Path Extractor For Physical To Layout Mapping In Fault Isolation Work

    Post-silicon debug and diagnosis challenge the technological advancement of physical-to-layout mapping capabilities. One area that requires such innovation is fault isolation work in the failure analysis of semiconductor devices at the post-silicon stage. Since fault isolation work begins at the Register Transfer Level (RTL) to form a suspected boundary consisting of multiple logic stages from one end to the other, a layout-to-schematic mapping automation tool helps to identify faults in the design within a given boundary. Therefore, the development of a path extractor program capable of extracting all possible paths between these start and end signals can save engineers time in tracing the components involved along a fault line. This feature is significant in Electronic Design Automation (EDA), as it provides net-name sequences stored in a database of mapper files. These mapper files can be used in layout design debug, as the net sequences represent schematic signals. Retrieving all possible signals involved within a suspected boundary is a well-known computational search problem; the proposed path extractor program therefore incorporates a depth-first search algorithm adapted to the specifications of a cell-based design. The objectives achieved in this research are shown to be reliable, with path extraction results remaining consistent even under search-depth manipulation. Performance differs by an average of 12.6% (iteration count) when the maximum allowable search depth is kept constant. Paths of net sequences were consistent throughout the verification of the path extractor program. This development and study of the path extraction method carries significance in the areas of EDA and debug and diagnosis work.
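
    A minimal sketch of the depth-first path extraction idea described above, assuming a hypothetical net-to-driven-nets connectivity map; the real tool works on cell-based design data and writes the resulting net sequences to mapper files.

```python
def extract_paths(netlist, start, end, max_depth):
    """Depth-first enumeration of all net-name sequences between a start and
    an end signal, pruned by a maximum allowable search depth.  `netlist`
    maps each net to the nets it drives (hypothetical connectivity data)."""
    paths, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == end:
            paths.append(path)
            continue
        if len(path) > max_depth:
            continue                      # depth limit reached: prune branch
        for nxt in netlist.get(node, []):
            if nxt not in path:           # avoid revisiting a net (cycles)
                stack.append((nxt, path + [nxt]))
    return paths

# Toy suspected boundary: two possible paths from net A to net D.
netlist = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
print(extract_paths(netlist, "A", "D", max_depth=4))
```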

    Machine learning techniques and space mapping approaches to enhance signal and power integrity in high-speed links and power delivery networks

    Enhancing signal integrity (SI) and reliability in modern computer platforms heavily depends on the post-silicon validation of high-speed input/output (HSIO) links, which implies a physical layer (PHY) tuning process where equalization techniques are employed. On the other hand, the interaction between SI and power delivery networks (PDN) is becoming crucial in the computer industry, imposing the need for computationally expensive models to also ensure power integrity (PI). In this paper, surrogate-based optimization (SBO) methods, including space mapping (SM), are applied to efficiently tune equalizers in HSIO links using lab measurements on industrial post-silicon validation platforms, speeding up the PHY tuning process while enhancing eye diagram margins. Two HSIO interfaces illustrate the proposed SBO/SM techniques: USB3 Gen 1 and SATA Gen 3. Additionally, a methodology based on parameter extraction is described to develop fast PDN lumped models for low-cost SI-PI co-simulation; a dual data rate (DDR) memory sub-system illustrates this methodology. Finally, we describe a surrogate modeling methodology for efficient PDN optimization, comparing several machine learning techniques; a PDN voltage regulator with dual power rail remote sensing illustrates this last methodology.
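
    As a rough illustration of the surrogate-based tuning loop described above, the sketch below fits a cheap quadratic surrogate to a handful of hypothetical eye-margin measurements and iteratively re-measures at the surrogate’s predicted optimum. The measure_eye_margin function is a stand-in for a lab measurement on a validation platform, and this generic SBO loop is not the paper’s space-mapping formulation.

```python
import numpy as np

def measure_eye_margin(tap):
    """Hypothetical stand-in for a lab measurement of eye-diagram margin as a
    function of a single equalizer tap setting (noisy, slow in practice)."""
    return -(tap - 0.62) ** 2 + 0.30 + 0.005 * np.random.randn()

def sbo_tune(n_iter=8):
    """Minimal surrogate-based optimization loop: fit a cheap quadratic
    surrogate to the measurements gathered so far, move to the setting the
    surrogate predicts is best, and re-measure there."""
    taps = list(np.linspace(0.0, 1.0, 3))          # initial coarse sweep
    margins = [measure_eye_margin(t) for t in taps]
    for _ in range(n_iter):
        coeffs = np.polyfit(taps, margins, 2)      # quadratic surrogate
        candidates = np.linspace(0.0, 1.0, 101)
        best = candidates[np.argmax(np.polyval(coeffs, candidates))]
        taps.append(best)
        margins.append(measure_eye_margin(best))
    return taps[int(np.argmax(margins))]           # best setting found

print(sbo_tune())
```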