
    Testing a Quantum Computer

    The problem of quantum test is formally addressed. The presented method adapts classical test generation and test-set reduction methods, known from standard binary and analog circuits, to quantum circuits. QuFault, the authors' software package, generates test plans for arbitrary quantum circuits using the very efficient simulator QuIDDPro [1]. The quantum fault table is introduced and mathematically formalized, and the test generation method is explained. Comment: 15 pages, 17 equations, 27 tables, 8 figures
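    The fault table and test-plan machinery are only described at a high level above; as a rough illustration of the underlying idea (not QuFault's actual interface), the sketch below builds a fault table from a user-supplied "distinguishes" oracle and reduces it with a greedy cover. All names are hypothetical; in practice the oracle would come from simulating the fault-free and faulty circuits, e.g. with QuIDDPro.

        # Hypothetical sketch of fault-table construction and greedy test-set reduction.
        def build_fault_table(test_inputs, faults, distinguishes):
            """Rows = test inputs, columns = faults; an entry is True when the input
            separates the faulty circuit from the fault-free one."""
            return {t: {f: distinguishes(t, f) for f in faults} for t in test_inputs}

        def greedy_test_set(table, faults):
            """Greedy set cover: repeatedly pick the input detecting the most
            still-uncovered faults until every detectable fault is covered."""
            uncovered, chosen = set(faults), []
            while uncovered:
                detected = {t: {f for f, hit in table[t].items() if hit} & uncovered
                            for t in table}
                best = max(detected, key=lambda t: len(detected[t]))
                if not detected[best]:
                    break  # remaining faults are undetectable by the candidate inputs
                chosen.append(best)
                uncovered -= detected[best]
            return chosen, uncovered

        # Toy usage with two candidate inputs and two modeled faults.
        table = build_fault_table(["t1", "t2"], ["f1", "f2"],
                                  lambda t, f: (t, f) != ("t1", "f2"))
        print(greedy_test_set(table, ["f1", "f2"]))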

    GALA-n: Generic Architecture of Layout-Aware n-Bit Quantum Operators for Cost-Effective Realization on IBM Quantum Computers

    A generic architecture of n-bit quantum operators is proposed for cost-effective transpilation, based on the layouts and the number n of neighboring physical qubits for IBM quantum computers, where n >= 3. The proposed architecture is termed the "GALA-n quantum operator". The GALA-n quantum operator is designed using the visual approach of the Bloch sphere, from the visual representations of the rotational quantum operations for IBM native gates (square root of X, X, RZ, and CNOT). In this paper, we also propose a new formula for the quantum cost, which counts the total number of native gates, the number of SWAP gates, and the depth of the final transpiled quantum circuit. This formula is termed the "transpilation quantum cost". After transpilation, our proposed GALA-n quantum operator always has a lower transpilation quantum cost than conventional n-bit quantum operators, which are mainly constructed from costly n-bit Toffoli gates. Comment: 27 pages, 22 figures
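    A rough, hypothetical reading of the "transpilation quantum cost" described above (native-gate count plus SWAP count plus depth of the final transpiled circuit), sketched with Qiskit against the IBM native gate set named in the abstract; the paper's exact formula and weighting may differ.

        # Hypothetical sketch: score a circuit after transpilation to IBM native gates.
        from qiskit import QuantumCircuit, transpile

        def transpilation_cost(circuit, coupling_map):
            transpiled = transpile(circuit,
                                   basis_gates=["sx", "x", "rz", "cx"],  # sqrt(X), X, RZ, CNOT
                                   coupling_map=coupling_map,
                                   optimization_level=1)
            ops = transpiled.count_ops()                  # dict: gate name -> count
            native = sum(n for name, n in ops.items() if name != "barrier")
            swaps = ops.get("swap", 0)                    # SWAPs still present after routing, if any
            return native + swaps + transpiled.depth(), transpiled

        # Example: a 3-qubit Toffoli transpiled onto a linear qubit layout.
        toffoli = QuantumCircuit(3)
        toffoli.ccx(0, 1, 2)
        cost, _ = transpilation_cost(toffoli, coupling_map=[[0, 1], [1, 2]])
        print(cost)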

    Particle filtering in high-dimensional chaotic systems

    We present an efficient particle filtering algorithm for multiscale systems, adapted to simple atmospheric dynamics models which are inherently chaotic. Particle filters represent the posterior conditional distribution of the state variables by a collection of particles, which evolves and adapts recursively as new information becomes available. The difference between the estimated state and the true state of the system constitutes the error in specifying or forecasting the state, which is amplified in chaotic systems that have a number of positive Lyapunov exponents. The purpose of the present paper is to show that the homogenization method developed in Imkeller et al. (2011), which is applicable to high-dimensional multiscale filtering problems, along with importance sampling and control methods, can be used as a basic and flexible tool for the construction of the proposal density inherent in particle filtering. Finally, we apply the general homogenized particle filtering algorithm developed here to the Lorenz'96 atmospheric model that mimics mid-latitude atmospheric dynamics with microscopic convective processes. Comment: 28 pages, 12 figures
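    The abstract outlines the general particle-filter recipe (propagate particles through the chaotic dynamics, reweight by the likelihood of new observations, resample). As a minimal illustration, the sketch below runs a plain bootstrap particle filter on the Lorenz'96 model with a fully observed state and Gaussian noise; it does not reproduce the paper's homogenization-based proposal, and all parameter values are illustrative.

        # Minimal bootstrap particle filter on Lorenz'96 (plain proposal only).
        import numpy as np

        def lorenz96_step(x, dt=0.01, F=8.0):
            """One Euler step of dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
            dx = (np.roll(x, -1, axis=-1) - np.roll(x, 2, axis=-1)) * np.roll(x, 1, axis=-1) - x + F
            return x + dt * dx

        def particle_filter(observations, n_particles=500, dim=40, obs_std=1.0):
            rng = np.random.default_rng(0)
            particles = rng.normal(0.0, 1.0, size=(n_particles, dim))
            estimates = []
            for y in observations:                              # y: noisy full-state observation
                particles = lorenz96_step(particles) + rng.normal(0, 0.1, particles.shape)
                log_w = -0.5 * np.sum((y - particles) ** 2, axis=1) / obs_std ** 2
                w = np.exp(log_w - log_w.max()); w /= w.sum()   # normalized importance weights
                idx = rng.choice(n_particles, size=n_particles, p=w)  # multinomial resampling
                particles = particles[idx]
                estimates.append(particles.mean(axis=0))
            return np.array(estimates)

        # Tiny synthetic demo: filter noisy observations of a short truth run.
        rng = np.random.default_rng(1)
        truth = rng.normal(0, 1, 40)
        obs = []
        for _ in range(50):
            truth = lorenz96_step(truth)
            obs.append(truth + rng.normal(0, 1.0, truth.shape))
        est = particle_filter(obs)
        print(np.mean((est[-1] - truth) ** 2))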

    A segmented total energy detector (sTED) for (n, γ) cross section measurements at n_TOF EAR2

    This work was supported in part by the I+D+i grant PGC2018-096717-B-C21, funded by MCIN/AEI/10.13039/501100011033, and by the European Commission H2020 Framework Programme project SANDA (Grant agreement ID: 847552). The neutron time-of-flight facility n_TOF is characterised by its high instantaneous neutron intensity, high resolution and broad neutron energy spectrum, specially conceived for neutron-induced reaction cross section measurements. Two time-of-flight (TOF) experimental areas are available at the facility: experimental area 1 (EAR1), located at the end of the 185 m horizontal flight path from the spallation target, and experimental area 2 (EAR2), placed 20 m from the target in the vertical direction. The neutron fluence in EAR2 is approximately 300 times more intense than in EAR1 in the relevant time-of-flight window. EAR2 was designed to carry out challenging cross-section measurements with low-mass samples (approximately 1 mg), reactions with small cross sections, and/or highly radioactive samples. The high instantaneous fluence of EAR2 results in high counting rates that challenge the existing capture detection systems. The sTED detector has therefore been designed to mitigate these effects. In 2021, a dedicated campaign was carried out to validate the performance of the detector up to at least 300 keV neutron energy. Since this campaign, the detector has been used to perform various capture cross section measurements at n_TOF EAR2.
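    As a side note on the time-of-flight technique itself: for the two flight paths quoted above (185 m for EAR1, 20 m for EAR2), the neutron energy follows from the flight time via the non-relativistic relation E = (1/2) m_n (L/t)^2. The snippet below is only an illustrative back-of-the-envelope conversion, not the facility's analysis code.

        # Non-relativistic time-of-flight to neutron-energy conversion (illustrative only).
        M_N = 939.565e6      # neutron rest mass in eV/c^2
        C   = 299_792_458.0  # speed of light in m/s

        def neutron_energy_eV(flight_path_m, tof_s):
            v = flight_path_m / tof_s
            return 0.5 * M_N * (v / C) ** 2   # energy in eV, valid while v << c

        # Example: a neutron reaching EAR2 (20 m flight path) after 100 microseconds.
        print(f"{neutron_energy_eV(20.0, 100e-6):.1f} eV")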

    Fault Models for Quantum Mechanical Switching Networks

    The difference between faults and errors is that, unlike faults, errors can be corrected using control codes. In classical test and verification one develops a test set separating a correct circuit from a circuit containing any considered fault. Classical faults are modelled at the logical level by fault models that act on classical states. The stuck fault model, thought of as a lead connected to a power rail or to ground, is most typically considered. A classical test set complete for the stuck fault model propagates both binary basis states, 0 and 1, through all nodes in a network and is known to detect many physical faults. Such a test set allows all circuit nodes to be completely tested and verifies the function of many gates. It is natural to ask whether one may adapt any of the known classical methods to test quantum circuits. Of course, classical fault models do not capture all the logical failures found in quantum circuits. The first obstacle faced when using methods from classical test is developing a set of realistic quantum-logical fault models. Developing fault models to abstract the test problem away from the device level motivated our study. Several results are established. First, we describe typical modes of failure present in the physical design of quantum circuits. From this we develop fault models for quantum binary circuits that enable testing at the logical level. The application of these fault models is shown by adapting the classical test-set generation technique of constructing a fault table to generate quantum test sets. A test set developed using this method is shown to detect each of the considered faults. Comment: (almost) Forgotten rewrite from 200
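    The classical completeness condition mentioned above (every node driven to both 0 and 1 so that a stuck fault cannot hide) is easy to state concretely. The toy sketch below checks that condition for a small, made-up gate-level netlist; the circuit and function names are hypothetical, and full stuck-at detection would additionally require propagating the faulty value to an observable output.

        # Toy check that a test set drives every internal node to both 0 and 1.
        def circuit_nodes(a, b, c):
            """Small hypothetical netlist: returns the value of every internal node."""
            n1 = a & b
            n2 = n1 ^ c
            n3 = ~n2 & 1
            return {"n1": n1, "n2": n2, "n3": n3}

        def covers_stuck_values(test_set):
            """True if every node is driven to both 0 and 1 by some test vector."""
            seen = {}
            for vec in test_set:
                for node, val in circuit_nodes(*vec).items():
                    seen.setdefault(node, set()).add(val)
            return all(values == {0, 1} for values in seen.values())

        print(covers_stuck_values([(0, 0, 0), (1, 1, 0), (1, 1, 1)]))   # True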

    Effectiveness Assessment of the Search-Based Statistical Structural Testing

    Search-based statistical structural testing (SBSST) is a promising technique that uses automated search to construct input distributions for statistical structural testing. It has been shown that a simple search algorithm, for example a hill climber, is able to optimize an input distribution. However, due to the noisy fitness estimation of the minimum triggering probability among all cover elements (Tri-Low-Bound), the existing approach does not achieve satisfactory efficiency: constructing input distributions that satisfy the Tri-Low-Bound criterion requires extensive computation time. Tri-Low-Bound is considered a strong criterion and has been demonstrated to sustain a high fault-detecting ability. This article tries to answer the following question: if we use a relaxed constraint that significantly reduces the time spent on search, can the optimized input distribution still be effective in fault-detecting ability? In this article, we propose a criterion called fairness-enhanced-sum-of-triggering-probability (p-L1-Max). The criterion uses the sum of triggering probabilities as the fitness value and leverages a parameter p to adjust the uniformity of test data generation. We conducted extensive experiments to compare the computation time and the fault-detecting ability of the two criteria. The results show that the 1.0-L1-Max criterion has the highest efficiency and is more practical to use than the Tri-Low-Bound criterion. To measure a criterion's fault-detecting ability, we introduce a definition of the expected number of faults found within the effective test set size region. To determine the effective test set size region, we present a theoretical analysis of the expected faults found with respect to various test set sizes, using the uniform distribution as a baseline to derive the region's definition.
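    The abstract contrasts two fitness notions: the minimum triggering probability over all cover elements (Tri-Low-Bound) and a sum of triggering probabilities (the p-L1-Max family). The sketch below estimates triggering probabilities by Monte Carlo sampling and computes both fitness values; how the parameter p enters is an assumption here (shown as an exponent), not the paper's exact definition, and all names are illustrative.

        # Monte Carlo estimate of cover-element triggering probabilities and the
        # two fitness notions discussed above (p's role is an assumed form).
        import random

        def estimate_trigger_probs(sample_input, covers, n_samples=10_000):
            """covers: list of predicates, one per cover element, over a sampled input."""
            hits = [0] * len(covers)
            for _ in range(n_samples):
                x = sample_input()
                for i, covered in enumerate(covers):
                    hits[i] += covered(x)
            return [h / n_samples for h in hits]

        def tri_low_bound(probs):
            return min(probs)                    # noisy: dominated by the rarest element

        def p_l1_max(probs, p=1.0):
            return sum(q ** p for q in probs)    # p < 1 would reward evenness (assumed form)

        # Toy usage: one random byte as "input", two cover elements.
        probs = estimate_trigger_probs(lambda: random.randrange(256),
                                       [lambda x: x < 16, lambda x: x % 2 == 0])
        print(tri_low_bound(probs), p_l1_max(probs, p=1.0))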