    Spoofing cross entropy measure in boson sampling

    Cross entropy is a widely used benchmark for demonstrating quantum computational advantage from sampling problems, such as random circuit sampling using superconducting qubits and boson sampling. In this work, we propose a heuristic classical algorithm that generates heavy outcomes of the ideal boson sampling distribution and consequently achieves a large cross entropy. The key idea is that there exist efficiently simulable classical samplers that correlate with the ideal boson sampling probability distribution, and that this correlation can be used to post-select heavy outcomes of the ideal distribution, which leads to a large cross entropy. As a result, our algorithm achieves a large cross entropy score by selectively generating heavy outcomes without simulating ideal boson sampling. We first show that for small circuits, the algorithm can score an even better cross entropy than the ideal boson sampling distribution itself. We then demonstrate that our method scores a better cross entropy than recent Gaussian boson sampling experiments when implemented at intermediate, verifiable system sizes. Much like current state-of-the-art experiments, we cannot verify that our spoofer works at quantum-advantage system sizes. However, we demonstrate that our approach works for much larger system sizes in fermion sampling, where output probabilities can be computed efficiently. Comment: 14 pages, 11 figures
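
    The post-selection idea can be illustrated with a toy model (ours, not the paper's algorithm): draw candidates from a cheap sampler that merely correlates with the target distribution, keep only the "heavy" candidates, and score them with the cross entropy XE = mean(log p_ideal). All distributions and the noise model below are made up for illustration.

```python
import math
import random

# Toy illustration of cross-entropy spoofing by post-selection (not the paper's
# algorithm): the "ideal" distribution and the correlated proxy sampler below
# are both made up for illustration.
random.seed(0)

n = 8                                    # number of outcomes in the toy problem
ideal = [random.expovariate(1.0) for _ in range(n)]
total = sum(ideal)
ideal = [p / total for p in ideal]       # stand-in for the ideal distribution

# An efficiently simulable sampler that correlates with the ideal distribution.
proxy = [p * random.uniform(0.5, 1.5) for p in ideal]
total = sum(proxy)
proxy = [p / total for p in proxy]

def cross_entropy(samples):
    """XE score: average log ideal probability of the submitted samples."""
    return sum(math.log(ideal[s]) for s in samples) / len(samples)

uniform_samples = [random.randrange(n) for _ in range(2000)]
candidates = random.choices(range(n), weights=proxy, k=2000)

# Post-select: keep only candidates whose proxy weight is above the median,
# i.e. the "heavy" outcomes of the (correlated) ideal distribution.
cutoff = sorted(proxy)[n // 2]
heavy = [s for s in candidates if proxy[s] >= cutoff]

print(cross_entropy(heavy), cross_entropy(uniform_samples))
```

    Even this crude correlation is enough for the post-selected samples to out-score uniform sampling, which is the essence of the spoofing argument.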

    Extracting Business Intelligence from Online Product Reviews: An Experiment of Automatic Rule-Induction

    Online product reviews are a major source of business intelligence (BI) that helps managers and market researchers make important decisions on product development and promotion. However, the large volume of online product review data creates significant information-overload problems, making it difficult to analyze users' concerns. In this paper, we employ a design science paradigm to develop a new framework for designing BI systems that correlate the textual content and the numerical ratings of online product reviews. Based on the framework, we developed a prototype for extracting the relationship between user ratings and the textual comments posted on Amazon.com's Web site. Two data mining algorithms were implemented to automatically extract decision rules that guide the understanding of this relationship. We report experimental results of using the prototype to extract rules from online reviews of three products and discuss the managerial implications.
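
    As a rough illustration of rule induction over rating/text pairs (a simple one-feature heuristic of our own, not the two algorithms used in the paper; the reviews and keywords are invented):

```python
# One-feature rule induction over (rating, text) pairs: emit a rule when a
# keyword's presence predicts a low rating with enough confidence.  The data
# and keyword list are invented for illustration.
reviews = [
    (1, "battery died after two days"),
    (2, "battery life is terrible"),
    (5, "great camera and screen"),
    (4, "camera is sharp, battery average"),
    (5, "love the screen"),
    (1, "screen cracked, battery swelled"),
]

def induce_rules(data, keywords, min_confidence=0.7):
    rules = []
    for kw in keywords:
        hits = [rating for rating, text in data if kw in text]
        if not hits:
            continue
        confidence = sum(1 for rating in hits if rating <= 2) / len(hits)
        if confidence >= min_confidence:
            rules.append((kw, "predicts rating <= 2", round(confidence, 2)))
    return rules

rules = induce_rules(reviews, ["battery", "camera", "screen"])
print(rules)
```

    On this toy data only "battery" clears the confidence threshold, yielding a rule of the form the paper extracts at scale.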

    Efficient classical simulation of noisy random quantum circuits in one dimension

    Understanding the computational power of noisy intermediate-scale quantum (NISQ) devices is of both fundamental and practical importance to quantum information science. Here, we address the question of whether error-uncorrected noisy quantum computers can provide a computational advantage over classical computers. Specifically, we study noisy random circuit sampling in one dimension (1D noisy RCS) as a simple model for exploring the effects of noise on the computational power of a noisy quantum device. In particular, we simulate the real-time dynamics of 1D noisy random quantum circuits via matrix product operators (MPOs) and characterize the computational power of the 1D noisy quantum system using a metric we call the MPO entanglement entropy; this metric is chosen because it determines the cost of classical MPO simulation. We numerically demonstrate that, for the two-qubit gate error rates we considered, there exists a characteristic system size above which adding more qubits does not bring about an exponential growth in the cost of classical MPO simulation of 1D noisy systems. Specifically, we show that above the characteristic system size there is an optimal circuit depth, independent of the system size, at which the MPO entanglement entropy is maximized. Most importantly, the maximum achievable MPO entanglement entropy is bounded by a constant that depends only on the gate error rate, not on the system size. We also provide a heuristic analysis to obtain the scaling of the maximum achievable MPO entanglement entropy as a function of the gate error rate. The obtained scaling suggests that although the cost of MPO simulation does not increase exponentially in the system size above a certain characteristic system size, it does increase exponentially as the gate error rate decreases, possibly making classical simulation practically infeasible even with state-of-the-art supercomputers. Comment: 27 pages, 9 figures, accepted for publication in Quantum
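
    A minimal sketch of the kind of entropy metric involved, under the simplifying assumption that it is the operator entanglement entropy of a state across one fixed cut (the paper's MPO setting is more general, and the state below is a random one rather than the output of a noisy circuit):

```python
import numpy as np

# Sketch of an operator entanglement entropy across a fixed bipartition (a
# simplified stand-in for the paper's MPO entanglement entropy): reshape the
# operator into a matrix over (left, right) index pairs and take the entropy
# of its squared, normalized singular values.
rng = np.random.default_rng(1)

def operator_entanglement_entropy(rho, n_left, n_right):
    dl, dr = 2 ** n_left, 2 ** n_right
    # Group the left subsystem's row and column indices together, then SVD.
    m = rho.reshape(dl, dr, dl, dr).transpose(0, 2, 1, 3).reshape(dl * dl, dr * dr)
    s = np.linalg.svd(m, compute_uv=False)
    p = s ** 2
    p = p[p > 1e-12] / p.sum()
    return float(-(p * np.log2(p)).sum())

n = 3  # qubits on each side of the cut
# A pure product state has zero operator entanglement across the cut...
psi = np.zeros(2 ** (2 * n)); psi[0] = 1.0
product = np.outer(psi, psi)
# ...while a generic random mixed state does not.
x = rng.normal(size=(2 ** (2 * n), 2 ** (2 * n)))
rho = x @ x.T
rho /= np.trace(rho)

e_product = operator_entanglement_entropy(product, n, n)
e_mixed = operator_entanglement_entropy(rho, n, n)
print(e_product, e_mixed)
```

    The larger this entropy, the larger the MPO bond dimension needed, which is why its saturation at a noise-dependent constant caps the classical simulation cost.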

    4-D Printing of Pressure Sensors and Energy Harvesting Devices for Engineering Education

    This paper elaborates on the development of laboratory project modules based on four-dimensional (4D) printing technology in the Industrial, Manufacturing and Systems Engineering Department at The University of Texas at El Paso. These modules aim to introduce students to interdisciplinary manufacturing and to emerging dimensions in manufacturing technology. 4D printing is a new dimension in additive manufacturing wherein 3D-printed structures react to changes in environmental parameters such as temperature and humidity, resulting in a shape change or in added functionality such as electricity output or self-healing. Recently, 4D printing of simple devices for pressure-sensor applications has been identified and shows high feasibility for commercialization owing to its low cost, freedom of design, and agile manufacturing process. This enables a highly interdisciplinary platform for research and project modules suitable for hands-on student training in an academic environment. Laboratory modules based on 4D printing of pressure sensors are developed for student training and include: 1) design of piezoelectric nanocomposites; 2) 3D model design of pressure-sensor devices; 3) use of 3D printers for 4D printing and the involved post-processing techniques, through which students can experience emerging manufacturing technologies; and 4) testing of piezoelectric properties.

    Quantum-inspired classical algorithm for graph problems by Gaussian boson sampling

    We present a quantum-inspired classical algorithm for graph-theoretical problems, such as finding the densest k-subgraph and finding the maximum weight clique, which have been proposed as applications of a Gaussian boson sampler. The main observation is that the adjacency matrix of a graph to be encoded in a Gaussian boson sampler is nonnegative, which does not necessitate quantum interference. We first show how to program a given graph problem into our efficient classical algorithm. We then numerically compare the performance of ideal and lossy Gaussian boson samplers, our quantum-inspired classical sampler, and the uniform sampler for finding the densest k-subgraph and the maximum weight clique, and show that the advantage from Gaussian boson samplers is not significant in general. We finally discuss the potential advantage of a Gaussian boson sampler over the proposed sampler. Comment: 11 pages, 5 figures
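
    The nonnegativity observation can be illustrated with a small sketch (our own simplification, not the paper's sampler): since edge weights are nonnegative, a purely classical sampler can bias subset selection toward heavy rows of the adjacency matrix, with no quantum interference involved. The planted graph and all parameters are made up.

```python
import random

# Toy quantum-inspired sampler for densest k-subgraph (a simplification for
# illustration, not the paper's algorithm).  The graph has a planted dense
# 4-vertex subgraph; biasing vertex picks by nonnegative row weights steers
# the search toward dense subsets, entirely classically.
random.seed(2)

n, k = 12, 4
planted = {0, 1, 2, 3}
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        w = 1.0 if (i in planted and j in planted) else float(random.random() < 0.15)
        A[i][j] = A[j][i] = w

def density(subset):
    verts = sorted(subset)
    return sum(A[i][j] for i in verts for j in verts if i < j)

row = [sum(r) for r in A]          # nonnegative degree-like weights

def best_of(trials, biased):
    best = 0.0
    for _ in range(trials):
        subset = set()
        while len(subset) < k:
            v = random.choices(range(n), weights=row)[0] if biased else random.randrange(n)
            subset.add(v)
        best = max(best, density(subset))
    return best

best_biased = best_of(300, biased=True)
best_uniform = best_of(300, biased=False)
print(best_biased, best_uniform)
```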

    Classical simulation of bosonic linear-optical random circuits beyond linear light cone

    Sampling from the probability distributions of quantum circuits is a fundamentally and practically important task that can be used to demonstrate quantum supremacy using noisy intermediate-scale quantum devices. In the present work, we examine the classical simulability of sampling from the output photon-number distribution of linear-optical circuits composed of random beam splitters, with equally distributed squeezed vacuum states or single-photon states as input. We provide efficient classical algorithms to simulate linear-optical random circuits and, using the classical random-walk behavior of such circuits, show that the algorithms' error is exponentially small up to a depth less than quadratic in the distance between sources. Notably, the average-case depth allowing efficient classical simulation is larger than the worst-case depth limit, which is linear in the distance. Moreover, our results, together with the hardness of boson sampling, give a lower bound on the depth required to constitute global Haar-random unitary circuits. Comment: 16 pages, 1 figure
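
    The random-walk picture behind the depth bound can be sketched numerically (our toy illustration, with made-up parameters): a photon's position under random beam splitters spreads diffusively, so reaching a distance d takes depth of order d squared, which is why sub-quadratic depths remain classically tractable on average.

```python
import random
import statistics

# Diffusive spreading sketch: a 1D unbiased random walk's standard deviation
# grows like sqrt(depth), so covering a distance d needs depth ~ d**2.
random.seed(3)

def spread(depth, walkers=4000):
    finals = []
    for _ in range(walkers):
        x = 0
        for _ in range(depth):
            x += random.choice((-1, 1))
        finals.append(x)
    return statistics.pstdev(finals)

s25 = spread(25)     # stdev ~ sqrt(25) = 5
s100 = spread(100)   # stdev ~ sqrt(100) = 10
print(s25, s100)
```

    Quadrupling the depth only doubles the spread: diffusive, not ballistic, transport.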

    PLAYING THE LOTTERY GAMES?

    The lottery is a huge business. In 2011, $57.6 billion worth of lottery tickets were sold in 43 states and the District of Columbia. Three major parties (governments, lottery players, and retailers) are involved in the lottery industry, plus many more stakeholders. This paper examines the lottery from the viewpoints of these three primary parties. From the lottery players' viewpoint, we show how to statistically determine the expected value of a lottery ticket and discuss when it is profitable to buy lottery tickets. We also explore the question of whether lottery players are rational. State governments have for years relied on lottery money to fund education and other expenses; we examine the economic benefits as well as the societal costs of operating the lottery business. Finally, we examine the economics of selling lottery tickets from the retailers' viewpoint.
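
    The expected-value calculation mentioned above can be sketched as follows; the game format (pick 5 of 69 white balls plus 1 of 26) and the prize table are hypothetical stand-ins, not any state's official rules.

```python
from math import comb

# Hypothetical pick-5-of-69 plus 1-of-26 lottery; prize amounts are illustrative.
POOL, PICKS, BALLS = 69, 5, 26
TICKET_PRICE = 2.00
JACKPOT = 100_000_000

def p_white(matches):
    """Hypergeometric probability of matching `matches` of the 5 white numbers."""
    return comb(PICKS, matches) * comb(POOL - PICKS, PICKS - matches) / comb(POOL, PICKS)

p_ball = 1 / BALLS
prizes = {                        # (white matches, red ball matched?) -> prize ($)
    (5, True): JACKPOT, (5, False): 1_000_000,
    (4, True): 50_000,  (4, False): 100,
    (3, True): 100,     (3, False): 7,
}

ev = sum(p_white(m) * (p_ball if ball else 1 - p_ball) * prize
         for (m, ball), prize in prizes.items())
print(round(ev, 2))  # buying is "profitable" in expectation only if ev > TICKET_PRICE
```

    With these illustrative numbers the expected value stays well below the ticket price, which is the typical conclusion of such an analysis.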

    Degradation Modeling and RUL Prediction Using Wiener Process Subject to Multiple Change Points and Unit Heterogeneity

    Degradation modeling is critical for health condition monitoring and remaining useful life (RUL) prediction. Prognostic accuracy depends strongly on how well the evolution of degradation signals can be modeled. In many practical applications, however, degradation signals exhibit multiple phases, for which conventional degradation models are often inadequate. To better characterize degradation signals with multiple-phase characteristics, we propose a multiple change-point Wiener process as a degradation model. To account for between-unit heterogeneity, a fully Bayesian approach is developed in which all model parameters are treated as random. At the offline stage, an empirical two-stage process is proposed for model estimation, and a cross-validation approach is adopted for model selection. At the online stage, an exact recursive model-updating algorithm is developed for online individual model estimation, and an effective Monte Carlo simulation approach is proposed for RUL prediction. The effectiveness of the proposed method is demonstrated through thorough simulation studies and a real case study.
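
    A minimal sketch of two of the ingredients above, a change-point Wiener degradation path and Monte Carlo RUL estimation; the drifts, volatility, change point, and failure threshold are all made-up illustration values, and the Bayesian updating machinery is omitted.

```python
import random
import statistics

# Two-phase Wiener degradation model: drift switches at a known change point,
# and RUL is estimated by Monte Carlo simulation of first passage over a
# failure threshold.  All parameter values are illustrative.
random.seed(4)

DT = 1.0
THRESHOLD = 50.0
DRIFT = {1: 0.2, 2: 1.0}     # degradation speeds before/after the change point
SIGMA = 0.5
CHANGE_POINT = 60            # time step at which phase 1 switches to phase 2

def failure_time(start, t0, horizon=1000):
    """Simulate the degradation path until it crosses the failure threshold."""
    x, t = start, t0
    while x < THRESHOLD and t < t0 + horizon:
        phase = 1 if t < CHANGE_POINT else 2
        x += DRIFT[phase] * DT + SIGMA * random.gauss(0.0, DT ** 0.5)
        t += 1
    return t

# RUL at time t=70 given an observed degradation level of 20 (already phase 2):
now, level = 70, 20.0
rul = statistics.mean(failure_time(level, now) for _ in range(2000)) - now
print(rul)  # roughly (THRESHOLD - level) / DRIFT[2] = 30 steps
```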