    Extendibility limits the performance of quantum processors

    Resource theories in quantum information science are helpful for the study and quantification of the performance of information-processing tasks that involve quantum systems. These resource theories also find applications in other areas of study; e.g., the resource theories of entanglement and coherence have found use and implications in the study of quantum thermodynamics and memory effects in quantum dynamics. In this paper, we introduce the resource theory of unextendibility, which is associated with the inability to extend quantum entanglement in a given quantum state to multiple parties. The free states in this resource theory are the k-extendible states, and the free channels are the k-extendible channels, which preserve the class of k-extendible states. We use this resource theory to derive nonasymptotic upper bounds on the rate at which quantum communication or entanglement preservation is possible when an arbitrary quantum channel is used a finite number of times, along with the assistance of k-extendible channels at no cost. We then show that the bounds we obtain are significantly tighter than previously known bounds for both the depolarizing and erasure channels.
    Comment: 39 pages, 6 figures; v2 includes pretty strong converse bounds for antidegradable channels, as well as other improvements
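    For reference, the standard notion of k-extendibility behind the free states can be stated as follows (this is the textbook definition, not quoted from the paper): a bipartite state \(\rho_{AB}\) is k-extendible with respect to system B if it admits a permutation-invariant extension to k copies of B whose marginal on each copy recovers \(\rho_{AB}\),

        \[
          \exists\, \omega_{A B_1 \cdots B_k} \ \text{such that}\quad
          W_\pi\, \omega_{A B_1 \cdots B_k}\, W_\pi^\dagger = \omega_{A B_1 \cdots B_k}
          \ \ \forall\, \pi \in S_k,
          \qquad
          \operatorname{Tr}_{B_2 \cdots B_k}\, \omega_{A B_1 \cdots B_k} = \rho_{AB},
        \]

    where \(W_\pi\) is the unitary permuting the extension systems \(B_1,\dots,B_k\). Every state is 1-extendible, the sets of k-extendible states shrink as k grows, and by the quantum de Finetti theorem a state is k-extendible for every k if and only if it is separable.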

    Resource theory of unextendibility and nonasymptotic quantum capacity

    In this paper, we introduce the resource theory of unextendibility as a relaxation of the resource theory of entanglement. The free states in this resource theory are the k-extendible states, associated with the inability to extend quantum entanglement in a given quantum state to multiple parties. The free channels are k-extendible channels, which preserve the class of k-extendible states. We define several quantifiers of unextendibility by means of generalized divergences and establish their properties. By utilizing this resource theory, we obtain nonasymptotic upper bounds on the rate at which quantum communication or entanglement preservation is possible over a finite number of uses of an arbitrary quantum channel assisted by k-extendible channels at no cost. These bounds are significantly tighter than previously known bounds for both the depolarizing and erasure channels. Finally, we revisit the pretty strong converse for the quantum capacity of antidegradable channels and establish an upper bound on the nonasymptotic quantum capacity of these channels.
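    Quantifiers built from generalized divergences typically follow the generic resource-theoretic pattern of measuring the divergence from the free set; schematically (our notation, not necessarily the paper's exact definition),

        \[
          E_k(A;B)_\rho \;=\; \inf_{\sigma_{AB}\,\in\, \mathrm{EXT}_k(A:B)} \mathbf{D}\!\left(\rho_{AB} \,\middle\|\, \sigma_{AB}\right),
        \]

    where \(\mathrm{EXT}_k(A:B)\) denotes the set of k-extendible states and \(\mathbf{D}\) is any generalized divergence, i.e., any function of a pair of states that is monotone non-increasing under the action of a quantum channel on both arguments.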

    Computer Assistance for “Discovering” Formulas in System Engineering and Operator Theory

    The objective of this paper is two-fold. First, we present a methodology for using a combination of computer assistance and human intervention to discover highly algebraic theorems in operator, matrix, and linear systems engineering theory. Since the methodology allows limited human intervention, it is slightly less rigid than an algorithm; we call it a strategy. The second objective is to illustrate the methodology by deriving four theorems.
    The presentation of the methodology is carried out in three steps. The first step introduces an abstraction of the methodology which we call an idealized strategy. This abstraction facilitates a high-level discussion of the ideas involved; idealized strategies cannot be implemented on a computer. The second and third steps introduce approximations of these abstractions which we call a prestrategy and a strategy, respectively. A strategy is more general than a prestrategy and, in fact, every prestrategy is a strategy. These approximations are implemented on a computer. We stress that, since there is a computer implementation, the reader can use these techniques to attack their own algebra problems. Thus the paper might be of both practical and theoretical interest to analysts, engineers, and algebraists.
    Now we give the idea of a prestrategy. A prestrategy relies almost entirely on two commands which we call NCProcess1 and NCProcess2. These two commands are sufficiently powerful that, in many cases, when one applies them repeatedly to a complicated collection of equations, they transform it into an equivalent but substantially simpler collection. A loose description of a prestrategy applied to a list of equations follows (a schematic sketch in code appears after this abstract):
    (1) Declare which variables are known and which are unknown. At the beginning of a prestrategy, the order in which the equations are listed is not important, since NCProcess1 and NCProcess2 will reorder them so that the simplest ones appear first.
    (2) Apply NCProcess1 to the equations; the output is a set of equations, usually some in fewer unknowns than before, carefully partitioned based upon which unknowns they contain.
    (3) The user must select “important equations,” especially any which solve for an unknown, say x. (When an equation is declared to be important or a variable is switched from being an unknown to being a known, the way in which NCProcess1 and NCProcess2 reorder the equations is modified.)
    (4) Switch x to being known rather than unknown. Go to (2) above or stop.
    When this procedure stops, it hopefully gives the “canonical” necessary conditions for the original equations to have a solution. As a final step we run NCProcess2, which aggressively eliminates redundant equations and partitions the output equations in a way that facilitates proving that the necessary conditions are also sufficient.
    Many classical theorems in analysis can be viewed in terms of solving a collection of equations. We have found that this procedure actually discovers the classical theorem in a modest collection of classical cases involving factorization of engineering systems and matrix completion problems. One might regard the question of which classical theorems in analysis can be proven with a strategy as an analog of classical Euclidean geometry, where a major question was what can be constructed with a compass and straightedge. Here the goal is to determine which theorems in systems and operator theory could be discovered by repeatedly applying NCProcess1 and NCProcess2 (or their successors) together with the (human) selection of equations which are important. The major practical challenge addressed here is finding operations which, when implemented in software, present the user with crucial algebraic information about their problem while not overwhelming them with too much redundant information.
    This paper consists of two parts. The first part describes strategies, gives a high-level description of the algorithms, describes the applications to operator, matrix, and linear system engineering theory, and shows how one would use a strategy to “discover” four different theorems. Thus, one who seeks a conventional viewpoint for this rather unconventional paper might think of it as providing a unified proof of four different theorems. The theorems were selected for their diverse proofs and because they are widely known (so that many readers should be familiar with at least one of them). The NCProcess commands use noncommutative Gröbner basis algorithms which have emerged in the last decade, together with algorithms for removing redundant equations and a method for assisting a mathematician in writing a (noncommutative) polynomial as a composition of polynomials. The reader needs to know nothing about Gröbner bases to understand the first part of this paper. Descriptions involving the theory of Gröbner bases appear in the second part of the paper.
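    Below is a minimal Python sketch of the prestrategy loop just described, under loud assumptions: NCProcess1 and NCProcess2 are real NCAlgebra (Mathematica) commands, and the functions here are illustrative stand-ins for them, not their actual API; the equation representation and the human-selection callback are likewise hypothetical.

        # Hypothetical sketch of the prestrategy loop; nc_process1/nc_process2
        # are stand-ins for the real NCProcess1/NCProcess2 commands.
        def nc_process1(equations, knowns, unknowns):
            """Stand-in: reorder equations so those involving the fewest
            unknowns come first (the real command runs a noncommutative
            Groebner-basis computation and partitions by unknowns)."""
            return sorted(equations, key=lambda eq: len(eq["vars"] & unknowns))

        def nc_process2(equations, knowns, unknowns):
            """Stand-in: aggressively discard equations made redundant once
            enough variables have become known."""
            return [eq for eq in equations if eq["vars"] & unknowns]

        def prestrategy(equations, knowns, unknowns, select_important):
            """Human-in-the-loop prestrategy: repeat NCProcess1, let the user
            pick important equations and an unknown x they solve for, then
            promote x to a known and iterate; finish with NCProcess2."""
            while unknowns:
                partitioned = nc_process1(equations, knowns, unknowns)  # step (2)
                x, important = select_important(partitioned)            # step (3), human
                if x is None:                                           # nothing solvable
                    break
                unknowns.discard(x)                                     # step (4)
                knowns.add(x)
                equations = important
            return nc_process2(equations, knowns, unknowns)             # final cleanup

    Here equations are represented as dicts carrying a "vars" set, and select_important is the human step: it returns a solved-for unknown (or None) together with the equations declared important.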

    LagrangeBench: A Lagrangian Fluid Mechanics Benchmarking Suite

    Machine learning has been successfully applied to grid-based PDE modeling in various scientific applications. However, learned PDE solvers based on Lagrangian particle discretizations, which are the preferred approach to problems with free surfaces or complex physics, remain largely unexplored. We present LagrangeBench, the first benchmarking suite for Lagrangian particle problems, focusing on temporal coarse-graining. In particular, our contributions are: (a) seven new fluid mechanics datasets (four in 2D and three in 3D) generated with the Smoothed Particle Hydrodynamics (SPH) method, including the Taylor-Green vortex, lid-driven cavity, reverse Poiseuille flow, and dam break, each of which includes different physics like solid wall interactions or free surfaces; (b) an efficient JAX-based API with various recent training strategies and three neighbor search routines; and (c) JAX implementations of established Graph Neural Networks (GNNs) like GNS and SEGNN with baseline results. Finally, to measure the performance of learned surrogates, we go beyond established position errors and introduce physical metrics like kinetic energy MSE and Sinkhorn distance for the particle distribution. Our codebase is available at https://github.com/tumaer/lagrangebench.
    Comment: Accepted at the 37th Conference on Neural Information Processing Systems (NeurIPS 2023) Track on Datasets and Benchmarks
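    To make the physical-metric idea concrete, here is a minimal JAX sketch of a kinetic-energy MSE between predicted and reference particle velocities; the function names are ours for illustration, not LagrangeBench's actual API, and uniform unit particle mass is assumed.

        import jax.numpy as jnp

        def kinetic_energy(velocities, mass=1.0):
            # E_kin = (1/2) * sum_i m * |v_i|^2 over all particles;
            # velocities has shape (num_particles, dim)
            return 0.5 * mass * jnp.sum(jnp.square(velocities))

        def kinetic_energy_mse(vel_pred, vel_ref, mass=1.0):
            # Squared error of the global kinetic energy: a physics-aware
            # complement to pure per-particle position error
            return (kinetic_energy(vel_pred, mass) - kinetic_energy(vel_ref, mass)) ** 2

    Unlike position MSE, such a metric is insensitive to relabeling of indistinguishable particles, which is also the motivation for the distribution-level Sinkhorn distance mentioned above.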