
    Parameterized Uniform Complexity in Numerics: from Smooth to Analytic, from NP-hard to Polytime

    The synthesis of classical Computational Complexity Theory with Recursive Analysis provides a quantitative foundation for reliable numerics. Here the operators of maximization, integration, and solving ordinary differential equations are known to map (even high-order differentiable) polynomial-time computable functions to instances that are 'hard' for the classical complexity classes NP, #P, and CH; restricted to analytic functions, however, they map polynomial-time computable functions to polynomial-time computable ones -- non-uniformly! We investigate the uniform parameterized complexity of the above operators in the setting of Weihrauch's TTE and its second-order extension due to Kawamura and Cook (2010). That is, we explore which (continuous and discrete, first- and second-order) information and parameters on a given f are sufficient to obtain similar data on Max(f) and int(f), and within what running time, in terms of these parameters and the guaranteed output precision 2^(-n). It turns out that Gevrey's hierarchy of functions, climbing from analytic to smooth, corresponds to the computational complexity of maximization growing from polytime to NP-hard. Proof techniques draw mainly on the Theory of (discrete) Computation, Hard Analysis, and Information-Based Complexity.
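
    To make the maximization operator concrete, the following minimal sketch (our illustration, not from the paper) computes Max(f) over [0,1] to guaranteed precision 2^(-n) by interval branch-and-bound; the interval extension F and the example f(x) = x(1-x) are hypothetical stand-ins. For merely smooth f the number of boxes can grow exponentially in n, echoing the hardness results above, while strong regularity (e.g. analyticity) lets pruning succeed quickly.

    # Rigorous maximization of f(x) = x*(1-x) over [0,1] to error < 2^-n.
    # The interval extension below is hand-written for this one f; a real
    # implementation would take f as second-order (oracle) input.
    def F(lo, hi):
        """Enclosure of {f(x) : lo <= x <= hi} via endpoint products."""
        cands = [lo * (1 - hi), lo * (1 - lo), hi * (1 - hi), hi * (1 - lo)]
        return min(cands), max(cands)

    def interval_max(n):
        eps = 2.0 ** (-n)
        boxes = [(0.0, 1.0)]
        best_lo = float("-inf")        # certified lower bound on Max(f)
        while boxes:
            lo, hi = boxes.pop()
            flo, fhi = F(lo, hi)
            best_lo = max(best_lo, flo)
            if fhi > best_lo + eps:    # box may still hide a better point
                mid = 0.5 * (lo + hi)
                boxes += [(lo, mid), (mid, hi)]
        return best_lo                 # within 2^-n of the true maximum

    print(interval_max(20))            # ~0.25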

    Time complexity and gate complexity

    We formulate and investigate the simplest version of time-optimal quantum computation theory (t-QCT), where the computation time is defined by physical time and the Hamiltonian contains only one- and two-qubit interactions. This version of t-QCT can also be viewed as optimality with respect to sub-Riemannian geodesic length. The work has two aims: one is to develop t-QCT itself, based on a physically natural concept of time; the other is to pursue the possibility of using t-QCT as a tool to estimate the complexity in conventional gate-optimal quantum computation theory (g-QCT). In particular, we investigate to what extent the following statement is true: time complexity is polynomial in the number of qubits if and only if gate complexity is. In the analysis, we relate t-QCT and optimal control theory (OCT) through fidelity-optimal computation theory (f-QCT); f-QCT is equivalent to t-QCT in the limit of unit optimal fidelity, while it is formally similar to OCT. We then develop an efficient numerical scheme for f-QCT by modifying Krotov's method from OCT, which has a monotonic convergence property. We implemented the scheme and obtained solutions of f-QCT and of t-QCT for the quantum Fourier transform and for a unitary operator without apparent symmetry; the former has polynomial gate complexity, while the latter is expected to have exponential gate complexity, since a sequence of generic unitary operators does. The time complexity for the former is found to be linear in the number of qubits, which is understood naturally from the existence of an upper bound; the time complexity for the latter is exponential. Thus both targets are examples satisfying the statement above. The typical characteristics of the optimal Hamiltonians are symmetry under time reversal and constancy of the one-qubit operations, which are shown mathematically to hold in fairly general situations.
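
    As a rough companion to the numerics described above, the sketch below (a simplified stand-in, not the authors' scheme) performs fidelity-optimal control for a single qubit with piecewise-constant X/Y controls, using naive finite-difference ascent on the gate fidelity. Krotov's method, which the paper modifies, would replace the finite-difference loop and is what guarantees monotonic convergence; the target, grid size, and step size here are arbitrary choices.

    import numpy as np

    # Single-qubit fidelity-optimal control with piecewise-constant controls.
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    target = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard

    N, dt = 40, 0.05                       # control grid: N slices of length dt

    def evolve(u):
        """Total unitary for controls u[k] = (ux, uy) on each time slice."""
        U = np.eye(2, dtype=complex)
        for ux, uy in u:
            H = ux * X + uy * Y
            w, V = np.linalg.eigh(H)       # exp(-i H dt) via eigendecomposition
            U = (V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T) @ U
        return U

    def fidelity(u):
        """Gate fidelity |tr(target^dagger U)| / 2, in [0, 1]."""
        return abs(np.trace(target.conj().T @ evolve(u))) / 2

    u = 0.1 * np.random.default_rng(0).standard_normal((N, 2))
    for _ in range(200):                   # naive ascent; Krotov would go here
        f0, g = fidelity(u), np.zeros_like(u)
        for i in range(N):
            for j in range(2):
                d = np.zeros_like(u)
                d[i, j] = 1e-6
                g[i, j] = (fidelity(u + d) - f0) / 1e-6
        u += 2.0 * g

    print(f"final fidelity = {fidelity(u):.4f}")   # near 1 for reachable targets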

    Complex Intuitionistic Fuzzy Dombi Prioritized Aggregation Operators and Their Application for Resilient Green Supplier Selection

    One of the main problems faced by resilient supply chain management is supplier selection, a typical multi-attribute decision-making (MADM) problem. Given the complexity of the current decision-making environment, the primary contribution of this paper is the theory of Dombi operational laws for complex intuitionistic fuzzy (CIF) information. Moreover, we examine the CIF Dombi prioritized averaging (CIFDPA) and CIF weighted Dombi prioritized averaging (CIFWDPA) operators, which generalize the prioritized aggregation operators and Dombi aggregation operators for fuzzy, intuitionistic fuzzy, complex fuzzy, and complex intuitionistic fuzzy information. Some reliable properties of these operators are also established. Furthermore, to demonstrate the proposed operators, an application example is evaluated for managing resilient green supplier selection problems. Finally, through comparative analysis with mainstream techniques, we explain the mechanisms of the proposed method and show the advantages and value of the invented theory.
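
    As an illustration of the prioritized-plus-Dombi idea, the sketch below implements a Dombi prioritized average for ordinary intuitionistic fuzzy values (mu, nu); it drops the complex (phase) component of CIF information, and the score function and aggregation formulas follow the common intuitionistic fuzzy Dombi literature rather than the paper's exact CIFDPA definition.

    # Dombi prioritized averaging for intuitionistic fuzzy values (mu, nu)
    # with 0 < mu, nu < 1 and mu + nu <= 1 (phase terms of CIF omitted).

    def score(a):
        """Score function normalized into (0, 1]."""
        mu, nu = a
        return (1 + mu - nu) / 2

    def ifdpa(values, lam=2.0):
        """Prioritized weights T_1 = 1, T_j = prod_{k<j} score(a_k),
        w_j = T_j / sum(T); aggregation via Dombi operations (lam > 0)."""
        T, t = [], 1.0
        for a in values:
            T.append(t)
            t *= score(a)
        w = [x / sum(T) for x in T]
        s_mu = sum(wj * (mu / (1 - mu)) ** lam for wj, (mu, _) in zip(w, values))
        s_nu = sum(wj * ((1 - nu) / nu) ** lam for wj, (_, nu) in zip(w, values))
        return 1 - 1 / (1 + s_mu ** (1 / lam)), 1 / (1 + s_nu ** (1 / lam))

    # Three suppliers rated on one criterion, highest priority first:
    print(ifdpa([(0.7, 0.2), (0.5, 0.4), (0.6, 0.3)]))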

    Operator learning with PCA-Net: upper and lower complexity bounds

    PCA-Net is a recently proposed neural operator architecture which combines principal component analysis (PCA) with neural networks to approximate operators between infinite-dimensional function spaces. The present work develops approximation theory for this approach, improving and significantly extending previous work in this direction. First, a novel universal approximation result is derived, under minimal assumptions on the underlying operator and the data-generating distribution. Then, two potential obstacles to efficient operator learning with PCA-Net are identified and made precise through lower complexity bounds: the first relates to the complexity of the output distribution, measured by a slow decay of the PCA eigenvalues; the other relates to the inherent complexity of the space of operators between infinite-dimensional input and output spaces, resulting in a rigorous and quantifiable statement of a "curse of parametric complexity", an infinite-dimensional analogue of the well-known curse of dimensionality encountered in high-dimensional approximation problems. In addition to these lower bounds, upper complexity bounds are derived. A suitable smoothness criterion is shown to ensure an algebraic decay of the PCA eigenvalues. Furthermore, it is shown that PCA-Net can overcome the general curse for specific operators of interest arising from the Darcy flow and the Navier-Stokes equations.
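
    The following toy pipeline (an assumed reading of the architecture, not the paper's code) mimics PCA-Net on a linear model problem: PCA compresses sampled input and output functions to a few coefficients, and a map is fitted between the coefficient spaces. For brevity the "net" slot is filled by ridge regression where PCA-Net proper uses a neural network; the antiderivative operator and all dimensions are illustrative.

    import numpy as np

    # Toy PCA-Net-style pipeline for the antiderivative operator on [0, 1].
    rng = np.random.default_rng(0)
    m, n_train = 64, 500
    x = np.linspace(0, 1, m)
    modes = np.sin(np.outer(np.arange(1, 9), np.pi * x))       # 8 sine modes
    inputs = rng.standard_normal((n_train, 8)) @ modes         # random fields
    outputs = np.cumsum(inputs, axis=1) / m                    # crude quadrature

    def pca(data, d):
        """Mean and top-d principal directions of the sampled functions."""
        mean = data.mean(axis=0)
        _, _, Vt = np.linalg.svd(data - mean, full_matrices=False)
        return mean, Vt[:d]

    d_in, d_out = 8, 8
    mu_in, P_in = pca(inputs, d_in)
    mu_out, P_out = pca(outputs, d_out)
    A = (inputs - mu_in) @ P_in.T          # input PCA coefficients
    B = (outputs - mu_out) @ P_out.T       # output PCA coefficients

    # Coefficient-to-coefficient map: ridge regression stands in for the
    # neural network of PCA-Net proper.
    W = np.linalg.solve(A.T @ A + 1e-6 * np.eye(d_in), A.T @ B)

    def predict(f):
        return mu_out + ((f - mu_in) @ P_in.T @ W) @ P_out

    f_test = np.sin(3 * np.pi * x)
    err = np.max(np.abs(predict(f_test) - np.cumsum(f_test) / m))
    print(f"sup-norm test error: {err:.2e}")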

    Foundations of Online Structure Theory II: The Operator Approach

    We introduce a framework for online structure theory. Our approach generalises notions arising independently in several areas of computability theory and complexity theory. We suggest a unifying approach using operators, where we allow the input to be a countable object of arbitrary complexity. We give a new framework which (i) ties online algorithms to computable analysis, (ii) shows how to use modifications of notions from computable analysis, such as Weihrauch reducibility, to analyse finite but uniform combinatorics, (iii) shows how to finitise reverse mathematics so as to suggest a fine structure of finite analogues of infinite combinatorial problems, and (iv) shows how similar ideas can be amalgamated from areas such as EX-learning, computable analysis, distributed computing and the like. One of the key ideas is that online algorithms can be viewed as a sub-area of computable analysis; conversely, we also get an enrichment of computable analysis from classical online algorithms.
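
    As a small illustration of the operator view (our example, not from the paper): First-Fit colouring consumes a countable graph one vertex at a time and must commit to each colour after seeing only a finite prefix of the input, which is exactly the kind of uniform, prefix-continuous behaviour that lets online algorithms be treated inside computable analysis.

    def first_fit(stream):
        """stream yields, per new vertex, the set of earlier neighbours;
        yields an irrevocable colour (an int) for each vertex in turn."""
        colours = []
        for earlier in stream:
            used = {colours[v] for v in earlier}
            c = 0
            while c in used:
                c += 1
            colours.append(c)
            yield c

    # A path 0-1-2-3 presented online, one vertex at a time:
    print(list(first_fit([set(), {0}, {1}, {2}])))   # [0, 1, 0, 1]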