22,847 research outputs found

    Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics

    Quantum computing is powerful because unitary operators describing the time-evolution of a quantum system have exponential size in terms of the number of qubits present in the system. We develop a new "Singular value transformation" algorithm capable of harnessing this exponential advantage, which can apply polynomial transformations to the singular values of a block of a unitary, generalizing the optimal Hamiltonian simulation results of Low and Chuang. The proposed quantum circuits have a very simple structure, often give rise to optimal algorithms, and have appealing constant factors, while usually using only a constant number of ancilla qubits. We show that singular value transformation leads to novel algorithms. We give an efficient solution to a certain "non-commutative" measurement problem and propose a new method for singular value estimation. We also show how to exponentially improve the complexity of implementing fractional queries to unitaries with a gapped spectrum. Finally, as a quantum machine learning application, we show how to efficiently implement principal component regression. "Singular value transformation" is conceptually simple and efficient, and leads to a unified framework of quantum algorithms incorporating a variety of quantum speed-ups. We illustrate this by showing how it generalizes a number of prominent quantum algorithms, including: optimal Hamiltonian simulation, implementing the Moore-Penrose pseudoinverse with exponential precision, fixed-point amplitude amplification, robust oblivious amplitude amplification, fast QMA amplification, fast quantum OR lemma, certain quantum walk results, and several quantum machine learning algorithms. In order to exploit the strengths of the presented method it is useful to know its limitations too; therefore we also prove a lower bound on the efficiency of singular value transformation, which often gives optimal bounds. Comment: 67 pages, 1 figure
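    As a rough sketch of the central object (notation introduced here for illustration, not quoted from the paper): if $\Pi$ and $\tilde\Pi$ are projectors marking a block $A = \tilde\Pi U \Pi$ of a unitary $U$, with singular value decomposition $A = \sum_i \varsigma_i |\tilde\psi_i\rangle\langle\psi_i|$, then singular value transformation by a polynomial $P$ of definite parity (odd, say) asks for a circuit whose corresponding block is

    \[
        P^{(SV)}(A) \;=\; \sum_i P(\varsigma_i)\, |\tilde\psi_i\rangle\langle\psi_i| ,
    \]

    implemented using a number of applications of $U$, $U^\dagger$ and projector-controlled phase gates proportional to the degree of $P$.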

    Generalised Regret Optimal Controller Synthesis for Constrained Systems

    This paper presents a synthesis method for the generalised dynamic regret problem, comparing the performance of a strictly causal controller to the optimal non-causal controller under a weighted disturbance. This framework encompasses both the dynamic regret problem, which considers the difference of the incurred costs, and the competitive ratio, which considers their ratio; both have been proposed as inherently adaptive alternatives to classical control methods. Furthermore, we extend the synthesis to the case of pointwise-in-time bounds on the disturbance and show that the optimal solution is no worse than the bounded-energy optimal solution and is lower bounded by a constant factor, which depends only on the disturbance weight. The proposed optimisation-based synthesis allows systems subject to state and input constraints to be considered. Finally, we provide a numerical example which compares the synthesised controller performance to $\mathcal{H}_2$- and $\mathcal{H}_\infty$-controllers. Comment: Accepted at IFAC WC 202
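    For orientation, a sketch of the two performance measures mentioned above, stated with notation ($J$, $\pi$, $\mathcal{W}$) introduced for this example rather than taken from the paper: writing $J(\pi, w)$ for the cost incurred by a strictly causal controller $\pi$ under disturbance $w$, and $J^\star(w)$ for the cost of the optimal non-causal controller,

    \[
        \text{dynamic regret: } \sup_{w \in \mathcal{W}} \big( J(\pi, w) - J^\star(w) \big),
        \qquad
        \text{competitive ratio: } \sup_{w \in \mathcal{W}} \frac{J(\pi, w)}{J^\star(w)} .
    \]

    The generalised problem treated in the paper additionally passes the disturbance through a weight, and the pointwise-in-time extension replaces the bounded-energy disturbance set $\mathcal{W}$ with per-time-step bounds.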

    User-Base Station Association in HetSNets: Complexity and Efficient Algorithms

    This work considers the problem of user association to small-cell base stations (SBSs) in a heterogeneous and small-cell network (HetSNet). Two optimization problems are investigated: maximizing the number of users associated with the SBSs (the unweighted problem) and maximizing the total weight of the users associated with the SBSs (the weighted problem), under signal-to-interference-plus-noise ratio (SINR) constraints. Both problems are formulated as linear integer programs. The weighted problem is known to be NP-hard and, in this paper, the unweighted problem is proved to be NP-hard as well. Therefore, this paper develops two heuristic polynomial-time algorithms to solve both problems. The computational complexity of the proposed algorithms is evaluated and shown to be far lower than that of the optimal brute-force (BF) algorithm. Moreover, the paper benchmarks the performance of the proposed algorithms against the BF algorithm, the branch-and-bound (B&B) algorithm and standard algorithms through numerical simulations. The results demonstrate the close-to-optimal performance of the proposed algorithms. They also show that the weighted problem can be solved to provide solutions that are fair across users or that balance the load among SBSs.
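    To make the association setting concrete, below is a minimal greedy sketch in Python: users are considered in order of decreasing weight and assigned to an SBS only if a deliberately simplified SINR test passes. The SINR model, function names, and parameters are assumptions made for this illustration only; this is not one of the two heuristic algorithms proposed in the paper.

    import numpy as np

    def greedy_association(gain, weights, power=1.0, noise=1e-3, sinr_min=0.5):
        """Greedy weighted user-to-SBS association under a simplified SINR test.

        gain[u, b] is the channel gain from SBS b to user u; every SBS is assumed
        to transmit at the same power and to act as an interferer for users it
        does not serve (a modelling assumption for this sketch only).
        """
        n_users, n_sbs = gain.shape
        assoc = {}  # user index -> chosen SBS index
        # Consider heavier-weighted users first.
        for u in sorted(range(n_users), key=lambda i: -weights[i]):
            best = None
            for b in range(n_sbs):
                # Interference at user u: received power from all other SBSs.
                interference = power * (gain[u].sum() - gain[u, b])
                sinr = power * gain[u, b] / (noise + interference)
                if sinr >= sinr_min and (best is None or sinr > best[1]):
                    best = (b, sinr)
            # Users failing the SINR test at every SBS stay unassociated.
            if best is not None:
                assoc[u] = best[0]
        return assoc

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        print(greedy_association(rng.exponential(size=(20, 4)), rng.random(20)))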

    Almost Optimal Stochastic Weighted Matching With Few Queries

    We consider the {\em stochastic matching} problem. An edge-weighted general (i.e., not necessarily bipartite) graph $G(V, E)$ is given in the input, where each edge in $E$ is {\em realized} independently with probability $p$; the realization is initially unknown; however, we are able to {\em query} the edges to determine whether they are realized. The goal is to query only a small number of edges to find a {\em realized matching} that is sufficiently close to the maximum matching among all realized edges. This problem has received considerable attention during the past decade due to its numerous real-world applications in kidney exchange, matchmaking services, online labor markets, and advertisements. Our main result is an {\em adaptive} algorithm that, for any arbitrarily small $\epsilon > 0$, finds a $(1-\epsilon)$-approximation in expectation, by querying only $O(1)$ edges per vertex. We further show that our approach leads to a $(1/2-\epsilon)$-approximate {\em non-adaptive} algorithm that also queries only $O(1)$ edges per vertex. Prior to our work, no nontrivial approximation was known for weighted graphs using a constant per-vertex budget. The state-of-the-art adaptive (resp. non-adaptive) algorithm of Maehara and Yamaguchi [SODA 2018] achieves a $(1-\epsilon)$-approximation (resp. $(1/2-\epsilon)$-approximation) by querying up to $O(w \log n)$ edges per vertex, where $w$ denotes the maximum integer edge-weight. Our result is a substantial improvement over this bound and has an appealing message: no matter what the structure of the input graph is, one can get arbitrarily close to the optimum solution by querying only a constant number of edges per vertex. To obtain our results, we introduce novel properties of a generalization of {\em augmenting paths} to weighted matchings that may be of independent interest.
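    As a rough illustration of the non-adaptive regime described above (pick the queries up front, then match within the queried edges that turn out to be realized), the Python sketch below queries the union of maximum-weight matchings of a few simulated realizations. The sampling scheme and parameter names are assumptions made for this example; it is not the algorithm analysed in the paper. It assumes an edge-weighted networkx graph whose edges carry a 'weight' attribute.

    import random
    import networkx as nx

    def nonadaptive_query_set(G, p, rounds=5, seed=0):
        """Union of max-weight matchings of `rounds` simulated realizations,
        where each edge of G is kept independently with probability p."""
        rng = random.Random(seed)
        queries = set()
        for _ in range(rounds):
            H = nx.Graph()
            H.add_edges_from((u, v, d) for u, v, d in G.edges(data=True)
                             if rng.random() < p)
            queries |= {frozenset(e) for e in nx.max_weight_matching(H)}
        return queries

    def matched_weight(G, realized, queries):
        """Weight of a max-weight matching among queried edges that are realized
        (`realized` is a set of frozenset({u, v}) edges)."""
        H = nx.Graph()
        for e in queries & realized:
            u, v = tuple(e)
            H.add_edge(u, v, **G[u][v])
        return sum(G[u][v]["weight"] for u, v in nx.max_weight_matching(H))

    Since each matching contributes at most one edge per vertex, the union queries at most `rounds` edges per vertex, matching the constant per-vertex budget discussed in the abstract.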

    Approximating shortest paths in large networks

    In the classroom, students are introduced to shortest-route calculation using small datasets (those that can be hand-drawn). For demonstrating the application of an algorithm, a small dataset is typically sufficient. However, real-world applications of shortest-path calculation are only useful when applied to large datasets. This paper presents research on a computer-based implementation of a modified Dijkstra algorithm as applied to large datasets containing tens of thousands of arcs. In an attempt to improve the performance of path calculation, two heuristics are also examined. The intuition behind the heuristics is to remove arcs that are unlikely to be traversed by the optimal path from the set of arcs that could possibly be traversed by it. By reducing this number, less labeling is required, resulting in fewer CPU cycles being used to generate a route. This paper compares the results of the optimal algorithm against those of the two heuristics.
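    For concreteness, below is a minimal Python sketch of a textbook Dijkstra search together with one possible arc-pruning rule of the kind described (keep only arcs whose endpoints lie within an ellipse around the source and target, measured by straight-line distance). The pruning rule, its `stretch` parameter, and the data layout are assumptions for illustration; the modified Dijkstra implementation and the two heuristics evaluated in the paper are not reproduced here.

    import heapq
    import math

    def dijkstra(arcs, source, target):
        """arcs: dict mapping node -> list of (neighbor, cost). Returns the
        shortest-path cost from source to target, or inf if unreachable."""
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == target:
                return d
            if d > dist.get(u, math.inf):
                continue  # stale heap entry
            for v, c in arcs.get(u, ()):
                nd = d + c
                if nd < dist.get(v, math.inf):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return math.inf

    def prune_arcs(arcs, coords, source, target, stretch=1.3):
        """Drop arcs whose endpoints are unlikely to lie on the optimal path:
        keep a node n only if dist(s, n) + dist(n, t) is within `stretch` times
        the straight-line s-t distance (coords: node -> (x, y))."""
        def euclid(a, b):
            (ax, ay), (bx, by) = coords[a], coords[b]
            return math.hypot(ax - bx, ay - by)

        budget = stretch * euclid(source, target)

        def ok(n):
            return euclid(source, n) + euclid(n, target) <= budget

        return {u: [(v, c) for v, c in nbrs if ok(v)]
                for u, nbrs in arcs.items() if ok(u)}

    After pruning, optimality is no longer guaranteed (the best path may use a removed arc), which is exactly the optimal-versus-heuristic trade-off the paper measures.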

    Near-Optimal UGC-hardness of Approximating Max k-CSP_R

    In this paper, we prove an almost-optimal hardness for Max $k$-CSP$_R$ based on Khot's Unique Games Conjecture (UGC). In Max $k$-CSP$_R$, we are given a set of predicates each of which depends on exactly $k$ variables. Each variable can take any value from $1, 2, \dots, R$. The goal is to find an assignment to variables that maximizes the number of satisfied predicates. Assuming the Unique Games Conjecture, we show that it is NP-hard to approximate Max $k$-CSP$_R$ to within factor $2^{O(k \log k)}(\log R)^{k/2}/R^{k - 1}$ for any $k, R$. To the best of our knowledge, this result improves on all the known hardness of approximation results when $3 \leq k = o(\log R/\log \log R)$. In this case, the previous best hardness result was NP-hardness of approximating within a factor $O(k/R^{k-2})$ by Chan. When $k = 2$, our result matches the best known UGC-hardness result of Khot, Kindler, Mossel and O'Donnell. In addition, by extending an algorithm for Max 2-CSP$_R$ by Kindler, Kolla and Trevisan, we provide an $\Omega(\log R/R^{k - 1})$-approximation algorithm for Max $k$-CSP$_R$. This algorithm implies that our inapproximability result is tight up to a factor of $2^{O(k \log k)}(\log R)^{k/2 - 1}$. In comparison, when $3 \leq k$ is a constant, the previously known gap was $O(R)$, which is significantly larger than our gap of $O(\mathrm{polylog}\, R)$. Finally, we show that we can replace the Unique Games Conjecture assumption with Khot's $d$-to-1 Conjecture and still get asymptotically the same hardness of approximation.
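    The claimed tightness follows from dividing the inapproximability factor by the algorithmic guarantee quoted above:

    \[
        \frac{2^{O(k \log k)} (\log R)^{k/2} / R^{k-1}}{\Omega\!\left(\log R / R^{k-1}\right)}
        \;=\; 2^{O(k \log k)} (\log R)^{k/2 - 1},
    \]

    which for constant $k \geq 3$ is $O(\mathrm{polylog}\, R)$, compared with the previously known gap of $O(R)$.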