9 research outputs found

    LDRD final report: combinatorial optimization with demands.

    HamLib: A library of Hamiltonians for benchmarking quantum algorithms and hardware

    In order to characterize and benchmark computational hardware, software, and algorithms, it is essential to have many problem instances on hand. This is no less true for quantum computation, where a large collection of real-world problem instances would allow for benchmarking studies that in turn help to improve both algorithms and hardware designs. To this end, here we present a large dataset of qubit-based quantum Hamiltonians. The dataset, called HamLib (for Hamiltonian Library), is freely available online and contains problem sizes ranging from 2 to 1000 qubits. HamLib includes problem instances of the Heisenberg model, Fermi-Hubbard model, Bose-Hubbard model, molecular electronic structure, molecular vibrational structure, MaxCut, Max-k-SAT, Max-k-Cut, QMaxCut, and the traveling salesperson problem. The goals of this effort are (a) to save researchers time by eliminating the need to prepare problem instances and map them to qubit representations, (b) to allow for more thorough tests of new algorithms and hardware, and (c) to allow for reproducibility and standardization across research studies.
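    To illustrate the kind of qubit-based Hamiltonian instance HamLib collects, here is a minimal sketch (not HamLib's actual file format or API) that builds a 1D Heisenberg-model Hamiltonian as a map from Pauli strings to coefficients; the function name and representation are assumptions for illustration only.

    ```python
    # Illustrative sketch: the n-qubit 1D Heisenberg Hamiltonian
    #   H = sum_i J (X_i X_{i+1} + Y_i Y_{i+1} + Z_i Z_{i+1})
    # represented as {pauli_string: coefficient}. This mirrors the kind of
    # qubit-mapped problem instance a library like HamLib provides, but the
    # representation here is hypothetical, not HamLib's storage format.

    def heisenberg_pauli_terms(n, J=1.0, periodic=False):
        """Return {pauli_string: coefficient} for a 1D Heisenberg chain."""
        terms = {}
        bonds = [(i, i + 1) for i in range(n - 1)]
        if periodic and n > 2:
            bonds.append((n - 1, 0))
        for (i, j) in bonds:
            for p in ("X", "Y", "Z"):
                label = ["I"] * n
                label[i] = p
                label[j] = p
                terms["".join(label)] = J
        return terms

    terms = heisenberg_pauli_terms(4)
    print(len(terms))  # 9 terms: 3 bonds x 3 Pauli types
    ```

    Pre-built instances like these are exactly what item (a) above refers to: the mapping from a physical model to qubit operators is done once, so benchmarks can start from the same inputs.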

    Compacting cuts: a new linear formulation for minimum cut

    No full text
    For a graph (V,E), existing compact linear formulations for the minimum cut problem require Θ(|V||E|) variables and constraints and can be interpreted as a composition of |V| − 1 polyhedra for minimum s-t cuts, in much the same way as early approaches to finding globally minimum cuts relied on |V| − 1 calls to a minimum s-t cut algorithm. We present the first formulation to beat this bound, one that uses O(|V|²) variables and O(|V|³) constraints. An immediate consequence of our result is a compact linear relaxation with O(|V|²) constraints and O(|V|³) variables for enforcing global connectivity constraints. This relaxation is as strong as standard cut-based relaxations and has applications in solving traveling salesman problems by integer programming as well as finding approximate solutions for survivable network design problems using Jain’s iterative rounding method. Another application is a polynomial-time-verifiable certificate of size n for the NP-complete problem of l1-embeddability of a rational metric on an n-set (as opposed to one of size n² known previously).
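    The "early approaches" the abstract contrasts with can be sketched directly: fix a source s and solve |V| − 1 minimum s-t cut problems, one per candidate sink. This is a minimal illustration of that classical baseline, not the paper's new formulation; by the max-flow/min-cut theorem each s-t cut value is computed here as an Edmonds-Karp max flow.

    ```python
    # Global minimum cut via |V| - 1 s-t max-flow calls (the classical
    # baseline the abstract references, NOT the paper's compact LP).
    from collections import deque

    def max_flow(cap, s, t):
        """Edmonds-Karp max flow on a dict-of-dicts capacity map."""
        residual = {u: dict(vs) for u, vs in cap.items()}
        flow = 0
        while True:
            # BFS for a shortest augmenting path in the residual graph.
            parent = {s: None}
            queue = deque([s])
            while queue and t not in parent:
                u = queue.popleft()
                for v, c in residual[u].items():
                    if c > 0 and v not in parent:
                        parent[v] = u
                        queue.append(v)
            if t not in parent:
                return flow
            # Recover the path and push the bottleneck amount along it.
            path, v = [], t
            while parent[v] is not None:
                path.append((parent[v], v))
                v = parent[v]
            bottleneck = min(residual[u][w] for u, w in path)
            for u, w in path:
                residual[u][w] -= bottleneck
                residual[w][u] = residual[w].get(u, 0) + bottleneck
            flow += bottleneck

    def global_min_cut(cap):
        """Global min cut value: fix s, minimize over the |V| - 1 sinks."""
        vertices = list(cap)
        s = vertices[0]
        return min(max_flow(cap, s, t) for t in vertices[1:])

    # 4-cycle with unit capacities: every global cut severs at least 2 edges.
    cycle = {'a': {'b': 1, 'd': 1}, 'b': {'a': 1, 'c': 1},
             'c': {'b': 1, 'd': 1}, 'd': {'a': 1, 'c': 1}}
    print(global_min_cut(cycle))  # 2
    ```

    The compact formulations the abstract describes replace this sequence of |V| − 1 separate computations with a single linear program.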

    Energy Scaling Advantages of Resistive Memory Crossbar Based Computation and its Application to Sparse Coding

    The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational advantages of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read (a vector-matrix multiplication) as well as a parallel write (a rank-1 update) with high computational efficiency. For an N×N crossbar, these two kernels are at a minimum O(N) more energy efficient than a digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and, more generally, unsupervised learning.
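    The two kernels the abstract names can be written out in plain Python for reference. This is a hedged sketch of what the crossbar computes, not of the hardware itself: the "read" is a vector-matrix multiply and the "write" is an outer-product (rank-1) update, both shown here as ordinary O(N²) loops, whereas the analog crossbar performs each in parallel; the function names are hypothetical.

    ```python
    # The two crossbar kernels, spelled out as plain loops. On an analog
    # N x N crossbar each kernel runs in parallel across the array; these
    # digital reference versions cost O(N^2) operations instead.

    def crossbar_read(W, x):
        """Parallel read kernel: vector-matrix multiply, y_j = sum_i x_i * W[i][j]."""
        n_rows, n_cols = len(W), len(W[0])
        return [sum(x[i] * W[i][j] for i in range(n_rows)) for j in range(n_cols)]

    def crossbar_write(W, x, e, eta=1.0):
        """Parallel write kernel: rank-1 update, W[i][j] += eta * x[i] * e[j]."""
        for i in range(len(W)):
            for j in range(len(W[0])):
                W[i][j] += eta * x[i] * e[j]

    W = [[1.0, 0.0], [0.0, 1.0]]
    print(crossbar_read(W, [2.0, 3.0]))   # [2.0, 3.0]
    crossbar_write(W, [1.0, 0.0], [0.0, 1.0], eta=0.5)
    print(W)                              # [[1.0, 0.5], [0.0, 1.0]]
    ```

    Sparse coding algorithms of the kind the abstract mentions are typically built from exactly these two primitives: reads compute neuron inputs from the dictionary, and rank-1 writes apply learning-rule updates to it.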