
    Embedding of Complete Graphs in Broken Chimera Graphs

    In order to solve real-world combinatorial optimization problems with a D-Wave quantum annealer, it is necessary to embed the problem at hand into the D-Wave hardware graph, namely Chimera or Pegasus. Most hard real-world problems exhibit strong connectivity. For the worst case of a complete graph, an efficient embedding into the ideal Chimera graph is known. However, since real machines almost always have broken qubits, it is necessary to find an embedding into the broken hardware graph. We present a new approach to the problem of embedding complete graphs into broken Chimera graphs. This problem can be formulated as an optimization problem, more precisely as a matching problem with additional linear constraints. Although NP-hard in general, it is fixed-parameter tractable in the number of inaccessible vertices of the Chimera graph. We tested our exact approach on various instances of broken hardware graphs, both derived from real hardware and randomly generated. For fixed runtime, we were able to embed larger complete graphs than previous, heuristic approaches. As an extension, we developed a fast heuristic algorithm that enables us to solve even larger instances. We compared the performance of our heuristic and exact approaches. Comment: 26 pages, 9 figures, 2 tables
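    A minimal sketch of the problem setting described above, assuming the standard Chimera layout of an m x m grid of K_{4,4} unit cells: it builds an ideal Chimera graph with networkx and removes random qubits to obtain a broken hardware graph. It only constructs the instance; it does not implement the paper's matching-based embedding, and all function names are illustrative.

```python
# Sketch only: ideal Chimera graph plus random qubit failures.
import random
import networkx as nx

def chimera_graph(m, t=4):
    """Ideal Chimera graph: an m x m grid of K_{t,t} unit cells.

    Nodes are tuples (row, col, side, k), with side 0 the "vertical"
    partition of a cell and side 1 the "horizontal" partition.
    """
    g = nx.Graph()
    for i in range(m):
        for j in range(m):
            # complete bipartite couplings inside the unit cell
            for k in range(t):
                for l in range(t):
                    g.add_edge((i, j, 0, k), (i, j, 1, l))
            # inter-cell couplings to the neighbouring cells
            for k in range(t):
                if i + 1 < m:
                    g.add_edge((i, j, 0, k), (i + 1, j, 0, k))
                if j + 1 < m:
                    g.add_edge((i, j, 1, k), (i, j + 1, 1, k))
    return g

def broken_chimera(m, n_broken, seed=0):
    """Remove n_broken random qubits to simulate a real device."""
    g = chimera_graph(m)
    rng = random.Random(seed)
    g.remove_nodes_from(rng.sample(list(g.nodes), n_broken))
    return g

if __name__ == "__main__":
    hw = broken_chimera(m=4, n_broken=5)
    print(hw.number_of_nodes(), "working qubits,", hw.number_of_edges(), "couplers")
```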

    Automating Topology Aware Mapping for Supercomputers

    Petascale machines with hundreds of thousands of cores are being built. These machines have varying interconnect topologies and large network diameters. Computation is cheap, and communication on the network is becoming the bottleneck for the scaling of parallel applications. Network contention, specifically, is becoming an increasingly important factor affecting overall performance. The broad goal of this dissertation is performance optimization of parallel applications through reduction of network contention. Most parallel applications have a certain communication topology. Mapping the tasks of a parallel application, based on their communication graph, to the physical processors of a machine can potentially lead to performance improvements. Mapping an application's communication graph onto the interconnect topology of a machine, while trying to localize communication, is the research problem under consideration. The farther messages travel on the network, the greater the chance of resource sharing between them, which can create contention on the networks commonly used today. Evaluative studies in this dissertation show that on IBM Blue Gene and Cray XT machines, message latencies can be severely affected under contention. Realizing this fact, application developers have started paying attention to the mapping of tasks to physical processors to minimize contention. Placement of communicating tasks on nearby physical processors can minimize the distance traveled by messages and reduce the chances of contention. Performance improvements through topology-aware placement for applications such as NAMD and OpenAtom are used to motivate this work. Building on these ideas, the dissertation proposes algorithms and techniques for automatic mapping of parallel applications to relieve application developers of this burden. The effect of contention on message latencies is studied in depth to guide the design of mapping algorithms. The hop-bytes metric is proposed for the evaluation of mapping algorithms as a better metric than the previously used maximum dilation metric. The main focus of this dissertation is on developing topology-aware mapping algorithms for parallel applications with regular and irregular communication patterns. The automatic mapping framework is a suite of such algorithms with capabilities to choose the best mapping for a problem with a given communication graph. The dissertation also briefly discusses completely distributed mapping techniques, which will be imperative for machines of the future. Published or submitted for publication; not peer reviewed.
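    The hop-bytes metric mentioned above can be stated compactly: the sum, over all messages, of message size times the number of network hops between the sender's and receiver's processors under a given mapping. A small sketch, assuming a 3D mesh with Manhattan-distance hop counts; the topology and names are illustrative, not taken from the dissertation's code.

```python
# Hop-bytes for a task-to-processor mapping on a 3D mesh (sketch).
def manhattan_hops(a, b):
    """Hop count between two processors at 3D mesh coordinates a and b."""
    return sum(abs(x - y) for x, y in zip(a, b))

def hop_bytes(comm_graph, mapping):
    """comm_graph: {(task_u, task_v): bytes}; mapping: task -> (x, y, z)."""
    return sum(size * manhattan_hops(mapping[u], mapping[v])
               for (u, v), size in comm_graph.items())

# Example: a 4-task ring mapped onto a line of processors.
comm = {(0, 1): 1024, (1, 2): 1024, (2, 3): 1024, (3, 0): 1024}
mapping = {0: (0, 0, 0), 1: (1, 0, 0), 2: (2, 0, 0), 3: (3, 0, 0)}
print("hop-bytes:", hop_bytes(comm, mapping))  # 1024*1 * 3 + 1024*3 = 6144
```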

    Machine learning applications in science

    Quantum-classical generative models for machine learning

    The combination of quantum and classical computational resources towards more effective algorithms is one of the most promising research directions in computer science. In such a hybrid framework, existing quantum computers can be used to their fullest extent and for practical applications. Generative modeling is one of the applications that could benefit the most, either by speeding up the underlying sampling methods or by unlocking more general models. In this work, we design a number of hybrid generative models and validate them on real hardware and datasets. The quantum-assisted Boltzmann machine is trained to generate realistic artificial images on quantum annealers. Several challenges in state-of-the-art annealers must be overcome before one can assess their actual performance. We address some of the most pressing challenges, such as the sparse qubit-to-qubit connectivity, the unknown effective temperature, and the noise on the control parameters. In order to handle datasets of realistic size and complexity, we include latent variables and obtain a more general model called the quantum-assisted Helmholtz machine. In the context of gate-based computers, the quantum circuit Born machine is trained to encode a target probability distribution in the wavefunction of a set of qubits. We implement this model on a trapped-ion computer using low-depth circuits and native gates. We use the generative modeling performance on the canonical Bars-and-Stripes dataset to design a benchmark for hybrid systems. It is reasonable to expect that quantum data, i.e., datasets of wavefunctions, will become available in the future. We derive a quantum generative adversarial network that works with quantum data. Here, two circuits are optimized in tandem: one tries to generate suitable quantum states, the other tries to distinguish between target and generated states.
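    The Bars-and-Stripes dataset used as the benchmark above is easy to reproduce: all binary images whose rows ("stripes") or whose columns ("bars") are each constant. A short sketch that only generates the data; the function name is illustrative, and the circuit training itself is hardware-specific and not shown.

```python
# Generate the Bars-and-Stripes (BAS) dataset (sketch).
import itertools
import numpy as np

def bars_and_stripes(rows, cols):
    """All rows x cols binary images whose rows or columns are each constant."""
    patterns = set()
    for bits in itertools.product([0, 1], repeat=rows):
        img = np.repeat(np.array(bits).reshape(rows, 1), cols, axis=1)  # stripes
        patterns.add(tuple(img.flatten()))
    for bits in itertools.product([0, 1], repeat=cols):
        img = np.repeat(np.array(bits).reshape(1, cols), rows, axis=0)  # bars
        patterns.add(tuple(img.flatten()))
    return np.array(sorted(patterns))

data = bars_and_stripes(2, 2)   # the 2x2 BAS set has 6 distinct images
print(data.shape)               # (6, 4)
```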

    From classical to quantum machine learning: survey on routing optimization in 6G software defined networking

    The sixth generation (6G) of mobile networks will adopt on-demand self-reconfiguration to simultaneously fulfill stringent key performance indicators and optimize overall usage of network resources. Such dynamic and flexible network management is made possible by Software Defined Networking (SDN), with a global view of the network, centralized control, and adaptable forwarding rules. Because of the complexity of 6G networks, Artificial Intelligence and its integration with SDN and Quantum Computing are considered prospective solutions to hard problems such as optimized routing in highly dynamic and complex networks. The main contribution of this survey is an in-depth study and analysis of recent research on the application of Reinforcement Learning (RL), Deep Reinforcement Learning (DRL), and Quantum Machine Learning (QML) techniques to address SDN routing challenges in 6G networks. Furthermore, the paper identifies and discusses open research questions in this domain. In summary, we conclude that there has been a significant shift toward RL/DRL-based routing strategies in SDN networks, particularly over the past three years. Moreover, there is substantial interest in integrating QML techniques to tackle the complexity of routing in 6G networks. However, considerable work remains in both directions to enable thorough comparisons and synergies among approaches and to conduct meaningful evaluations using open datasets and different topologies.
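    As a toy illustration of the RL-based routing strategies the survey covers, the sketch below runs tabular Q-learning to learn next-hop choices toward a fixed destination on a small topology. The topology, the -1 per-hop reward, and the hyperparameters are assumptions for illustration, not drawn from any surveyed work.

```python
# Tabular Q-learning for next-hop routing on a toy topology (sketch).
import random

links = {0: [1, 2], 1: [0, 3], 2: [0, 3, 4], 3: [1, 2, 5], 4: [2, 5], 5: []}
DEST = 5
Q = {(n, nxt): 0.0 for n, nbrs in links.items() for nxt in nbrs}
alpha, gamma, eps = 0.5, 0.9, 0.1

for _ in range(2000):
    node = random.choice([0, 1, 2, 3, 4])
    while node != DEST:
        nbrs = links[node]
        # epsilon-greedy choice of the next hop
        if random.random() < eps:
            nxt = random.choice(nbrs)
        else:
            nxt = max(nbrs, key=lambda a: Q[(node, a)])
        reward = 0.0 if nxt == DEST else -1.0      # step cost; arriving is free
        future = max((Q[(nxt, a)] for a in links[nxt]), default=0.0)
        Q[(node, nxt)] += alpha * (reward + gamma * future - Q[(node, nxt)])
        node = nxt

# Greedy routing policy after training: best next hop for every node.
print({n: max(links[n], key=lambda a: Q[(n, a)]) for n in links if links[n]})
```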

    Single-Qubit Gates Matter for Optimising Quantum Circuit Depth in Qubit Mapping

    Quantum circuit transformation (QCT, a.k.a. qubit mapping) is a critical step in quantum circuit compilation. Typically, QCT is achieved by finding an appropriate initial mapping and using SWAP gates to route the qubits such that all connectivity constraints are satisfied. The objective of QCT can be to minimise circuit size or depth. Most existing QCT algorithms prioritise minimising circuit size, potentially overlooking the impact of single-qubit gates on circuit depth. In this paper, we first point out that a single SWAP gate insertion can double the circuit depth, and then propose a simple and effective method that takes into account the impact of single-qubit gates on circuit depth. Our method can be combined with many existing QCT algorithms to optimise circuit depth. The Qiskit SABRE algorithm has been widely accepted as the state-of-the-art algorithm for optimising both circuit size and depth. We demonstrate the effectiveness of our method by embedding it in SABRE, showing that it can reduce circuit depth by up to 50%, and by 27% on average, on Google Sycamore, for instance, across 117 real quantum circuits from MQTBench. Comment: Accepted to the 2023 International Conference on Computer-Aided Design (IEEE/ACM ICCAD'23); 13 pages, 7 figures
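    The depth effect noted above is easy to see on a toy example: routing one logically non-adjacent CNOT on a linear coupling map inserts a SWAP, i.e. three CNOTs, which here doubles the circuit depth. A minimal Qiskit sketch; the example circuit is an assumption chosen for clarity, not the paper's benchmark.

```python
# One routing SWAP doubling the depth of a shallow circuit (sketch).
from qiskit import QuantumCircuit

# Logical circuit: qubit 0 interacts with both 1 and 2.
logical = QuantumCircuit(3)
logical.h(0)
logical.cx(0, 1)
logical.cx(0, 2)
print("logical depth:", logical.depth())            # 3

# Routed for a linear coupling map 0-1-2: cx(0, 2) is not allowed, so a
# SWAP (three CNOTs) first brings logical qubit 2 next to qubit 0.
routed = QuantumCircuit(3)
routed.h(0)
routed.cx(0, 1)
routed.cx(1, 2); routed.cx(2, 1); routed.cx(1, 2)   # SWAP(1, 2)
routed.cx(0, 1)                                     # implements logical cx(0, 2)
print("routed depth:", routed.depth())              # 6, i.e. doubled
```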