30 research outputs found

    Distance-based analysis of dynamical systems and time series by optimal transport

    The concept of distance is a fundamental notion that forms a basis for orientation in space. It is related to the scientific measurement process: quantitative measurements result in numerical values, and these can be immediately translated into distances. Vice versa, a set of mutual distances defines an abstract Euclidean space. Each system is thereby represented as a point whose Euclidean distances approximate the original distances as closely as possible. If the original distance measures interesting properties, these reappear as interesting patterns in this space. This idea is applied to complex systems: the act of breathing, the structure and activity of the brain, and dynamical systems and time series in general. In all these situations, optimal transportation distances are used; these measure how much work is needed to transform one probability distribution into another. The reconstructed Euclidean space then permits the application of multivariate statistical methods. In particular, canonical discriminant analysis makes it possible to distinguish between distinct classes of systems, e.g., between healthy and diseased lungs. This offers new diagnostic perspectives in the assessment of lung and brain diseases, and also provides a new approach to numerical bifurcation analysis and to quantifying synchronization in dynamical systems.
    LEI Universiteit Leiden; NWO Computational Life Sciences, grant no. 635.100.006; Analyse en stochastie
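    As a minimal sketch of the pipeline this abstract describes, the toy example below computes pairwise one-dimensional optimal-transport (Wasserstein) distances between samples from three hypothetical systems, then embeds each system as a point in the plane via multidimensional scaling. The data, the libraries (scipy, scikit-learn) and the 1-D distance variant are illustrative assumptions, not the thesis's actual setup.

        import numpy as np
        from scipy.stats import wasserstein_distance  # 1-D optimal transport
        from sklearn.manifold import MDS

        rng = np.random.default_rng(0)
        # Three toy "systems", each observed as an empirical sample.
        systems = [rng.normal(0.0, 1.0, 500),   # baseline
                   rng.normal(0.1, 1.0, 500),   # slightly shifted
                   rng.normal(2.0, 1.5, 500)]   # clearly different

        # Matrix of mutual optimal-transport distances.
        n = len(systems)
        D = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                D[i, j] = D[j, i] = wasserstein_distance(systems[i], systems[j])

        # Embed each system as a point whose Euclidean distances
        # approximate the original distances as closely as possible.
        embedding = MDS(n_components=2, dissimilarity="precomputed",
                        random_state=0).fit_transform(D)
        print(embedding)  # nearby points correspond to similar systems

    Multivariate methods such as canonical discriminant analysis can then be run on the embedded points, e.g. to separate the two normal-like systems from the outlying one.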

    Long-term future prediction under uncertainty and multi-modality

    Humans have an innate ability to excel at activities that involve predicting complex object dynamics, such as the possible trajectory of a billiard ball after it has been struck by a player, or the motion of pedestrians on the road. A key feature that enables humans to perform such tasks is anticipation. There has been continuous research in Computer Vision and Artificial Intelligence to mimic this human ability so that autonomous agents can succeed in real-world scenarios. Recent advances in deep learning and the availability of large-scale datasets have enabled the pursuit of fully autonomous agents with complex decision-making abilities, such as self-driving vehicles or robots. One of the main challenges in deploying these agents in the real world is their ability to perform anticipation tasks with at least human-level efficiency. To advance the field of autonomous systems, particularly self-driving agents, in this thesis we focus on the task of future prediction in diverse real-world settings, ranging from deterministic scenarios, such as predicting the paths of balls on a billiard table, to predicting the future of non-deterministic street scenes. Specifically, we identify core challenges for long-term future prediction: long-term prediction, uncertainty, multi-modality, and exact inference. To address these challenges, this thesis makes the following core contributions. Firstly, for accurate long-term predictions, we develop approaches that effectively utilize available observed information, in the form of image boundaries in videos or interactions in street scenes. Secondly, as uncertainty increases into the future in non-deterministic scenarios, we leverage Bayesian inference frameworks to capture calibrated distributions of likely future events. Finally, to further improve performance in highly multi-modal non-deterministic scenarios such as street scenes, we develop deep generative models based on conditional variational autoencoders as well as normalizing-flow-based exact inference methods. Furthermore, we introduce a novel dataset with dense pedestrian-vehicle interactions to further aid the development of anticipation methods for autonomous driving applications in urban environments.
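    To illustrate the conditional-variational-autoencoder idea the abstract names, here is a minimal CVAE sketch for trajectory prediction in PyTorch: a recognition network infers a latent code from past and future, and different latent samples decode to different plausible futures (multi-modality). All dimensions and layer choices are hypothetical placeholders, not the models developed in the thesis.

        import torch
        import torch.nn as nn

        class TrajectoryCVAE(nn.Module):
            def __init__(self, past_dim=16, future_dim=16, z_dim=8, hidden=64):
                super().__init__()
                self.z_dim = z_dim
                # Recognition network q(z | past, future).
                self.encoder = nn.Sequential(
                    nn.Linear(past_dim + future_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, 2 * z_dim))  # outputs mean and log-variance
                # Generator p(future | past, z): each z sample is one future mode.
                self.decoder = nn.Sequential(
                    nn.Linear(past_dim + z_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, future_dim))

            def forward(self, past, future):
                mu, logvar = self.encoder(torch.cat([past, future], -1)).chunk(2, -1)
                z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterise
                recon = self.decoder(torch.cat([past, z], -1))
                kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
                return (recon - future).pow(2).sum(-1).mean() + kl  # negative ELBO

            @torch.no_grad()
            def sample(self, past, n=10):
                # Different latent draws decode to different plausible futures.
                past = past.unsqueeze(0).expand(n, -1, -1)
                z = torch.randn(*past.shape[:2], self.z_dim)
                return self.decoder(torch.cat([past, z], -1))

        model = TrajectoryCVAE()
        past, future = torch.randn(32, 16), torch.randn(32, 16)
        loss = model(past, future)          # minimise this during training
        futures = model.sample(past, n=10)  # (10, 32, 16): ten futures per scene

    A normalizing-flow variant would replace the decoder with an invertible map so that the likelihood of a given future can be evaluated exactly rather than bounded.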

    Benchmarking, verifying and utilising near term quantum technology

    Quantum computers can, in theory, dramatically reduce the time required to solve many pertinent problems. Such problems are found in applications as diverse as cryptography, machine learning and chemistry, to name a few. However, in practice the set of problems that can be solved depends on the amount and quality of the quantum resources available. With the addition of more qubits, improvements in noise levels, the development of quantum networks, and so on, comes more computing power. Motivated by the desire to measure the power of these devices as their capabilities change, this thesis explores the verification, characterisation and benchmarking techniques that are appropriate at each stage of development. We study the techniques that become available with each advance, and the ways that such techniques can be used to guide further development of quantum devices and their control software. Our focus is on advancements towards the first example of practical certifiable quantum computational supremacy: when a quantum computer demonstrably outperforms all classical computers at a task of practical concern. Doing so allows us to look a little beyond recent demonstrations of quantum computational supremacy for its own sake. Systems consisting of only a few noisy qubits can be simulated by a classical computer. While this limits the applicability of quantum technology of this size, we first provide a methodology for using classical simulations to guide progress towards demonstrations of quantum computational supremacy. Drawing on measurements of the noise levels present in the NQIT Q20:20 device, an ion-trap-based quantum computer, we use classical simulations to predict and prepare for the performance of larger devices with similar characteristics. We identify the noise sources that are the most impactful, and simulate the effectiveness of approaches to mitigating them. As quantum technology advances, classically simulating it becomes increasingly resource intensive. However, simulations remain useful as a point of comparison against which to benchmark the performance of quantum devices. For so-called ‘random quantum circuits’, such benchmarking techniques have been developed to support claims of demonstrations of quantum computational supremacy. To give better indications of a device’s performance in practice, instances of computations derived for practical applications have been used to benchmark devices. Our second contribution is to introduce a suite of circuits derived from structures that are common to many instances of computations derived for practical applications, in contrast to the aforementioned approach of using a collection of particular instances. This allows us to make broadly applicable predictions of performance, which are indicative of the device’s behaviour when investigating applications of concern. We use this suite to benchmark all layers of the quantum computing stack, exploring the interplay between the compilation strategy, the device, and the computation itself. The circuit structures in the suite are sufficiently diverse to provide insights into the noise channels present in several real devices, and into the applications for which each quantum computing stack is best suited. We consider several figures of merit by which to assess performance when implementing these circuits, taking care to minimise the required number of uses of the quantum device.
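    For context on the random-circuit benchmarking mentioned above, the sketch below computes the linear cross-entropy benchmarking (XEB) fidelity, the standard figure of merit behind random-circuit supremacy claims; it assumes access to classically simulated ideal output probabilities and is purely illustrative, not the benchmark suite introduced in the thesis.

        import numpy as np

        def xeb_fidelity(samples, ideal_probs, n_qubits):
            # samples: measured bitstrings as integers in [0, 2**n_qubits).
            # ideal_probs: classically simulated output distribution of the circuit.
            mean_p = np.mean([ideal_probs[s] for s in samples])
            return (2 ** n_qubits) * mean_p - 1.0  # ~1 ideal device, ~0 pure noise

        # Toy check: sampling from the ideal distribution itself scores near 1
        # (the Dirichlet draw mimics Porter-Thomas output statistics), while
        # uniform (fully depolarised) samples score near 0.
        rng = np.random.default_rng(1)
        n = 5
        probs = rng.dirichlet(np.ones(2 ** n))  # stand-in for a simulated circuit
        good = rng.choice(2 ** n, size=20000, p=probs)
        noise = rng.integers(0, 2 ** n, size=20000)
        print(xeb_fidelity(good, probs, n), xeb_fidelity(noise, probs, n))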
    As our third contribution, we consider benchmarking devices performing Instantaneous Quantum Polynomial time (IQP) computations, a subset of all the computations quantum computers are capable of performing in polynomial time. Because they use only a commuting gate set, IQP circuits do not require a universal quantum computer, yet they are still thought impossible to simulate efficiently on a classical computer. Utilising a small quantum network, which allows for the transmission of single qubits, we introduce an approach to benchmarking the performance of devices capable of implementing IQP computations. Because the resource consumption of our benchmarking technique grows only moderately with the size of the device, it enables us to benchmark IQP-capable devices when they are of sufficient size to demonstrate quantum computational supremacy, and indeed to certify demonstrations of quantum computational supremacy. The approach we introduce is constructed by concealing some secret structure within an IQP computation. This structure can be taken advantage of by a quantum computer, but not by a classical one, in order to prove it is capable of accurately implementing IQP circuits. To achieve this we derive an implementation of IQP circuits which keeps the computation, and as a result the structure introduced, hidden from the device being tested. We prove this implementation to be information-theoretically and composably secure. In the work described above we explore verification, characterisation and benchmarking of quantum technology both as it advances towards demonstrations of quantum computational supremacy, and when it is applied to real-world problems. Finally, we consider demonstrations of quantum computational supremacy with an instance of these real-world problems. We consider quantum machine learning, and generative modelling in particular. Generative modelling is the task of producing new samples from a distribution, given a collection of samples from that distribution. We introduce and define ‘quantum learning supremacy’, which captures our intuitive notion of a demonstration of quantum computational supremacy in this setting, and allows us to speak formally about generative modelling tasks that can be completed by quantum, but not classical, computers. We introduce the Quantum Circuit Ising Born Machine (QCIBM), which consists of a parametrised quantum circuit and a classical optimisation loop to train the parameters, as a route to demonstrating quantum learning supremacy. We adapt results that exist for IQP circuits in order to argue that the QCIBM might indeed be used to demonstrate quantum learning supremacy. We discuss training procedures for the QCIBM, and Quantum Circuit Born Machines generally, and their implications for demonstrations of quantum learning supremacy.
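    As a small illustration of the IQP circuit class discussed above, the sketch below brute-forces the output distribution of an IQP circuit: Hadamards on every qubit, a commuting layer of Z-type phase gates, then Hadamards and measurement. The gate terms and angles are arbitrary toy values, not the hidden-structure construction the thesis describes, and the exponential classical cost is exactly why such circuits are hard to simulate at scale.

        import itertools
        import numpy as np

        def iqp_distribution(n, z_terms):
            # z_terms: {tuple_of_qubit_indices: angle}, one entry per commuting
            # Z-type phase term exp(i * angle * Z...Z). Cost is exponential in n.
            xs = np.array(list(itertools.product([0, 1], repeat=n)))
            dim = 2 ** n
            # Diagonal phase f(x) = sum of angle * (-1)^(parity of x on the term).
            phases = np.zeros(dim)
            for i, x in enumerate(xs):
                for qubits, angle in z_terms.items():
                    phases[i] += angle * (-1) ** int(x[list(qubits)].sum())
            diag = np.exp(1j * phases)
            # Amplitudes of H^n . diag . H^n |0...0>: a Hadamard transform of diag.
            signs = (-1.0) ** (xs @ xs.T)  # (-1)^(x.y) for every pair of bitstrings
            amps = signs @ diag / dim
            return np.abs(amps) ** 2  # measurement probabilities, summing to 1

        probs = iqp_distribution(3, {(0,): 0.7, (0, 1): 1.1, (1, 2): 0.3})
        print(probs, probs.sum())  # eight outcome probabilities; total is 1.0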