15 research outputs found
Pseudo-Hamiltonian neural networks with state-dependent external forces
Hybrid machine learning based on Hamiltonian formulations has recently been successfully demonstrated for simple mechanical systems, both energy conserving and not energy conserving. We introduce a pseudo-Hamiltonian formulation that is a generalization of the Hamiltonian formulation via the port-Hamiltonian formulation, and show that pseudo-Hamiltonian neural network models can be used to learn external forces acting on a system. We argue that this property is particularly useful when the external forces are state dependent, in which case it is the pseudo-Hamiltonian structure that facilitates the separation of internal and external forces. Numerical results are provided for a forced and damped mass–spring system and a tank system of higher complexity, and a symmetric fourth-order integration scheme is introduced for improved training on sparse and noisy data.
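As a minimal illustrative sketch of the pseudo-Hamiltonian structure described above, the following simulates a forced and damped mass-spring system in the form x_dot = (S - R) grad H(x) + F(x, t), where S is skew-symmetric, R is a positive semi-definite dissipation matrix, and F is the external force. All parameter values are invented for illustration, and a classical RK4 step stands in for the symmetric fourth-order scheme introduced in the paper.

```python
import numpy as np

# Pseudo-Hamiltonian form (sketch): x_dot = (S - R) grad_H(x) + F(x, t)
m, k, c = 1.0, 1.0, 0.1   # mass, spring constant, damping (illustrative values)

S = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric interconnection
R = np.array([[0.0, 0.0], [0.0, c]])      # positive semi-definite dissipation

def grad_H(x):
    q, p = x
    return np.array([k * q, p / m])       # gradient of H = p^2/(2m) + k q^2/2

def F(x, t):
    # External force acting on the momentum coordinate only (toy forcing)
    return np.array([0.0, 0.5 * np.sin(1.3 * t)])

def rhs(x, t):
    return (S - R) @ grad_H(x) + F(x, t)

def rk4_step(x, t, h):
    # Classical RK4 stand-in for the symmetric fourth-order scheme of the paper
    k1 = rhs(x, t)
    k2 = rhs(x + 0.5 * h * k1, t + 0.5 * h)
    k3 = rhs(x + 0.5 * h * k2, t + 0.5 * h)
    k4 = rhs(x + h * k3, t + h)
    return x + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x, t, h = np.array([1.0, 0.0]), 0.0, 0.01
for _ in range(1000):
    x = rk4_step(x, t, h)
    t += h
```

In a pseudo-Hamiltonian neural network, grad_H and F would be replaced by trainable models; the point of the structured right-hand side is that it separates the internal (conservative and dissipative) dynamics from the learned external force.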
Constraint Preserving Mixers for the Quantum Approximate Optimization Algorithm
The quantum approximate optimization algorithm/quantum alternating operator ansatz (QAOA) is a heuristic to find approximate solutions of combinatorial optimization problems. Most of the literature is limited to quadratic problems without constraints. However, many practically relevant optimization problems do have (hard) constraints that need to be fulfilled. In this article, we present a framework for constructing mixing operators that restrict the evolution to a subspace of the full Hilbert space given by these constraints. We generalize the “XY”-mixer designed to preserve the subspace of “one-hot” states to the general case of subspaces given by a number of computational basis states. We expose the underlying mathematical structure which reveals more of how mixers work and how one can minimize their cost in terms of the number of CX gates, particularly when Trotterization is taken into account. Our analysis also leads to valid Trotterizations for an “XY”-mixer with fewer CX gates than is known to date. In view of practical implementations, we also describe algorithms for efficient decomposition into basis gates. Several examples of more general cases are presented and analyzed.
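The subspace-preservation property can be checked numerically on a small example: the ring "XY" mixer Hamiltonian conserves Hamming weight, so evolution under it keeps a one-hot state inside the one-hot subspace. The sketch below is a dense-matrix illustration for three qubits, not a gate-level (Trotterized) implementation.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def two_site(op, i, j, n):
    # Tensor product with `op` on qubits i and j, identity elsewhere
    mats = [I2] * n
    mats[i] = op
    mats[j] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n = 3
# Ring XY mixer: H = sum_i (X_i X_{i+1} + Y_i Y_{i+1}), indices mod n
H = sum(two_site(X, i, (i + 1) % n, n) + two_site(Y, i, (i + 1) % n, n)
        for i in range(n))

# U = exp(-i t H) via eigendecomposition of the Hermitian H
t = 0.7
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * t * evals)) @ evecs.conj().T

# One-hot basis states |100>, |010>, |001> correspond to indices 4, 2, 1
one_hot = {4, 2, 1}
psi = np.zeros(2 ** n, dtype=complex)
psi[4] = 1.0                      # start in |100>
phi = U @ psi

# Probability leaking out of the one-hot subspace (numerically ~0)
leak = sum(abs(phi[i]) ** 2 for i in range(2 ** n) if i not in one_hot)
```

On two qubits, XX + YY acts as 2(|01⟩⟨10| + |10⟩⟨01|), i.e., it only exchanges amplitude between states of equal Hamming weight, which is why the full mixer never leaves the constrained subspace.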
Uncovering circuit mechanisms of current sinks and sources with biophysical simulations of primary visual cortex
Search for High Energetic Neutrinos from Core Collapse Supernovae using the IceCube Neutrino Telescope
The recent discovery of a high-energy flux of astrophysical neutrinos was one of the major physics breakthroughs of recent years. However, the origin of these neutrinos is still unknown. The search for the sources of high-energy cosmic rays is closely connected to the search for neutrinos, since both are produced in the same hadronic interactions; the detection of a neutrino source would therefore be a smoking-gun signature for a cosmic-ray source. Many potential neutrino source classes have been discussed, among them core-collapse supernovae.
In this thesis, seven years of data from the IceCube neutrino observatory are tested for correlation with the directions of several hundred core-collapse supernovae. The analysis benefits from the good angular reconstruction, on the order of one degree and below, of the roughly 700,000 muon-track events, and from an extensive database of optically observed supernovae. Using a time-dependent likelihood method, the sensitivity of the analysis is increased by stacking the sources in a combined analysis.
No significant clustering of neutrino events around the positions of core-collapse supernovae is found. Upper limits for different neutrino light-curve models are computed, and the contribution of core-collapse supernovae to the measured diffuse high-energy neutrino flux is constrained. These limits allow certain types of core-collapse supernovae to be excluded as the dominant source of the observed high-energy astrophysical neutrino flux.
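The stacked, time-dependent likelihood approach can be sketched in a toy form: each event contributes a signal PDF value (summed over the stacked sources) and a background PDF value, and the test statistic compares the best-fit number of signal events against the background-only hypothesis. Everything below (PDF shapes, event counts, distributions) is invented for illustration and does not reflect the actual IceCube implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stacked likelihood sketch: each event i has a signal PDF value S_i
# (here a Gaussian in angular distance to the nearest stacked source) and
# an isotropic background PDF value B_i.
N = 1000
ang_dist = rng.uniform(0.0, 10.0, N)             # toy angular distances (deg)
sigma = 1.0                                      # toy angular resolution (deg)
S = np.exp(-0.5 * (ang_dist / sigma) ** 2) / (2 * np.pi * sigma ** 2)
B = np.full(N, 1.0 / (4 * np.pi))                # isotropic background (toy)

def log_likelihood(ns):
    # Unbinned likelihood with ns signal events out of N total
    return np.sum(np.log(ns / N * S + (1 - ns / N) * B))

# Test statistic: 2 * (max_ns lnL(ns) - lnL(0)); large TS => signal-like
ns_grid = np.linspace(0.0, 50.0, 501)
lls = np.array([log_likelihood(ns) for ns in ns_grid])
TS = 2.0 * (lls.max() - log_likelihood(0.0))
```

A time-dependent analysis additionally multiplies S_i by a temporal PDF (e.g., a model neutrino light curve around each supernova's explosion time), and upper limits follow from the distribution of TS under signal injection.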
Multi-Linear Population Analysis (MLPA) of LFP Data Using Tensor Decompositions
The local field potential (LFP) is the low-frequency part of the extracellular electrical potential in the brain and reflects synaptic activity onto thousands of neurons around each recording contact. Nowadays, LFPs can be measured at several hundred locations simultaneously. The measured LFP is in general a superposition of contributions from many underlying neural populations, which makes interpretation of LFP measurements in terms of the underlying neural activity challenging. Classical statistical analyses of LFPs rely on matrix decomposition-based methods, such as PCA (Principal Component Analysis) and ICA (Independent Component Analysis), which require additional constraints on spatial and/or temporal patterns of populations. In this work, we instead exploit the multi-fold data structure of LFP recordings (e.g., multiple trials of multi-channel time series), arrange the signals as a higher-order tensor (i.e., multiway array), and study how a specific tensor decomposition approach, namely canonical polyadic (CP) decomposition, can be used to reveal the underlying neural populations. Essential for interpretation, the CP model provides uniqueness without imposing constraints on patterns of underlying populations. Here, we first define a neural network model and, based on its dynamics, compute LFPs. We run multiple trials with this network, and the LFPs are then analyzed simultaneously using the CP model. More specifically, we design feed-forward population rate neuron models to match the structure of state-of-the-art, large-scale LFP simulations, but downscale them to allow easy inspection and interpretation. We demonstrate that our feed-forward model matches the mathematical structure assumed in the CP model, and CP successfully reveals temporal and spatial patterns as well as variations over trials of underlying populations when compared with the ground truth from the model.
We also discuss the use of diagnostic approaches for CP to guide the analysis when no ground-truth information is available. In comparison with classical methods, we discuss the advantages of using tensor decompositions for analyzing LFP recordings, as well as their limitations.
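A minimal sketch of the CP idea on a synthetic channels x time x trials tensor: the tensor is built from a small number of "populations", each a triple of spatial, temporal, and trial factors, and a plain alternating-least-squares (ALS) loop recovers a rank-R fit. The dimensions and rank below are arbitrary, and a real analysis would use a dedicated library (e.g., tensorly) with noise handling and diagnostics.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic LFP-like tensor: channels x time x trials, built from R = 2
# populations, each a (spatial, temporal, trial) triple of factor vectors.
C, T, K, R = 8, 50, 12, 2
A = rng.standard_normal((C, R))   # spatial profiles
B = rng.standard_normal((T, R))   # temporal profiles
D = rng.standard_normal((K, R))   # trial-to-trial amplitudes
tensor = np.einsum('cr,tr,kr->ctk', A, B, D)

def unfold(X, mode):
    # Mode-n unfolding consistent with C-ordered reshapes below
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(U, V):
    # Column-wise Kronecker product
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

# Plain CP-ALS: cycle through the modes, solving a least-squares
# problem for one factor matrix while the others are held fixed.
factors = [rng.standard_normal((s, R)) for s in tensor.shape]
for _ in range(200):
    for mode in range(3):
        others = [factors[m] for m in range(3) if m != mode]
        kr = khatri_rao(others[0], others[1])
        factors[mode] = unfold(tensor, mode) @ np.linalg.pinv(kr.T)

approx = np.einsum('cr,tr,kr->ctk', *factors)
rel_err = np.linalg.norm(tensor - approx) / np.linalg.norm(tensor)
```

Up to permutation and scaling of the components, the recovered factor matrices correspond to the spatial, temporal, and trial patterns of the underlying populations; this identifiability without orthogonality or independence constraints is the uniqueness property the abstract refers to.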