Complexity & wormholes in holography
Holography has proven to be a highly successful approach to studying quantum gravity, in which a non-gravitational quantum field theory is dual to a quantum gravity theory in one higher dimension. This doctoral thesis delves into two key aspects within the context of holography: complexity and wormholes. In Part I of the thesis, the focus is on holographic complexity. Beginning with a brief review of quantum complexity and its significance in holography, the subsequent two chapters proceed to explore this topic in detail. We study several proposals to quantify the costs of holographic path integrals. We then show how such costs can be optimized and match them to bulk complexity proposals already existing in the literature. In Part II of the thesis, we shift our attention to the study of spacetime wormholes in AdS/CFT. These are bulk spacetime geometries with two or more disconnected boundaries. In recent years, such wormholes have received a lot of attention as they lead to interesting implications and raise important puzzles. We study the construction of several simple examples of such wormholes in general dimensions in the presence of a bulk scalar field and explore their implications for the boundary theory.
Classical and quantum algorithms for scaling problems
This thesis is concerned with scaling problems, which have a plethora of connections to different areas of mathematics, physics and computer science. Although many structural aspects of these problems are understood by now, we only know how to solve them efficiently in special cases. We give new algorithms for non-commutative scaling problems with complexity guarantees that match the prior state of the art. To this end, we extend the well-known (self-concordance based) interior-point method (IPM) framework to Riemannian manifolds, motivated by its success in the commutative setting. Moreover, the IPM framework does not obviously suffer from the same obstructions to efficiency as previous methods. It also yields the first high-precision algorithms for other natural geometric problems in non-positive curvature. For the (commutative) problems of matrix scaling and balancing, we show that quantum algorithms can outperform the (already very efficient) state-of-the-art classical algorithms. Their time complexity can be sublinear in the input size; in certain parameter regimes they are also optimal, whereas in others we show no quantum speedup over the classical methods is possible. Along the way, we provide improvements over the long-standing state of the art for searching for all marked elements in a list and computing the sum of a list of numbers. We identify a new application in the context of tensor networks for quantum many-body physics. We define a computable canonical form for uniform projected entangled pair states (as the solution to a scaling problem), circumventing previously known undecidability results. We also show, by characterizing the invariant polynomials, that the canonical form is determined by evaluating the tensor network contractions on networks of bounded size.
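For context on the commutative case mentioned above, matrix scaling asks for positive diagonal matrices that rescale a given positive matrix to prescribed row and column sums. A minimal NumPy sketch of the classical Sinkhorn iteration for the doubly stochastic target (an illustrative baseline, not one of the thesis's algorithms; the function name and iteration count are our own):

```python
import numpy as np

def sinkhorn_scale(A, iters=500):
    """Alternately rescale rows and columns of a positive matrix A so that
    D1 @ A @ D2 is (approximately) doubly stochastic, D1 = diag(r), D2 = diag(c)."""
    A = np.asarray(A, dtype=float)
    r = np.ones(A.shape[0])
    c = np.ones(A.shape[1])
    for _ in range(iters):
        r = 1.0 / (A @ c)        # make every row of diag(r) @ A @ diag(c) sum to 1
        c = 1.0 / (A.T @ r)      # make every column sum to 1
    return np.diag(r) @ A @ np.diag(c)

B = sinkhorn_scale(np.array([[2.0, 1.0], [1.0, 3.0]]))
```

For strictly positive input the iteration converges; the non-commutative problems of the thesis generalize this beyond diagonal group actions.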
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Sampling with Barriers: Faster Mixing via Lewis Weights
We analyze Riemannian Hamiltonian Monte Carlo (RHMC) for sampling a polytope defined by $m$ inequalities in $\mathbb{R}^n$, endowed with the metric defined by the Hessian of a convex barrier function. The advantage of RHMC over Euclidean methods such as the ball walk, hit-and-run and the Dikin walk lies in its ability to take longer steps. However, in all previous work, the mixing rate has a linear dependence on the number of inequalities. We introduce a hybrid of the Lewis weights barrier and the standard logarithmic barrier and prove that the mixing rate for the corresponding RHMC is bounded by $\tilde{O}(m^{1/3}n^{4/3})$, improving on the previous best bound of $\tilde{O}(mn^{2/3})$ (based on the log barrier). This continues the general parallels between optimization and sampling, with the latter typically leading to new tools and more refined analysis. To prove our main results, we have to overcome several challenges relating to the smoothness of Hamiltonian curves and the self-concordance properties of the barrier. In the process, we give a general framework for the analysis of Markov chains on Riemannian manifolds, derive new smoothness bounds on Hamiltonian curves, a central topic of comparison geometry, and extend self-concordance to the infinity norm, which gives sharper bounds; these properties appear to be of independent interest.
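To illustrate the kind of metric involved, the Hessian of the standard logarithmic barrier for a polytope $\{x : Ax \le b\}$ is $A^\top S^{-2} A$, where $S$ is the diagonal matrix of slacks $s = b - Ax$; the metric blows up near the boundary, which is what lets barrier-based walks take long steps in the interior. A minimal NumPy sketch (illustrative only; the hybrid Lewis-weights barrier of the abstract is more involved):

```python
import numpy as np

def log_barrier_hessian(A, b, x):
    """Hessian of phi(x) = -sum_i log(b_i - a_i^T x), i.e. A^T S^{-2} A,
    the Riemannian metric induced by the logarithmic barrier."""
    s = b - A @ x                  # slack in each inequality
    assert np.all(s > 0), "x must lie strictly inside the polytope"
    return A.T @ np.diag(1.0 / s**2) @ A

# 1-D polytope -1 <= x <= 1, evaluated at the center x = 0
A = np.array([[1.0], [-1.0]])
b = np.array([1.0, 1.0])
H = log_barrier_hessian(A, b, np.array([0.0]))
```

At the center both slacks equal 1, so the metric is the 1x1 matrix [[2.0]]; moving toward either face makes the corresponding slack shrink and the metric grow without bound.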
Fundamental and Applied Problems of the String Theory Landscape
In this thesis we study quantum corrections to string-derived effective actions \textit{per se} as well as their implications for phenomenologically relevant setups like the \textit{Large Volume Scenario} (LVS) and the \textit{anti-D3-brane} uplift.
In the first part of this thesis, we improve the understanding of string loop corrections on general Calabi-Yau orientifolds from an effective field theory perspective by proposing a new classification scheme for quantum corrections. Thereby, we discover new features of string loop corrections, such as possible logarithmic effects in the Kähler and scalar potential, which are relevant for phenomenological applications like models of inflation.
In the next part of the thesis, we derive a simple and explicit formula, the \textit{LVS parametric tadpole constraint} (PTC),
that ensures that the anti-D3-brane uplifted LVS dS vacuum is protected against the most dangerous higher order corrections.
The main difficulty appears to be the small uplifting contribution, which is necessary due to the exponentially large volume obtained via the LVS. This in turn requires a large negative contribution to the tadpole, which is quantified in the PTC. As the negative contribution to the tadpole is limited in weakly coupled string theories, the PTC represents a concrete challenge for the LVS.
The last part of the thesis investigates the impact of corrections to the brane-flux annihilation process discovered by Kachru, Pearson, and Verlinde (KPV), on which the anti-D3-brane uplift is based. We find that corrections drastically alter the KPV analysis, with the result that much more flux in the Klebanov-Strassler throat is required than previously assumed in order to control the leading corrections on the NS5-brane. The implication for the LVS with standard anti-D3-brane uplift can again be quantified by the PTC. Incorporating this new bound significantly increases the required negative contribution to the tadpole. In addition, we uncover a new uplifting mechanism that does not rely on large fluxes and hence deep warped throats, thereby sidestepping the main difficulties related to the PTC.
Consistency of scalar and vector effective field theories
In the absence of a theory of everything, modern physicists need to rely on other predictive tools and have turned to Effective Field Theories (EFTs) in a number of fields, including but not limited to statistical mechanics, condensed matter, particle physics, cosmology and gravity. The coefficients of an EFT can be constrained with high precision by experiments, which can involve high-energy particle colliders for instance, but are generally left free from the theoretical point of view. The focus of this thesis is to use various consistency criteria to obtain theoretical constraints on the low-energy coefficients of EFTs. In particular, we construct a new model of a massive spin-1 field by requiring that the theory is free of any ghostly degree of freedom. We then study its cosmological perturbations and require that all propagating modes are stable and subluminal, reducing the space of viable cosmological solutions. Finally, we implement a method to obtain "causality bounds", which are derived by requiring infrared causality, imposed by forbidding any resolvable time advance in the EFT. We derive such causality bounds for shift-symmetric and Galileon scalar EFTs, before turning to gauge-symmetric vector fields. We prove that our causality bounds can be competitive with positivity bounds and can even be used in scenarios that are out of reach of the positivity approach. The result of this thesis, by exploring several consistency criteria, is to provide compact causality bounds for low-energy EFT coefficients, in addition to constraints coming from the absence of ghosts, stability and cosmological viability.
Low-Thrust Optimal Escape Trajectories from Lagrangian Points and Quasi-Periodic Orbits in a High-Fidelity Model
The abstract is in the attachment.
Subgroup discovery for structured target concepts
The main object of study in this thesis is subgroup discovery, a theoretical framework for finding subgroups in data (i.e., named sub-populations) whose behaviour with respect to a specified target concept is exceptional when compared to the rest of the dataset. This is a powerful tool that conveys crucial information to a human audience, but despite past advances it has been limited to simple target concepts. In this work we propose algorithms that bring this framework to novel application domains. We introduce the concept of representative subgroups, which we use not only to ensure the fairness of a sub-population with regard to a sensitive trait, such as race or gender, but also to go beyond known trends in the data. For entities with additional relational information that can be encoded as a graph, we introduce a novel measure of robust connectedness which improves on established alternative measures of density; we then provide a method that uses this measure to discover which named sub-populations are better connected. Our contributions within subgroup discovery culminate in the introduction of kernelised subgroup discovery: a novel framework that enables the discovery of subgroups on i.i.d. target concepts with virtually any kind of structure. Importantly, our framework additionally provides a concrete and efficient tool that works out of the box without any modification, apart from specifying the Gramian of a positive definite kernel. To use within kernelised subgroup discovery, but also in any other kind of kernel method, we additionally introduce a novel random walk graph kernel. Our kernel allows fine-tuning of the alignment between the vertices of the two compared graphs during the counting of random walks, and we also propose meaningful structure-aware vertex labels to utilise this new capability. With these contributions we thoroughly extend the applicability of subgroup discovery and ultimately re-define it as a kernel method.
Sp(2N) Lattice Gauge Theories and Extensions of the Standard Model of Particle Physics
We review the current status of the long-term programme of numerical investigation of Sp(2N) gauge theories with and without fermionic matter content. We start by introducing the phenomenological as well as theoretical motivations for this research programme, which are related to composite Higgs models, models of partial top compositeness, dark matter models, and in general to the physics of strongly coupled theories and their approach to the large-N limit. We summarise the results of lattice studies conducted so far in the Sp(2N) Yang-Mills theories, measuring the string tension, the mass spectrum of glueballs and the topological susceptibility, and discuss their large-N extrapolation. We then focus our discussion on Sp(4), and summarise the numerical measurements of the mass and decay constant of mesons in the theories with fermion matter in either the fundamental or the antisymmetric representation, first in the quenched approximation, and then with dynamical fermions. We finally discuss the case of dynamical fermions in mixed representations, and exotic composite fermion states such as the chimera baryons. We conclude by sketching the future stages of the programme. We also describe our approach to open access.
Foundations of Node Representation Learning
Low-dimensional node representations, also called node embeddings, are a cornerstone in the modeling and analysis of complex networks. In recent years, advances in deep learning have spurred the development of novel neural network-inspired methods for learning node representations, which have largely surpassed classical 'spectral' embeddings in performance. Yet little work asks the central questions of this thesis: Why do these novel deep methods outperform their classical predecessors, and what are their limitations?
We pursue several paths to answering these questions. To further our understanding of deep embedding methods, we explore their relationship with spectral methods, which are better understood, and show that some popular deep methods are equivalent to spectral methods in a certain natural limit. We also introduce the problem of inverting node embeddings in order to probe what information they contain. Further, we propose a simple, non-deep method for node representation learning, and find it to often be competitive with modern deep graph networks in downstream performance.
To better understand the limitations of node embeddings, we prove upper and lower bounds on their capabilities. Most notably, we prove that node embeddings are capable of exact low-dimensional representation of networks with bounded max degree or arboricity, and we further show that a simple algorithm can find such exact embeddings for real-world networks. By contrast, we also prove inherent limits on the ability of random graph models, including those derived from node embeddings, to capture key structural properties of networks without simply memorizing a given graph.
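As a point of reference for the classical 'spectral' baselines discussed above, a node embedding can be read off from the top eigenpairs of the adjacency matrix. A minimal NumPy sketch (an illustration of the general idea, not a construction from the thesis; the function name is our own):

```python
import numpy as np

def spectral_embedding(adj, dim):
    """Embed each node as a row of V_k |L_k|^{1/2}, where V_k holds the
    eigenvectors of the symmetric adjacency matrix with the dim largest
    eigenvalue magnitudes and L_k the corresponding eigenvalues."""
    vals, vecs = np.linalg.eigh(adj)           # eigenvalues in ascending order
    order = np.argsort(-np.abs(vals))[:dim]    # keep largest-magnitude pairs
    return vecs[:, order] * np.sqrt(np.abs(vals[order]))

# 4-cycle: nodes 0-1-2-3-0
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)
X = spectral_embedding(C4, 2)
```

On the 4-cycle the dominant eigenvalues are 2 and -2, so all four nodes land at unit distance from the origin, reflecting the graph's vertex-transitivity; deep methods replace this fixed linear map with learned nonlinear encoders.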