2,275 research outputs found

    Techniques for high-multiplicity scattering amplitudes and applications to precision collider physics

    In this thesis, we present state-of-the-art techniques for the computation of scattering amplitudes in Quantum Field Theories. Following an introduction to the topic, we describe a robust framework that enables the calculation of multi-scale two-loop amplitudes directly relevant to modern particle physics phenomenology at the Large Hadron Collider and beyond. We discuss in detail the use of finite fields to bypass the algebraic complexity of such computations, as well as the method of integration-by-parts relations and differential equations. We apply our framework to calculate the two-loop amplitudes contributing to three processes: Higgs boson production in association with a bottom-quark pair, W boson production with a photon and a jet, and lepton-pair scattering with an off-shell and an on-shell photon. Finally, we draw our conclusions and discuss directions for future progress of amplitude computations.
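
    As a concrete illustration of the finite-field technique mentioned in this abstract, the sketch below shows the basic idea in Python: map a rational number into F_p, do all arithmetic there to avoid intermediate expression swell, and recover the exact rational at the end via the standard rational-reconstruction (half-extended Euclidean) algorithm. The prime and function names are illustrative, not taken from the thesis.

```python
# Minimal sketch of exact computation via finite fields, assuming a single
# word-sized prime suffices (real amplitude codes combine several primes).

P = 2**31 - 1  # a Mersenne prime, defining the finite field F_p

def to_field(num, den, p=P):
    """Map the rational num/den to its image in F_p."""
    return num * pow(den, -1, p) % p

def rational_reconstruction(a, p=P):
    """Recover num/den with |num|, den <= sqrt(p/2) from a = num/den mod p,
    using the half-extended Euclidean algorithm."""
    bound = int((p / 2) ** 0.5)
    r0, r1 = p, a
    t0, t1 = 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 == 0 or abs(t1) > bound:
        raise ValueError("no reconstruction in range")
    if t1 < 0:
        r1, t1 = -r1, -t1
    return r1, t1  # numerator, denominator

# Example: the rational -3/7 survives a round trip through F_p.
img = to_field(-3, 7)
print(rational_reconstruction(img))  # -> (-3, 7)
```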

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Development of a SQUID magnetometry system for cryogenic neutron electric dipole moment experiment

    A measurement of the neutron electric dipole moment (nEDM) could hold the key to understanding why the visible universe is the way it is: why matter should predominate over antimatter. As a charge-parity violating (CPV) quantity, an nEDM could provide an insight into new mechanisms that address this baryon asymmetry. The motivation for an improved sensitivity to an nEDM is to find it to be non-zero at a level consistent with certain beyond the Standard Model theories that predict new sources of CPV, or to establish a new limit that constrains them. CryoEDM is an experiment that sought to better the current limit of $|d_n| < 2.9 \times 10^{-26}\,e\,$cm by an order of magnitude. It is designed to measure the nEDM via the Ramsey Method of Separated Oscillatory Fields, in which it is critical that the magnetic field remains stable throughout. A way of accurately tracking the magnetic fields, moreover at a temperature of $\sim 0.5$ K, is crucial for CryoEDM, and for future cryogenic projects. This thesis presents work focussing on the development of a 12-SQUID magnetometry system for CryoEDM, which enables the magnetic field to be monitored to a precision of $0.1$ pT. A major component of its infrastructure is the superconducting capillary shields, which screen the input lines of the SQUIDs from the pick-up of spurious magnetic fields that would perturb a SQUID's measurement. These are shown to have a transverse shielding factor of $> 1 \times 10^{7}$, which is a few orders of magnitude greater than the calculated requirement. Efforts to characterise the shielding of the SQUID chips themselves are also discussed. The use of Cryoperm for shields reveals a tension between improved SQUID noise and worse neutron statistics. Investigations show that without it, SQUIDs have an elevated noise when cooled in a substantial magnetic field; with it, magnetostatic simulations suggest that it is detrimental to the polarisation of neutrons in transport. The findings suggest that with proper consideration, it is possible to reach a compromise between the two behaviours. Computational work to develop a simulation of SQUID data is detailed, which is based on the Laplace equation for the magnetic scalar potential. These data are ultimately used in the development of a linear regression technique to determine the volume-averaged magnetic field in the neutron cells. This proves highly effective in determining the fields to within the $0.1$ pT requirement under certain conditions.
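
    The regression idea in the final sentences can be illustrated with a toy sketch: because the field in a source-free region obeys Laplace's equation, both the SQUID readings and the volume-averaged cell field are linear in a small set of harmonic-expansion coefficients, so the average can be learned from the readings by least squares. All geometry, sensor positions, and noise figures below are invented placeholders, not CryoEDM values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Harmonic basis for one field component B_z(x, y, z): a uniform term plus
# first-order gradients (each term satisfies Laplace's equation).
def basis(pos):
    x, y, z = pos
    return np.array([1.0, x, y, z])

squid_pos = rng.uniform(-0.1, 0.1, size=(12, 3))     # 12 sensor positions (m)
cell_grid = rng.uniform(-0.05, 0.05, size=(500, 3))  # points filling the cell

# Design matrices: readings = S @ coeffs, cell average = a @ coeffs.
S = np.array([basis(p) for p in squid_pos])          # (12, 4)
a = np.array([basis(p) for p in cell_grid]).mean(0)  # (4,)

# Simulate training fields with random coefficients and sensor noise.
coeffs = rng.normal(size=(4, 2000))                  # 2000 field realisations
readings = S @ coeffs + 1e-4 * rng.normal(size=(12, 2000))
avg_true = a @ coeffs

# Least-squares fit of a linear map w: readings -> volume-averaged field.
w, *_ = np.linalg.lstsq(readings.T, avg_true, rcond=None)

# Held-out check of the learned estimator.
test = rng.normal(size=(4, 200))
err = w @ (S @ test) - a @ test
print("rms error:", np.sqrt(np.mean(err**2)))
```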

    Learning and Control of Dynamical Systems

    Despite the remarkable success of machine learning in various domains in recent years, our understanding of its fundamental limitations remains incomplete. This knowledge gap poses a grand challenge when deploying machine learning methods in critical decision-making tasks, where incorrect decisions can have catastrophic consequences. To effectively utilize these learning-based methods in such contexts, it is crucial to explicitly characterize their performance. Over the years, significant research efforts have been dedicated to learning and control of dynamical systems where the underlying dynamics are unknown or only partially known a priori, and must be inferred from collected data. However, many of these classical results have focused on asymptotic guarantees, providing limited insights into the amount of data required to achieve desired control performance while satisfying operational constraints such as safety and stability, especially in the presence of statistical noise. In this thesis, we study the statistical complexity of learning and control of unknown dynamical systems. By utilizing recent advances in statistical learning theory, high-dimensional statistics, and control theoretic tools, we aim to establish a fundamental understanding of the number of samples required to achieve desired (i) accuracy in learning the unknown dynamics, (ii) performance in the control of the underlying system, and (iii) satisfaction of the operational constraints such as safety and stability. We provide finite-sample guarantees for these objectives and propose efficient learning and control algorithms that achieve the desired performance at these statistical limits in various dynamical systems. Our investigation covers a broad range of dynamical systems, starting from fully observable linear dynamical systems to partially observable linear dynamical systems, and ultimately, nonlinear systems. We deploy our learning and control algorithms in various adaptive control tasks in real-world control systems and demonstrate their strong empirical performance along with their learning, robustness, and stability guarantees. In particular, we implement one of our proposed methods, Fourier Adaptive Learning and Control (FALCON), on an experimental aerodynamic testbed under extreme turbulent flow dynamics in a wind tunnel. The results show that FALCON achieves state-of-the-art stabilization performance and consistently outperforms conventional and other learning-based methods by at least 37%, despite using 8 times less data. The superior performance of FALCON arises from its physically and theoretically accurate modeling of the underlying nonlinear turbulent dynamics, which yields rigorous finite-sample learning and performance guarantees. These findings underscore the importance of characterizing the statistical complexity of learning and control of unknown dynamical systems.
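
    The core estimation problem referenced here, learning unknown linear dynamics from a single trajectory, can be sketched in a few lines: the finite-sample results the abstract mentions bound how the error of exactly this kind of least-squares estimator decays with the trajectory length T. The dimensions and noise levels below are arbitrary, and this is background for the setting, not the thesis's FALCON method.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, T = 3, 2, 5000

A = 0.9 * np.eye(n) + 0.05 * rng.normal(size=(n, n))  # stable-ish dynamics
B = rng.normal(size=(n, m))

X = np.zeros((T + 1, n))
U = rng.normal(size=(T, m))          # exciting (random) inputs
for t in range(T):
    w = 0.1 * rng.normal(size=n)     # process noise
    X[t + 1] = A @ X[t] + B @ U[t] + w

# Regress x_{t+1} on [x_t, u_t] to recover [A, B] jointly.
Z = np.hstack([X[:-1], U])           # (T, n + m) regressors
Theta, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
A_hat, B_hat = Theta[:n].T, Theta[n:].T

print("A estimation error:", np.linalg.norm(A_hat - A))
print("B estimation error:", np.linalg.norm(B_hat - B))
```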

    Modeling, control and navigation of aerospace systems


    Probabilistic Inference for Model Based Control

    Robotic systems are essential for enhancing productivity, automation, and performing hazardous tasks. Addressing the unpredictability of physical systems, this thesis advances robotic planning and control under uncertainty, introducing learning-based methods for managing uncertain parameters and adapting to changing environments in real-time. Our first contribution is a framework using Bayesian statistics for likelihood-free inference of model parameters. This allows the use of complex simulators for designing efficient, robust controllers. The method, integrating the unscented transform with a variant of information-theoretic model predictive control, shows better performance in trajectory evaluation compared to Monte Carlo sampling, easing the computational load in various control and robotics tasks. Next, we reframe robotic planning and control as a Bayesian inference problem, focusing on the posterior distribution of actions and model parameters. An implicit variational inference algorithm, performing Stein Variational Gradient Descent, estimates distributions over model parameters and control inputs in real-time. This Bayesian approach effectively handles complex multi-modal posterior distributions, vital for dynamic and realistic robot navigation. Finally, we tackle diversity in high-dimensional spaces. Our approach mitigates the underestimation of uncertainty in posterior distributions, which leads to locally optimal solutions. Using the theory of rough paths, we develop an algorithm for parallel trajectory optimisation, enhancing solution diversity and avoiding mode collapse. This method extends our variational inference approach for trajectory estimation, employing diversity-enhancing kernels and leveraging path signature representation of trajectories. Empirical tests, ranging from 2-D navigation to robotic manipulators in cluttered environments, affirm our method's efficiency, outperforming existing alternatives.
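
    For readers unfamiliar with the inference engine named above, the following is a minimal Stein Variational Gradient Descent sketch on a toy 2-D Gaussian target; in the thesis the particles live over control inputs and model parameters instead. The kernel bandwidth heuristic, step size, and iteration count are illustrative choices, not the thesis's settings.

```python
import numpy as np

rng = np.random.default_rng(2)

def grad_log_p(x):
    """Score of the target density, here a standard 2-D Gaussian: -x."""
    return -x

def rbf(X):
    """RBF kernel matrix and gradients, with a median bandwidth heuristic."""
    d = X[:, None, :] - X[None, :, :]          # (n, n, 2) pairwise diffs
    sq = (d ** 2).sum(-1)
    h2 = np.median(sq) / np.log(len(X) + 1)    # bandwidth heuristic
    K = np.exp(-sq / h2)
    gK = -2.0 * d / h2 * K[:, :, None]         # grad_{x_i} k(x_i, x_j)
    return K, gK

X = rng.normal(loc=3.0, size=(100, 2))         # particles, start off-target
for _ in range(2000):
    K, gK = rbf(X)
    # SVGD update: kernel-weighted attraction along the score, plus a
    # repulsive term that keeps the particle set spread out.
    phi = (K @ grad_log_p(X) + gK.sum(axis=0)) / len(X)
    X += 0.3 * phi

print("particle mean (should be near 0):", X.mean(axis=0).round(2))
print("particle std  (should be near 1):", X.std(axis=0).round(2))
```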

    Computational Approaches to Drug Profiling and Drug-Protein Interactions

    Despite substantial increases in R&D spending within the pharmaceutical industry, de novo drug design has become a time-consuming endeavour. High attrition rates have led to a long period of stagnation in drug approvals. Due to the extreme costs associated with introducing a drug to the market, locating and understanding the reasons for clinical failure is key to future productivity. As part of this PhD, three main contributions were made in this respect. First, the web platform LigNFam enables users to interactively explore similarity relationships between ‘drug-like’ molecules and the proteins they bind. Secondly, two deep-learning-based binding-site comparison tools were developed, competing with the state-of-the-art over benchmark datasets. The models have the ability to predict off-target interactions and potential candidates for target-based drug repurposing. Finally, the open-source ScaffoldGraph software was presented for the analysis of hierarchical scaffold relationships and has already been used in multiple projects, including integration into a virtual screening pipeline to increase the tractability of ultra-large screening experiments. Together, and with existing tools, the contributions made will aid in the understanding of drug-protein relationships, particularly in the fields of off-target prediction and drug repurposing, helping to design better drugs faster.
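
    The scaffold idea behind ScaffoldGraph can be illustrated with RDKit, the cheminformatics toolkit on which it is built: strip each molecule to its Murcko scaffold, then group molecules that share one. The sketch below is not the ScaffoldGraph API itself, and the example molecules are arbitrary.

```python
from rdkit import Chem
from rdkit.Chem.Scaffolds import MurckoScaffold

smiles = {
    "caffeine":     "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
    "theophylline": "Cn1c(=O)c2[nH]cnc2n(C)c1=O",
    "ibuprofen":    "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
}

scaffolds = {}
for name, smi in smiles.items():
    mol = Chem.MolFromSmiles(smi)
    scaf = MurckoScaffold.GetScaffoldForMol(mol)  # ring systems + linkers
    scaffolds[name] = Chem.MolToSmiles(scaf)      # canonical scaffold SMILES
    print(f"{name:13s} -> {scaffolds[name]}")

# Molecules sharing a canonical scaffold SMILES sit in the same branch of a
# scaffold hierarchy; caffeine and theophylline should match here.
print(scaffolds["caffeine"] == scaffolds["theophylline"])
```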

    Reinforcement learning in large state action spaces

    Reinforcement learning (RL) is a promising framework for training intelligent agents which learn to optimize long term utility by directly interacting with the environment. Creating RL methods which scale to large state-action spaces is a critical problem towards ensuring real world deployment of RL systems. However, several challenges limit the applicability of RL to large scale settings. These include difficulties with exploration, low sample efficiency, computational intractability, task constraints like decentralization and lack of guarantees about important properties like performance, generalization and robustness in potentially unseen scenarios. This thesis is motivated by the need to bridge the aforementioned gaps. We propose several principled algorithms and frameworks for studying and addressing the above challenges in RL. The proposed methods cover a wide range of RL settings (single and multi-agent systems (MAS) with all the variations in the latter, prediction and control, model-based and model-free methods, value-based and policy-based methods). In this work we propose the first results on several different problems: e.g. tensorization of the Bellman equation which allows exponential sample efficiency gains (Chapter 4), provable suboptimality arising from structural constraints in MAS (Chapter 3), combinatorial generalization results in cooperative MAS (Chapter 5), generalization results on observation shifts (Chapter 7), learning deterministic policies in a probabilistic RL framework (Chapter 6). Our algorithms exhibit provably enhanced performance and sample efficiency along with better scalability. Additionally, we also shed light on generalization aspects of the agents under different frameworks. These properties have been driven by the use of several advanced tools (e.g. statistical machine learning, state abstraction, variational inference, tensor theory). In summary, the contributions in this thesis significantly advance progress towards making RL agents ready for large scale, real world applications.
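
    As background for the tensorization result mentioned above, here is the plain tabular Bellman recursion (standard value iteration) that it builds on; the tensorised variant itself is not reproduced here, and the random MDP is a stand-in for any finite state-action space.

```python
import numpy as np

rng = np.random.default_rng(3)
S, A, gamma = 20, 4, 0.95

P = rng.dirichlet(np.ones(S), size=(S, A))  # P[s, a] = next-state distribution
R = rng.uniform(size=(S, A))                # immediate rewards

V = np.zeros(S)
for _ in range(1000):
    # Bellman optimality backup: Q(s, a) = R(s, a) + gamma * E[V(s')].
    Q = R + gamma * P @ V
    V_new = Q.max(axis=1)
    if np.abs(V_new - V).max() < 1e-8:      # sup-norm convergence check
        break
    V = V_new

policy = Q.argmax(axis=1)                   # greedy policy w.r.t. Q
print("optimal values:", V.round(3))
```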

    On factor models for high-dimensional time series

    The aim of this thesis is to develop statistical methods for use with factor models for high-dimensional time series. We consider three broad areas: estimation, changepoint detection, and determination of the number of factors. In Chapter 1, we sketch the backdrop for our thesis and review key aspects of the literature. In Chapter 2, we develop a method to estimate the factors and parameters in an approximate dynamic factor model. Specifically, we present a spectral expectation-maximisation (or "spectral EM") algorithm, whereby we derive the E and M step equations in the frequency domain. Our E step relies on the Wiener-Kolmogorov smoother, the frequency domain counterpart of the Kalman smoother, and our M step is based on maximisation of the Whittle Likelihood with respect to the parameters of the model. We initialise our procedure using dynamic principal components analysis (or "dynamic PCA"), and by leveraging results on lag-window estimators of spectral density by Wu and Zaffaroni (2018), we establish consistency-with-rates of our spectral EM estimator of the parameters and factors as both the dimension (N) and the sample size (T) go to infinity. We find rates commensurate with the literature. Finally, we conduct a simulation study to numerically validate our theoretical results. In Chapter 3, we develop a sequential procedure to detect changepoints in an approximate static factor model. Specifically, we define a ratio of eigenvalues of the covariance matrix of N observed variables. We compute this ratio each period using a rolling window of size m over time, and declare a changepoint when its value breaches an alarm threshold. We investigate the asymptotic behaviour (as $N, m \to \infty$) of our ratio, and prove that, for specific eigenvalues, the ratio will spike upwards when a changepoint is encountered but not otherwise. We use a block-bootstrap to obtain alarm thresholds. We present simulation results and an empirical application based on Financial Times Stock Exchange 100 Index (or "FTSE 100") data. In Chapter 4, we conduct an exploratory analysis which aims to extend the randomised sequential procedure of Trapani (2018) into the frequency domain. Specifically, we aim to estimate the number of dynamically loaded factors by applying the test of Trapani (2018) to eigenvalues of the estimated spectral density matrix (as opposed to the covariance matrix) of the data.
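
    The Chapter 3 monitoring scheme can be caricatured in a few lines: track an eigenvalue ratio of a rolling covariance matrix and raise an alarm when it spikes. The toy below uses a crude fixed threshold in place of the thesis's block-bootstrap calibration, and all sizes, loadings, and the changepoint location are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
N, T, m = 50, 400, 60                      # series, sample size, window

lam = rng.normal(size=N)                   # loadings on one common factor
f = rng.normal(size=T)                     # factor; a second factor enters
f2 = rng.normal(size=T) * (np.arange(T) >= 250)  # ... at t = 250
lam2 = rng.normal(size=N)
X = np.outer(f, lam) + np.outer(f2, lam2) + 0.5 * rng.normal(size=(T, N))

ratios = []
for t in range(m, T):
    C = np.cov(X[t - m:t].T)               # rolling N x N covariance
    ev = np.sort(np.linalg.eigvalsh(C))[::-1]
    ratios.append(ev[1] / ev[2])           # 2nd-to-3rd eigenvalue ratio

ratios = np.array(ratios)
alarm = m + int(np.argmax(ratios > 3.0))   # crude fixed alarm threshold
print("first alarm (shortly after t = 250):", alarm)
```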