5,432 research outputs found
Classical and quantum algorithms for scaling problems
This thesis is concerned with scaling problems, which have a plethora of connections to different areas of mathematics, physics and computer science. Although many structural aspects of these problems are understood by now, we only know how to solve them efficiently in special cases.

We give new algorithms for non-commutative scaling problems with complexity guarantees that match the prior state of the art. To this end, we extend the well-known (self-concordance based) interior-point method (IPM) framework to Riemannian manifolds, motivated by its success in the commutative setting. Moreover, the IPM framework does not obviously suffer from the same obstructions to efficiency as previous methods. It also yields the first high-precision algorithms for other natural geometric problems in non-positive curvature.

For the (commutative) problems of matrix scaling and balancing, we show that quantum algorithms can outperform the (already very efficient) state-of-the-art classical algorithms. Their time complexity can be sublinear in the input size; in certain parameter regimes they are also optimal, whereas in others we show no quantum speedup over the classical methods is possible. Along the way, we provide improvements over the long-standing state of the art for searching for all marked elements in a list, and for computing the sum of a list of numbers.

We identify a new application in the context of tensor networks for quantum many-body physics. We define a computable canonical form for uniform projected entangled pair states (as the solution to a scaling problem), circumventing previously known undecidability results. We also show, by characterizing the invariant polynomials, that the canonical form is determined by evaluating the tensor network contractions on networks of bounded size.
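The commutative matrix-scaling problem mentioned above has a classical iterative baseline that is useful for intuition: Sinkhorn's algorithm alternately normalizes the rows and columns of a positive matrix until it is (approximately) doubly stochastic. The sketch below shows only this classical baseline, not the quantum or interior-point algorithms of the thesis; the input matrix and iteration count are made up.

```python
def sinkhorn(a, iters=500):
    """Alternately normalize rows and columns of a strictly positive matrix.

    For strictly positive input this converges to a doubly stochastic matrix
    (all row and column sums equal to 1).
    """
    n = len(a)
    m = [row[:] for row in a]
    for _ in range(iters):
        for i in range(n):                                   # row normalization
            s = sum(m[i])
            m[i] = [v / s for v in m[i]]
        for j in range(n):                                   # column normalization
            s = sum(m[i][j] for i in range(n))
            for i in range(n):
                m[i][j] /= s
    return m

b = sinkhorn([[1.0, 2.0], [3.0, 4.0]])
row_sums = [sum(row) for row in b]
col_sums = [sum(b[i][j] for i in range(2)) for j in range(2)]
```

Each full sweep costs O(n²), which is exactly the regime where the sublinear quantum algorithms discussed above become interesting.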
Technology for Low Resolution Space Based RSO Detection and Characterisation
Space Situational Awareness (SSA) refers to all activities to detect, identify and track objects in Earth orbit. SSA is critical to all current and future space activities and protects space assets by providing access control, conjunction warnings, and monitoring of the status of active satellites. Current SSA methods and infrastructure are not sufficient to account for the proliferation of space debris. In response to the need for better SSA, many different areas of research have sought to improve it, most requiring dedicated ground- or space-based infrastructure. In this thesis, a novel approach for the characterisation of RSOs (Resident Space Objects) from passive low-resolution space-based sensors is presented, along with all the background work performed to enable this novel method. Low-resolution space-based sensors are common on current satellites; with so many of these sensors already in space, using them passively to detect RSOs can greatly augment SSA without expensive infrastructure or long lead times. One of the largest hurdles for research in this area is the lack of publicly available labelled data with which to test and confirm results. To overcome this hurdle, a simulation software package, ORBITALS, was created. To verify and validate the ORBITALS simulator, it was compared with images from the Fast Auroral Imager, one of the only publicly available sources of low-resolution space-based images with auxiliary data. During the development of the ORBITALS simulator it was found that generating these simulated images is computationally intensive when propagating the entire space catalog. To overcome this, the currently used propagation method, Simplified General Perturbations 4 (SGP4), was upgraded to run in parallel, reducing the computational time required to propagate entire catalogs of RSOs.
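Catalog propagation of the kind described above is embarrassingly parallel: each RSO's state at a given epoch can be computed independently. The sketch below illustrates the pattern with a toy circular two-body propagator standing in for SGP4 (the real model takes TLE orbital elements and includes perturbations); the catalog values and worker count are made up, and in CPython a thread pool only yields a real speedup when the propagator releases the GIL, so in practice a process pool or a compiled SGP4 implementation would be used.

```python
import math
from concurrent.futures import ThreadPoolExecutor

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def propagate(sma_km, t_s):
    """Toy two-body propagation of a circular orbit.

    Returns the in-plane position (x, y) in km after t_s seconds.
    Stand-in for a real SGP4 call; illustrative only.
    """
    n = math.sqrt(MU / sma_km ** 3)   # mean motion, rad/s
    theta = n * t_s
    return (sma_km * math.cos(theta), sma_km * math.sin(theta))

# made-up catalog of semi-major axes for 1000 objects
catalog = [7000.0 + 10.0 * k for k in range(1000)]

# propagate the whole catalog one hour forward, in parallel
with ThreadPoolExecutor(max_workers=8) as pool:
    states = list(pool.map(lambda a: propagate(a, 3600.0), catalog))
```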
From the results it was found that the standard facet model with particle swarm optimisation performed best, estimating an RSO's attitude with 0.66 degrees RMSE across a sequence and ~1% MAPE for the optical properties. This accomplishes the thesis goal of demonstrating the feasibility of low-resolution passive RSO characterisation from space-based platforms in a simulated environment.
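Particle swarm optimisation, used above for attitude estimation, is a population-based search: each particle moves under attraction toward its own best position and the swarm's global best. A minimal generic sketch on a toy quadratic cost (not a facet reflectance model; all hyperparameters and the target point are illustrative):

```python
import random

def pso(f, dim, n=30, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Minimal particle swarm optimisation minimizing f over [lo, hi]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    gi = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[gi][:], pval[gi]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive pull
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social pull
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:             # update personal best
                pval[i], pbest[i] = v, pos[i][:]
                if v < gval:            # update global best
                    gval, gbest = v, pos[i][:]
    return gbest, gval

# toy cost: squared distance to a hypothetical optimum at (2.0, 2.0)
best, cost = pso(lambda p: sum((x - 2.0) ** 2 for x in p), dim=2)
```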
The future of cosmology? A case for CMB spectral distortions
This thesis treats the topic of CMB Spectral Distortions (SDs), which
represent any deviation from a pure black body shape of the CMB energy
spectrum. As such, they can be used to probe the inflationary, expansion and
thermal history of the universe, both within ΛCDM and beyond it. The
currently missing observation of this rich probe of the universe makes it an
ideal target for future observational campaigns. In fact, while the
ΛCDM signal guarantees a discovery, the sensitivity to a wide variety
of new physics opens the door to an enormous uncharted territory. In light of
these considerations, the thesis opens by reviewing the topic of CMB SDs in a
pedagogical and illustrative fashion, aimed at sparking the interest of the
broader community. This introductory premise sets the stage for the first main
contribution of the thesis to the field of SDs: their implementation in the
Boltzmann solver CLASS and the parameter inference code MontePython. The
CLASS+MontePython pipeline is publicly available and fast; it includes all sources
of SDs within ΛCDM and many others beyond that, and allows one to
consistently account for any observational setup. By means of these numerical
tools, the second main contribution of the thesis consists in showcasing the
versatility and competitiveness of SDs for several cosmological models as well
as for a number of different mission designs. Among others, the results cover
features in the primordial power spectrum, primordial gravitational waves,
non-standard dark matter properties, primordial black holes, primordial
magnetic fields and the Hubble tension. Finally, the manuscript is interspersed
with 20 follow-up ideas that naturally extend the work carried out so far,
highlighting how rich in unexplored possibilities the field of CMB SDs still
is. The hope is that these suggestions will become a propeller for further
interesting developments.
Comment: PhD thesis. Pedagogical review of theory, experimental status and
numerical tools (CLASS+MontePython) with broad overview of applications.
Includes 20 original follow-up ideas.
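As a concrete taste of the SD phenomenology the thesis reviews: a Compton-y distortion shifts the CMB temperature by ΔT/T = y (x coth(x/2) − 4) with x = hν/k_B T, a shape that vanishes at the well-known null near 217 GHz. The short sketch below locates that null by bisection; the physical constants are CODATA values, and only the non-relativistic y-distortion shape is shown.

```python
import math

H = 6.62607015e-34     # Planck constant, J s
KB = 1.380649e-23      # Boltzmann constant, J/K
TCMB = 2.725           # CMB monopole temperature, K

def y_shape(x):
    """Spectral shape of the Compton-y distortion in temperature units:
    x*coth(x/2) - 4, with x = h*nu / (k_B * T)."""
    return x * (math.exp(x) + 1.0) / (math.exp(x) - 1.0) - 4.0

# bisection for the null of the y-distortion between x = 1 and x = 10
lo, hi = 1.0, 10.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if y_shape(mid) < 0.0:
        lo = mid
    else:
        hi = mid
x_null = 0.5 * (lo + hi)
nu_null_ghz = x_null * KB * TCMB / H / 1e9   # ~217 GHz crossover frequency
```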
Integrated Geophysical Analysis of Passive Continental Margins: Insights into the Crustal Structure of the Namibian Margin from Magnetotelluric, Gravity, and Seismic Data
Passive continental margin research amalgamates the investigation of many broad topics, such as the emergence of oceanic crust, lithospheric stress patterns and plume-lithosphere interaction, reservoir potential, the methane cycle, and general global geodynamics. Central tasks in this field of research are geophysical investigations of the structure, composition, and dynamics of the passive margin crust and upper mantle. A key practice to improve geophysical models and their interpretation is the integrated analysis of multiple data sets, or the integration of complementary models and data. In this thesis, I compare four different inversion results based on data from the Namibian passive continental margin. These are a) a single-method MT inversion; b) a constrained inversion of MT data, cross-gradient coupled with a fixed structural density model; c) a cross-gradient coupled joint inversion of MT and satellite gravity data; and d) a constrained inversion of MT data, cross-gradient coupled with a fixed gradient velocity model. To bridge the formal analysis of geophysical models with geological interpretations, I define a link between the physical parameter models and geological units. To this end, the results from the joint MT and gravity inversion (c) are correlated through a user-unbiased clustering analysis. This clustering analysis reveals a distinct difference in the signature of the transitional crust south of and along the supposed hot-spot track Walvis Ridge. I ascribe this contrast to an increase in magmatic activity above the volcanic center along Walvis Ridge. Furthermore, the analysis helps to clearly identify areas of interlayered massive and weathered volcanic flows, which are usually only identified in reflection seismic studies as seaward-dipping reflectors. Lastly, the clustering helps to differentiate two types of sediment cover: one of thick, near-shore clastic sediments, and one of more biogenic marine sediments located further offshore.
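The "user-unbiased clustering analysis" above groups model cells by their joint physical parameters so each cluster can be read as a geological unit. The abstract does not name the algorithm, so the sketch below uses plain k-means as a generic stand-in, on made-up (log-resistivity, density) feature pairs forming two synthetic "units".

```python
def kmeans(points, k, iters=50):
    """Plain k-means; centers seeded with the first k points (deterministic)."""
    centers = [list(p) for p in points[:k]]
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:                      # assignment step
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[j].append(p)
        for j, g in enumerate(groups):        # update step (skip empty clusters)
            if g:
                centers[j] = [sum(q[d] for q in g) / len(g)
                              for d in range(len(points[0]))]
    return centers, groups

# hypothetical (log10 resistivity, density g/cm^3) samples for two crustal units
unit_a = [(1.0, 2.7), (1.2, 2.6), (0.9, 2.8), (1.1, 2.7)]
unit_b = [(3.0, 3.0), (3.2, 3.1), (2.9, 2.9), (3.1, 3.0)]
points = [unit_a[0], unit_b[0]] + unit_a[1:] + unit_b[1:]

centers, groups = kmeans(points, k=2)
```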
Precision Studies of QCD in the Low Energy Domain of the EIC
The manuscript focuses on the high-impact science of the EIC, with the objective
of identifying a portion of the science program for QCD precision studies that
requires or greatly benefits from high luminosity and low center-of-mass
energies. The science topics include (1) Generalized Parton Distributions, 3D
imaging and mechanical properties of the nucleon, (2) mass and spin of the
nucleon, (3) momentum dependence of the nucleon in semi-inclusive deep inelastic
scattering, (4) exotic meson spectroscopy, (5) science highlights of nuclei, (6)
precision studies of Lattice QCD in the EIC era, (7) science of far-forward
particle detection, (8) radiative effects and corrections, (9) Artificial
Intelligence, and (10) EIC interaction regions for a high-impact science program with
discovery potential. This paper documents the scientific basis for supporting
such a program and helps to define the path toward the realization of the
second EIC interaction region.
Comment: 103 pages, 47 figures.
Study of the tracking performance of a liquid Argon detector based on a novel optical imaging concept
The Deep Underground Neutrino Experiment (DUNE) is a long-baseline accelerator experiment designed to make a significant contribution to the study of neutrino oscillations with unprecedented sensitivity. The main goal of DUNE is the determination of the neutrino mass ordering and the leptonic CP violation phase, key parameters of the
three-neutrino flavor mixing that have yet to be determined. An important component of the DUNE Near Detector complex is the System for on-Axis Neutrino Detection (SAND) apparatus, which will include GRAIN (GRanular Argon for Interactions of Neutrinos), a novel liquid Argon detector aimed at imaging neutrino interactions using only scintillation light. For this purpose, an innovative optical readout system based on Coded Aperture Masks is investigated. This dissertation aims to demonstrate the feasibility of reconstructing particle tracks and the topology of CCQE (Charged Current Quasi Elastic) neutrino events in GRAIN with such a technique. To this end, the development and implementation of a reconstruction algorithm based on Maximum Likelihood Expectation Maximization was carried out to directly obtain a three-dimensional distribution proportional to the energy deposited by charged particles crossing the LAr volume. This study includes the evaluation of the design of several camera configurations and the simulation of a multi-camera optical system in GRAIN.
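The Maximum Likelihood Expectation Maximization update mentioned above has a compact multiplicative form: with a system matrix A mapping voxel intensities x to expected camera counts, each iteration sets x ← (x / A^T 1) ⊙ A^T (y / Ax), which preserves non-negativity. Below is a toy matrix-form sketch with a made-up 3-voxel, 3-measurement system; the real GRAIN reconstruction works on a 3D voxel grid with the simulated optical response of the Coded Aperture cameras.

```python
def mlem(A, y, iters=3000):
    """MLEM for y ≈ A @ x with x ≥ 0 (Poisson maximum likelihood)."""
    nd, nv = len(A), len(A[0])
    sens = [sum(A[i][j] for i in range(nd)) for j in range(nv)]  # A^T 1
    x = [1.0] * nv                                               # flat start
    for _ in range(iters):
        proj = [sum(A[i][j] * x[j] for j in range(nv)) for i in range(nd)]
        ratio = [y[i] / proj[i] for i in range(nd)]              # measured/predicted
        back = [sum(A[i][j] * ratio[i] for i in range(nd)) for j in range(nv)]
        x = [x[j] * back[j] / sens[j] for j in range(nv)]        # multiplicative update
    return x

# hypothetical system matrix and noiseless data y = A @ x_true
A = [[1.0, 1.0, 0.0],
     [0.0, 1.0, 1.0],
     [1.0, 0.0, 1.0]]
x_true = [2.0, 1.0, 3.0]
y = [sum(A[i][j] * x_true[j] for j in range(3)) for i in range(3)]

x_hat = mlem(A, y)
```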
Contributions to improve the technologies supporting unmanned aircraft operations
International Mention in the doctoral degree.
Unmanned Aerial Vehicles (UAVs), in their smaller versions known as drones, are becoming increasingly important in today's societies. The systems that make them up present a multitude of challenges, of which error can be considered the common denominator. The perception of the environment is measured by sensors that have errors; the models that interpret the information and/or define behaviors are approximations of the world and therefore also have errors. Explaining error allows extending the limits of deterministic models to address real-world problems. The performance of the technologies embedded in drones depends on our ability to understand, model, and control the error of the systems that integrate them, as well as of new technologies that may emerge.
Flight controllers integrate various subsystems that are generally dependent on other systems. One example is the guidance system. These systems provide the engine's propulsion controller with the information necessary to accomplish a desired mission. For this purpose, the flight controller is made up of a control law for the guidance system that reacts to the information perceived by the perception and navigation systems. The error of any of the subsystems propagates through the ecosystem of the controller, so the study of each of them is essential.
On the other hand, among the strategies for error control are state-space estimators, where the Kalman filter has been a great ally of engineers since its appearance in the 1960s. Kalman filters are at the heart of information fusion systems, minimizing the error covariance of the system and allowing the measured states to be filtered and estimated in the absence of observations. State Space Models (SSMs) are developed based on a set of hypotheses for modeling the world. Among the assumptions are that the models of the world must be linear and Markovian, and that the error of those models must be Gaussian. In general, systems are not linear, so linearizations are performed on models that are already approximations of the world. In other cases, the noise to be controlled is not Gaussian, but it is approximated by that distribution in order to deal with it. Moreover, many systems are not Markovian, i.e., their states do not depend only on the previous state; there are other dependencies that state-space models cannot handle.
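The Kalman filter described above can be made concrete in the scalar case: predict the state and its variance, then correct with the innovation weighted by the Kalman gain. A minimal sketch under the stated linear-Gaussian assumptions (a random-walk state observed in noise; all numbers are illustrative, not from the thesis):

```python
def kalman_1d(zs, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a near-constant state observed in noise.

    q: process-noise variance, r: measurement-noise variance,
    x0/p0: prior mean and variance. Returns the filtered estimates.
    """
    x, p, out = x0, p0, []
    for z in zs:
        p = p + q                  # predict: random-walk state model
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with the innovation z - x
        p = (1.0 - k) * p          # posterior variance
        out.append(x)
    return out

# noisy measurements of a true value of 5.0 (made-up data)
zs = [5.3, 4.8, 5.1, 4.9, 5.2, 5.0, 4.7, 5.1, 5.05, 4.95]
est = kalman_1d(zs)
```

With a small process noise q the filter behaves like a recursive averager: the gain shrinks as the variance p contracts, so later measurements move the estimate less.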
This thesis presents a collection of studies in which error is formulated and reduced. First, the error in a computer vision-based precision landing system is studied; then, estimation and filtering problems are addressed from the deep learning approach. Finally, classification concepts with deep learning over trajectories are studied. The first case of the collection studies the consequences of error propagation in a machine vision-based precision landing system, and proposes a set of strategies to reduce the impact on the guidance system and ultimately reduce the error. The next two studies approach the estimation and filtering problem from the deep learning perspective, where error is a function to be minimized by learning. The last case of the collection deals with a trajectory classification problem with real data. This work completes the two main fields in deep learning, regression and classification, where the error is considered as a probability function of class membership.
I would like to thank the Ministry of Science and Innovation for granting me the funding with reference PRE2018-086793, associated to the project TEC2017-88048-C2-2-R, which provided me the opportunity to carry out all my PhD activities, including completing an international research internship.
Programa de Doctorado en Ciencia y Tecnología Informática por la Universidad Carlos III de Madrid
President: Antonio Berlanga de Jesús. Secretary: Daniel Arias Medina. Panel member: Alejandro Martínez Cav
On astrophysical solutions in the constructive gravity program and cosmological tests for weakly birefringent spacetime
Via gravitational closure [Dü+18; Wol22; Due20; Wie18], it has been shown how gravitational
theories can be systematically constructed from the matter content of spacetime. While
this successfully reproduces general relativity for metric spacetimes, finding a solution for the
simplest generalization of Maxwell electrodynamics with a vacuum-birefringence-allowing
area-metric structure has in general not been possible so far. For highly symmetric FLRW
spacetimes, a metric as well as an area-metric solution could be derived [Due20; Fis17].
Building on this result, the constructive gravity program is applied to spherically symmetric,
stationary metric spacetimes. Furthermore, a corresponding ansatz is worked out for
area-metric geometries, and the difficulties that arise in finding a corresponding
solution are discussed. Moreover, the Etherington distance duality is violated in the case of weakly area-metric
gravitation [Sch+17; Ale20b; SW17], and this violation is investigated with weak
gravitational lensing experiments. The observable is the surface brightness, which is, however,
heavily influenced by astrophysical processes such as the physical interaction of galaxies with tidal
fields. Beyond that, it is studied how galaxies also get bent by tidal interactions and how
strong this effect is compared to its analog in gravitational lensing.
Statistical and Computational Guarantees for Influence Diagnostics
Influence diagnostics such as influence functions and approximate maximum
influence perturbations are popular in machine learning and in AI domain
applications. Influence diagnostics are powerful statistical tools to identify
influential datapoints or subsets of datapoints. We establish finite-sample
statistical bounds, as well as computational complexity bounds, for influence
functions and approximate maximum influence perturbations using efficient
inverse-Hessian-vector product implementations. We illustrate our results with
generalized linear models and large attention-based models on synthetic and
real data.
Comment: For AISTATS 2023. Software see
https://github.com/jfisher52/influence_theor
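The inverse-Hessian-vector products at the core of the diagnostics above can be sketched in miniature: for a ridge-regression model, the influence of a training point z on the loss at a test point is −∇ℓ(z_test)ᵀ H⁻¹ ∇ℓ(z), where H⁻¹v is obtained by conjugate gradients using only Hessian-vector products, never forming H. The toy version below uses made-up data and a made-up λ; the paper's actual setting (GLMs and attention models, with finite-sample bounds) is far more general.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def axpy(a, u, v):                          # a*u + v, elementwise
    return [a * ui + vi for ui, vi in zip(u, v)]

def hvp(v, X, lam):
    """Hessian-vector product for ridge loss: (1/n) X^T X v + lam * v."""
    n = len(X)
    out = [lam * vi for vi in v]
    for x in X:
        c = dot(x, v) / n
        out = [o + c * xi for o, xi in zip(out, x)]
    return out

def cg_solve(b, X, lam, iters=100, tol=1e-20):
    """Solve H u = b by conjugate gradients, using only HVPs."""
    u = [0.0] * len(b)
    r, p = list(b), list(b)
    rs = dot(r, r)
    for _ in range(iters):
        hp = hvp(p, X, lam)
        alpha = rs / dot(p, hp)
        u = axpy(alpha, p, u)
        r = axpy(-alpha, hp, r)
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = axpy(rs_new / rs, p, r)         # p = r + beta * p
        rs = rs_new
    return u

def grad_point(w, x, yi):
    """Gradient of the squared loss at one point: (w.x - yi) * x."""
    c = dot(w, x) - yi
    return [c * xi for xi in x]

# made-up 2D ridge problem
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]]
y = [1.0, 2.0, 3.5, -0.5]
lam = 0.1
n = len(X)

# fit: solve ((1/n) X^T X + lam I) w = (1/n) X^T y with the same CG routine
b = [sum(X[i][d] * y[i] for i in range(n)) / n for d in range(2)]
w_hat = cg_solve(b, X, lam)

# self-influence of training point 2: -g^T H^{-1} g, necessarily <= 0
g = grad_point(w_hat, X[2], y[2])
influence = -dot(g, cg_solve(g, X, lam))
```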