LIPIcs, Volume 251, ITCS 2023, Complete Volume
An intensity triplet for the prediction of systematic InSAR closure phases
Thesis (M.S.), University of Alaska Fairbanks, 2023.
Synthetic Aperture Radar (SAR), a microwave-based active remote sensing technique, has a rich and still-developing history. Because such platforms can measure both the phase and intensity of the reflected signal, interferometric SAR (InSAR) has proliferated and allowed geodesists to measure topography and millimeter-to-centimeter scale deformations of the Earth's surface from space. Applications of InSAR range from measuring the inflation of volcanoes caused by magma movement to measuring the subsidence in permafrost environments caused by the thawing of ground ice. Advancements in InSAR time series algorithms and speckle models have allowed us to image such movements at increasingly high precision. However, analysis of closure phases (or phase triplets), a quantification of inconsistencies thought to be caused by speckle, reveals systematic behaviors across many environments. Systematic closure phases have been linked to changes in the dielectric constant of the soil (generally thought to be a result of soil moisture changes), but existing models require strong constraints on structure and sensitivity to moisture content. To overcome this obstacle and decompose the closure phase into a systematic and a stochastic part, we present a data-driven approach based on the SAR intensities. Intensity observations are also sensitive to surface dielectric changes. Thus, we have constructed an intensity triplet that mimics the algebraic structure of the closure phase. A regression between such triplets allows us to predict the systematic part of the closure phase, which is associated with dielectric changes. We estimate the corresponding phase errors using a minimum-norm inversion of the systematic closure phases to inspect the impact of such systematic closure phases on deformation measurements.
Correction of these systematic closure phases that correlate with our intensity triplet can account for millimeter-scale fluctuations of the deformation time series. In permafrost environments, they can also account for displacement rate biases of up to a millimeter a month. In semi-arid environments, these differences are generally an order of magnitude smaller and are less likely to lead to displacement rate biases. From nearby meteorological stations, we attribute these errors to snowfall, freeze-thaw cycles, and seasonal moisture trends. This kind of analysis shows great potential for correcting the temporal inconsistencies in InSAR phases related to dielectric changes and enabling even finer deformation measurements, particularly in permafrost tundra.
Chapter 1. Introduction. Chapter 2. InSAR theory -- 2.1. Forming an interferogram -- 2.2. Time series estimation -- 2.3. Closure phases. Chapter 3. Predicting and removing systematic phase closures -- 3.1. An intensity triplet -- 3.2. Predicting systematic closure phases -- 3.2.1. Model -- 3.2.2. Parameter estimation -- 3.3. Significance testing -- 3.4. Inversion. Chapter 4. Data and preprocessing -- 4.1. Las Vegas, NV -- 4.2. Dalton Highway, AK -- 4.3. Ancillary processing. Chapter 5. Results -- 5.1. Overview -- 5.2. Coefficient of determination -- 5.3. Slope estimates -- 5.4. Intercept estimates -- 5.5. Impacts on deformation estimates. Chapter 6. Discussion -- 6.1. Variability in R2 and slope estimates -- 6.2. Implications for deformation estimates -- 6.3. Implications for observations of land surface properties -- 6.4. Unexplained systematic closure phases -- 6.5. Model improvements. Chapter 7. Conclusion -- References -- Appendices
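The closure phase at the heart of this abstract can be sketched numerically. The snippet below is a minimal illustration, not the thesis code: it builds three correlated speckle fields (all names and parameters hypothetical), forms multilooked interferograms, and shows that the closure phase is identically zero without multilooking but generally nonzero after spatial averaging, which is what makes it a useful diagnostic of speckle inconsistency.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # pixels in one multilook window (hypothetical scene)

# Correlated circular-Gaussian speckle for three acquisitions.
base = rng.normal(size=n) + 1j * rng.normal(size=n)
def noise():
    return 0.5 * (rng.normal(size=n) + 1j * rng.normal(size=n))
z1, z2, z3 = base + noise(), base + noise(), base + noise()

def closure_phase(z1, z2, z3):
    """Multilooked closure phase: arg(C12 * C23 * conj(C13))."""
    c12 = np.mean(z1 * np.conj(z2))
    c23 = np.mean(z2 * np.conj(z3))
    c13 = np.mean(z1 * np.conj(z3))
    return np.angle(c12 * c23 * np.conj(c13))

# Without multilooking, the per-pixel triple product
# z1 z2* . z2 z3* . (z1 z3*)* = |z1|^2 |z2|^2 |z3|^2 is real,
# so the single-look closure phase vanishes identically.
single = np.angle(z1 * np.conj(z2) * z2 * np.conj(z3) * np.conj(z1 * np.conj(z3)))
print(np.max(np.abs(single)))     # ~0 (machine precision)
print(closure_phase(z1, z2, z3))  # generally nonzero after averaging
```

The regression step in the thesis then relates such closure phases to an analogously constructed combination of the three intensities.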
Validation of semi-analytical, semi-empirical covariance matrices for two-point correlation function for Early DESI data
We present an extended validation of semi-analytical, semi-empirical
covariance matrices for the two-point correlation function (2PCF) on simulated
catalogs representative of Luminous Red Galaxies (LRG) data collected during
the initial two months of operations of the Stage-IV ground-based Dark Energy
Spectroscopic Instrument (DESI). We run the pipeline on multiple extended
Zel'dovich (EZ) mock galaxy catalogs with the corresponding cuts applied and
compare the results with the mock sample covariance to assess the accuracy and
its fluctuations. We propose an extension of the previously developed formalism
for catalogs processed with standard reconstruction algorithms. We consider
methods for comparing covariance matrices in detail, highlighting their
interpretation and statistical properties caused by sample variance, in
particular, nontrivial expectation values of certain metrics even when the
external covariance estimate is perfect. With improved mocks and validation
techniques, we confirm a good agreement between our predictions and sample
covariance. This allows one to generate covariance matrices for comparable
datasets without the need to create numerous mock galaxy catalogs with matching
clustering, only requiring 2PCF measurements from the data itself. The code
used in this paper is publicly available at
https://github.com/oliverphilcox/RascalC.
Comment: 19 pages, 1 figure. Code available at https://github.com/oliverphilcox/RascalC; table and figure data available at https://dx.doi.org/10.5281/zenodo.775063
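One subtlety noted in this abstract, that comparison metrics have nontrivial expectation values even when the external covariance estimate is perfect, purely from sample variance, can be illustrated with a toy Gaussian KL divergence. This is a generic sketch, not the RascalC pipeline; the matrices and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n_mocks = 5, 200  # hypothetical dimension and number of mocks

# "True" covariance, standing in for a perfect external prediction.
M = rng.normal(size=(p, p))
C_true = M @ M.T + p * np.eye(p)

def kl_gaussian(C_model, C_ref):
    """KL( N(0, C_ref) || N(0, C_model) ); zero iff the matrices agree."""
    d = C_model.shape[0]
    inv = np.linalg.inv(C_model)
    _, logdet = np.linalg.slogdet(C_model @ np.linalg.inv(C_ref))
    return 0.5 * (np.trace(inv @ C_ref) - d + logdet)

# A sample covariance built from finitely many mocks is noisy, so the KL
# "mismatch" is positive in expectation even though C_true is exact.
draws = rng.multivariate_normal(np.zeros(p), C_true, size=n_mocks)
C_sample = np.cov(draws, rowvar=False)
print(kl_gaussian(C_true, C_true))    # 0 by construction
print(kl_gaussian(C_true, C_sample))  # > 0 from sample variance alone
```

This is why a nonzero metric value against a mock sample covariance does not by itself indicate an imperfect prediction.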
Evaluation of Multi-frequency Synthetic Aperture Radar for Subsurface Archaeological Prospection in Arid Environments
The discovery of subsurface paleochannels in the Saharan Desert with the 1981 Shuttle Imaging Radar (SIR-A) sensor was hugely significant in the field of synthetic aperture radar (SAR) remote sensing. Although previous studies had indicated the ability of microwaves to penetrate the Earth's surface in arid environments, this was the first applicable instance of subsurface imaging using a spaceborne sensor. The discovery of these 'radar rivers', together with associated archaeological evidence in this inhospitable environment, demonstrated the existence of an earlier, less arid paleoclimate that supported past populations.
Since the 1980s, SAR subsurface prospection in arid environments has progressed, albeit primarily in the fields of hydrology and geology, with archaeology investigated to a lesser extent. Currently, there is a lack of standardised methods for data acquisition and processing for subsurface imaging, along with difficulties in image interpretation and insufficient supporting quantitative verification. These barriers keep SAR technology from becoming as integral to archaeological practice as other remote sensing techniques.
The main objective of this thesis is to undertake a multi-frequency SAR analysis across different site types in arid landscapes to evaluate and enhance techniques for analysing SAR within the context of archaeological subsurface prospection. The analysis and associated fieldwork aim to address the gap in the literature regarding field verification of SAR image interpretation and contribute to the understanding of SAR microwave penetration in arid environments.
The results presented in this thesis demonstrate successful subsurface imaging of subtle features at the site of 'Uqdat al-Bakrah, Oman with X-band data. Because shorter wavelengths are often dismissed due to their limited penetration depths compared to C-band or L-band data, the effectiveness of X-band sensors in archaeological prospection at this site is significant. In addition, the associated ground-penetrating radar and excavation fieldwork undertaken at 'Uqdat al-Bakrah confirm the image interpretation and support the quantitative information regarding microwave penetration.
Analog Photonics Computing for Information Processing, Inference and Optimisation
This review presents an overview of the current state-of-the-art in photonics
computing, which leverages photons, photons coupled with matter, and
optics-related technologies for effective and efficient computational purposes.
It covers the history and development of photonics computing and modern
analogue computing platforms and architectures, focusing on optimization tasks
and neural network implementations. The authors examine special-purpose
optimizers, mathematical descriptions of photonics optimizers, and their
various interconnections. Disparate applications are discussed, including
direct encoding, logistics, finance, phase retrieval, machine learning, neural
networks, probabilistic graphical models, and image processing, among many
others. The main directions of technological advancement and associated
challenges in photonics computing are explored, along with an assessment of its
efficiency. Finally, the paper discusses prospects and the field of optical
quantum computing, providing insights into the potential applications of this
technology.
Comment: Invited submission by Journal of Advanced Quantum Technologies; accepted version 5/06/202
Sparse PCA With Multiple Components
Sparse Principal Component Analysis (sPCA) is a cardinal technique for
obtaining combinations of features, or principal components (PCs), that explain
the variance of high-dimensional datasets in an interpretable manner. This
involves solving a sparsity and orthogonality constrained convex maximization
problem, which is extremely computationally challenging. Most existing works
address sparse PCA via methods, such as iteratively computing one sparse PC and deflating the covariance matrix, that do not guarantee the orthogonality, let
alone the optimality, of the resulting solution when we seek multiple mutually
orthogonal PCs. We challenge this status by reformulating the orthogonality
conditions as rank constraints and optimizing over the sparsity and rank
constraints simultaneously. We design tight semidefinite relaxations to supply
high-quality upper bounds, which we strengthen via additional second-order cone
inequalities when each PC's individual sparsity is specified. Further, we
derive a combinatorial upper bound on the maximum amount of variance explained
as a function of the support. We exploit these relaxations and bounds to
propose exact methods and rounding mechanisms that, together, obtain solutions
with a bound gap on the order of 0%-15% for real-world datasets with hundreds or thousands of features (p) and r ∈ {2, 3} components. Numerically, our algorithms
match (and sometimes surpass) the best performing methods in terms of fraction
of variance explained and systematically return PCs that are sparse and
orthogonal. In contrast, we find that existing methods like deflation return
solutions that violate the orthogonality constraints, even when the data is
generated according to sparse orthogonal PCs. Altogether, our approach solves
sparse PCA problems with multiple components to certifiable (near) optimality
in a practically tractable fashion.
Comment: Updated version with improved algorithmics and a new section containing a generalization of the Gershgorin circle theorem; comments or suggestions welcome
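The deflation pathology described in this abstract is easy to reproduce. The sketch below implements a toy hard-thresholding deflation scheme (not the paper's method, and not its proposed algorithm; all names and sizes are hypothetical) and checks the sparsity and normalisation of the returned components. With sparsity enforced, successive components are generally not orthogonal.

```python
import numpy as np

rng = np.random.default_rng(2)
p, k, r = 20, 5, 2  # features, per-PC sparsity, number of PCs (hypothetical)

X = rng.normal(size=(200, p))
Sigma = np.cov(X, rowvar=False)

def sparse_pcs_by_deflation(Sigma, k, r):
    """Toy scheme: leading eigenvector, hard-thresholded to its k largest
    entries (by magnitude) and renormalised, then Hotelling deflation."""
    pcs = []
    S = Sigma.copy()
    for _ in range(r):
        _, V = np.linalg.eigh(S)
        v = V[:, -1]                      # leading eigenvector
        keep = np.argsort(np.abs(v))[-k:]
        v_s = np.zeros_like(v)
        v_s[keep] = v[keep]
        v_s /= np.linalg.norm(v_s)
        pcs.append(v_s)
        S = S - (v_s @ S @ v_s) * np.outer(v_s, v_s)  # deflate
    return np.column_stack(pcs)

U = sparse_pcs_by_deflation(Sigma, k, r)
# With sparsity enforced, deflation does not guarantee orthogonality:
print(abs(U[:, 0] @ U[:, 1]))  # typically nonzero
```

The paper's rank-constrained reformulation is precisely what restores the orthogonality that this heuristic loses.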
Approximate Methods for Marginal Likelihood Estimation
We consider the estimation of the marginal likelihood in Bayesian statistics, an essential and
important task known to be computationally expensive when the dimension of the parameter space
is large. We propose a general algorithm with numerous extensions that can be widely applied to a
variety of problem settings and excels particularly when dealing with near log-concave posteriors.
Our method hinges on a novel idea that uses MCMC samples to partition the parameter space
and forms local approximations over these partition sets as a means of estimating the marginal
likelihood. In this dissertation, we provide both the motivation and the groundwork for developing
what we call the Hybrid estimator. Our numerical experiments show the versatility and accuracy of
the proposed estimator, even as the parameter space becomes increasingly high-dimensional and
complicated.
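The partition-based idea can be illustrated in one dimension with a conjugate Gaussian model where the marginal likelihood is known in closed form. This is a simplified sketch of the general identity p(y) = ∫_A p(y, θ) dθ / P(θ ∈ A | y) on a single partition set A, not the Hybrid estimator itself; the model and all quantities are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Conjugate toy model: theta ~ N(0,1), y|theta ~ N(theta,1), y = 0 observed.
y = 0.0
def log_joint(theta):
    return -0.5 * theta**2 - 0.5 * (y - theta) ** 2 - np.log(2 * np.pi)

# Exact marginal likelihood: p(y) = N(y; 0, 2).
exact = np.exp(-0.25 * y**2) / np.sqrt(4 * np.pi)

# Posterior is N(y/2, 1/2); draw "MCMC" samples directly for simplicity.
samples = rng.normal(y / 2, np.sqrt(0.5), size=100_000)

# Local identity on a region A: p(y) = int_A p(y,theta) dtheta / P(theta in A | y),
# with the numerator handled by a local approximation (here, quadrature).
a, b = -0.5, 0.5
grid = np.linspace(a, b, 2001)
vals = np.exp(log_joint(grid))
numerator = np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(grid))  # trapezoid rule
frac = np.mean((samples >= a) & (samples <= b))                 # P(A | y) from samples
estimate = numerator / frac
print(estimate, exact)  # the two should agree closely
```

In higher dimensions the quadrature is replaced by local (e.g. Gaussian) approximations over the MCMC-derived partition sets, which is where the near log-concavity assumption earns its keep.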
Temperature and dissipation in finite quantum systems
The ideas in this thesis are placed broadly within the context of many-body
quantum dynamics, an area of research that has gained significant interest in
recent years due to developments in cold atom experiments that enable the
realization of isolated many-body quantum systems.
In this thesis, we first focus on the concept of connecting quantum mechanical
systems to statistical mechanics, which often arises in the study of ‘thermalization’
in isolated many-body systems. An inescapable issue in the endeavor to connect
the two is the definition of temperature. The first core definition of temperature
we consider is inspired by the eigenstate thermalization hypothesis, which
posits that the eigenstates of a generic thermalizing system have information
regarding thermalization encoded within them. We consider temperatures based
on comparing the structure of (full or reduced) eigenstate density matrices to
thermal density matrices. The second temperature definition invokes the standard
temperature-entropy relation from statistical mechanics relating temperature
and microcanonical entropy. We explore various ways to define the microcanonical
entropy in finite isolated quantum systems and numerically compute the
corresponding temperature.
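The second definition, temperature from the microcanonical entropy, can be sketched numerically: take the spectrum of a random-matrix stand-in for a generic finite system, estimate the density of states by binning, and differentiate its logarithm. This is an illustrative sketch, not the thesis's computation; the Hamiltonian and bin counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

# GOE-like random matrix as a stand-in for a generic finite quantum system.
N = 800
A = rng.normal(size=(N, N))
H = (A + A.T) / np.sqrt(2 * N)
E = np.linalg.eigvalsh(H)

# Microcanonical entropy from a binned density-of-states estimate,
# S(E) ~ log rho(E), and inverse temperature from 1/T = dS/dE.
hist, edges = np.histogram(E, bins=40)
centers = (edges[:-1] + edges[1:]) / 2
mask = hist > 0
S = np.log(hist[mask])
Ec = centers[mask]
beta = np.gradient(S, Ec)  # inverse temperature 1/T(E)

# Below the middle of the band 1/T > 0; above it, 1/T < 0
# (a bounded spectrum admits "negative temperature" states).
print(beta[:3], beta[-3:])
```

The sign change of 1/T across the band is a generic feature of any finite, bounded spectrum and is one of the subtleties such definitions must confront.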
Following this, we study the diametrical opposite of isolated quantum systems
— open quantum systems. We study a quantum particle on a tight-binding
lattice with a non-Hermitian (purely imaginary) local potential. Non-Hermitian
Hamiltonians are effective models for describing open quantum systems. We
analyze the scattering dynamics and spectrum, identifying an exceptional point
where the entire spectrum pairs up into mutually coalescing eigenstate pairs. At
large potential strengths, the absorption coefficient decreases, and the effect of the imaginary potential is similar to that of a real potential, which we quantify by
utilizing the properties of a localized eigenstate. We demonstrate the existence of
many exceptional points in a similar PT-symmetric system and in a non-interacting many-particle model. This investigation contributes to a many-body understanding of this non-Hermitian setup.
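A minimal numerical sketch of such a non-Hermitian lattice model (hypothetical parameters, not the thesis's exact system): a tight-binding chain with a purely imaginary on-site potential. The trace constraint forces the imaginary parts of the spectrum to sum to -γ, and at large γ a single localized state absorbs almost all of it, leaving the rest of the spectrum nearly real, consistent with the weakened absorption described above.

```python
import numpy as np

def chain_with_sink(n, gamma):
    """Tight-binding chain (hopping 1) with a purely imaginary on-site
    potential -i*gamma at the central site (hypothetical minimal model)."""
    H = np.zeros((n, n), dtype=complex)
    idx = np.arange(n - 1)
    H[idx, idx + 1] = 1.0
    H[idx + 1, idx] = 1.0
    H[n // 2, n // 2] = -1j * gamma
    return H

n = 51
for gamma in (0.5, 20.0):
    E = np.linalg.eigvals(chain_with_sink(n, gamma))
    im = np.sort(E.imag)
    # tr H = -i*gamma, so the imaginary parts always sum to -gamma;
    # at large gamma one localized state carries almost all of it.
    print(gamma, im[0], E.imag.sum())
```

At large γ the absorbing site effectively decouples, which is the mechanism behind the counterintuitive decrease of the absorption coefficient for the remaining states.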
Multidimensional Time Series Methods for Economics and Finance
This thesis aims to address inferential and interpretational issues in high-dimensional and multi-dimensional models in the context of Economics and Finance. The growing economic and financial integration has made it imperative to conceive of Countries and Financial Markets as a single, large, interconnected entity. The main challenges induced by this framework concern the estimation and interpretation of large panels, where units can be represented by countries or assets, observed via several indicators across time.
This thesis proposes Bayesian estimation techniques for novel matrix- and tensor-valued models and employs new methodological tools from Graph Theory to facilitate the interpretation of high-dimensional networks. The contributions are presented in three chapters. In Chapter 2, Graph Theory approaches are proposed to study the structures and interactions of weighted directed networks of multivariate time series. In Chapter 3, a Bayesian variable selection approach is proposed to handle the over-parametrization problem in large Matrix Autoregressive models. In Chapter 4, the dynamic relationship among returns, volatility, and sentiment in the cryptocurrency asset class is explored through a Bayesian Matrix Autoregressive model, the first attempt to treat financial asset data as multi-dimensional structures.
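A Matrix Autoregressive model of order one, of the kind used in Chapter 4, can be sketched as follows (dimensions and coefficients are hypothetical). The key point is that the MAR(1) is a restricted VAR(1) on the vectorised panel with coefficient matrix B ⊗ A, which is what makes large panels tractable.

```python
import numpy as np

rng = np.random.default_rng(5)

# MAR(1): X_t = A X_{t-1} B' + E_t, with X_t an (m x n) panel,
# e.g. countries x indicators (all sizes hypothetical).
m, n, T = 4, 3, 500
A = 0.5 * np.eye(m) + 0.1 * rng.normal(size=(m, m))
B = 0.5 * np.eye(n) + 0.1 * rng.normal(size=(n, n))

X = np.zeros((T, m, n))
for t in range(1, T):
    X[t] = A @ X[t - 1] @ B.T + 0.1 * rng.normal(size=(m, n))

# Equivalent VAR(1) on vec(X_t): vec(A X B') = (B kron A) vec(X),
# so the MAR(1) needs m^2 + n^2 coefficients instead of (m*n)^2.
K = np.kron(B, A)
lhs = K @ X[10].flatten(order="F")          # column-major vec
rhs = (A @ X[10] @ B.T).flatten(order="F")
print(np.allclose(lhs, rhs))  # → True
```

The Bayesian variable selection of Chapter 3 then shrinks entries of A and B, rather than of the much larger Kronecker product.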
Intrinsic Gaussian process on unknown manifolds with probabilistic metrics
This article presents a novel approach to constructing Intrinsic Gaussian Processes for regression on unknown manifolds with probabilistic metrics (GPUM) in point clouds. In many
real-world applications, one often encounters high-dimensional data (e.g., point cloud data)
centered around some lower-dimensional unknown manifold. The geometry of a manifold
is in general different from the usual Euclidean geometry. Naively applying traditional
smoothing methods such as Euclidean Gaussian Processes (GPs) to manifold-valued data
and so ignoring the geometry of the space can potentially lead to highly misleading predictions and inferences. A manifold embedded in a high dimensional Euclidean space can
be well described by a probabilistic mapping function and the corresponding latent space.
We investigate the geometrical structure of the unknown manifolds using the Bayesian
Gaussian Process latent variable model (B-GPLVM) and Riemannian geometry. The
distribution of the metric tensor is learned using B-GPLVM. The boundary of the resulting
manifold is defined based on the uncertainty quantification of the mapping. We use the
probabilistic metric tensor to simulate Brownian Motion paths on the unknown manifold.
The heat kernel is estimated as the transition density of Brownian Motion and used as the
covariance function of GPUM. The applications of GPUM are illustrated in simulation
studies on the Swiss roll, high dimensional real datasets of WiFi signals and image data
examples. Its performance is compared with the Graph Laplacian GP, the Graph Matérn GP,
and the Euclidean GP.
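The heat-kernel construction can be illustrated in the simplest (Euclidean) setting, where the Brownian-motion transition density is known in closed form. This sketch only checks the Monte-Carlo identity behind GPUM's covariance estimate, not the manifold machinery; all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)

# Brownian motion from 0: simulate paths, look at the time-t endpoints.
t, n_steps, n_paths = 0.5, 100, 50_000
dt = t / n_steps
steps = np.sqrt(dt) * rng.normal(size=(n_paths, n_steps))
endpoints = steps.sum(axis=1)

# The transition density (= heat kernel on the line) at x, estimated by
# counting endpoints in a small band, vs the closed-form Gaussian kernel.
x, h = 0.3, 0.1
mc = np.mean(np.abs(endpoints - x) < h) / (2 * h)
exact = np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)
print(mc, exact)  # close agreement
```

On an unknown manifold there is no closed form, which is why GPUM simulates Brownian paths under the learned probabilistic metric and uses the estimated transition density as the covariance.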