3,376 research outputs found
Flood dynamics derived from video remote sensing
Flooding is by far the most pervasive natural hazard, with the human impacts of floods expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models.
Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated using observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast, high-resolution video datasets. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence, is enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights from datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high-resolution topographic data. In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter, where river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model.
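The core operation of the image velocimetry techniques mentioned above is cross-correlating small interrogation windows between successive frames to find how far the tracer pattern has moved. A minimal numpy sketch of that step (window pairing, sub-pixel peak fitting, and pixel-to-metric scaling are omitted; the function name is illustrative, not from the thesis):

```python
import numpy as np

def window_displacement(frame_a, frame_b):
    """Estimate the integer pixel shift of the surface pattern between two
    interrogation windows via FFT-based cross-correlation, the core step
    of particle image velocimetry."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    # Circular cross-correlation via the convolution theorem.
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    corr = np.fft.fftshift(corr)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    centre = np.array(corr.shape) // 2
    dy, dx = centre - np.array(peak)
    return int(dy), int(dx)
```

Dividing the recovered displacement by the inter-frame time and the pixels-per-metre scale yields a surface velocity for that window.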
Harnessing the ability of deep neural networks to learn complex features and deliver accurate, contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is demonstrated. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographic data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which are used to estimate river discharge. Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications for flood modelling science.
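Combining video-derived surface velocities with topographic cross-sections to obtain discharge is classically done with the velocity-area method. A hedged sketch, assuming a constant surface-to-depth-averaged velocity coefficient (the value 0.85 is a common textbook assumption, not a result from the thesis):

```python
import numpy as np

def discharge_velocity_area(surface_velocity, depth, width, alpha=0.85):
    """Velocity-area discharge estimate Q = sum_i alpha * v_i * d_i * w_i,
    where each subsection i of the cross-section has surface velocity v_i,
    mean depth d_i and width w_i, and alpha (assumed ~0.85) converts
    surface velocity to a depth-averaged velocity."""
    v = np.asarray(surface_velocity, dtype=float)
    d = np.asarray(depth, dtype=float)
    w = np.asarray(width, dtype=float)
    return float(np.sum(alpha * v * d * w))  # m^3/s for SI inputs
```

Here the velocities would come from image velocimetry and the depths and widths from the high-resolution topographic data.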
Probabilistic Programming Interfaces for Random Graphs: Markov Categories, Graphons, and Nominal Sets
We study semantic models of probabilistic programming languages over graphs, and establish a connection to graphons from graph theory and combinatorics. We show that every well-behaved equational theory for our graph probabilistic programming language corresponds to a graphon, and conversely, every graphon arises in this way. We provide three constructions for showing that every graphon arises from an equational theory. The first is an abstract construction, using Markov categories and monoidal indeterminates. The second and third are more concrete. The second is in terms of traditional measure theoretic probability, which covers 'black-and-white' graphons. The third is in terms of probability monads on the nominal sets of Gabbay and Pitts. Specifically, we use a variation of nominal sets induced by the theory of graphs, which covers Erdős-Rényi graphons. In this way, we build new models of graph probabilistic programming from graphons.
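The operational reading of a graphon that the abstract relies on is the W-random graph: each vertex gets a latent uniform value, and edges are sampled independently with graphon-given probabilities. A minimal sketch (function name illustrative, not from the paper):

```python
import random

def sample_w_random_graph(n, W, seed=0):
    """Sample an n-vertex W-random graph: draw latents u_i ~ Uniform[0,1]
    and include edge {i, j} independently with probability W(u_i, u_j)."""
    rng = random.Random(seed)
    u = [rng.random() for _ in range(n)]
    return {(i, j)
            for i in range(n) for j in range(i + 1, n)
            if rng.random() < W(u[i], u[j])}

# The constant graphon W ≡ p recovers the Erdős–Rényi model G(n, p).
g = sample_w_random_graph(10, lambda x, y: 0.5)
```

The constant-graphon special case is exactly the Erdős–Rényi family mentioned in the abstract.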
Planetary Hinterlands: Extraction, Abandonment and Care
This open access book considers the concept of the hinterland as a crucial tool for understanding the global and planetary present as a time defined by the lasting legacies of colonialism, increasing labor precarity under late capitalist regimes, and looming climate disasters. Traditionally seen to serve a (colonial) port or market town, the hinterland here becomes a lens to attend to the times and spaces shaped and experienced across the received categories of the urban, rural, wilderness or nature. In straddling these categories, the concept of the hinterland foregrounds the human and more-than-human lively processes and forms of care that go on even in sites defined by capitalist extraction and political abandonment. Bringing together scholars from the humanities and social sciences, the book rethinks hinterland materialities, affectivities, and ecologies across places and cultural imaginations, Global North and South, urban and rural, and land and water.
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Machine learning applications in search algorithms for gravitational waves from compact binary mergers
Gravitational waves from compact binary mergers are now routinely observed by Earth-bound detectors. These observations enable exciting new science, as they have opened a new window to the Universe.
However, extracting gravitational-wave signals from the noisy detector data is a challenging problem. The most sensitive search algorithms for compact binary mergers use matched filtering, an algorithm that compares the data with a set of expected template signals. As detectors are upgraded and more sophisticated signal models become available, the number of required templates will increase, which can make some sources computationally prohibitive to search for. The computational cost is of particular concern when low-latency alerts should be issued to maximize the time for electromagnetic follow-up observations. One potential solution to reduce computational requirements that has started to be explored in the last decade is machine learning. However, different proposed deep learning searches target varying parameter spaces and use metrics that are not always comparable to existing literature. Consequently, a clear picture of the capabilities of machine learning searches has been sorely missing.
In this thesis, we closely examine the sensitivity of various deep learning gravitational-wave search algorithms and introduce new methods to detect signals from binary black hole and binary neutron star mergers at previously untested statistical confidence levels. By using the sensitive distance as our core metric, we allow for a direct comparison of our algorithms to state-of-the-art search pipelines. As part of this thesis, we organized a global mock data challenge to create a benchmark for machine learning search algorithms targeting compact binaries. This way, the tools developed in this thesis are made available to the greater community by publishing them as open source software.
Our studies show that, depending on the parameter space, deep learning gravitational-wave search algorithms are already competitive with current production search pipelines. We also find that strategies developed for traditional searches can be effectively adapted to their machine learning counterparts. However, in regions where matched filtering becomes computationally expensive, available deep learning algorithms are also limited in their capability: we find reduced sensitivity to long-duration signals compared to the excellent results for short-duration binary black hole signals.
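The matched filtering that these searches are benchmarked against correlates the data with a template and normalises by the template's power. A minimal white-noise sketch (real pipelines first whiten the data by the detector's noise power spectral density; the function name is illustrative):

```python
import numpy as np

def matched_filter_snr(data, template):
    """Matched-filter output under a white-noise assumption: the
    correlation of the data with the template, normalised so that the
    peak value equals the signal-to-noise ratio of an embedded signal."""
    n = len(data)
    t = np.zeros(n)
    t[:len(template)] = template
    # Correlation via the convolution theorem.
    corr = np.fft.irfft(np.fft.rfft(data) * np.conj(np.fft.rfft(t)), n)
    return corr / np.sqrt(np.sum(template ** 2))
```

The peak location of the output gives the candidate merger time; production searches repeat this over large template banks, which is the computational cost the thesis's machine learning methods aim to reduce.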
Memorization-Dilation: Modeling Neural Collapse Under Label Noise
The notion of neural collapse refers to several emergent phenomena that have been empirically observed across various canonical classification problems. During the terminal phase of training a deep neural network, the feature embeddings of all examples of the same class tend to collapse to a single representation, and the features of different classes tend to separate as much as possible. Neural collapse is often studied through a simplified model, called the unconstrained feature representation, in which the model is assumed to have "infinite expressivity" and can map each data point to any arbitrary representation. In this work, we propose a more realistic variant of the unconstrained feature representation that takes the limited expressivity of the network into account. Empirical evidence suggests that the memorization of noisy data points leads to a degradation (dilation) of the neural collapse. Using a model of the memorization-dilation (M-D) phenomenon, we show one mechanism by which different losses lead to different performances of the trained network on noisy data. Our proofs reveal why label smoothing, a modification of cross-entropy empirically observed to produce a regularization effect, leads to improved generalization in classification tasks. Comment: to be published at ICLR 202
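Label smoothing, as analysed above, replaces the one-hot cross-entropy target with a mixture of the one-hot vector and the uniform distribution. A minimal numpy sketch of that loss (illustrative, not the paper's code):

```python
import numpy as np

def smoothed_cross_entropy(logits, label, eps=0.1):
    """Cross-entropy against a label-smoothed target: the one-hot target
    is mixed with the uniform distribution as (1 - eps)*onehot + eps/K,
    where K is the number of classes."""
    logits = np.asarray(logits, dtype=float)
    k = len(logits)
    logp = logits - np.log(np.sum(np.exp(logits)))  # log-softmax
    target = np.full(k, eps / k)
    target[label] += 1.0 - eps
    return float(-np.sum(target * logp))
```

Setting eps=0 recovers standard cross-entropy; for eps>0 the loss penalises overconfident logits, the mechanism behind the regularization effect the abstract refers to.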
Do price trajectory data increase the efficiency of market impact estimation?
Market impact is an important problem faced by large institutional investors and active market participants. In this paper, we rigorously investigate whether price trajectory data from the metaorder increase the efficiency of estimation, from an asymptotic view of statistical estimation. We show that, for popular market impact models, estimation methods based on partial price trajectory data, especially those containing early trade prices, can asymptotically outperform established estimation methods (e.g., VWAP-based). We discuss theoretical and empirical implications of this phenomenon and how they could be readily incorporated into practice.
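The VWAP-based baseline referenced above summarises a metaorder's execution by a single volume-weighted average price, discarding the trade-by-trade trajectory. A minimal sketch of that summary statistic (illustrative, not the paper's estimator):

```python
def vwap(prices, volumes):
    """Volume-weighted average price of a metaorder's child trades.
    VWAP-based impact estimators use only this scalar (relative to the
    arrival price), whereas trajectory-based estimators keep the full
    per-trade price path."""
    total = sum(volumes)
    return sum(p * v for p, v in zip(prices, volumes)) / total
```

The paper's point is that the per-trade prices thrown away here, particularly the early ones, carry information that improves the asymptotic efficiency of impact estimation.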
Inexact iterative numerical linear algebra for neural network-based spectral estimation and rare-event prediction
Understanding dynamics in complex systems is challenging because there are many degrees of freedom, and those that are most important for describing events of interest are often not obvious. The leading eigenfunctions of the transition operator are useful for visualization, and they can provide an efficient basis for computing statistics such as the likelihood and average time of events (predictions). Here we develop inexact iterative linear algebra methods for computing these eigenfunctions (spectral estimation) and making predictions from a data set of short trajectories sampled at finite intervals. We demonstrate the methods on a low-dimensional model that facilitates visualization and a high-dimensional model of a biomolecular system. Implications for the prediction problem in reinforcement learning are discussed. Comment: 27 pages, 16 figures
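The simplest iterative scheme for the leading eigenfunction of a transition operator is power iteration on a finite-state transition matrix; the paper's inexact iterative methods refine this kind of computation when the operator is only accessible through sampled trajectories. A hedged finite-state sketch (names illustrative):

```python
import numpy as np

def leading_left_eigvec(P, iters=1000):
    """Power iteration for the dominant left eigenvector of a
    row-stochastic transition matrix P, a finite-state stand-in for the
    transition operator; for a stochastic P this converges to the
    stationary distribution (eigenvalue 1)."""
    v = np.ones(P.shape[0]) / P.shape[0]
    for _ in range(iters):
        v = P.T @ v              # propagate one step of the dynamics
        v /= np.linalg.norm(v)
    return v

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = leading_left_eigvec(P)
pi = pi / pi.sum()               # normalise to a probability distribution
```

Subsequent eigenfunctions, which encode slow transitions between metastable states, are obtained by iterating on the subspace orthogonal to those already found.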
Practical and Rigorous Uncertainty Bounds for Gaussian Process Regression
Gaussian Process Regression is a popular nonparametric regression method based on Bayesian principles that provides uncertainty estimates for its predictions. However, these estimates are of a Bayesian nature, whereas for some important applications, like learning-based control with safety guarantees, frequentist uncertainty bounds are required. Although such rigorous bounds are available for Gaussian Processes, they are too conservative to be useful in applications. This often leads practitioners to replace these bounds with heuristics, thus breaking all theoretical guarantees. To address this problem, we introduce new uncertainty bounds that are rigorous yet practically useful. In particular, the bounds can be explicitly evaluated and are much less conservative than state-of-the-art results. Furthermore, we show that certain model misspecifications lead to only graceful degradation. We demonstrate these advantages and the usefulness of our results for learning-based control with numerical examples. Comment: Contains supplementary material and corrections to the original version
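The Bayesian predictive variance at issue here comes from the standard GP posterior. A minimal 1-D RBF-kernel sketch (hyperparameter values are illustrative assumptions, not from the paper); frequentist bounds of the kind the paper studies scale this posterior standard deviation to obtain coverage guarantees:

```python
import numpy as np

def gp_posterior(X, y, Xs, lengthscale=1.0, noise=0.1):
    """Posterior mean and variance of 1-D GP regression with a unit-variance
    RBF kernel and observation noise `noise`; the variance returned is the
    Bayesian credible width that rigorous frequentist bounds must inflate."""
    def k(A, B):
        d = np.asarray(A)[:, None] - np.asarray(B)[None, :]
        return np.exp(-0.5 * (d / lengthscale) ** 2)
    K = k(X, X) + noise ** 2 * np.eye(len(X))
    Ks = k(X, Xs)
    mean = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mean, var
```

Far from the data the variance reverts to the prior (1 here), while near observations it shrinks; a heuristic "mean ± 2 std" band built from these quantities is exactly the kind of uncalibrated replacement the paper argues against.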