3,456 research outputs found
Investigating the dynamics of Greenland's glacier-fjord systems
Over the past two decades, Greenland's tidewater glaciers have dramatically retreated, thinned and accelerated, contributing significantly to sea level rise. This change in glacier behaviour is thought to have been triggered by increasing atmospheric and ocean temperatures, and mass loss from Greenland's tidewater glaciers is predicted to continue this century. Substantial research during this period of rapid glacier change has improved our understanding of Greenland's glacier-fjord systems. However, many of the processes operating in these systems that ultimately control the response of tidewater glaciers to changing atmospheric and oceanic conditions are poorly understood. This thesis combines modelling and remote sensing to investigate two particularly poorly-understood components of glacier-fjord systems, with the ultimate aim of improving understanding of recent glacier behaviour and constraining the stability of the ice sheet in a changing climate.
The research presented in this thesis begins with an investigation into the dominant controls on the seasonal dynamics of contrasting tidewater glaciers draining the Greenland Ice Sheet. To do this, high resolution estimates of ice velocity were generated and compared with detailed observations and modelling of the principal controls on seasonal glacier flow, including terminus position, ice mélange presence or absence, ice sheet surface melting and runoff, and plume presence or absence. These data revealed characteristic seasonal and shorter-term changes in ice velocity at each of the study glaciers in more detail than was available from previous remote sensing studies. Of all the environmental controls examined, seasonal evolution of subglacial hydrology (as inferred from plume observations and modelling) was best able to explain the observed ice flow variations, despite differences in geometry and flow of the study glaciers. The inferred relationships between subglacial hydrology and ice dynamics were furthermore entirely consistent with process-understanding developed at land-terminating sectors of the ice sheet. This investigation provides a more detailed understanding of tidewater glacier subglacial hydrology and its interaction with ice dynamics than was previously available and suggests that interannual variations in meltwater supply may have limited influence on annually averaged ice velocity.
The thesis then shifts its attention from the glacier part of the system into the fjords, focusing on the interaction between icebergs, fjord circulation and fjord water properties. This focus on icebergs is motivated by recent research revealing that freshwater produced by iceberg melting constitutes an important component of fjord freshwater budgets, yet the impact of this freshwater on fjords was unknown. To investigate this, a new model for iceberg-ocean interaction is developed and incorporated into an ocean circulation model.
This new model is first applied to Sermilik Fjord (a large fjord in east Greenland that hosts Helheim Glacier, one of the largest tidewater glaciers draining the ice sheet) to further constrain iceberg freshwater production and to quantify the influence of iceberg melting on fjord circulation and water properties. These investigations reveal that iceberg freshwater flux increases with ice sheet runoff raised to the power ~0.1 and ranges from ~500–2500 m³ s⁻¹ during summer, with ~40% of that produced below the pycnocline. It is also shown that icebergs substantially modify the temperature and velocity structure of Sermilik Fjord, causing 1–5°C cooling in the upper ~100 m and invigorating fjord circulation, which in turn causes a 10–40% increase in oceanic heat flux towards Helheim Glacier. This research highlights the important role of icebergs in Greenland's iceberg-congested fjords and therefore the need to include them in future studies examining ice sheet-ocean interaction.
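The weak runoff scaling quoted above can be made concrete with a small numeric sketch. The exponent ~0.1 is taken from the abstract; the coefficient and runoff values below are hypothetical illustrative choices, not data from the thesis.

```python
import numpy as np

# Illustrative power-law scaling of iceberg freshwater flux with runoff.
# The exponent ~0.1 comes from the abstract; the coefficient `a` is a
# hypothetical choice, and units are arbitrary in this sketch.
def iceberg_freshwater_flux(runoff, a=1500.0, b=0.1):
    """Q_ib = a * R**b."""
    return a * np.power(runoff, b)

# Doubling runoff changes the flux by only a factor of 2**0.1, about 7%.
ratio = iceberg_freshwater_flux(2.0) / iceberg_freshwater_flux(1.0)
print(round(ratio, 3))  # 1.072
```

Because the exponent is so small, even large changes in runoff move the flux only modestly, which is consistent with the comparatively narrow summer range reported above.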
Having investigated the effect of icebergs on fjord circulation in a realistic setting, this thesis then characterises the effect of submarine iceberg melting on water properties near the ice sheet-ocean interface by applying the new model to a range of idealised scenarios. This near-glacier region is one which is crucial for constraining ocean-driven retreat of tidewater glaciers, but which is poorly-understood. The simulations show that icebergs are important modifiers of glacier-adjacent water properties, generally acting to reduce vertical variations in water temperature. The iceberg-induced temperature changes will generally increase submarine melt rates at mid-depth and decrease rates at the surface, with less pronounced effects at greater depth. This highlights another mechanism by which iceberg melting can affect ice sheet-ocean interaction and emphasises the need to account for iceberg-ocean interaction when simulating ocean-driven retreat of Greenland's tidewater glaciers.
In summary, this thesis has helped to provide a deeper understanding of two poorly-understood components of Greenland's tidewater glacier-fjord systems: (i) interactions between subglacial hydrology and ice velocity, and (ii) iceberg-ocean interaction. This research has enabled more precise interpretations of past glacier behaviour and can be used to inform model development that will help constrain future ice sheet mass loss in response to a changing climate.
"I must express my gratitude to the University of St Andrews and to the Scottish Alliance for Geoscience, Environment and Society (SAGES) for funding and supporting me as a research student." -- Fundin
Robust and Flexible Persistent Scatterer Interferometry for Long-Term and Large-Scale Displacement Monitoring
Persistent Scatterer Interferometry (PSI) is a method for monitoring displacements of the Earth's surface from space. It is based on identifying and analysing stable point scatterers (so-called persistent scatterers, PS) by applying time series analysis approaches to stacks of SAR interferograms. PS points dominate the backscatter of the resolution cells in which they are located and are characterised by minor decorrelation. Displacements of such PS points can be monitored with a potential sub-millimetre accuracy if disturbing sources are effectively minimised.
Over time, PSI has developed into an operational technology for certain applications. However, some applications remain challenging for the method. Physical changes of the land surface and changes in the acquisition geometry can cause PS points to appear or disappear over time. The number of continuously coherent PS points decreases with increasing time series length, while the number of temporary PS (TPS) points, which are coherent only during one or more separate segments of the analysed time series, increases. It is therefore desirable to integrate the analysis of such TPS points into PSI in order to develop a flexible PSI system that can cope with dynamic changes of the land surface and thus enables continuous displacement monitoring. A further challenge for PSI is large-scale monitoring in regions with complex atmospheric conditions, which lead to high uncertainty in the displacement time series at large distances from the spatial reference.
This thesis deals with modifications and extensions, realised on the basis of an existing PSI algorithm, to develop a robust and flexible PSI approach that can cope with the challenges outlined above. As a first main contribution, a method is presented that fully integrates TPS points into PSI. Evaluation studies with real SAR data show that the integration of TPS points indeed makes it possible to handle dynamic changes of the land surface, and that it becomes increasingly relevant for PSI-based observation networks as the time series length grows. The second main contribution is a method for covariance-based reference integration in large-scale PSI applications to estimate spatially correlated noise. The method is based on sampling the noise at reference pixels with known displacement time series and subsequently interpolating it to the remaining PS pixels, taking the spatial statistics of the noise into account. A simulation study and a study with real data show that the method outperforms alternative methods for reducing spatially correlated noise in interferograms by means of reference integration.
The developed PSI method is finally applied to investigate land subsidence in the Vietnamese part of the Mekong Delta, which has been affected by subsidence and various other environmental problems for several decades. The estimated subsidence rates show high variability at short as well as large spatial scales. The highest subsidence rates of up to 6 cm per year occur mainly in urban areas. It can be shown that most of the subsidence originates in the shallow subsurface. The presented method for reducing spatially correlated noise significantly improves the results if an adequate spatial distribution of reference areas is available; in that case, the noise is effectively reduced, and independent results from two interferogram stacks acquired from different orbits show close agreement. For the analysed six-year time series, the integration of TPS points yields a considerably larger number of identified TPS than PS points across the whole study area and thus improves the observation network substantially. A special use case of the TPS integration is presented, based on clustering TPS points that appeared within the analysed time series, in order to systematically identify new constructions and analyse their initial displacement time series.
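The covariance-based reference idea described above (sampling spatially correlated noise at reference pixels with known displacements and interpolating it to the remaining PS pixels) can be sketched as follows. A faithful implementation would interpolate according to the estimated spatial covariance of the noise (i.e. kriging); this sketch substitutes simple inverse-distance weighting, and all coordinates and phase values are synthetic.

```python
import numpy as np

# Sketch: sample noise at reference pixels, interpolate to PS pixels.
# Inverse-distance weighting stands in for covariance-based (kriging)
# interpolation; coordinates and phase values are synthetic.
def interpolate_noise(ref_xy, ref_noise, ps_xy, p=2.0):
    """Interpolate noise sampled at reference pixels onto PS pixels."""
    d = np.linalg.norm(ps_xy[:, None, :] - ref_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)                  # guard against zero distance
    w = 1.0 / d ** p
    return (w * ref_noise).sum(axis=1) / w.sum(axis=1)

ref_xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # reference pixels
ref_noise = np.array([0.2, -0.1, 0.4])                     # sampled noise [rad]
ps_xy = np.array([[5.0, 5.0], [1.0, 1.0]])                 # PS pixels
est = interpolate_noise(ref_xy, ref_noise, ps_xy)
# Subtracting `est` from the PS phases removes the spatially correlated part.
print(est.round(3))
```

The interpolated value is a convex combination of the reference samples, so it always stays within the range of the sampled noise; replacing the weights with covariance-derived kriging weights recovers the method's actual statistical reasoning.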
Geodetic monitoring of complex shaped infrastructures using Ground-Based InSAR
In the context of climate change, alternatives to fossil energies need to be used as much as possible to produce electricity. Hydroelectric power generation using dams stands out as one of the most effective approaches in this endeavour. To assess the safe operation of dams, various monitoring sensors can be installed, with different characteristics w.r.t. spatial resolution, temporal resolution and accuracy. Among the array of techniques available, ground-based synthetic aperture radar (GB-SAR) has not yet been widely adopted for this purpose. Despite its remarkable balance between the aforementioned attributes, its sensitivity to atmospheric disruptions, its specific acquisition geometry, and the need for phase unwrapping collectively constrain its usage. Several processing strategies are developed in this thesis to capitalise on all the opportunities of GB-SAR systems, such as continuous, flexible and autonomous observation combined with high resolution and accuracy.
The first challenge that needs to be solved is to accurately localise the GB-SAR and estimate its azimuth, in order to improve the geocoding of the image in the subsequent step. A ray tracing algorithm and tomographic techniques are used to recover these external parameters of the sensor. The introduction of corner reflectors for validation purposes confirms a significant error reduction. However, for the subsequent geocoding, challenges persist in scenarios involving vertical structures due to foreshortening and layover, which notably compromise the geocoding quality of the observed points. These issues arise when multiple points at varying elevations are encapsulated within a single resolution cell, making it difficult to pinpoint the precise location of the scattering point responsible for the signal return. To surmount these hurdles, a Bayesian approach grounded in intensity models is formulated, offering a tool to enhance the accuracy of the geocoding process. The approach is validated on a dam in the Black Forest in Germany, characterised by a very specific structure.
The second part of this thesis focuses on the feasibility of using GB-SAR systems for long-term geodetic monitoring of large structures. A first assessment is made by testing large temporal baselines between acquisitions for epoch-wise monitoring. Due to large displacements, phase unwrapping cannot recover all the information; an improvement is made by adapting the geometry of the signal processing using principal component analysis. The main case study consists of several campaigns from different stations at Enguri Dam in Georgia. The consistency of the estimated displacement map is assessed by comparing it to a numerical model calibrated on plumb-line data. The two results agree strongly, supporting the use of GB-SAR for epoch-wise monitoring, as it can measure several thousand points on the dam; the comparison also demonstrates the possibility of detecting local anomalies in the numerical model. Finally, the instrument has been installed for continuous monitoring for over two years at Enguri Dam. A dedicated workflow is developed to eliminate the drift occurring with classical interferometric algorithms, in order to achieve the accuracy required for geodetic monitoring. The analysis of the obtained time series agrees well with classical parametric models of dam deformation. Moreover, the results of this processing strategy are also compared against the numerical model and show high consistency. A final confirming result is the comparison of the GB-SAR time series with the output of four GNSS stations installed on the dam crest.
The developed algorithms and methods increase the capabilities of GB-SAR for dam monitoring in different configurations. It can be a valuable supplement to other classical sensors for long-term geodetic observation as well as short-term monitoring during particular dam operations.
Flood dynamics derived from video remote sensing
Flooding is by far the most pervasive natural hazard, with the human impacts of floods expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models.
Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated using observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the assessment of the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast video datasets of high resolution. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence, is enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights from datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high resolution topographic data. In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter where river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model.
Harnessing the ability of deep neural networks to learn complex features and deliver accurate and contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is exhibited. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographical data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which is used to estimate river discharge. Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications in the domain of flood modelling science.
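The image velocimetry principle underlying LSPIV (finding the inter-frame displacement of surface features as the peak of a cross-correlation) can be illustrated with a toy example. The frames and shift below are synthetic; real LSPIV workflows add interrogation windows, sub-pixel peak fitting and orthorectification before converting pixel shifts to surface velocities.

```python
import numpy as np

# Toy PIV: recover the shift between two frames as the peak of their
# circular cross-correlation, computed via FFT. Frames are synthetic.
rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))
dy, dx = 3, 5                         # true pixel shift (synthetic)
frame2 = np.roll(np.roll(frame1, dy, axis=0), dx, axis=1)

# Cross-correlation via FFT (circular, which is exact for this toy case).
corr = np.fft.ifft2(np.fft.fft2(frame2) * np.conj(np.fft.fft2(frame1))).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)  # (3, 5): velocity = shift * pixel_size / frame_interval
```

The recovered pixel shift, scaled by the ground sampling distance and divided by the frame interval, gives the surface velocity that is then assimilated into the hydraulic model.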
Laboratory multistatic 3D SAR with polarimetry and sparse aperture sampling
With the advent of constellations of SAR satellites, and the possibility of swarms of SAR UAVs, there is increased interest in multistatic SAR image formation. This may provide advantages including three-dimensional image formation free of clutter overlay; the coherent combination of bistatic SAR geometries for improved image resolution; and the collection of additional scattering information, including polarimetric. The polarimetric collection may provide useful target information, such as orientation, polarisability, or number of interactions with the radar signal; distributed receivers would be more likely to capture any bright specular responses from targets in the scene, making target outlines distinct. Highlight results from multistatic polarimetric SAR experiments at the Cranfield University GBSAR laboratory are presented, illustrating the utility of the approach for fully sampled 3D SAR image formation, and for sparse aperture SAR 3D point-cloud generation with a newly developed volumetric multistatic interferometry algorithm.
Defence Science and Technology Laboratory. Grant Number: P1568
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence ("AI") and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics; and although AI was initially allowed to develop largely without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
Interpretable Machine Learning Architectures for Efficient Signal Detection with Applications to Gravitational Wave Astronomy
Deep learning has seen rapid evolution in the past decade, accomplishing tasks that were previously unimaginable. At the same time, researchers strive to better understand and interpret the underlying mechanisms of the deep models, which are often justifiably regarded as "black boxes". Overcoming this deficiency will not only serve to suggest better learning architectures and training methods, but also extend deep learning to scenarios where interpretability is key to the application. One such scenario is signal detection and estimation, with gravitational wave detection as a specific example, where classic methods are often preferred for their interpretability. Nonetheless, while classic statistical detection methods such as matched filtering excel in their simplicity and intuitiveness, they can be suboptimal in terms of both accuracy and computational efficiency. Therefore, it is appealing to have methods that achieve "the best of both worlds", namely enjoying simultaneously excellent performance and interpretability.
In this thesis, we aim to bridge this gap between modern deep learning and classic statistical detection, by revisiting the signal detection problem from a new perspective. First, to address the perceived distinction in interpretability between classic matched filtering and deep learning, we establish the intrinsic connections between the two families of methods, and identify how trainable networks can address the structural limitations of matched filtering. Based on these ideas, we propose two trainable architectures that are constructed based on matched filtering, but with learnable templates and adaptivity to unknown noise distributions, and therefore higher detection accuracy. We next turn our attention toward improving the computational efficiency of detection, where we aim to design architectures that leverage structures within the problem for efficiency gains. By leveraging the statistical structure of class imbalance, we integrate hierarchical detection into trainable networks, and use a novel loss function which explicitly encodes both detection accuracy and efficiency. Furthermore, by leveraging the geometric structure of the signal set, we consider using signal space optimization as an alternative computational primitive for detection, which is intuitively more efficient than covering with a template bank. We theoretically prove the efficiency gain by analyzing Riemannian gradient descent on the signal manifold, which reveals an exponential improvement in efficiency over matched filtering. We also propose a practical trainable architecture for template optimization, which makes use of signal embedding and kernel interpolation.
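For readers unfamiliar with the classic baseline that the proposed architectures build on, a minimal matched-filter sketch follows: the detection statistic is the peak correlation of the data with a known template, compared against a threshold. The template shape, noise level and threshold are illustrative choices, not those used in the thesis.

```python
import numpy as np

# Minimal matched filter: correlate the data with a unit-norm template
# and threshold the peak. Template, noise level and threshold are
# synthetic choices for illustration.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 512)
template = np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)
template /= np.linalg.norm(template)          # unit-norm template

# Data: the template buried at an unknown shift, plus white noise.
data = 0.5 * np.roll(template, 40) + 0.05 * rng.standard_normal(t.size)

# Slide the template over the data; the peak is the detection statistic.
stat = np.correlate(data, template, mode="same")
detected = stat.max() > 0.3                   # ad-hoc threshold
print(bool(detected))  # True
```

A template bank repeats this correlation for many candidate waveforms; the trainable architectures described above replace the fixed templates with learnable ones and adapt the statistic to the noise distribution.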
We demonstrate the performance of all proposed architectures on the task of gravitational wave detection in astrophysics, where matched filtering is the current method of choice. The architectures are also widely applicable to general signal or pattern detection tasks, which we exemplify with the handwritten digit recognition task using the template optimization architecture. Together, we hope this work will be useful to scientists and engineers seeking machine learning architectures with high performance and interpretability, and will contribute to our understanding of deep learning as a whole.
Non-invasive and non-intrusive diagnostic techniques for gas-solid fluidized beds â A review
Gas-solid fluidized-bed systems offer great advantages in terms of chemical reaction efficiency and temperature control where other chemical reactor designs fall short. For this reason, they have been widely employed in a range of industrial applications where these properties are essential. Nonetheless, the knowledge of such systems and the corresponding design choices, in most cases, rely on heuristic expertise gained over the years rather than on a deep physical understanding of the phenomena taking place in fluidized beds. This is a major limiting factor when it comes to the design, scale-up and optimization of such complex units. Fortunately, a wide array of diagnostic techniques has enabled researchers to make progress in this direction, and, among these, non-invasive and non-intrusive diagnostic techniques stand out thanks to their innate feature of not affecting the flow field, while also avoiding direct contact with the medium under study. This work offers an overview of the non-invasive and non-intrusive diagnostic techniques most commonly applied to fluidized-bed systems, highlighting their capabilities in terms of the quantities they can measure, as well as the advantages and limitations of each. The latest developments and likely future trends are also presented. None of these methodologies represents the best option on all fronts; the goal of this work is rather to highlight what each technique has to offer and which applications each is better suited for.
A New Restriction on Low-Redundancy Restricted Array and Its Good Solutions
In array signal processing, a fundamental problem is to design a sensor array with low redundancy and reduced mutual coupling, which are the main features for improving the performance of direction-of-arrival (DOA) estimation. For a -sensor array with aperture , it is called low-redundancy (LR) if the ratio approaches Leech's bound for ; and the mutual coupling is often reduced by decreasing the numbers of sensor pairs with the first three smallest inter-spacings, denoted as with . Many works have been done to construct large LRAs, whose spacing structures all coincide with a common pattern with the restriction . Here denote the spacings between adjacent sensors, and is the largest one. The objective of this paper is to find new arrays with a lower redundancy ratio or lower mutual coupling than known arrays. To do this, we give a new restriction for to be , and obtain 2 classes of -type arrays, 2 classes of -type arrays, and 1 class of -type arrays for any . Here -type means that . Notably, compared with known arrays of the same type, one of our new -type arrays and the new -type array both achieve the lowest mutual coupling, and their uDOFs are at most 4 less for any ; compared with SNA and MISC arrays, the new -type array has a significant reduction in both redundancy ratio and mutual coupling. We should emphasize that the new -type array in this paper is the first class of arrays achieving and for any …