Investigating the dynamics of Greenland's glacier-fjord systems
Over the past two decades, Greenland's tidewater glaciers have dramatically retreated, thinned and accelerated, contributing significantly to sea level rise. This change in glacier behaviour is thought to have been triggered by increasing atmospheric and ocean temperatures, and mass loss from Greenland's tidewater glaciers is predicted to continue this century. Substantial research during this period of rapid glacier change has improved our understanding of Greenland's glacier-fjord systems. However, many of the processes operating in these systems that ultimately control the response of tidewater glaciers to changing atmospheric and oceanic conditions are poorly understood. This thesis combines modelling and remote sensing to investigate two particularly poorly-understood components of glacier-fjord systems, with the ultimate aim of improving understanding of recent glacier behaviour and constraining the stability of the ice sheet in a changing climate.
The research presented in this thesis begins with an investigation into the dominant controls on the seasonal dynamics of contrasting tidewater glaciers draining the Greenland Ice Sheet. To do this, high resolution estimates of ice velocity were generated and compared with detailed observations and modelling of the principal controls on seasonal glacier flow, including terminus position, ice mélange presence or absence, ice sheet surface melting and runoff, and plume presence or absence. These data revealed characteristic seasonal and shorter-term changes in ice velocity at each of the study glaciers in more detail than was available from previous remote sensing studies. Of all the environmental controls examined, seasonal evolution of subglacial hydrology (as inferred from plume observations and modelling) was best able to explain the observed ice flow variations, despite differences in geometry and flow of the study glaciers. The inferred relationships between subglacial hydrology and ice dynamics were furthermore entirely consistent with process-understanding developed at land-terminating sectors of the ice sheet. This investigation provides a more detailed understanding of tidewater glacier subglacial hydrology and its interaction with ice dynamics than was previously available and suggests that interannual variations in meltwater supply may have limited influence on annually averaged ice velocity.
The thesis then shifts its attention from the glacier part of the system into the fjords, focusing on the interaction between icebergs, fjord circulation and fjord water properties. This focus on icebergs is motivated by recent research revealing that freshwater produced by iceberg melting constitutes an important component of fjord freshwater budgets, yet the impact of this freshwater on fjords was unknown. To investigate this, a new model for iceberg-ocean interaction is developed and incorporated into an ocean circulation model.
This new model is first applied to Sermilik Fjord (a large fjord in east Greenland that hosts Helheim Glacier, one of the largest tidewater glaciers draining the ice sheet) to further constrain iceberg freshwater production and to quantify the influence of iceberg melting on fjord circulation and water properties. These investigations reveal that iceberg freshwater flux increases with ice sheet runoff raised to the power ~0.1 and ranges from ~500-2500 m³ s⁻¹ during summer, with ~40% of that produced below the pycnocline. It is also shown that icebergs substantially modify the temperature and velocity structure of Sermilik Fjord, causing 1-5°C cooling in the upper ~100 m and invigorating fjord circulation, which in turn causes a 10-40% increase in oceanic heat flux towards Helheim Glacier. This research highlights the important role of icebergs in Greenland's iceberg-congested fjords and therefore the need to include them in future studies examining ice sheet–ocean interaction.
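The weak power-law scaling reported above can be made concrete with a short sketch. The prefactor below is hypothetical, chosen only so the output falls in the reported summer range; it is not a value from the thesis.

```python
# Illustrative sketch of the reported scaling: iceberg freshwater flux
# grows with ice sheet runoff raised to the power ~0.1. The prefactor
# `a` is hypothetical, chosen only so that outputs fall in the reported
# summer range of ~500-2500 m^3 s^-1; it is not from the thesis.

def iceberg_freshwater_flux(runoff, a=1200.0, exponent=0.1):
    """Freshwater flux (m^3 s^-1) as a power law of runoff (m^3 s^-1)."""
    return a * runoff ** exponent

# The weak exponent means even a fourfold change in runoff changes the
# flux by only ~15%, consistent with the thesis' suggestion that
# interannual runoff variations have limited dynamic influence:
low = iceberg_freshwater_flux(250.0)
high = iceberg_freshwater_flux(1000.0)
print(round(high / low, 3))  # 4**0.1, about 1.149
```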
Having investigated the effect of icebergs on fjord circulation in a realistic setting, this thesis then characterises the effect of submarine iceberg melting on water properties near the ice sheet–ocean interface by applying the new model to a range of idealised scenarios. This near-glacier region is crucial for constraining ocean-driven retreat of tidewater glaciers but remains poorly understood. The simulations show that icebergs are important modifiers of glacier-adjacent water properties, generally acting to reduce vertical variations in water temperature. The iceberg-induced temperature changes will generally increase submarine melt rates at mid-depth and decrease rates at the surface, with less pronounced effects at greater depth. This highlights another mechanism by which iceberg melting can affect ice sheet–ocean interaction and emphasises the need to account for iceberg-ocean interaction when simulating ocean-driven retreat of Greenland's tidewater glaciers.
In summary, this thesis has helped to provide a deeper understanding of two poorly-understood components of Greenland's tidewater glacier-fjord systems: (i) interactions between subglacial hydrology and ice velocity, and (ii) iceberg-ocean interaction. This research has enabled more precise interpretations of past glacier behaviour and can be used to inform model development that will help constrain future ice sheet mass loss in response to a changing climate.
Robust and Flexible Persistent Scatterer Interferometry for Long-Term and Large-Scale Displacement Monitoring
Persistent Scatterer Interferometry (PSI) is a method for monitoring displacements of the Earth's surface from space. It is based on identifying and analysing stable point scatterers (so-called persistent scatterers, PS) by applying time series analysis approaches to stacks of SAR interferograms. PS points dominate the backscatter of the resolution cells in which they are located and are characterised by low decorrelation. Displacements of such PS points can be monitored with potential sub-millimetre accuracy if noise sources are effectively minimised.
Over time, PSI has matured into an operational technology for certain applications. Challenging applications for the method nevertheless remain. Physical changes of the land surface and changes in the acquisition geometry can cause PS points to appear or disappear over time. The number of continuously coherent PS points decreases with increasing time series length, while the number of temporary PS (TPS) points increases, i.e. points that are coherent only during one or more separate segments of the analysed time series. It is therefore desirable to integrate the analysis of such TPS points into PSI in order to develop a flexible PSI system that can cope with dynamic changes of the land surface and thus enables continuous displacement monitoring. A further challenge for PSI is large-scale monitoring in regions with complex atmospheric conditions, which lead to high uncertainty in the displacement time series at large distances from the spatial reference.
This thesis deals with modifications and extensions, realised on the basis of an existing PSI algorithm, to develop a robust and flexible PSI approach that can cope with the challenges described above. As a first main contribution, a method is presented that fully integrates TPS points into PSI. Evaluation studies with real SAR data show that the integration of TPS points indeed enables dynamic changes of the land surface to be handled, and that it becomes increasingly relevant for PSI-based observation networks as the time series length grows. The second main contribution is a method for covariance-based reference integration in large-scale PSI applications to estimate spatially correlated noise. The method is based on sampling the noise at reference pixels with known displacement time series and subsequently interpolating it to the remaining PS pixels, taking the spatial statistics of the noise into account. A simulation study and a study with real data show that the method outperforms alternative methods that reduce spatially correlated noise in interferograms by means of reference integration.
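The covariance-based interpolation idea can be sketched with a simple-kriging estimator: noise sampled at reference pixels is propagated to another PS pixel through an assumed spatial covariance model. The exponential model and its parameters below are illustrative assumptions, not the ones used in the thesis.

```python
import numpy as np

# Hedged sketch of covariance-based noise interpolation: noise sampled at
# reference pixels is interpolated to a target PS pixel by simple kriging
# under an assumed exponential covariance model (sill and correlation
# length are illustrative, not from the thesis).

def exp_cov(d, sill=1.0, corr_len=5.0):
    """Exponential covariance as a function of distance d (arbitrary units)."""
    return sill * np.exp(-d / corr_len)

def interpolate_noise(ref_xy, ref_noise, target_xy):
    """Simple-kriging estimate of spatially correlated noise at target_xy."""
    # Pairwise distances among references, and references-to-target.
    d_rr = np.linalg.norm(ref_xy[:, None, :] - ref_xy[None, :, :], axis=-1)
    d_r0 = np.linalg.norm(ref_xy - target_xy, axis=-1)
    # Kriging weights solve C_rr w = c_r0.
    weights = np.linalg.solve(exp_cov(d_rr), exp_cov(d_r0))
    return weights @ ref_noise

ref_xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
ref_noise = np.array([0.4, -0.2, 0.1])  # sampled noise at references (e.g. rad)
est = interpolate_noise(ref_xy, ref_noise, np.array([1.0, 1.0]))
# The estimate can then be subtracted from the interferometric phase at
# the PS pixel before displacement estimation.
```

At a reference location itself the estimator reproduces the sampled value exactly, which is the defining property that makes the interpolated field consistent with the known reference time series.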
The developed PSI method is finally applied to investigate land subsidence in the Vietnamese part of the Mekong Delta, which has been affected by subsidence and various other environmental problems for several decades. The estimated subsidence rates show high variability at short as well as large spatial scales. The highest subsidence rates of up to 6 cm per year occur mainly in urban areas. It can be shown that most of the subsidence originates in the shallow subsurface. The presented method for reducing spatially correlated noise significantly improves the results when an adequate spatial distribution of reference areas is available. In that case, the noise is effectively reduced, and independent results from two interferogram stacks acquired from different orbits show close agreement. For the analysed six-year time series, the integration of TPS points yields a considerably larger number of identified TPS than PS points across the study area and thus substantially improves the observation network. A special use case of the TPS integration is presented, based on clustering TPS points that appeared within the analysed time series, in order to systematically identify new constructions and analyse their initial displacement time series.
Geodetic monitoring of complex shaped infrastructures using Ground-Based InSAR
In the context of climate change, alternatives to fossil fuels must be used as much as possible to produce electricity, and hydroelectric power generation using dams is one of the most effective of these approaches. To ensure their safe operation, various monitoring sensors can be installed, with different characteristics with respect to spatial resolution, temporal resolution and accuracy. Among the available techniques, ground-based synthetic aperture radar (GB-SAR) has not yet been widely adopted for this purpose. Despite its remarkable balance between the aforementioned attributes, its sensitivity to atmospheric disturbances, its specific acquisition geometry and the need for phase unwrapping have constrained its usage. Several processing strategies are developed in this thesis to capitalise on the opportunities of GB-SAR systems, such as continuous, flexible and autonomous observation combined with high resolution and accuracy.
The first challenge is to accurately localise the GB-SAR and estimate its azimuth so as to improve the geocoding of the image in the subsequent step. A ray tracing algorithm and tomographic techniques are used to recover these external parameters of the sensor, and the introduction of corner reflectors for validation confirms a significant error reduction. For the subsequent geocoding, however, challenges persist in scenarios involving vertical structures due to foreshortening and layover, which notably compromise the geocoding quality of the observed points. These issues arise when multiple points at varying elevations fall within a single resolution cell, making it difficult to pinpoint the precise location of the scattering point responsible for the signal return. To surmount these hurdles, a Bayesian approach grounded in intensity models is formulated, offering a tool to enhance the accuracy of the geocoding process. The validation is carried out on a dam in the Black Forest in Germany, characterised by a very specific structure.
The second part of this thesis focuses on the feasibility of using GB-SAR systems for long-term geodetic monitoring of large structures. A first assessment is made by testing large temporal baselines between acquisitions for epoch-wise monitoring. Due to large displacements, phase unwrapping cannot recover all the information; an improvement is made by adapting the geometry of the signal processing using principal component analysis. The main case study consists of several campaigns from different stations at Enguri Dam in Georgia. The consistency of the estimated displacement map is assessed by comparing it to a numerical model calibrated on plumb line data. The two results show strong agreement, supporting the use of GB-SAR for epoch-wise monitoring, as it can measure several thousand points on the dam; it also demonstrates the possibility of detecting local anomalies in the numerical model. Finally, the instrument has been installed for continuous monitoring for over two years at Enguri Dam. A dedicated workflow is developed to eliminate the drift that occurs with classical interferometric algorithms, achieving the accuracy required for geodetic monitoring. The analysis of the obtained time series agrees well with classical parametric models of dam deformation. Moreover, the results of this processing strategy are also compared against the numerical model and show high consistency. A final reassuring result is the comparison of the GB-SAR time series with the output of four GNSS stations installed on the dam crest.
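The interferometric principle underlying all of these displacement estimates can be sketched in a few lines: an unwrapped phase change maps to a line-of-sight displacement through the radar wavelength. The Ku-band carrier frequency below is an assumption (typical for GB-SAR instruments), not a detail taken from the thesis.

```python
import math

# Minimal sketch of the GB-SAR interferometric principle: an unwrapped
# phase change maps to a line-of-sight displacement via the wavelength.
# The carrier frequency is an assumed, typical Ku-band value.

C = 299_792_458.0          # speed of light, m/s
FREQ = 17.2e9              # assumed Ku-band carrier frequency, Hz
WAVELENGTH = C / FREQ      # ~17.4 mm

def los_displacement(delta_phase_rad):
    """Line-of-sight displacement (m) from an unwrapped phase change (rad)."""
    return -WAVELENGTH / (4.0 * math.pi) * delta_phase_rad

# A full 2*pi phase cycle corresponds to half a wavelength (~8.7 mm),
# which is why phase unwrapping fails once displacements between
# acquisitions grow large, as noted for epoch-wise monitoring above.
print(abs(los_displacement(2 * math.pi)) * 1000)  # ~8.7 (mm)
```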
The developed algorithms and methods increase the capabilities of GB-SAR for dam monitoring in different configurations. It can be a valuable supplement to other classical sensors for long-term geodetic observation as well as for short-term monitoring during particular dam operations.
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence ("AI") and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics, and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
Interpretable Machine Learning Architectures for Efficient Signal Detection with Applications to Gravitational Wave Astronomy
Deep learning has seen rapid evolution in the past decade, accomplishing tasks that were previously unimaginable. At the same time, researchers strive to better understand and interpret the underlying mechanisms of the deep models, which are often justifiably regarded as "black boxes". Overcoming this deficiency will not only serve to suggest better learning architectures and training methods, but also extend deep learning to scenarios where interpretability is key to the application. One such scenario is signal detection and estimation, with gravitational wave detection as a specific example, where classic methods are often preferred for their interpretability. Nonetheless, while classic statistical detection methods such as matched filtering excel in their simplicity and intuitiveness, they can be suboptimal in terms of both accuracy and computational efficiency. Therefore, it is appealing to have methods that achieve "the best of both worlds", namely enjoying simultaneously excellent performance and interpretability.
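The classic matched filter mentioned above can be illustrated compactly: under additive white Gaussian noise, correlating the data against a known template is the optimal linear detector. Shapes, amplitudes and the injected waveform below are illustrative only, not from the thesis.

```python
import numpy as np

# Hedged sketch of classic matched filtering: the detection statistic is
# the peak correlation of the data with a unit-norm template. All sizes
# and amplitudes here are illustrative.

rng = np.random.default_rng(0)

def matched_filter_snr(data, template):
    """Peak correlation of data with a unit-norm template."""
    template = template / np.linalg.norm(template)
    return np.max(np.correlate(data, template, mode="valid"))

template = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 128))
noise = rng.normal(0.0, 1.0, 1024)
signal = noise.copy()
signal[400:528] += 3.0 * template        # inject a scaled template

# The statistic is much larger when the signal is present, so
# thresholding it yields a detector. A template *bank* repeats this over
# many candidate waveforms, which is exactly the computational cost the
# thesis seeks to reduce via learned templates and signal-space search.
print(matched_filter_snr(signal, template) > matched_filter_snr(noise, template))
```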
In this thesis, we aim to bridge this gap between modern deep learning and classic statistical detection, by revisiting the signal detection problem from a new perspective. First, to address the perceived distinction in interpretability between classic matched filtering and deep learning, we state the intrinsic connections between the two families of methods, and identify how trainable networks can address the structural limitations of matched filtering. Based on these ideas, we propose two trainable architectures that are constructed based on matched filtering, but with learnable templates and adaptivity to unknown noise distributions, and therefore higher detection accuracy. We next turn our attention toward improving the computational efficiency of detection, where we aim to design architectures that leverage structures within the problem for efficiency gains. By leveraging the statistical structure of class imbalance, we integrate hierarchical detection into trainable networks, and use a novel loss function which explicitly encodes both detection accuracy and efficiency. Furthermore, by leveraging the geometric structure of the signal set, we consider using signal space optimization as an alternative computational primitive for detection, which is intuitively more efficient than covering with a template bank. We theoretically prove the efficiency gain by analyzing Riemannian gradient descent on the signal manifold, which reveals an exponential improvement in efficiency over matched filtering. We also propose a practical trainable architecture for template optimization, which makes use of signal embedding and kernel interpolation.
We demonstrate the performance of all proposed architectures on the task of gravitational wave detection in astrophysics, where matched filtering is the current method of choice. The architectures are also widely applicable to general signal or pattern detection tasks, which we exemplify with the handwritten digit recognition task using the template optimization architecture. Together, we hope this work is useful to scientists and engineers seeking machine learning architectures with high performance and interpretability, and that it contributes to our understanding of deep learning as a whole.
Fault structure and slip mechanics of the 2022 Mw 6.7 Menyuan earthquake revealed by coseismic rupture observations
© 2023 The Authors. Large and shallow strike-slip earthquakes produce striking ground ruptures, damaging roads and infrastructure but providing great opportunities for examining the fault's structure. After the 2022 Mw 6.7 Menyuan earthquake, we observed abundant surface fractures by a combination of optical remote sensing, radar offset and unmanned aerial vehicle measurements. These fractures reveal a complex fault structure including apparent bending geometries and bifurcating branches, which are essential to understanding the mechanisms of faulting. In this paper, we used triangular dislocations to construct a fault geometry that reflected the distribution of measured strike changes but avoided unexpected discontinuities and overlaps where the fault bent. The modeled fault geometry revealed an extensional releasing bend which was responsible for the southward branching of the fault rupture at its western edge. Our results also demonstrated the potential to explain the occurrence of aftershock clusters and to infer their fault geometries through correlation analysis of the aftershock distribution and the slip-induced stress field. The triangular dislocation model also enabled the calculation of the fault plane roughness and its spatial variation, which directly controlled the fault slip magnitude and rupture termination. These analyses reveal an unprecedented level of detail of the fault structure and slip mechanics and, to some extent, offer insights into the physical processes and structural properties of crustal faults in the Earth's shallow crust.
Natural and Technological Hazards in Urban Areas
Natural hazard events and technological accidents are separate causes of environmental impacts. Natural hazards are physical phenomena that have been active over geological time, whereas technological hazards result from actions or facilities created by humans. Increasingly, however, combined natural and man-made hazards are being induced. Overpopulation and urban development in areas prone to natural hazards increase the impact of natural disasters worldwide. Additionally, urban areas are frequently characterized by intense industrial activity and rapid, poorly planned growth that threatens the environment and degrades the quality of life. Proper urban planning is therefore crucial to minimize fatalities and reduce the environmental and economic impacts that accompany both natural and technological hazardous events.
Emerging Approaches for THz Array Imaging: A Tutorial Review and Software Tool
Accelerated by the increasing attention drawn by 5G, 6G, and Internet of Things applications, communication and sensing technologies have rapidly evolved from millimeter-wave (mmWave) to terahertz (THz) in recent years. Enabled by significant advancements in electromagnetic (EM) hardware, mmWave and THz frequency regimes spanning 30 GHz to 300 GHz and 300 GHz to 3000 GHz, respectively, can be employed for a host of applications. The main feature of THz systems is high-bandwidth transmission, enabling ultra-high-resolution imaging and high-throughput communications; however, challenges in both the hardware and algorithmic arenas remain for the ubiquitous adoption of THz technology. Spectra comprising mmWave and THz frequencies are well-suited for synthetic aperture radar (SAR) imaging at sub-millimeter resolutions for a wide spectrum of tasks like material characterization and nondestructive testing (NDT). This article provides a tutorial review of systems and algorithms for THz SAR in the near-field with an emphasis on emerging algorithms that combine signal processing and machine learning techniques. As part of this study, an overview of classical and data-driven THz SAR algorithms is provided, focusing on object detection for security applications and SAR image super-resolution. We also discuss relevant issues, challenges, and future research directions for emerging algorithms and THz SAR, including standardization of system and algorithm benchmarking, adoption of state-of-the-art deep learning techniques, signal processing-optimized machine learning, and hybrid data-driven signal processing algorithms.
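The link between THz bandwidths and the sub-millimeter resolutions mentioned above follows from the standard wideband-radar relation: range resolution is c/(2B). The example sweep bandwidths below are illustrative, not values from the article.

```python
# Short numerical illustration of why THz bandwidths enable the
# sub-millimeter SAR resolutions discussed above: slant-range resolution
# of a wideband radar is c / (2B). Example bandwidths are illustrative.

C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Slant-range resolution (m) for signal bandwidth B: c / (2B)."""
    return C / (2.0 * bandwidth_hz)

for b in (4e9, 40e9, 400e9):  # mmWave through THz sweep bandwidths
    print(f"B = {b / 1e9:6.0f} GHz -> {range_resolution(b) * 1000:.3f} mm")
# A ~400 GHz sweep reaches sub-millimeter range resolution (~0.37 mm).
```

Cross-range (azimuth) resolution depends instead on the synthetic aperture geometry, which is why near-field aperture design receives separate treatment in the article.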