
    Investigating the dynamics of Greenland's glacier-fjord systems

    Over the past two decades, Greenland’s tidewater glaciers have dramatically retreated, thinned and accelerated, contributing significantly to sea level rise. This change in glacier behaviour is thought to have been triggered by increasing atmospheric and ocean temperatures, and mass loss from Greenland’s tidewater glaciers is predicted to continue this century. Substantial research during this period of rapid glacier change has improved our understanding of Greenland’s glacier-fjord systems. However, many of the processes operating in these systems that ultimately control the response of tidewater glaciers to changing atmospheric and oceanic conditions are poorly understood. This thesis combines modelling and remote sensing to investigate two particularly poorly-understood components of glacier-fjord systems, with the ultimate aim of improving understanding of recent glacier behaviour and constraining the stability of the ice sheet in a changing climate. The research presented in this thesis begins with an investigation into the dominant controls on the seasonal dynamics of contrasting tidewater glaciers draining the Greenland Ice Sheet. To do this, high resolution estimates of ice velocity were generated and compared with detailed observations and modelling of the principal controls on seasonal glacier flow, including terminus position, ice mélange presence or absence, ice sheet surface melting and runoff, and plume presence or absence. These data revealed characteristic seasonal and shorter-term changes in ice velocity at each of the study glaciers in more detail than was available from previous remote sensing studies. Of all the environmental controls examined, seasonal evolution of subglacial hydrology (as inferred from plume observations and modelling) was best able to explain the observed ice flow variations, despite differences in geometry and flow of the study glaciers.
The inferred relationships between subglacial hydrology and ice dynamics were furthermore entirely consistent with process-understanding developed at land-terminating sectors of the ice sheet. This investigation provides a more detailed understanding of tidewater glacier subglacial hydrology and its interaction with ice dynamics than was previously available and suggests that interannual variations in meltwater supply may have limited influence on annually averaged ice velocity. The thesis then shifts its attention from the glacier part of the system into the fjords, focusing on the interaction between icebergs, fjord circulation and fjord water properties. This focus on icebergs is motivated by recent research revealing that freshwater produced by iceberg melting constitutes an important component of fjord freshwater budgets, yet the impact of this freshwater on fjords was unknown. To investigate this, a new model for iceberg-ocean interaction is developed and incorporated into an ocean circulation model. This new model is first applied to Sermilik Fjord — a large fjord in east Greenland that hosts Helheim Glacier, one of the largest tidewater glaciers draining the ice sheet — to further constrain iceberg freshwater production and to quantify the influence of iceberg melting on fjord circulation and water properties. These investigations reveal that iceberg freshwater flux increases with ice sheet runoff raised to the power ~0.1 and ranges from ~500-2500 m³ s⁻¹ during summer, with ~40% of that produced below the pycnocline. It is also shown that icebergs substantially modify the temperature and velocity structure of Sermilik Fjord, causing 1-5°C cooling in the upper ~100 m and invigorating fjord circulation, which in turn causes a 10-40% increase in oceanic heat flux towards Helheim Glacier.
This research highlights the important role of icebergs in Greenland’s iceberg-congested fjords and therefore the need to include them in future studies examining ice sheet-ocean interaction. Having investigated the effect of icebergs on fjord circulation in a realistic setting, this thesis then characterises the effect of submarine iceberg melting on water properties near the ice sheet-ocean interface by applying the new model to a range of idealised scenarios. This near-glacier region is crucial for constraining ocean-driven retreat of tidewater glaciers, but remains poorly understood. The simulations show that icebergs are important modifiers of glacier-adjacent water properties, generally acting to reduce vertical variations in water temperature. The iceberg-induced temperature changes will generally increase submarine melt rates at mid-depth and decrease rates at the surface, with less pronounced effects at greater depth. This highlights another mechanism by which iceberg melting can affect ice sheet-ocean interaction and emphasises the need to account for iceberg-ocean interaction when simulating ocean-driven retreat of Greenland’s tidewater glaciers. In summary, this thesis has helped to provide a deeper understanding of two poorly-understood components of Greenland’s tidewater glacier-fjord systems: (i) interactions between subglacial hydrology and ice velocity; and (ii) iceberg-ocean interaction. This research has enabled more precise interpretations of past glacier behaviour and can be used to inform model development that will help constrain future ice sheet mass loss in response to a changing climate. "I must express my gratitude to the University of St Andrews and to the Scottish Alliance for Geoscience, Environment and Society (SAGES) for funding and supporting me as a research student." -- Funding
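The reported power-law scaling can be illustrated with a small sketch. The coefficient k below is hypothetical, chosen only so the illustrative values fall within the reported ~500-2500 m³ s⁻¹ summer range; the thesis derives the flux from a coupled iceberg-ocean model, not from a closed-form formula like this.

```python
# Illustrative sketch (not the thesis model): iceberg freshwater flux
# grows with ice sheet runoff raised to ~0.1. The coefficient k is
# hypothetical, picked so outputs sit in the reported summer range.

def iceberg_freshwater_flux(runoff_m3s, k=950.0, exponent=0.1):
    """Power-law flux: Q_ib = k * runoff**exponent (m^3 s^-1)."""
    return k * runoff_m3s ** exponent

for runoff in (10, 100, 1000):
    q = iceberg_freshwater_flux(runoff)
    below_pycnocline = 0.4 * q  # ~40% is produced below the pycnocline
    print(f"runoff={runoff:5d} m^3/s -> flux={q:7.1f} m^3/s, "
          f"below pycnocline={below_pycnocline:6.1f} m^3/s")
```

The weak exponent (~0.1) is the notable feature: a hundred-fold increase in runoff raises the iceberg freshwater flux by only about 60%, consistent with the thesis's suggestion that the flux is relatively insensitive to interannual runoff variability.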

    Linear Amplification in Nonequilibrium Turbulent Boundary Layers

    Resolvent analysis is applied to nonequilibrium incompressible adverse pressure gradient (APG) turbulent boundary layers (TBL) and hypersonic boundary layers with high temperature real gas effects, including chemical nonequilibrium. Resolvent analysis is an equation-based, scale-dependent decomposition of the Navier-Stokes equations, linearized about a known mean flow field. The decomposition identifies the optimal response and forcing modes, ranked by their linear amplification. To treat the nonequilibrium APG TBL, a biglobal resolvent analysis approach is used to account for the streamwise and wall-normal inhomogeneities in the streamwise developing flow. For the hypersonic boundary layer in chemical nonequilibrium, the resolvent analysis is constructed using a parallel flow assumption, incorporating N₂, O₂, NO, N, and O as a mixture of chemically reacting gases. Biglobal resolvent analysis is first applied to the zero pressure gradient (ZPG) TBL. Scaling relationships are determined for the spanwise wavenumber and temporal frequency that admit self-similar resolvent modes in the inner layer, mesolayer, and outer layer regions of the ZPG TBL. The APG effects on the inner scaling of the biglobal modes are shown to diminish as their self-similarity improves with increased Reynolds number. An increase in APG strength is shown to increase the linear amplification of the large-scale biglobal modes in the outer region, similar to the energization of large-scale modes observed in simulation. The linear amplification of these modes grows linearly with the APG history, measured as the streamwise averaged APG strength, and relates to a novel pressure-based velocity scale. Resolvent analysis is then used to identify the length scales most affected by the high-temperature gas effects in hypersonic TBLs. It is shown that the high-temperature gas effects primarily affect modes localized near the peak mean temperature.
Due to the chemical nonequilibrium effects, the modes can be linearly amplified through changes in chemical concentration, which have non-negligible effects on the higher-order modes. Correlations in the components of the small-scale resolvent modes agree qualitatively with similar correlations in simulation data. Finally, efficient strategies for resolvent analysis are presented. These include an algorithm to autonomously sample the large amplification regions using a Bayesian Optimization-like approach and a projection-based method to approximate resolvent analysis through a reduced eigenvalue problem, derived from calculus of variations.
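The decomposition at the heart of resolvent analysis can be sketched on a toy problem. For linearized dynamics dq/dt = A q + f under harmonic forcing at frequency omega, the resolvent operator H = (i omega I - A)⁻¹ maps forcing to response; its leading singular triplet gives the optimal forcing, optimal response, and the linear amplification. The matrix A below is a random stand-in, not the boundary-layer operator from the thesis.

```python
import numpy as np

# Toy resolvent analysis: A is a random stable-ish stand-in operator,
# NOT the linearized boundary-layer operator from the thesis.
rng = np.random.default_rng(0)
n = 50
A = -np.eye(n) + 0.5 * rng.standard_normal((n, n))

omega = 1.0
# Resolvent operator H = (i*omega*I - A)^{-1}
H = np.linalg.inv(1j * omega * np.eye(n) - A)

# SVD ranks forcing/response pairs by linear amplification (singular value)
U, s, Vh = np.linalg.svd(H)
response, gain, forcing = U[:, 0], s[0], Vh[0].conj()

# The optimal forcing achieves exactly the leading amplification
assert np.isclose(np.linalg.norm(H @ forcing), gain)
print(f"leading linear amplification sigma_1 = {gain:.3f}")
```

In practice the operator is far too large for a dense SVD, which is the motivation for the efficient sampling and projection-based strategies described above; the toy problem only shows what quantity those strategies approximate.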

    Deep generative models for network data synthesis and monitoring

    Measurement and monitoring are fundamental tasks in all networks, enabling the downstream management and optimization of the network. Although networks inherently generate abundant monitoring data, accessing and effectively measuring those data is another matter. The challenges exist in many aspects. First, network monitoring data are inaccessible to external users, and it is hard to provide a high-fidelity dataset without leaking commercially sensitive information. Second, effective data collection covering a large-scale network system can be very expensive, given continued network growth, e.g., the number of cells in a radio network or the number of flows in an Internet Service Provider (ISP) network. Third, it is difficult to ensure fidelity and efficiency simultaneously in network monitoring, as the resources available in network elements to support measurement functions are too limited to implement sophisticated mechanisms. Finally, understanding and explaining the behaviour of the network becomes challenging due to its size and complex structure. Various emerging optimization-based solutions (e.g., compressive sensing) and data-driven solutions (e.g., deep learning) have been proposed for these challenges. However, the fidelity and efficiency of existing methods cannot yet meet current network requirements. The contributions made in this thesis significantly advance the state of the art in network measurement and monitoring techniques. Throughout the thesis, we leverage cutting-edge machine learning technology, namely deep generative modeling. First, we design and realize APPSHOT, an efficient city-scale network traffic sharing system built on a conditional generative model, which requires only open-source contextual data during inference (e.g., land use information and population distribution).
Second, we develop GENDT, an efficient drive testing system based on a generative model, which combines graph neural networks, conditional generation, and quantified model uncertainty to enhance the efficiency of mobile drive testing. Third, we design and implement DISTILGAN, a high-fidelity, efficient, versatile, and real-time network telemetry system using latent GANs and spectral-temporal networks. Finally, we propose SPOTLIGHT, an accurate, explainable, and efficient anomaly detection system for the Open RAN (Radio Access Network). The lessons learned through this research are summarized, and interesting topics are discussed for future work in this domain. All proposed solutions have been evaluated with real-world datasets and applied to support different applications in real systems.

    Flood dynamics derived from video remote sensing

    Flooding is by far the most pervasive natural hazard, with the human impacts of floods expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models. Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated using observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the assessment of the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast video datasets of high resolution. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence, is enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights into datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high resolution topographic data.
In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter where river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model. Harnessing the ability of deep neural networks to learn complex features and deliver accurate and contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is demonstrated. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographical data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which is used to estimate river discharge. Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications in the domain of flood modelling science.
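As a hedged sketch of how image-velocimetry surface velocities translate into discharge, the commonly used velocity-index method scales surface velocity by a depth-averaged ratio (alpha, often taken near 0.85) and integrates over the cross-section. The station geometry and velocities below are invented for illustration and are not from the thesis.

```python
# Velocity-index discharge sketch. alpha ~0.85 is a commonly assumed
# surface-to-depth-averaged velocity ratio; subsection widths, depths,
# and LSPIV surface velocities here are hypothetical.

def discharge(widths_m, depths_m, surface_vels_ms, alpha=0.85):
    """Q = sum over subsections of (width * depth) * (alpha * v_surface)."""
    return sum(w * d * alpha * v
               for w, d, v in zip(widths_m, depths_m, surface_vels_ms))

# Three cross-section subsections: width (m), depth (m), surface velocity (m/s)
widths = [5.0, 10.0, 5.0]
depths = [0.8, 1.5, 0.8]
vels = [0.6, 1.2, 0.5]

q = discharge(widths, depths, vels)
print(f"estimated discharge: {q:.2f} m^3/s")
```

The depths come from the high-resolution topographic/bathymetric data mentioned above; the surface velocities come from the video, which is why combining the two data sources enables discharge estimation without in-stream gauging.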

    Radiotherapy dosimetry with ultrasound contrast agents


    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.


    Architecture and Circuit Design Optimization for Compute-In-Memory

    The objective of the proposed research is to optimize computing-in-memory (CIM) design for accelerating Deep Neural Network (DNN) algorithms. As compute peripheries such as analog-to-digital converters (ADCs) introduce significant overhead in CIM inference design, the research first focuses on circuit optimization for inference acceleration and proposes a resistive random access memory (RRAM) based ADC-free in-memory compute scheme. We comprehensively explore the trade-offs involving different types of ADCs and investigate a new ADC design especially suited for CIM, which performs the analog shift-add for multiple weight significance bits, improving the throughput and energy efficiency under similar area constraints. Furthermore, we prototype an ADC-free CIM inference chip design with a fully-analog data processing manner between sub-arrays, which can significantly improve the hardware performance over conventional CIM designs and achieve near-software classification accuracy on the ImageNet and CIFAR-10/-100 datasets. Secondly, the research focuses on hardware support for CIM on-chip training. To maximize hardware reuse of the CIM weight stationary dataflow, we propose CIM training architectures with a transpose weight mapping strategy. The cell design and periphery circuitry are modified to efficiently support bi-directional compute. A novel solution for signed number multiplication is also proposed to handle the negative input in backpropagation. Finally, we propose an SRAM-based CIM training architecture and comprehensively explore the system-level hardware performance for DNN on-chip training based on silicon measurement results.
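The shift-add over weight significance bits can be sketched digitally. A dot product with B-bit weights decomposes into B binary bit-slice dot products (each the kind of operation a crossbar sub-array computes), recombined by shifting each partial sum by its bit significance. The thesis performs this recombination in the analog domain between sub-arrays; this pure-Python illustration only shows the arithmetic principle.

```python
# Shift-add principle behind bit-sliced CIM: decompose a dot product
# with B-bit unsigned weights into B binary bit-plane dot products,
# then recombine partial sums weighted by 2^b. (Digital illustration
# of an operation the thesis carries out in analog circuitry.)

def shift_add_dot(inputs, weights, bits=4):
    total = 0
    for b in range(bits):
        # Extract the 0/1 plane of the b-th weight bit (one crossbar slice)
        bit_plane = [(w >> b) & 1 for w in weights]
        partial = sum(x * wb for x, wb in zip(inputs, bit_plane))
        total += partial << b  # shift by bit significance, then accumulate
    return total

inputs = [3, 1, 4, 1]
weights = [5, 9, 2, 6]  # 4-bit unsigned weights

# The bit-sliced result matches the ordinary dot product exactly
assert shift_add_dot(inputs, weights) == sum(x * w for x, w in zip(inputs, weights))
```

Because each bit-slice partial sum is a low-precision quantity, combining them in the analog domain before any conversion is what lets the design drop per-column ADCs while preserving the exact multi-bit result.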