
    Investigating the dynamics of Greenland's glacier-fjord systems

    Over the past two decades, Greenland’s tidewater glaciers have dramatically retreated, thinned and accelerated, contributing significantly to sea level rise. This change in glacier behaviour is thought to have been triggered by increasing atmospheric and ocean temperatures, and mass loss from Greenland’s tidewater glaciers is predicted to continue this century. Substantial research during this period of rapid glacier change has improved our understanding of Greenland’s glacier-fjord systems. However, many of the processes operating in these systems that ultimately control the response of tidewater glaciers to changing atmospheric and oceanic conditions are poorly understood. This thesis combines modelling and remote sensing to investigate two particularly poorly-understood components of glacier-fjord systems, with the ultimate aim of improving understanding of recent glacier behaviour and constraining the stability of the ice sheet in a changing climate. The research presented in this thesis begins with an investigation into the dominant controls on the seasonal dynamics of contrasting tidewater glaciers draining the Greenland Ice Sheet. To do this, high resolution estimates of ice velocity were generated and compared with detailed observations and modelling of the principal controls on seasonal glacier flow, including terminus position, ice mélange presence or absence, ice sheet surface melting and runoff, and plume presence or absence. These data revealed characteristic seasonal and shorter-term changes in ice velocity at each of the study glaciers in more detail than was available from previous remote sensing studies. Of all the environmental controls examined, seasonal evolution of subglacial hydrology (as inferred from plume observations and modelling) was best able to explain the observed ice flow variations, despite differences in geometry and flow of the study glaciers.
The inferred relationships between subglacial hydrology and ice dynamics were furthermore entirely consistent with process-understanding developed at land-terminating sectors of the ice sheet. This investigation provides a more detailed understanding of tidewater glacier subglacial hydrology and its interaction with ice dynamics than was previously available and suggests that interannual variations in meltwater supply may have limited influence on annually averaged ice velocity. The thesis then shifts its attention from the glacier part of the system into the fjords, focusing on the interaction between icebergs, fjord circulation and fjord water properties. This focus on icebergs is motivated by recent research revealing that freshwater produced by iceberg melting constitutes an important component of fjord freshwater budgets, yet the impact of this freshwater on fjords was unknown. To investigate this, a new model for iceberg-ocean interaction is developed and incorporated into an ocean circulation model. This new model is first applied to Sermilik Fjord — a large fjord in east Greenland that hosts Helheim Glacier, one of the largest tidewater glaciers draining the ice sheet — to further constrain iceberg freshwater production and to quantify the influence of iceberg melting on fjord circulation and water properties. These investigations reveal that iceberg freshwater flux increases with ice sheet runoff raised to the power ~0.1 and ranges from ~500-2500 m³ s⁻¹ during summer, with ~40% of that produced below the pycnocline. It is also shown that icebergs substantially modify the temperature and velocity structure of Sermilik Fjord, causing 1-5°C cooling in the upper ~100 m and invigorating fjord circulation, which in turn causes a 10-40% increase in oceanic heat flux towards Helheim Glacier.
This research highlights the important role of icebergs in Greenland’s iceberg-congested fjords and therefore the need to include them in future studies examining ice sheet – ocean interaction. Having investigated the effect of icebergs on fjord circulation in a realistic setting, this thesis then characterises the effect of submarine iceberg melting on water properties near the ice sheet – ocean interface by applying the new model to a range of idealised scenarios. This near-glacier region is crucial for constraining ocean-driven retreat of tidewater glaciers, but is poorly understood. The simulations show that icebergs are important modifiers of glacier-adjacent water properties, generally acting to reduce vertical variations in water temperature. The iceberg-induced temperature changes will generally increase submarine melt rates at mid-depth and decrease rates at the surface, with less pronounced effects at greater depth. This highlights another mechanism by which iceberg melting can affect ice sheet – ocean interaction and emphasises the need to account for iceberg-ocean interaction when simulating ocean-driven retreat of Greenland’s tidewater glaciers. In summary, this thesis has helped to provide a deeper understanding of two poorly-understood components of Greenland’s tidewater glacier-fjord systems: (i) interactions between subglacial hydrology and ice velocity, and (ii) iceberg-ocean interaction. This research has enabled more precise interpretations of past glacier behaviour and can be used to inform model development that will help constrain future ice sheet mass loss in response to a changing climate.
"I must express my gratitude to the University of St Andrews and to the Scottish Alliance for Geoscience, Environment and Society (SAGES) for funding and supporting me as a research student." -- Funding
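The reported scaling of iceberg freshwater flux with runoff can be illustrated with a minimal sketch. The power-law form and the coefficient below are illustrative assumptions chosen to match the reported ~0.1 exponent and the ~500-2500 m³ s⁻¹ summer range; they are not the thesis model itself:

```python
# Illustrative sketch (not the thesis model): a weak power law of the form
# Q_iceberg = a * runoff**0.1, mirroring the reported ~0.1 exponent.
# The coefficient `a` is hypothetical, picked so summer fluxes fall in the
# reported ~500-2500 m^3 s^-1 range.

def iceberg_freshwater_flux(runoff_m3s: float, a: float = 1200.0) -> float:
    """Iceberg freshwater flux (m^3 s^-1) as a weak power law of runoff."""
    return a * runoff_m3s ** 0.1
```

The weak exponent is the key point: a tenfold increase in runoff raises the iceberg freshwater flux by only a factor of 10^0.1 ≈ 1.26, consistent with the thesis's suggestion that flux is relatively insensitive to interannual runoff variability.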

    Deep generative models for network data synthesis and monitoring

    Measurement and monitoring are fundamental tasks in all networks, enabling downstream management and optimization of the network. Although networks inherently generate abundant monitoring data, accessing and effectively measuring that data is another story. The challenges exist in many aspects. First, network monitoring data is inaccessible to external users, and it is hard to provide a high-fidelity dataset without leaking commercially sensitive information. Second, effective data collection covering a large-scale network system can be very expensive, given the growing size of networks, e.g., the number of cells in a radio network or the number of flows in an Internet Service Provider (ISP) network. Third, it is difficult to ensure fidelity and efficiency simultaneously in network monitoring, as the resources available in network elements to support measurement functions are too limited to implement sophisticated mechanisms. Finally, understanding and explaining the behavior of the network becomes challenging due to its size and complex structure. Various emerging optimization-based solutions (e.g., compressive sensing) and data-driven solutions (e.g., deep learning) have been proposed for the aforementioned challenges. However, the fidelity and efficiency of existing methods cannot yet meet current network requirements. The contributions made in this thesis significantly advance the state of the art in network measurement and monitoring techniques. Throughout the thesis, we leverage cutting-edge machine learning technology: deep generative modeling. First, we design and realize APPSHOT, an efficient city-scale network traffic sharing system built on a conditional generative model, which only requires open-source contextual data during inference (e.g., land use information and population distribution).
Second, we develop an efficient drive testing system, GENDT, based on a generative model, which combines graph neural networks, conditional generation, and quantified model uncertainty to enhance the efficiency of mobile drive testing. Third, we design and implement DISTILGAN, a high-fidelity, efficient, versatile, and real-time network telemetry system with latent GANs and spectral-temporal networks. Finally, we propose SPOTLIGHT, an accurate, explainable, and efficient anomaly detection system for the Open RAN (Radio Access Network). The lessons learned through this research are summarized, and interesting topics are discussed for future work in this domain. All proposed solutions have been evaluated with real-world datasets and applied to support different applications in real systems.

    Converging organoids and extracellular matrix: New insights into liver cancer biology

    Primary liver cancer, consisting primarily of hepatocellular carcinoma (HCC) and cholangiocarcinoma (CCA), is a heterogeneous malignancy with a dismal prognosis, making it the third leading cause of cancer mortality worldwide [1, 2]. It is characterized by unique histological features, late-stage diagnosis, a highly variable mutational landscape, and high levels of heterogeneity in biology and etiology [3-5]. Treatment options are limited, with surgical intervention the main curative option, although it is not available for the majority of patients, who are diagnosed at an advanced stage. Major contributing factors to the complexity and limited treatment options are the interactions between primary tumor cells, non-neoplastic stromal and immune cells, and the extracellular matrix (ECM). ECM dysregulation plays a prominent role in multiple facets of liver cancer, including initiation and progression [6, 7]. HCC often develops in already damaged environments containing large areas of inflammation and fibrosis, while CCA is commonly characterized by significant desmoplasia, extensive formation of connective tissue surrounding the tumor [8, 9]. Thus, to gain a better understanding of liver cancer biology, sophisticated in vitro tumor models need to comprehensively incorporate the various aspects that together dictate liver cancer progression. Therefore, the aim of this thesis is to create in vitro liver cancer models through organoid technology approaches, allowing for novel insights into liver cancer biology and, in turn, providing potential avenues for therapeutic testing. To model primary epithelial liver cancer cells, organoid technology is employed in part I. To study and characterize the role of ECM in liver cancer, decellularization of tumor tissue, adjacent liver tissue, and distant metastatic organs (i.e.
lung and lymph node) is described, characterized, and combined with organoid technology to create improved tissue-engineered models for liver cancer in part II of this thesis. Chapter 1 provides a brief introduction to the concepts of liver cancer, cellular heterogeneity, decellularization and organoid technology. It also explains the rationale behind the work presented in this thesis. In-depth analysis of organoid technology, contrasting it with different in vitro cell culture systems employed for liver cancer modeling, is done in chapter 2. Reliable establishment of liver cancer organoids is crucial for advancing translational applications of organoids, such as personalized medicine. Therefore, as described in chapter 3, a multi-center analysis was performed on the establishment of liver cancer organoids. This revealed a global establishment efficiency rate of 28.2% (19.3% for hepatocellular carcinoma organoids (HCCO) and 36% for cholangiocarcinoma organoids (CCAO)). Additionally, potential solutions and future perspectives for increasing establishment are provided. Liver cancer organoids consist of solely primary epithelial tumor cells. To engineer an in vitro tumor model with the possibility of immunotherapy testing, CCAO were combined with immune cells in chapter 4. Co-culture of CCAO with peripheral blood mononuclear cells and/or allogenic T cells revealed an effective anti-tumor immune response, with distinct interpatient heterogeneity. These cytotoxic effects were mediated by cell-cell contact and release of soluble factors, albeit indirect killing through soluble factors was only observed in one organoid line. Thus, this model provided a first step towards developing immunotherapy for CCA on an individual patient level. Personalized medicine success is dependent on an organoid's ability to recapitulate patient tissue faithfully.
Therefore, in chapter 5 a novel organoid system was created in which branching morphogenesis was induced in cholangiocyte and CCA organoids. Branching cholangiocyte organoids self-organized into tubular structures with high similarity to primary cholangiocytes, based on single-cell sequencing and functionality. Similarly, branching CCAO obtained a different morphology in vitro, more similar to primary tumors. Moreover, these branching CCAO have a higher correlation to the transcriptomic profile of patient-paired tumor tissue and an increased drug resistance to gemcitabine and cisplatin, the standard chemotherapy regimen for CCA patients in the clinic. As discussed, CCAO represent the epithelial compartment of CCA. Proliferation, invasion, and metastasis of epithelial tumor cells is highly influenced by the interaction with their cellular and extracellular environment. The remodeling of various properties of the extracellular matrix (ECM), including stiffness, composition, alignment, and integrity, influences tumor progression. In chapter 6 the alterations of the ECM in solid tumors and the translational impact of our increased understanding of these alterations are discussed. The success of ECM-related cancer therapy development requires an intimate understanding of the malignancy-induced changes to the ECM. This principle was applied to liver cancer in chapter 7, whereby through an integrative molecular and mechanical approach the dysregulation of liver cancer ECM was characterized. An optimized agitation-based decellularization protocol was established for primary liver cancer (HCC and CCA) and paired adjacent tissue (HCC-ADJ and CCA-ADJ). Novel malignancy-related ECM protein signatures were found, which were previously overlooked in liver cancer transcriptomic data. Additionally, the mechanical characteristics were probed, which revealed divergent macro- and micro-scale mechanical properties and a higher alignment of collagen in CCA.
This study provided a better understanding of ECM alterations during liver cancer as well as a potential scaffold for culture of organoids. This was applied to CCA in chapter 8 by combining decellularized CCA tumor ECM and tumor-free liver ECM with CCAO to study cell-matrix interactions. Culture of CCAO in tumor ECM resulted in a transcriptome closely resembling in vivo patient tumor tissue, and was accompanied by an increase in chemoresistance. In tumor-free liver ECM, devoid of desmoplasia, CCAO initiated a desmoplastic reaction through increased collagen production. If desmoplasia was already present, distinct ECM proteins were produced by the organoids. These were tumor-related proteins associated with poor patient survival. To extend this method of studying cell-matrix interactions to a metastatic setting, lung and lymph node tissue was decellularized and recellularized with CCAO in chapter 9, as these are common locations of metastasis in CCA. Decellularization resulted in removal of cells while preserving ECM structure and protein composition, linked to tissue-specific functioning hallmarks. Recellularization revealed that lung and lymph node ECM induced different gene expression profiles in the organoids, related to cancer stem cell phenotype, cell-ECM integrin binding, and epithelial-to-mesenchymal transition. Furthermore, the metabolic activity of CCAO in lung and lymph node was significantly influenced by the metastatic location, the original characteristics of the patient tumor, and the donor of the target organ. The previously described in vitro tumor models utilized decellularized scaffolds with native structure. Decellularized ECM can also be used for creation of tissue-specific hydrogels through digestion and gelation procedures. These hydrogels were created from both porcine and human livers in chapter 10.
The liver ECM-based hydrogels were used to initiate and culture healthy cholangiocyte organoids, which maintained cholangiocyte marker expression, thus providing an alternative for initiation of organoids in BME. Building upon this, in chapter 11 human liver ECM-based extracts were used in combination with a one-step microfluidic encapsulation method to produce size-standardized CCAO. The established system can reduce the size variability conventionally seen in organoid culture by providing uniform scaffolding. Encapsulated CCAO retained their stem cell phenotype and were amenable to drug screening, showing the feasibility of scalable production of CCAO for high-throughput drug screening approaches. Lastly, chapter 12 provides a global discussion and future outlook on tumor tissue engineering strategies for liver cancer, using organoid technology and decellularization. Combining multiple aspects of liver cancer, both cellular and extracellular, with tissue engineering strategies provides advanced tumor models that can delineate fundamental mechanistic insights as well as provide a platform for drug screening approaches.

    Natural and Technological Hazards in Urban Areas

    Natural hazard events and technological accidents are separate causes of environmental impacts. Natural hazards are physical phenomena active over geological timescales, whereas technological hazards result from actions or facilities created by humans. In recent times, combined natural and man-made hazards have emerged. Overpopulation and urban development in areas prone to natural hazards increase the impact of natural disasters worldwide. Additionally, urban areas are frequently characterized by intense industrial activity and rapid, poorly planned growth that threatens the environment and degrades the quality of life. Therefore, proper urban planning is crucial to minimize fatalities and reduce the environmental and economic impacts that accompany both natural and technological hazardous events.

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Enabling Deep Neural Network Inferences on Resource-constraint Devices

    Department of Computer Science and Engineering
    While deep neural networks (DNN) are widely used on various devices, including resource-constrained devices such as IoT, AR/VR, and mobile devices, running DNN on resource-constrained devices remains challenging. There exist three approaches for DNN inferences on resource-constrained devices: 1) lightweight DNN for on-device computing, 2) offloading DNN inferences to a cloud server, and 3) split computing to utilize computation and network resources efficiently. Designing a lightweight DNN without compromising its accuracy is challenging due to the trade-off between latency and accuracy, whereby more computation is required to achieve higher accuracy. One solution to overcome this challenge is pre-processing, which extracts and transfers helpful information to achieve high DNN accuracy. We design the pre-processing, which consists of three processes. The first process is finding the best input source. The second is input-processing, which extracts the information important for DNN inferences from the whole information gained from the input source. The last is choosing or designing a suitable lightweight DNN for the processed input. As an instance of how to apply the pre-processing, in Sec 2 we present a new transportation mode recognition system for smartphones called DeepVehicleSense, which aims at achieving three performance objectives at once: high accuracy, low latency, and low power consumption, by exploiting sound characteristics captured from the built-in microphone while on candidate transportation modes. To achieve high accuracy and low latency, DeepVehicleSense makes use of non-linear filters that can best extract the transportation sound samples. For the recognition of five different transportation modes, we design a deep learning-based sound classifier using a novel deep neural network architecture with multiple branches.
Our staged inference technique can significantly reduce runtime and energy consumption while maintaining high accuracy for the majority of samples. Offloading DNN inferences to a server is another solution for DNN inferences on resource-constrained devices, but data transmission introduces latency. To reduce transmission latency, recent studies have tried to make the offloading process more efficient by compressing the data to be offloaded. However, conventional compression techniques are designed for human perception: they compress data so that it can be restored to look like the original to human eyes. As a result, data compressed with such techniques contains redundancy beyond the information necessary for DNN inference. In other words, the most fundamental question, how to extract and offload the minimal amount of necessary information without degrading inference accuracy, has remained unanswered. To answer this question, in Sec 3 we call such ideal offloading semantic offloading and propose N-epitomizer, a new offloading framework that enables semantic offloading, thus achieving more reliable and timely inferences over highly fluctuating or even low-bandwidth wireless networks. To realize N-epitomizer, we design an autoencoder-based scalable encoder trained to extract the most informative data and scale its output size to meet the latency and accuracy requirements of inferences over a network. Even though our proposed lightweight DNN and offloading framework with the essential information extractor achieve low latency while preserving DNN performance, they alone cannot realize latency-guaranteed DNN inferences.
To realize latency-guaranteed DNN inferences, the computational complexity of the lightweight DNN and the compression performance of the encoder for offloading should be adaptively selected according to current computation resources and network conditions, by exploiting the DNN's trade-off between computational complexity and performance and the encoder's trade-off between compression performance and DNN performance. To this end, we propose a new framework for latency-guaranteed DNN inferences called LG-DI, which predicts DNN performance degradation given a latency budget in advance and selects the better method between the lightweight DNN and offloading with compression. As a result, our proposed framework for DNN inferences can guarantee latency regardless of changes in computation and network resources while maintaining DNN performance as much as possible.
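The staged-inference idea described above (run a cheap model first and fall back to the full DNN only when the cheap stage is uncertain) can be sketched as follows. The confidence threshold and the model interfaces are hypothetical illustrations, not the thesis's actual implementation:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def staged_inference(x, cheap_model, full_model, threshold=0.9):
    """Run the lightweight stage first; invoke the full model only when the
    cheap stage's top-class probability falls below `threshold`.
    Returns (predicted_class, stage_used)."""
    probs = softmax(cheap_model(x))
    if max(probs) >= threshold:
        return probs.index(max(probs)), "cheap"
    probs = softmax(full_model(x))
    return probs.index(max(probs)), "full"
```

Because most samples are handled by the cheap stage, average runtime and energy drop while accuracy is preserved for the hard samples that reach the full model, which matches the trade-off the abstract describes.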

    Investigating Deep Learning Model Calibration for Classification Problems in Mechanics

    Recently, there has been a growing interest in applying machine learning methods to problems in engineering mechanics. In particular, there has been significant interest in applying deep learning techniques to predicting the mechanical behavior of heterogeneous materials and structures. Researchers have shown that deep learning methods are able to effectively predict mechanical behavior with low error for systems ranging from engineered composites, to geometrically complex metamaterials, to heterogeneous biological tissue. However, there has been comparatively little attention paid to deep learning model calibration, i.e., the match between predicted probabilities of outcomes and the true probabilities of outcomes. In this work, we perform a comprehensive investigation into ML model calibration across seven open access engineering mechanics datasets that cover three distinct types of mechanical problems. Specifically, we evaluate both model error and model calibration error for multiple machine learning methods, and investigate the influence of ensemble averaging and post hoc model calibration via temperature scaling. Overall, we find that ensemble averaging of deep neural networks is both an effective and consistent tool for improving model calibration, while temperature scaling has comparatively limited benefits. Looking forward, we anticipate that this investigation will lay the foundation for future work in developing mechanics-specific approaches to deep learning model calibration.
    Comment: 21 pages, 9 figures
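Temperature scaling, the post hoc calibration method evaluated in this work, divides a model's logits by a single scalar T fitted to minimize negative log-likelihood on held-out data (T > 1 softens overconfident predictions). A minimal sketch follows; it uses a simple grid search, whereas implementations typically fit T with an LBFGS optimizer:

```python
import math

def nll(logits_batch, labels, T):
    """Average negative log-likelihood of the temperature-scaled softmax."""
    total = 0.0
    for logits, y in zip(logits_batch, labels):
        scaled = [z / T for z in logits]
        m = max(scaled)
        # log-sum-exp computed stably
        log_sum = m + math.log(sum(math.exp(z - m) for z in scaled))
        total += log_sum - scaled[y]
    return total / len(labels)

def fit_temperature(logits_batch, labels, grid=None):
    """Pick the T that minimizes validation NLL via grid search over 0.5..5.0."""
    grid = grid or [0.5 + 0.05 * i for i in range(91)]
    return min(grid, key=lambda T: nll(logits_batch, labels, T))
```

Note that dividing all logits by the same T never changes the arg-max, so temperature scaling improves calibration without affecting classification accuracy, one reason it is a common baseline in calibration studies like this one.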

    Vibration-based damage localisation: Impulse response identification and model updating methods

    Structural health monitoring has gained increasing interest over the recent decades. As the technology has matured and monitoring systems are employed commercially, the development of more powerful and precise methods is the logical next step in this field. Vibration sensor networks with few measurement points, combined with the utilisation of ambient vibration sources, are especially attractive for practical applications, as this approach promises to be cost-effective while requiring minimal modification to the monitored structures. Since efficient methods for damage detection have already been developed for such sensor networks, the research focus shifts towards extracting more information from the measurement data, in particular the localisation and quantification of damage. Two main concepts have produced promising results for damage localisation. The first approach involves a mechanical model of the structure, which is used in a model updating scheme to find the damaged areas of the structure. The second is a purely data-driven approach, which relies on residuals of vibration estimations to find regions where damage is probable. While much research has been conducted following these two concepts, different approaches are rarely directly compared using the same data sets. Therefore, this thesis presents advanced methods for vibration-based damage localisation using model updating as well as a data-driven method, and provides a direct comparison using the same vibration measurement data. The model updating approach presented in this thesis relies on multiobjective optimisation. Hence, the applied numerical optimisation algorithms are presented first. On this basis, the model updating parameterisation and objective function formulation are developed. The data-driven approach employs residuals from vibration estimations obtained using multiple-input finite impulse response filters.
Both approaches are then verified using a simulated cantilever beam considering multiple damage scenarios. Finally, experimentally obtained data from an outdoor girder mast structure is used to validate the approaches. In summary, this thesis provides an assessment of model updating and residual-based damage localisation by means of verification and validation cases. It is found that the residual-based method exhibits numerical performance sufficient for real-time applications while providing a high sensitivity towards damage. However, the localisation accuracy is found to be superior using the model updating method.
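The residual-based concept (estimate one sensor's response from other measurements and flag locations where the estimation error grows) can be illustrated with a deliberately reduced stand-in: a single-tap least-squares estimator rather than the multiple-input finite impulse response filters used in the thesis. The signals and the "damage" below are synthetic assumptions for illustration only:

```python
def fit_gain(x, y):
    """Least-squares gain `a` for the one-tap estimator y_hat[n] = a * x[n],
    fitted on reference (healthy-state) measurement data.
    A reduced stand-in for a multiple-input FIR filter."""
    num = sum(xi * yi for xi, yi in zip(x, y))
    den = sum(xi * xi for xi in x)
    return num / den

def residual_rms(x, y, a):
    """RMS of the estimation residual y - a*x; damage changes the structure's
    transfer behaviour, so this residual grows in affected regions."""
    n = len(x)
    return (sum((yi - a * xi) ** 2 for xi, yi in zip(x, y)) / n) ** 0.5
```

The filter is trained once on healthy-state data; in operation, a rising residual RMS at a sensor pair indicates that the transfer path between them has changed, which is the basis for localising probable damage.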