
    Flood dynamics derived from video remote sensing

    Flooding is by far the most pervasive natural hazard, with the human impacts of floods expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models. Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated using observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the assessment of the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast, high-resolution video datasets. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence, is enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights from datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high-resolution topographic data.
In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter, where river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model. Harnessing the ability of deep neural networks to learn complex features and deliver accurate and contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is exhibited. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographical data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which are then used to estimate river discharge. Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications in the domain of flood modelling science.
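The image-velocimetry step at the heart of both the UAV and satellite chapters can be illustrated with a minimal sketch: LSPIV-style methods track surface patterns between successive frames by cross-correlating small interrogation windows. The NumPy example below is an illustrative toy; the window size, search radius, and scaling factors are assumptions for illustration, not the thesis's actual configuration.

```python
import numpy as np

def piv_displacement(frame_a, frame_b, win=32, search=8):
    """Estimate per-window pixel displacement between two frames via
    normalized cross-correlation, the core idea behind LSPIV."""
    h, w = frame_a.shape
    vectors = []
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            tpl = frame_a[y:y + win, x:x + win].astype(float)
            tpl -= tpl.mean()
            best_score, best_shift = -np.inf, (0, 0)
            # Exhaustively test shifts within the search radius.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + win > h or xx + win > w:
                        continue
                    cand = frame_b[yy:yy + win, xx:xx + win].astype(float)
                    cand -= cand.mean()
                    denom = np.sqrt((tpl ** 2).sum() * (cand ** 2).sum())
                    score = (tpl * cand).sum() / denom if denom else 0.0
                    if score > best_score:
                        best_score, best_shift = score, (dx, dy)
            vectors.append((x, y, *best_shift))
    # Pixel displacements; multiply by (ground sample distance / frame
    # interval) to convert to surface velocities in m/s.
    return vectors
```

In practice, velocities obtained this way over a surveyed cross-section, combined with bathymetry, feed a velocity-area discharge estimate.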

    AI Lifecycle Zero-Touch Orchestration within the Edge-to-Cloud Continuum for Industry 5.0

    Industry 5.0 is a new phase of industrialization that places the worker at the center of the production process and uses new technologies to increase prosperity beyond jobs and growth. Human-centered artificial intelligence (HCAI) systems are central to this vision: they pursue objectives that were unreachable by either humans or machines alone, but this also comes with a new set of challenges. Our proposed method addresses these challenges through the knowlEdge architecture, which enables human operators to implement AI solutions using a zero-touch framework. It relies on containerized AI model training and execution, supported by a robust data pipeline and rounded off with human feedback and evaluation interfaces. The result is a platform built from a number of components spanning all major areas of the AI lifecycle. We outline both the architectural concepts and implementation guidelines and explain how they advance HCAI systems and Industry 5.0. In this article, we address the problems we encountered while implementing these ideas within the edge-to-cloud continuum. Further improvements to our approach may enhance the use of AI in Industry 5.0 and strengthen trust in AI systems.

    Saturn: An Optimized Data System for Large Model Deep Learning Workloads

    Large language models such as GPT-3 & ChatGPT have transformed deep learning (DL), powering applications that have captured the public's imagination. These models are rapidly being adopted across domains for analytics on various modalities, often by finetuning pre-trained base models. Such models need multiple GPUs due to both their size and computational load, driving the development of a bevy of "model parallelism" techniques & tools. Navigating such parallelism choices, however, is a new burden for end users of DL such as data scientists, domain scientists, etc. who may lack the necessary systems know-how. The need for model selection, which leads to many models to train due to hyper-parameter tuning or layer-wise finetuning, compounds the situation with two more burdens: resource apportioning and scheduling. In this work, we tackle these three burdens for DL users in a unified manner by formalizing them as a joint problem that we call SPASE: Select a Parallelism, Allocate resources, and SchedulE. We propose a new information system architecture to tackle the SPASE problem holistically, representing a key step toward enabling wider adoption of large DL models. We devise an extensible template for existing parallelism schemes and combine it with an automated empirical profiler for runtime estimation. We then formulate SPASE as an MILP. We find that direct use of an MILP-solver is significantly more effective than several baseline heuristics. We optimize the system runtime further with an introspective scheduling approach. We implement all these techniques into a new data system we call Saturn. Experiments with benchmark DL workloads show that Saturn achieves 39-49% lower model selection runtimes than typical current DL practice. Comment: Under submission at VLDB. Code available: https://github.com/knagrecha/saturn. 12 pages + 3 pages references + 2 pages appendix
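Since Saturn formulates SPASE as an MILP, a tiny brute-force analogue helps convey the joint structure of the problem: each model must pick one (parallelism, GPU count) configuration with a profiled runtime, and the chosen jobs must be scheduled on a shared cluster to minimize makespan. Everything below (model names, runtimes, the greedy list scheduler) is a hypothetical stand-in for illustration; the actual system uses an automated empirical profiler and an MILP solver.

```python
from itertools import product

# Hypothetical profiled runtimes in hours: (model, parallelism, n_gpus) -> time.
PROFILE = {
    ("bert", "ddp", 2): 6.0,  ("bert", "fsdp", 4): 3.5,
    ("gpt2", "ddp", 4): 8.0,  ("gpt2", "pipeline", 8): 4.5,
    ("t5",   "fsdp", 2): 5.0, ("t5",   "pipeline", 4): 3.0,
}
CLUSTER_GPUS = 8

def makespan(jobs):
    """Greedy list schedule of (gpus, runtime) jobs, widest jobs first:
    each job starts as soon as enough GPUs are free."""
    jobs = sorted(jobs, reverse=True)
    running = []            # list of (finish_time, gpus_held)
    now, free, end = 0.0, CLUSTER_GPUS, 0.0
    for gpus, rt in jobs:
        while free < gpus:  # wait for the earliest-finishing job to release GPUs
            running.sort()
            t, g = running.pop(0)
            now, free = max(now, t), free + g
        running.append((now + rt, gpus))
        free -= gpus
        end = max(end, now + rt)
    return end

def solve_spase():
    """Enumerate every per-model configuration (the 'Select' and 'Allocate'
    choices) and score each with the scheduler (the 'SchedulE' part)."""
    per_model = {}
    for (m, p, g), t in PROFILE.items():
        per_model.setdefault(m, []).append((p, g, t))
    best = None
    for combo in product(*per_model.values()):
        jobs = [(g, t) for (_, g, t) in combo]
        ms = makespan(jobs)
        if best is None or ms < best[0]:
            best = (ms, combo)
    return best
```

The exhaustive search is exponential in the number of models, which is exactly why casting the joint problem as an MILP and handing it to a solver scales better than heuristics.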

    Engineering Systems of Anti-Repressors for Next-Generation Transcriptional Programming

    The ability to control gene expression in more precise, complex, and robust ways is becoming increasingly relevant in biotechnology and medicine. Synthetic biology has sought to accomplish such higher-order gene regulation through the engineering of synthetic gene circuits, whereby a gene’s expression can be controlled via environmental, temporal, or cellular cues. A typical approach to gene regulation is through transcriptional control, using allosteric transcription factors (TFs). TFs are regulatory proteins that interact with operator DNA elements located in proximity to gene promoters to either compromise or activate transcription. For many TFs, including the ones discussed here, this interaction is modulated by binding to a small molecule ligand for which the TF evolved natural specificity and a related metabolism. This modulation can occur with two main phenotypes: a TF shows the repressor (X+) phenotype if its binding to the ligand causes it to dissociate from the DNA, allowing transcription, while a TF shows the anti-repressor (XA) phenotype if its binding to the ligand causes it to associate to the DNA, preventing transcription. While both functional phenotypes are vital components of regulatory gene networks, anti-repressors are quite rare in nature compared to repressors and thus must be engineered. We first developed a generalized workflow for engineering systems of anti-repressors from bacterial TFs in a family of transcription factors related to the ubiquitous lactose repressor (LacI), the LacI/GalR family. Using this workflow, which is based on a re-routing of the TF’s allosteric network, we engineered anti-repressors in the fructose repressor (anti-FruR – responsive to fructose-1,6-phosphate) and ribose repressor (anti-RbsR – responsive to D-ribose) scaffolds, to complement XA TFs engineered previously in the LacI scaffold (anti-LacI – responsive to IPTG). Engineered TFs were then conferred with alternate DNA-binding specificities.
To demonstrate their utility in synthetic gene circuits, systems of engineered TFs were then deployed to construct transcriptional programs, achieving all of the NOT-oriented Boolean logical operations – NOT, NOR, NAND, and XNOR – in addition to BUFFER and AND. Notably, our gene circuits built using anti-repressors are far simpler in design and, therefore, exert decreased burden on the chassis cells compared to the state-of-the-art, as anti-repressors represent compressed logical operations (gates). Further, we extended this workflow to engineer ligand specificity in addition to regulatory phenotype. Performing the engineering workflow with a fourth member of the LacI/GalR family, the galactose isorepressor (GalS – naturally responsive to D-fucose), we engineered IPTG-responsive repressor and anti-repressor GalS mutants in addition to a D-fucose responsive anti-GalS TF. These engineered TFs were then used to create BANDPASS and BANDSTOP biological signal processing filters, themselves compressed compared to the state-of-the-art, and open-loop control systems. These provided facile methods for dynamically turning genes ‘ON’ and ‘OFF’ during continuous growth in real time. This presents a general advance in gene regulation, moving beyond simple inducible promoters. We then demonstrated the capabilities of our engineered TFs to function in combinatorial logic using a layered logic approach, which currently stands as the state-of-the-art. Using our anti-repressors in layered logic had the advantage of reducing cellular metabolic burden, as we were able to create the fundamental NOT/NOR operations with fewer genetic parts. Additionally, we created more TFs to use in layered logic approaches to prevent cellular cross-talk and minimize the number of TFs necessary to create these gene circuits. Here we demonstrated the successful deployment of our XA-built NOR gate system to create the BUFFER, NOT, NOR, OR, AND, and NAND gates.
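The repressor versus anti-repressor phenotypes described above can be captured as simple truth functions, which makes clear why anti-repressors compress logic: a single XA part already implements NOT, and two of them acting on one promoter give NOR. The sketch below is a schematic abstraction of the gate behaviour, not a biological model.

```python
def repressor(ligand: bool) -> bool:
    """X+ phenotype: ligand binding releases the TF from DNA, allowing
    transcription. As a gate, the promoter output is a BUFFER of the ligand."""
    return ligand

def anti_repressor(ligand: bool) -> bool:
    """XA phenotype: ligand binding drives the TF onto DNA, blocking
    transcription. A single anti-repressor thus implements NOT in one part."""
    return not ligand

def nor_gate(ligand_a: bool, ligand_b: bool) -> bool:
    """Two anti-repressors with distinct ligands targeting one promoter:
    transcription occurs only when neither TF is DNA-bound."""
    return anti_repressor(ligand_a) and anti_repressor(ligand_b)
```

Because NOR is functionally complete, circuits built from such compressed NOR parts can in principle realize any Boolean operation with fewer genetic components than layering separate inverters.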
The work presented here describes a workflow for engineering (i) allosteric phenotype, (ii) ligand selectivity, and (iii) DNA specificity in allosteric transcription factors. The products of the workflow themselves serve as vital tools for the construction of next-generation synthetic gene circuits and genetic regulatory devices. Further, from the products of the workflow presented here, certain design heuristics can be gleaned, which should better facilitate the design of allosteric TFs in the future, moving toward a semi-rational engineering approach. Additionally, the work presented here outlines a transcriptional programming structure and metrology which can be broadly adapted and scaled for future applications and expansion. Consequently, this thesis presents a means for advanced control of gene expression, with promise to have long-reaching implications in the future.

    Tradition and Innovation in Construction Project Management

    This book is a reprint of the Special Issue 'Tradition and Innovation in Construction Project Management' that was published in the journal Buildings.

    Modern meat: the next generation of meat from cells

    Modern Meat is the first textbook on cultivated meat, with contributions from over 100 experts within the cultivated meat community. The Sections of Modern Meat comprise 5 broad categories of cultivated meat: Context, Impact, Science, Society, and World. The 19 chapters of Modern Meat, spread across these 5 sections, provide detailed entries on cultivated meat. They extensively tour a range of topics including the impact of cultivated meat on humans and animals, the bioprocess of cultivated meat production, how cultivated meat may become a food option in Space and on Mars, and how cultivated meat may impact the economy, culture, and tradition of Asia.

    Artificial Intelligence for the Edge Computing Paradigm

    With modern technologies moving towards the Internet of Things, where seemingly every financial, private, commercial and medical transaction is carried out by portable and intelligent devices, Machine Learning has found its way into every possible smart device and application. However, Machine Learning cannot be used directly on the edge due to the limited capabilities of small, battery-powered modules. This thesis therefore provides light-weight automated Machine Learning models that are applied on a standard edge device, the Raspberry Pi: one framework aims to limit parameter tuning while automating feature extraction, and a second performs traditional Machine Learning classification on the edge and can additionally be used for image-based explainable Artificial Intelligence. A commercial Artificial Intelligence software package has also been ported to work in a client/server setup on the Raspberry Pi board, where it was incorporated into all of the Machine Learning frameworks presented in this thesis. This dissertation also introduces multiple algorithms that convert images into time-series for classification and explainability, as well as novel time-series feature extraction algorithms applied to biomedical data, and it introduces the concept of the Activation Engine, a post-processing block that tunes Neural Networks without requiring particular experience in Machine Learning. Additionally, a tree-based method for multiclass classification is introduced that outperforms the One-to-Many approach while being less complex than the One-to-One method. The results presented in this thesis exhibit high accuracy when compared with the literature, while remaining efficient in terms of power consumption and inference time.
Additionally, the concepts, methods and algorithms introduced here are technically novel; they include:
• Feature extraction of professionally annotated and poorly annotated time-series.
• The introduction of the Activation Engine post-processing block.
• A model for global image explainability with inference on the edge.
• A tree-based algorithm for multiclass classification.
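The appeal of a tree-based multiclass decomposition over the One-to-One and One-to-Many schemes can be sketched by counting binary classifiers: a balanced class tree trains k-1 of them but evaluates only about log2(k) per prediction. The code below is an illustrative sketch of this idea under those assumptions, not the dissertation's actual algorithm.

```python
from math import ceil, comb, log2

def n_classifiers(k: int) -> dict:
    """Binary classifiers required for k-class classification under three
    decomposition schemes. The tree trains k-1 classifiers (its internal
    nodes) but runs only ceil(log2 k) of them per prediction."""
    return {
        "one_vs_rest": k,            # "One-to-Many": one classifier per class
        "one_vs_one": comb(k, 2),    # "One-to-One": one per pair of classes
        "tree_trained": k - 1,
        "tree_inference": ceil(log2(k)),
    }

def tree_predict(classes, decide):
    """Route down a balanced class tree: at each node a binary classifier
    ('decide', here an arbitrary callable) picks the left or right half of
    the remaining candidate classes."""
    while len(classes) > 1:
        mid = len(classes) // 2
        left, right = classes[:mid], classes[mid:]
        classes = left if decide(left, right) else right
    return classes[0]
```

For 8 classes, One-to-One needs 28 classifiers while the tree trains 7 and consults only 3 per inference, which matters on a power-constrained device like the Raspberry Pi.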

    Measuring the impact of COVID-19 on hospital care pathways

    Care pathways in hospitals around the world reported significant disruption during the recent COVID-19 pandemic, but measuring the actual impact is more problematic. Process mining can help hospital management measure the conformance of real-life care to what might be considered normal operations. In this study, we aim to demonstrate that process mining can be used to investigate process changes associated with complex disruptive events. We studied perturbations to accident and emergency (A&E) and maternity pathways in a UK public hospital during the COVID-19 pandemic. Coincidentally, the hospital had implemented a Command Centre approach for patient-flow management, affording an opportunity to study both the planned improvement and the disruption due to the pandemic. Our study proposes and demonstrates a method for measuring and investigating the impact of such planned and unplanned disruptions affecting hospital care pathways. We found that during the pandemic, both A&E and maternity pathways had measurable reductions in mean length of stay and a measurable drop in the percentage of pathways conforming to normative models. There were no distinctive patterns in the monthly mean values of length of stay or conformance throughout the phases of the installation of the hospital’s new Command Centre approach. Due to a deficit in the available A&E data, the findings for A&E pathways could not be interpreted.
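The conformance measurement underlying such a study can be sketched in miniature: encode the normative pathway as a set of allowed activity transitions, then compute the fraction of logged patient traces that fit it, before and during the disruption. The activities and normative model below are hypothetical placeholders, not the hospital's actual pathway models.

```python
# A toy normative pathway as an allowed-transition model (hypothetical
# activities standing in for a real A&E care pathway).
NORMATIVE = {
    "arrival":   {"triage"},
    "triage":    {"treatment"},
    "treatment": {"discharge", "admission"},
}
START, END = "arrival", {"discharge", "admission"}

def conforms(trace):
    """A trace fits the normative model if it starts and ends correctly and
    every consecutive pair of activities is an allowed transition."""
    if not trace or trace[0] != START or trace[-1] not in END:
        return False
    return all(b in NORMATIVE.get(a, set()) for a, b in zip(trace, trace[1:]))

def conformance_rate(log):
    """Fraction of traces in the event log that fit the normative model;
    comparing this rate across time periods quantifies the disruption."""
    return sum(map(conforms, log)) / len(log)
```

Full process mining toolkits compute richer fitness measures (for example token-based replay against a discovered Petri net), but the period-over-period comparison of a conformance statistic is the same idea.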