
    Robotic Information Gathering with Reinforcement Learning assisted by Domain Knowledge: an Application to Gas Source Localization

    Gas source localization tackles the problem of finding leakages of hazardous substances, such as poisonous gases or radiation, in the event of a disaster. To avoid threats to human operators, autonomous robots dispatched to localize potential gas sources are preferable. This work investigates a Reinforcement Learning framework that allows a robotic agent to learn how to localize gas sources. We propose a solution that assists Reinforcement Learning with existing domain knowledge based on a model of the gas dispersion process. In particular, we incorporate a priori domain knowledge by designing appropriate rewards and observation inputs for the Reinforcement Learning algorithm. We show that a robot trained with our proposed method outperforms state-of-the-art gas source localization strategies, as well as robots trained without additional domain knowledge. Furthermore, the framework developed in this work can be generalized to a large variety of information gathering tasks.
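    The reward-shaping idea can be sketched as follows. This is a minimal illustration, not the paper's actual reward: the weights, the terminal bonus, and the specific pieces of dispersion knowledge encoded (rising concentration and upwind motion) are all hypothetical.

```python
import math

def shaped_reward(conc, prev_conc, heading, wind_dir, found_source,
                  w_conc=1.0, w_wind=0.5):
    """Hypothetical shaped reward encoding gas-dispersion domain knowledge:
    rising concentration suggests progress, and sources lie upwind of
    detections because plumes advect downwind. All weights are illustrative."""
    if found_source:
        return 100.0  # terminal bonus for a correct source declaration
    # Reward increases in measured concentration along the trajectory.
    r = w_conc * (conc - prev_conc)
    # Reward alignment of the robot's heading with the upwind direction.
    upwind = (wind_dir + math.pi) % (2 * math.pi)
    r += w_wind * math.cos(heading - upwind)
    return r
```

    Observation inputs could be augmented in the same spirit, e.g. by feeding the agent a locally estimated wind vector alongside raw sensor readings.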

    Autonomous search of an airborne release in urban environments using informed tree planning

    The use of autonomous vehicles for chemical source localisation is a key enabling tool for disaster response teams to safely and efficiently deal with chemical emergencies. Whilst much work has been performed on source localisation using autonomous systems, most previous works have assumed an open environment or employed simplistic obstacle avoidance, separate from the estimation procedure. In this paper, we explore the coupling of the path planning task for both source term estimation and obstacle avoidance in a holistic framework. The proposed system intelligently produces potential gas sampling locations based on the current estimation of the wind field and the local map. Then a tree search is performed to generate paths toward the estimated source location that traverse around any obstacles and still allow for exploration of potentially superior sampling locations. The proposed informed tree planning algorithm is then tested against the Entrotaxis technique in a series of high-fidelity simulations. The proposed system is found to reduce source position error far more efficiently than Entrotaxis in a feature-rich environment, whilst also exhibiting vastly more consistent and robust results.
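    Information-driven selection of sampling locations, as used in Entrotaxis-style planners, can be illustrated with a generic expected-posterior-entropy criterion over a particle representation of the source term. This is a simplified scoring rule, not the exact criterion of either planner discussed above, and the detection likelihoods are hypothetical.

```python
import math

def entropy(w):
    """Shannon entropy of a normalized weight vector."""
    return -sum(x * math.log(x) for x in w if x > 0)

def normalize(w):
    s = sum(w)
    return [x / s for x in w]

def expected_posterior_entropy(weights, lik_detect):
    """Expected entropy of the source posterior after sampling at one
    candidate location. lik_detect[i] = P(detection | source at particle i);
    the binary-outcome marginal follows from the current weights."""
    p_det = sum(w * l for w, l in zip(weights, lik_detect))
    h = 0.0
    for outcome_p, lik in ((p_det, lik_detect),
                           (1 - p_det, [1 - l for l in lik_detect])):
        if outcome_p > 0:
            post = normalize([w * l for w, l in zip(weights, lik)])
            h += outcome_p * entropy(post)
    return h

# Choose the candidate location that most reduces expected posterior entropy.
weights = [0.25, 0.25, 0.25, 0.25]          # uniform prior over 4 particles
candidates = {
    "near_plume": [0.9, 0.6, 0.1, 0.05],    # hypothetical detection models
    "far_field": [0.2, 0.2, 0.2, 0.2],      # uninformative: same everywhere
}
best = min(candidates,
           key=lambda c: expected_posterior_entropy(weights, candidates[c]))
```

    A tree planner would evaluate such a score at the leaves of candidate obstacle-free paths rather than at isolated points.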

    Meshless Hemodynamics Modeling And Evolutionary Shape Optimization Of Bypass Grafts Anastomoses

    Objectives: The main objective of this dissertation is to establish a formal shape optimization procedure for a given bypass graft end-to-side distal anastomosis (ETSDA). The motivation is that most previous ETSDA shape optimization studies cited in the literature relied on direct optimization approaches that do not guarantee accurate optimization results. Three different ETSDA models are considered herein: the conventional, the Miller cuff, and the hood models. Materials and Methods: The ETSDA shape optimization is driven by three computational objects: a localized collocation meshless method (LCMM) solver, an automated geometry pre-processor, and a genetic-algorithm-based optimizer. The LCMM solver makes it convenient to set up an autonomous optimization mechanism for the ETSDA models. The task of the automated pre-processor is to randomly distribute solution points in the ETSDA geometries. The task of the optimizer is to adjust the ETSDA geometries so as to mitigate abnormal hemodynamic parameters. Results: The results reported in this dissertation entail the stabilization and validation of the LCMM solver in addition to the shape optimization of the considered ETSDA models. The LCMM stabilization results consist of validating a custom-designed upwinding scheme on different one-dimensional and two-dimensional test cases. The LCMM validation is done for incompressible steady and unsteady flow applications in the ETSDA models. The ETSDA shape optimization includes single-objective optimization results in steady flow situations and bi-objective optimization results in pulsatile flow situations. Conclusions: The LCMM solver provides verifiably accurate resolution of hemodynamics and is demonstrated to be third-order accurate in comparison to a benchmark analytical solution of the Navier-Stokes equations.
    The genetic-algorithm-based shape optimization approach proved to be very effective for the conventional and Miller cuff ETSDA models. The shape optimization results for those two models suggest that the graft caliber should be maximized, whereas the anastomotic angle and the cuff height (in the Miller cuff model) should be chosen as a compromise between the spatial and temporal gradients of wall shear stress. The shape optimization of the hood ETSDA model did not prove advantageous; however, it could become meaningful with the inclusion of the suture line cut length as an optimization parameter.
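    The optimizer's role in this loop can be sketched with a minimal real-coded genetic algorithm. The toy objective below stands in for the hemodynamic cost returned by the meshless flow solver; the parameter names, bounds, and GA settings are all illustrative.

```python
import random

def ga_minimize(objective, bounds, pop_size=30, generations=60,
                mutation_sigma=0.1, seed=0):
    """Minimal real-coded genetic algorithm: propose geometry parameters,
    score them with the flow solver's objective, and evolve the population
    toward lower (better) values."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=objective)
        elite = scored[: pop_size // 2]                  # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # blend crossover
            child = [min(max(x + rng.gauss(0, mutation_sigma), lo), hi)
                     for x, (lo, hi) in zip(child, bounds)]  # Gaussian mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=objective)

# Toy stand-in for a hemodynamic cost over (anastomotic angle in degrees,
# graft caliber in mm), with its minimum placed arbitrarily at (30, 6).
best = ga_minimize(lambda p: (p[0] - 30) ** 2 + (p[1] - 6) ** 2,
                   bounds=[(10, 80), (3, 9)])
```

    In the actual workflow each objective evaluation would trigger a re-meshing of solution points and a full flow solve, which is why population sizes and generation counts must be chosen with care.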

    Data Assimilation for Atmospheric CO2: Towards Improved Estimates of CO2 Concentrations and Fluxes.

    The lack of a process-level understanding of the carbon cycle is a major contributor to our uncertainty in understanding future changes in the carbon cycle and its interplay with the climate system. Recent initiatives to reduce this uncertainty, including increases in data density and the estimation of emissions and uptake (a.k.a. fluxes) at fine spatiotemporal scales, present computational challenges that call for numerically efficient schemes. Often based on data assimilation (DA) approaches, these schemes are common within the numerical weather prediction community. The goal of this research is to identify fundamental gaps in our knowledge regarding the precision and accuracy of DA for CO2 applications, and develop suitable methods to fill these gaps. First, a new tool for characterizing background error statistics based on predictions from carbon flux and atmospheric transport models is shown to yield improved estimates of CO2 concentration fields within an operational DA system at the European Centre for Medium-Range Weather Forecasts (ECMWF). Second, the impact of numerical approximations within existing DA approaches is explored using a simplified flux estimation problem. It is found that a complex interplay between the underlying numerical approximations and the observational characteristics regulates the performance of the DA methods. Third, a novel and versatile DA method called the geostatistical ensemble square root filter (GEnSRF) is developed to leverage the information content of atmospheric CO2 observations. The ability of GEnSRF to match the performance of a more traditional inverse modeling approach is confirmed using a series of synthetic data experiments over North America. Fourth, GEnSRF is used to assimilate high-density satellite observations from the recently launched GOSAT satellite, and deliver global data-driven estimates of fine-scale CO2 fluxes.
    Diagnostic tools are used to evaluate the benefit of satellite observations in constraining global surface fluxes, relative to a traditional surface monitoring network. Overall, this research has developed, applied, and evaluated a novel set of tools with unique capabilities that increase the credibility of DA methods for atmospheric CO2 applications. Such advancements are necessary if we are to accurately understand the critical controls over atmospheric CO2 growth and improve our understanding of carbon-climate feedbacks. Ph.D., Environmental Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/96172/1/abhishch_1.pd

    Numerical Simulations of Shock and Rarefaction Waves Interacting With Interfaces in Compressible Multiphase Flows

    Developing a highly accurate numerical framework to study multiphase mixing in high speed flows containing shear layers, shocks, and strong accelerations is critical to many scientific and engineering endeavors. These flows occur across a wide range of scales: from tiny bubbles in human tissue to massive stars collapsing. The lack of understanding of these flows has impeded the success of many engineering applications, our comprehension of astrophysical and planetary formation processes, and the development of biomedical technologies. Controlling mixing between different fluids is central to achieving fusion energy, where mixing is undesirable, and supersonic combustion, where enhanced mixing is important. Iron, found throughout the universe and a necessary component for life, is dispersed through the mixing processes of a dying star. Non-invasive treatments using ultrasound to induce bubble collapse in tissue are being developed to destroy tumors or deliver genes to specific cells. Laboratory experiments of these flows are challenging because the initial conditions and material properties are difficult to control, modern diagnostics are unable to resolve the flow dynamics and conditions, and experiments of these flows are expensive. Numerical simulations can circumvent these difficulties and, therefore, have become a necessary component of tackling these scientific challenges. Advances in the three fields of numerical methods, high performance computing, and multiphase flow modeling are presented: (i) novel numerical methods to accurately capture the multiphase nature of the problem; (ii) modern high performance computing paradigms to resolve the disparate time and length scales of the physical processes; (iii) new insights and models of the dynamics of multiphase flows, including mixing through hydrodynamic instabilities.
    These studies have direct applications to engineering and biomedical fields such as fuel injection problems, plasma deposition, cancer treatments, and turbomachinery. Ph.D., Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133458/1/marchdf_1.pd

    Enabling Automated, Reliable and Efficient Aerodynamic Shape Optimization With Output-Based Adapted Meshes

    Simulation-based aerodynamic shape optimization has been greatly pushed forward during the past several decades, largely due to the developments of computational fluid dynamics (CFD), geometry parameterization methods, mesh deformation techniques, sensitivity computation, and numerical optimization algorithms. Effective integration of these components has made aerodynamic shape optimization a highly automated process, requiring less and less human intervention. Mesh generation, on the other hand, has become the main overhead of setting up the optimization problem. Obtaining a good computational mesh is essential in CFD simulations for accurate output predictions, which as a result significantly affects the reliability of optimization results. However, this is in general a nontrivial task, heavily relying on the user's experience, and it becomes harder still with emerging high-fidelity requirements or in the design of novel configurations. Moreover, mesh quality and the associated numerical errors are typically only studied before and after the optimization, leaving the design search path exposed to numerical errors. This work tackles these issues by integrating an additional component, output-based mesh adaptation, within traditional aerodynamic shape optimizations. First, we develop a more suitable error estimator for optimization problems by taking into account errors in both the objective and constraint outputs. The localized output errors are then used to drive mesh adaptation to achieve the desired accuracy on both the objective and constraint outputs. With the variable fidelity offered by the adaptive meshes, multi-fidelity optimization frameworks are developed to tightly couple mesh adaptation and shape optimization. The objective functional and its sensitivity are first evaluated on an initial coarse mesh, which is then adapted as the shape optimization proceeds.
    The effort to set up the optimization is minimal since the initial mesh can be fairly coarse and easy to generate. Meanwhile, the proposed framework saves computational costs by reducing the mesh size at the early stages of the optimization, when the design is far from optimal, and avoiding exhaustive search on low-fidelity meshes when the outputs are inaccurate. To further improve the computational efficiency, we also introduce new methods to accelerate the error estimation and mesh adaptation using machine learning techniques. Surrogate models are developed to predict the localized output error and optimal mesh anisotropy to guide the adaptation. The proposed machine learning approaches demonstrate good performance in two-dimensional test problems, encouraging more study and development to incorporate them within aerodynamic optimization techniques. Although CFD has been extensively used in aircraft design and optimization, the design automation, reliability, and efficiency are largely limited by the mesh generation process and the fixed-mesh optimization paradigm. With the emerging high-fidelity requirements and the further development of unconventional configurations, CFD-based optimization has to be made more accurate and more efficient to achieve higher design reliability and lower computational cost. Furthermore, future aerodynamic optimization needs to avoid unnecessary overhead in mesh generation and optimization setup to further automate the design process. The author expects the methods developed in this work to be key to enabling more automated, reliable, and efficient aerodynamic shape optimization, making CFD-based optimization a more powerful tool in aircraft design. Ph.D., Aerospace Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163034/1/cgderic_1.pd
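    One simple way to act on such localized error estimates is fixed-fraction marking: adapt the cells that contribute most to a combined objective-plus-constraint error indicator. The equal weighting of the two error fields and the marking fraction below are assumptions for illustration, not the thesis's actual strategy.

```python
def mark_cells(obj_err, con_err, fraction=0.2):
    """Fixed-fraction marking driven by a combined per-cell indicator:
    each cell's contribution to the objective output error plus its
    contribution to the constraint output error (equal weighting assumed).
    Returns the indices of the cells selected for adaptation."""
    indicator = [abs(o) + abs(c) for o, c in zip(obj_err, con_err)]
    order = sorted(range(len(indicator)), key=lambda i: -indicator[i])
    n_mark = max(1, round(fraction * len(indicator)))
    return sorted(order[:n_mark])
```

    In an adjoint-based workflow, `obj_err` and `con_err` would come from adjoint-weighted residual estimates on each cell; the marked cells are then refined (or given new anisotropy) before the next optimization step.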

    Continuous reservoir model updating by ensemble Kalman filter on Grid computing architectures

    A reservoir engineering Grid computing toolkit, ResGrid and its extensions, were developed and applied to designed reservoir simulation studies and continuous reservoir model updating. The toolkit provides reservoir engineers with high performance computing capacity to complete their projects without requiring them to delve into Grid resource heterogeneity, security certification, or network protocols. Continuous and real-time reservoir model updating is an important component of closed-loop model-based reservoir management. The method must rapidly and continuously update reservoir models by assimilating production data, so that the performance predictions and the associated uncertainty are up-to-date for optimization. The ensemble Kalman filter (EnKF), a Bayesian approach for model updating, uses Monte Carlo statistics for fusing observation data with forecasts from simulations to estimate a range of plausible models. The ensemble of updated models can be used for uncertainty forecasting or optimization. Grid environments aggregate geographically distributed, heterogeneous resources. Their virtual architecture can handle many large parallel simulation runs, and is thus well suited to solving model-based reservoir management problems. In the study, the ResGrid workflow for Grid-based designed reservoir simulation and an adapted workflow provide tools for building prior model ensembles, task farming and execution, extracting simulator output results, implementing the EnKF, and using a web portal for invoking those scripts. The ResGrid workflow is demonstrated for a geostatistical study of 3-D displacements in heterogeneous reservoirs. A suite of 1920 simulations assesses the effects of geostatistical methods and model parameters. Multiple runs are simultaneously executed using parallel Grid computing. 
    Flow response analyses indicate that efficient, widely-used sequential geostatistical simulation methods may overestimate flow response variability when compared to more rigorous but computationally costly direct methods. Although the EnKF has attracted great interest in reservoir engineering, some aspects of it remain poorly understood and are explored in this dissertation. First, guidelines are offered for selecting data assimilation intervals. Second, an adaptive covariance inflation method is shown to be effective in stabilizing the EnKF. Third, we show that simple truncation can correct negative effects of nonlinearity and non-Gaussianity as effectively as more complex and expensive reparameterization methods.
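    The covariance inflation idea can be illustrated with a basic stochastic EnKF analysis step. This is a generic textbook sketch with a fixed multiplicative inflation factor, not the dissertation's adaptive scheme, and the toy observation model is hypothetical.

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_std, inflation=1.05, seed=0):
    """Stochastic EnKF analysis step with multiplicative covariance inflation.
    ensemble: (n_members, n_state); H: (n_obs, n_state) linear observation
    operator; obs: (n_obs,) observed values. Inflation > 1 spreads members
    about their mean to counter sampling-error-induced filter divergence."""
    rng = np.random.default_rng(seed)
    mean = ensemble.mean(axis=0)
    ens = mean + inflation * (ensemble - mean)         # inflate anomalies
    A = ens - ens.mean(axis=0)                         # state anomalies
    HA = A @ H.T                                       # observation-space anomalies
    n = ens.shape[0]
    P_hh = HA.T @ HA / (n - 1) + (obs_std ** 2) * np.eye(H.shape[0])
    K = (A.T @ HA / (n - 1)) @ np.linalg.inv(P_hh)     # Kalman gain
    perturbed = obs + rng.normal(0, obs_std, size=(n, H.shape[0]))
    return ens + (perturbed - ens @ H.T) @ K.T

# Toy example: 40 members, 2-component state, only the first component observed.
rng = np.random.default_rng(1)
ens = rng.normal([0.0, 0.0], 1.0, size=(40, 2))
H = np.array([[1.0, 0.0]])
analysis = enkf_update(ens, np.array([2.0]), H, obs_std=0.5)
```

    In a reservoir setting, each ensemble member would be a full geological model and the observation operator a simulator run, which is what makes the Grid-based task farming above attractive.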

    Low-cost, high performance solar vapor generation

    Thesis: Ph.D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2018. Cataloged from PDF version of thesis. Includes bibliographical references (pages 163-170). Sustainable access to energy and access to water are two of the defining technological problems that society currently faces. Threats of climate change and depletion of fossil fuel reserves are forcing a shift towards more renewable sources of energy, such as solar energy. At the same time, water resources are becoming scarcer, driven by unsustainable extraction of groundwater. Current projections show that by 2025, the population living in water-stressed areas is expected to increase to 3.9 billion. Exacerbating this problem is continuing urbanization, which stresses local water supplies further. The two problems of energy and water are inextricably tied together. Water processing, such as desalination and wastewater management, fundamentally requires energy inputs, while energy production often requires water for operational cooling. This thesis focuses on developing technologies for low-intensity utilization of solar energy for desalination and wastewater management. Traditional solar thermal technologies collect sunlight and use motorized optical concentrators to concentrate the weak solar flux, creating high-temperature steam, often 400°C or higher. These optical concentrators are costly and require maintenance, which makes them unattractive in many small-scale and low-intensity applications. These applications include distributed desalination, medical sterilization, wastewater management, and more.
    In this thesis, the research has focused on 1) evaporation mechanisms in nanofluids for solar applications, 2) a solar steam generation structure that operates without optical concentrators, and 3) a floating solar still that produces water without the need for periodic cleaning of excess salts, and has a material cost of $3 to supply individual daily drinking water needs, which can be paid back quickly for some regions like the Maldives. One of the first approaches to solar vapor generation was to use nanoparticles suspended in water, or nanofluids, to localize solar absorption to near the evaporation surface. This approach reduces the temperature drop between the heat generation site and the evaporation surface, increasing the evaporation rate. This thesis first explores the vapor generation mechanisms in nanofluid-based solar vapor generation, and develops a small-scale nanofluid-based solar receiver that could generate vapor at 70% efficiency. A theory was developed to show how nanoparticle suspension could affect the nanofluid transient performance. This thesis next demonstrates a small-scale floating solar steam generator that does not require optical concentration. This was achieved by further extending the heat localization concept, using various widely available materials to reduce radiative, convective, and conductive losses. By reconfiguring the device, steam at 100°C or vapor at 70% efficiency could be produced. The basic steam generator was then improved and adapted to reject excess salts left behind from vapor formation. The salt-rejecting structure was coupled with a condensation cover to form a floating solar still that was demonstrated to operate in the ocean, simultaneously producing drinkable water and rejecting the excess salts. Salt rejection experiments were conducted to prove the long-term ability of the structure to operate in saline waters. By George Wei Ni. Ph.D.
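    The quoted efficiencies follow from a simple energy balance: the fraction of incident solar power carried off as latent heat of the generated vapor. A sketch, assuming a standard latent heat of vaporization of about 2.26 MJ/kg and one-sun illumination (1000 W/m²); the specific numbers are illustrative, not taken from the thesis.

```python
def evaporation_efficiency(m_dot, h_fg=2.26e6, solar_flux=1000.0, area=1.0):
    """Solar-thermal evaporation efficiency: eta = m_dot * h_fg / (q * A),
    where m_dot is the evaporation rate in kg/s, h_fg the latent heat of
    vaporization in J/kg, q the solar flux in W/m^2, and A the area in m^2."""
    return m_dot * h_fg / (solar_flux * area)

# At one sun over 1 m^2, 70% efficiency corresponds to roughly
# 0.7 * 1000 / 2.26e6 ~ 3.1e-4 kg/s, i.e. about 1.1 kg of water per hour.
eta = evaporation_efficiency(3.097e-4)
```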

    Stochastic mapping for chemical plume source localization with application to autonomous hydrothermal vent discovery

    Thesis (Ph.D.)--Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Dept. of Mechanical Engineering; and the Woods Hole Oceanographic Institution), 2007. Includes bibliographical references (p. 313-325). This thesis presents a stochastic mapping framework for autonomous robotic chemical plume source localization in environments with multiple sources. Potential applications for robotic chemical plume source localization include pollution and environmental monitoring, chemical plant safety, search and rescue, anti-terrorism, narcotics control, explosive ordnance removal, and hydrothermal vent prospecting. Turbulent flows make the spatial relationship between the detectable manifestation of a chemical plume source, the plume itself, and the location of its source inherently uncertain. Search domains with multiple sources compound this uncertainty because the number of sources as well as their locations is unknown a priori. Our framework for stochastic mapping is an adaptation of occupancy grid mapping where the binary state of map nodes is redefined to denote either the presence (occupancy) or absence of an active plume source. A key characteristic of the chemical plume source localization problem is that only a few sources are expected in the search domain. The occupancy grid framework allows for both plume detections and non-detections to inform the estimated state of grid nodes in the map, thereby explicitly representing explored but empty portions of the domain as well as probable source locations. However, sparsity in the expected number of occupied grid nodes strongly violates a critical conditional independence assumption required by the standard Bayesian recursive map update rule. While that assumption makes for a computationally attractive algorithm, in our application it results in occupancy grid maps that are grossly inconsistent with the assumption of a small number of occupied cells.
    To overcome this limitation, several alternative occupancy grid update algorithms are presented, including an exact solution that is computationally tractable for small numbers of detections and an approximate recursive algorithm with improved performance relative to the standard algorithm but equivalent computational cost. Application to hydrothermal plume data collected by the autonomous underwater vehicle ABE during vent prospecting operations in both the Pacific and Atlantic oceans verifies the utility of the approach. The resulting maps enable nested surveys for homing in on seafloor vent sites to be carried out autonomously. This eliminates inter-dive processing, recharging of batteries, and time spent deploying and recovering the vehicle that would otherwise be necessary with survey design directed by human operators. By Michael V. Jakuba. Ph.D.
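    The standard per-cell Bayesian recursive update that the thesis adapts, and whose conditional independence assumption it critiques, can be sketched in log-odds form. The detection likelihoods below are hypothetical placeholders for a plume forward model.

```python
import math

def logodds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def prob(l):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-l))

def update_cell(l_prior, detected, p_d_occ=0.9, p_d_emp=0.05):
    """Standard recursive occupancy grid update in log-odds form, applying
    Bayes' rule independently per cell. This is exactly the per-cell
    independence assumption that breaks down when occupied cells (sources)
    are sparse. p_d_occ = P(detection | cell holds a source);
    p_d_emp = P(detection | cell empty), e.g. plume advected from elsewhere."""
    if detected:
        return l_prior + math.log(p_d_occ / p_d_emp)
    return l_prior + math.log((1 - p_d_occ) / (1 - p_d_emp))
```

    Starting from a sparse prior (say 1% occupancy), a detection raises a cell's posterior while a non-detection drives it further toward empty; the thesis's alternative updates couple cells together instead of treating each in isolation.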

    Regional to Interhemispheric Connectivity of the Atlantic Ocean Circulation

    This thesis investigates the connectivity and interaction of remote regions in the Atlantic Ocean based on high-resolution model experiments. Connectivity between remote regions has important implications across a range of spatial and temporal scales. It can affect global climate variability, the coherence of circulation changes on regional scales, and the spreading of marine organisms. Based on several advancements in modelling, it is demonstrated how interhemispheric connectivity contributes to changes of the Atlantic Meridional Overturning Circulation (AMOC) on climate timescales. At the same time, the effect of wind forcing and the interaction of individual AMOC pathways with eddies on regional scales are shown to be highly important for understanding AMOC variability on sub-decadal timescales, with further implications for interdisciplinary research questions.