142 research outputs found

    A Bayesian Approach to Computer Model Calibration and Model-Assisted Design

    Computer models of phenomena that are difficult or impossible to study directly are critical for enabling research and assisting design in many areas. In order to be effective, computer models must be calibrated so that they accurately represent the modeled phenomena. A rich variety of methods for computer model calibration has been developed in recent decades. Among the desiderata of such methods is a means of quantifying remaining uncertainty after calibration regarding both the values of the calibrated model inputs and the model outputs. Bayesian approaches to calibration have met this need. However, limitations remain. Whereas in model calibration one finds point estimates or distributions of calibration inputs in order to induce the model to reflect reality accurately, interest in a computer model often centers primarily on its use for model-assisted design, in which the goal is to find values for design inputs that induce the modeled system to approximate some target outcome. Existing Bayesian approaches are limited to the first of these two tasks. The present work develops an approach adapting Bayesian methods for model calibration for application in model-assisted design. The approach retains the benefits of Bayesian calibration in accounting for and quantifying all sources of uncertainty. It is capable of generating a comprehensive assessment of the Pareto optimal inputs for a multi-objective optimization problem. The present work shows that this approach can serve as a method for model-assisted design using a previously calibrated system, and also as a method for model-assisted design using a model that still requires calibration, accomplishing both ends simultaneously.
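The Bayesian calibration step this abstract builds on can be illustrated with a minimal sketch: a random-walk Metropolis sampler recovering the posterior of a single calibration input for a toy model. The model form, prior, and noise level below are illustrative assumptions, not the thesis's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a computer model: output depends on a controllable
# input x and an uncertain calibration input theta (both invented here).
def model(x, theta):
    return theta * np.sin(x) + 0.5 * x

# Synthetic "field" observations generated with a true theta of 1.5.
theta_true, sigma = 1.5, 0.1
x_obs = np.linspace(0.0, 3.0, 20)
y_obs = model(x_obs, theta_true) + rng.normal(0.0, sigma, x_obs.size)

def log_post(theta):
    # Flat prior on [0, 5]; Gaussian measurement-error likelihood.
    if not 0.0 <= theta <= 5.0:
        return -np.inf
    resid = y_obs - model(x_obs, theta)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

# Random-walk Metropolis over the calibration input.
samples, theta, lp = [], 1.0, log_post(1.0)
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples[1000:])  # discard burn-in
print(post.mean(), post.std())   # posterior mean and spread for theta
```

The posterior spread is the quantified remaining uncertainty the abstract refers to; model-assisted design extends the same machinery by treating design inputs, rather than calibration inputs, as the unknowns to infer.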

    Active Localization of Gas Leaks using Fluid Simulation

    Sensors are routinely mounted on robots to acquire various forms of measurements in spatio-temporal fields. Locating features within these fields and reconstruction (mapping) of the dense fields can be challenging in resource-constrained situations, such as when trying to locate the source of a gas leak from a small number of measurements. In such cases, a model of the underlying complex dynamics can be exploited to discover informative paths within the field. We use a fluid simulator as a model to guide inference for the location of a gas leak. We perform localization via minimization of the discrepancy between observed measurements and gas concentrations predicted by the simulator. Our method is able to account for dynamically varying parameters of wind flow (e.g., direction and strength) and their effects on the observed distribution of gas. We develop algorithms for off-line inference as well as for on-line path discovery via active sensing. We demonstrate the efficiency, accuracy and versatility of our algorithm using experiments with a physical robot conducted in outdoor environments. We deploy an unmanned air vehicle (UAV) mounted with a CO2 sensor to automatically seek out a gas cylinder emitting CO2 via a nozzle. We evaluate the accuracy of our algorithm by measuring the error in the inferred location of the nozzle, based on which we show that our proposed approach is competitive with respect to state-of-the-art baselines. Comment: Accepted as a journal paper at IEEE Robotics and Automation Letters (RA-L).
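The discrepancy-minimization idea can be sketched with a stand-in simulator: here an isotropic Gaussian decay kernel rather than a fluid simulation (no wind dynamics), with made-up sensor and source positions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "simulator": isotropic decay of concentration with distance.
# The paper uses a full fluid simulation with wind; this sketch does not.
def predict(src, sensors):
    d2 = np.sum((sensors - src) ** 2, axis=1)
    return np.exp(-d2 / 2.0)

# Measurement locations on a coarse grid over a 10 x 10 m area.
g = np.linspace(0.0, 10.0, 6)
sensors = np.array([[a, b] for a in g for b in g])

true_src = np.array([3.0, 4.0])
obs = predict(true_src, sensors) + rng.normal(0.0, 0.01, sensors.shape[0])

# Localization: minimize the squared discrepancy between observed and
# simulated concentrations over a grid of candidate source locations.
cand = np.linspace(0.0, 10.0, 101)
best, best_err = None, np.inf
for x in cand:
    for y in cand:
        err = np.sum((obs - predict(np.array([x, y]), sensors)) ** 2)
        if err < best_err:
            best, best_err = np.array([x, y]), err
print(best)  # recovered source location
```

The paper's on-line variant replaces the fixed sensor grid with actively chosen measurement locations that are expected to shrink this discrepancy fastest.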

    Olfactory learning alters navigation strategies and behavioral variability in C. elegans

    Animals adjust their behavioral response to sensory input adaptively depending on past experiences. This flexible brain computation is crucial for survival and is of great interest in neuroscience. The nematode C. elegans modulates its navigation behavior depending on the association of the odor butanone with food (appetitive training) or starvation (aversive training), and will then climb up the butanone gradient or ignore it, respectively. However, the exact change in navigation strategy in response to learning is still unknown. Here we study learned odor navigation in worms by combining precise experimental measurement and a novel descriptive model of navigation. Our model consists of two known navigation strategies in worms: biased random walk and weathervaning. We infer weights on these strategies by applying the model to worm navigation trajectories and the exact odor concentration each worm experiences. Compared to naive worms, appetitive-trained worms up-regulate the biased random walk strategy, and aversive-trained worms down-regulate the weathervaning strategy. The statistical model predicts the past training condition from navigation data with >90% accuracy, outperforming the classical chemotaxis metric. We find that behavioral variability is altered by learning, such that worms are less variable after training compared to naive ones. The model further predicts the learning-dependent response and variability under optogenetic perturbation of the olfactory neuron AWC^ON. Lastly, we investigate neural circuits downstream from AWC^ON that are differentially recruited for learned odor-guided navigation. Together, we provide a new paradigm to quantify flexible navigation algorithms and pinpoint the underlying neural substrates.
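A toy simulation can make the two-strategy decomposition concrete. The odor field, turning rates, and strategy weights below are invented for illustration; the paper infers the weights from measured trajectories rather than simulating them.

```python
import numpy as np

rng = np.random.default_rng(2)

def navigate(w_brw, w_wv, steps=2000):
    # Odor concentration increases along +x, so the gradient direction
    # corresponds to heading 0.
    x = 0.0
    heading = rng.uniform(0.0, 2.0 * np.pi)
    for _ in range(steps):
        dc = np.cos(heading)  # concentration change along current motion
        # Biased random walk: sharp reorientations ("pirouettes") become
        # more likely when moving down-gradient.
        p_turn = 0.05 + w_brw * 0.1 * max(0.0, -dc)
        if rng.uniform() < p_turn:
            heading = rng.uniform(0.0, 2.0 * np.pi)
        # Weathervaning: gradual steering toward the gradient direction.
        heading -= w_wv * 0.05 * np.sin(heading)
        x += np.cos(heading)
    return x  # net displacement up the gradient

# A pure random walker drifts little; up-weighting the biased-random-walk
# strategy (as appetitive-trained worms do) produces strong up-gradient drift.
naive = np.mean([navigate(0.0, 0.0) for _ in range(20)])
trained = np.mean([navigate(2.0, 0.0) for _ in range(20)])
print(naive, trained)
```

Fitting such weights per worm, given its trajectory and the concentration it sampled, is what lets the model classify the training condition far better than a single chemotaxis index.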

    Artificial Intelligence for the Electron Ion Collider (AI4EIC)

    The Electron-Ion Collider (EIC), a state-of-the-art facility for studying the strong force, is expected to begin commissioning its first experiments in 2028. This is an opportune time for artificial intelligence (AI) to be included from the start at this facility and in all phases leading up to the experiments. The second annual workshop organized by the AI4EIC working group, which recently took place, centered on exploring all current and prospective application areas of AI for the EIC. This workshop is not only beneficial for the EIC, but also provides valuable insights for the newly established ePIC collaboration at the EIC. This paper summarizes the different activities and R&D projects covered across the sessions of the workshop and provides an overview of the goals, approaches and strategies regarding AI/ML in the EIC community, as well as cutting-edge techniques currently studied in other experiments. Comment: 27 pages, 11 figures; AI4EIC workshop, tutorials and hackathon.

    Calibration of a grey box model using Particle Swarm Optimization on different building structures

    Development of the calibration component of a BEMS based on model predictive control (MPC), for the optimization of a heat pump coupled with a photovoltaic system.
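A minimal sketch of the calibration step this thesis describes: particle swarm optimization fitting a first-order (1R1C) grey-box building model to synthetic indoor-temperature data. Parameter values, bounds, and PSO coefficients are illustrative assumptions, not the thesis's.

```python
import numpy as np

rng = np.random.default_rng(3)

# First-order grey-box (1R1C) building model, forward-Euler simulated:
# dT/dt = (T_out - T) / (R * C) + q / C
def simulate(R, C, T_out, q, T0=20.0, dt=3600.0):
    T = np.empty(T_out.size)
    T[0] = T0
    for k in range(1, T_out.size):
        T[k] = T[k - 1] + dt * ((T_out[k - 1] - T[k - 1]) / (R * C)
                                + q[k - 1] / C)
    return T

# Synthetic "measured" data from known parameters (illustrative values).
n = 48
T_out = 10.0 + 5.0 * np.sin(np.arange(n) * 2 * np.pi / 24)
q = np.full(n, 500.0)            # heating power [W]
R_true, C_true = 0.005, 2e6      # thermal resistance [K/W], capacitance [J/K]
T_meas = simulate(R_true, C_true, T_out, q) + rng.normal(0, 0.05, n)

def cost(p):
    return np.mean((simulate(p[0], p[1], T_out, q) - T_meas) ** 2)

# Minimal particle swarm optimization over (R, C).
lo, hi = np.array([1e-3, 5e5]), np.array([2e-2, 1e7])
n_p = 30
pos = rng.uniform(lo, hi, (n_p, 2))
vel = np.zeros((n_p, 2))
pbest, pcost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pcost.argmin()].copy()
for _ in range(100):
    r1, r2 = rng.uniform(size=(2, n_p, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    c = np.array([cost(p) for p in pos])
    improved = c < pcost
    pbest[improved], pcost[improved] = pos[improved], c[improved]
    gbest = pbest[pcost.argmin()].copy()
print(gbest)  # calibrated (R, C), near the true values
```

In an MPC-based BEMS, the calibrated (R, C) pair would then parameterize the predictive model used to schedule the heat pump against photovoltaic availability.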

    Machine Learning for Smart and Energy-Efficient Buildings

    Energy consumption in buildings, both residential and commercial, accounts for approximately 40% of all energy usage in the U.S., and similar numbers are being reported from countries around the world. This significant amount of energy is used to maintain a comfortable, secure, and productive environment for the occupants. It is therefore crucial that energy consumption in buildings be optimized while maintaining satisfactory levels of occupant comfort, health, and safety. Recently, machine learning has proven to be an invaluable tool in deriving important insights from data and optimizing various systems. In this work, we review the ways in which machine learning has been leveraged to make buildings smart and energy-efficient. For the convenience of readers, we provide a brief introduction to several machine learning paradigms and to the components and functioning of each smart building system we cover. Finally, we discuss challenges faced while implementing machine learning algorithms in smart buildings and provide future avenues for research at the intersection of smart buildings and machine learning.

    ECU-oriented models for NOx prediction. Part 1: a mean value engine model for NOx prediction

    The installation of nitrogen oxide sensors in diesel engines has been proposed in order to track the emissions at the engine exhaust, with applications to the control and diagnosis of the after-treatment devices. However, the use of models is still necessary since the output from these sensors is delayed and filtered. The present paper deals with the problem of nitrogen oxide estimation in turbocharged diesel engines, combining the information provided by both models and sensors. In Part 1 of this paper, a control-oriented nitrogen oxide model is designed. The model is based on a mapping of the nitrogen oxide output and a set of corrections which account for variations in the intake and ambient conditions, and it is designed for implementation in commercial electronic control units. The model is sensitive to variations in the engine's air path, which is resolved through the engine volumetric efficiency and first-principle equations, but disregards the effect of variations in the injection settings. In order to account for the effect of thermal transients on the in-cylinder temperature, the model introduces a dynamic factor. The model behaves well in both steady-state and transient operation, achieving an average error of 7% in the steady state and lower than 10% in a demanding sporty mountain-profile driving cycle. The relatively low calibration effort and the model accuracy show the feasibility of the model for exhaust gas recirculation control as well as onboard diagnosis of the nitrogen oxide emissions. Guardiola, C.; Pla Moreno, B.; Blanco-Rodriguez, D.; Calendini, P. O. (2015). ECU-oriented models for NOx prediction. Part 1: a mean value engine model for NOx prediction. Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering, 229(8), 992-1015. doi:10.1177/0954407014550191
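The model structure described above, a nominal map plus multiplicative corrections plus a first-order dynamic factor for thermal transients, can be sketched as follows. All map values, exponents, and time constants here are invented placeholders, not the paper's calibrated parameters.

```python
import numpy as np

# Nominal NOx map over engine speed and load (illustrative numbers).
speed_grid = np.array([1000.0, 2000.0, 3000.0])   # rpm
load_grid = np.array([25.0, 50.0, 100.0])         # %
nox_map = np.array([[120.0, 250.0, 480.0],        # ppm at 1000 rpm
                    [150.0, 300.0, 560.0],        # ppm at 2000 rpm
                    [180.0, 340.0, 620.0]])       # ppm at 3000 rpm

def interp2(speed, load):
    # Bilinear interpolation on the nominal map.
    row = np.array([np.interp(load, load_grid, r) for r in nox_map])
    return np.interp(speed, speed_grid, row)

def nox_estimate(speed, load, p_int, t_amb, z_prev, tau=5.0, dt=0.1):
    nominal = interp2(speed, load)
    # Multiplicative corrections for deviations of intake pressure [bar]
    # and ambient temperature [degC] from reference conditions.
    k_p = (p_int / 1.5) ** 0.8
    k_t = 1.0 + 0.002 * (t_amb - 25.0)
    steady = nominal * k_p * k_t
    # First-order dynamic factor modelling the thermal transient.
    return z_prev + dt / tau * (steady - z_prev)

# Step in load and boost pressure; the estimate settles with lag tau.
z = nox_estimate(2000.0, 25.0, 1.5, 25.0, 150.0)
for _ in range(500):
    z = nox_estimate(2000.0, 100.0, 1.8, 25.0, z)
print(z)  # settled estimate after the transient
```

This map-plus-corrections form is what makes the model cheap enough to run in an ECU: the expensive physics is baked into the map offline, and only interpolation and a first-order filter run online.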

    Tree-structured multiclass probability estimators

    Nested dichotomies are used as a method of transforming a multiclass classification problem into a series of binary problems. A binary tree structure is constructed over the label space that recursively splits the set of classes into subsets, and a binary classification model learns to discriminate between the two subsets of classes at each node. Several distinct nested dichotomy structures can be built into an ensemble for superior performance. In this thesis, we introduce two new methods for constructing more accurate nested dichotomies. Random-pair selection is a subset selection method that aims to group similar classes together in a non-deterministic fashion, making it easy to construct accurate ensembles. Multiple subset evaluation takes this, and other subset selection methods, further by evaluating several different splits and choosing the best-performing one. Finally, we also discuss the calibration of the probability estimates produced by nested dichotomies. We observe that nested dichotomies systematically produce under-confident predictions, even if the binary classifiers are well calibrated, and especially when the number of classes is high. Furthermore, substantial performance gains can be made when probability calibration methods are also applied to the internal models.
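The probability-estimation scheme for a nested dichotomy can be sketched directly: each internal node contributes a binary probability, and the estimate for a class is the product of probabilities along its root-to-leaf path. The tree shape and node probabilities below are invented, standing in for trained binary classifiers.

```python
from dataclasses import dataclass

# A nested dichotomy: each internal node holds a binary model's estimate
# of P(class lies in the left subset); class probabilities are products
# of these along the root-to-leaf path.
@dataclass
class Node:
    classes: set
    p_left: float = 0.0       # would come from a binary classifier
    left: "Node" = None
    right: "Node" = None

def class_prob(node, c, p=1.0):
    if node.left is None:
        return p              # leaf: accumulated product along the path
    if c in node.left.classes:
        return class_prob(node.left, c, p * node.p_left)
    return class_prob(node.right, c, p * (1.0 - node.p_left))

# Tree over classes {a, b, c}: root splits {a} vs {b, c}.
leaf_a, leaf_b, leaf_c = Node({"a"}), Node({"b"}), Node({"c"})
inner = Node({"b", "c"}, p_left=0.7, left=leaf_b, right=leaf_c)
root = Node({"a", "b", "c"}, p_left=0.4, left=leaf_a, right=inner)

probs = {c: class_prob(root, c) for c in "abc"}
print(probs)  # products along each path; sums to 1 across classes
```

The under-confidence the thesis observes arises because each class probability multiplies several imperfect binary estimates, which calibrating the internal models can correct.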

    On Novel Approaches to Model-Based Structural Health Monitoring

    Structural health monitoring (SHM) strategies have classically fallen into two main categories of approach: model-driven and data-driven methods. The former utilises physics-based models and inverse techniques as a method for inferring the health state of a structure from changes to updated parameters; hence it is defined here as the inverse model-driven approach. The other frames SHM within a statistical pattern recognition paradigm. These methods require no physical modelling, instead inferring relationships between data and health states directly. Although successes with both approaches have been made, each suffers from significant drawbacks, namely parameter estimation and interpretation difficulties within the inverse model-driven framework, and a lack of available full-system damage state data for data-driven techniques. Consequently, this thesis seeks to outline and develop a framework for an alternative category of approach: forward model-driven SHM. This class of strategies utilises calibrated physics-based models, in a forward manner, to generate health state data (i.e. the undamaged condition and damage states of interest) for training machine learning or pattern recognition technologies. As a result, the framework seeks to provide potential solutions to these issues by removing the need for making health decisions from updated parameters and by providing a mechanism for obtaining health state data. In light of this objective, a framework for forward model-driven SHM is established, highlighting key challenges and technologies that are required for realising this category of approach. The framework is constructed from two main components: generating physics-based models that accurately predict outputs under various damage scenarios, and machine learning methods used to infer decision bounds. This thesis deals with the former, developing technologies and strategies for producing statistically representative predictions from physics-based models.
Specifically, this work seeks to define validation within this context and propose a validation strategy, develop technologies that infer uncertainties from various sources, including model discrepancy, and offer a solution to the issue of validating full-system predictions when data are not available at this level. The first section defines validation within a forward model-driven context, offering a strategy of hypothesis testing, statistical distance metrics, visualisation tools such as the witness function, and deterministic metrics. The field of statistical distances is shown to provide a wealth of potential validation metrics that consider whole probability distributions. Additionally, existing validation metrics can be categorised within this field's terminology, providing greater insight. In the second part of this study, emulator technologies, specifically Gaussian Process (GP) methods, are discussed. Practical implementation considerations are examined, including the establishment of validation and diagnostic techniques. Various GP extensions are outlined, with particular focus on technologies for dealing with large data sets and their applicability as emulators. Utilising these technologies, two techniques for calibrating models, whilst accounting for and inferring model discrepancies, are demonstrated: Bayesian Calibration and Bias Correction (BCBC) and Bayesian History Matching (BHM). Both methods were applied to representative building structures in order to demonstrate their effectiveness within a forward model-driven SHM strategy. Sequential design heuristics were developed for BHM, along with an importance-sampling-based technique for inferring the functional model discrepancy uncertainties. The third body of work proposes a multi-level uncertainty integration strategy by developing a subfunction discrepancy approach.
This technique seeks to construct a methodology for producing valid full-system predictions through a combination of validated sub-system models where uncertainties and model discrepancy have been quantified. The procedure is demonstrated on a numerical shear structure, where it is shown to be effective. Finally, conclusions about the aforementioned technologies are provided. In addition, a review of future directions for forward model-driven SHM is outlined, with the hope that this category receives wider investigation within the SHM community.
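One concrete piece of the Bayesian History Matching machinery discussed above is the implausibility measure, which rules out parameter settings whose emulated output lies too far from the observation once all variances are accounted for. The emulator, observation, and variance values below are toy stand-ins, not the thesis's building models.

```python
import numpy as np

# History-matching implausibility: distance between observation and
# emulator mean, normalised by emulator, observation, and model
# discrepancy variances.
def implausibility(mu_em, var_em, z_obs, var_obs, var_disc):
    return np.abs(z_obs - mu_em) / np.sqrt(var_em + var_obs + var_disc)

# Toy "emulator" of f(theta) = sin(3 * theta): exact mean with a small
# uniform predictive variance, in place of a trained GP.
theta = np.linspace(0.0, 2.0, 201)
mu = np.sin(3.0 * theta)
var_em = np.full_like(theta, 1e-4)

z_obs, var_obs, var_disc = 0.5, 1e-3, 1e-3   # observation and variances

I = implausibility(mu, var_em, z_obs, var_obs, var_disc)
nroy = theta[I < 3.0]   # "not ruled out yet" (NROY) parameter region
print(nroy.min(), nroy.max())
```

Sequential design then concentrates new model runs inside the NROY region, shrinking it wave by wave; the discrepancy variance term is exactly where the inferred model discrepancy uncertainties enter.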