Ensemble learning of model hyperparameters and spatiotemporal data for calibration of low-cost PM2.5 sensors.
The PM2.5 air quality index (AQI) measurements from government-built supersites are accurate but cannot provide dense coverage of the monitored areas. Low-cost PM2.5 sensors can be deployed as a fine-grained internet-of-things (IoT) complement to government facilities. Calibrating low-cost sensors against high-accuracy supersites is thus essential. Moreover, the method used to impute missing values in the training data can affect the calibration result, the best calibration performance requires hyperparameter optimization, and the factors affecting PM2.5 concentrations, such as climate, geographical landscape, and anthropogenic activity, are uncertain in both the spatial and temporal dimensions. In this paper, an ensemble learning approach for imputation method selection, calibration model hyperparameterization, and spatiotemporal training data composition is proposed. Three government supersites in central Taiwan are chosen for the deployment of low-cost sensors, and hourly PM2.5 measurements are collected for 60 days to conduct the experiments. Three optimizers, the Sobol sequence, Nelder-Mead, and particle swarm optimization (PSO), are compared across various versions of the ensembles. The best calibration results are obtained with PSO; the improvement ratios with respect to R2, RMSE, and NME are 4.92%, 52.96%, and 56.85%, respectively.
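Since the abstract singles out particle swarm optimization as the best-performing optimizer, a minimal PSO sketch may be useful. Everything below is illustrative, not the paper's method: the objective is a hypothetical stand-in for a calibration model's validation error over two made-up hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def calibration_error(params):
    # Hypothetical stand-in for a calibration model's validation RMSE;
    # by construction its minimum is at alpha=0.3, gamma=1.5.
    alpha, gamma = params
    return (alpha - 0.3) ** 2 + (gamma - 1.5) ** 2

def pso(objective, bounds, n_particles=20, n_iters=100, w=0.7, c1=1.5, c2=1.5):
    """Basic global-best PSO over box-constrained hyperparameters."""
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()                                  # personal bests
    pbest_val = np.array([objective(p) for p in pos])
    g = pbest[pbest_val.argmin()].copy()                # global best
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull (personal best) + social pull (global best)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, objective(g)

best, err = pso(calibration_error, bounds=[(0.0, 1.0), (0.0, 3.0)])
```

In a real calibration pipeline the objective would train the calibration model with the candidate hyperparameters and return its error against supersite measurements; the swarm machinery stays unchanged.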
ASCENS: Engineering Autonomic Service-Component Ensembles
Today’s developers often face the demanding task of developing software for ensembles: systems with massive numbers of nodes, operating in open and non-deterministic environments with complex interactions, that need to adapt dynamically to new requirements, technologies, or environmental conditions without redeployment and without interruption of the system’s functionality. Conventional development approaches and languages do not provide adequate support for the problems posed by this challenge. The goal of the ASCENS project is to develop a coherent, integrated set of methods and tools to build software for ensembles. To this end we research foundational issues that arise during the development of these kinds of systems, and we build mathematical models that address them. Based on these theories we design a family of languages for engineering ensembles, formal methods that can handle the size, complexity and adaptivity required by ensembles, and software-development methods that provide guidance for developers. In this paper we provide an overview of several research areas of ASCENS: the SOTA approach to ensemble engineering and the underlying formal model called GEM, formal notions of adaptation and awareness, the SCEL language, quantitative analysis of ensembles, and finally software-engineering methods for ensembles.
Data assimilation using bayesian filters and B-spline geological models
This paper proposes a new approach to problems of data assimilation, also known as history matching, of oilfield production data by adjustment of the location and sharpness of patterns of geological facies. Traditionally, this problem has been addressed using gradient-based approaches with a level set parameterization of the geology. Gradient-based methods are robust, but computationally demanding for real-world reservoir problems and insufficient for reservoir-management uncertainty assessment. Recently, the ensemble filter approach has been used to tackle this problem because of its high efficiency from the standpoint of implementation, computational cost, and performance. Incorporating a level set parameterization in this approach could further deal with the lack of differentiability with respect to facies type, but its practical implementation rests on assumptions that are not easily satisfied in real problems. In this work, we propose to describe the geometry of the permeability field using B-spline curves. This transforms history matching of the discrete facies type into the estimation of continuous B-spline control points. As the filtering scheme, we use the ensemble square-root filter (EnSRF). The efficacy of the EnSRF with the B-spline parameterization is investigated through three numerical experiments, in which the reservoir contains a curved channel, a disconnected channel, or a 2-dimensional closed feature. It is found that the application of the proposed method to the problem of adjusting facies edges to match production data is relatively straightforward and provides statistical estimates of the distribution of geological facies and of the state of the reservoir.
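The EnSRF analysis step mentioned above can be sketched for a single scalar observation. This follows the standard serial-update (Whitaker-Hamill) form of the square-root filter, which updates the ensemble mean with the usual Kalman gain and the perturbations with a reduced gain, avoiding perturbed observations; the names and dimensions are illustrative (in the paper, the B-spline control points would play the role of the state vector).

```python
import numpy as np

def ensrf_update(ens, obs, obs_var, H):
    """One EnSRF analysis step for a single scalar observation.
    ens: (n_state, n_members) prior ensemble; H: (n_state,) linear obs operator."""
    n = ens.shape[1]
    mean = ens.mean(axis=1, keepdims=True)
    X = ens - mean                       # state perturbations
    Hx = H @ ens                         # ensemble projected into obs space
    Hx_mean = Hx.mean()
    HX = Hx - Hx_mean                    # obs-space perturbations
    PHt = X @ HX / (n - 1)               # sample cross-covariance P H^T
    HPHt = HX @ HX / (n - 1)             # sample obs-space variance H P H^T
    K = PHt / (HPHt + obs_var)           # Kalman gain (scalar obs)
    mean_a = mean[:, 0] + K * (obs - Hx_mean)
    # square-root factor: reduced gain for the perturbation update,
    # so observations need not be perturbed
    alpha = 1.0 / (1.0 + np.sqrt(obs_var / (HPHt + obs_var)))
    X_a = X - np.outer(alpha * K, HX)
    return mean_a[:, None] + X_a

# Illustrative use: a 2-state prior ensemble, observing the first component.
rng = np.random.default_rng(1)
prior = rng.normal(0.0, 1.0, size=(2, 200))
posterior = ensrf_update(prior, obs=2.0, obs_var=0.5, H=np.array([1.0, 0.0]))
```

Multiple observations are handled by applying this step serially, one observation at a time, with the analysis ensemble of each step becoming the prior of the next.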
The measurement postulates of quantum mechanics are operationally redundant
Understanding the core content of quantum mechanics requires us to disentangle the hidden logical relationships between the postulates of this theory. Here we show that the mathematical structure of quantum measurements, the formula for assigning outcome probabilities (Born's rule) and the post-measurement state-update rule, can be deduced from the other quantum postulates, often referred to as "unitary quantum mechanics", and the assumption that ensembles on finite-dimensional Hilbert spaces are characterised by finitely many parameters. This is achieved by taking an operational approach to physical theories, and using the fact that the manner in which a physical system is partitioned into subsystems is a subjective choice of the observer, and hence should not affect the predictions of the theory. In contrast to other approaches, our result does not assume that measurements are related to operators or bases, it does not rely on the universality of quantum mechanics, and it is independent of the interpretation of probability.
Comment: This is a post-peer-review, pre-copyedit version of an article published in Nature Communications. The final authenticated version is available online at: http://dx.doi.org/10.1038/s41467-019-09348-
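For reference, the two measurement postulates the abstract shows to be redundant take the following standard textbook form in the special case of a projective measurement with outcomes $a$ and projectors $\Pi_a$ (the paper's result covers general measurements; this is only the familiar instance):

```latex
% Born's rule: probability of outcome a given state |psi>
P(a \mid \psi) = \langle \psi | \Pi_a | \psi \rangle ,
% state-update rule after observing outcome a
|\psi\rangle \;\mapsto\; \frac{\Pi_a |\psi\rangle}{\sqrt{\langle \psi | \Pi_a | \psi \rangle}} .
```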
Evolutionary model type selection for global surrogate modeling
Due to the scale and computational complexity of currently used simulation codes, global surrogate models (metamodels) have become indispensable tools for exploring and understanding the design space. Due to their compact formulation they are cheap to evaluate and thus readily facilitate visualization, design space exploration, rapid prototyping, and sensitivity analysis. They can also be used as accurate building blocks in design packages or larger simulation environments. Consequently, there is great interest in techniques that facilitate the construction of such approximation models while minimizing the computational cost and maximizing model accuracy. Many surrogate model types exist (Support Vector Machines, Kriging, Neural Networks, etc.) but no type is optimal in all circumstances, nor is there any hard theory available that can help make this choice. In this paper we present an automatic approach to the model type selection problem. We describe an adaptive global surrogate modeling environment with adaptive sampling, driven by speciated evolution. Different model types are evolved cooperatively using a Genetic Algorithm (heterogeneous evolution) and compete to approximate the iteratively selected data. In this way the optimal model type and complexity for a given data set or simulation code can be dynamically determined. Its utility and performance are demonstrated on a number of problems where it outperforms traditional sequential execution of each model type.
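The model-type-selection idea can be illustrated in a drastically simplified, non-evolutionary form: fit several candidate surrogate types to samples of an expensive code and keep whichever generalizes best on held-out points. This is only a sketch of the selection principle; the paper's speciated GA replaces this exhaustive comparison, and polynomial degrees here stand in for genuinely different model types such as Kriging, SVMs, and neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "expensive simulation code": a smooth response we want a cheap surrogate for.
def expensive_sim(x):
    return np.sin(2 * x) + 0.1 * x ** 2

x_train = rng.uniform(-2, 2, 40)
y_train = expensive_sim(x_train)
x_val = rng.uniform(-2, 2, 40)       # held-out points for model comparison
y_val = expensive_sim(x_val)

# Candidate surrogate "types": polynomials of increasing complexity,
# standing in for heterogeneous model types competing on the same data.
candidates = {f"poly{d}": d for d in (1, 3, 5, 7)}

def fit_and_score(degree):
    """Fit one candidate type and return its held-out RMSE."""
    coeffs = np.polyfit(x_train, y_train, degree)
    resid = np.polyval(coeffs, x_val) - y_val
    return np.sqrt(np.mean(resid ** 2))

scores = {name: fit_and_score(d) for name, d in candidates.items()}
best_type = min(scores, key=scores.get)
```

In the paper's setting, the winning model type is not fixed up front: the GA re-evaluates the competing species as adaptive sampling adds data, so the selected type and complexity can change as the design space is explored.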