Short and long-term wind turbine power output prediction
In the wind energy industry, it is of great importance to develop models that
accurately forecast the power output of a wind turbine, as such predictions are
used for wind farm location assessment, power pricing and bidding, monitoring,
and preventive maintenance. As a first step, and following the
guidelines of the existing literature, we use the supervisory control and data
acquisition (SCADA) data to model the wind turbine power curve (WTPC). We
explore various parametric and non-parametric approaches for the modeling of
the WTPC, such as parametric logistic functions, and non-parametric piecewise
linear, polynomial, or cubic spline interpolation functions. We demonstrate
that all aforementioned classes of models are rich enough (with respect to
their relative complexity) to accurately model the WTPC, as their mean squared
error (MSE) is close to the MSE lower bound calculated from the historical
data. We further enhance the accuracy of our proposed model, by incorporating
additional environmental factors that affect the power output, such as the
ambient temperature, and the wind direction. However, all aforementioned
models, when it comes to forecasting, seem to have an intrinsic limitation, due
to their inability to capture the inherent auto-correlation of the data. To
avoid this conundrum, we show that adding a properly scaled ARMA modeling layer
increases short-term prediction performance, while keeping the long-term
prediction capability of the model.
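As a hypothetical illustration of the parametric approach the abstract mentions, the sketch below fits a four-parameter logistic power curve to synthetic SCADA-like data with SciPy. The functional form, parameter names, and data are our own assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_power_curve(v, p_max, k, v_mid, p_min):
    """Power output (kW) as a logistic function of wind speed v (m/s).

    Hypothetical parameterization: p_min/p_max are the low/high asymptotes,
    v_mid the inflection speed, k the steepness.
    """
    return p_min + (p_max - p_min) / (1.0 + np.exp(-k * (v - v_mid)))

# Synthetic SCADA-like sample: wind speeds and noisy power observations.
rng = np.random.default_rng(0)
v = rng.uniform(0, 25, 500)
p_true = logistic_power_curve(v, 2000.0, 0.8, 9.0, 0.0)
p_obs = p_true + rng.normal(0, 50, v.shape)

# Least-squares fit of the four logistic parameters.
params, _ = curve_fit(logistic_power_curve, v, p_obs,
                      p0=[1800.0, 1.0, 10.0, 0.0])

# The residual MSE is the quantity the abstract compares against its
# historical-data lower bound.
residuals = p_obs - logistic_power_curve(v, *params)
mse = float(np.mean(residuals**2))
```

With well-behaved data the fitted MSE approaches the irreducible noise variance, which mirrors the abstract's observation that the parametric class is rich enough to reach the MSE lower bound.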
Cover Tree Bayesian Reinforcement Learning
This paper proposes an online tree-based Bayesian approach for reinforcement
learning. For inference, we employ a generalised context tree model. This
defines a distribution on multivariate Gaussian piecewise-linear models, which
can be updated in closed form. The tree structure itself is constructed using
the cover tree method, which remains efficient in high dimensional spaces. We
combine the model with Thompson sampling and approximate dynamic programming to
obtain effective exploration policies in unknown environments. The flexibility
and computational simplicity of the model render it suitable for many
reinforcement learning problems in continuous state spaces. We demonstrate this
in an experimental comparison with least-squares policy iteration.
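The exploration strategy the paper combines with its cover-tree model is Thompson sampling. As a minimal, hedged sketch of that ingredient alone, here it is for Bernoulli bandits rather than the paper's continuous-state reinforcement learning setting; arm probabilities and priors are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
true_means = [0.3, 0.5, 0.7]            # unknown arm success probabilities
alpha = np.ones(3)                      # Beta(1, 1) priors, one per arm
beta = np.ones(3)

for _ in range(5000):
    theta = rng.beta(alpha, beta)       # sample one model from the posterior
    arm = int(np.argmax(theta))         # act greedily w.r.t. that sample
    reward = rng.random() < true_means[arm]
    alpha[arm] += reward                # conjugate Beta-Bernoulli update
    beta[arm] += 1 - reward

best_arm = int(np.argmax(alpha / (alpha + beta)))
```

Sampling a model and acting greedily on it gives exploration "for free": uncertain arms occasionally produce large samples and get tried, while clearly inferior arms are pulled less and less often.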
Connecting the Dots: Towards Continuous Time Hamiltonian Monte Carlo
Continuous time Hamiltonian Monte Carlo is introduced, as a powerful
alternative to Markov chain Monte Carlo methods for continuous target
distributions. The method is constructed in two steps: first, Hamiltonian
dynamics are chosen as the deterministic dynamics in a continuous time
piecewise deterministic Markov process. Under very mild restrictions, such a
process will have the desired target distribution as an invariant distribution.
Second, the numerical implementation of such processes, based on adaptive
numerical integration of second-order ordinary differential equations, is
considered. The numerical implementation yields an approximate, yet highly
robust algorithm that, unlike conventional Hamiltonian Monte Carlo, enables the
exploitation of the complete Hamiltonian trajectories (hence the title). The
proposed algorithm may yield large speedups and improvements in stability
relative to relevant benchmarks, while incurring numerical errors that are
negligible relative to the overall Monte Carlo errors.
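The deterministic ingredient of the construction is numerical integration of Hamiltonian dynamics. The paper uses adaptive integrators; as a simpler hedged illustration of the same idea, the sketch below applies the standard fixed-step leapfrog scheme to a standard Gaussian target and checks that the Hamiltonian (total energy) is nearly conserved along the trajectory.

```python
import numpy as np

def leapfrog(q, p, grad_U, step, n_steps):
    """Leapfrog integration of dq/dt = p, dp/dt = -grad_U(q)."""
    q, p = q.copy(), p.copy()
    p -= 0.5 * step * grad_U(q)          # initial half step in momentum
    for _ in range(n_steps - 1):
        q += step * p                    # full step in position
        p -= step * grad_U(q)            # full step in momentum
    q += step * p
    p -= 0.5 * step * grad_U(q)          # final half step in momentum
    return q, p

# Standard Gaussian target: U(q) = q^2 / 2, so grad U(q) = q.
grad_U = lambda q: q
q0, p0 = np.array([1.0]), np.array([0.5])

H0 = 0.5 * float((q0**2 + p0**2).sum())  # initial total energy
q1, p1 = leapfrog(q0, p0, grad_U, step=0.1, n_steps=50)
H1 = 0.5 * float((q1**2 + p1**2).sum())  # energy after the trajectory
```

The small, non-accumulating energy error is what makes such trajectories usable for Monte Carlo; the paper's point is that with a continuous-time formulation the whole trajectory, not just its endpoint, contributes samples.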
Uncertainty Quantification of geochemical and mechanical compaction in layered sedimentary basins
In this work we propose an Uncertainty Quantification methodology for
sedimentary basins evolution under mechanical and geochemical compaction
processes, which we model as a coupled, time-dependent, non-linear,
monodimensional (depth-only) system of PDEs with uncertain parameters. While in
previous works (Formaggia et al., 2013; Porta et al., 2014) we assumed a
simplified depositional history with only one material, in this work we
consider multi-layered basins, in which each layer is characterized by a
different material, and hence by different properties. This setting requires
several improvements with respect to our earlier works, both concerning the
deterministic solver and the stochastic discretization. On the deterministic
side, we replace the previous fixed-point iterative solver with a more
efficient Newton solver at each step of the time-discretization. On the
stochastic side, the multi-layered structure gives rise to discontinuities in
the dependence of the state variables on the uncertain parameters, that need an
appropriate treatment for surrogate modeling techniques, such as sparse grids,
to be effective. We propose an innovative methodology to this end which relies
on a change of coordinate system to align the discontinuities of the target
function within the random parameter space. The reference coordinate system is
built upon exploiting physical features of the problem at hand. We employ the
locations of material interfaces, which display a smooth dependence on the
random parameters and are therefore amenable to sparse grid polynomial
approximations. We showcase the capabilities of our numerical methodologies
through two synthetic test cases. In particular, we show that our methodology
reproduces with high accuracy multi-modal probability density functions
displayed by target state variables (e.g., porosity).
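The core trick described above, aligning parameter-dependent discontinuities via a change of coordinates, can be shown on a toy example. The construction below is ours, not the paper's basin model: a state variable jumps at a random interface location x = m(y) that depends smoothly on a parameter y, and in the shifted coordinate u = x - m(y) the jump sits at u = 0 for every parameter value, which is what makes sparse-grid surrogates effective again.

```python
import numpy as np

def interface(y):
    """Hypothetical interface depth: smooth in the random parameter y."""
    return 0.5 + 0.2 * y

def target(x, y):
    """Discontinuous state variable: jumps across the material interface."""
    return np.where(x < interface(y), 1.0, 2.0)

y_vals = np.linspace(-1, 1, 5)           # samples of the random parameter
x = 0.55                                 # fixed physical depth

# Aligned coordinate: the discontinuity now sits at u = 0 for every y,
# so the target is a fixed step function of u, independent of the parameter.
u = x - interface(y_vals)
aligned = np.where(u < 0, 1.0, 2.0)
```

Because the interface location depends smoothly on the parameters, it is itself well approximated by sparse-grid polynomials, which is exactly the property the abstract exploits.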
Estimation of extended mixed models using latent classes and latent processes: the R package lcmm
The R package lcmm provides a series of functions to estimate statistical
models based on linear mixed model theory. It includes the estimation of mixed
models and latent class mixed models for Gaussian longitudinal outcomes (hlme),
curvilinear and ordinal univariate longitudinal outcomes (lcmm) and curvilinear
multivariate outcomes (multlcmm), as well as joint latent class mixed models
(Jointlcmm) for a (Gaussian or curvilinear) longitudinal outcome and a
time-to-event that can possibly be left-truncated and right-censored and defined
in a competing risks setting. Maximum likelihood estimators are obtained using a
modified
Marquardt algorithm with strict convergence criteria based on the parameters
and likelihood stability, and on the negativity of the second derivatives. The
package also provides various post-fit functions including goodness-of-fit
analyses, classification, plots, predicted trajectories, individual dynamic
prediction of the event and predictive accuracy assessment. This paper
constitutes a companion paper to the package by introducing each family of
models, the estimation technique, some implementation details and giving
examples through a dataset on cognitive aging.
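The lcmm package itself is in R; as a hedged Python analogue of its simplest building block, the sketch below fits a plain linear mixed model with a random intercept (the kind of model that hlme extends with latent classes) using statsmodels on synthetic longitudinal data. All variable names and simulated effects are our own.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic longitudinal data: 50 subjects, 6 repeated measurements each,
# with a subject-specific random intercept and a common time slope of -0.5.
rng = np.random.default_rng(2)
n_subj, n_obs = 50, 6
subj = np.repeat(np.arange(n_subj), n_obs)
time = np.tile(np.arange(n_obs), n_subj)
b = rng.normal(0, 1.0, n_subj)                      # random intercepts
y = 10.0 - 0.5 * time + b[subj] + rng.normal(0, 0.5, subj.shape)
df = pd.DataFrame({"y": y, "time": time, "subj": subj})

# Random-intercept linear mixed model: y ~ fixed time effect + (1 | subj).
fit = smf.mixedlm("y ~ time", df, groups=df["subj"]).fit()
slope = float(fit.params["time"])                   # should be near -0.5
```

Latent class mixed models go further by letting the fixed and random effects differ across unobserved subgroups, which is what lcmm's hlme, lcmm, multlcmm and Jointlcmm estimators add on top of this basic structure.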
Time-varying signal processing using multi-wavelet basis functions and a modified block least mean square algorithm
This paper introduces a novel parametric modeling and identification method for linear time-varying systems using a modified block least mean square (LMS) approach, where the time-varying parameters are approximated using multi-wavelet basis functions. This approach can be used to track rapidly or even sharply varying processes and is well suited to recursive estimation of process parameters, combining wavelet approximation theory with a modified block LMS algorithm. Numerical examples are provided to show the effectiveness of the proposed method for dealing with severely nonstationary processes.
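The key idea above, expanding a time-varying coefficient over basis functions so that a time-varying identification problem becomes a time-invariant one in the expansion weights, can be sketched as follows. For simplicity we substitute a plain polynomial basis for the paper's multi-wavelets and a batch least-squares fit for its recursive block-LMS update; the signal and all parameter values are invented.

```python
import numpy as np

# Synthetic time-varying system: y(t) = a(t) * x(t) + noise, with a
# smoothly varying coefficient a(t).
rng = np.random.default_rng(3)
T = 2000
t = np.linspace(0, 1, T)
a_true = 1.0 + 0.5 * np.sin(2 * np.pi * t)
x = rng.normal(0, 1, T)
y = a_true * x + rng.normal(0, 0.05, T)

# Expand a(t) over 6 polynomial basis functions of time: a(t) ~ basis @ w.
# Multiplying each basis function by the input x turns the time-varying
# problem into ordinary regression in the constant weight vector w.
basis = np.vander(t, 6, increasing=True)     # (T, 6) basis evaluations
Phi = basis * x[:, None]                     # expanded regressors
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # batch estimate of the weights
a_hat = basis @ w                            # reconstructed a(t)
```

The paper's contribution is to estimate w recursively with a modified block LMS algorithm instead of this batch solve, which is what enables tracking of rapid or sharp parameter changes online.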