3,138 research outputs found
Uncertainty Quantification in Neural-Network Based Pain Intensity Estimation
Improper pain management can lead to severe physical or mental consequences,
including suffering and an increased risk of opioid dependency. Assessing the
presence and severity of pain is imperative to prevent such outcomes and
determine the appropriate intervention. However, the evaluation of pain
intensity is challenging because different individuals experience pain
differently. To overcome this, researchers have employed machine learning
models to evaluate pain intensity objectively. However, these efforts have
primarily focused on point estimation of pain, disregarding the inherent
uncertainty and variability present in the data and model. Consequently, the
point estimates provide only partial information for clinical decision-making.
This study presents a neural network-based method for objective pain interval
estimation, incorporating uncertainty quantification. This work explores three
algorithms: the bootstrap method, lower and upper bound estimation (LossL)
optimized by a genetic algorithm, and modified lower and upper bound estimation
(LossS) optimized by gradient descent. Our empirical results reveal
that LossS outperforms the other two by providing narrower prediction
intervals. Given this result, we assessed its performance in three different
scenarios for pain assessment: (1) a generalized approach (single model for the
entire population), (2) a personalized approach (separate model for each
individual), and (3) a hybrid approach (separate model for each cluster of
individuals). Our findings demonstrate the hybrid approach's superior
performance, with notable practicality in clinical contexts. It has the
potential to be a valuable tool for clinicians, enabling objective pain
intensity assessment while taking uncertainty into account. This capability is
crucial in facilitating effective pain management and reducing the risks
associated with improper treatment.
Comment: 26 pages, 5 figures, 9 tables
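The abstract does not give the exact form of LossS, but the idea of a gradient-friendly interval loss can be sketched with a generic coverage-width criterion: a smoothed coverage indicator keeps the objective differentiable, and a penalty activates only when coverage falls below the nominal level. All constants here (smoothing factor, target coverage, penalty weight) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def soft_coverage(y, lo, hi, s=50.0):
    # sigmoid-smoothed indicator that y falls inside [lo, hi]; the smoothing
    # keeps the loss differentiable so it can be minimized by gradient descent
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    return sig(s * (y - lo)) * sig(s * (hi - y))

def interval_loss(y, lo, hi, target=0.95, lam=10.0):
    picp = soft_coverage(y, lo, hi).mean()            # soft coverage probability
    width = np.mean(hi - lo) / (y.max() - y.min())    # normalized average width
    # penalize only a coverage shortfall below the nominal level
    return width + lam * max(0.0, target - picp) ** 2

y = np.array([1.0, 2.0, 3.0, 4.0])
tight = interval_loss(y, y - 0.5, y + 0.5)   # tight intervals, full coverage
wide = interval_loss(y, y - 5.0, y + 5.0)    # same coverage, 10x wider
```

Because both candidate interval sets cover every target, the loss reduces to the normalized width, and the tighter intervals score lower.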
A Physics-Informed, Deep Double Reservoir Network for Forecasting Boundary Layer Velocity
When a fluid flows over a solid surface, it creates a thin boundary layer
where the flow velocity is influenced by the surface through viscosity, and can
transition from laminar to turbulent at sufficiently high speeds. Understanding
and forecasting the wind dynamics under these conditions is one of the most
challenging scientific problems in fluid dynamics. It is therefore of high
interest to formulate models able to capture the nonlinear spatio-temporal
velocity structure as well as produce forecasts in a computationally efficient
manner. Traditional statistical approaches are limited in their ability to
produce timely forecasts of complex, nonlinear spatio-temporal structures which
are at the same time able to incorporate the underlying flow physics. In this
work, we propose a model to accurately forecast boundary layer velocities with
a deep double reservoir computing network which is capable of capturing the
complex, nonlinear dynamics of the boundary layer while at the same time
incorporating physical constraints via a penalty obtained by a Partial
Differential Equation (PDE). Simulation studies on a one-dimensional viscous
fluid demonstrate how the proposed model is able to produce accurate forecasts
while simultaneously accounting for energy loss. The application focuses on
boundary layer data on a wind tunnel with a PDE penalty derived from an
appropriate simplification of the Navier-Stokes equations, showing forecasts
more compliant with mass conservation.
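As a rough illustration of the ingredients described above, the sketch below runs a single-reservoir echo state network on a toy one-dimensional viscous flow (a decaying heat-equation solution standing in for boundary layer velocity) and computes the finite-difference PDE residual that a physics-informed penalty would add to the training loss. The paper's double-reservoir architecture and Navier-Stokes-derived penalty are not reproduced; every constant here is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy 1-D viscous data: a decaying solution of the heat equation u_t = nu * u_xx
nu, nx, nt, dt = 0.1, 32, 100, 0.01
dx = 1.0 / nx
x = np.linspace(0.0, 1.0, nx, endpoint=False)
t = np.arange(nt) * dt
U = np.exp(-nu * (2 * np.pi) ** 2 * t)[:, None] * np.sin(2 * np.pi * x)[None, :]

# minimal echo state network (a single reservoir; the paper stacks two)
n_res = 300
Win = rng.uniform(-0.5, 0.5, (n_res, nx))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # enforce echo state property

def run_reservoir(inputs):
    h, states = np.zeros(n_res), []
    for u in inputs:
        h = np.tanh(Win @ u + W @ h)
        states.append(h.copy())
    return np.array(states)

S = run_reservoir(U[:-1])                          # states paired with next snapshots
Wout = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ U[1:]).T

def pde_residual(u_next, u_now):
    # finite-difference residual of u_t - nu * u_xx; a physics-informed penalty
    # adds this quantity to the training loss of the readout
    u_t = (u_next - u_now) / dt
    u_xx = (np.roll(u_now, -1) + np.roll(u_now, 1) - 2.0 * u_now) / dx**2
    return np.mean((u_t - nu * u_xx) ** 2)

pred = S @ Wout.T                                  # in-sample one-step forecasts
```

Forecasts that respect the dynamics yield a small residual, while a naive persistence forecast (predicting no change) does not, which is exactly the signal such a penalty exploits.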
The Challenge of Machine Learning in Space Weather Nowcasting and Forecasting
The numerous recent breakthroughs in machine learning (ML) make it imperative to
carefully ponder how the scientific community can benefit from a technology
that, although not necessarily new, is today living its golden age. This Grand
Challenge review paper is focused on the present and future role of machine
learning in space weather. The purpose is twofold. On one hand, we will discuss
previous works that use ML for space weather forecasting, focusing in
particular on the few areas that have seen most activity: the forecasting of
geomagnetic indices, of relativistic electrons at geosynchronous orbits, of
solar flare occurrence, of coronal mass ejection propagation time, and of
solar wind speed. On the other hand, this paper serves as a gentle introduction
to the field of machine learning tailored to the space weather community and as
a pointer to a number of open challenges that we believe the community should
undertake in the next decade. The recurring themes throughout the review are
the need to shift our forecasting paradigm to a probabilistic approach focused
on the reliable assessment of uncertainties, and the combination of
physics-based and machine learning approaches, known as gray-box.
Comment: under review
Probabilistic load forecasting with Reservoir Computing
Some applications of deep learning require not only to provide accurate
results but also to quantify the amount of confidence in their prediction. The
management of an electric power grid is one of these cases: to avoid risky
scenarios, decision-makers need both precise and reliable forecasts of, for
example, power loads. For this reason, point forecasts are not enough; it is
necessary to adopt methods that also provide uncertainty quantification.
This work focuses on reservoir computing (RC) as the core time series
forecasting method, due to its computational efficiency and effectiveness in
predicting time series. While the RC literature has mostly focused on point
forecasting, this
work explores the compatibility of some popular uncertainty quantification
methods with the reservoir setting. Both Bayesian and deterministic approaches
to uncertainty assessment are evaluated and compared in terms of their
prediction accuracy, computational resource efficiency and reliability of the
estimated uncertainty, based on a set of carefully chosen performance metrics.
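The abstract does not name the specific uncertainty quantification methods evaluated. As one deterministic possibility, the sketch below pairs an echo state network readout with split-conformal-style intervals: residual quantiles from a calibration fold are added around the point forecast, and empirical coverage is checked on held-out data. The synthetic load series and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic hourly load: a daily cycle plus noise
T = 900
load = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(T) / 24) + 0.1 * rng.standard_normal(T)

# small echo state network encoding the recent input history
n_res = 100
Win = rng.uniform(-0.5, 0.5, n_res)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # echo state property

h, states = np.zeros(n_res), []
for u in load[:-1]:
    h = np.tanh(Win * u + W @ h)
    states.append(h.copy())
X, y = np.array(states), load[1:]

# ridge readout fitted on the first third of the data
n = len(y) // 3
A = X[:n].T @ X[:n] + 1e-4 * np.eye(n_res)
w = np.linalg.solve(A, X[:n].T @ y[:n])

# deterministic (split-conformal style) intervals: the 90% quantile of
# absolute calibration residuals is added around the point forecast
cal_res = np.abs(y[n:2 * n] - X[n:2 * n] @ w)
q = np.quantile(cal_res, 0.9)
pred = X[2 * n:] @ w
lo, hi = pred - q, pred + q
coverage = np.mean((y[2 * n:] >= lo) & (y[2 * n:] <= hi))
```

Splitting fit and calibration folds is what makes the residual quantile an honest width estimate; the held-out coverage should then sit near the nominal 90%.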
Wind Energy: Forecasting Challenges for its Operational Management
Renewable energy sources, especially wind energy, are to play a larger role
in providing electricity to industrial and domestic consumers. This is already
the case today for a number of European countries, closely followed by the US
and high growth countries, for example, Brazil, India and China. There exist a
number of technological, environmental and political challenges linked to
supplementing existing electricity generation capacities with wind energy.
Here, mathematicians and statisticians could make a substantial contribution at
the interface of meteorology and decision-making, in connection with the
generation of forecasts tailored to the various operational decision problems
involved. Indeed, while wind energy may be seen as an environmentally friendly
source of energy, full benefits from its usage can only be obtained if one is
able to accommodate its variability and limited predictability. Based on a
short presentation of its physical basics, the importance of considering wind
power generation as a stochastic process is motivated. After describing
representative operational decision-making problems for both market
participants and system operators, it is underlined that forecasts should be
issued in a probabilistic framework, even though, eventually, the forecaster
may only communicate single-valued predictions. The existing approaches to wind
power forecasting are subsequently described, with focus on single-valued
predictions, predictive marginal densities and space-time trajectories.
Upcoming challenges related to generating improved and new types of forecasts,
as well as their verification and value to forecast users, are finally
discussed.
Comment: Published at http://dx.doi.org/10.1214/13-STS445 in Statistical
Science (http://www.imstat.org/sts/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
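One concrete link between probabilistic forecasts and the single-valued predictions mentioned above: under an asymmetric (pinball) loss, the loss-optimal point forecast to communicate is a quantile of the predictive density. The Gaussian density and cost asymmetry below are hypothetical numbers chosen for illustration, not figures from the review.

```python
import math

# hypothetical Gaussian predictive density for wind power: mean 12 MW, sd 3 MW
mu, sigma = 12.0, 3.0

def gaussian_quantile(p, mu, sigma):
    # inverse CDF via bisection on the error function (standard library only)
    lo, hi = mu - 10 * sigma, mu + 10 * sigma
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        cdf = 0.5 * (1.0 + math.erf((mid - mu) / (sigma * math.sqrt(2.0))))
        if cdf < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# under pinball loss with asymmetry tau (under-forecasting penalized with weight
# tau, over-forecasting with 1 - tau), the optimal point forecast is the
# tau-quantile of the predictive density
tau = 0.8          # e.g. shortfalls cost four times as much as surpluses
point = gaussian_quantile(tau, mu, sigma)
```

With symmetric costs (tau = 0.5) this reduces to the median, which is why a single communicated value still depends on the full predictive density.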
Condition-based maintenance of wind turbine blades
The blades of offshore wind turbines (OWTs) are susceptible to a wide variety of damage
sources. Internal damage is caused primarily by structural deterioration, whereas external
damage results from the harsh marine environment. We examine condition-based
maintenance (CBM) for a multiblade OWT system that is exposed to environmental shocks in this
work. In recent years, there has been a significant rise in the number of offshore wind
turbines that make use of CBM. The gearbox, generator, and drive train each have dedicated
vibration-based monitoring systems, which form the core of most existing strategies. For the
blades, drive train, tower, and foundation, a cost analysis of the various widely available
CBM systems and their individual prices has been carried out. The purpose of this article is to investigate the potential
benefits that may result from using these supplementary systems in the maintenance strategy.
Along with providing a theoretical foundation, this article reviews the previous research that has
been conducted on CBM of OWT blades. Utilizing the data collected from condition monitoring,
an artificial neural network is employed to predict the remaining useful life. For the
purpose of assessing and forecasting the cost and efficacy of CBM, a simple tool that is based on
artificial neural networks (ANN) has been developed. A well-established CBM technique based
on condition monitoring data is used to reduce maintenance costs. This is accomplished by
reducing malfunctions, cutting down on service interruptions, and eliminating unnecessary
maintenance work. In MATLAB, an ANN is used to study both the failure replacement cost and
the preventive maintenance cost. In addition, an optimization technique is applied to obtain
the optimal threshold values. There is a significant opportunity
to save costs by improving how choices are made on maintenance to make the operations more
cost-effective. In this research, a technique for optimizing a CBM program for components
whose deterioration can be characterized by the level of damage sustained is presented. The
strategy can be applied both to inspection-based maintenance and to maintenance based on
online condition monitoring systems.
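The threshold optimization described above can be illustrated with a direct Monte Carlo sketch (rather than the paper's ANN-based tool): damage accumulates as random shocks, replacement is preventive once monitored damage crosses a threshold or corrective on failure, and the long-run cost rate is minimized over candidate thresholds. All costs, the failure level, and the degradation model are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical costs and failure level (illustrative, not from the paper)
C_PREV, C_FAIL, FAIL_LEVEL = 10.0, 50.0, 100.0

def cost_rate(threshold, n_cycles=2000):
    # average long-run cost per unit time when a blade is replaced preventively
    # once monitored damage crosses `threshold`, or correctively on failure
    total_cost = total_time = 0.0
    for _ in range(n_cycles):
        damage, steps = 0.0, 0
        while True:
            damage += rng.gamma(2.0, 1.0)   # random damage increment per period
            steps += 1
            if damage >= FAIL_LEVEL:        # missed: expensive failure replacement
                total_cost += C_FAIL
                break
            if damage >= threshold:         # caught: cheap preventive replacement
                total_cost += C_PREV
                break
        total_time += steps
    return total_cost / total_time

thresholds = np.arange(50.0, 100.0, 5.0)
rates = [cost_rate(th) for th in thresholds]
best = thresholds[int(np.argmin(rates))]
```

Replacing too early wastes remaining life, while replacing too late risks the failure cost, so the cost rate is minimized at an interior threshold; a gradient-free search (here a grid, in the paper an optimization routine) recovers it.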
A Novel Neural Network-based Multi-objective Evolution Lower Upper Bound Estimation Method for Electricity Load Interval Forecast
Currently, an interval prediction model, lower and upper bound estimation (LUBE), which constructs prediction intervals (PIs) using the double outputs of a neural network (NN), is growing in popularity. However, existing LUBE research has two problems. One is that the applied NNs are flawed: a feedforward NN (FNN) cannot map the dynamic relationships in the data, and a recurrent NN (RNN) is computationally expensive. The other is that most LUBE models are built under a single-objective frame in which the uncertainty cannot be fully quantified. In this article, a novel wavelet NN (WNN) with direct input–output links (DLWNN) is proposed to obtain PIs in a multiobjective LUBE frame. Different from a WNN, the proposed DLWNN adds direct links from the input layer to the output layer, which make full use of the information in time series data. Besides, a niched differential evolution nondominated fast sort genetic algorithm (NDENSGA) is proposed to optimize the prediction model, so as to achieve a balance between estimation accuracy and the average width of the PIs. NDENSGA modifies the traditional population renewal mechanism to increase population diversity and adopts a new elite selection strategy to obtain more extensive and uniform solutions. The effectiveness of DLWNN and NDENSGA is evaluated through a series of experiments with real electricity load data sets. The results show that the proposed model performs better than others in terms of convergence and diversity of the obtained nondominated solutions.
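The two objectives balanced by a multiobjective LUBE frame are commonly measured by PI coverage probability (PICP) and PI normalized average width (PINAW). The sketch below computes both and filters a set of candidate intervals down to the nondominated (Pareto) ones; the candidate intervals and noise model are illustrative, and this is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

def picp(y, lo, hi):
    # prediction interval coverage probability
    return float(np.mean((y >= lo) & (y <= hi)))

def pinaw(y, lo, hi):
    # prediction interval normalized average width
    return float(np.mean(hi - lo) / (y.max() - y.min()))

def nondominated(objs):
    # indices of Pareto-optimal (coverage gap, width) pairs, both minimized
    keep = []
    for i, a in enumerate(objs):
        if not any(b[0] <= a[0] and b[1] <= a[1] and b != a for b in objs):
            keep.append(i)
    return keep

center = np.linspace(0.0, 1.0, 200)
y = center + 0.1 * rng.standard_normal(200)     # noisy targets around the center

candidates = [
    (center - 0.05, center + 0.05),   # narrow: small width, low coverage
    (center - 0.30, center + 0.30),   # wide: high coverage, large width
    (center - 0.10, center + 0.50),   # biased: same width, worse coverage
]
objs = [(1.0 - picp(y, lo, hi), pinaw(y, lo, hi)) for lo, hi in candidates]
pareto = nondominated(objs)
```

The narrow and wide candidates trade coverage against width, so both survive as nondominated solutions, while the biased candidate spends the same width for less coverage and is eliminated; a multiobjective optimizer searches for exactly such a front.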
Parity Calibration
In a sequential regression setting, a decision-maker may be primarily
concerned with whether the future observation will increase or decrease
compared to the current one, rather than the actual value of the future
observation. In this context, we introduce the notion of parity calibration,
which captures the goal of calibrated forecasting for the increase-decrease (or
"parity") event in a timeseries. Parity probabilities can be extracted from a
forecasted distribution for the output, but we show that such a strategy leads
to theoretical unpredictability and poor practical performance. We then observe
that although the original task was regression, parity calibration can be
expressed as binary calibration. Drawing on this connection, we use an online
binary calibration method to achieve parity calibration. We demonstrate the
effectiveness of our approach on real-world case studies in epidemiology,
weather forecasting, and model-based control in nuclear fusion.
Comment: To appear at UAI 2023; 19 pages and 10 figures
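A minimal sketch of the two steps described above, assuming a Gaussian forecast for extracting the parity probability and a simple online histogram-binning recalibrator (a stand-in, since the abstract does not specify the online binary calibration method used):

```python
import math
import random
from collections import defaultdict

def parity_prob(mu, sigma, current):
    # P(next > current) under a Gaussian forecast N(mu, sigma^2) for the next value
    return 0.5 * (1.0 - math.erf((current - mu) / (sigma * math.sqrt(2.0))))

class OnlineBinningCalibrator:
    # online histogram-binning recalibration of binary probabilities: each bin
    # tracks the empirical event frequency of the forecasts that fell into it
    def __init__(self, n_bins=10):
        self.n_bins = n_bins
        self.counts = defaultdict(lambda: [0, 0])   # bin -> [events, trials]

    def _bin(self, p):
        return min(int(p * self.n_bins), self.n_bins - 1)

    def calibrate(self, p):
        events, trials = self.counts[self._bin(p)]
        return events / trials if trials else p     # fall back to the raw forecast

    def update(self, p, outcome):
        b = self._bin(p)
        self.counts[b][0] += int(outcome)
        self.counts[b][1] += 1

# an overconfident forecaster keeps reporting 0.9 for a 50/50 parity event;
# after enough feedback, the calibrated probability approaches 0.5
random.seed(0)
cal = OnlineBinningCalibrator()
for _ in range(2000):
    cal.update(0.9, random.random() < 0.5)
```

This illustrates the reduction the paper exploits: once the parity event is treated as a binary outcome, any online binary calibration scheme can correct the raw probabilities extracted from the regression forecast.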