Masked Multi-Step Probabilistic Forecasting for Short-to-Mid-Term Electricity Demand
Predicting electricity demand with uncertainty estimates helps in planning and operating the grid to provide a reliable supply of power to consumers.
Machine learning (ML)-based demand forecasting approaches can be categorized
into (1) sample-based approaches, where each forecast is made independently,
and (2) time series regression approaches, where some historical load and other
feature information is used. When making a short-to-mid-term electricity demand
forecast, some future information is available, such as the weather forecast
and calendar variables. However, existing forecasting models do not fully incorporate this future information. To overcome this limitation, we propose Masked Multi-Step Multivariate Probabilistic Forecasting (MMMPF), a novel and general framework for training any neural network model capable of generating a sequence of outputs. The framework combines both temporal information from the past and known information about the future to make
probabilistic predictions. Experiments are performed on a real-world dataset
for short-to-mid-term electricity demand forecasting across multiple regions, and the results are compared with those of various ML methods. They show that the proposed MMMPF framework
outperforms not only sample-based methods but also existing time-series
forecasting models with the exact same base models. Models trained with MMMPF can also generate desired quantiles to capture uncertainty and enable probabilistic planning for the grid of the future.
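The masking idea can be sketched in a few lines of PyTorch. This is an illustrative reading of the framework, not the authors' code: the GRU backbone, layer sizes, and all variable names are assumptions. The essential point is that future target values are zeroed out and flagged by a mask, while known future covariates (weather forecast, calendar) stay visible, and a pinball loss trains quantile outputs on the masked steps.

```python
# A minimal sketch of masked multi-step quantile forecasting (assumptions
# throughout; not the paper's implementation).
import torch
import torch.nn as nn

QUANTILES = [0.1, 0.5, 0.9]

class MaskedQuantileForecaster(nn.Module):
    def __init__(self, n_covariates, hidden=64):
        super().__init__()
        # Input per step: load value, a 0/1 mask flag, and known covariates.
        self.rnn = nn.GRU(1 + 1 + n_covariates, hidden, batch_first=True)
        self.head = nn.Linear(hidden, len(QUANTILES))

    def forward(self, load, mask, covariates):
        # load: (B, T, 1); mask: (B, T, 1), 1 where the load is unknown (future).
        x = torch.cat([load * (1 - mask), mask, covariates], dim=-1)
        h, _ = self.rnn(x)
        return self.head(h)  # (B, T, n_quantiles)

def pinball_loss(pred, target, mask):
    # Quantile (pinball) loss, averaged over the masked future steps only.
    losses = []
    for i, q in enumerate(QUANTILES):
        err = target - pred[..., i:i + 1]
        losses.append(torch.max(q * err, (q - 1) * err))
    loss = torch.cat(losses, dim=-1)
    return (loss * mask).sum() / (mask.sum() * len(QUANTILES)).clamp(min=1)

# Toy usage: 96 past steps observed, 48 future steps masked.
B, T_past, T_fut, n_cov = 8, 96, 48, 5
load = torch.randn(B, T_past + T_fut, 1)
mask = torch.zeros(B, T_past + T_fut, 1)
mask[:, T_past:] = 1.0  # future demand is unknown at forecast time
cov = torch.randn(B, T_past + T_fut, n_cov)  # weather forecast + calendar

model = MaskedQuantileForecaster(n_cov)
pred = model(load, mask, cov)
loss = pinball_loss(pred, load, mask)
loss.backward()
```

Because the head emits one value per quantile, the same trained model yields both a point forecast (the median) and prediction intervals for probabilistic planning.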
Context-aware Dynamic Data-driven Pattern Classification
This work aims to mathematically formalize the notion of context, with the purpose of allowing contextual decision-making in order to improve performance in dynamic data-driven classification systems. We present definitions for both intrinsic context, i.e. factors which directly affect sensor measurements for a given event, as well as extrinsic context, i.e. factors which do not affect the sensor measurements directly, but do affect the interpretation of collected data. Supervised and unsupervised modeling techniques to derive context and context labels from sensor data are formulated. Here, supervised modeling incorporates the a priori known factors affecting the sensing modalities, while unsupervised modeling autonomously discovers the structure of those factors in sensor data. Context-aware event classification algorithms are developed by adapting the classification boundaries, dependent on the current operational context. Improvements in context-aware classification have been quantified and validated in an unattended sensor-fence application for US border monitoring. Field data, collected with seismic sensors on different ground types, are analyzed in order to classify two types of walking across the border, namely, normal and stealthy. The classification is shown to be strongly dependent on the context (specifically, soil type: gravel or moist soil).
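As a rough illustration of the modeling split this abstract describes, the sketch below (scikit-learn, toy data, hypothetical names throughout) discovers context labels unsupervised with a Gaussian mixture, standing in for soil type, and then fits one classification boundary per context; the paper's actual formulation is more general than this.

```python
# A minimal sketch of context-aware classification: discover context
# unsupervised, then adapt the decision boundary per context.
# Toy features and labels are assumptions for illustration only.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy seismic features: two contexts (e.g., gravel vs. moist soil) shift the data.
X = np.vstack([rng.normal(0, 1, (200, 4)), rng.normal(3, 1, (200, 4))])
y = rng.integers(0, 2, 400)  # walking style: 0 = normal, 1 = stealthy

# Unsupervised context modeling: discover latent context labels from the data.
context_model = GaussianMixture(n_components=2, random_state=0).fit(X)
context = context_model.predict(X)

# Context-aware classification: one boundary per operational context.
classifiers = {c: LogisticRegression().fit(X[context == c], y[context == c])
               for c in np.unique(context)}

def classify(x):
    # First infer the current context, then apply that context's classifier.
    c = context_model.predict(x.reshape(1, -1))[0]
    return classifiers[c].predict(x.reshape(1, -1))[0]

print(classify(X[0]))
```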
Uncertainty-aware Perception Models for Off-road Autonomous Unmanned Ground Vehicles
Off-road autonomous unmanned ground vehicles (UGVs) are being developed for
military and commercial use to deliver crucial supplies in remote locations,
help with mapping and surveillance, and to assist war-fighters in contested
environments. Due to the complexity of off-road environments and variability in terrain, lighting conditions, and diurnal and seasonal changes, the models used to perceive the environment must handle a great deal of input variability. Current datasets used to train perception models for off-road autonomous navigation lack diversity in seasons, locations, semantic classes, and time of day. We test the hypothesis that a model trained on a single dataset may not generalize to other off-road navigation datasets and new locations due to the
input distribution drift. Additionally, we investigate how to combine multiple
datasets to train a semantic segmentation-based environment perception model
and we show that training the model to capture uncertainty could improve the
model performance by a significant margin. We extend the Masksembles approach
for uncertainty quantification to the semantic segmentation task and compare it
with Monte Carlo Dropout and standard baselines. Finally, we test the approach
against data collected from a UGV platform in a new testing environment. We
show that the developed perception model with uncertainty quantification can be
feasibly deployed on a UGV to support online perception and navigation tasks.
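Of the uncertainty techniques named, Monte Carlo Dropout is the simplest to sketch; the following is a minimal illustration of that baseline, not the paper's Masksembles extension. Dropout is kept active at inference, several stochastic forward passes are averaged, and per-pixel predictive entropy serves as the uncertainty map. Network size, class count, and sample count are assumptions.

```python
# A hedged sketch of Monte Carlo Dropout for segmentation uncertainty.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(0.2),                 # kept stochastic at test time
            nn.Conv2d(16, n_classes, 1),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, image, n_samples=20):
    model.train()  # train mode keeps dropout active during inference
    probs = torch.stack([model(image).softmax(dim=1)
                         for _ in range(n_samples)])
    mean = probs.mean(dim=0)  # (B, C, H, W) averaged class probabilities
    # Predictive entropy per pixel as the uncertainty map.
    entropy = -(mean * mean.clamp_min(1e-8).log()).sum(dim=1)
    return mean.argmax(dim=1), entropy

model = TinySegNet()
img = torch.randn(1, 3, 64, 64)
seg, unc = mc_dropout_predict(model, img)
print(seg.shape, unc.shape)  # torch.Size([1, 64, 64]) each
```

High-entropy pixels flag regions where the perception model is unsure, which is exactly the signal a downstream navigation stack can use to slow down or replan.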
Justification-Based Reliability in Machine Learning
With the advent of Deep Learning, the field of machine learning (ML) has
surpassed human-level performance on diverse classification tasks. At the same
time, there is a stark need to characterize and quantify reliability of a
model's prediction on individual samples. This is especially true in
applications of such models in safety-critical domains of industrial control and
healthcare. To address this need, we link the question of reliability of a
model's individual prediction to the epistemic uncertainty of the model's
prediction. More specifically, we extend the theory of Justified True Belief
(JTB) in epistemology, created to study the validity and limits of
human-acquired knowledge, towards characterizing the validity and limits of
knowledge in supervised classifiers. We present an analysis of neural network
classifiers, linking the reliability of their predictions on an input to
characteristics of the support gathered from the input and latent spaces of the
network. We hypothesize that the JTB analysis exposes the epistemic uncertainty
(or ignorance) of a model with respect to its inference, thereby allowing for
the inference to be only as strong as the justification permits. We explore
various forms of support (e.g., k-nearest neighbors (k-NN) and l_p-norm based) generated for an input, using the training data to construct a
justification for the prediction with that input. Through experiments conducted
on simulated and real datasets, we demonstrate that our approach can provide
reliability for individual predictions and characterize regions where such
reliability cannot be ascertained.
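One way to read the support idea concretely: gather the k nearest training points to an input in the model's latent space and accept the prediction only when the neighbors' labels agree with it. The sketch below is an assumption-laden toy (random latent codes, an arbitrary agreement threshold, a stand-in classifier), not the paper's JTB analysis.

```python
# A minimal sketch of k-NN support as justification for a prediction.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
latent_train = rng.normal(size=(500, 8))  # latent codes of training data
labels_train = (latent_train[:, 0] > 0).astype(int)

nn_index = NearestNeighbors(n_neighbors=5).fit(latent_train)

def justified(latent_x, predicted_label, agreement=0.8):
    """True if the k-NN support from training data justifies the prediction."""
    _, idx = nn_index.kneighbors(latent_x.reshape(1, -1))
    support = labels_train[idx[0]]
    return (support == predicted_label).mean() >= agreement

x = rng.normal(size=8)
pred = int(x[0] > 0)  # stand-in for the classifier's prediction
print(justified(x, pred))  # False flags an epistemically uncertain prediction
```

Inputs whose support fails the agreement test fall in the regions where, in the paper's terms, reliability cannot be ascertained, so the inference is only as strong as its justification.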