404 research outputs found

    A Contextual Bandit Approach for Value-oriented Prediction Interval Forecasting

    Prediction intervals (PIs) are an effective tool for quantifying uncertainty and usually serve as inputs to downstream robust optimization. Traditional approaches focus on improving the quality of PIs in terms of statistical scores and assume that higher quality will translate into higher value in power systems operation. However, this assumption does not always hold in practice. In this paper, we propose a value-oriented PI forecasting approach that aims at reducing operational costs in downstream operations. This requires issuing PIs under the guidance of operational costs in robust optimization, which we address within a contextual bandit framework. Concretely, an agent selects the optimal quantile proportion, while the environment reveals the operational costs as rewards to the agent. The agent can thereby learn a quantile-proportion selection policy that minimizes the operational cost. A numerical study on the two-timescale operation of a virtual power plant verifies the superiority of the proposed approach in terms of operational value, which is especially evident under extensive penetration of wind power.
    Comment: submitted to IEEE Transactions on Smart Grid
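
    The agent/environment loop described above can be sketched as a simple epsilon-greedy bandit: the arm is a candidate quantile proportion, and the reward is the negative operational cost revealed by the environment. The candidate arms, the toy cost model, and all hyperparameters below are illustrative assumptions, not taken from the paper.

```python
import random

ARMS = [0.01, 0.03, 0.05, 0.08]  # candidate quantile proportions (assumed)

def operational_cost(gamma, rng):
    # toy stand-in for the cost revealed by the downstream robust
    # optimization; 0.05 is, by construction, the cheapest proportion here
    return 100.0 * (gamma - 0.05) ** 2 + 0.01 * rng.random()

def run_bandit(rounds=2000, eps=0.1, seed=0):
    rng = random.Random(seed)
    counts = {a: 0 for a in ARMS}
    values = {a: 0.0 for a in ARMS}  # running mean reward per arm
    for _ in range(rounds):
        if rng.random() < eps:
            arm = rng.choice(ARMS)                    # explore
        else:
            arm = max(ARMS, key=lambda a: values[a])  # exploit
        reward = -operational_cost(arm, rng)          # cost enters as negative reward
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return max(ARMS, key=lambda a: values[a])

best_proportion = run_bandit()
```

    With the cost gap between arms far larger than the reward noise, the agent reliably converges to the cheapest proportion; in the paper's setting the cost would instead come from solving the robust operation problem.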

    Easy Learning from Label Proportions

    We consider the problem of Learning from Label Proportions (LLP), a weakly supervised classification setup in which instances are grouped into "bags" and only the frequency of class labels in each bag is available. Nevertheless, the learner's objective is to achieve low task loss at the individual-instance level. Here we propose EasyLLP, a flexible and simple-to-implement debiasing approach based on aggregate labels that operates on arbitrary loss functions. Our technique allows us to accurately estimate the expected loss of an arbitrary model at the individual level. We showcase the flexibility of our approach by applying it to popular learning frameworks such as Empirical Risk Minimization (ERM) and Stochastic Gradient Descent (SGD), with provable guarantees on instance-level performance. More concretely, we exhibit a variance reduction technique that makes the quality of LLP learning deteriorate only by a factor of k (k being the bag size) in both the ERM and SGD setups, as compared to full supervision. Finally, we validate our theoretical results on multiple datasets, demonstrating that our algorithm performs as well as or better than previous LLP approaches in spite of its simplicity.
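
    A minimal illustration of why bag proportions can substitute for instance labels (this is not the EasyLLP estimator itself, which handles arbitrary losses and adds variance reduction): for a loss that is linear in the label for a fixed prediction, the average loss computed with the bag's label proportion equals the expected average loss under a uniformly random assignment of the bag's labels to its instances. All numbers below are made up for the demonstration.

```python
import math
from itertools import permutations

def loss(y, p):
    # cross-entropy; linear in y for a fixed prediction p
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

labels = [1, 0, 1, 1]            # hidden instance labels of one bag
preds = [0.8, 0.3, 0.6, 0.9]     # model predictions for the bag's instances
alpha = sum(labels) / len(labels)  # the only supervision LLP reveals

# proportion-based proxy: every instance gets the bag proportion as its label
prox_avg = sum(loss(alpha, p) for p in preds) / len(preds)

# exact expectation over uniformly random label-to-instance assignments
perm_losses = [sum(loss(y, p) for y, p in zip(perm, preds)) / len(preds)
               for perm in permutations(labels)]
perm_avg = sum(perm_losses) / len(perm_losses)
# perm_avg and prox_avg agree up to floating-point error
```

    The agreement follows from linearity: each position sees label 1 with frequency alpha across assignments, so the expected loss per instance is alpha * loss(1, p) + (1 - alpha) * loss(0, p) = loss(alpha, p).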

    Testing and Learning on Distributions with Symmetric Noise Invariance

    Kernel embeddings of distributions and the Maximum Mean Discrepancy (MMD), the resulting distance between distributions, are useful tools for fully nonparametric two-sample testing and learning on distributions. However, it is rare that all possible differences between samples are of interest -- discovered differences can be due to different types of measurement noise, data collection artefacts, or other irrelevant sources of variability. We propose distances between distributions that encode invariance to additive symmetric noise, aimed at testing whether the assumed true underlying processes differ. Moreover, we construct invariant features of distributions, leading to learning algorithms robust to the impairment of the input distributions with symmetric additive noise.
    Comment: 22 pages
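
    One fact that makes such invariances possible, sketched here for Gaussians where characteristic functions (CFs) are known in closed form: the CF of centred symmetric noise is real-valued, so convolving a distribution with such noise rescales the magnitude of its CF but leaves the CF's phase untouched. The specific distributions below are an illustrative choice, not the paper's construction.

```python
import cmath

def gaussian_cf(t, mu, var):
    # characteristic function of N(mu, var) evaluated at frequency t
    return cmath.exp(1j * mu * t - 0.5 * var * t * t)

t = 0.7
clean = gaussian_cf(t, mu=2.0, var=1.0)

# adding independent N(0, 3) noise multiplies the CF by the noise CF,
# which is real and positive for symmetric centred noise
noisy = clean * gaussian_cf(t, mu=0.0, var=3.0)

# the phase (here mu * t = 1.4) is unchanged; only the magnitude shrinks
```

    Phase-based features of the empirical CF therefore stay (approximately) invariant when samples are corrupted by additive symmetric noise.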

    Regression with Sensor Data Containing Incomplete Observations

    This paper addresses a regression problem in which the output label values are the results of sensing the magnitude of a phenomenon. A low label value can mean either that the actual magnitude of the phenomenon was low or that the sensor made an incomplete observation. This biases the labels, and hence the learned model, toward lower values, because a label may be low due to an incomplete observation even when the actual magnitude of the phenomenon was high. Moreover, because an incomplete observation carries no tag indicating its incompleteness, we cannot eliminate or impute such observations. To address this issue, we propose a learning algorithm that explicitly models incomplete observations as corruption by asymmetric noise that always takes a negative value. We show that our algorithm is unbiased, as if it were trained on uncorrupted data free of incomplete observations. We demonstrate the advantages of our algorithm through numerical experiments.
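
    A toy simulation of the bias the abstract describes (the corruption rate and scaling are illustrative assumptions, not the paper's model): an incomplete observation silently records only part of the true magnitude, so naive estimates computed from the labels are pulled downward.

```python
import random

rng = random.Random(42)
true_vals = [rng.uniform(5.0, 10.0) for _ in range(10_000)]

# with probability 0.3 the sensor makes an incomplete observation and
# records only a fraction of the true value, with no tag marking it
observed = [y if rng.random() > 0.3 else y * rng.uniform(0.0, 0.5)
            for y in true_vals]

true_mean = sum(true_vals) / len(true_vals)
obs_mean = sum(observed) / len(observed)
# the always-negative corruption drags obs_mean below true_mean
```

    A regressor fit directly on such labels inherits this downward bias, which is what motivates explicitly modelling the asymmetric, always-negative noise.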

    The effects of achievement goals and feedback on performance: with a prologue on an individual search for meaning

    What does it mean to be an individual? The development of an individual is not something that occurs instantaneously. Instead, over time we use our experiences and life stories to help us define and elaborate on our identities. Goals entice people toward action. Actions are given meaning, direction, and purpose by the goals we seek. Every goal is a desired outcome situated in the future. By examining goals, we better understand a person's needs and their motivation for their behavior. Our needs, and thus our goals, vary based on the situations we find ourselves in throughout our lives. By identifying our traits and knowing our goals, both past and present, we are able to weave together our storied self. Our storied self is our narrative identity. It is what separates us from every single other person who is currently living, has lived, or ever will live. Along each stage of life development, we add more experiences to our life stories in the hope that by the end of our lives we will have written a story that is personally meaningful and memorable. Thus, being an individual means writing a meaningful story, having the potential to live life with purpose, and, in the author's case, living to seek the magis.

    Development of maths capabilities and confidence in primary school


    Classifier Calibration: A survey on how to assess and improve predicted class probabilities

    This paper provides both an introduction to and a detailed overview of the principles and practice of classifier calibration. A well-calibrated classifier correctly quantifies the level of uncertainty or confidence associated with its instance-wise predictions. This is essential for critical applications, optimal decision making, cost-sensitive classification, and some types of context change. Calibration research has a rich history that predates the birth of machine learning as an academic field by decades. However, a recent increase in interest in calibration has led to new methods and to the extension from the binary to the multiclass setting. The space of options and issues to consider is large, and navigating it requires the right set of concepts and tools. We provide both introductory material and up-to-date technical details of the main concepts and methods, including proper scoring rules and other evaluation metrics, visualisation approaches, a comprehensive account of post-hoc calibration methods for binary and multiclass classification, and several advanced topics.
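
    One of the classic post-hoc methods such surveys cover is histogram binning: partition the score range into bins on a held-out set and replace each score by the empirical positive rate of its bin. This is a minimal sketch with illustrative data and bin edges, not an implementation from the paper.

```python
def fit_histogram_binning(scores, labels, n_bins=5):
    # collect held-out labels into equal-width score bins
    bins = [[] for _ in range(n_bins)]
    for s, y in zip(scores, labels):
        idx = min(int(s * n_bins), n_bins - 1)
        bins[idx].append(y)
    # calibrated probability per bin = observed fraction of positives;
    # empty bins fall back to the bin midpoint
    return [sum(b) / len(b) if b else (i + 0.5) / n_bins
            for i, b in enumerate(bins)]

def calibrate(score, bin_probs):
    n_bins = len(bin_probs)
    return bin_probs[min(int(score * n_bins), n_bins - 1)]

# toy held-out scores and binary labels
scores = [0.1, 0.15, 0.3, 0.35, 0.5, 0.55, 0.7, 0.75, 0.9, 0.95]
labels = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]
probs = fit_histogram_binning(scores, labels)
```

    At prediction time, `calibrate` maps a raw score to its bin's empirical frequency; the survey discusses this alongside parametric alternatives such as Platt scaling and multiclass extensions.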