47 research outputs found

    Probabilistic short term wind power forecasts using deep neural networks with discrete target classes

    Usually, neural networks trained on historical feed-in time series of wind turbines deterministically predict power output over the next hours to days. Here, the training goal is to minimise a scalar cost function, often the root mean square error (RMSE) between network output and target values. Yet, similar to the analog ensemble (AnEn) method, the training algorithm can also be adapted to analyse the uncertainty of the power output from the spread of possible targets found in the historical data for a certain meteorological situation. In this study, the uncertainty estimate is achieved by discretising the continuous time series of power targets into several bins (classes). For each forecast horizon, a neural network then predicts the probability of the power output falling into each of the bins, resulting in an empirical probability distribution. Similar to the AnEn method, the proposed method avoids the use of costly numerical weather prediction (NWP) ensemble runs, although a selection of several deterministic NWP forecasts as input is helpful. Using state-of-the-art deep learning technology, we applied our method to a large region and a single wind farm. MAE scores of the 50th percentile were on par with or better than comparable deterministic forecasts. The corresponding Continuous Ranked Probability Score (CRPS) was even lower. Future work will investigate the overdispersion sometimes observed and extend the method to solar power forecasts.
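    As a rough illustration of the discretised-target idea (not the code or data used in the study), the sketch below bins a synthetic power series into classes, trains an sklearn MLP classifier as a stand-in for the deep network, and derives the 50th percentile and a CRPS value from the resulting empirical distribution; all names, bin counts and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical features (e.g. deterministic NWP forecasts) and normalised power in [0, 1].
X = rng.normal(size=(2000, 8))
power = np.clip(0.5 + 0.3 * X[:, 0] + 0.1 * rng.normal(size=2000), 0.0, 1.0)

# Discretise the continuous power targets into bins (classes).
n_bins = 20
edges = np.linspace(0.0, 1.0, n_bins + 1)
y_class = np.clip(np.digitize(power, edges) - 1, 0, n_bins - 1)

# The paper predicts one distribution per forecast horizon; a single classifier suffices here.
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
clf.fit(X, y_class)

def predictive_cdf(x_new):
    """Empirical CDF at the upper bin edges, built from the predicted class probabilities."""
    proba = np.zeros((len(x_new), n_bins))
    proba[:, clf.classes_] = clf.predict_proba(x_new)
    return np.cumsum(proba, axis=1)

def crps(cdf_row, observation):
    """CRPS of the step-function CDF against a scalar observation (rectangle rule)."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    heaviside = (centers >= observation).astype(float)
    return np.sum((cdf_row - heaviside) ** 2) * width

F = predictive_cdf(X[:5])
median_bin = np.argmax(F >= 0.5, axis=1)      # 50th percentile, used for the MAE comparison
print("median power estimates:", 0.5 * (edges[median_bin] + edges[median_bin + 1]))
print("CRPS of first sample:", crps(F[0], power[0]))
```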

    Lightweight and Secure PUF Key Storage Using Limits of Machine Learning

    13th International Workshop, Nara, Japan, September 28 – October 1, 2011. Proceedings. A lightweight and secure key storage scheme using silicon Physical Unclonable Functions (PUFs) is described. To derive stable PUF bits from chip manufacturing variations, a lightweight error correction code (ECC) encoder/decoder is used. With a register count of 69, this codec core does not use any traditional error correction techniques and is 75% smaller than a previous provably secure implementation, yet achieves robust environmental performance in 65 nm FPGA and 0.13 μm ASIC implementations. The security of the syndrome bits rests on a new argument based on what cannot be learned from a machine learning perspective. The number of Leaked Bits is determined for each Syndrome Word and can be reduced using Syndrome Distribution Shaping. The design is secure from a min-entropy standpoint against a machine-learning-equipped adversary that, given a ceiling of leaked bits, has a classification error bounded by ε. Numerical examples are given using the latest machine learning results.
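    The syndrome-based helper data at the heart of such schemes can be sketched as follows; this is the generic code-offset (syndrome) construction with a toy Hamming(7,4) code, not the paper's lightweight codec, and the 7-bit response length and all names are illustrative assumptions.

```python
import numpy as np

# Parity-check matrix of Hamming(7,4): column i is the binary encoding of i+1,
# so the syndrome of a single-bit error directly names the flipped position.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

def syndrome(bits):
    return (H @ bits) % 2

def enroll(puf_response):
    """Enrolment: publish the syndrome of the raw PUF bits as helper data.
    These are the 'leaked bits' whose information content the paper bounds."""
    return syndrome(puf_response)

def reconstruct(noisy_response, helper):
    """Reconstruction: the XOR of the two syndromes depends only on the error pattern."""
    err_syndrome = (syndrome(noisy_response) + helper) % 2
    position = int(err_syndrome[0]) * 4 + int(err_syndrome[1]) * 2 + int(err_syndrome[2])
    corrected = noisy_response.copy()
    if position:                              # a single bit error at index position-1
        corrected[position - 1] ^= 1
    return corrected

rng = np.random.default_rng(1)
response = rng.integers(0, 2, size=7, dtype=np.uint8)   # chip-unique PUF bits
helper = enroll(response)

noisy = response.copy()
noisy[3] ^= 1                                            # one bit flipped by noise
assert np.array_equal(reconstruct(noisy, helper), response)
print("helper (syndrome) bits:", helper)
```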

    Deep Reinforcement Learning: An Overview

    In recent years, a specific machine learning method called deep learning has gained huge attention, as it has obtained astonishing results in broad applications such as pattern recognition, speech recognition, computer vision, and natural language processing. Recent research has also shown that deep learning techniques can be combined with reinforcement learning methods to learn useful representations for problems with high-dimensional raw data input. This chapter reviews recent advances in deep reinforcement learning with a focus on the most used deep architectures, such as autoencoders, convolutional neural networks and recurrent neural networks, which have been successfully combined with the reinforcement learning framework. Comment: Proceedings of SAI Intelligent Systems Conference (IntelliSys) 2016.
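    As one concrete example of the deep-architecture-plus-reinforcement-learning combination surveyed here, the sketch below shows a DQN-style temporal-difference update with a small PyTorch network on a hypothetical batch of transitions; the dimensions, hyperparameters and random data are assumptions, not the chapter's code.

```python
import torch
import torch.nn as nn

obs_dim, n_actions, gamma = 8, 4, 0.99

q_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())           # periodically synced copy
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# One hypothetical mini-batch of transitions (s, a, r, s', done) from a replay buffer.
s = torch.randn(32, obs_dim)
a = torch.randint(0, n_actions, (32,))
r = torch.randn(32)
s2 = torch.randn(32, obs_dim)
done = torch.zeros(32)

# TD target: r + gamma * max_a' Q_target(s', a'), cut off at terminal states.
with torch.no_grad():
    target = r + gamma * (1 - done) * target_net(s2).max(dim=1).values

q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)      # Q(s, a) for the taken actions
loss = nn.functional.mse_loss(q_sa, target)

optimizer.zero_grad()
loss.backward()
optimizer.step()
print("TD loss:", float(loss))
```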

    Cirrus cloud retrieval with MSG/SEVIRI using artificial neural networks

    Cirrus clouds play an important role in climate as they tend to warm the Earth–atmosphere system. Nevertheless their physical properties remain one of the largest sources of uncertainty in atmospheric research. To better understand the physical processes of cirrus clouds and their climate impact, enhanced satellite observations are necessary. In this paper we present a new algorithm, CiPS (Cirrus Properties from SEVIRI), that detects cirrus clouds and retrieves the corresponding cloud top height, ice optical thickness and ice water path using the SEVIRI imager aboard the geostationary Meteosat Second Generation satellites. CiPS utilises a set of artificial neural networks trained with SEVIRI thermal observations, CALIOP backscatter products, the ECMWF surface temperature and auxiliary data. CiPS detects 71 and 95 % of all cirrus clouds with an optical thickness of 0.1 and 1.0, respectively, that are retrieved by CALIOP. Among the cirrus-free pixels, CiPS classifies 96 % correctly. With respect to CALIOP, the cloud top height retrieved by CiPS has a mean absolute percentage error of 10 % or less for cirrus clouds with a top height greater than 8 km. For the ice optical thickness, CiPS has a mean absolute percentage error of 50 % or less for cirrus clouds with an optical thickness between 0.35 and 1.8 and of 100 % or less for cirrus clouds with an optical thickness down to 0.07 with respect to the optical thickness retrieved by CALIOP. The ice water path retrieved by CiPS shows a similar performance, with mean absolute percentage errors of 100 % or less for cirrus clouds with an ice water path down to 1.7 g m⁻². Since the training reference data from CALIOP only include ice water path and optical thickness for comparably thin clouds, CiPS also retrieves an opacity flag, which tells us whether a retrieved cirrus is likely to be too thick for CiPS to accurately derive the ice water path and optical thickness. By retrieving CALIOP-like cirrus properties with the large spatial coverage and high temporal resolution of SEVIRI during both day and night, CiPS is a powerful tool for analysing the temporal evolution of cirrus clouds including their optical and physical properties. To demonstrate this, the life cycle of a thin cirrus cloud is analysed.
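    A highly simplified sketch of this kind of setup, with separate networks for cirrus detection and property regression scored by the mean absolute percentage error, might look as follows; the data is synthetic and the network sizes are assumptions, whereas the real CiPS networks are trained on SEVIRI brightness temperatures, CALIOP products and ECMWF surface temperature.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10))                  # stand-in for thermal channels + auxiliary data
is_cirrus = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
cloud_top_height = 8.0 + 4.0 * (X[:, 2] - X[:, 2].min()) / np.ptp(X[:, 2])   # toy values in km

# One network detects cirrus, a second one regresses a cloud property.
detector = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
detector.fit(X, is_cirrus)

mask = is_cirrus == 1                            # properties are only defined where cirrus exists
regressor = MLPRegressor(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
regressor.fit(X[mask], cloud_top_height[mask])

def mape(predicted, reference):
    """Mean absolute percentage error, the skill measure quoted in the abstract."""
    return 100.0 * np.mean(np.abs(predicted - reference) / reference)

pred = regressor.predict(X[mask])
print("detection accuracy:", detector.score(X, is_cirrus))
print("cloud top height MAPE [%]:", mape(pred, cloud_top_height[mask]))
```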

    Reinforcement Learning


    SPSA for layer-wise training of deep networks

    Concerned with neural learning without backpropagation, we investigate variants of the simultaneous perturbation stochastic approximation (SPSA) algorithm. Experimental results suggest that these allow for the successful training of deep feed-forward neural networks using forward passes only. In particular, we find that SPSA-based algorithms which update network parameters in a layer-wise manner are superior to variants which update all weights simultaneously.
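    A minimal numpy sketch of layer-wise SPSA on a toy regression task is given below; the gain sequences, network size and data are illustrative assumptions, not the experimental setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 10))
y = np.sin(X[:, :1])                                # toy regression target

# Two weight matrices of a small feed-forward network (biases omitted for brevity).
layers = [rng.normal(scale=0.1, size=(10, 32)), rng.normal(scale=0.1, size=(32, 1))]

def forward(inputs, weights):
    h = inputs
    for i, W in enumerate(weights):
        h = h @ W
        if i < len(weights) - 1:
            h = np.tanh(h)
    return h

def loss(weights):
    return np.mean((forward(X, weights) - y) ** 2)

a0, c0, A = 0.1, 0.1, 20.0                          # illustrative SPSA gain constants
for step in range(200):
    a_k = a0 / (step + 1 + A) ** 0.602              # standard decaying gain sequences
    c_k = c0 / (step + 1) ** 0.101
    for k in range(len(layers)):                    # layer-wise: perturb one layer at a time
        delta = rng.choice([-1.0, 1.0], size=layers[k].shape)
        plus = [W.copy() for W in layers]
        minus = [W.copy() for W in layers]
        plus[k] += c_k * delta
        minus[k] -= c_k * delta
        # Two forward passes yield the simultaneous-perturbation gradient estimate;
        # since delta entries are +/-1, dividing by delta equals multiplying by it.
        g_hat = (loss(plus) - loss(minus)) / (2 * c_k) * delta
        layers[k] -= a_k * g_hat

print("final training loss:", loss(layers))
```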