An Integrated Multi-Time-Scale Modeling for Solar Irradiance Forecasting Using Deep Learning
For short-term solar irradiance forecasting, the traditional point
forecasting methods are rendered less useful due to the non-stationary
characteristic of solar power. The amount of operating reserves required to
maintain reliable operation of the electric grid rises due to the variability
of solar energy. The higher the uncertainty in generation, the greater the
operating-reserve requirement, which translates into increased operating cost.
In this research work, we propose a unified architecture for multi-time-scale
predictions for intra-day solar irradiance forecasting using recurrent neural
networks (RNNs) and long short-term memory (LSTM) networks.
This paper also lays out a framework for extending this modeling approach to
intra-hour forecasting horizons, thus making it a multi-time-horizon approach
capable of predicting intra-hour as well as intra-day solar irradiance. We
develop an end-to-end pipeline that implements the proposed architecture and
use it to test and validate the prediction model. The robustness of the approach is
demonstrated with case studies conducted for geographically scattered sites
across the United States. The predictions demonstrate that our proposed unified
architecture-based approach is effective for multi-time-scale solar forecasts
and achieves a lower root-mean-square prediction error when benchmarked against
the best-performing methods documented in the literature that use separate
models for each time scale during the day. Our proposed method achieves a
71.5% reduction in mean RMSE, averaged across all test sites, compared to
the best-performing machine-learning method reported in the literature. Additionally,
the proposed method enables multi-time-horizon forecasts with real-time inputs,
which have a significant potential for practical industry applications in the
evolving grid.
Comment: 19 pages, 12 figures, 3 tables, under review for journal submission
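A minimal sketch of the kind of model this abstract describes: one network emitting forecasts for several horizons at once. The paper's actual architecture, inputs, and hyperparameters are not given here, so every size and name below is an illustrative assumption.

```python
# Illustrative multi-horizon LSTM forecaster (assumed sizes, not the paper's).
import torch
import torch.nn as nn

class MultiHorizonLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=64, n_horizons=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        # One output per forecast horizon (e.g. 1 h, 2 h, 3 h ahead).
        self.head = nn.Linear(hidden, n_horizons)

    def forward(self, x):                # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])     # forecast from the last hidden state

model = MultiHorizonLSTM()
past = torch.randn(8, 24, 4)             # 8 samples, 24 steps, 4 features
print(model(past).shape)                  # torch.Size([8, 3])
```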
A State-of-the-Art Review of Time Series Forecasting Using Deep Learning Approaches
Time series forecasting has recently emerged as a crucial study area with a wide spectrum of real-world applications. The complexity of data processing stems from the sheer volume of data generated in the digital world. Despite a long history of successful time-series research using classic statistical methodologies, these methods face limitations in dealing with enormous amounts of data and with non-linearity. Deep learning techniques effectively handle the complicated nature of time series data. Deep learning approaches such as Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and autoencoders, along with other techniques such as attention mechanisms, transfer learning, and dimensionality reduction, are discussed with their merits and limitations. The performance evaluation metrics used to validate model accuracy are also discussed. This paper reviews various time series applications using deep learning approaches, together with their benefits, challenges, and opportunities.
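Whatever the architecture, these models are typically trained on fixed-length windows of past observations paired with future targets. A minimal sketch of that supervised framing, with window and horizon lengths chosen arbitrarily for illustration:

```python
# Turn a univariate series into (window -> future target) training pairs.
import numpy as np

def make_windows(series, window=24, horizon=1):
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])               # past observations
        y.append(series[i + window + horizon - 1])   # value `horizon` steps ahead
    return np.array(X), np.array(y)

series = np.sin(np.linspace(0, 20, 500))  # stand-in for a real series
X, y = make_windows(series)
print(X.shape, y.shape)                    # (476, 24) (476,)
```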
CUDA-bigPSF: An optimized version of bigPSF accelerated with Graphics Processing Unit
Accurate and fast short-term load forecasting is crucial for efficiently managing energy production and distribution. Many different algorithms have been proposed to address this topic, including hybrid models that combine clustering with other forecasting techniques. One of these algorithms is bigPSF, which combines K-means clustering with a similarity search and is optimized for use in distributed environments. The work presented in this paper aims to reduce the algorithm's execution time through two main contributions. First, issues in the original proposal that limited the number of cores used simultaneously are studied and highlighted. Second, a version of the algorithm optimized for the Graphics Processing Unit (GPU) is proposed, solving the previously mentioned issues while taking the GPU architecture and memory structure into account. Experimentation was done with seven years of real-world electric demand data from Uruguay. Results show that the proposed algorithm consistently executed faster than the original version, achieving speedups of up to 500x in the training phase.
Funding for open access charge: Universidad de Granada / CBUA. Grant PID2020-112495RB-C21 funded by MCIN/AEI/10.13039/501100011033; I+D+i FEDER 2020 project B-TIC-42-UGR2.
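bigPSF itself is not reproduced in this abstract, but the underlying Pattern Sequence-based Forecasting idea (cluster daily profiles with K-means, then predict from past days whose label sequence matches the most recent one) can be sketched as follows; the cluster count, window length, and data are illustrative assumptions:

```python
# Toy PSF-style forecast (CPU, not the CUDA-optimized bigPSF implementation).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
days = rng.random((365, 24))          # stand-in daily demand profiles
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(days)

w = 3                                  # pattern-sequence window length
pattern = tuple(labels[-w:])           # cluster labels of the last w days
matches = [i + w for i in range(len(labels) - w)
           if tuple(labels[i:i + w]) == pattern]
# Tomorrow ~ average of the days that followed the same label sequence.
forecast = days[matches].mean(axis=0) if matches else days.mean(axis=0)
print(forecast.shape)                  # (24,)
```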
State-of-the-art on research and applications of machine learning in the building life cycle
Fueled by big data, powerful and affordable computing resources, and advanced algorithms, machine learning has been explored and applied in buildings research over the past decades and has demonstrated its potential to enhance building performance. This study systematically surveys how machine learning has been applied at different stages of the building life cycle. By conducting a literature search on the Web of Knowledge platform, we found 9579 papers in this field and selected 153 papers for an in-depth review. The number of published papers is increasing year by year, with a focus on building design, operation, and control. However, no study was found using machine learning in building commissioning. There are successful pilot studies on fault detection and diagnosis of HVAC equipment and systems, load prediction, energy baseline estimation, load shape clustering, occupancy prediction, and learning occupant behaviors and energy use patterns. None of the existing studies has been broadly adopted by the building industry, due to common challenges including (1) lack of large-scale labeled data to train and validate models, (2) lack of model transferability, which prevents a model trained on one data-rich building from being used in another building with limited data, (3) lack of strong justification of the costs and benefits of deploying machine learning, and (4) performance that may not be reliable and robust for the stated goals, as a method may work for some buildings but fail to generalize to others. Findings from the study can inform future machine learning research to improve occupant comfort, energy efficiency, demand flexibility, and resilience of buildings, as well as inspire young researchers in the field to explore multidisciplinary approaches that integrate building science, computing science, data science, and social science.
Cloud Energy Micro-Moment Data Classification: A Platform Study
Energy efficiency is a crucial factor in the well-being of our planet. In
parallel, Machine Learning (ML) plays an instrumental role in automating our
lives and creating convenient workflows that enhance behavior. Analyzing
energy behavior can thus help identify weak points and pave the way towards
better interventions. Cloud platforms can assist researchers in conducting
classification trials that demand high computational
power. Under the larger umbrella of the Consumer Engagement Towards Energy
Saving Behavior by means of Exploiting Micro Moments and Mobile Recommendation
Systems (EM)3 framework, we aim to drive consumers' behavioral change by
improving their awareness of power consumption. In this paper, common cloud
artificial intelligence platforms are benchmarked and compared for micro-moment
classification. Amazon Web Services, Google Cloud Platform, Google Colab, and
Microsoft Azure Machine Learning are evaluated on simulated and real energy
consumption datasets using KNN, DNN, and SVM classifiers. All of the selected
cloud platforms perform well, with relatively close results, although the
nature of some algorithms limits training performance.
Comment: This paper has been accepted at IEEE RTDPCC 2020: International Symposium on Real-time Data Processing for Cloud Computing
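A minimal local sketch of the kind of trial run on those platforms: fitting two of the named classifiers on stand-in data and timing training. The synthetic features here are assumptions, not the (EM)3 micro-moment dataset.

```python
# Toy benchmark of two of the classifiers named above (KNN, SVM).
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=10, n_classes=3,
                           n_informative=6, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier()), ("SVM", SVC())]:
    t0 = time.perf_counter()
    clf.fit(Xtr, ytr)
    print(f"{name}: accuracy={clf.score(Xte, yte):.3f}, "
          f"train time={time.perf_counter() - t0:.3f}s")
```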
Image Super-resolution with An Enhanced Group Convolutional Neural Network
CNNs with strong learning abilities are widely chosen to address the image
super-resolution problem. However, CNNs typically depend on deeper network
architectures to improve super-resolution performance, which increases
computational cost. In this paper, we present an enhanced super-resolution
group CNN (ESRGCNN) with a shallow architecture that fully fuses deep and
wide channel features to extract more accurate low-frequency information via
the correlations of different channels in single-image super-resolution
(SISR). A signal enhancement operation in the ESRGCNN also helps retain
long-distance contextual information to resolve long-term dependencies. An
adaptive up-sampling operation is integrated into the CNN to obtain a
super-resolution model that handles low-resolution images of different sizes.
Extensive experiments show that our ESRGCNN surpasses the state of the art in
SISR performance, complexity, execution speed, and image quality, both in
quantitative evaluation and in visual effect. Code is available at
https://github.com/hellloxiaotian/ESRGCNN
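For illustration, a grouped convolution splits channels into independent groups (fewer parameters than a standard convolution), and sub-pixel shuffling provides the up-sampling. A minimal sketch of those two ingredients follows; it is not the authors' exact ESRGCNN blocks, which are defined in the linked code, and the channel counts, group count, and scale are assumptions.

```python
# Grouped convolution + sub-pixel up-sampling in minimal form.
import torch
import torch.nn as nn

scale = 2
block = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=4),  # grouped conv
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 3 * scale ** 2, kernel_size=3, padding=1),
    nn.PixelShuffle(scale),        # rearrange channels into 2x spatial size
)
features = torch.randn(1, 64, 32, 32)  # stand-in low-resolution feature map
print(block(features).shape)            # torch.Size([1, 3, 64, 64])
```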
An Application of Fuzzy Symbolic Time-Series for Energy Demand Forecasting
In this paper, we present FPLS-Sym, a new fuzzy symbolization technique for energy load forecasting with neural networks. Symbolization techniques transform a numerical time series into a smaller string of symbols, providing a high-level representation of the series by combining segmentation, aggregation, and discretization. The dimensionality reduction obtained with symbolization can substantially shorten the time required to train neural networks; however, it can also cause considerable information loss, which may reduce forecast accuracy. FPLS-Sym introduces fuzzy logic into the discretization process, retaining more information about each segment of the time series at the expense of requiring more memory. Extensive experimentation was carried out to evaluate FPLS-Sym with various neural-network-based models, including different architectures and activation functions. The evaluation was done with energy demand data from Spain covering 2009 to 2019. Results show that FPLS-Sym yields better quality metrics than other symbolization techniques and outperforms the standard numerical time series representation in both quality metrics and training time.
Grant PID2020-112495RB-C21 funded by MCIN/AEI/10.13039/501100011033; I+D+i FEDER 2020 project B-TIC-42-UGR2.
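FPLS-Sym's exact membership functions are not specified in this abstract, but the general idea (piecewise aggregation of segments followed by fuzzy rather than crisp assignment to symbols) can be sketched as follows; the triangular memberships and level values are illustrative assumptions:

```python
# Generic fuzzy symbolization sketch: aggregate the series into segments,
# then give each segment membership degrees over symbol levels instead of
# one crisp symbol. Triangular memberships are an assumed choice.
import numpy as np

def fuzzy_symbolize(series, n_segments=10, levels=(0.0, 0.5, 1.0), width=0.5):
    segments = np.array_split(series, n_segments)
    means = np.array([s.mean() for s in segments])     # segment aggregation
    dist = np.abs(means[:, None] - np.array(levels))   # distance to each level
    member = np.clip(1.0 - dist / width, 0.0, 1.0)     # triangular membership
    return member / member.sum(axis=1, keepdims=True)  # normalize per segment

series = np.random.default_rng(1).random(200)          # stand-in demand data
print(fuzzy_symbolize(series).shape)                   # (10, 3)
```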
Cancer diagnosis using deep learning: A bibliographic review
In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis and the typical classification methods used by doctors, to give readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient. Moreover, considering all types of audiences, the basic evaluation criteria are also discussed, including the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Previously used methods are considered inefficient, calling for better and smarter methods of cancer diagnosis. Artificial intelligence is therefore gaining attention as a way to build better diagnostic tools; in particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how this machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python code to allow interested readers to experiment with the cited algorithms on their own diagnostic problems. The third part of this manuscript compiles successfully applied deep learning models for different types of cancer; considering the length of the manuscript, we restrict ourselves to breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to provide researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis with a from-scratch understanding of state-of-the-art achievements.
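Several of the evaluation criteria listed above can be computed in a few lines; a minimal sketch on fabricated binary labels and scores, purely for illustration:

```python
# Computing some of the metrics named above on toy binary predictions.
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, jaccard_score, roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.3, 0.2, 0.9, 0.6])
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("AUC:        ", roc_auc_score(y_true, y_score))
print("F1 score:   ", f1_score(y_true, y_pred))
print("Sensitivity:", tp / (tp + fn))
print("Specificity:", tn / (tn + fp))
print("Dice:       ", 2 * tp / (2 * tp + fp + fn))  # equals F1 for binary sets
print("Jaccard:    ", jaccard_score(y_true, y_pred))
```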