
    Dynamic learning with neural networks and support vector machines

    The neural network approach has proven to be a universal approximator for nonlinear continuous functions to arbitrary accuracy, and it has been very successful across a range of learning and prediction tasks. However, supervised learning with neural networks has limitations: the black-box nature of its solutions, trial-and-error selection of network parameters, the danger of overfitting, and convergence to local rather than global minima. In addition, fixed network structures do not account for the effect on prediction performance as the amount of available data grows. Three new approaches are proposed to address these limitations and improve prediction accuracy. (1) A dynamic learning model using an evolutionary connectionist approach. In applications where the amount of available data increases over time, an optimization process determines the number of input neurons and the number of hidden-layer neurons, and the globally optimized network structure is iteratively and dynamically reconfigured and updated as new data arrive. (2) Improved generalization using a recurrent neural network with Bayesian regularization. A recurrent neural network has the inherent capability of developing an internal memory, which may naturally extend beyond the externally provided lag space; in addition, Bayesian regularization adds a penalty term on the sum of squared connection weights to the training objective, improving generalization performance and lowering susceptibility to overfitting. (3) An adaptive prediction model using support vector machines.
The learning process of support vector machines minimizes an upper bound on the generalization error, composed of the empirical training error plus a regularized confidence interval, which yields better generalization performance. This learning process is iteratively and dynamically updated on every arrival of new data in order to capture the most current features hidden in the data sequence. All the proposed approaches have been applied and validated on software reliability prediction and electric power load forecasting, and quantitative results show that they achieve better prediction accuracy than existing approaches.
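    The dynamic-learning idea above, re-fitting a regularized model on a window of the most recent data each time a new observation arrives, can be sketched as follows. This is a hypothetical illustration only: a one-dimensional ridge penalty stands in for the dissertation's Bayesian-regularized networks and SVMs, and all function names and parameters are invented for the example.

```python
# Sketch: a sum-of-squared-weights penalty (the spirit of Bayesian
# regularization) fitted on a sliding window, re-trained on each new datum.

def fit_ridge_1d(xs, ys, lam):
    """Closed-form 1-D ridge regression through the origin:
    minimizes sum((y - w*x)^2) + lam*w^2, so w = sum(x*y) / (sum(x^2) + lam)."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

def dynamic_predict(stream, window=5, lam=0.1):
    """Re-fit on the most recent `window` points after each arrival and
    emit a prediction for the newly observed input before seeing its target."""
    xs, ys, preds = [], [], []
    for x, y in stream:
        if xs:  # need at least one past point before predicting
            w = fit_ridge_1d(xs[-window:], ys[-window:], lam)
            preds.append(w * x)
        xs.append(x)
        ys.append(y)
    return preds
```

    With `lam=0` and noiseless data `y = 2x`, the window fit recovers the slope exactly and every one-step-ahead prediction is correct; a positive `lam` shrinks the weight toward zero, trading a small bias for robustness to noise.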

    Review and Comparison of Intelligent Optimization Modelling Techniques for Energy Forecasting and Condition-Based Maintenance in PV Plants

    Within the field of soft computing, intelligent optimization modelling covers several major techniques in artificial intelligence. These techniques aim to generate new business knowledge by transforming sets of raw data into business value. One of their principal applications is the design of predictive analytics for advanced condition-based maintenance (CBM) strategies and energy production forecasting: control system data, operational data and maintenance event data can be transformed into failure diagnostic and prognostic knowledge and, ultimately, into expected energy generation. One setting where these techniques can have massive impact is the legacy monitoring systems of solar PV generation plants. Such systems produce a great amount of data over time, while at the same time demanding considerable effort to raise their performance through more accurate predictive analytics, since production losses have a direct impact on ROI. Choosing the most suitable technique to apply is itself one of the problems to address. This paper presents a review and comparative analysis of six intelligent optimization modelling techniques, applied to a PV plant case study with the energy production forecast as the decision variable. The proposed methodology not only elicits the most accurate solution but also validates the results by comparing the outputs of the different techniques.
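    The model-comparison step the review describes, scoring several candidate forecasters on the same held-out window and keeping the most accurate, can be sketched generically. The model names and naive forecasters below are invented for illustration and are not the paper's six techniques; mean absolute error stands in for whatever accuracy metric a given study prefers.

```python
# Sketch: rank candidate forecasting models by validation MAE.

def mae(pred, actual):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(actual)

def rank_models(models, history, actual):
    """models: {name: forecast_fn}, where forecast_fn(history) returns
    a forecast of len(actual). Returns the best name and all scores."""
    scores = {name: mae(fn(history), actual) for name, fn in models.items()}
    best = min(scores, key=scores.get)
    return best, scores
```

    For example, a persistence forecaster (repeat the last observation) and a mean forecaster can be compared on the same validation window, and the lower-MAE model is selected for deployment.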

    Neural Network with Genetic Algorithm Prediction Model of Energy Consumption for Billing Integrity in Gas Pipeline

    As the oil and gas industry develops, missing data remains one of the factors that hampers data analysis and processing in databases. Monitoring and maintenance through a metering system ensure reliability and billing integrity, so that trust can be built between distributors and customers. In this context, PETRONAS Gas Berhad (PGB), as a gas distributor, is responsible through its existing system at the Nur Metering Station, Kulim, for evaluating the energy consumption of the sales gas produced. The system is standalone and consists of measuring equipment including a pressure transmitter, a temperature transmitter, a turbine meter, gas chromatography and a flow computer, but it has no reference system to verify its integrity. Customers are charged according to the calculated energy consumption, and any calculation error causes a loss of profit to the company and affects PETRONAS’s business credibility, so a sound analysis is vital to maintain sustainability. In this paper, several techniques for indicating the missing data are discussed and compared, including a neural network prediction model, least squares support vector regression, and the combination of either method with a genetic algorithm as the preferred technique. The model selected on the basis of this evaluation predicts the missing data, and its output is compared against the results of the existing metering system to verify the system's reliability and accuracy. Billing integrity between oil and gas companies, especially PETRONAS, and their customers could thus be maintained, and if the project is expanded it has the potential to save Malaysian oil and gas companies millions of dollars.
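    The core imputation step, predicting the gap values in a metered series and then checking the filled series against a reference, can be sketched with a toy stand-in model. The function below is purely illustrative (linear interpolation in place of the paper's neural-network/GA predictor) and assumes every gap lies between two known readings.

```python
# Sketch: fill missing (None) meter readings from neighbouring known values,
# a stand-in for a learned prediction model applied to the same gaps.

def impute_linear(series):
    """Replace each None by linear interpolation between the nearest known
    readings on either side. Assumes the first and last readings are known."""
    filled = list(series)
    known = [i for i, v in enumerate(filled) if v is not None]
    for i, v in enumerate(filled):
        if v is None:
            lo = max(k for k in known if k < i)   # nearest known before the gap
            hi = min(k for k in known if k > i)   # nearest known after the gap
            frac = (i - lo) / (hi - lo)
            filled[i] = filled[lo] + frac * (filled[hi] - filled[lo])
    return filled
```

    In the paper's setting, the filled values would then be compared with the existing metering system's figures to quantify the billing error a gap would otherwise introduce.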

    A brief network analysis of Artificial Intelligence publication

    In this paper, we present an illustrated history of Artificial Intelligence (AI) through a statistical analysis of publications since 1940. We collected and mined the IEEE publication database to analyze the geographical and chronological variation in the activity of AI research, and we show the connections between different institutes. The results show that the leading communities of AI research are mainly in the USA, China, Europe and Japan, and they reveal the key institutes, authors and research hotspots. We find that the institutes active in fields such as Data Mining, Computer Vision, Pattern Recognition and other areas of Machine Learning are quite consistent, implying a strong interaction between the communities of these fields. The results also show that research in Electronic Engineering and in industrial or commercial applications is very active in California, and that Japan publishes many papers in robotics. Owing to limitations of the data source, the results may be overly influenced by the raw number of published articles; we mitigate this as far as possible by applying network key-node analysis to the research community instead of merely counting publications. Comment: 18 pages, 7 figures
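    The key-node idea, ranking institutes by their position in the collaboration network rather than by raw publication counts, can be sketched with degree centrality on an edge list. The institute names and edges below are made up for illustration; the paper's actual graph and centrality measure may differ.

```python
# Sketch: degree centrality as a simple key-node score on a
# collaboration graph given as (institute_a, institute_b) edges.
from collections import defaultdict

def degree_centrality(edges):
    """Count how many collaboration edges touch each node."""
    deg = defaultdict(int)
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return dict(deg)

def keynodes(edges, top=3):
    """Return the `top` nodes by degree, ties broken alphabetically."""
    deg = degree_centrality(edges)
    return sorted(deg, key=lambda n: (-deg[n], n))[:top]
```

    A node with few publications but many distinct collaborators can outrank a prolific but isolated one, which is exactly the correction the key-node analysis aims for.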

    Large-Scale Detection of Non-Technical Losses in Imbalanced Data Sets

    Non-technical losses (NTL) such as electricity theft cause significant harm to our economies; in some countries they may amount to up to 40% of the total electricity distributed. Detecting NTL requires costly on-site inspections, so accurate prediction of NTL for customers using machine learning is crucial. To date, related research has largely ignored the fact that the two classes of regular and non-regular customers are highly imbalanced and that NTL proportions may change, and it has mostly considered small data sets, often making the results impossible to deploy in production. In this paper, we present a comprehensive approach to assessing three NTL detection models under different NTL proportions on large real-world data sets of hundreds of thousands of customers: Boolean rules, fuzzy logic and support vector machines. This work has produced appreciable results that are about to be deployed in a leading industry solution. We believe that the considerations and observations made in this contribution are necessary for future smart-meter research to report effectiveness on imbalanced and large real-world data sets. Comment: Proceedings of the Seventh IEEE Conference on Innovative Smart Grid Technologies (ISGT 2016)
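    A standard remedy for the class imbalance the paper highlights is to weight each class inversely to its frequency so the rare NTL class is not swamped during training. The sketch below implements the common "balanced" heuristic (the same formula scikit-learn uses for class_weight="balanced"); how the paper's three models actually handle imbalance is not specified here.

```python
# Sketch: inverse-frequency class weights for an imbalanced label set,
# w_c = n_samples / (n_classes * count_c).

def balanced_weights(labels):
    """Return a {class: weight} dict; rarer classes get larger weights."""
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}
```

    With 90 regular customers and 10 NTL customers, the NTL class gets weight 5.0 and the regular class about 0.56, so each misclassified NTL case costs roughly nine times as much in the weighted training objective.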