
    Improving Stock Trading Decisions Based on Pattern Recognition Using Machine Learning Technology

    PRML, a novel candlestick pattern recognition model using machine learning methods, is proposed to improve stock trading decisions. Four popular machine learning methods and 11 different feature types are applied to all possible combinations of daily patterns to carry out the pattern recognition. Time windows from one to ten days are used to examine the prediction effect over different horizons. An investment strategy is constructed according to the identified candlestick patterns and a suitable time window. We deploy PRML to forecast all Chinese market stocks from Jan 1, 2000 to Oct 30, 2020. The data from Jan 1, 2000 to Dec 31, 2014 are used as the training set, and the data from Jan 1, 2015 to Oct 30, 2020 are used to verify the forecasting effect. Empirical results show that the two-day candlestick patterns after filtering have the best prediction effect when forecasting one day ahead; these patterns obtain an average annual return, an annual Sharpe ratio, and an information ratio as high as 36.73%, 0.81, and 2.37, respectively. After screening, three-day candlestick patterns also perform well when forecasting one day ahead, showing stable characteristics. Two other popular machine learning methods, multilayer perceptron networks and long short-term memory neural networks, are applied to the pattern recognition framework to evaluate the model dependency of the predictions. A transaction cost of 0.2% is applied to the two-day patterns predicting one day ahead, confirming their profitability. Empirical results show that applying different machine learning methods to two-day and three-day patterns for one-day-ahead forecasts can be profitable.
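    As a rough sketch of the pipeline described above, the snippet below labels two-day patterns for a one-day-ahead forecast and trains one classifier on a chronological 2000-2014 / 2015-2020 split; the OHLC column names and the simple body/shadow/gap features are illustrative placeholders, not the actual PRML feature set.

```python
# Hypothetical sketch: two-day candlestick features + one-day-ahead labels.
# Column names (open/high/low/close) and features are assumed, not PRML's.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def two_day_features(df: pd.DataFrame) -> pd.DataFrame:
    """Derive simple two-day pattern features from daily OHLC bars."""
    f = pd.DataFrame(index=df.index)
    f["body"] = (df["close"] - df["open"]) / df["open"]            # candle body
    f["upper"] = (df["high"] - df[["open", "close"]].max(axis=1)) / df["open"]
    f["lower"] = (df[["open", "close"]].min(axis=1) - df["low"]) / df["open"]
    f["prev_body"] = f["body"].shift(1)                            # prior day's body
    f["gap"] = df["open"] / df["close"].shift(1) - 1               # overnight gap
    return f

df = pd.read_csv("daily_ohlc.csv", parse_dates=["date"], index_col="date")
X = two_day_features(df)
y = (df["close"].shift(-1) > df["close"]).astype(int)  # 1 = up move next day

valid = X.notna().all(axis=1)
X, y = X[valid].iloc[:-1], y[valid].iloc[:-1]  # last row has no next-day label

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:"2014-12-31"], y[:"2014-12-31"])    # train on 2000-2014, as above
print("2015-2020 accuracy:", clf.score(X["2015-01-01":], y["2015-01-01":]))
```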

    Machine Learning Approach to Forecast Global Solar Radiation Time Series

    The integration of Renewable Energy (RE) into power systems brings new challenges to Smart Grid (SG) technologies. The generation output from renewable sources generally depends on atmospheric conditions. This causes intermittency in the power output from renewable sources, so the power quality of the grid is directly affected by atmospheric phenomena. Advances in energy storage technologies open a path to Energy Management (EM): the power output from a renewable source can be stored or dispatched at a particular time instant in order to meet demand. Scheduling Demand Response (DR) actions on the grid can optimize the dispatch by reducing the wastage of over-generated energy. The difficulty is then to ensure the availability of energy to supply to the grid by forecasting the Global Solar Radiation (GSR) at the location where a Photovoltaic (PV) system is connected. This thesis addresses the issue using Machine Learning (ML) techniques, which eases the generation scheduling task. The work developed in this thesis focuses on exploring ML techniques to forecast GSR hourly and optimize the dispatch of energy on a SG. The experiments present results for different configurations of Deep Learning and Gaussian Processes for GSR time-series regression, aiming to discuss the advantages of using hybrid methods in the context of SGs.
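    As one illustration of the Gaussian Process side of such experiments, a minimal sketch of hourly GSR regression with a daily-periodic kernel might look as follows; the file name, column names, and kernel hyperparameters are assumptions, not the thesis's configuration.

```python
# Hedged sketch: GP regression of hourly Global Solar Radiation (GSR).
# Data file, column names, and kernel settings are placeholders.
import numpy as np
import pandas as pd
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

df = pd.read_csv("gsr_hourly.csv", parse_dates=["timestamp"])
t = np.arange(len(df), dtype=float).reshape(-1, 1)  # hour index as the input
y = df["gsr"].to_numpy()

# 24 h periodic component modulated by a slowly varying envelope, plus noise.
kernel = (ExpSineSquared(length_scale=1.0, periodicity=24.0)
          * RBF(length_scale=24.0 * 7)
          + WhiteKernel(noise_level=1.0))

gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
window = 24 * 14                      # fit on two weeks; exact GPs scale cubically
gp.fit(t[-window:-24], y[-window:-24])

mean, std = gp.predict(t[-24:], return_std=True)  # next-day forecast + uncertainty
print("forecast:", mean.round(1))
print("95% band half-width:", (2 * std).round(1))
```

    A GP is a natural fit here because the predictive standard deviation quantifies forecast uncertainty, which is exactly what a dispatch scheduler needs to hedge against.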

    Structured representation learning from complex data

    This thesis advances several theoretical and practical aspects of the recently introduced restricted Boltzmann machine, a powerful probabilistic and generative framework for modelling data and learning representations. The contributions of this study form a systematic and common theme: learning structured representations from complex data.
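    For readers unfamiliar with the model family, a generic restricted Boltzmann machine can be fit in a few lines; the sketch below uses scikit-learn's BernoulliRBM on toy binary data and does not reproduce the structured variants developed in the thesis.

```python
# Generic RBM illustration (not the thesis's structured models): learn a
# 32-unit hidden representation of 64-dimensional binary data.
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
X = (rng.random((500, 64)) > 0.5).astype(float)   # toy binary visible data

rbm = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)
rbm.fit(X)                                        # trained with persistent CD

H = rbm.transform(X)      # hidden-unit activation probabilities: the learned
print(H.shape)            # representation, here of shape (500, 32)
```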

    Machine Learning for Load Profile Data Analytics and Short-term Load Forecasting

    Short-term load forecasting (STLF) is a key issue for the operation and dispatch of the day-ahead energy market. It is a prerequisite for the economic operation of power systems and the basis for dispatching and making startup-shutdown plans, and it plays a key role in the automatic control of power systems. Accurate load forecasting not only helps users choose a more appropriate electricity consumption scheme and reduces electricity costs, but is also conducive to optimizing the resources of power systems, improving equipment utilization, reducing production costs, and improving economic benefits and power supply capability, ultimately serving the aim of an efficient demand response program. This thesis outlines machine-learning-based, data-driven models for STLF in the smart grid, and presents the relevant policies and current status as well as future research directions for developing new STLF models. Three projects for load profile data analytics and machine-learning-based STLF are presented.

    The first project is load profile classification and determination of load demand variability, with the aim of estimating the load demand of a customer. Load profile data collected from smart meters are classified using the recently developed extended nearest neighbor (ENN) algorithm. Generalized class-wise statistics are calculated to characterize the load demand variability of a customer, and the load demand of a particular customer is then estimated from the generalized class-wise statistics together with the maximum and minimum load demand.

    In the second project, a composite ENN model is proposed for STLF, intended to improve on k-nearest neighbor (kNN) based STLF models. Three individual models process weather data (i.e., temperature), social variables, and load demand data, and the load demand is predicted separately from each set of input variables. The final forecast is a weighted average of the three models, with weights determined from the change in the generalized class-wise statistics; see the sketch after this abstract. This project provides a significant improvement in load forecasting accuracy compared to kNN-based models.

    In the third project, an advanced data-driven model is developed: a novel hybrid load forecasting model based on signal decomposition and correlation analysis. The hybrid model consists of improved empirical mode decomposition and T-Copula-based correlation analysis, with a deep belief network employed to produce the load demand forecast. The results are compared with previous studies and show a significant improvement in mean absolute percentage error (MAPE) and root mean square error (RMSE).
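    To make the composite idea in the second project concrete, the sketch below trains three separate regressors on weather, calendar, and lagged-load features and blends them with fixed weights. ENN is not available in standard libraries, so plain kNN stands in, and the file name, feature sets, and weights are assumptions.

```python
# Hedged sketch of a composite STLF model: per-feature-group kNN regressors
# combined by a weighted average. kNN stands in for ENN; weights are assumed.
import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsRegressor

df = pd.read_csv("hourly_load.csv", parse_dates=["timestamp"])
df["hour"] = df["timestamp"].dt.hour
df["dow"] = df["timestamp"].dt.dayofweek        # calendar/social variable
df["lag24"] = df["load"].shift(24)              # same hour on the previous day
df = df.dropna()

groups = {"weather": ["temperature"], "social": ["hour", "dow"], "history": ["lag24"]}
weights = {"weather": 0.3, "social": 0.2, "history": 0.5}   # illustrative only

train, test = df.iloc[:-168], df.iloc[-168:]    # hold out the final week
pred = np.zeros(len(test))
for name, cols in groups.items():
    model = KNeighborsRegressor(n_neighbors=5).fit(train[cols], train["load"])
    pred += weights[name] * model.predict(test[cols])

actual = test["load"].to_numpy()
mape = np.mean(np.abs((actual - pred) / actual)) * 100
print(f"one-week MAPE: {mape:.2f}%")
```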

    Deep Learning for Plant Stress Phenotyping: Trends and Future Perspectives

    Deep learning (DL), a subset of machine learning approaches, has emerged as a versatile tool to assimilate large amounts of heterogeneous data and provide reliable predictions of complex and uncertain phenomena. These tools are increasingly being used by the plant science community to make sense of the large datasets now regularly collected via high-throughput phenotyping and genotyping. We review recent work where DL principles have been utilized for digital image–based plant stress phenotyping. We provide a comparative assessment of DL tools against other existing techniques, with respect to decision accuracy, data size requirement, and applicability in various scenarios. Finally, we outline several avenues of research leveraging current and future DL tools in plant science.
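    As a concrete instance of the kind of pipeline this review surveys, the sketch below fine-tunes a pretrained CNN on a folder of leaf images; the dataset path and stress classes are placeholders, and the recipe is the generic transfer-learning one rather than that of any specific study.

```python
# Generic transfer-learning sketch for image-based plant stress phenotyping.
# "leaf_images/" (one sub-folder per stress class) is a placeholder dataset.
import torch
from torch import nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("leaf_images/", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(data.classes))  # new class head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # fine-tune the head only
loss_fn = nn.CrossEntropyLoss()
for x, y in loader:                                     # one epoch, for brevity
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```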

    Energy and Area Efficient Machine Learning Architectures using Spin-Based Neurons

    Recently, spintronic devices with low-energy-barrier nanomagnets, such as spin orbit torque Magnetic Tunnel Junctions (SOT-MTJs) and embedded magnetoresistive random access memory (MRAM) devices, are being leveraged as natural building blocks to provide probabilistic sigmoidal activation functions for restricted Boltzmann machines (RBMs). In this dissertation research, we use the Probabilistic Inference Network Simulator (PIN-Sim) to realize a circuit-level implementation of deep belief networks (DBNs) using memristive crossbars as weighted connections and embedded MRAM-based neurons as activation functions. A probabilistic interpolation recoder (PIR) circuit is developed for DBNs with probabilistic spin logic (p-bit)-based neurons to interpolate the probabilistic outputs of the neurons in the last hidden layer, which represent the different output classes. Moreover, the impact of reducing the Magnetic Tunnel Junction's (MTJ's) energy barrier is assessed and optimized for the resulting stochasticity present in the learning system. In p-bit-based DBNs, defects such as variation of the nanomagnet thickness can undermine functionality by decreasing the fluctuation speed of the p-bit realized using a nanomagnet. A method is developed and refined to control the fluctuation frequency of a p-bit device's output by employing a feedback mechanism, which can alleviate this process variation sensitivity of p-bit-based DBNs. This compact, low-complexity method, realized by the introduced self-compensating circuit, can mitigate the influence of process variation in fabrication and practical implementation. Furthermore, this research presents an image recognition technique for the MNIST dataset based on p-bit-based DBNs and TSK rule-based fuzzy systems. The proposed DBN-fuzzy system benefits from the low energy and area consumption of p-bit-based DBNs and the high accuracy of TSK rule-based fuzzy systems: the system first recognizes the top candidate results through the p-bit-based DBN, and the fuzzy system is then employed to obtain the top-1 recognition result from those outputs. Simulation results show that the DBN-Fuzzy neural network not only has lower energy and area consumption than larger DBN topologies but also achieves higher accuracy.
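    A behavioral sketch of the p-bit neuron underlying this work is easy to state in software: a binary output that fluctuates randomly, with a probability set by its input, giving the sigmoidal activation these devices provide in hardware. The snippet below models the standard device equation m = sgn(tanh(I) - r) with r uniform on [-1, 1]; it is a software abstraction, not a circuit-level PIN-Sim model.

```python
# Software abstraction of a p-bit (probabilistic spin logic) neuron.
# Not a PIN-Sim circuit model; just the stochastic activation behavior.
import numpy as np

rng = np.random.default_rng(0)

def p_bit(current: float, n_samples: int = 10_000) -> np.ndarray:
    """Sample a p-bit's +/-1 output stream for a fixed (dimensionless) input."""
    r = rng.uniform(-1.0, 1.0, n_samples)
    return np.sign(np.tanh(current) - r)

for I in (-2.0, 0.0, 2.0):
    m = p_bit(I)
    # The time-averaged output tracks tanh(I); P(m=+1) follows a sigmoid.
    print(f"I={I:+.1f}  <m>={m.mean():+.3f}  P(m=+1)={(m > 0).mean():.3f}")
```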