    Optimized Fuzzy Backpropagation Neural Network using Genetic Algorithm for Predicting Indonesian Stock Exchange Composite Index

    Investment activities in the capital market can generate profits but can also cause losses. The composite stock price index, an indicator used to guide investment decisions, changes continuously over time. The uncertainty of the stock exchange composite index requires investors to make predictions in order to maximize profits. The aim of this study is to forecast the composite stock price index. The input variables are the Indonesian interest rate, the rupiah exchange rate, the Dow Jones index, and the world gold price. All data were obtained for the period from January 2008 to March 2019 and used to build a Fuzzy Backpropagation Neural Network (FBPNN) model. The weights of the FBPNN model were optimized using a Genetic Algorithm, and the optimized model was then used to forecast the composite stock price index. The forecasts for April to June 2019 were 5822.6, 5826.8, and 5767.3 respectively, with a MAPE of 8.42%. These results indicate that the Indonesian interest rate, the rupiah exchange rate, the Dow Jones index, and the gold price are suitable indicators for predicting the composite stock price index.
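
    To make the weight-optimization idea concrete, below is a minimal sketch, not the authors' code: the fuzzification layer of FBPNN is omitted, and a plain genetic algorithm evolves the weights of a small 4-6-1 feedforward network on synthetic stand-in data. The network shape, GA settings, and data are all assumptions of this illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for the four monthly inputs (interest rate, exchange
    # rate, Dow Jones index, gold price) and the composite-index target.
    X = rng.normal(size=(135, 4))                    # ~Jan 2008 to Mar 2019
    coef = np.array([0.5, -0.3, 0.8, 0.2])
    y = 5800 + 300 * (X @ coef) + rng.normal(scale=25, size=135)
    yz = (y - y.mean()) / y.std()                    # standardized target

    N_HIDDEN = 6
    N_W = 4 * N_HIDDEN + N_HIDDEN                    # 4-6-1 network, no biases

    def predict(w, X):
        W1 = w[:4 * N_HIDDEN].reshape(4, N_HIDDEN)
        return np.tanh(X @ W1) @ w[4 * N_HIDDEN:]

    def fitness(w):
        return -np.mean((predict(w, X) - yz) ** 2)   # GA maximizes: negate MSE

    def tournament(pop, scores):
        i, j = rng.integers(len(pop), size=2)
        return pop[i] if scores[i] > scores[j] else pop[j]

    pop = rng.normal(size=(60, N_W))
    for gen in range(300):
        scores = np.array([fitness(w) for w in pop])
        children = [pop[scores.argmax()].copy()]     # elitism: keep the best
        while len(children) < len(pop):
            a, b = tournament(pop, scores), tournament(pop, scores)
            child = np.where(rng.random(N_W) < 0.5, a, b)   # uniform crossover
            child = child + rng.normal(scale=0.1, size=N_W) * (rng.random(N_W) < 0.1)
            children.append(child)
        pop = np.array(children)

    best = max(pop, key=fitness)
    pred = predict(best, X) * y.std() + y.mean()     # back to index scale
    print("MAPE: %.2f%%" % (np.mean(np.abs((pred - y) / y)) * 100))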

    Variable Precision Rough Set Model for Incomplete Information Systems and Its Beta-Reducts

    As the original rough set model is quite sensitive to noisy data, Ziarko proposed the variable precision rough set (VPRS) model to deal with noisy data and uncertain information. This model allows for some degree of uncertainty and misclassification in the mining process. In this paper, a variable precision rough set model for incomplete information systems is proposed by combining the VPRS model with incomplete information systems, and the beta-lower and beta-upper approximations are defined. Considering that the classical VPRS model lacks a feasible method for determining the precision parameter beta when calculating beta-reducts, we present an approach for determining beta. Then, by computing the discernibility matrix and discernibility functions based on the beta-lower approximation, the beta-reducts and the generalized decision rules are obtained. Finally, a concrete example is given to demonstrate the validity and practicability of the beta-reducts proposed in this paper.
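
    As a concrete illustration of the beta-approximations (an assumption of this rewrite, not code from the paper), the sketch below uses the common inclusion-degree convention with beta in (0.5, 1]: an equivalence class belongs to the beta-lower approximation of X when at least a fraction beta of its objects lie in X; at beta = 1 this reduces to the classical rough set model.

    def vprs_approximations(classes, target, beta):
        """Beta-lower/upper approximations of `target` over a partition `classes`."""
        lower, upper = set(), set()
        for eq in classes:
            incl = len(eq & target) / len(eq)   # inclusion degree P(X | eq)
            if incl >= beta:                    # confidently inside X
                lower |= eq
            if incl > 1 - beta:                 # not confidently outside X
                upper |= eq
        return lower, upper

    # Toy universe of eight objects in three equivalence classes (hypothetical).
    classes = [{1, 2, 3, 4}, {5, 6}, {7, 8}]
    X = {1, 2, 3, 6, 7, 8}
    lower, upper = vprs_approximations(classes, X, beta=0.7)
    print(lower)   # {1, 2, 3, 4, 7, 8}: inclusion degrees 0.75 and 1.0 pass beta
    print(upper)   # all eight objects: even {5, 6} has degree 0.5 > 1 - beta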

    Improving the Scalability of Reduct Determination in Rough Sets

    Rough Set Data Analysis (RSDA) is a non-invasive data analysis approach that relies solely on the data to find patterns and decision rules. Despite its non-invasive approach and its ability to generate human-readable rules, classical RSDA has not been successfully used in commercial data mining and rule-generating engines. The reason is its scalability: classical RSDA slows down considerably on larger data sets and takes much longer to generate rules. This research addresses the scalability of rough sets by improving the performance of the attribute reduction step of classical RSDA, which is the root cause of its slow performance. We propose moving the entire attribute reduction process into the database. We define a new schema to store the initial data set, and then define SQL queries on this schema that find the attribute reducts correctly and faster than the traditional RSDA approach. We tested our technique on two typical data sets and compared our results with the traditional RSDA approach for attribute reduction. Finally, we highlight some issues with the proposed approach that could lead to future research.
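
    As a rough illustration of the in-database idea (the paper's actual schema and queries are not shown in this abstract, so the table, columns, and data below are hypothetical), a decision-preserving attribute subset can be tested with a single GROUP BY query: the subset is consistent exactly when no group of identical attribute values contains more than one distinct decision. The naive subset enumeration here is only for illustration; a practical method would prune the search.

    import sqlite3
    from itertools import combinations

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (a TEXT, b TEXT, d TEXT)")   # d is the decision
    conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                     [("x", "1", "yes"), ("x", "2", "no"),
                      ("y", "1", "no"),  ("y", "2", "yes")])

    def is_consistent(attrs):
        """True if grouping rows by `attrs` never mixes decision values."""
        cols = ", ".join(attrs)
        q = (f"SELECT {cols} FROM t GROUP BY {cols} "
             f"HAVING COUNT(DISTINCT d) > 1 LIMIT 1")
        return conn.execute(q).fetchone() is None

    # Brute-force search for minimal consistent subsets (reducts) of {a, b}.
    for r in range(1, 3):
        reducts = [s for s in combinations(["a", "b"], r) if is_consistent(s)]
        if reducts:
            print("minimal reducts:", reducts)   # -> [('a', 'b')]
            break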