
    Rough sets for predicting the Kuala Lumpur Stock Exchange Composite Index returns

    This study aims to demonstrate the usability of the rough set approach in capturing the relationship between technical indicators and the level of the Kuala Lumpur Stock Exchange Composite Index (KLCI) over time. Stock markets are affected by many interrelated economic, political, and even psychological factors, so their movements are generally very difficult to predict. There is extensive literature describing attempts to apply artificial intelligence techniques, in particular neural networks and genetic algorithms, to the analysis of stock market variations. However, these have drawbacks: the results of neural networks are difficult to interpret, and genetic algorithms create large data redundancies. A relatively new approach, rough sets, is suggested for its simple knowledge representation, its ability to deal with uncertainty, and its reduction of data redundancy. In this study, several different discretization algorithms were used during data preprocessing. The simulations and results produced show that the rough set approach can be a promising alternative to existing methods for stock market prediction.
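
    The abstract does not give the exact discretization or decision table used, so the following is only a minimal sketch of the core rough-set step it relies on: computing the lower and upper approximations of a return class from a table of discretized technical indicators. The indicator names (RSI_bin, MACD_bin) and the toy rows are hypothetical.

        from collections import defaultdict

        # Hypothetical discretized decision table: indicator bins (condition
        # attributes) -> next-period KLCI return class (decision attribute).
        rows = [
            {"RSI_bin": "high", "MACD_bin": "up",   "return": "positive"},
            {"RSI_bin": "high", "MACD_bin": "up",   "return": "positive"},
            {"RSI_bin": "low",  "MACD_bin": "down", "return": "negative"},
            {"RSI_bin": "high", "MACD_bin": "down", "return": "positive"},
            {"RSI_bin": "high", "MACD_bin": "down", "return": "negative"},  # conflicting case
        ]
        conditions = ("RSI_bin", "MACD_bin")

        def approximations(rows, conditions, decision, target):
            """Return (lower, upper) approximations of the target decision class,
            as sets of row indices, using indiscernibility on `conditions`."""
            blocks = defaultdict(set)                      # elementary (equivalence) classes
            for i, r in enumerate(rows):
                blocks[tuple(r[a] for a in conditions)].add(i)
            target_set = {i for i, r in enumerate(rows) if r[decision] == target}
            lower, upper = set(), set()
            for block in blocks.values():
                if block <= target_set:                    # certainly in the class
                    lower |= block
                if block & target_set:                     # possibly in the class
                    upper |= block
            return lower, upper

        lower, upper = approximations(rows, conditions, "return", "positive")
        print("lower:", sorted(lower), "upper:", sorted(upper))

    Rows in the boundary (upper minus lower) are exactly the conflicting indicator patterns that the rough-set rules flag as uncertain rather than forcing into one class.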

    Rough Neural Networks Architecture For Improving Generalization In Pattern Recognition

    Neural networks are attractive trainable machines for pattern recognition. Their capacity to accommodate a wide variety and variability of conditions, and their ability to imitate brain functions, make them a popular research area. This research focuses on developing hybrid rough neural networks, novel approaches that are expected to provide superior performance in detection and automatic target recognition. In this thesis, hybrid architectures combining rough set theory and neural networks have been investigated, developed, and implemented. The first hybrid approach is a novel neural network referred to as the Rough Shared-weight Neural Network (RSNN). It uses the concept of approximation based on rough neurons for feature extraction and employs weight sharing. The network consists of a feature extraction stage and a classification stage. The extraction network is composed of rough neurons that account for the upper and lower approximations and embed a membership function in place of the ordinary activation function. The network learns the rough set's upper and lower approximations as feature extractors simultaneously with classification. The RSNN implements a novel approximation transform; the basic design of the network is provided together with the learning rules. The architecture offers a new method for pattern recognition and is expected to be robust across pattern recognition problems. The second hybrid approach consists of two stand-alone subsystems and is referred to as the Rough Neural Network (RNN). Its extraction network extracts detectors that represent the pattern classes to be supplied to the classification network; it works as a filter over the original features based on equivalence relations and rough set reduction, while the second subsystem classifies the outputs of the first. The two approaches were applied to image pattern recognition problems. The RSNN was applied to an automatic target recognition problem, using Synthetic Aperture Radar (SAR) image scenes of tanks and background; it provides a novel methodology for designing nonlinear filters without prior knowledge of the problem domain. The RNN was used to detect patterns present in satellite images. A novel feature extraction algorithm was developed to extract the feature vectors, which enhances the recognition ability of the system compared to manual extraction and labeling of pattern classes. The performance of the rough backpropagation network is improved compared to a backpropagation network of the same architecture. The network has been designed to produce a detection plane for the desired pattern. The hybrid approaches developed in this thesis provide novel techniques for recognizing static and dynamic representations of patterns. In both domains, rough set theory improved the generalization of the neural network paradigms. The methodologies are theoretically applicable to any pattern recognition problem and are demonstrated in practice on image data.
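
    The abstract does not define the RSNN's rough neuron or its membership function precisely, so the sketch below follows the commonly used formulation of a rough neuron as a pair of coupled sub-neurons whose outputs are swapped so that one always carries the upper and the other the lower approximation. This is an assumption for illustration, not the thesis's exact design; all weights and the input vector are made up.

        import numpy as np

        def rough_neuron(x, w_lower, w_upper, b_lower=0.0, b_upper=0.0):
            """One rough neuron: two coupled sub-neurons whose outputs are swapped
            so the 'upper' output is always >= the 'lower' output."""
            sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
            out_l = sigmoid(np.dot(w_lower, x) + b_lower)
            out_u = sigmoid(np.dot(w_upper, x) + b_upper)
            return min(out_l, out_u), max(out_l, out_u)   # (lower, upper) outputs

        x = np.array([0.2, 0.7, 0.1])                     # toy feature window
        lower, upper = rough_neuron(x, np.array([0.5, -0.3, 0.8]),
                                       np.array([0.9, 0.1, -0.2]))
        print(f"lower={lower:.3f} upper={upper:.3f}")     # roughness = upper - lower

    In a shared-weight layer, the same (w_lower, w_upper) pair would be slid over image windows, so the roughness of each output acts as a learned, uncertainty-aware feature detector.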

    Analysing imperfect temporal information in GIS using the Triangular Model

    Rough sets and fuzzy sets are two frequently used approaches for modelling and reasoning about imperfect time intervals. In this paper, we focus on imperfect time intervals that can be modelled by rough sets and use an innovative graphic model, the Triangular Model (TM), to represent this kind of imperfect time interval. This work shows that the TM is potentially advantageous for visualizing and querying imperfect time intervals, and that its analytical power can be better exploited when it is implemented in a computer application with a graphical user interface and interactive functions. Moreover, a probabilistic framework is proposed to handle uncertainty in temporal queries. We use a case study to illustrate how the unique insights gained with the TM can assist a geographical information system in exploratory spatio-temporal analysis.
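
    As a rough sketch of the idea (the paper's exact construction is not reproduced in the abstract), the TM maps a crisp interval [s, e] to the single point ((s + e)/2, (e - s)/2), so that an interval whose start and end are only known to lie within bounds maps to a region of such points. The function names and the example bounds below are illustrative only.

        def tm_point(start, end):
            """Map a crisp interval [start, end] to its Triangular Model point."""
            return ((start + end) / 2.0, (end - start) / 2.0)

        def tm_region(start_lo, start_hi, end_lo, end_hi):
            """Map a rough interval (start in [start_lo, start_hi], end in
            [end_lo, end_hi]) to the corner points of its TM region."""
            corners = [(s, e) for s in (start_lo, start_hi)
                              for e in (end_lo, end_hi) if e >= s]
            return [tm_point(s, e) for (s, e) in corners]

        # Example: an event that started between day 2 and 4 and ended between day 8 and 9.
        print(tm_point(2, 8))          # crisp interval -> one point
        print(tm_region(2, 4, 8, 9))   # rough interval -> region of points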

    Class Association Rules Mining based Rough Set Method

    This paper investigates the mining of class association rules with a rough set approach. In data mining, an association occurs between two sets of elements when one element set happens together with another. A class association rule set (CARs) is a subset of association rules with classes specified as their consequences. We present an efficient algorithm, inspired by the Apriori algorithm, for mining the finest class rule set, where support and confidence are computed based on the elementary sets of the lower approximation from rough set theory. The proposed approach has been shown to be very effective, and the rough set approach to class association discovery is much simpler than the classic association method. Comment: 10 pages, 2 figures.
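
    The abstract names the key scoring step, counting support and confidence on the elementary sets of the lower approximation, but not the full algorithm, so the sketch below only illustrates that step in a hedged form on a toy decision table. The items, class labels, and helper names are illustrative, not the paper's code.

        from collections import defaultdict

        # Toy decision table: condition item set -> class label.
        table = [
            ({"a", "b"}, "C1"),
            ({"a", "b"}, "C1"),
            ({"a"},      "C2"),
            ({"b", "c"}, "C1"),
            ({"b", "c"}, "C2"),
        ]

        def lower_approximation(table, label):
            """Indices of rows whose elementary set (identical item set) lies wholly in `label`."""
            blocks = defaultdict(set)
            for i, (items, _) in enumerate(table):
                blocks[frozenset(items)].add(i)
            return {i for block in blocks.values()
                      if all(table[j][1] == label for j in block)
                      for i in block}

        def car_support_confidence(table, antecedent, label):
            """Support/confidence of 'antecedent => label', counted on the lower approximation."""
            lower = lower_approximation(table, label)
            covered = {i for i, (items, _) in enumerate(table) if antecedent <= items}
            matching = covered & lower
            support = len(matching) / len(table)
            confidence = len(matching) / len(covered) if covered else 0.0
            return support, confidence

        print(car_support_confidence(table, {"a", "b"}, "C1"))  # -> (0.4, 1.0)

    Because only rows in the lower approximation are counted, rules generated from inconsistent elementary sets are automatically penalized, which is what makes the rough-set variant simpler than post-filtering a classic association rule set.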

    FEATURE SELECTION APPLIED TO THE TIME-FREQUENCY REPRESENTATION OF MUSCLE NEAR-INFRARED SPECTROSCOPY (NIRS) SIGNALS: CHARACTERIZATION OF DIABETIC OXYGENATION PATTERNS

    Diabetic patients may present peripheral microcirculation impairment and may benefit from physical training. Thirty-nine diabetic patients underwent monitoring of tibialis anterior muscle oxygenation during a series of voluntary ankle flexo-extensions by near-infrared spectroscopy (NIRS). NIRS signals were acquired before and after the training protocols. Sixteen control subjects were tested with the same protocol. Time-frequency distributions of Cohen's class were used to process the NIRS signals relative to the concentration changes of oxygenated and reduced hemoglobin. A total of 24 variables were measured for each subject, and the most discriminative were selected using four feature selection algorithms: QuickReduct, Genetic Rough-Set Attribute Reduction, Ant Rough-Set Attribute Reduction, and traditional ANOVA. Artificial neural networks were used to validate the discriminative power of the selected features. Results showed that the different algorithms extracted different sets of variables, but all the combinations were discriminative. The best classification accuracy was about 70%. The oxygenation variables were selected when comparing controls to diabetic patients, and when comparing diabetic patients before and after training. This preliminary study showed the importance of feature selection techniques in the NIRS assessment of diabetic peripheral vascular impairment.
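
    QuickReduct, the first of the four selection algorithms mentioned, is conventionally a greedy search that keeps adding the attribute giving the largest increase in rough-set dependency until the full dependency is reached. The sketch below is that textbook form on a tiny, made-up discretized table, not the paper's implementation; the attribute names are hypothetical.

        from collections import defaultdict

        def dependency(rows, attrs, decision):
            """Rough-set dependency: fraction of rows in the positive region of `attrs`."""
            if not attrs:
                return 0.0
            blocks = defaultdict(list)
            for r in rows:
                blocks[tuple(r[a] for a in attrs)].append(r)
            positive = sum(len(b) for b in blocks.values()
                           if len({r[decision] for r in b}) == 1)
            return positive / len(rows)

        def quickreduct(rows, attrs, decision):
            """Greedy QuickReduct: grow the reduct until its dependency matches that of all attributes."""
            full = dependency(rows, attrs, decision)
            reduct, best = set(), 0.0
            while best < full:
                gains = {a: dependency(rows, reduct | {a}, decision)
                         for a in attrs if a not in reduct}
                a_star = max(gains, key=gains.get)
                reduct.add(a_star)
                best = gains[a_star]
            return reduct

        # Toy discretized NIRS-like table (attribute names hypothetical).
        rows = [
            {"osc_freq": "low",  "hb_amp": "high", "slope": "up",   "group": "diabetic"},
            {"osc_freq": "low",  "hb_amp": "low",  "slope": "up",   "group": "control"},
            {"osc_freq": "high", "hb_amp": "high", "slope": "down", "group": "diabetic"},
            {"osc_freq": "high", "hb_amp": "low",  "slope": "down", "group": "control"},
        ]
        print(quickreduct(rows, ["osc_freq", "hb_amp", "slope"], "group"))  # e.g. {'hb_amp'}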

    A comparative study of the AHP and TOPSIS methods for implementing load shedding scheme in a pulp mill system

    The advancement of technology has encouraged mankind to design and create useful equipment and devices that users can fully exploit in various applications. A pulp mill is a heavy industry that consumes a large amount of electricity in its production, so any equipment malfunction can cause heavy losses to the company. In particular, the breakdown of one generator would cause the other generators to be overloaded; loads are then shed until the remaining generators can supply power to the other loads, and once the fault has been fixed, the load shedding scheme can be deactivated. A load shedding scheme is thus the best way to handle such a condition: selected loads are shed under this scheme to protect the generators from damage. Multi-Criteria Decision Making (MCDM) can be applied to determine the load shedding scheme in an electric power system. In this thesis, two methods, the Analytic Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), were introduced and applied, and a series of analyses was conducted. Of the two methods, the results show that TOPSIS is the better MCDM method for the load shedding scheme in the pulp mill system: it is the more effective solution because it achieves the higher percentage effectiveness of load shedding. The results of applying the AHP and TOPSIS analyses to the pulp mill system are very promising.
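
    The thesis's criteria and weights are not given in the abstract, so the sketch below is a generic TOPSIS ranking over a hypothetical decision matrix of candidate load-shedding schemes scored on made-up cost criteria; it only illustrates the method being compared against AHP.

        import numpy as np

        def topsis(matrix, weights, benefit):
            """Rank alternatives with TOPSIS.
            matrix: alternatives x criteria scores; weights: criteria weights summing to 1;
            benefit: True where larger is better, False where smaller is better."""
            m = matrix / np.linalg.norm(matrix, axis=0)          # vector-normalise each criterion
            v = m * weights                                      # weighted normalised matrix
            ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
            anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
            d_pos = np.linalg.norm(v - ideal, axis=1)
            d_neg = np.linalg.norm(v - anti, axis=1)
            return d_neg / (d_pos + d_neg)                       # closeness to the ideal solution

        # Hypothetical schemes scored on (shed load MW, restoration time, criticality).
        matrix = np.array([[12.0, 5.0, 2.0],
                           [18.0, 3.0, 4.0],
                           [10.0, 8.0, 1.0]])
        weights = np.array([0.5, 0.3, 0.2])
        benefit = np.array([False, False, False])                # all criteria are costs here
        closeness = topsis(matrix, weights, benefit)
        print("ranking (best first):", np.argsort(-closeness))

    AHP would instead derive the weights (and possibly the scores) from pairwise comparison matrices; TOPSIS then ranks alternatives by their closeness to the ideal and distance from the anti-ideal solution, which is the property the thesis evaluates.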

    Qualitative Effects of Knowledge Rules in Probabilistic Data Integration

    One of the problems in data integration is data overlap: different data sources hold data on the same real-world entities. Much of the development time in data integration projects is devoted to entity resolution. Advanced similarity measurement techniques are often used to remove semantic duplicates from the integration result or to resolve other semantic conflicts, but it proves impossible to get rid of all semantic problems in data integration. An often-used rule of thumb states that about 90% of the development effort is devoted to solving the remaining 10% of hard cases. In an attempt to significantly decrease human effort at data integration time, we have proposed an approach that stores any remaining semantic uncertainty and conflicts in a probabilistic database, enabling the result to be used meaningfully right away. The main development effort in our approach is devoted to defining and tuning knowledge rules and thresholds, which directly affect the size and quality of the integration result. We measure integration quality indirectly, by measuring the quality of answers to queries on the integrated data set in an information-retrieval-like way. The main contribution of this report is an experimental investigation of the effects and sensitivity of rule definition and threshold tuning on integration quality. It shows that setting rough safe thresholds and defining only a few rules suffices to produce a 'good enough' integration that can be used meaningfully, so our approach indeed reduces development effort rather than merely shifting it to rule definition and threshold tuning.
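
    The report's rule language and thresholds are not specified in the abstract. As a minimal illustration of the idea of "rough safe thresholds", the sketch below turns a pairwise similarity score into one of three outcomes: merge, keep both, or store both alternatives with weights as a probabilistic-style record. The threshold values and the similarity function are placeholders, not the report's settings.

        from difflib import SequenceMatcher

        T_SAME = 0.90      # above this: treat the two records as the same entity
        T_DIFF = 0.40      # below this: treat them as different entities

        def integrate(rec_a, rec_b):
            """Return a list of (tuple, probability) alternatives for the integrated result."""
            sim = SequenceMatcher(None, rec_a["name"], rec_b["name"]).ratio()
            if sim >= T_SAME:
                return [((rec_a["name"],), 1.0)]                    # certain duplicate: merge
            if sim <= T_DIFF:
                return [((rec_a["name"], rec_b["name"]), 1.0)]      # certainly distinct: keep both
            # Uncertain case: keep both interpretations, weighted by the similarity score.
            return [((rec_a["name"],), sim),
                    ((rec_a["name"], rec_b["name"]), 1.0 - sim)]

        print(integrate({"name": "J. Smith"}, {"name": "John Smith"}))

    Widening the band between the two thresholds shifts work from manual resolution to the probabilistic database, which is exactly the trade-off whose sensitivity the report investigates.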