28,751 research outputs found

    Fuzzy Integration to Standard Calculation of K-Nearest Neighbour Attributes

    Data and information have grown very rapidly in the era of Industry 4.0, and researchers, institutions and industry alike are competing to find more effective and efficient data-processing methods. Among the best-known and most widely used classification methods in data mining is K-Nearest Neighbour (KNN). The KNN algorithm works by comparing each testing record against all existing training records; this comparison is generally expressed as a closeness or similarity value between attribute records. KNN has proven effective for large datasets and for datasets with many attributes. One drawback of the KNN similarity calculation is that an attribute with a large value range contributes a large amount to the similarity, while an attribute with a small range contributes little. This is clearly unbalanced, given how widely attribute types vary in current data. One solution to this problem is to standardise all data attributes. The fuzzy model introduced by Prof. Zadeh allows a vague value to be expressed as a membership value between 0 and 1. In this study, the fuzzy model is integrated into the KNN similarity calculation to standardise all data attributes. The results show that the resulting KNN algorithm achieves an accuracy of 91.83% in the classification of credit approval.
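    The standardisation idea described in the abstract can be illustrated with a small sketch. This is not the authors' implementation: the function names, the linear (min-max style) membership used as the fuzzy mapping onto [0, 1], and the choice of Euclidean distance with a majority vote are all assumptions for illustration.

```python
import math

def fuzzy_normalise(dataset):
    """Map every attribute onto [0, 1] with a linear membership based on the
    attribute's min and max, so that attributes with large value ranges no
    longer dominate the KNN similarity calculation."""
    cols = list(zip(*dataset))
    lo = [min(c) for c in cols]
    span = [max(c) - min(c) if max(c) > min(c) else 1.0 for c in cols]
    normed = [[(v - l) / s for v, l, s in zip(row, lo, span)] for row in dataset]
    return normed, lo, span  # lo/span are reused to normalise query records

def knn_predict(train_x, train_y, query, k=3):
    """Plain KNN: majority vote among the k training records closest to
    the query under Euclidean distance on the normalised attributes."""
    ranked = sorted((math.dist(x, query), y) for x, y in zip(train_x, train_y))
    votes = [y for _, y in ranked[:k]]
    return max(set(votes), key=votes.count)
```

    A query record must be normalised with the same `lo` and `span` obtained from the training data before calling `knn_predict`.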

    A review of electricity load profile classification methods

    With the liberalisation of the electricity market in Indonesia, electricity companies will have the right to set tariff rates independently. Precise knowledge of customers' load profile classifications will therefore become essential for designing a variety of tariff options, in which the rates are in line with efficient revenue generation and encourage optimum take-up of the available electricity supply by various types of customers. Since the early days of the liberalisation of the Electricity Supply Industries (ESI), considerable effort has been made to investigate methodologies for forming optimal tariffs based on customer classes derived from various clustering and classification techniques. Clustering techniques are analytical processes used to develop groups (classes) of customers based on their behaviour, to derive representative sets of load profiles and to help build models of daily load shapes. Classification techniques, by contrast, start by analysing load demand data (LDD) from various customers and then identify the groups into which these customers' LDD fall. In this paper we review some of the popular clustering algorithms and explain the differences between the methods.
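    As an illustration of the clustering step the review describes, the sketch below groups daily load profiles with k-means, one of the commonly reviewed algorithms. The hourly-profile representation, the deterministic seeding and the fixed iteration count are assumptions made for the sake of a minimal, reproducible example, not a method from the paper.

```python
def kmeans(profiles, k, iters=20):
    """Minimal k-means: group daily load profiles (equal-length lists of
    demand values) into k classes; return (centroids, labels)."""
    # Naive deterministic seeding (first k profiles) for reproducibility.
    centroids = [list(p) for p in profiles[:k]]
    labels = [0] * len(profiles)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        labels = [
            min(range(k),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])))
            for p in profiles
        ]
        # Update step: centroid becomes the mean profile of its members
        # (an empty cluster keeps its previous centroid).
        for j in range(k):
            members = [p for p, l in zip(profiles, labels) if l == j]
            if members:
                centroids[j] = [sum(v) / len(members) for v in zip(*members)]
    return centroids, labels
```

    The resulting centroids are exactly the "representative sets of load profiles" mentioned above: one typical daily shape per customer class.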

    Probabilistic latent semantic analysis as a potential method for integrating spatial data concepts

    In this paper we explore the use of Probabilistic Latent Semantic Analysis (PLSA) as a method for quantifying semantic differences between land cover classes. The results are promising, revealing ‘hidden’ or not easily discernible data concepts. PLSA provides a ‘bottom-up’ approach to interoperability problems for users, in the face of the ‘top-down’ solutions provided by formal ontologies. We note the potential for a meta-problem of how to interpret the discovered concepts, and the need for further research to reconcile the top-down and bottom-up approaches.
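    The core of PLSA is an EM procedure over a document-word (here, class-term) count matrix. The sketch below is a generic minimal PLSA, not the authors' spatial-data pipeline; treating each land cover class as a "document" and comparing classes by their topic mixtures is an assumption for illustration.

```python
import random

def plsa(counts, n_topics, iters=50, seed=1):
    """Minimal PLSA via EM on a document-word count matrix (list of lists).
    Returns P(topic|doc) and P(word|topic)."""
    rng = random.Random(seed)
    n_docs, n_words = len(counts), len(counts[0])
    normalise = lambda row: [v / sum(row) for v in row]
    # Random positive initialisation, normalised to distributions.
    p_z_d = [normalise([rng.random() for _ in range(n_topics)]) for _ in range(n_docs)]
    p_w_z = [normalise([rng.random() for _ in range(n_words)]) for _ in range(n_topics)]
    for _ in range(iters):
        new_zd = [[1e-12] * n_topics for _ in range(n_docs)]
        new_wz = [[1e-12] * n_words for _ in range(n_topics)]
        for d in range(n_docs):
            for w in range(n_words):
                if counts[d][w] == 0:
                    continue
                # E-step: posterior P(z | d, w) up to normalisation.
                post = [p_z_d[d][z] * p_w_z[z][w] for z in range(n_topics)]
                s = sum(post)
                # M-step accumulation, weighted by the observed count.
                for z in range(n_topics):
                    q = counts[d][w] * post[z] / s
                    new_zd[d][z] += q
                    new_wz[z][w] += q
        p_z_d = [normalise(r) for r in new_zd]
        p_w_z = [normalise(r) for r in new_wz]
    return p_z_d, p_w_z
```

    A semantic difference between two classes can then be quantified as a distance between their P(topic|doc) mixture vectors.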

    Combining Labelled and Unlabelled Data in the Design of Pattern Classification Systems

    There has been much interest in techniques that incorporate knowledge from unlabelled data into a supervised learning system, but less effort has been made to compare the effectiveness of different approaches on real-world problems and to analyse the behaviour of the learning system when using different amounts of unlabelled data. This paper presents an analysis of the performance of supervised methods reinforced by unlabelled data, and of some semi-supervised approaches, using different ratios of labelled to unlabelled samples. The experimental results show that, when supported by unlabelled samples, much less labelled data is generally required to build a classifier without compromising classification performance. If only a very limited amount of labelled data is available, the results show high variability, and the performance of the final classifier depends more on how reliable the labelled samples are than on the use of additional unlabelled data. Semi-supervised clustering that utilises both labelled and unlabelled data has been shown to offer the most significant improvements when natural clusters are present in the problem considered.
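    One common way to fold unlabelled data into a supervised learner is self-training: fit on the labelled pool, pseudo-label the unlabelled points the model is confident about, and refit. The sketch below uses a nearest-centroid classifier with a distance-margin confidence test; this specific classifier, the `margin` threshold and the function names are illustrative assumptions, not the paper's experimental setup.

```python
def nearest_centroid_fit(xs, ys):
    """One centroid (mean vector) per class label."""
    cents = {}
    for c in set(ys):
        members = [x for x, y in zip(xs, ys) if y == c]
        cents[c] = [sum(v) / len(members) for v in zip(*members)]
    return cents

def predict(cents, x):
    """Label of the nearest class centroid."""
    return min(cents, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, cents[c])))

def self_train(lab_x, lab_y, unlab_x, rounds=5, margin=0.5):
    """Self-training sketch: repeatedly pseudo-label the unlabelled points
    whose distance margin between the two closest class centroids exceeds
    `margin`, add them to the labelled pool, and refit."""
    pool_x, pool_y = list(lab_x), list(lab_y)
    remaining = list(unlab_x)
    for _ in range(rounds):
        cents = nearest_centroid_fit(pool_x, pool_y)
        accepted = []
        for x in remaining:
            ranked = sorted(
                (sum((a - b) ** 2 for a, b in zip(x, c)) ** 0.5, lab)
                for lab, c in cents.items()
            )
            # Accept only confident points: clear gap between best two classes.
            if len(ranked) < 2 or ranked[1][0] - ranked[0][0] >= margin:
                pool_x.append(x)
                pool_y.append(ranked[0][1])
                accepted.append(x)
        remaining = [x for x in remaining if x not in accepted]
        if not accepted:
            break
    return nearest_centroid_fit(pool_x, pool_y)
```

    The margin test matters for the reliability issue raised above: an aggressive threshold lets unreliable pseudo-labels pollute the pool, which mirrors the paper's observation that label quality can outweigh the benefit of extra unlabelled data.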