
    Evolutionary design of nearest prototype classifiers

    In pattern classification problems, many works have been carried out with the aim of designing good classifiers from different perspectives. These works achieve very good results in many domains. However, they generally depend heavily on some crucial design parameters, which must be found by trial and error or by automatic methods, such as heuristic search and genetic algorithms, that strongly decrease the performance of the method. For instance, in nearest prototype approaches, the main parameters are the number of prototypes to use, the initial set, and a smoothing parameter. In this work, an evolutionary approach based on a Nearest Prototype Classifier (ENPC) is introduced in which no parameters are involved, thus avoiding the problems that classical methods have in tuning and searching for appropriate values. The algorithm is based on the evolution of a set of prototypes that can execute several operators in order to increase their quality in a local sense, with high classification accuracy emerging for the whole classifier. This new approach has been tested on four different classical domains, including artificial distributions such as spiral and uniformly distributed data sets, the Iris data set, and an application domain about diabetes. In all cases, the experiments show successful results, not only in classification accuracy but also in the number and distribution of the prototypes achieved.
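The nearest-prototype decision rule that this line of work builds on can be sketched as follows; the prototypes, labels, and test points below are invented toy values for illustration, not data or parameters from the paper:

```python
import numpy as np

def nearest_prototype_predict(prototypes, labels, x):
    """Assign x the label of its closest prototype (Euclidean distance)."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    return labels[int(np.argmin(dists))]

# Two prototypes per class on a toy 2-D problem (illustrative data only).
protos = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0], [6.0, 6.0]])
labels = np.array([0, 0, 1, 1])

print(nearest_prototype_predict(protos, labels, np.array([0.5, 0.2])))  # 0
print(nearest_prototype_predict(protos, labels, np.array([5.5, 5.8])))  # 1
```

The evolutionary part of ENPC concerns how the prototype set itself is grown and adapted; this sketch only shows the fixed classification rule that any such prototype set induces.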

    An exploratory analysis of methods for extracting credit risk rules

    This paper performs a comparative analysis of two kinds of methods for extracting credit risk rules. On one hand, we have a set of methods based on the combination of an optimization technique initialized with a neural network. On the other hand, there are partition algorithms based on trees. We show results obtained on two real databases. The main finding is that the first set of methods gives a set of rules with reduced cardinality and acceptable classification precision. This is a desirable property for financial institutions, which want to decide on credit approval face to face with customers. Bank employees who deal daily with retail customers can easily be trained to select the best customers by using this kind of solution. XIII Workshop Bases de Datos y Minería de Datos (WBDMD). Red de Universidades con Carreras en Informática (RedUNCI).

    Simplifying credit scoring rules using LVQ+PSO

    One of the key elements in the banking industry relies on the appropriate selection of customers. In order to manage credit risk, banks dedicate special efforts to classifying customers according to their risk. The usual decision-making process consists in gathering personal and financial information about the borrower. Processing this information can be time-consuming and presents some difficulties due to the heterogeneous structure of the data. In this paper we offer an alternative method that is able to classify customers' profiles from numerical and nominal attributes. The key feature of our method, called LVQ+PSO, is the finding of a reduced set of classifying rules, made possible by the combination of a competitive neural network with an optimization technique. These rules constitute a predictive model for credit risk approval. The reduced number of rules makes this method not only useful for credit officers aiming to make quick decisions about granting a credit, but it could also act as a borrower's self-selection tool. Our method was applied to an actual database of a consumer credit financial institution in Ecuador, obtaining very satisfactory results. Future research lines are outlined.
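The competitive-network half of such a method can be illustrated with the standard LVQ1 update rule. This is a generic sketch: the learning rate and toy data are assumptions, and the paper's actual coupling with PSO and its rule extraction are not reproduced here:

```python
import numpy as np

def lvq1_step(prototypes, proto_labels, x, y, lr=0.1):
    """One LVQ1 update: pull the winning prototype toward sample x if the
    labels match, push it away otherwise. Returns the winner's index."""
    winner = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
    sign = 1.0 if proto_labels[winner] == y else -1.0
    prototypes[winner] += sign * lr * (x - prototypes[winner])
    return winner

protos = np.array([[0.0, 0.0], [4.0, 4.0]])
plabels = np.array([0, 1])

# A correctly labelled sample drags prototype 0 toward it:
w = lvq1_step(protos, plabels, np.array([1.0, 0.0]), 0)
# winner is 0; prototype 0 moves from [0.0, 0.0] to [0.1, 0.0]
```

After training, each prototype (one region of the input space plus a class label) can be read off as a candidate classification rule, which is what keeps the extracted rule set small.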

    Unsupervised Pattern Recognition for the Classification of EMG Signals

    The shapes and firing rates of motor unit action potentials (MUAPs) in an electromyographic (EMG) signal provide an important source of information for the diagnosis of neuromuscular disorders. In order to extract this information from EMG signals recorded at low to moderate force levels, it is required: i) to identify the MUAPs composing the EMG signal, ii) to classify MUAPs with similar shapes, and iii) to decompose the superimposed MUAP waveforms into their constituent MUAPs. For the classification of MUAPs, two different pattern recognition techniques are presented: i) an artificial neural network (ANN) technique based on unsupervised learning, using a modified version of the self-organizing feature maps (SOFM) algorithm and learning vector quantization (LVQ), and ii) a statistical pattern recognition technique based on the Euclidean distance. A total of 1213 MUAPs obtained from 12 normal subjects, 13 subjects suffering from myopathy, and 15 subjects suffering from motor neuron disease were analyzed. The success rate was 97.6% for the ANN technique and 95.3% for the statistical technique. For the decomposition of the superimposed waveforms, a technique is presented that uses cross-correlation for MUAP alignment and a combination of Euclidean distance and area measures to classify the decomposed waveforms. The success rate for the decomposition procedure was 90%.
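The cross-correlation alignment step mentioned above can be sketched generically; the waveforms below are toy signals invented for this example, not EMG data, and the paper's full decomposition procedure is not reproduced:

```python
import numpy as np

def align_by_xcorr(template, signal):
    """Return the integer shift that best aligns `signal` to `template`,
    found as the peak of their full cross-correlation."""
    corr = np.correlate(signal, template, mode="full")
    return int(np.argmax(corr)) - (len(template) - 1)

tpl = np.array([0.0, 1.0, 0.0, 0.0])
sig = np.array([0.0, 0.0, 1.0, 0.0])  # same spike, delayed by one sample
print(align_by_xcorr(tpl, sig))  # 1
```

Once candidate waveforms are aligned this way, distance measures such as the Euclidean distance can be compared on a like-for-like basis.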

    Simplifying credit scoring rules using LVQ + PSO

    Purpose: One of the key elements in the banking industry relies on the appropriate selection of customers. To manage credit risk, banks dedicate special efforts to classifying customers according to their risk. The usual decision-making process consists of gathering personal and financial information about the borrower. Processing this information can be time-consuming and presents some difficulties because of the heterogeneous structure of the data. Design/methodology/approach: This paper presents an alternative method that is able to generate rules that work not only on numerical attributes but also on nominal ones. The key feature of this method, called learning vector quantization and particle swarm optimization (LVQ + PSO), is the finding of a reduced set of classifying rules, made possible by the combination of a competitive neural network with an optimization technique. Findings: These rules constitute a predictive model for credit risk approval. The reduced number of rules makes this method useful for credit officers aiming to make decisions about granting a credit; it could also serve as an orientation for a borrower's self-evaluation of her/his creditworthiness. Research limitations/implications: Although the tests conducted showed no evidence of dependence between the results and the initial size of the LVQ network, it is considered desirable to repeat the measurements in the future using an LVQ network of minimum size and a variable-population version of PSO, to explore the solution space adequately. Practical implications: In the past decades, there has been an increase in consumer credit. Retail banking is a growing industry. Not only has there been a boom in credit card memberships, especially in emerging economies, but also an increase in small consumption credits. For example, it is very common in emerging economies for families to buy home appliances on installments. In those countries, the association of a home appliance shop with a financial institution is usual, to provide customers with quick-decision credit line facilities. The existence of such a financial instrument helps to boost sales. This association generates a conflict of interest. On one hand, the home appliance shop wants to sell products to all customers and therefore has an interest in promoting a generous credit policy. On the other hand, the financial institution wants to maximize the revenue from credits, leading to strict surveillance of loan losses. Having a fair and transparent credit-granting policy favors a good business relationship between home appliance shops and financial institutions. One way of developing such a policy is to construct objective rules for deciding whether to grant or deny a credit application. Social implications: Better credit decision rules generate enhanced risk sharing. They also improve transparency in credit acceptance decisions, leaving less room for arbitrary decisions. Originality/value: This study develops a new method that combines a competitive neural network with an optimization technique. It was applied to a real database of a financial institution in a developing country. Instituto de Investigación en Informática.

    SELECTING NEURAL NETWORK ARCHITECTURE FOR INVESTMENT PROFITABILITY PREDICTIONS

    After production and operations, finance and investments form one of the most frequent areas of neural network applications in business. There is still a lack of standardized paradigms that can determine the efficiency of certain NN architectures in a particular problem domain. The selection of an NN architecture needs to take into consideration the type of problem, the nature of the data in the model, as well as some strategies based on result comparison. The paper describes previous research in this area and suggests a forward strategy for selecting the best NN algorithm and structure. Since the strategy includes both parameter-based and variable-based tests, it can be used for selecting NN architectures as well as for extracting models. The backpropagation, radial basis, modular, LVQ, and probabilistic neural network algorithms were used on two independent data sets: stock market and credit scoring data. The results show that neural networks give better accuracy compared to multiple regression and logistic regression models. Since it is model-independent, the strategy can be used by researchers and professionals in other areas of application.

    Proceedings of the Third International Workshop on Neural Networks and Fuzzy Logic, volume 2

    This volume includes papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by the National Aeronautics and Space Administration and cosponsored by the University of Houston, Clear Lake, held 1-3 June 1992 at the Lyndon B. Johnson Space Center in Houston, Texas. During the three days, approximately 50 papers were presented. Technical topics addressed included adaptive systems; learning algorithms; network architectures; vision; robotics; neurobiological connections; speech recognition and synthesis; fuzzy set theory and applications; control and dynamics processing; space applications; fuzzy logic and neural network computers; approximate reasoning; and multiobject decision making.

    Rough Neural Networks Architecture For Improving Generalization In Pattern Recognition

    Neural networks are found to be attractive trainable machines for pattern recognition. The capability of these models to accommodate a wide variety and variability of conditions, and their ability to imitate brain functions, make them a popular research area. This research focuses on developing hybrid rough neural networks, novel approaches that are expected to provide superior performance in detection and automatic target recognition. In this thesis, hybrid architectures combining rough set theory and neural networks have been investigated, developed, and implemented. The first hybrid approach provides a novel neural network referred to as the Rough Shared-weight Neural Network (RSNN). It uses the concept of approximation based on rough neurons for feature extraction, and follows the methodology of weight sharing. The network stages are a feature extraction network and a classification network. The extraction network is composed of rough neurons that account for the upper and lower approximations, and it embeds a membership function in place of ordinary activation functions. The neural network learns the rough set's upper and lower approximations as feature extractors simultaneously with classification. The RSNN implements a novel approximation transform. The basic design of the network is provided together with the learning rules. The architecture provides a novel approach to pattern recognition and is expected to be robust for any pattern recognition problem. The second hybrid approach consists of two stand-alone subsystems and is referred to as the Rough Neural Network (RNN). The extraction network extracts detectors that represent the pattern classes to be supplied to the classification network. It works as a filter for the original distilled features based on equivalence relations and rough set reduction, while the second subsystem is responsible for classifying the outputs of the first. The two approaches were applied to image pattern recognition problems. 
    The RSNN was applied to an automatic target recognition problem. The data are Synthetic Aperture Radar (SAR) image scenes of tanks and background. The RSNN provides a novel methodology for designing nonlinear filters without prior knowledge of the problem domain. The RNN was used to detect patterns present in a satellite image. A novel feature extraction algorithm was developed to extract the feature vectors; it enhances the recognition ability of the system compared to manual extraction and labeling of pattern classes. The performance of the rough backpropagation network is improved compared to a backpropagation network of the same architecture. The network has been designed to produce a detection plane for the desired pattern. The hybrid approaches developed in this thesis provide novel techniques for recognizing static and dynamic representations of patterns. In both domains, rough set theory improved the generalization of the neural network paradigms. The methodologies are theoretically robust for any pattern recognition problem and have been demonstrated practically in image environments.
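The rough set upper and lower approximations that the rough neurons build on can be illustrated on a toy universe; the partition and target set below are invented for this sketch and do not come from the thesis:

```python
def rough_approximations(equiv_classes, target):
    """Lower/upper approximations of `target` under an equivalence partition:
    lower = union of classes fully inside target (certain members),
    upper = union of classes that intersect target (possible members)."""
    lower, upper = set(), set()
    for block in equiv_classes:
        if block <= target:
            lower |= block
        if block & target:
            upper |= block
    return lower, upper

# Toy universe {1..6} partitioned into indiscernibility classes.
partition = [{1, 2}, {3, 4}, {5, 6}]
X = {1, 2, 3}
lo, up = rough_approximations(partition, X)
print(lo)  # {1, 2}
print(up)  # {1, 2, 3, 4}
```

The gap between the two approximations (here {3, 4}) is the boundary region; a set with a non-empty boundary is "rough", which is the uncertainty the hybrid networks exploit.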