
    Application of artificial neural network in market segmentation: A review on recent trends

    Despite the significance of the Artificial Neural Network (ANN) algorithm for market segmentation, a comprehensive literature review and classification system are still needed to identify future trends in market segmentation research. The present work is the first identifiable academic literature review of the application of neural-network-based techniques to segmentation. Our study provides an academic database of the literature published between 2000 and 2010 and proposes a classification scheme for the articles. One thousand (1000) articles were identified, and around 100 relevant articles were subsequently reviewed and classified according to the major focus of each paper. The findings indicate that ANN-based applications receive the most research attention, with self-organizing-map-based applications second in use for segmentation. The models commonly used for market segmentation include data mining and intelligent systems. Our analysis furnishes a roadmap to guide future research and to aid knowledge accretion and establishment pertaining to the application of ANN-based techniques in market segmentation. The present work will therefore contribute significantly to both industry and academic research in business and marketing as a sustainable and valuable knowledge source on market segmentation and the future trend of ANN application in segmentation.
    Comment: 24 pages, 7 figures, 3 tables
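
    As a minimal, hedged illustration of the kind of ANN technique the review surveys (not taken from the paper), the sketch below clusters synthetic customer data with a small self-organizing map; the grid size, feature count and learning schedule are arbitrary illustrative choices.

```python
# Illustrative sketch only: SOM-based customer segmentation on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
customers = rng.random((500, 4))          # 500 customers x 4 behavioural features (synthetic)

rows, cols, dim = 3, 3, customers.shape[1]
weights = rng.random((rows * cols, dim))  # one prototype vector per map node
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)

for epoch in range(50):
    lr = 0.5 * (1 - epoch / 50)           # linearly decaying learning rate
    sigma = 1.5 * (1 - epoch / 50) + 0.5  # shrinking neighbourhood radius
    for x in customers:
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
        d = np.linalg.norm(grid - grid[bmu], axis=1)            # grid distance to the BMU
        h = np.exp(-(d ** 2) / (2 * sigma ** 2))                # neighbourhood function
        weights += lr * h[:, None] * (x - weights)              # pull prototypes toward x

segments = np.array([np.argmin(np.linalg.norm(weights - x, axis=1)) for x in customers])
print("customers per segment:", np.bincount(segments, minlength=rows * cols))
```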

    Opportunistic Self Organizing Migrating Algorithm for Real-Time Dynamic Traveling Salesman Problem

    Self Organizing Migrating Algorithm (SOMA) is a meta-heuristic based on the self-organizing behavior of individuals in a simulated social environment. SOMA performs iterative computations on a population of potential solutions in a given search space to obtain an optimal solution. In this paper, an Opportunistic Self Organizing Migrating Algorithm (OSOMA) is proposed that introduces a novel strategy for generating perturbations effectively. This strategy allows an individual to span more candidate solutions and thus produce better solutions. A comprehensive analysis of OSOMA on multi-dimensional unconstrained benchmark test functions is performed. OSOMA is then applied to solve the real-time Dynamic Traveling Salesman Problem (DTSP). The real-time DTSP has been formulated and simulated using real-time data from Google Maps with a varying cost metric between any two cities. Although DTSP is a very common and intuitive model in the real world, its presence in the literature is still limited. OSOMA performs exceptionally well on the problems mentioned above. To substantiate this claim, the performance of OSOMA is compared with SOMA, Differential Evolution and Particle Swarm Optimization.
    Comment: 6 pages, published in CISS 201
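
    For context, the sketch below shows the baseline SOMA migration loop (the standard AllToOne strategy) on a toy benchmark; the objective function, population size and control parameters are illustrative assumptions, and the paper's opportunistic perturbation strategy and the DTSP cost model are not reproduced here.

```python
# Illustrative sketch of baseline SOMA (AllToOne), not the OSOMA variant from the paper.
import numpy as np

def sphere(x):                      # simple benchmark objective (assumed for illustration)
    return np.sum(x ** 2)

rng = np.random.default_rng(1)
pop_size, dim = 20, 10
prt, step, path_length, migrations = 0.3, 0.11, 3.0, 50

pop = rng.uniform(-5.0, 5.0, (pop_size, dim))
fitness = np.array([sphere(ind) for ind in pop])

for _ in range(migrations):
    leader = pop[np.argmin(fitness)].copy()           # best individual leads this migration
    for i in range(pop_size):
        best_pos, best_fit = pop[i].copy(), fitness[i]
        for t in np.arange(step, path_length, step):  # sample points along the migration path
            prt_vector = (rng.random(dim) < prt).astype(float)  # random perturbation mask
            candidate = pop[i] + (leader - pop[i]) * t * prt_vector
            f = sphere(candidate)
            if f < best_fit:
                best_pos, best_fit = candidate, f
        pop[i], fitness[i] = best_pos, best_fit       # keep the best point found on the path

print("best value found:", fitness.min())
```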

    Multiple 2D self organising map network for surface reconstruction of 3D unstructured data

    Surface reconstruction is a challenging task in reverse engineering because the reconstructed surface must closely resemble the original object based on the data obtained. The data are mostly unstructured, so there is not enough information and an incorrect surface may be obtained. The data should therefore be reorganised by finding the correct topology with minimum surface error. Previous studies showed that the Self Organising Map (SOM) model, the conventional surface approximation approach with Non Uniform Rational B-Splines (NURBS) surfaces, and optimisation methods such as Genetic Algorithm (GA), Differential Evolution (DE) and Particle Swarm Optimisation (PSO) are widely used for surface reconstruction. However, the model, approach and optimisation methods still suffer from unstructured-data and accuracy problems. The aims of this research are therefore to propose a Cube SOM (CSOM) model with multiple 2D SOM networks for organising unstructured surface data, and to propose an optimised surface approximation approach for generating the NURBS surfaces. GA, DE and PSO are implemented to minimise the surface error by adjusting the NURBS control points. To test and validate the proposed model and approach, four primitive-object datasets and one medical-image dataset are used. Three performance measurements are used in the evaluation: Average Quantisation Error (AQE) and Number Of Vertices (NOV) for the CSOM model, and surface error for the proposed optimised surface approximation approach. The AQE of the CSOM model improved by 64% and 66% when compared with 2D and 3D SOM respectively. The NOV of the CSOM model was reduced from 8000 to 2168 compared with 3D SOM. The surface error of the optimised surface approximation approach improved by 7% compared with the conventional approach. The proposed CSOM model and optimised surface approximation approach successfully reconstructed the surfaces of all five datasets with better performance on the three performance measurements used in the evaluation
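
    As a hedged sketch of the underlying idea (a single 2D SOM grid organising an unstructured 3D point cloud so that the ordered prototypes can later serve as a control net for surface approximation), the example below uses synthetic data; the surface, grid size, training schedule and AQE computation are illustrative assumptions, not the paper's CSOM model.

```python
# Illustrative sketch: one 2D SOM grid fitted to unstructured 3D samples.
import numpy as np

rng = np.random.default_rng(2)
# synthetic unstructured samples from a wavy surface z = sin(x) * cos(y)
xy = rng.uniform(-2, 2, (2000, 2))
points = np.column_stack([xy, np.sin(xy[:, 0]) * np.cos(xy[:, 1])])

rows, cols = 10, 10
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
weights = rng.uniform(points.min(0), points.max(0), (rows * cols, 3))

for epoch in range(30):
    lr = 0.4 * (1 - epoch / 30)
    sigma = 3.0 * (1 - epoch / 30) + 0.5
    for p in points:
        bmu = np.argmin(np.linalg.norm(weights - p, axis=1))
        h = np.exp(-np.linalg.norm(grid - grid[bmu], axis=1) ** 2 / (2 * sigma ** 2))
        weights += lr * h[:, None] * (p - weights)

# Average quantisation error: mean distance of each sample to its best-matching node
aqe = np.mean([np.min(np.linalg.norm(weights - p, axis=1)) for p in points])
print(f"AQE of the organised grid: {aqe:.4f}")
control_net = weights.reshape(rows, cols, 3)   # ordered net usable as NURBS control points
```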

    Multistrategy Self-Organizing Map Learning for Classification Problems

    Multistrategy learning of the Self-Organizing Map (SOM) and Particle Swarm Optimization (PSO) is commonly applied in the clustering domain because of its ability to handle complex data characteristics. However, some of these multistrategy learning architectures have weaknesses, such as slow convergence and a tendency to become trapped in local minima. This paper proposes multistrategy learning of the SOM lattice structure with Particle Swarm Optimisation, called ESOMPSO, for solving various classification problems. The SOM lattice structure is enhanced by introducing a new hexagon formulation for better mapping quality in data classification and labeling. The weights of the enhanced SOM are optimised using PSO to obtain better output quality. The proposed method has been tested on various standard datasets with substantial comparisons against existing SOM networks and various distance measurements. The results show that the proposed method yields promising results, with better average accuracy and quantisation errors than the other methods, as well as convincing significance tests
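
    A minimal sketch of the general idea, PSO refining SOM prototype weights for a classification task, is given below; the toy two-class data, lattice size, fitness function and PSO settings are illustrative assumptions, and the paper's hexagonal lattice enhancement is not reproduced.

```python
# Illustrative sketch: global-best PSO tuning a small set of SOM-style prototypes.
import numpy as np

rng = np.random.default_rng(3)
# toy labelled data: two Gaussian blobs
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

n_nodes = 4                                        # small lattice of prototype vectors
def quantisation_error(flat_weights):
    w = flat_weights.reshape(n_nodes, 2)
    return np.mean(np.min(np.linalg.norm(X[:, None] - w[None], axis=2), axis=1))

n_particles, dim, iters = 30, n_nodes * 2, 100
pos = rng.uniform(X.min(), X.max(), (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([quantisation_error(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([quantisation_error(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

# label each prototype by majority vote of the samples it wins, then classify
w = gbest.reshape(n_nodes, 2)
assign = np.argmin(np.linalg.norm(X[:, None] - w[None], axis=2), axis=1)
labels = [np.bincount(y[assign == k], minlength=2).argmax() if np.any(assign == k) else 0
          for k in range(n_nodes)]
acc = np.mean([labels[a] == t for a, t in zip(assign, y)])
print(f"quantisation error: {pbest_f.min():.3f}, training accuracy: {acc:.2f}")
```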

    Learning enhancement of radial basis function network with particle swarm optimization

    The back propagation (BP) algorithm is the most common technique for Artificial Neural Network (ANN) learning, including the Radial Basis Function Network. However, the major disadvantages of BP are its relatively slow convergence rate and its tendency to become trapped in local minima. To overcome these problems, Particle Swarm Optimization (PSO) has been implemented to enhance ANN learning and increase network performance in terms of convergence rate and accuracy. In a Back Propagation Radial Basis Function Network (BP-RBFN), many elements must be considered, including the number of input nodes, hidden nodes and output nodes, the learning rate, bias, minimum error, and activation/transfer functions. These elements affect the speed of RBF Network learning. In this study, Particle Swarm Optimization (PSO) is incorporated into the RBF Network to enhance its learning performance. Two algorithms have been developed for error optimization, Back Propagation of Radial Basis Function Network (BP-RBFN) and Particle Swarm Optimization of Radial Basis Function Network (PSO-RBFN), to seek and generate better network performance. The results show that PSO-RBFN gives promising outputs with a faster convergence rate and better classification compared to BP-RBFN
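
    The sketch below illustrates the PSO-RBFN idea in miniature: a swarm searches the output-layer weights of an RBF network instead of adjusting them by gradient descent. The toy regression task, fixed centres, width and swarm settings are assumptions for illustration, not the study's experimental setup.

```python
# Illustrative sketch: PSO tuning the output weights of a small RBF network.
import numpy as np

rng = np.random.default_rng(4)
X = np.linspace(-3, 3, 120)[:, None]
t = np.sin(X[:, 0]) + 0.1 * rng.normal(size=120)     # noisy regression target

centres = X[rng.choice(len(X), 8, replace=False)]    # fixed RBF centres (sampled inputs)
width = 1.0
Phi = np.exp(-np.linalg.norm(X[:, None] - centres[None], axis=2) ** 2 / (2 * width ** 2))

def mse(w):                                          # network error for one weight vector
    return np.mean((Phi @ w - t) ** 2)

n_particles, dim, iters = 25, centres.shape[0], 200
pos = rng.normal(0, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print(f"final training MSE with PSO-tuned output weights: {pbest_f.min():.4f}")
```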

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged from FNN optimization practice, such as evolving neural networks (NN), cooperative coevolution NN, complex-valued NN, deep learning, extreme learning machines, quantum NN, etc. Additionally, it provides interesting research challenges for future research to cope with the present information-processing era
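
    As a hedged sketch of one metaheuristic viewpoint the review covers, the example below evolves the weights of a tiny feedforward network with a simple (1+lambda) evolution strategy on the XOR task; the network size, mutation scale and task are illustrative assumptions rather than a method from the article.

```python
# Illustrative sketch: gradient-free (1+lambda) evolution of FNN weights on XOR.
import numpy as np

rng = np.random.default_rng(5)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 1, 1, 0], dtype=float)

n_in, n_hid = 2, 4
dim = n_in * n_hid + n_hid + n_hid + 1               # W1, b1, w2, b2 flattened

def forward(w, x):
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    b1 = w[n_in * n_hid:n_in * n_hid + n_hid]
    w2 = w[n_in * n_hid + n_hid:n_in * n_hid + 2 * n_hid]
    b2 = w[-1]
    h = np.tanh(x @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ w2 + b2)))          # sigmoid output

def loss(w):
    return np.mean((forward(w, X) - t) ** 2)

parent = rng.normal(0, 1, dim)
parent_loss = loss(parent)
for gen in range(500):                               # (1+lambda) evolution strategy
    offspring = parent + 0.3 * rng.normal(0, 1, (10, dim))
    losses = np.array([loss(o) for o in offspring])
    if losses.min() < parent_loss:                   # keep the best child if it improves
        parent, parent_loss = offspring[np.argmin(losses)], losses.min()

print("final MSE:", round(parent_loss, 4), "predictions:", np.round(forward(parent, X), 2))
```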