6 research outputs found

    Dynamic learning with neural networks and support vector machines

    Get PDF
    Neural networks have proven to be universal approximators of nonlinear continuous functions to arbitrary accuracy, and they have been very successful in a variety of learning and prediction tasks. However, supervised learning with neural networks has limitations: the black-box nature of its solutions, trial-and-error selection of network parameters, the danger of overfitting, and convergence to local rather than global minima. In some applications, a fixed network structure also fails to account for how prediction performance changes as the amount of available data grows. To address these limitations, three new approaches are proposed to improve prediction accuracy. (1) A dynamic learning model using an evolutionary connectionist approach: in applications where the amount of available data increases over time, an optimization process determines the number of input neurons and the number of hidden-layer neurons, and the resulting globally optimized network structure is iteratively and dynamically reconfigured and updated as new data arrive. (2) Improved generalization using a recurrent neural network with Bayesian regularization: a recurrent network has an inherent capability to develop an internal memory, which may naturally extend beyond the externally provided lag space; moreover, Bayesian regularization adds a penalty term on the sum of connection weights to the training objective, improving generalization performance and lowering susceptibility to overfitting. (3) An adaptive prediction model using support vector machines: training a support vector machine minimizes an upper bound on the generalization error, composed of the empirical training error plus a regularized confidence interval, which ultimately yields better generalization performance; this learning process is iteratively and dynamically updated after every occurrence of new data, in order to capture the most recent features hidden inside the data sequence. All the proposed approaches have been successfully applied and validated on software reliability prediction and electric power load forecasting. Quantitative results show that they achieve better prediction accuracy than existing approaches.
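    The abstract's two recurring ideas, a weight penalty added to the training loss and refitting the model each time new data arrive, can be illustrated with a toy sketch. This is not the thesis's method (which uses neural networks and SVMs); it is a minimal stand-in using a one-weight ridge-penalized linear predictor over a sliding window, with all names and parameters invented for illustration.

    ```python
    # Toy sketch (not the thesis's algorithm): a penalty on the squared weight
    # shrinks the fit, and the model is refit on the most recent window every
    # time a new observation arrives.

    def fit_ridge(xs, ys, alpha=0.1):
        """Closed-form ridge regression for a single weight plus bias."""
        n = len(xs)
        mx = sum(xs) / n
        my = sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        w = sxy / (sxx + alpha)   # the penalty term shrinks the weight
        b = my - w * mx
        return w, b

    def adaptive_predict(series, window=4, alpha=0.1):
        """Refit on the latest `window` lag pairs after each new point."""
        preds = []
        for t in range(window + 1, len(series)):
            xs = series[t - window - 1:t - 1]   # lagged inputs
            ys = series[t - window:t]           # next-step targets
            w, b = fit_ridge(xs, ys, alpha)
            preds.append(w * series[t - 1] + b)
        return preds
    ```

    On a steadily growing series the refit predictor tracks the next value closely; the same refit-on-arrival loop is the shape of the "iteratively and dynamically updated" scheme the abstract describes, with the ridge term standing in for the regularization component.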

    Automated network optimisation using data mining as support for economic decision systems

    Get PDF
    The evolution from wired voice communications to wireless and cloud computing services has led to the rapid growth of wireless communication companies attempting to meet consumer needs. While these companies have generally been able to achieve quality of service (QoS) high enough to meet most consumer demands, the recent growth in data-hungry services, in addition to wireless voice communication, has placed significant stress on the infrastructure and has begun to translate into increased QoS issues. As a result, wireless providers are finding it difficult to meet demand and to deal with an overwhelming volume of mobile data. Many telecommunication service providers have turned to data analytics techniques to discover hidden insights for fraud detection, customer churn detection and credit risk analysis. However, most are ill-equipped to prioritise expansion decisions and to optimise network faults and costs so as to ensure customer satisfaction and optimal profitability. The contribution of this thesis to the decision-making process is significant, as it proposes a network optimisation scheme using data mining algorithms to develop a monitoring framework capable of troubleshooting network faults while optimising costs based on financial evaluations. All the data mining experiments contribute to the development of a super-framework that has been tested on real data to demonstrate that data mining techniques play a crucial role in predicting network optimisation actions. Finally, the insights extracted from the super-framework demonstrate that machine learning mechanisms can draw out promising solutions for network optimisation decisions, customer segmentation, customer churn prediction and revenue management. The outputs of the thesis seek to help wireless providers determine the QoS factors that should be addressed for an efficient network optimisation plan, and also present the academic contribution of this research.
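    One of the data-mining tasks the thesis names, customer churn prediction, can be sketched in miniature. The features, data and classifier below are all invented for illustration (a nearest-centroid rule over made-up usage statistics); the thesis's actual datasets and algorithms are not reproduced here.

    ```python
    # Hypothetical churn-prediction sketch: classify a subscriber by which
    # class centroid (churned vs. stayed) their feature vector is closer to.

    def centroid(rows):
        n = len(rows)
        return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def train(samples):
        """samples: list of (features, churned_bool) -> class centroids."""
        churned = [f for f, y in samples if y]
        stayed = [f for f, y in samples if not y]
        return centroid(churned), centroid(stayed)

    def predict_churn(model, features):
        c_churn, c_stay = model
        return dist2(features, c_churn) < dist2(features, c_stay)

    # Invented toy features: (dropped-call rate, monthly spend, support tickets)
    history = [
        ((0.20, 30.0, 5), True),
        ((0.25, 25.0, 7), True),
        ((0.02, 60.0, 0), False),
        ((0.03, 55.0, 1), False),
    ]
    model = train(history)
    ```

    A production system would of course scale the features and use far richer models, but the shape is the same: learn from labelled subscriber histories, then score current subscribers for churn risk.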

    Bayesian Learning Of Neural Networks By Means Of Artificial Immune Systems

    No full text
    Since the design of artificial neural networks (ANNs) may require the optimization of both numerical and structural parameters, bio-inspired algorithms have been successfully applied to this task: they are population-based search strategies capable of dealing with complex and large search spaces while avoiding local minima. In this paper, we propose the use of an artificial immune system for learning the topologies of feedforward ANNs. Besides the number of neurons in the hidden layer, the algorithm also optimizes the type of activation function of each node. The use of a Bayesian framework to infer the weights and weight-decay terms, as well as to perform model selection, allows us to find neural models with high generalization capability and low complexity, since Occam's razor is incorporated into the framework. We demonstrate the applicability of the proposal on seven classification problems, with promising results. © 2006 IEEE.
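    The search strategy this abstract describes, clonal selection over network structures scored with a complexity-penalized objective, can be sketched abstractly. Everything below is a toy stand-in: the candidate is just a hidden-layer size, and the made-up fitness (an accuracy proxy minus an Occam-style complexity penalty) substitutes for the paper's Bayesian evidence, which requires actually training each network.

    ```python
    import random

    # Toy clonal-selection sketch (not the paper's algorithm): keep the best
    # candidates, clone and mutate them, and reselect, so the population
    # climbs toward the best complexity/fit trade-off.

    def fitness(hidden):
        accuracy_proxy = 1.0 - 1.0 / (1 + hidden)  # improves with capacity
        occam_penalty = 0.02 * hidden              # penalises complexity
        return accuracy_proxy - occam_penalty

    def clonal_selection(generations=60, pop_size=8, seed=0):
        rng = random.Random(seed)
        pop = [rng.randint(1, 20) for _ in range(pop_size)]  # hidden sizes
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            best = pop[: pop_size // 2]                      # elite survive
            clones = [max(1, h + rng.randint(-2, 2))         # mutated clones
                      for h in best for _ in (0, 1)]
            pop = sorted(best + clones, key=fitness, reverse=True)[:pop_size]
        return pop[0]
    ```

    Because the elite are carried over unmutated, the best fitness never decreases, and the mutated clones supply the local exploration; the paper's version additionally mutates each node's activation function and scores candidates by Bayesian model selection rather than a closed-form proxy.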