329 research outputs found

    Binary Independent Component Analysis with OR Mixtures

    Independent component analysis (ICA) is a computational method for separating a multivariate signal into subcomponents under the assumption that the non-Gaussian source signals are mutually statistically independent. The classical ICA framework usually assumes linear combinations of independent sources over the field of real numbers R. In this paper, we investigate binary ICA for OR mixtures (bICA), which can find applications in many domains, including medical diagnosis, multi-cluster assignment, Internet tomography, and network resource management. We prove that bICA is uniquely identifiable under the disjunctive generation model and propose a deterministic iterative algorithm to determine the distribution of the latent random variables and the mixing matrix. The inverse problem of inferring the values of the latent variables from noisy measurements is also considered. We conduct an extensive simulation study to verify the effectiveness of the proposed algorithm and present examples of real-world applications where bICA can be applied. Comment: Manuscript submitted to IEEE Transactions on Signal Processing.
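    The disjunctive generation model can be made concrete with a short simulation: each observation is the Boolean OR of the latent binary sources it is connected to through a binary mixing matrix. The sketch below (NumPy, with illustrative sizes and probabilities) only generates data under that model; it does not reproduce the paper's identification algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

n_sources, n_obs, n_samples = 4, 6, 1000

# Latent independent binary sources: S[t, j] ~ Bernoulli(p_j)
p = rng.uniform(0.1, 0.4, size=n_sources)
S = (rng.random((n_samples, n_sources)) < p).astype(np.uint8)

# Binary mixing matrix G[i, j] = 1 if source j feeds observation i
G = (rng.random((n_obs, n_sources)) < 0.5).astype(np.uint8)

# Disjunctive (OR) mixture: X[t, i] = OR_j (S[t, j] AND G[i, j])
X = (S @ G.T > 0).astype(np.uint8)

print("Empirical marginals of the observations:", X.mean(axis=0))
```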

    An examination and analysis of the Boltzmann machine, its mean field theory approximation, and learning algorithm

    It is currently believed that artificial neural network models may form the basis for intelligent computational devices. The Boltzmann Machine belongs to the class of recursive artificial neural networks and uses a supervised learning algorithm to learn the mapping between input vectors and desired outputs. This study examines the parameters that influence the performance of the Boltzmann Machine learning algorithm. Improving the performance of the algorithm through the use of a naïve mean field theory approximation is also examined. The study was initiated to examine the hypothesis that the Boltzmann Machine learning algorithm, when used with the mean field approximation, is an efficient, reliable, and flexible model of machine learning. An empirical analysis of the performance of the algorithm supports this hypothesis. The performance of the algorithm is investigated by training both the Boltzmann Machine and its mean field approximation on the exclusive-OR function. Simulation results suggest that the mean field theory approximation learns faster than the Boltzmann Machine and shows better stability. The size of the network and the learning rate were found to have considerable impact on the performance of the algorithm, especially in the case of the mean field theory approximation. A comparison is made with the feed-forward back-propagation paradigm, and it is found that the back-propagation network learns the exclusive-OR function eight times faster than the mean field approximation. However, the mean field approximation demonstrated better reliability and stability. Because the mean field approximation is local and asynchronous, it has an advantage over back-propagation with regard to a parallel implementation. The mean field approximation is domain independent and structurally flexible. These features make the network suitable for use with a structural adaptation algorithm, allowing the network to modify its architecture in response to the external environment.
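    The naïve mean field approximation referred to above replaces the Boltzmann Machine's stochastic unit updates with deterministic mean activations obtained from a fixed-point iteration. A minimal sketch of that iteration for 0/1 units follows; the weights, biases, and damping factor are illustrative and not taken from the thesis.

```python
import numpy as np

def mean_field(W, b, n_iter=50, damping=0.5):
    """Naive mean field fixed point for a Boltzmann machine with 0/1 units.

    Iterates m_i <- sigma(b_i + sum_j W_ij m_j), replacing stochastic
    Gibbs sampling with deterministic mean activations.
    """
    n = len(b)
    m = np.full(n, 0.5)                               # start at maximum uncertainty
    for _ in range(n_iter):
        new_m = 1.0 / (1.0 + np.exp(-(b + W @ m)))
        m = damping * m + (1.0 - damping) * new_m     # damped update for stability
    return m

# Tiny symmetric network with zero self-connections (illustrative values).
W = np.array([[ 0.0, 1.5, -1.0],
              [ 1.5, 0.0,  0.5],
              [-1.0, 0.5,  0.0]])
b = np.array([0.1, -0.2, 0.3])
print(mean_field(W, b))
```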

    On the optimization problems in multiaccess communication systems

    In a communication system, bandwidth is often a primary resource. In order to support concurrent access by numerous users in a network, this finite and expensive resource must be shared among many independent contending users. Multi-access protocols control access to this resource among users so as to achieve its efficient utilization, satisfy connectivity requirements, and resolve any conflict among the contending users. Many optimization problems arise in designing a multi-access protocol. Among these is a class of optimization problems known as NP-complete, for which no polynomial-time algorithm is known. Conventional methods may not be efficient and often produce poor solutions. In this dissertation, we propose a neural network-based algorithm for solving NP-complete problems encountered in multi-access communication systems. Three combinatorial optimization problems have been solved by the proposed algorithm: frame pattern design in integrated TDMA communication networks, optimal broadcast scheduling in multihop packet radio networks, and optimal channel assignment in FDMA mobile communication networks. Numerical studies have shown encouraging results in searching for the global optimal solutions using this algorithm. The determination of the related parameters regarding convergence and solution quality is investigated in this dissertation. Performance evaluations and comparisons with other algorithms have also been performed.
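    The dissertation's exact network formulation is not reproduced here, but the flavour of a Hopfield-style, energy-lowering approach to problems such as broadcast scheduling or channel assignment can be sketched as a local update rule: each node repeatedly moves to the slot that conflicts with the fewest of its neighbours in the conflict graph. The code below is an illustrative stand-in, not the proposed algorithm, and the instance is made up.

```python
import numpy as np

def schedule(adj, n_slots, n_sweeps=100, seed=0):
    """Hopfield-style local update for conflict-free slot/channel assignment.

    adj: symmetric 0/1 adjacency matrix of the conflict graph.
    Each node repeatedly moves to the slot with the fewest conflicting
    neighbours, which monotonically lowers a simple conflict energy.
    """
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    slot = rng.integers(n_slots, size=n)
    for _ in range(n_sweeps):
        changed = False
        for i in rng.permutation(n):
            # conflicts[k]: number of neighbours of node i currently in slot k
            conflicts = np.bincount(slot[adj[i] == 1], minlength=n_slots)
            best = int(np.argmin(conflicts))
            if conflicts[best] < conflicts[slot[i]]:
                slot[i] = best
                changed = True
        if not changed:
            break
    return slot

# 5-node ring conflict graph, 3 slots (illustrative).
adj = np.zeros((5, 5), dtype=int)
for i in range(5):
    adj[i, (i + 1) % 5] = adj[(i + 1) % 5, i] = 1
print(schedule(adj, n_slots=3))
```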

    Traveling Salesman Problem

    This book is a collection of current research on the application of evolutionary algorithms and other optimization algorithms to solving the traveling salesman problem (TSP). It brings together researchers with applications in Artificial Immune Systems, Genetic Algorithms, Neural Networks, and the Differential Evolution Algorithm. Hybrid systems, such as fuzzy maps, chaotic maps, and parallelized TSP, are also presented. Most importantly, this book presents both theoretical and practical applications of the TSP, which will make it a vital tool for researchers and entry-level graduate students in the fields of applied Mathematics, Computing Science, and Engineering.
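    As a point of reference for the evolutionary approaches the collection surveys, a bare-bones evolutionary heuristic for the TSP can be written in a few lines: mutate tours by swapping two cities and keep the shortest tours. The sketch below uses a tiny made-up distance matrix and is not drawn from any chapter of the book.

```python
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def evolve_tsp(dist, pop_size=30, generations=500, seed=0):
    """Minimal evolutionary heuristic for the TSP: swap mutation plus
    truncation selection. Illustrative only."""
    rng = random.Random(seed)
    n = len(dist)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        for tour in pop:
            child = tour[:]
            i, j = rng.randrange(n), rng.randrange(n)
            child[i], child[j] = child[j], child[i]       # swap mutation
            children.append(child)
        # keep the pop_size shortest tours among parents and children
        pop = sorted(pop + children, key=lambda t: tour_length(t, dist))[:pop_size]
    best = pop[0]
    return best, tour_length(best, dist)

# Tiny symmetric instance (illustrative distances).
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(evolve_tsp(dist))
```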

    Theory and applications of artificial neural networks

    In this thesis some fundamental theoretical problems about artificial neural networks and their application in communication and control systems are discussed. We consider the convergence properties of the Back-Propagation algorithm, which is widely used for training artificial neural networks, and two step-size variation techniques are proposed to accelerate convergence. Simulation results demonstrate significant improvement over conventional Back-Propagation algorithms. We also discuss the relationship between the generalization performance of artificial neural networks and their structure and representation strategy. It is shown that the structure of the network, which represents a priori knowledge of the environment, has a strong influence on generalization performance. A theorem about the number of hidden units and the capacity of self-associative MLP (Multi-Layer Perceptron) networks is also given in the thesis. In the application part of the thesis, we discuss the feasibility of using artificial neural networks for nonlinear system identification. Some advantages and disadvantages of this approach are analyzed. The thesis continues with a study of artificial neural networks applied to communication channel equalization and to the problem of call access control in broadband ATM (Asynchronous Transfer Mode) communication networks. A final chapter provides overall conclusions and suggestions for further work.
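    One common way to vary the step size during Back-Propagation, shown below purely as an illustration, is the "bold driver" heuristic: increase the learning rate while the training error keeps falling and cut it back when the error rises. This is a generic rule, not necessarily either of the two techniques proposed in the thesis; the network size and constants are illustrative.

```python
import numpy as np

def train_xor_adaptive(epochs=5000, eta=0.5, up=1.05, down=0.5, seed=0):
    """Backpropagation on a small MLP for XOR with a 'bold driver'-style
    adaptive step size: grow eta while the error falls, shrink it when
    the error rises."""
    rng = np.random.default_rng(seed)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    add_bias = lambda a: np.hstack([a, np.ones((a.shape[0], 1))])

    W1 = rng.normal(0, 1, (3, 2))   # 2 inputs + bias -> 2 hidden units
    W2 = rng.normal(0, 1, (3, 1))   # 2 hidden + bias -> 1 output
    prev_err = np.inf
    for _ in range(epochs):
        h = sig(add_bias(X) @ W1)
        out = sig(add_bias(h) @ W2)
        err = float(np.mean((out - y) ** 2))

        eta = eta * up if err < prev_err else eta * down   # step-size rule
        prev_err = min(err, prev_err)

        # Standard backpropagation of the squared error
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2[:2].T) * h * (1 - h)
        W2 -= eta * add_bias(h).T @ d_out
        W1 -= eta * add_bias(X).T @ d_h
    return err

print(train_xor_adaptive())
```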

    Models for calculating confidence intervals for neural networks

    This research focused on coding and analyzing existing models for calculating confidence intervals on the results of neural networks. The three techniques for determining confidence intervals were non-linear regression, bootstrap estimation, and maximum likelihood estimation. Confidence intervals for non-linear regression, bootstrap estimation, and maximum likelihood estimation were coded in Visual Basic. The neural network used the backpropagation algorithm with an input layer, one hidden layer, and an output layer with one unit. The hidden layer had a logistic (binary sigmoidal) activation function and the output layer had a linear activation function. These techniques were tested on various data sets with and without additional noise. Out of the eight cases studied, non-linear regression and bootstrap estimation each had four of the lowest values for the average coverage probability minus the nominal probability. Over all data sets, bootstrap estimation obtained the lowest values of the average coverage probability minus the nominal probability. The ranges and standard deviations of the coverage probabilities over 15 simulations were computed for the three techniques; non-linear regression gave the most consistent results, with the smallest range and standard deviation, while bootstrap estimation had the largest ranges and standard deviations. The bootstrap estimation technique gave a slightly better average coverage probability (CP) minus nominal value than the non-linear regression method, but it had considerably more variation in individual simulations. The maximum likelihood estimation had the poorest results with respect to the average CP minus nominal values.
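    The bootstrap procedure described above can be summarised as: resample the training set with replacement, retrain the network on each resample, and take empirical quantiles of the resulting predictions as the interval. The sketch below uses scikit-learn's MLPRegressor as a stand-in for the study's Visual Basic implementation of a one-hidden-layer backpropagation network; the data set is synthetic and the hyperparameters are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def bootstrap_intervals(X, y, X_new, n_boot=200, alpha=0.05, seed=0):
    """Percentile bootstrap confidence intervals for network predictions:
    resample (X, y) with replacement, retrain, and take empirical
    quantiles of the ensemble of predictions at the new inputs."""
    rng = np.random.default_rng(seed)
    preds = np.empty((n_boot, len(X_new)))
    for b in range(n_boot):
        idx = rng.integers(len(X), size=len(X))          # resample with replacement
        net = MLPRegressor(hidden_layer_sizes=(8,), activation="logistic",
                           max_iter=2000, random_state=b)
        net.fit(X[idx], y[idx])
        preds[b] = net.predict(X_new)
    lo = np.quantile(preds, alpha / 2, axis=0)
    hi = np.quantile(preds, 1 - alpha / 2, axis=0)
    return lo, hi

# Noisy 1-D regression example (synthetic, illustrative only).
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(80, 1))
y = np.sin(3 * X[:, 0]) + rng.normal(0, 0.1, size=80)
X_new = np.linspace(-1, 1, 5).reshape(-1, 1)
print(bootstrap_intervals(X, y, X_new, n_boot=50))
```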

    Pertanika Journal of Science & Technology
