10 research outputs found

    Image Restoration Using Noisy ICA, PCA Compression and Code Shrinkage Technique

    The research reported in the paper aims at developing methodologies for noise removal in image restoration. In real life there is always some kind of noise present in the observed data, so it has been proposed that the ICA model used in image restoration should include a noise term as well. Different methods for estimating the ICA model in the presence of noise have been developed. In noisy ICA we have to deal with the problem of estimating the noise-free realization of the independent components. The noisy ICA model can be used to develop a denoising method, namely sparse code shrinkage [10]. The final part of the paper presents an LMS-optimal PCA compression/decompression scheme in which the noise is annihilated in the feature space. In order to draw conclusions concerning the correlation between the dimensionality reduction and the resulting quality of the restored images, as well as the effect of combining the LMS-optimal compression/decompression technique with the PCA-based noise removal method, several tests were performed on the same set of data. The tests proved that the proposed restoration technique yields high-quality restored images in both cases: when the CSPCA algorithm was applied directly to the initial image and when it was applied in the reduced feature space.
    Keywords: ICA, noisy ICA, feature extraction, PCA, image processing, data restoration, noise removal, shrinkage function
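
    As a rough illustration of the pipeline this abstract describes - compress image patches with PCA, shrink the component coefficients, then decompress - here is a minimal Python sketch. The soft-threshold shrinkage (the Laplacian-prior case of sparse code shrinkage), the noise level sigma, and all function names are assumptions for illustration, not the paper's exact CSPCA algorithm.

    # Minimal sketch of shrinkage-based denoising in a PCA feature space.
    # Assumptions: patches are vectorised image blocks, the noise is additive
    # Gaussian with known std `sigma`, and a soft-threshold shrinkage (the
    # Laplacian-prior case of sparse code shrinkage) stands in for the
    # paper's exact shrinkage function.
    import numpy as np

    def pca_basis(patches, n_components):
        """Return the mean and top principal directions of the patch vectors."""
        mean = patches.mean(axis=0)
        centred = patches - mean
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        return mean, vt[:n_components]          # shapes (D,), (k, D)

    def denoise_patches(patches, sigma, n_components):
        """Project onto the PCA subspace, shrink coefficients, reconstruct."""
        mean, basis = pca_basis(patches, n_components)
        coeffs = (patches - mean) @ basis.T      # compression to feature space
        threshold = sigma * np.sqrt(2.0)         # assumed Laplacian-prior threshold
        shrunk = np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)
        return shrunk @ basis + mean             # decompression / restoration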

    Topographic mappings and feed-forward neural networks

    This thesis is a study of the generation of topographic mappings - dimension-reducing transformations of data that preserve some element of geometric structure - with feed-forward neural networks. As an alternative to established methods, a transformational variant of Sammon's method is proposed, where the projection is effected by a radial basis function neural network. This approach is related to the statistical field of multidimensional scaling, from which the concept of a 'subjective metric' is defined; this permits the exploitation of additional prior knowledge concerning the data in the mapping process, enabling the generation of more appropriate feature spaces for enhanced visualisation or subsequent classification. A comparison with established methods for feature extraction is given for data taken from the 1992 Research Assessment Exercise for higher educational institutions in the United Kingdom. This is a difficult high-dimensional dataset, and it illustrates well the benefit of the new topographic technique. A generalisation of the proposed model is considered for implementation of the classical multidimensional scaling (MDS) routine. This is related to Oja's principal subspace neural network, whose learning rule is shown to descend the error surface of the proposed MDS model. Some of the technical issues concerning the design and training of topographic neural networks are investigated. It is shown that neural network models can be less sensitive to entrapment in the sub-optimal local minima that badly affect the standard Sammon algorithm, and tend to exhibit good generalisation as a result of implicit weight decay in the training process. It is further argued that for ideal structure retention, the network transformation should be perfectly smooth for all inter-data directions in input space. Finally, there is a critique of optimisation techniques for topographic mappings, and a new training algorithm is proposed. A convergence proof is given, and the method is shown to produce lower-error mappings more rapidly than previous algorithms.
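
    As an illustration of the objective behind the transformational Sammon method described above, the following minimal Python sketch computes the Sammon stress between a high-dimensional data set X and its low-dimensional image Y (for example, the output of an RBF network). Plain Euclidean distances and all names here are assumptions for illustration, not the thesis's exact formulation.

    # Minimal sketch of the Sammon stress that a topographic mapping minimises.
    # X holds the high-dimensional points (one per row), Y their low-dimensional
    # images; both names and the use of Euclidean distance are assumptions.
    import numpy as np

    def sammon_stress(X, Y, eps=1e-12):
        """Normalised mismatch between input-space and map-space distances."""
        def pdist(Z):
            diff = Z[:, None, :] - Z[None, :, :]
            return np.sqrt((diff ** 2).sum(-1))
        d_in, d_out = pdist(X), pdist(Y)
        iu = np.triu_indices(len(X), k=1)        # count each pair once
        d_in, d_out = d_in[iu], d_out[iu]
        return ((d_in - d_out) ** 2 / (d_in + eps)).sum() / (d_in.sum() + eps)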

    LYAPUNOV FUNCTIONS FOR CONVERGENCE OF PRINCIPAL COMPONENT ALGORITHMS

    Recent theoretical analyses of a class of unsupervised Hebbian principal component algorithms have identified its local stability conditions. The only locally stable solution for the subspace P extracted by the network is the principal component subspace P∗. In this paper we use the Lyapunov function approach to discover the global stability characteristics of this class of algorithms. The subspace projection error, least mean squared projection error, and mutual information I are all Lyapunov functions for convergence to the principal subspace, although the various domains of convergence indicated by these Lyapunov functions leave some of P-space uncovered. A modification to I yields a principal subspace information Lyapunov function I′ with a domain of convergence that covers almost all of P-space. This shows that this class of algorithms converges to the principal subspace from almost everywhere.
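
    To make the quantities in this abstract concrete, the sketch below evaluates the subspace projection error - one of the Lyapunov functions named above - for a weight matrix W whose columns span the extracted subspace, alongside a single Hebbian update of Oja's subspace rule as a representative of the algorithm class. The step size, data model, and function names are illustrative assumptions.

    # Minimal sketch: subspace projection error as a scalar that decreases as
    # the weight matrix W (columns spanning the extracted subspace) converges
    # to the principal subspace of the data covariance C.
    import numpy as np

    def projection_error(W, C):
        """E = trace(C) - trace(P C P), with P the projector onto span(W)."""
        P = W @ np.linalg.pinv(W.T @ W) @ W.T
        return np.trace(C) - np.trace(P @ C @ P)

    def oja_subspace_step(W, x, lr=0.01):
        """One Hebbian update of Oja's subspace rule for a data sample x."""
        y = W.T @ x
        return W + lr * (np.outer(x, y) - W @ np.outer(y, y))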
