Local matrix learning in clustering and applications for manifold visualization
Arnonkijpanich B, Hasenfuss A, Hammer B. Local matrix learning in clustering and applications for manifold visualization. Neural Networks. 2010;23(4):476-486.
Electronic data sets are growing rapidly in both size and resolution, i.e. dimensionality, so that adequate data inspection and data visualization have become central issues of data mining. In this article, we present an extension of classical clustering schemes by local matrix adaptation, which allows a better representation of data by means of clusters of arbitrary ellipsoidal shape. Unlike previous proposals, the method is derived from a global cost function. The focus of this article is to demonstrate the applicability of this matrix clustering scheme to low-dimensional data embedding for data inspection. The proposed method is based on matrix learning for neural gas and manifold charting. This provides an explicit mapping from a given high-dimensional data space to low dimensions. We demonstrate the usefulness of this method for data inspection and manifold visualization.
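To make the matrix-adaptation idea concrete, the following is a minimal NumPy sketch of batch matrix neural gas: prototypes are updated as neighborhood-weighted means, and each prototype carries a full metric matrix re-estimated from the local weighted covariance and normalized to unit determinant. Function and variable names, the annealing schedule, and the regularization constant are illustrative assumptions, not the authors' implementation.

import numpy as np

def matrix_neural_gas(X, n_prototypes=5, n_epochs=50, lambda0=2.0, seed=0):
    """Hedged sketch of batch matrix neural gas on data X of shape (n, d)."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    W = X[rng.choice(n, n_prototypes, replace=False)].copy()    # prototypes
    L = np.stack([np.eye(d)] * n_prototypes)                    # metric matrices, det = 1

    for epoch in range(n_epochs):
        lam = lambda0 * (0.01 / lambda0) ** (epoch / n_epochs)  # annealed neighborhood range
        diff = X[:, None, :] - W[None, :, :]                    # (n, k, d)
        # squared distances under each prototype's local metric
        dist = np.einsum('nkd,kde,nke->nk', diff, L, diff)
        ranks = np.argsort(np.argsort(dist, axis=1), axis=1)    # rank of each prototype per sample
        h = np.exp(-ranks / lam)                                # neighborhood weights

        for i in range(n_prototypes):
            w_sum = h[:, i].sum()
            W[i] = (h[:, i] @ X) / w_sum                        # weighted mean
            c = X - W[i]
            S = (c * h[:, i, None]).T @ c / w_sum               # local weighted covariance
            Li = np.linalg.inv(S + 1e-6 * np.eye(d))            # metric ~ inverse covariance
            L[i] = Li / np.linalg.det(Li) ** (1.0 / d)          # renormalize to det(L) = 1
    return W, L

The unit-determinant constraint keeps each metric from collapsing while still letting the corresponding cluster stretch along its local principal directions.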
Local matrix adaptation in topographic neural maps
Arnonkijpanich B, Hasenfuss A, Hammer B. Local matrix adaptation in topographic neural maps. Neurocomputing. 2011;74(4):522-539.
The self-organizing map (SOM), neural gas (NG), and generalizations thereof such as the generative topographic map are popular algorithms for representing data by means of prototypes arranged on a (hopefully) topology-representing map. Most standard methods rely on the Euclidean metric, hence the resulting clusters tend to be isotropic and cannot account for local distortions or correlations in the data. For this reason, several proposals exist in the literature which extend prototype-based clustering towards more general models that, for example, incorporate local principal directions into the winner computation. This makes it possible to represent data faithfully using fewer prototypes. In this contribution, we establish a link between models which rely on local principal components (PCA), matrix learning, and a formal cost function of NG and SOM which allows convergence of the algorithm to be shown. For this purpose, we consider an extension of prototype-based clustering algorithms such as NG and SOM towards a more general metric given by a full adaptive matrix, such that ellipsoidal clusters are accounted for. The approach is derived from a natural extension of the standard cost functions of NG and SOM (in the form of Heskes). We obtain batch optimization learning rules for prototype and matrix adaptation based on these generalized cost functions, and we show convergence of the algorithm. The batch optimization schemes can be interpreted as local principal component analysis (PCA), and the local eigenvectors correspond to the main axes of the ellipsoidal clusters. Thus, this approach provides a cost function associated with proposals in the literature which combine SOM or NG with local PCA models. We demonstrate the behavior of matrix NG and SOM in several benchmark examples and in an application to image compression.
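As a small illustration of the local-PCA interpretation stated above, the sketch below reads the main axes of each ellipsoidal cluster off learned metric matrices such as those returned by the hypothetical matrix_neural_gas sketch earlier. Since each metric approximates an inverse local covariance, its smallest eigenvalues mark the highest-variance directions; all names here are assumptions for illustration.

import numpy as np

def local_principal_axes(metrics):
    """metrics: (k, d, d) stack of per-prototype metric matrices (det = 1).

    Each metric approximates the inverse of the local covariance, so eigenvectors
    belonging to its SMALLEST eigenvalues span the highest-variance (principal)
    directions, i.e. the main axes of the ellipsoidal cluster."""
    axes = []
    for Li in metrics:
        evals, evecs = np.linalg.eigh(Li)  # ascending eigenvalues; columns are eigenvectors
        axes.append(evecs)                 # leading columns: principal axes of the cluster
    return axes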
PEM-PCA: A Parallel Expectation-Maximization PCA Face Recognition Architecture
Principal component analysis (PCA) has traditionally been used as a feature extraction technique in face recognition systems, yielding high accuracy while requiring only a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach that uses an Expectation-Maximization (EM) algorithm to reduce the determinant matrix manipulation and thereby the complexity of these stages. To further improve computation time, a novel parallel architecture is employed that exploits parallelization of the matrix computations in the feature extraction and classification stages, including parallel preprocessing and their combinations: the so-called Parallel Expectation-Maximization PCA (PEM-PCA) architecture. Compared to traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, enabling high-speed face recognition systems: speed-ups of over nine and three times over PCA and parallel PCA, respectively.
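For intuition on how EM can remove the explicit covariance eigendecomposition, here is a minimal serial sketch in the spirit of EM-style PCA (alternating least-squares updates of a loading matrix); the parallel architecture described in the paper distributes matrix products of exactly this kind, but the function name, iteration count, and details below are assumptions, not the paper's code.

import numpy as np

def em_pca(X, n_components, n_iters=100, seed=0):
    """EM-style PCA sketch. X: (d, n) data matrix, one sample per column.

    Avoids forming and eigendecomposing the full d x d covariance: each
    iteration solves two small systems of size n_components instead."""
    d, n = X.shape
    Xc = X - X.mean(axis=1, keepdims=True)          # mean-center the samples
    rng = np.random.default_rng(seed)
    C = rng.standard_normal((d, n_components))      # loading matrix
    for _ in range(n_iters):
        Z = np.linalg.solve(C.T @ C, C.T @ Xc)      # E-step: latent coordinates (q, n)
        C = Xc @ Z.T @ np.linalg.inv(Z @ Z.T)       # M-step: update loadings (d, q)
    Q, _ = np.linalg.qr(C)                          # orthonormal basis of the subspace
    return Q

The per-iteration cost is dominated by products with X, which parallelize naturally across samples and features — the kind of matrix computation a parallel PCA architecture targets.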
Global Coordination based on Matrix Neural Gas for Dynamic Texture Synthesis
Arnonkijpanich B, Hammer B. Global Coordination based on Matrix Neural Gas for Dynamic Texture Synthesis. In: El Gayar N, Schwenker F, eds. ANNPR 2010. Lecture Notes in Artificial Intelligence, 5998. Springer; 2010: 84-95.
Matrix Learning for Topographic Neural Maps
Arnonkijpanich B, Hammer B, Hasenfuss A, Lursinsap C. Matrix Learning for Topographic Neural Maps. In: Kůrková V, Neruda R, Koutník J, eds. ICANN (1). Lecture Notes in Computer Science, 5163. Berlin: Springer; 2008: 572-582.
Integrating new data balancing technique with committee networks for imbalanced data: GRSOM approach