125 research outputs found

    An efficient approach for nonconvex semidefinite optimization via customized alternating direction method of multipliers

    We investigate a class of general combinatorial graph problems, including MAX-CUT and community detection, reformulated as quadratic objectives over nonconvex constraints and solved via the alternating direction method of multipliers (ADMM). We propose two reformulations: one using vector variables and a binary constraint, and another that further applies the Burer-Monteiro factorization to obtain simpler subproblems. Despite the nonconvex constraint, we prove that the ADMM iterates converge to a stationary point in both formulations under mild assumptions. Additionally, recent work suggests that in the latter form, when the matrix factors are wide enough, a local optimum is, with high probability, also a global optimum. To demonstrate the scalability of our algorithm, we include results on benchmark and simulated examples for MAX-CUT, community detection, and image segmentation.
    Comment: arXiv admin note: text overlap with arXiv:1805.1067
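    The Burer-Monteiro idea in this abstract can be sketched for MAX-CUT. The snippet below is a minimal illustration, not the paper's customized ADMM: it factors the SDP variable as X = VV^T, enforces diag(X) = 1 by renormalizing the rows of V after each gradient step, and rounds with a random hyperplane. All parameter values (rank, step size, iteration count) are arbitrary choices for the sketch.

    ```python
    import numpy as np

    def maxcut_burer_monteiro(W, r=4, steps=500, lr=0.01, seed=0):
        """Approximate MAX-CUT via the Burer-Monteiro factorization X = V V^T,
        with unit-norm rows of V enforcing diag(X) = 1 (projected gradient,
        a simple stand-in for the paper's customized ADMM)."""
        rng = np.random.default_rng(seed)
        n = W.shape[0]
        L = np.diag(W.sum(axis=1)) - W          # graph Laplacian
        V = rng.standard_normal((n, r))
        V /= np.linalg.norm(V, axis=1, keepdims=True)
        for _ in range(steps):
            V += lr * (L @ V)                   # ascent on trace(V^T L V) / 4
            V /= np.linalg.norm(V, axis=1, keepdims=True)  # rows back to sphere
        # Goemans-Williamson-style rounding: sign of a random hyperplane
        x = np.sign(V @ rng.standard_normal(r))
        x[x == 0] = 1
        cut = 0.25 * x @ L @ x                  # cut weight = x' L x / 4
        return x, cut

    # 4-cycle: the optimal cut separates even from odd vertices (weight 4)
    W = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    x, cut = maxcut_burer_monteiro(W)
    ```

    With wide factors (r comparable to the rank bound), the Burer-Monteiro landscape is where the cited benign-local-optima results apply.
    
    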

    Investigating microstructural variation in the human hippocampus using non-negative matrix factorization

    In this work we use non-negative matrix factorization to identify patterns of microstructural variance in the human hippocampus. We utilize high-resolution structural and diffusion magnetic resonance imaging data from the Human Connectome Project to query hippocampus microstructure on a multivariate, voxelwise basis. Application of non-negative matrix factorization identifies spatial components (clusters of voxels sharing similar covariance patterns) as well as subject weightings (individual variance across hippocampus microstructure). By assessing the stability of spatial components and the accuracy of the factorization, we identified four distinct microstructural components. Furthermore, we quantified the benefit of using multiple microstructural metrics by demonstrating that combining three metrics (T1-weighted/T2-weighted signal, mean diffusivity, and fractional anisotropy) produced more stable spatial components than assessing each metric individually. Finally, we related individual subject weightings to demographic and behavioural measures using a partial least squares analysis, identifying interpretable relationships between hippocampus microstructure and those measures. Taken together, our work suggests non-negative matrix factorization as a spatially specific analytical approach for neuroimaging studies and advocates the use of multiple metrics for data-driven component analyses.
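    The component-extraction step can be illustrated with scikit-learn's NMF. The data shapes and the stacking of the three metrics below are illustrative assumptions, not the study's exact pipeline: rows are voxels, columns concatenate subjects across metrics, so the left factor gives spatial components and the right factor gives subject/metric weightings.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    # Hypothetical data: 500 hippocampal voxels x 60 subjects, with three
    # metrics (T1w/T2w, MD, FA) concatenated along the column axis.
    n_voxels, n_subjects, n_metrics = 500, 60, 3
    X = np.abs(rng.standard_normal((n_voxels, n_subjects * n_metrics)))

    model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
    W = model.fit_transform(X)   # spatial components: voxel loadings (500 x 4)
    H = model.components_        # subject/metric weightings (4 x 180)

    # Assign each voxel to its dominant component: a crude spatial parcellation
    labels = W.argmax(axis=1)
    ```

    In practice the number of components (4 here, matching the abstract) would be chosen by the stability and reconstruction-accuracy criteria the authors describe.
    
    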

    Robust unsupervised learning using kernels

    This thesis studies deep connections between statistical robustness and machine learning techniques, in particular the relationship between a specific kernel (the Gaussian kernel) and the robustness of kernel-based learning methods that use it. The thesis also shows that estimating the mean in the feature space induced by the RBF kernel is equivalent to robust estimation of the mean in the data space with the Welsch M-estimator. Based on these ideas, new robust kernels for machine learning algorithms are designed and implemented: Tukey's, Andrews', and Huber's robust kernels, each corresponding to the Tukey, Andrews, and Huber M-estimators, respectively. Kernel-based algorithms are an important tool widely applied to machine learning and information retrieval problems, including clustering, latent topic analysis, recommender systems, image annotation, and content-based image retrieval, among others. Robustness is the ability of a statistical estimation or machine learning method to deal with noise and outliers. There is a strong theory of robustness in statistics; however, it receives little attention in machine learning. A systematic evaluation of the robustness of kernel-based clustering algorithms shows that some robust kernels, including Tukey's and Andrews', perform on par with state-of-the-art algorithms.
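    The M-estimator-to-kernel correspondence can be sketched as follows. The Welsch rho function yields exactly the Gaussian/RBF kernel; the other three are one plausible construction (normalizing each rho so that k(x, x) = 1), not necessarily the thesis's exact definitions.

    ```python
    import numpy as np

    def welsch_kernel(d, c=1.0):
        """Gaussian/RBF kernel: the Welsch M-estimator's rho, normalized,
        gives k(x, y) = exp(-||x - y||^2 / c^2)."""
        return np.exp(-(d / c) ** 2)

    def tukey_kernel(d, c=1.0):
        """Kernel built from Tukey's biweight rho: polynomial bump,
        exactly zero beyond the cutoff c (bounded influence)."""
        u = np.clip(d / c, 0.0, 1.0)
        return (1.0 - u ** 2) ** 3

    def andrews_kernel(d, a=1.0):
        """Kernel built from Andrews' sine rho: cosine bump, zero beyond a*pi."""
        k = 0.5 * (1.0 + np.cos(d / a))
        return np.where(d <= a * np.pi, k, 0.0)

    def huber_kernel(d, c=1.0):
        """Kernel from Huber's rho: Gaussian-like core with a heavier,
        Laplacian-like tail beyond the threshold c."""
        rho = np.where(d <= c, 0.5 * d ** 2, c * (d - 0.5 * c))
        return np.exp(-rho)

    # All four equal 1 at d = 0 and decay with distance; the bounded-rho
    # kernels (Tukey, Andrews) vanish entirely for far-away outliers.
    d = np.linspace(0.0, 4.0, 5)
    K = np.column_stack([welsch_kernel(d), tukey_kernel(d),
                         andrews_kernel(d), huber_kernel(d)])
    ```

    The redescending kernels (Tukey, Andrews) give outliers zero weight in a kernel mean, which is the mechanism behind the robustness results evaluated in the thesis.
    
    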

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection, that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, and computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts.
    Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision
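    The learned-dictionary setting the monograph focuses on can be sketched with scikit-learn. The synthetic data below (a hypothetical 20-atom dictionary, signals of dimension 8) is purely illustrative; the point is that each signal ends up represented as a sparse linear combination of atoms adapted to the data.

    ```python
    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    rng = np.random.default_rng(0)
    # Synthetic signals: each is a sparse combination of a few ground-truth atoms
    true_D = rng.standard_normal((8, 20))              # hypothetical dictionary
    codes = rng.standard_normal((100, 20)) * (rng.random((100, 20)) < 0.1)
    X = codes @ true_D.T                               # 100 signals, dimension 8

    # Learn a dictionary adapted to the data; signals are then sparse-coded
    # against it (ell_1-penalized regression over the learned atoms).
    dl = DictionaryLearning(n_components=20, alpha=1.0, max_iter=50,
                            transform_algorithm="lasso_lars", random_state=0)
    A = dl.fit_transform(X)                            # sparse codes (100 x 20)
    D = dl.components_                                 # learned atoms (20 x 8)

    sparsity = np.mean(A != 0)                         # fraction of nonzeros
    ```

    The dictionary is overcomplete here (20 atoms for 8-dimensional signals), which is the typical regime for the image-processing applications the monograph surveys.
    
    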

    Data Clustering And Visualization Through Matrix Factorization

    Clustering is traditionally an unsupervised task: finding natural groupings or clusters in multidimensional data based on perceived similarities among the patterns. The purpose of clustering is to extract useful information from unlabeled data. To present the knowledge extracted by clustering in a meaningful way, data visualization has become a popular and growing research area. Visualization can provide a qualitative overview of large and complex data sets, helping us gain the insight needed to truly understand the phenomena of interest in the data. The contribution of this dissertation is two-fold: Semi-Supervised Non-negative Matrix Factorization (SS-NMF) for data clustering/co-clustering, and Exemplar-based data Visualization (EV) through matrix factorization. Compared to traditional data mining models, matrix-based methods are fast, easy to understand and implement, and especially suitable for solving large-scale challenging problems in text mining, image grouping, medical diagnosis, and bioinformatics. In this dissertation, we present two effective matrix-based solutions in the new directions of data clustering and visualization. First, in many practical learning domains there is a large supply of unlabeled data but limited labeled data, and in most cases it is expensive to generate large amounts of labeled data. Traditional clustering algorithms completely ignore this valuable labeled data and are thus inapplicable to these problems. Consequently, semi-supervised clustering, which can incorporate domain knowledge to guide a clustering algorithm, has become a topic of significant recent interest. Thus, we develop a Non-negative Matrix Factorization (NMF) based framework to incorporate prior knowledge into data clustering.
    Moreover, with the fast growth of the Internet and computational technologies in the past decade, many data mining applications have advanced swiftly from the simple clustering of one data type to the co-clustering of multiple data types, usually involving high heterogeneity. To this end, we extend SS-NMF to perform heterogeneous data co-clustering. From a theoretical perspective, SS-NMF for data clustering/co-clustering is mathematically rigorous: the convergence and correctness of our algorithms are proved. In addition, we discuss the relationship between SS-NMF and other well-known clustering and co-clustering models. Second, most current clustering models only provide the centroids (e.g., mathematical means of the clusters) without inferring representative exemplars from real data, and are thus unable to better summarize or visualize the raw data. A new method, Exemplar-based Visualization (EV), is proposed to cluster and visualize extremely large-scale data. Capitalizing on recent advances in matrix approximation and factorization, EV provides a means to visualize large-scale data with high accuracy (in retaining neighbor relations), high efficiency (in computation), and high flexibility (through the use of exemplars). Empirically, we demonstrate the superior performance of our matrix-based data clustering and visualization models through extensive experiments performed on publicly available large-scale data sets.
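    As a rough illustration of the unsupervised core that SS-NMF extends, one can factor a nonnegative data matrix and read cluster assignments off the dominant component of each sample. The semi-supervised constraint terms (must-link/cannot-link prior knowledge) are omitted here, and the blob data is synthetic.

    ```python
    import numpy as np
    from sklearn.datasets import make_blobs
    from sklearn.decomposition import NMF

    # Synthetic nonnegative data: three well-separated groups in 5 dimensions
    X, y = make_blobs(n_samples=300, n_features=5, centers=3,
                      cluster_std=0.5, random_state=0)
    X = X - X.min()                       # shift to nonnegative, as NMF requires

    model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
    W = model.fit_transform(X)            # sample-by-cluster soft memberships
    labels = W.argmax(axis=1)             # hard cluster assignment per sample
    ```

    The dissertation's SS-NMF would additionally penalize factorizations that violate the supplied pairwise constraints, steering `W` toward the labeled structure; the exemplar-based EV method then replaces abstract centroids with actual data points for visualization.
    
    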