14 research outputs found

    Clustering Partially Observed Graphs via Convex Optimization

    This paper considers the problem of clustering a partially observed unweighted graph---i.e., one where for some node pairs we know there is an edge between them, for some others we know there is no edge, and for the remaining we do not know whether or not there is an edge. We want to organize the nodes into disjoint clusters so that there is relatively dense (observed) connectivity within clusters and sparse (observed) connectivity across clusters. We take a novel yet natural approach to this problem, by focusing on finding the clustering that minimizes the number of "disagreements"---i.e., the sum of the number of (observed) missing edges within clusters and (observed) present edges across clusters. Our algorithm uses convex optimization; its basis is a reduction of disagreement minimization to the problem of recovering an (unknown) low-rank matrix and an (unknown) sparse matrix from their partially observed sum. We evaluate the performance of our algorithm on the classical Planted Partition/Stochastic Block Model. Our main theorem provides sufficient conditions for the success of our algorithm as a function of the minimum cluster size, edge density and observation probability; in particular, the results characterize the tradeoff between the observation probability and the edge density gap. When there are a constant number of clusters of equal size, our results are optimal up to logarithmic factors. Comment: This is the final version published in the Journal of Machine Learning Research (JMLR). Partial results appeared in the International Conference on Machine Learning (ICML) 201
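
    As an illustration of the low-rank-plus-sparse formulation sketched in the abstract, the decomposition can be written as a convex program, here using the CVXPY modelling library. This is a minimal sketch, not the paper's exact program; the observation mask "observed", the trade-off weight "lam", and the box constraints on L are assumptions made for the illustration.

```python
# Sketch: recover a low-rank cluster matrix L and a sparse disagreement
# matrix S from a partially observed adjacency matrix A, in the spirit of the
# low-rank + sparse decomposition described in the abstract.
# Assumed inputs: A (n x n, with valid 0/1 values on observed entries),
# observed (boolean mask), lam (trade-off weight). Illustrative only.
import numpy as np
import cvxpy as cp

def decompose(A, observed, lam=0.1):
    n = A.shape[0]
    L = cp.Variable((n, n), symmetric=True)   # low-rank "ideal cluster" matrix
    S = cp.Variable((n, n))                   # sparse disagreement matrix
    mask = observed.astype(float)
    constraints = [
        cp.multiply(mask, L + S) == cp.multiply(mask, A),  # agree on observed entries
        L >= 0, L <= 1,                                    # assumed box constraint
    ]
    objective = cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S)))
    cp.Problem(objective, constraints).solve()
    return L.value, S.value
```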

    Graph Clustering With Missing Data: Convex Algorithms and Analysis

    We consider the problem of finding clusters in an unweighted graph when the graph is partially observed. We analyze two programs based on the convex optimization approach to low-rank matrix recovery via nuclear norm minimization: one that works for dense graphs, and one that works for both sparse and dense graphs but requires some a priori knowledge of the total cluster size. For the commonly used Stochastic Block Model, we obtain explicit bounds, in terms of the parameters of the problem (the size and sparsity of the clusters and the amount of observed data) and the regularization parameter, that characterize the success and failure of the programs. We corroborate our theoretical findings through extensive simulations. We also run our algorithm on a real data set obtained by crowdsourcing an image classification task on Amazon Mechanical Turk, and observe a significant performance improvement over traditional methods such as k-means.
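
    Both abstracts above stop at recovering a (noisy) low-rank cluster matrix. One simple way to turn such a matrix into disjoint clusters, shown purely as an illustrative post-processing step and not as a method from either paper, is to threshold its entries and take connected components of the thresholded graph:

```python
# Illustrative post-processing (not from the papers): threshold the recovered
# matrix and take connected components of the thresholded graph as clusters.
import numpy as np

def clusters_from_matrix(L, thresh=0.5):
    n = L.shape[0]
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    for i in range(n):
        for j in range(i + 1, n):
            if L[i, j] > thresh:
                union(i, j)

    labels = np.array([find(i) for i in range(n)])
    return labels  # nodes with equal labels belong to the same cluster
```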

    Knowledge Discovery in Geographic Databases through Semantic Explicitation

    Research in Knowledge Discovery in Geographic Databases has been characterised by the development of Data Mining algorithms able to capture and use the semantics associated with the spatial component of the data being analysed. This paper presents a new approach that makes it possible to use Data Mining algorithms that are already available on the market and were not designed to handle geographic data. The approach is based on making explicit some of the spatial relations that exist between the analysed entities, such as the direction or distance between them. This explicitation follows the principles defined in the CEN TC 287 standards for Geographic Information: from their directives, and starting from a reduced set of known relations, new spatial relations previously unknown to the system can be inferred. The proposed approach was validated on a geographic database containing the spatial direction between some Municipalities (Concelhos) of Portugal, from which the direction relation between the Districts (Distritos) that aggregate those Municipalities could be inferred.
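
    A minimal sketch of the kind of inference described above: deriving a district-level direction relation from the directions recorded between member municipalities. The data layout, the example values, and the aggregation rule (a majority vote over cross pairs) are assumptions made for the illustration, not rules taken from the CEN TC 287 standards.

```python
# Illustrative sketch (assumed data layout and aggregation rule): infer the
# direction relation between two districts from the direction relations
# recorded between their member municipalities.
from collections import Counter

# direction[(a, b)] = compass direction of municipality a relative to b (example data)
direction = {("Braga", "Porto"): "N", ("Guimaraes", "Matosinhos"): "N",
             ("Barcelos", "Gaia"): "NW"}
district_of = {"Braga": "Braga", "Guimaraes": "Braga", "Barcelos": "Braga",
               "Porto": "Porto", "Matosinhos": "Porto", "Gaia": "Porto"}

def infer_district_direction(d1, d2):
    # Collect directions between every known pair (m1 in d1, m2 in d2)
    # and take the most frequent one as the inferred district-level relation.
    votes = Counter(dir_ for (m1, m2), dir_ in direction.items()
                    if district_of[m1] == d1 and district_of[m2] == d2)
    return votes.most_common(1)[0][0] if votes else None

print(infer_district_direction("Braga", "Porto"))  # -> 'N'
```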

    Structural Analysis of Glazed Tubular Tiles of Oriental Architectures Based on 3D Point Clouds for Cultural Heritage

    Laser scanning, along with its resultant 3D point clouds, constitutes a prevalent method for the documentation of cultural heritage. This paper introduces a novel workflow, based on 3D point clouds, for the structural analysis of the glazed tubular tiles that adorn the roofs of historical buildings in the Orient. The workflow integrates a robust segmentation algorithm that uses the maximum principal curvature and normal vectors. Moreover, clustering algorithms, including DBSCAN, are incorporated to refine the clusters and thus increase segmentation accuracy. Structural analysis is enabled by cylindrical model fitting, which allows the estimation of parameters and residuals. While the results show commendable performance in individual tile segmentation, the impact of substantial variations in scanning range and incidence angle must be addressed before the fitting step of the structural analysis. The experimental results demonstrate that, under significantly large scanning angles, the root mean square error (RMSE) for inadequately fitted tiles can reach 0.066 m, more than twice the RMSE observed for well-fitted tiles. The proposed workflow proves to be applicable and shows significant potential to advance practices in cultural heritage documentation.
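
    A minimal sketch of the two steps the workflow combines: clustering candidate tile points with DBSCAN and fitting a cylinder to each cluster by non-linear least squares, reporting the RMSE of the residuals. The parameter values and the cylinder parametrisation (axis point, axis direction angles, radius) are assumptions for the illustration, not the paper's implementation.

```python
# Sketch (assumed parameters, not the paper's implementation): cluster tile
# points with DBSCAN, then fit a cylinder to each cluster and report the RMSE
# of the point-to-surface residuals.
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.optimize import least_squares

def cylinder_residuals(params, pts):
    # params: point on axis (3), axis direction angles (2), radius (1)
    px, py, pz, theta, phi, r = params
    axis = np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])                     # unit axis direction
    d = pts - np.array([px, py, pz])
    dist_to_axis = np.linalg.norm(np.cross(d, axis), axis=1)
    return dist_to_axis - r                              # signed surface residual

def fit_tiles(points, eps=0.02, min_samples=30):
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    fits = {}
    for label in set(labels) - {-1}:                     # skip DBSCAN noise points
        pts = points[labels == label]
        # initial guess: centroid, horizontal axis, 5 cm radius (assumed)
        x0 = np.concatenate([pts.mean(axis=0), [np.pi / 2, 0.0, 0.05]])
        sol = least_squares(cylinder_residuals, x0, args=(pts,))
        rmse = np.sqrt(np.mean(sol.fun ** 2))
        fits[label] = (sol.x, rmse)
    return fits
```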

    Finding localized associations in market basket data


    Method and system for data clustering for very large databases

    Multi-dimensional data contained in very large databases is efficiently and accurately clustered to determine patterns therein and extract useful information from such patterns. Conventional computer processors may be used which have limited memory capacity and conventional operating speed, allowing massive data sets to be processed in a reasonable time and with reasonable computer resources. The clustering process is organized using a clustering feature tree structure wherein each clustering feature comprises the number of data points in the cluster, the linear sum of the data points in the cluster, and the square sum of the data points in the cluster. A dense region of data points is treated collectively as a single cluster, and points in sparsely occupied regions can be treated as outliers and removed from the clustering feature tree. The clustering can be carried out continuously with new data points being received and processed, and with the clustering feature tree being restructured as necessary to accommodate the information from the newly received data points
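
    The clustering feature described above (point count, linear sum, square sum) is sufficient to compute a cluster's centroid and radius and to merge sub-clusters without revisiting the raw data points. A minimal sketch of such a structure follows; the field names and radius formula are the standard ones for clustering feature trees, but the code is only an illustration, not the patented implementation.

```python
# Minimal sketch of a clustering feature (CF): the triple (N, LS, SS) is all
# that needs to be stored per cluster; centroid, radius and merges follow
# from it without re-reading the underlying data points.
import numpy as np

class ClusteringFeature:
    def __init__(self, dim):
        self.n = 0                     # number of points in the cluster
        self.ls = np.zeros(dim)        # linear sum of the points
        self.ss = 0.0                  # sum of squared norms of the points

    def add(self, x):
        x = np.asarray(x, dtype=float)
        self.n += 1
        self.ls += x
        self.ss += float(x @ x)

    def merge(self, other):
        # merging two sub-clusters is just adding their clustering features
        self.n += other.n
        self.ls += other.ls
        self.ss += other.ss

    def centroid(self):
        return self.ls / self.n

    def radius(self):
        # average distance of the points from the centroid:
        # R^2 = SS/N - ||centroid||^2
        c = self.centroid()
        return np.sqrt(max(self.ss / self.n - float(c @ c), 0.0))
```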

    Detection of outliers and outliers clustering on large datasets with distributed computing

    Master's thesis in Informatics, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2012. Outlier detection is a data-analysis problem of great importance in diverse scientific fields, with many applications. Without a definitive formal definition, and known by several other names (deviations, anomalies, exceptions, noise, atypical data), outliers are, succinctly, the samples in a dataset that, for some reason, differ from the rest of the set. It may be of interest either to remove them, as a filtering step that smooths the data, or to collect them into a new dataset holding potentially relevant additional information. Their importance can be seen from the broad range of applications, such as fraud or intrusion detection, specialised pattern recognition, data filtering, scientific data mining, and medical diagnosis. Although an old problem, with roots in Statistics, outlier detection has become more pertinent than ever, and also harder to deal with. Better and more ubiquitous means of data acquisition, together with constantly growing storage capacity, have made datasets grow considerably in recent years, along with their number and availability. Larger volumes of data are harder to explore and filter, while at the same time data treatment and analysis become ever more demanded and fundamental. Distributed computing is a computer-science paradigm for spreading hard, complex problems across several independent machines connected by a network: a problem is broken down into simpler sub-problems that are solved simultaneously by the autonomous machines, and the resulting sub-solutions are collected and combined into a final solution. Distributed computing addresses the physical and economic limits of scaling up hardware by building up computational capacity as needed through the addition of new machines, not necessarily new or advanced models, but any commodity hardware.
    This work presents several distributed-computing algorithms for outlier detection, starting from a distributed version of an existing algorithm, CURIO[9], and introducing a series of optimisations and variants that lead to a new method, Curio3XD, which resolves the two issues typical of this problem: the constraints imposed by the size and by the dimensionality of the datasets. The final version, and its variant, is applicable to any volume of data, by scaling the hardware in the distributed system, and to high-dimensional datasets, by replacing the original exponential dependency on the dimension with a quadratic dependency on the local density of the data, easily tunable through an algorithm parameter, the precision. Intermediate versions are presented to clarify the process that led to the final method, and as an alternative approach that may be useful with very sparse datasets. As the distributed-computing environment, with full support for the distributed system and the underlying hardware infrastructure, Apache Hadoop[23] was chosen as the platform for development, implementation and testing, owing to its power and flexibility combined with relatively easy use. It is an open-source solution, well studied and documented, employed by several major companies, and applicable both to clouds and to local clusters. The algorithms and their variants were developed within the MapReduce programming model and implemented in the Hadoop framework, which supports that model. MapReduce was conceived to allow distributed applications to be deployed in a simple, developer-oriented way, keeping the focus on the programmatic solution of the problem and leaving the control and maintenance of the underlying distributed network entirely transparent. The implementations developed are included in an appendix. Tests with an adapted real-world dataset showed very good performance of the final versions of the algorithms, with excellent scalability in both the size and the dimensionality of the data, as predicted theoretically. Performance tests with the precision parameter and comparative tests between all the variants developed are also presented and discussed.
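
    The abstract does not spell out Curio3XD itself, but the density-based, grid-counting family it belongs to maps naturally onto MapReduce. The following Hadoop-streaming-style sketch, in which the cell size (precision), the density threshold and the input format are assumptions made for the illustration rather than details of the thesis, counts points per grid cell so that points falling in sparsely populated cells can be flagged as outlier candidates.

```python
# Illustrative Hadoop-streaming-style mapper/reducer (not Curio3XD itself):
# quantise each point to a grid cell at a chosen precision, count points per
# cell, and flag cells whose count falls below a density threshold.
import sys

PRECISION = 1.0     # assumed cell size (the "precision" knob)
THRESHOLD = 5       # assumed minimum count for a cell to be considered dense

def mapper(lines):
    # input: one comma-separated point per line -> emit (cell_id, 1)
    for line in lines:
        coords = [float(v) for v in line.strip().split(",")]
        cell = ",".join(str(int(c // PRECISION)) for c in coords)
        print(f"{cell}\t1")

def reducer(lines):
    # input: "cell\t1" lines sorted by cell -> emit sparse cells as candidates
    current, count = None, 0
    for line in lines:
        cell, _ = line.rstrip("\n").split("\t")
        if cell != current:
            if current is not None and count < THRESHOLD:
                print(f"outlier-cell\t{current}\t{count}")
            current, count = cell, 0
        count += 1
    if current is not None and count < THRESHOLD:
        print(f"outlier-cell\t{current}\t{count}")

if __name__ == "__main__":
    # run as the map or reduce step of a Hadoop streaming job, e.g.
    # "python grid_count.py map" / "python grid_count.py reduce"
    (mapper if sys.argv[1] == "map" else reducer)(sys.stdin)
```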

    Direct Manipulation Querying of Database Systems.

    Database systems are tremendously powerful and useful, as evidenced by their popularity in modern business. Unfortunately, using a database is still a daunting task for non-expert users because of its poor usability. This PhD dissertation examines the stages of the information-seeking process and proposes techniques that let users interact with a database through direct manipulation, which has proven to be a natural interaction paradigm. For the first stage, query formulation, we proposed a spreadsheet algebra upon which a direct-manipulation interface for database querying can be built; the algebra is powerful (capable of expressing at least all single-block SQL queries) and can be implemented intuitively in a spreadsheet. In addition, we proposed assisted querying by browsing, in which we help users query the database through browsing. For the second stage, result review, instead of asking users to review possibly many results in a flat table, we proposed a hierarchical navigation scheme that lets users browse the results through representatives, with easy drill-down and filtering capabilities, together with an efficient tree-based method for generating the representatives. For the query refinement stage, we proposed and implemented a provenance-based automatic refinement framework: users label a set of output tuples and the framework produces a ranked list of changes that best improve the query. This dissertation significantly lowers the barrier for non-expert users and reduces the effort required of expert users when working with a database.
    Ph.D., Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/86282/1/binliu_1.pd
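
    As an illustration of the kind of mapping a spreadsheet algebra enables, the sketch below compiles a small chain of spreadsheet-style operations into a single-block SQL query. The operator names and the translation rules are invented for this example and are not the dissertation's algebra.

```python
# Illustrative sketch (invented operator names, not the dissertation's algebra):
# compile a chain of spreadsheet-style operations into one single-block SQL query.
def to_sql(table, ops):
    select, where, group_by, order_by = ["*"], [], [], []
    for op in ops:
        kind = op["op"]
        if kind == "filter":                      # e.g. hide rows failing a predicate
            where.append(op["predicate"])
        elif kind == "sort":                      # e.g. click a column header
            order_by.append(f"{op['column']} {op.get('dir', 'ASC')}")
        elif kind == "group":                     # e.g. collapse rows by a column
            group_by.append(op["column"])
            select = group_by + [op.get("aggregate", "COUNT(*)")]
    sql = f"SELECT {', '.join(select)} FROM {table}"
    if where:
        sql += " WHERE " + " AND ".join(where)
    if group_by:
        sql += " GROUP BY " + ", ".join(group_by)
    if order_by:
        sql += " ORDER BY " + ", ".join(order_by)
    return sql

print(to_sql("orders", [
    {"op": "filter", "predicate": "amount > 100"},
    {"op": "group", "column": "region", "aggregate": "SUM(amount)"},
    {"op": "sort", "column": "region"},
]))
# SELECT region, SUM(amount) FROM orders WHERE amount > 100
#   GROUP BY region ORDER BY region ASC
```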