One-class classifiers based on entropic spanning graphs
One-class classifiers offer valuable tools to assess the presence of outliers
in data. In this paper, we propose a design methodology for one-class
classifiers based on entropic spanning graphs. Our approach can also handle
non-numeric data by means of an embedding procedure. The spanning graph is
learned on the embedded input data, and the resulting partition of vertices
defines the classifier. The final partition is
derived by exploiting a criterion based on mutual information minimization.
Here, we compute the mutual information by using a convenient formulation
provided in terms of the α-Jensen difference. Once training is
completed, in order to associate a confidence level with the classifier
decision, a graph-based fuzzy model is constructed. The fuzzification process
is based only on topological information of the vertices of the entropic
spanning graph. As such, the proposed one-class classifier is also suitable for
data characterized by complex geometric structures. We provide experiments on
well-known benchmarks containing both feature vectors and labeled graphs. In
addition, we apply the method to the protein solubility recognition problem by
considering several representations for the input samples. Experimental results
demonstrate the effectiveness and versatility of the proposed method with
respect to other state-of-the-art approaches.
Comment: Extended and revised version of the paper "One-Class Classification Through Mutual Information Minimization" presented at the 2016 IEEE IJCNN, Vancouver, Canada.
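A toy sketch of the core ingredient above, a Euclidean minimum spanning tree over the training data whose edge-length statistics drive the decision, is given below. This is not the paper's classifier: the quantile-based rule and the `one_class_score` helper are hypothetical simplifications, assuming plain numeric feature vectors rather than embedded non-numeric data.

```python
import numpy as np

def mst_edge_lengths(X):
    """Prim's algorithm: edge lengths of the Euclidean MST over the rows of X."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # pairwise distances
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = D[0].copy()            # best[i] = distance from vertex i to the tree
    lengths = []
    for _ in range(n - 1):
        best[in_tree] = np.inf
        j = int(np.argmin(best))  # closest vertex outside the tree
        lengths.append(best[j])
        in_tree[j] = True
        best = np.minimum(best, D[j])
    return np.array(lengths)

def one_class_score(X_train, x, quantile=0.95):
    """Hypothetical rule: flag x as an outlier if its distance to the nearest
    training point exceeds a quantile of the training MST edge lengths."""
    thresh = np.quantile(mst_edge_lengths(X_train), quantile)
    d = np.min(np.linalg.norm(X_train - x, axis=1))
    return d, d > thresh
```

The MST edge lengths merely supply a data-driven length scale here; the actual method instead derives its decision regions from the entropic spanning graph and mutual information minimization.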
Learning Embeddings for Graphs and Other High Dimensional Data
An immense amount of data is produced daily, and extracting knowledge from it proves fruitful for many scientific purposes. Machine learning algorithms are a means to that end and have grown from a nascent research field into omnipresent algorithms running in the background of many applications we use every day. Low-dimensional data is highly conducive to efficient machine learning methods; real-world data, however, is seldom low-dimensional and can on the contrary be starkly high-dimensional. Such high-dimensional data is exemplified by graph-structured data, such as biological protein-protein interaction networks and social networks, to which machine learning techniques in their traditional form cannot easily be applied.
The focus of this report is thus to explore algorithms that generate representation vectors encoding the structural information of graph vertices. These vectors can in turn be passed on to downstream machine learning algorithms to classify nodes or predict links between them. The study is prefaced by an introduction to dimensionality reduction techniques for data residing in geometric spaces, followed by two techniques for embedding graph vertices into low-dimensional spaces.
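One classical instance of embedding graph vertices into a low-dimensional space is Laplacian eigenmaps, sketched below as an illustration; the report also covers random-walk approaches (e.g. DeepWalk-style methods), which are not shown here.

```python
import numpy as np

def spectral_node_embedding(A, dim=2):
    """Laplacian eigenmaps sketch: embed each vertex with the eigenvectors of
    L = D - A belonging to the smallest nonzero eigenvalues."""
    L = np.diag(A.sum(axis=1)) - A
    vals, vecs = np.linalg.eigh(L)   # eigenvalues returned in ascending order
    return vecs[:, 1:dim + 1]        # skip the constant (eigenvalue-0) vector
```

On a toy graph of two triangles joined by a single edge, the first embedding coordinate (the Fiedler vector) already separates the two communities, which is exactly the structural information a downstream classifier would exploit.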
An interactive analysis of harmonic and diffusion equations on discrete 3D shapes
Recent results in geometry processing have shown that shape segmentation, comparison, and analysis can be successfully addressed through the spectral properties of the Laplace–Beltrami operator, which is involved in the harmonic equation, the Laplacian eigenproblem, the heat diffusion equation, and the definition of spectral distances, such as the bi-harmonic, commute time, and diffusion distances. In this paper, we study the discretization and the main properties of the solutions to these equations on 3D surfaces and their applications to shape analysis. Among the main factors that influence their computation, as well as the corresponding distances, we focus our attention on the choice of different Laplacian matrices, initial boundary conditions, and input shapes. These degrees of freedom motivate our choice to address this study through the executable paper, which allows the user to perform a large set of experiments and select his/her own parameters. Finally, we represent these distances in a unified way and provide a simple procedure to generate new distances on 3D shapes.
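The diffusion distance mentioned in this abstract has a compact spectral form: d_t(x, y)^2 = sum_i exp(-2*t*lambda_i) * (phi_i(x) - phi_i(y))^2, where (lambda_i, phi_i) are the Laplacian eigenpairs. A minimal sketch, using a path-graph Laplacian as a stand-in for a discretized 3D shape (the paper works with triangle meshes and several Laplacian discretizations, not shown here):

```python
import numpy as np

def diffusion_distance(L, t):
    """Pairwise diffusion distances from the eigenpairs (lam_i, phi_i) of L:
    d_t(x, y)^2 = sum_i exp(-2*t*lam_i) * (phi_i(x) - phi_i(y))^2."""
    vals, vecs = np.linalg.eigh(L)
    w = np.exp(-2.0 * t * vals[1:])          # drop the constant mode
    Phi = vecs[:, 1:]
    diff = Phi[:, None, :] - Phi[None, :, :]  # phi_i(x) - phi_i(y), all pairs
    return np.sqrt((w * diff ** 2).sum(axis=2))
```

The exponential weights damp high-frequency eigenvectors, so for larger t the distance reflects increasingly coarse, global structure of the shape.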
Evaluation of whole graph embedding techniques for a clustering task in the manufacturing domain
Production systems in manufacturing consume and generate data. Representing the relationships between subsystems and their associated data is complex, but well suited to Knowledge Graphs (KGs), which allow us to visualize the relationships between subsystems and store their measurement data. In this work, KGs act as a feature engineering technique for a clustering task: the KGs are converted into Euclidean space with so-called graph embeddings, which then serve as input to a clustering algorithm. The Python library Karate Club offers 10 different techniques for embedding whole graphs, i.e., generating a single vector per graph. These have been successfully tested on benchmark datasets covering social media platforms and chemical or biochemical structures. This work presents the potential of graph embeddings for a clustering task in the manufacturing domain by modifying and evaluating Karate Club's techniques on a manufacturing dataset. First, an introduction to graph theory is given and the state of the art in whole-graph embedding techniques is explained. Second, the Bosch production line dataset is examined with an Exploratory Data Analysis (EDA), and a graph data model for directed and undirected graphs is defined based on the results. Third, a data processing pipeline is developed to generate graph embeddings from the raw data. Finally, the graph embeddings are used as input to a clustering algorithm, and a quantitative comparison of the techniques' performance is conducted.
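To illustrate what a whole-graph embedding produces (one vector per graph, ready for clustering), the sketch below uses a truncated Laplacian spectrum as a hand-rolled graph signature. This is a deliberately simple stand-in for Karate Club models such as Graph2Vec, not a reproduction of the paper's pipeline.

```python
import numpy as np

def spectral_signature(A, k=4):
    """Whole-graph embedding sketch: the k smallest eigenvalues of the
    graph Laplacian L = D - A, zero-padded to a fixed length so graphs of
    different sizes map into the same Euclidean space."""
    L = np.diag(A.sum(axis=1)) - A
    vals = np.sort(np.linalg.eigvalsh(L))
    out = np.zeros(k)
    m = min(k, len(vals))
    out[:m] = vals[:m]
    return out
```

Structurally identical graphs get identical signatures, while structurally different graphs land apart, which is the property the clustering step relies on.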
Partitioning Well-clustered Graphs with k-Means and Heat Kernel
We study a suitable class of well-clustered graphs that admit good k-way partitions, and present the first almost-linear-time algorithm with almost-optimal approximation guarantees for partitioning such graphs. A good k-way partition is a partition of the vertices of a graph into disjoint clusters (subsets), such that each cluster is better connected on the inside than towards the outside. This problem is a key building block in algorithm design and has wide applications in community detection and network analysis. Key to our result is a theorem on the multi-cut and eigenvector structure of the graph Laplacians of these well-clustered graphs. Based on this theorem, we give the first rigorous guarantees on the approximation ratios of the widely used k-means clustering algorithms. We also give an almost-linear-time algorithm based on heat kernel embeddings and approximate nearest neighbor data structures.
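The embed-then-cluster pipeline of this last abstract can be sketched as follows. The heat-kernel-weighted eigenvector embedding and the plain Lloyd's k-means below are simplified stand-ins: the paper's algorithm additionally relies on approximate nearest neighbor data structures to reach almost-linear time, which this dense toy version does not attempt.

```python
import numpy as np

def heat_kernel_embedding(A, k, t=1.0):
    """Embed each vertex x as (exp(-t*lam_i) * phi_i(x)) for the k smallest
    Laplacian eigenpairs -- a dense, simplified heat kernel embedding."""
    L = np.diag(A.sum(1)) - A
    vals, vecs = np.linalg.eigh(L)
    return np.exp(-t * vals[:k]) * vecs[:, :k]

def kmeans(Y, k, iters=50):
    """Plain Lloyd's k-means with deterministic farthest-point initialisation."""
    idx = [0]
    for _ in range(k - 1):
        d = np.min(((Y[:, None] - Y[idx][None]) ** 2).sum(2), axis=1)
        idx.append(int(np.argmax(d)))
    C = Y[idx].astype(float)
    for _ in range(iters):
        labels = np.argmin(((Y[:, None] - C[None]) ** 2).sum(2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = Y[labels == j].mean(0)
    return labels
```

On a well-clustered toy graph (two triangles joined by one edge), k-means on this embedding recovers the two clusters, which is the behavior the paper's structure theorem makes rigorous.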