47 research outputs found

    Reprezentacije i metrike za mašinsko učenje i analizu podataka velikih dimenzija (Representations and Metrics for Machine Learning and Analysis of High-Dimensional Data)

    In the current information age, massive amounts of data are gathered at a rate that prohibits their effective structuring, analysis, and conversion into useful knowledge. This information overload is manifested both in the large numbers of data objects recorded in data sets and in the large numbers of attributes, also known as high dimensionality. This dissertation deals with problems originating from the high dimensionality of data representation, referred to as the “curse of dimensionality,” in the context of machine learning, data mining, and information retrieval. The described research follows two angles: studying the behavior of (dis)similarity metrics with increasing dimensionality, and exploring feature-selection methods, primarily with regard to document representation schemes for text classification. The main results of the dissertation relevant to the first research angle include theoretical insights into the concentration behavior of cosine similarity, and a detailed analysis of the phenomenon of hubness, which refers to the tendency of some points in a data set to become hubs by being included in unexpectedly many k-nearest-neighbor lists of other points. The mechanisms behind the phenomenon are studied in detail, both from a theoretical and an empirical perspective, linking hubness with the (intrinsic) dimensionality of data, describing its interaction with the cluster structure of data and the information provided by class labels, and demonstrating the interplay of the phenomenon with well-known algorithms for classification, semi-supervised learning, clustering, and outlier detection, with special consideration given to time-series classification and information retrieval. Results pertaining to the second research angle include quantification of the interaction between various transformations of high-dimensional document representations and feature selection, in the context of text classification.
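
    The hubness notion used above can be made concrete with a small sketch: for each point, count how often it appears in the k-nearest-neighbor lists of the other points (its k-occurrence N_k); in high-dimensional data this distribution becomes strongly right-skewed, with a few hubs appearing in very many lists. The sketch below is a minimal illustration with synthetic Gaussian data and cosine distance; the data sizes, k, and the skewness summary are illustrative choices, not the dissertation's experimental setup.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import skew

def k_occurrence(X, k=10, metric="cosine"):
    """For each point, count how many k-NN lists of other points contain it (N_k)."""
    d = cdist(X, X, metric=metric)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbour
    knn = np.argsort(d, axis=1)[:, :k]   # indices of each point's k nearest neighbours
    return np.bincount(knn.ravel(), minlength=len(X))

# Illustrative data: the skew of N_k typically grows with (intrinsic) dimensionality.
rng = np.random.default_rng(0)
for dim in (3, 50, 1000):
    X = rng.normal(size=(1000, dim))
    n_k = k_occurrence(X, k=10)
    print(f"d={dim:5d}  skewness of N_k = {skew(n_k):.2f}")
```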

    A Survey on Intent-based Diversification for Fuzzy Keyword Search

    Keyword search is the process of finding important and relevant information from various data repositories. Structured and semi-structured data can be stored precisely; fully unstructured documents can be annotated and stored in the form of metadata. Roughly half of all web searches are part of an information-exploration process. In this paper, earlier works on the semantic meaning of keywords, based on their context in the specified documents, are thoroughly analyzed. In a tree data representation, the nodes are objects and can carry some intention; these nodes act as anchors for a Smallest Lowest Common Ancestor (SLCA) based pruning process. Nodes are clustered based on their features, where a feature is a distinctive attribute: a quality, property, or trait of something. Automatic text-classification algorithms are the modern way to perform feature extraction, while summarization and segmentation produce n consecutive grams from various forms of documents. The set of items that describes and summarizes one important aspect of a query is known as a facet. Instead of exact string matching, the current trend is fuzzy mapping based on semantic correlation, where the correlation is quantified by cosine similarity. Once an outlier is detected, nearest neighbors of the selected points are mapped to the same hash code as the intent nodes with high probability. These methods collectively retrieve the relevant data and prune out the unnecessary data, while at the same time creating a hash signature for the nearest-neighbor search. This survey emphasizes the need for a framework for fuzzy-oriented keyword search.
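
    The fuzzy-matching and hashing steps referred to above can be sketched roughly as follows: keywords are mapped to character n-gram vectors, the correlation between a query and an indexed keyword is scored with cosine similarity, and a signed-random-projection hash sends nearby vectors to the same code with high probability. The n-gram representation and the 16-bit hash length are illustrative assumptions, not a specific system from the surveyed papers.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

keywords = ["information retrieval", "informaton retreival",
            "keyword search", "outlier detection"]

# Fuzzy representation: character 3-gram counts tolerate misspellings.
vectorizer = CountVectorizer(analyzer="char_wb", ngram_range=(3, 3))
X = vectorizer.fit_transform(keywords)

# Cosine similarity quantifies the semantic/string correlation of query and keywords.
query = vectorizer.transform(["information retrieval"])
print(cosine_similarity(query, X).round(2))

# Signed random projections: nearby vectors collide on the same hash code with
# high probability (locality-sensitive hashing for cosine similarity).
rng = np.random.default_rng(0)
planes = rng.normal(size=(X.shape[1], 16))      # 16-bit hash, illustrative length
codes = (X.toarray() @ planes > 0).astype(int)
print(codes[:2])
```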

    Exploração das propriedades de hubness para detecção semissupervisionada de outliers em dados de alta dimensão (Exploiting Hubness Properties for Semi-Supervised Outlier Detection in High-Dimensional Data)

    With the increase in the amount of data stored, the area of data mining has become essential for manipulating and extracting knowledge from these data. Much of the work in this area focuses on finding patterns in the data, but non-standard data (anomalies) can also add much to the knowledge of the data set under study. The study, development, and enhancement of outlier-detection techniques are important objectives and have proven useful in several scenarios, such as fraud detection, intrusion detection, and monitoring of medical conditions, among others. The work presented here describes a novel method for semi-supervised detection of outliers in high-dimensional data. Experiments with several real datasets indicate the superiority of the proposed method over the literature methods selected as baselines.
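
    The general intuition behind using hubness for outlier detection can be illustrated with a small sketch: points that appear in very few k-nearest-neighbor lists of other points (antihubs) are natural outlier candidates, and in a semi-supervised setting the few available labels can calibrate a score threshold. This is only an illustration of the antihub idea under assumed synthetic data and labelled inliers, not the specific method proposed in the dissertation.

```python
import numpy as np
from scipy.spatial.distance import cdist

def antihub_scores(X, k=10):
    """Outlierness as inverse k-occurrence: antihubs (rarely chosen as neighbours) score high."""
    d = cdist(X, X, metric="euclidean")
    np.fill_diagonal(d, np.inf)
    knn = np.argsort(d, axis=1)[:, :k]
    n_k = np.bincount(knn.ravel(), minlength=len(X))
    return 1.0 / (1.0 + n_k)

# Semi-supervised use: calibrate the threshold on a handful of labelled inliers.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(500, 50)),            # bulk of the data
               rng.normal(5.0, 1.0, size=(10, 50))])   # a small shifted group (outliers)
scores = antihub_scores(X, k=10)
labelled_inliers = scores[:20]                # assume the first 20 points are known inliers
threshold = labelled_inliers.max()            # flag anything scoring above every known inlier
print("flagged indices:", np.where(scores > threshold)[0])
```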

    Exploring 3D Data and Beyond in a Low Data Regime

    3D object classification of point clouds is an essential task, as laser scanners and other depth sensors that produce point clouds are now a commodity on, e.g., autonomous vehicles, surveying vehicles, service robots, and drones. There have been fewer advances using deep learning methods in the area of point clouds compared to 2D images and videos, partially because the data in a point cloud are typically unordered, as opposed to the pixels in a 2D image, which means that standard deep learning architectures are not directly applicable. Additionally, we identify a shortage of labelled 3D data in many computer vision tasks, as collecting 3D data is significantly more costly and difficult. This motivates zero- or few-shot learning approaches, where some classes have not been observed often, or at all, during training. As our first objective, we study the problem of 3D object classification of point clouds in a supervised setting, where there are labelled samples for each class in the dataset. To this end, we introduce the 3DCapsule, a 3D extension of the recently introduced Capsule concept by Hinton et al. that makes it applicable to unordered point sets. The 3DCapsule is a drop-in replacement for the commonly used fully connected classifier. It is demonstrated that when the 3DCapsule is applied to contemporary 3D point set classification architectures, it consistently shows an improvement, in particular when subjected to noisy data. We then turn our attention to the problem of 3D object classification of point clouds in a Zero-Shot Learning (ZSL) setting, where there are no labelled data for some classes. Several recent 3D point cloud recognition algorithms are adapted to the ZSL setting, with some necessary changes to their respective architectures. To the best of our knowledge, at the time, this was the first attempt to classify unseen 3D point cloud objects in a ZSL setting. A standard protocol (which includes the choice of datasets and determines the seen/unseen split) to evaluate such systems is also proposed. In the next contribution, we address the hubness problem on 3D point cloud data, which arises when a model is biased toward predicting only a few particular labels for most of the test instances. To this end, we propose a loss function that is useful for both Zero-Shot and Generalized Zero-Shot Learning. In addition, we tackle 3D object classification of point clouds in a different setting, called the transductive setting, wherein the test samples may be observed during the training stage, but only as unlabelled data. We extend, for the first time, transductive Zero-Shot Learning (ZSL) and Generalized Zero-Shot Learning (GZSL) approaches to the domain of 3D point cloud classification by developing a novel triplet loss that takes advantage of the unlabelled test data. While designed for the task of 3D point cloud classification, the method is also shown to be applicable to the more common use case of 2D image classification. Lastly, we study the Generalized Zero-Shot Learning (GZSL) problem in the 2D image domain, and also demonstrate that our proposed method is applicable to 3D point cloud data. We propose using a mixture of subspaces that represents input features and semantic information in a way that reduces the imbalance between seen and unseen prediction scores. Subspaces define the cluster structure of the visual domain and help describe the visual and semantic domains with respect to the overall distribution of the data.
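
    The hubness problem mentioned above, a model predicting only a few labels for most test instances, can be quantified directly: classify test embeddings by their nearest class prototype and inspect how unevenly the predictions are distributed, e.g. via the skewness of the per-class prediction counts. The prototype matrix, test features, and skewness summary below are illustrative placeholders, not the thesis's models or its proposed loss.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
prototypes = rng.normal(size=(40, 300))     # one semantic embedding per (unseen) class
test_feats = rng.normal(size=(5000, 300))   # projected test-instance features

# Nearest-prototype prediction under cosine similarity.
p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
t = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
pred = (t @ p.T).argmax(axis=1)

# Hubness shows up as a heavily skewed prediction-count distribution:
# a handful of classes absorb most of the test instances.
counts = np.bincount(pred, minlength=len(prototypes))
print("prediction counts per class:", counts)
print("skewness of counts:", round(float(skew(counts)), 2))
```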

    Information-theoretic graph mining

    Real-world data from various application domains can be modeled as a graph, e.g. social networks and biomedical networks such as protein interaction networks or co-activation networks of brain regions. A graph is a powerful concept for modeling arbitrary (structural) relationships among objects. In recent years, the prevalence of social networks has made graph mining an important center of attention in the data mining field. There are many important tasks in graph mining, such as graph clustering, outlier detection, and link prediction, and many algorithms have been proposed in the literature to solve them. However, these issues are normally solved separately, although they are closely related, and detecting and exploiting the relationships among them is a new challenge in graph mining. Moreover, with the data explosion, more information is being integrated into graph structure. For example, bipartite graphs contain two types of nodes, and graphs with node attributes offer additional non-structural information. Therefore, more challenges arise from the increasing graph complexity. This thesis aims to solve these challenges in order to gain new knowledge from graph data. An important data mining paradigm used in this thesis is the principle of Minimum Description Length (MDL). It follows the assumption that the more knowledge we have learned from the data, the better we are able to compress the data. The MDL principle balances the complexity of the selected model and the goodness of fit between model and data, and thus naturally avoids over-fitting. This thesis proposes several algorithms based on the MDL principle to acquire knowledge from various types of graphs. Info-spot (Automatically Spotting Information-rich Nodes in Graphs) is a parameter-free and efficient algorithm for the fully automatic detection of interesting nodes, a novel outlier notion in graphs. Then, in contrast to traditional graph mining approaches that focus on discovering dense subgraphs, a novel graph mining technique, CXprime (Compression-based eXploiting Primitives), is proposed. It models the transitivity and the hubness of a graph using structure primitives (all possible three-node substructures). Under the coding scheme of CXprime, clusters with structural information can be discovered, dominating substructures of a graph can be distinguished, and a new link prediction score based on substructures is proposed. The next algorithm, SCMiner (Summarization-Compression Miner), integrates tasks such as graph summarization, graph clustering, link prediction, and the discovery of the hidden structure of a bipartite graph on the basis of data compression. Finally, a method for non-redundant graph clustering called IROC (Information-theoretic non-Redundant Overlapping Clustering) is proposed to smartly combine structural information with non-structural information based on MDL. IROC is able to detect overlapping communities within subspaces of the attributes. To sum up, algorithms to unify different learning tasks for various types of graphs are proposed. Additionally, these algorithms are based on the MDL principle, which facilitates the unification of different graph learning tasks, the integration of different graph types, and the automatic selection of input parameters that are otherwise difficult to estimate.
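
    The MDL trade-off the thesis builds on can be written as a standard two-part code: among candidate models, choose the one minimizing the combined cost of describing the model and of describing the data given the model. The notation below is the generic two-part MDL formulation, not a formula quoted from the thesis.

```latex
\[
  M^{*} \;=\; \arg\min_{M \in \mathcal{M}} \;
  \underbrace{L(M)}_{\text{model complexity}}
  \;+\;
  \underbrace{L(D \mid M)}_{\text{data given the model}}
\]
```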