
    Enhancing Domain Word Embedding via Latent Semantic Imputation

    We present a novel method named Latent Semantic Imputation (LSI) for transferring external knowledge into a semantic space to enhance word embeddings. The method uses graph theory to extract the latent manifold structure of the entities in the affinity space, and leverages non-negative least squares with standard simplex constraints and the power iteration method to derive spectral embeddings. It provides an effective and efficient approach to combining entity representations defined in different Euclidean spaces. Specifically, our approach generates and imputes reliable embedding vectors for low-frequency words in the semantic space, benefiting downstream language tasks that depend on word embeddings. We conduct comprehensive experiments on a carefully designed classification problem and on language modeling, and demonstrate the superiority of the LSI-enhanced embeddings over several well-known benchmark embeddings. We also confirm the consistency of the results under different parameter settings of our method. Comment: ACM SIGKDD 2019
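
    The imputation step lends itself to a short sketch: reconstruction weights for each word are obtained by solving a non-negative least squares problem constrained to the standard simplex over its nearest neighbours in the affinity space, and a power-iteration-style update then propagates the known embeddings to the missing rows. This is a minimal reading of the abstract, not the authors' code; the kNN size, the soft simplex penalty, and all function names are illustrative assumptions.

        import numpy as np
        from scipy.optimize import nnls

        def simplex_weights(x, nbr_vecs):
            """Weights w >= 0 with sum(w) = 1 minimizing ||x - nbr_vecs.T @ w||.
            The sum-to-one constraint is softly enforced by an appended,
            heavily weighted row of ones, then made exact by renormalizing."""
            A = np.vstack([nbr_vecs.T, 1e3 * np.ones(len(nbr_vecs))])
            b = np.append(x, 1e3)
            w, _ = nnls(A, b)
            return w / w.sum()

        def impute_embeddings(affinity, emb, known, k=10, iters=100, tol=1e-6):
            """Fill in embedding rows where known is False, anchored on known rows."""
            n = affinity.shape[0]
            W = np.zeros((n, n))
            for i in range(n):
                dists = np.linalg.norm(affinity - affinity[i], axis=1)
                nbrs = np.argsort(dists)[1:k + 1]         # k nearest, excluding self
                W[i, nbrs] = simplex_weights(affinity[i], affinity[nbrs])
            E = emb.copy()
            for _ in range(iters):                        # power-iteration-style pass
                E_next = W @ E
                E_next[known] = emb[known]                # clamp the anchor rows
                if np.linalg.norm(E_next - E) < tol:
                    break
                E = E_next
            return E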

    Cross-lingual Distillation for Text Classification

    Cross-lingual text classification (CLTC) is the task of classifying documents written in different languages into the same taxonomy of categories. This paper presents a novel approach to CLTC that builds on model distillation, adapting and extending a framework originally proposed for model compression. Using soft probabilistic predictions for the documents in a label-rich language as the (induced) supervisory labels in a parallel corpus of documents, we successfully train classifiers for new languages in which no labeled training data are available. An adversarial feature adaptation technique is also applied during model training to reduce distribution mismatch. We conducted experiments on two benchmark CLTC datasets, treating English as the source language and German, French, Japanese, and Chinese as the unlabeled target languages. The proposed approach achieved performance advantageous over or comparable to other state-of-the-art methods. Comment: Accepted at ACL 2017; Code available at https://github.com/xrc10/cross-distil
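
    A minimal sketch of the distillation objective described above, assuming a PyTorch student classifier and precomputed teacher probabilities on the parallel corpus; the temperature T and all names are illustrative, and the adversarial feature adaptation component is omitted.

        import torch
        import torch.nn.functional as F

        def distillation_loss(student_logits, teacher_probs, T=2.0):
            """Soft cross-entropy between the student's tempered distribution and
            the teacher's soft predictions (the induced supervisory labels).
            The T**2 factor restores the gradient scale, as in Hinton et al."""
            log_p = F.log_softmax(student_logits / T, dim=-1)
            return -(teacher_probs * log_p).sum(dim=-1).mean() * T ** 2

        def train_step(student, optimizer, target_batch, teacher_probs):
            """One update: the teacher scored the source-language side of the
            parallel corpus; the student trains on the aligned target side."""
            optimizer.zero_grad()
            loss = distillation_loss(student(target_batch), teacher_probs)
            loss.backward()
            optimizer.step()
            return loss.item()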

    A Multiple Source based Transfer Learning Framework for Marketing Campaigns

    © 2018 IEEE. The rapidly growing number of marketing campaigns demands an efficient learning model for identifying prospective customers to target. Transfer learning is widely considered a major way to improve learning performance by reusing knowledge generated in previous learning tasks. Most recent studies have focused on transferring knowledge from source domains to target domains, which may result in knowledge loss. To avoid this, we propose a multiple-source-based transfer learning framework that works in the reverse direction: data in the target domains are transferred into the source domains by normalizing them into the same distributions, and the learning task in the target domains is then improved using the knowledge generated in the source domains. The proposed method is general: it handles supervised and unsupervised as well as inductive and transductive learning simultaneously, and is compatible with different machine learning models. Experiments on real-world campaign data demonstrate the effectiveness of the proposed method.
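
    One simple reading of "normalizing them into the same distributions" is per-feature moment matching, sketched below; the framework may well use a different normalization, and the function name is an assumption.

        import numpy as np

        def normalize_to_source(target_X, source_X):
            """Map target-domain features onto the source domain's first two
            moments: standardize each feature, then rescale it to the source
            mean and standard deviation."""
            t_mu, t_sd = target_X.mean(axis=0), target_X.std(axis=0) + 1e-12
            s_mu, s_sd = source_X.mean(axis=0), source_X.std(axis=0)
            return (target_X - t_mu) / t_sd * s_sd + s_mu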

    Development of Semantic Semi-Supervised Algorithms for Textual Data Mining

    Murat Can Ganiz, Zeynep Hilal Kilimci (Doğuş University). Textual data mining is the process of extracting useful knowledge from, or automatically organizing, large amounts of textual data; text classification algorithms play an important role in the automatic organization of large document collections. In this field, classification algorithms are called supervised and clustering algorithms are called unsupervised. Between these lie semi-supervised algorithms, which can improve classification accuracy by exploiting plentiful unlabeled data alongside the labeled data. Traditionally, textual data mining algorithms use the bag-of-words model, which treats the words in a text as independent of each other and of their positions. Furthermore, traditional algorithms assume that texts are independent and identically distributed. As a result, this approach ignores the semantic relationships between words and between texts. There has been growing recent interest in work that exploits semantic relationships, especially between words. Using semantic knowledge increases the performance of traditional machine learning algorithms, particularly when the available data are scarce, sparse, or noisy, as is typically the case for the training data in real-world applications; algorithms that can exploit semantic knowledge therefore have great potential in real-world problems. In the first phase of this project, we developed semantic algorithms and methods for supervised textual data mining, which yield performance improvements in text classification and feature selection. In the second phase, building on these methods, we developed semi-supervised text classification algorithms that use both labeled and unlabeled data. During the project, five master's theses were completed, one PhD thesis reached the defense stage, two articles were published in SCI-indexed journals, and eight papers were presented at national and international conferences and symposia. Two further journal articles have been submitted and are under review, and one conference paper and two journal articles covering the findings of the final phase are in preparation. In addition, a university spin-off company related to the project was founded. Funded by TÜBİTAK.
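
    As a concrete illustration of the semi-supervised setting the project targets, the sketch below shows plain self-training with scikit-learn: a classifier trained on the labeled documents repeatedly pseudo-labels the unlabeled documents it is most confident about. This is a generic baseline, not one of the project's semantic algorithms; the threshold and round count are arbitrary.

        import numpy as np
        import scipy.sparse as sp
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.naive_bayes import MultinomialNB

        def self_train(labeled_docs, labels, unlabeled_docs, threshold=0.9, rounds=5):
            """Each round, add confidently pseudo-labeled documents to the
            training set and refit the classifier."""
            vec = TfidfVectorizer()
            X = vec.fit_transform(list(labeled_docs) + list(unlabeled_docs))
            X_lab, X_unl = X[:len(labeled_docs)], X[len(labeled_docs):]
            y = np.asarray(labels)
            pool = np.arange(X_unl.shape[0])              # still-unlabeled indices
            clf = MultinomialNB().fit(X_lab, y)
            for _ in range(rounds):
                if pool.size == 0:
                    break
                probs = clf.predict_proba(X_unl[pool])
                sure = probs.max(axis=1) >= threshold
                if not sure.any():
                    break
                X_lab = sp.vstack([X_lab, X_unl[pool[sure]]])
                y = np.concatenate([y, clf.classes_[probs[sure].argmax(axis=1)]])
                pool = pool[~sure]
                clf = MultinomialNB().fit(X_lab, y)
            return clf, vec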

    Machine Learning in Automated Text Categorization

    The automated categorization (or classification) of texts into predefined categories has witnessed a booming interest in the last ten years, due to the increased availability of documents in digital form and the ensuing need to organize them. In the research community the dominant approach to this problem is based on machine learning techniques: a general inductive process automatically builds a classifier by learning, from a set of preclassified documents, the characteristics of the categories. The advantages of this approach over the knowledge engineering approach (consisting in the manual definition of a classifier by domain experts) are very good effectiveness, considerable savings in expert manpower, and straightforward portability to different domains. This survey discusses the main approaches to text categorization that fall within the machine learning paradigm. We discuss in detail issues pertaining to three different problems, namely document representation, classifier construction, and classifier evaluation. Comment: Accepted for publication in ACM Computing Surveys
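
    The three problems the survey names map directly onto a standard pipeline; a minimal scikit-learn instantiation (tf-idf for document representation, logistic regression for classifier construction, cross-validated macro-F1 for evaluation) might look like this:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        def evaluate(docs, labels):
            """Document representation, classifier construction, and classifier
            evaluation chained into one pipeline."""
            clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
            return cross_val_score(clf, docs, labels, cv=5, scoring="f1_macro")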

    Statistical learning techniques for text categorization with sparse labeled data

    Many applications involve learning a supervised classifier from very few explicitly labeled training examples, since the cost of manually labeling the training data is often prohibitively high. For instance, we expect a good classifier to learn our interests from the few example books or movies we like and recommend similar ones in the future, or we expect a search engine to give more personalized search results based on the little it has learned about our past queries and clicked documents. There is thus a need for classification techniques capable of learning from sparse labeled data, by exploiting additional information about the classification task at hand (e.g., background knowledge) or by employing more sophisticated features (e.g., n-gram sequences, trees, graphs). In this thesis, we focus on two approaches for overcoming the bottleneck of sparse labeled data. We first propose the Inductive/Transductive Latent Model (ILM/TLM), a new generative model for text documents. ILM/TLM has various building blocks designed to facilitate the integration of background knowledge (e.g., unlabeled documents, ontologies of concepts, encyclopedias) into the process of learning from small training sets. Our method can be used for inductive and transductive learning and achieves significant gains over state-of-the-art methods for very small training sets. Second, we propose Structured Logistic Regression (SLR), a new coordinate-wise gradient ascent technique for learning logistic regression in the space of all (word or character) sequences in the training data. SLR exploits the inherent structure of the n-gram feature space to automatically provide a compact set of highly discriminative n-gram features. Our detailed experimental study shows that while SLR achieves classification results similar to those of state-of-the-art methods (which use all n-gram features given explicitly), it is more than an order of magnitude faster than its competitors. The techniques presented in this thesis can be used to advance the technologies for automatically and efficiently building large training sets, thereby reducing the need to spend human effort on this task.
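
    The coordinate-wise gradient ascent at the heart of SLR can be sketched as below. SLR itself never materializes the full n-gram space; it searches it implicitly, pruning n-grams whose gradient bound is too small. This sketch makes all n-grams explicit for clarity, so only the update rule, not the efficiency, reflects the method; names and hyperparameters are illustrative.

        import numpy as np
        from sklearn.feature_extraction.text import CountVectorizer

        def coordinate_ascent_lr(docs, y, ngram_range=(1, 3), epochs=5, lr=0.1):
            """Coordinate-wise gradient ascent for binary logistic regression
            over an explicit word n-gram feature space; y holds labels in {0, 1}."""
            X = CountVectorizer(ngram_range=ngram_range).fit_transform(docs).tocsc()
            y = np.asarray(y, dtype=float)
            w = np.zeros(X.shape[1])
            margin = np.zeros(X.shape[0])                 # running value of X @ w
            for _ in range(epochs):
                for j in range(X.shape[1]):
                    xj = X.getcol(j).toarray().ravel()    # dense column j (sketch only)
                    p = 1.0 / (1.0 + np.exp(-margin))     # current predicted probabilities
                    g = xj @ (y - p)                      # log-likelihood gradient, coordinate j
                    w[j] += lr * g
                    margin += lr * g * xj                 # keep X @ w up to date
            return w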

    Exploring 3D Data and Beyond in a Low Data Regime

    3D object classification of point clouds is an essential task, as laser scanners and other depth sensors that produce point clouds are now a commodity on, e.g., autonomous vehicles, surveying vehicles, service robots, and drones. There have been fewer advances using deep learning methods on point clouds than on 2D images and videos, partly because the points in a point cloud are typically unordered, unlike the pixels in a 2D image, so standard deep learning architectures are not directly applicable. Additionally, we identify a shortage of labelled 3D data in many computer vision tasks, as collecting 3D data is significantly more costly and difficult; this motivates zero- and few-shot learning approaches, where some classes have been observed rarely or not at all during training. As our first objective, we study 3D object classification of point clouds in a supervised setting, where there are labelled samples for each class in the dataset. To this end, we introduce the 3DCapsule, a 3D extension of the Capsule concept recently introduced by Hinton et al. that makes it applicable to unordered point sets. The 3DCapsule is a drop-in replacement for the commonly used fully connected classifier. We demonstrate that when the 3DCapsule is applied to contemporary 3D point set classification architectures, it consistently yields an improvement, in particular on noisy data. We then turn our attention to 3D object classification of point clouds in a Zero-Shot Learning (ZSL) setting, where there are no labelled data for some classes. Several recent 3D point cloud recognition algorithms are adapted to the ZSL setting, with some necessary changes to their respective architectures. To the best of our knowledge, at the time this was the first attempt to classify unseen 3D point cloud objects in a ZSL setting. A standard evaluation protocol, which includes the choice of datasets and determines the seen/unseen split, is also proposed. In the next contribution, we address the hubness problem on 3D point cloud data, which arises when a model is biased toward predicting only a few particular labels for most test instances. To this end, we propose a loss function that is useful for both Zero-Shot and Generalized Zero-Shot Learning. We also tackle 3D object classification of point clouds in the transductive setting, in which the test samples may be observed during the training stage, but only as unlabelled data. We extend, for the first time, transductive Zero-Shot Learning (ZSL) and Generalized Zero-Shot Learning (GZSL) approaches to the domain of 3D point cloud classification by developing a novel triplet loss that takes advantage of the unlabelled test data. While designed for 3D point cloud classification, the method is also shown to be applicable to the more common use case of 2D image classification. Lastly, we study the Generalized Zero-Shot Learning (GZSL) problem in the 2D image domain, and demonstrate that our proposed method also applies to 3D point cloud data. We propose using a mixture of subspaces that represents input features and semantic information in a way that reduces the imbalance between seen and unseen prediction scores. The subspaces define the cluster structure of the visual domain and help describe the visual and semantic domains with respect to the overall distribution of the data.
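
    The transductive triplet loss is described only at a high level above; one hedged reading, sketched in PyTorch, pulls each unlabelled test feature toward its nearest unseen-class prototype (a pseudo-positive) and pushes it away from the second-nearest (a negative). The thesis's actual formulation may differ; the margin and all names are assumptions.

        import torch
        import torch.nn.functional as F

        def transductive_triplet_loss(unseen_feats, unseen_protos, margin=1.0):
            """Hinge loss over each unlabelled test feature's two nearest
            unseen-class prototypes: nearest acts as pseudo-positive,
            second-nearest as negative."""
            d = torch.cdist(unseen_feats, unseen_protos)      # pairwise distances
            top2 = d.topk(2, dim=1, largest=False).values     # two smallest per row
            pos, neg = top2[:, 0], top2[:, 1]
            return F.relu(pos - neg + margin).mean()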