4 research outputs found

    Author Name Disambiguation Using Co-training

    In the community of bibliometrics, author name ambiguity means that an author's name is not a reliable identifier for associating academic papers with their authors. Author name ambiguity has long been a problem for bibliometrics and for service providers like Google Scholar, giving rise to a domain of study called Author Name Disambiguation (AND). Author name ambiguity is often tackled using classification techniques, where labeled papers are provided and papers are assigned to the correct authors according to the paper text and paper citations. When applying classification methods to author name disambiguation, two issues stand out: first, a paper has multiple views (paper text and citation network); second, training data is scarce, since few papers are labeled. To cope with these two issues, we propose to use the co-training algorithm in AND. The co-training algorithm uses two views to classify papers iteratively and adds the top selected papers into the training pool. We demonstrate that the co-training algorithm outperforms the baseline multi-view classification algorithm. We also experiment with hyper-parameters in the co-training algorithm. The experiment is done on the PubMed dataset, where authors are labeled with ORCID. Papers are represented by two embeddings that are learnt from paper content and paper citation network separately. Baseline classifiers for comparison are logistic regression and SVM.
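The iterative two-view scheme described above can be sketched as a generic Blum-Mitchell-style co-training loop. This is an illustrative sketch, not the paper's actual implementation: the pool size per round, the confidence-based selection, and the use of scikit-learn's `LogisticRegression` are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(X1, X2, y, labeled, unlabeled, rounds=10, per_round=2):
    """Blum-Mitchell-style co-training over two feature views.
    Only y[labeled] is treated as known; unlabeled entries of the
    working label array are overwritten with pseudo-labels."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    pseudo_y = np.asarray(y).copy()
    clf1 = LogisticRegression(max_iter=1000)
    clf2 = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf1.fit(X1[labeled], pseudo_y[labeled])
        clf2.fit(X2[labeled], pseudo_y[labeled])
        # Each view pseudo-labels the unlabeled examples it is most
        # confident about and adds them to the shared training pool.
        for clf, X in ((clf1, X1), (clf2, X2)):
            if not unlabeled:
                return clf1, clf2
            probs = clf.predict_proba(X[unlabeled])
            top = np.argsort(probs.max(axis=1))[-per_round:]
            for pos in sorted(top, reverse=True):  # pop high indices first
                pseudo_y[unlabeled[pos]] = clf.classes_[probs[pos].argmax()]
                labeled.append(unlabeled.pop(pos))
    return clf1, clf2
```

At prediction time the two view-specific classifiers can be combined, for example by averaging their class probabilities, which mirrors the multi-view baseline the abstract compares against.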

    Supervised And Semi-supervised Learning Using Informative Feature Subspaces

    Thesis (PhD) -- İstanbul Technical University, Institute of Science and Technology, 2010. In many different fields, such as web mining, bioinformatics, and speech recognition, there is an abundance of unlabeled data and different feature views. Semi-supervised learning algorithms such as co-training aim to make use of unlabeled data. Random (feature) subspace (RAS) methods aim to use different feature subspaces to train different classifiers and combine them in an ensemble. In this thesis, we obtain informative and diverse feature subspaces for classifier ensembles by means of randomly drawing relevant feature subspaces. We then use these ensembles for supervised and semi-supervised learning. Our first algorithm produces relevant random subspaces using mutual-information-based relevance values. This method is used in the Rel-RAS (supervised) and Rel-RASCO (semi-supervised) algorithms. The second algorithm modifies the mRMR (Minimum Redundancy Maximum Relevance) feature selection algorithm to produce random feature subsets that are both relevant and non-redundant. This method is used in the mRMR-RAS (supervised) and mRMR-RASCO (semi-supervised) algorithms. We perform an experimental analysis of our methods on a number of datasets and compare them to existing methods. We also carry out a theoretical analysis of the classifier ensembles produced by our methods using Kohavi-Wolpert (KW) variance, information-theory-based low-order diversity (LOD), and information-theoretic scores (ITS). We find that LOD has a similar tendency to KW-variance and that the ensemble accuracy of the algorithms can be explained using ITS.
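The core idea of drawing relevant random subspaces can be illustrated as follows. This is a hedged sketch of the general mechanism, not the thesis's exact Rel-RAS procedure: the function name, the relevance smoothing constant, and the use of scikit-learn's `mutual_info_classif` as the relevance estimator are assumptions made for the example.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def relevant_random_subspaces(X, y, n_subspaces=5, subspace_size=3, seed=0):
    """Draw feature subspaces at random, with each feature's sampling
    probability proportional to its mutual information with the labels,
    so that subspaces are both diverse (random) and informative (relevant)."""
    rng = np.random.default_rng(seed)
    relevance = mutual_info_classif(X, y, random_state=seed)
    # Small epsilon keeps the distribution valid if all MI estimates are 0.
    p = (relevance + 1e-9) / (relevance + 1e-9).sum()
    return [rng.choice(X.shape[1], size=subspace_size, replace=False, p=p)
            for _ in range(n_subspaces)]
```

One classifier would then be trained per subspace and the classifiers combined in an ensemble, either on labeled data alone (the supervised Rel-RAS setting) or inside a co-training-style loop over unlabeled data (the semi-supervised Rel-RASCO setting).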

    Analyzing co-training style algorithms

    No full text
    Abstract. Co-training is a semi-supervised learning paradigm which trains two learners from two different views and lets the learners label some unlabeled examples for each other. In this paper, we present a new PAC analysis of co-training style algorithms. We show that the co-training process can succeed even without two views, provided that the two learners differ substantially, which explains the success of some co-training style algorithms that do not require two views. Moreover, we theoretically explain why the co-training process cannot improve performance further after a number of rounds, and present a rough estimate of the appropriate round at which to terminate co-training so as to avoid wasteful learning rounds.

    Semi-supervised learning with committees: exploiting unlabeled data using ensemble learning algorithms

    No full text
    Supervised machine learning is a branch of artificial intelligence concerned with computer programs that automatically improve with experience through knowledge extraction from examples. It builds predictive models from labeled data. Such learning approaches are useful for many interesting real-world applications, but are particularly useful for tasks involving the automatic categorization, retrieval and extraction of knowledge from large collections of data such as text, images and videos. In traditional supervised learning, one uses "labeled" data to build a model. However, labeling the training data for real-world applications is difficult, expensive, or time consuming, as it requires the effort of human annotators, sometimes with specific domain experience and training. There are implicit costs associated with obtaining these labels from domain experts, such as limited time and financial resources. This is especially true for applications that involve learning with a large number of class labels, some of which may be similar to one another. Semi-supervised learning (SSL) addresses this inherent bottleneck by allowing the model to integrate part or all of the available unlabeled data in its supervised learning. The goal is to maximize the learning performance of the model through such newly-labeled examples while minimizing the work required of human annotators. Exploiting unlabeled data to help improve learning performance has become a hot topic during the last decade. It is interesting to see that semi-supervised learning and ensemble learning are two important paradigms that were developed almost in parallel and with different philosophies. Semi-supervised learning tries to improve generalization performance by exploiting unlabeled data, while ensemble learning tries to achieve the same objective by using multiple predictors. In this thesis, I concentrate on SSL with committees and especially on co-training style algorithms.
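The combination of the two paradigms described above, using a committee of predictors to decide which unlabeled examples to pseudo-label, can be sketched in one round as follows. This is a generic illustration, not the thesis's specific algorithm: the bootstrap ensemble of decision trees, the agreement threshold, and the function name are assumptions made for the example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def committee_self_label(X_l, y_l, X_u, n_members=5, agree=0.8, seed=0):
    """One round of committee-based self-labeling: bootstrap an ensemble on
    the labeled data, then pseudo-label only the unlabeled points on which
    at least `agree` of the members vote for the same class. Returns the
    indices into X_u that were selected and their pseudo-labels."""
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        idx = rng.choice(len(X_l), size=len(X_l), replace=True)
        members.append(DecisionTreeClassifier(random_state=0)
                       .fit(X_l[idx], y_l[idx]))
    votes = np.stack([m.predict(X_u) for m in members])    # (members, n_u)
    majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
    support = (votes == majority).mean(axis=0)             # vote agreement
    mask = support >= agree
    return np.where(mask)[0], majority[mask]
```

Iterating this round, growing the labeled pool with the confidently-labeled points, gives the committee-based counterpart of the two-learner co-training loop.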