43 research outputs found

    A corpus-based semantic kernel for text classification by using meaning values of terms

    Text categorization plays a crucial role on both academic and commercial platforms due to the growing demand for automatic organization of documents. Kernel-based classification algorithms such as Support Vector Machines (SVM) have become highly popular for text mining, mainly because of their relatively high classification accuracy across several application domains and their ability to handle the high-dimensional and sparse data that are the prohibitive characteristics of textual data representation. Recently there has been increased interest in exploiting background knowledge, such as ontologies and corpus-based statistical knowledge, in text categorization. It has been shown that replacing standard kernel functions such as the linear kernel with customized kernel functions that take advantage of this background knowledge can increase the performance of SVM in the text classification domain. Based on this, we propose a novel semantic smoothing kernel for SVM. The suggested approach is based on a meaning measure, which calculates the meaningfulness of terms in the context of classes. The document vectors are smoothed using these class-contextual meaning values of the terms. Since we make direct use of the class information in the smoothing process, it can be considered a supervised smoothing kernel. The meaning measure is based on the Helmholtz principle from Gestalt theory and has previously been applied to several text mining applications such as document summarization and feature extraction. However, to the best of our knowledge, ours is the first study to use the meaning measure in a supervised setting to build a semantic kernel for SVM. We evaluated the proposed approach in a large number of experiments on well-known textual datasets and present results for different experimental conditions. We compare our results with traditional kernels used in SVM, such as the linear kernel, as well as with several corpus-based semantic kernels. Our results show that the proposed approach outperforms the other kernels in classification performance.
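
    The abstract does not spell out the kernel formula, so the sketch below is only an illustration of a class-based semantic smoothing kernel of this general shape: document vectors are mapped through a term-by-class matrix S of meaning scores and compared in that space, K(d_i, d_j) = (d_i S)(d_j S)^T. The random S used here is a placeholder assumption standing in for the Helmholtz-based meaning values described above.

    # Hypothetical sketch of a supervised semantic smoothing kernel for SVM.
    # S is a (num_terms x num_classes) matrix of per-class term "meaning" scores;
    # here it is random, standing in for the Helmholtz-based meaning measure.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    num_docs, num_terms, num_classes = 20, 100, 3

    X = rng.poisson(0.3, size=(num_docs, num_terms)).astype(float)   # term-frequency vectors
    y = rng.integers(0, num_classes, size=num_docs)
    S = rng.random((num_terms, num_classes))                         # placeholder meaning scores

    def smoothing_kernel(A, B, S):
        """K(a, b) = (a S)(b S)^T: documents are compared in the class-meaning space."""
        return (A @ S) @ (B @ S).T

    K_train = smoothing_kernel(X, X, S)
    clf = SVC(kernel="precomputed").fit(K_train, y)

    X_test = rng.poisson(0.3, size=(5, num_terms)).astype(float)
    K_test = smoothing_kernel(X_test, X, S)          # rows: test docs, cols: train docs
    print(clf.predict(K_test))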

    Higher-order smoothing: a novel semantic smoothing method for text classification

    It is known that latent semantic indexing (LSI) takes advantage of implicit higher-order (or latent) structure in the association of terms and documents; these higher-order relations in LSI capture "latent semantics". These findings inspired a Bayesian classification framework named Higher-Order Naive Bayes (HONB), introduced previously, which can explicitly make use of such higher-order relations. In this paper we present a novel semantic smoothing method named Higher-Order Smoothing (HOS) for the Naive Bayes algorithm. HOS is built on a graph-based data representation similar to that of HONB, which allows the semantics in higher-order paths to be exploited. HOS takes the concept one step further by exploiting the relationships between instances of different classes; as a result, we move not only beyond instance boundaries but also beyond class boundaries to exploit the latent information in higher-order paths. This improves parameter estimation when labeled data are insufficient. Results of our extensive experiments demonstrate the value of HOS on several benchmark datasets.
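
    The paper defines HOS over its own graph representation; as a rough illustration of what "higher-order paths" mean (an assumption, not the authors' exact formulation), the sketch below counts second-order term relations, i.e. terms connected only through a shared bridge term, from a binary document-term matrix.

    # Illustrative sketch: first- vs. higher-order term co-occurrence.
    # A is a binary document-term matrix; A^T A counts shared documents (first order),
    # and squaring that term-term graph exposes length-2 (higher-order) paths.
    import numpy as np

    A = np.array([[1, 1, 0, 0],    # doc1: t1, t2
                  [0, 1, 1, 0],    # doc2: t2, t3
                  [0, 0, 1, 1]])   # doc3: t3, t4

    first_order = A.T @ A                       # co-occurrence within one document
    np.fill_diagonal(first_order, 0)
    second_order = first_order @ first_order    # paths of length 2 through a bridge term
    np.fill_diagonal(second_order, 0)

    print(first_order[0, 2])    # 0: t1 and t3 never co-occur directly
    print(second_order[0, 2])   # > 0: t1 -> t2 -> t3, a latent (higher-order) relation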

    Hyperbolic Centroid Calculations for Text Classification

    A new development in NLP is the construction of hyperbolic word embeddings. As opposed to their Euclidean counterparts, hyperbolic embeddings are represented not by vectors but by points in hyperbolic space. This makes the most common basic scheme for constructing document representations, namely averaging word vectors, meaningless in the hyperbolic setting. We reinterpret the vector mean as the centroid of the points represented by the vectors, and investigate various hyperbolic centroid schemes and their effectiveness in text classification.
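
    The paper surveys several centroid schemes; one standard choice, assumed here purely for illustration, is the Einstein midpoint, computed in the Klein model after mapping points from the Poincaré ball.

    # Sketch (an assumption, not necessarily the paper's scheme): Einstein midpoint
    # as a hyperbolic "centroid" of word embeddings living in the Poincaré ball.
    import numpy as np

    def poincare_to_klein(p):
        return 2.0 * p / (1.0 + np.sum(p * p, axis=-1, keepdims=True))

    def klein_to_poincare(k):
        return k / (1.0 + np.sqrt(1.0 - np.sum(k * k, axis=-1, keepdims=True)))

    def einstein_midpoint(points_poincare):
        """Weighted mean in the Klein model with Lorentz-factor weights."""
        k = poincare_to_klein(points_poincare)
        gamma = 1.0 / np.sqrt(1.0 - np.sum(k * k, axis=-1, keepdims=True))
        mid_klein = np.sum(gamma * k, axis=0) / np.sum(gamma)
        return klein_to_poincare(mid_klein)

    # Toy "document": three word embeddings inside the unit ball.
    words = np.array([[0.1, 0.2], [0.3, -0.1], [-0.2, 0.05]])
    print(einstein_midpoint(words))   # a single point representing the document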

    Development of semantic semi-supervised algorithms for textual data mining (Metinsel veri madenciliği için anlamsal yarı-eğitimli algoritmaların geliştirilmesi)

    Ganiz, Murat Can (Dogus Author) -- Kilimci, Zeynep Hilal (Dogus Author). Textual data mining is the extraction of useful knowledge from, and the automatic organization of, large amounts of textual data, and text classification algorithms play an important role in organizing large document collections automatically. In this field, classification algorithms are called supervised and clustering algorithms unsupervised; between them lie semi-supervised algorithms, which can improve classification accuracy by exploiting the abundant unlabeled data alongside the labeled data. Textual data mining algorithms traditionally use the bag-of-words model, which treats the words in a text independently of their positions and of one another; traditional algorithms also assume that texts are independent and identically distributed. As a result, this approach ignores the semantic relationships between words and between texts. In recent years there has been growing interest in work that exploits these semantic relationships, especially between words. Using semantic knowledge increases the performance of traditional machine learning algorithms, particularly when the available data are scarce, sparse, or noisy, which is typically the case in real-world settings, so algorithms that can exploit semantic knowledge have great potential for real-world problems. In the first phase of this project we developed semantic algorithms and methods for supervised textual data mining; these provide performance improvements in text classification and feature selection. In the second phase we built on these methods to develop semi-supervised text classification algorithms that use both labeled and unlabeled data. During the project, 5 master's theses were completed, 1 PhD student advanced to the dissertation defense stage, 2 articles were published in SCI-indexed journals, and 8 papers were presented and published at national and international conferences and symposia. Two further journal articles have been submitted and are under review, and 1 conference paper and 2 journal articles covering the findings of the last phase are in preparation. A university spin-off company related to the project has also been founded. TÜBİTAK
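
    As a rough, generic illustration of the semi-supervised idea described above (not the project's specific algorithms), a self-training loop pseudo-labels confidently predicted unlabeled documents and folds them into training; sklearn's SelfTrainingClassifier wraps this pattern, with -1 marking unlabeled examples.

    # Generic semi-supervised text classification sketch (not the project's method).
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.semi_supervised import SelfTrainingClassifier

    docs = ["cheap flights and hotels", "stock market rallies today",
            "book your holiday travel", "shares fall on weak earnings",
            "all inclusive beach resort", "central bank raises rates"]
    labels = np.array([0, 1, -1, -1, -1, -1])   # -1 = unlabeled document

    X = CountVectorizer().fit_transform(docs)
    model = SelfTrainingClassifier(MultinomialNB(), threshold=0.6)
    model.fit(X, labels)
    print(model.predict(X))    # pseudo-labels propagated to the unlabeled documents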

    Biomedical named entity recognition using transformers with biLSTM + CRF and graph convolutional neural networks

    © 2022 IEEE. One application of Natural Language Processing (NLP) is processing free text to extract information. Information extraction takes various forms, such as Named Entity Recognition (NER), which detects the named entities in free text. The biomedical named-entity extraction task involves extracting named entities such as drugs, diseases, and organs from texts in the medical domain. In our study, we improve models commonly used in this domain, such as the biLSTM+CRF model, by using transformer-based language models like BERT and its domain-specific variant BioBERT in the embedding layer. We conduct experiments on several benchmark biomedical datasets using a variety of combinations of models and embeddings, such as BioBERT+biLSTM+CRF, BERT+biLSTM+CRF, Fasttext+biLSTM+CRF, and Graph Convolutional Networks. Our results show clearly visible improvements of 4% to 13% on several datasets when the baseline biLSTM+CRF model is initialized with pretrained language models such as BERT, and especially with a domain-specific one like BioBERT.
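
    A minimal sketch of this kind of architecture is shown below, assuming the HuggingFace transformers package for the encoder and the pytorch-crf package for the CRF layer; the model name, hidden sizes, and wiring are illustrative assumptions, not the paper's exact configuration.

    # Illustrative BERT/BioBERT + biLSTM + CRF tagger (a sketch, not the paper's code).
    import torch
    import torch.nn as nn
    from transformers import AutoModel
    from torchcrf import CRF

    class BertBiLstmCrf(nn.Module):
        def __init__(self, num_tags, encoder_name="dmis-lab/biobert-base-cased-v1.1"):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(encoder_name)   # BERT or BioBERT
            hidden = self.encoder.config.hidden_size
            self.bilstm = nn.LSTM(hidden, 128, batch_first=True, bidirectional=True)
            self.classifier = nn.Linear(2 * 128, num_tags)
            self.crf = CRF(num_tags, batch_first=True)

        def forward(self, input_ids, attention_mask, tags=None):
            emb = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
            feats, _ = self.bilstm(emb)            # contextual features over subword tokens
            emissions = self.classifier(feats)     # per-token tag scores
            mask = attention_mask.bool()
            if tags is not None:                   # training: negative log-likelihood
                return -self.crf(emissions, tags, mask=mask, reduction="mean")
            return self.crf.decode(emissions, mask=mask)   # inference: best tag sequences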

    A Feature Based Simple Machine Learning Approach with Word Embeddings to Named Entity Recognition on Tweets

    Named Entity Recognition (NER) is a well-studied domain in Natural Language Processing. Traditional NER systems, such as the Stanford NER system, achieve high performance on formal and grammatically well-structured texts. However, when these systems are applied to informal and noisy texts, which mix language with emoticons or abbreviations, results degrade significantly. We attempt to fill this gap by developing a NER system that uses novel term features, including Word2vec-based features, with a machine learning classifier. We describe the features and the Word2Vec implementation used in our solution and report the results obtained by our system. The system is efficient and scalable in terms of classification time complexity and shows promising results, which could potentially be improved with larger training sets or with the use of semi-supervised classifiers.
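
    A toy sketch of a feature-based token classifier of this kind is given below; the feature set (a Word2vec vector plus a few orthographic cues) and the tiny training data are illustrative assumptions, not the authors' actual features or corpus.

    # Sketch: Word2vec + orthographic features feeding a simple token classifier.
    import numpy as np
    from gensim.models import Word2Vec
    from sklearn.linear_model import LogisticRegression

    sentences = [["obama", "visits", "berlin"], ["new", "iphone", "released", "in", "cupertino"]]
    tags      = [["PER", "O", "LOC"],           ["O", "O", "O", "O", "LOC"]]

    w2v = Word2Vec(sentences, vector_size=50, min_count=1, window=2, seed=1)

    def token_features(tok):
        vec = w2v.wv[tok] if tok in w2v.wv else np.zeros(50)
        ortho = [len(tok), tok[0].isupper(), any(c.isdigit() for c in tok)]
        return np.concatenate([vec, np.array(ortho, dtype=float)])

    X = np.array([token_features(t) for sent in sentences for t in sent])
    y = [tag for sent_tags in tags for tag in sent_tags]

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print(clf.predict(X[:3]))   # per-token predictions for the first sentence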

    Helmholtz principle based supervised and unsupervised feature selection methods for text mining

    One of the important problems in text classification is the high dimensionality of the feature space. Feature selection methods reduce the dimensionality of the feature space by selecting the most valuable features for classification. Besides reducing dimensionality, feature selection has the potential to improve text classifiers' performance in both accuracy and time, and it helps to build simpler and therefore more comprehensible models. In this study we propose new methods for feature selection from textual data, called Meaning Based Feature Selection (MBFS), based on the Helmholtz principle from the Gestalt theory of human perception, which has also been used in image processing. The proposed approaches are extensively evaluated by their effect on the classification performance of two well-known classifiers on several datasets, and compared with several feature selection algorithms commonly used in text mining. Our results demonstrate the value of the MBFS methods in terms of classification accuracy and execution time. (C) 2016 Elsevier Ltd. All rights reserved.
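
    The exact MBFS scoring is defined in the paper; the snippet below only sketches the surrounding machinery, ranking terms by a per-class score and keeping the top k. The score used here is a simple concentration placeholder, not the Helmholtz-based meaning measure itself.

    # Sketch of supervised, score-and-select feature selection (placeholder score).
    import numpy as np

    def select_features(X, y, k):
        """X: (docs x terms) count matrix, y: class labels. Returns indices of k terms."""
        classes = np.unique(y)
        total = X.sum(axis=0) + 1e-9
        scores = np.zeros(X.shape[1])
        for c in classes:
            in_class = X[y == c].sum(axis=0)
            # placeholder score: how concentrated a term is inside one class
            scores = np.maximum(scores, in_class / total)
        return np.argsort(scores)[::-1][:k]

    rng = np.random.default_rng(0)
    X = rng.poisson(0.5, size=(30, 200))
    y = rng.integers(0, 2, size=30)
    keep = select_features(X, y, k=20)
    X_reduced = X[:, keep]          # reduced-dimensionality representation
    print(X_reduced.shape)          # (30, 20)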

    Individual Stock Price Prediction by Using KAP and Twitter Sentiments with Machine Learning for BIST30

    © 2022 IEEE. In this study we used machine learning models to predict individual stock price and volume changes from the sentiments in public disclosures and tweets. The Public Disclosure Platform (KAP) is the mandated regulatory platform for disclosing news about companies listed on Borsa Istanbul, and investors in Borsa Istanbul use Twitter to express their sentiments about stocks. By combining people's sentiment on Twitter with companies' disclosures, our model predicts the volume and price changes of individual company stocks listed in BIST30. Financial data describing market conditions, consisting of the daily price changes of BIST30, DJI, USD, and gold per ounce, are also added to improve prediction accuracy. Our model achieves a maximum of 80% individual stock price prediction accuracy for companies with a strong social media presence and a high public disclosure count. We also achieve 74.7% mean volume prediction accuracy across all BIST30 companies.
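
    A toy sketch of the general feature-combination setup is shown below on synthetic data; the feature list mirrors the abstract (sentiment, disclosure count, BIST30, DJI, USD, gold), but the data, target construction, and model choice are illustrative assumptions, not the paper's pipeline.

    # Illustrative sketch: daily sentiment and market features -> next-day direction.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_days = 300
    features = np.column_stack([
        rng.normal(size=n_days),    # mean Twitter sentiment for the stock
        rng.poisson(2, n_days),     # number of KAP disclosures that day
        rng.normal(size=n_days),    # BIST30 index daily change
        rng.normal(size=n_days),    # DJI daily change
        rng.normal(size=n_days),    # USD daily change
        rng.normal(size=n_days),    # gold (per ounce) daily change
    ])
    # synthetic up/down target, loosely driven by sentiment and the index
    direction = (features[:, 0] + 0.5 * features[:, 2] + rng.normal(0, 0.5, n_days) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(features, direction, test_size=0.2, shuffle=False)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("accuracy:", model.score(X_te, y_te))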

    A new cluster-aware regularization of neural networks

    Inherent clusters formed by the observations used to train a classification model are a frequently encountered case. These clusters differ in certain characteristics, yet in classical modelling techniques no information about these differences is fed into the model. Differences in the purchasing styles of e-commerce customers are a good example: while some customers like to research and compare prices, functionality and reviews, others need only a short examination to decide on their purchase; similarly, the purchasing journey of a deal-seeking customer differs from that of a luxury buyer. In this paper, we propose a neural network model that incorporates cluster information in its hidden nodes. Within the forward propagation and backpropagation calculations of the network, we use a non-randomized Boolean matrix to assign hidden nodes to different observation clusters. This Boolean matrix shuts down a hidden node for observations that do not belong to the cluster the node is assigned to. We performed experiments for different settings and network architectures, and analyzed how alternative application patterns of the Boolean matrix influence the results, expressed in terms of iterations and epochs for an Adam (adaptive moment estimation) optimizer. Empirical results demonstrate that our proposed method works well in practice and compares favorably to fully randomized alternatives. © 2020, Springer Nature Switzerland AG
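
    A minimal sketch of the masking idea described above is given below, assuming a fixed round-robin assignment of hidden units to clusters and PyTorch autograd for the backward pass; it illustrates the mechanism, not the authors' exact model or training setup.

    # Sketch of cluster-aware hidden-unit masking (illustration, not the authors' code).
    import torch
    import torch.nn as nn

    n_features, n_hidden, n_clusters, n_classes = 10, 12, 3, 2

    # M[c, h] = 1 if hidden unit h is assigned to cluster c (fixed, non-randomized).
    M = torch.zeros(n_clusters, n_hidden)
    for h in range(n_hidden):
        M[h % n_clusters, h] = 1.0

    class ClusterAwareNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.hidden = nn.Linear(n_features, n_hidden)
            self.out = nn.Linear(n_hidden, n_classes)

        def forward(self, x, cluster_ids):
            h = torch.relu(self.hidden(x))
            h = h * M[cluster_ids]       # shut off units not assigned to x's cluster
            return self.out(h)

    net = ClusterAwareNet()
    x = torch.randn(5, n_features)
    cluster_ids = torch.tensor([0, 1, 2, 0, 1])
    logits = net(x, cluster_ids)         # gradients flow only through active units
    print(logits.shape)                  # torch.Size([5, 2])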