    Development of semantic semi-supervised algorithms for textual data mining (Metinsel veri madenciliği için anlamsal yarı-eğitimli algoritmaların geliştirilmesi)

    Ganiz, Murat Can (Dogus Author) -- Kilimci, Zeynep Hilal (Dogus Author)
    Textual data mining covers the extraction of useful knowledge from large amounts of textual data and the automatic organization of such data. Text classification algorithms play an important role in automatically organizing large collections of documents. In this field, classification algorithms are called supervised and clustering algorithms unsupervised; between the two sit semi-supervised algorithms, which can improve classification accuracy by exploiting the abundant unlabeled data alongside the labeled data. Traditionally, textual data mining algorithms use the bag-of-words model, which treats the words of a text as independent of each other and of their positions. A further assumption of traditional algorithms is that texts are independent and identically distributed. As a result, this approach ignores the semantic relationships between words and between texts. In recent years there has been growing interest in work that exploits semantic relationships, especially those between words. Using semantic knowledge increases the performance of traditional machine learning algorithms, particularly when the available data is small, sparse, or noisy, and the training data in real-world applications is usually limited and noisy. Algorithms that can exploit semantic knowledge therefore have great potential in real-world problems. In the first phase of this project we developed semantic algorithms for supervised textual data mining; these algorithms improve performance in text classification and feature selection. In the second phase, building on these methods, we developed semi-supervised text classification algorithms that use both labeled and unlabeled data. During the project, five master's theses were completed, one PhD thesis reached the defense stage, two articles were published in SCI-indexed journals, and eight papers were presented and published at national and international conferences and symposia. Two further journal articles have been submitted and are under review, and one conference paper and two journal articles covering the findings of the final phase are in preparation. In addition, a university spin-off company related to the project was founded. (TÜBİTAK)
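
    The project's semantic algorithms are not spelled out above, but the general semi-supervised idea they build on can be illustrated with a short self-training loop: train on the labeled documents, pseudo-label the unlabeled documents the classifier is confident about, and retrain. A minimal sketch in Python, assuming scikit-learn and SciPy are available; the threshold, the Naive Bayes classifier, and the helper names are illustrative choices, not the project's method:

        # Self-training: a simple way to exploit unlabeled text alongside
        # labeled text (the project's semantic variants are richer).
        import numpy as np
        from scipy.sparse import vstack
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.naive_bayes import MultinomialNB

        def self_train(labeled_texts, labels, unlabeled_texts,
                       threshold=0.9, max_rounds=5):
            vec = TfidfVectorizer()
            vec.fit(labeled_texts + unlabeled_texts)   # vocabulary from all text
            X, y = vec.transform(labeled_texts), np.asarray(labels)
            U = vec.transform(unlabeled_texts)
            for _ in range(max_rounds):
                clf = MultinomialNB().fit(X, y)
                if U.shape[0] == 0:
                    break
                proba = clf.predict_proba(U)
                pick = proba.max(axis=1) >= threshold  # confident predictions only
                if not pick.any():
                    break
                X = vstack([X, U[pick]])               # adopt pseudo-labeled docs
                y = np.concatenate([y, clf.classes_[proba[pick].argmax(axis=1)]])
                U = U[~pick]
            return clf, vec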

    A corpus-based semantic kernel for text classification by using meaning values of terms

    Text categorization plays a crucial role in both academic and commercial platforms due to the growing demand for automatic organization of documents. Kernel-based classification algorithms such as Support Vector Machines (SVM) have become highly popular in the task of text mining, mainly due to their relatively high classification accuracy across several application domains and their ability to handle the high-dimensional, sparse data that is a prohibitive characteristic of textual data representation. Recently, there has been increased interest in exploiting background knowledge, such as ontologies and corpus-based statistical knowledge, in text categorization. It has been shown that replacing standard kernel functions such as the linear kernel with customized kernel functions that take advantage of this background knowledge can increase the performance of SVM in the text classification domain. Based on this, we propose a novel semantic smoothing kernel for SVM. The suggested approach is based on a meaning measure, which calculates the meaningfulness of terms in the context of classes; the document vectors are then smoothed using these class-contextual meaning values. Since we make efficient use of the class information in the smoothing process, the result can be considered a supervised smoothing kernel. The meaning measure is based on the Helmholtz principle from Gestalt theory and has previously been applied to several text mining applications such as document summarization and feature extraction. However, to the best of our knowledge, ours is the first study to use a meaning measure in a supervised setting to build a semantic kernel for SVM. We evaluated the proposed approach with a large number of experiments on well-known textual datasets and present results under different experimental conditions. We compare our results with traditional kernels used in SVM, such as the linear kernel, as well as with several corpus-based semantic kernels. Our results show that the proposed approach outperforms the other kernels in classification performance.
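
    The kernel construction described above, smoothing document vectors by class-based meaning values of terms before taking inner products, can be sketched compactly. Below is a hedged toy version in Python with scikit-learn: the term-by-class matrix M stands in for the Helmholtz-based meaning values (whose actual computation the paper defines), S = M Mᵀ serves as the term-term smoother, and the toy data and names are illustrative only:

        # Supervised semantic smoothing kernel sketch: k(x, y) = (xS)(yS)^T
        # with S built from term-class meaning values M (terms x classes).
        import numpy as np
        from sklearn.svm import SVC

        def make_semantic_kernel(M):
            S = M @ M.T                        # term-term semantic relatedness
            def kernel(X, Y):
                return (X @ S) @ (Y @ S).T     # smooth both sides, then dot
            return kernel

        # toy usage: 4 documents, 5 terms, 2 classes
        X = np.array([[1, 1, 0, 0, 0],
                      [1, 0, 1, 0, 0],
                      [0, 0, 0, 1, 1],
                      [0, 0, 1, 1, 0]], dtype=float)
        y = np.array([0, 0, 1, 1])
        M = np.array([[0.9, 0.1], [0.8, 0.2], [0.5, 0.5],
                      [0.1, 0.9], [0.2, 0.8]])  # meaning of each term per class
        svm = SVC(kernel=make_semantic_kernel(M)).fit(X, y)
        print(svm.predict(X))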

    Unsupervised Text Feature Selection Using Memetic Dichotomous Differential Evolution

    Feature Selection (FS) methods have been studied extensively in the literature, and they are a crucial component of machine learning techniques. However, unsupervised text feature selection has not been well studied in document clustering problems. Feature selection can be modelled as an optimization problem because of the large number of possible solutions that might be valid. In this paper, a memetic method that combines Differential Evolution (DE) with Simulated Annealing (SA) for unsupervised FS is proposed. Because each feature is encoded by one of only two values, indicating its presence or absence, a binary version of differential evolution is used. A dichotomous DE serves as this binary version, and the proposed method is named Dichotomous Differential Evolution Simulated Annealing (DDESA). The method uses dichotomous mutation instead of the standard DE mutation, which makes it more effective for binary encodings. The Mean Absolute Distance (MAD) filter is used as the internal evaluation measure for feature subsets. The proposed method was compared with other state-of-the-art methods, including the standard DE combined with SA (named DESA in this paper), on five benchmark datasets. The F-micro and F-macro scores, the Average Distance of Document to Cluster (ADDC), and the Reduction Rate (RR) were used as evaluation measures. Test results showed that the proposed DDESA outperformed the other tested methods in unsupervised text feature selection.
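
    The operator named above, dichotomous mutation over binary feature masks, can be sketched as follows. This is a hedged reading rather than the paper's exact operator: where two randomly chosen donors agree on a bit the mutant keeps it, and where they disagree the bit is redrawn at random. The fitness callback is a placeholder for the MAD filter, and the simulated-annealing acceptance step is omitted for brevity:

        import numpy as np

        rng = np.random.default_rng(0)

        def dichotomous_mutation(pop, i, F=0.5):
            # Pick two donor individuals other than i.
            r1, r2 = rng.choice([k for k in range(len(pop)) if k != i],
                                size=2, replace=False)
            a, b = pop[r1], pop[r2]
            # Keep bits the donors agree on; redraw disagreements with rate F.
            return np.where(a == b, a, rng.random(a.size) < F).astype(int)

        def de_generation(pop, fitness, CR=0.8):
            new = pop.copy()
            for i in range(len(pop)):
                v = dichotomous_mutation(pop, i)
                cross = rng.random(v.size) < CR        # binomial crossover
                trial = np.where(cross, v, pop[i])
                if fitness(trial) >= fitness(pop[i]):  # greedy selection
                    new[i] = trial
            return new

        # toy run: 10 candidate masks over 30 features; fitness is a stand-in
        pop = rng.integers(0, 2, size=(10, 30))
        fitness = lambda mask: -abs(mask.sum() - 10)   # placeholder objective
        for _ in range(20):
            pop = de_generation(pop, fitness)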

    Deducing corticotropin-releasing hormone receptor type 1 signaling networks from gene expression data by usage of genetic algorithms and graphical Gaussian models

    Background: Dysregulation of the hypothalamic-pituitary-adrenal (HPA) axis is a hallmark of complex and multifactorial psychiatric diseases such as anxiety and mood disorders. About 50-60% of patients with major depression show HPA axis dysfunction, i.e. hyperactivity and impaired negative feedback regulation. The neuropeptide corticotropin-releasing hormone (CRH) and its receptor type 1 (CRHR1) are key regulators of this neuroendocrine stress axis. Therefore, we analyzed CRH/CRHR1-dependent gene expression data obtained from the pituitary corticotrope cell line AtT-20, a well-established in vitro model for CRHR1-mediated signal transduction. To extract significantly regulated genes from a genome-wide microarray data set and to deduce underlying CRHR1-dependent signaling networks, we combined supervised and unsupervised algorithms.
    Results: We present an efficient variable selection strategy that consecutively applies univariate as well as multivariate methods followed by graphical models. First, feature preselection was used to exclude genes not differentially regulated over time from the dataset. For multivariate variable selection a maximum likelihood (MLHD) discriminant function within GALGO, an R package based on a genetic algorithm (GA), was chosen. The topmost genes representing major nodes in the expression network were ranked to find highly separating candidate genes. By using groups of five genes (chromosome size) in the discriminant function and repeating the genetic algorithm separately four times, we found eleven genes occurring in at least three of the four repetitions' top-ranked result lists. In addition, we compared the results of GA/MLHD with the alternative optimization algorithms greedy selection and simulated annealing, as well as with the state-of-the-art method random forest. In every case we obtained a clear overlap of the selected genes, independently confirming the results of MLHD in combination with a genetic algorithm. With two unsupervised algorithms, principal component analysis and graphical Gaussian models, putative interactions of the candidate genes were determined and reconstructed by literature mining. Differential regulation of six candidate genes was validated by qRT-PCR.
    Conclusions: The combination of supervised and unsupervised algorithms in this study allowed us to extract a small subset of meaningful candidate genes from the genome-wide expression data set. Thereby, variable selection using different optimization algorithms based on linear classifiers as well as the nonlinear random forest method resulted in congruent candidate genes. The calculated interaction network connecting these new target genes was bioinformatically mapped to known CRHR1-dependent signaling pathways. Additionally, the differential expression of the identified target genes was confirmed experimentally.
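
    Of the pipeline above, the graphical Gaussian model step is the most self-contained to illustrate: edges between candidate genes are read off the partial correlations, which follow from the (pseudo-)inverse of the sample covariance matrix. A minimal Python sketch under that standard formulation; the toy data and the 0.3 threshold are illustrative, and the original study's estimation details are omitted:

        import numpy as np

        def partial_correlations(X):
            # X: samples x genes. Use the pseudo-inverse, since expression
            # datasets often have fewer samples than genes.
            P = np.linalg.pinv(np.cov(X, rowvar=False))   # precision matrix
            d = np.sqrt(np.diag(P))
            R = -P / np.outer(d, d)    # rho_ij = -p_ij / sqrt(p_ii * p_jj)
            np.fill_diagonal(R, 1.0)
            return R

        # toy example: 20 arrays, 6 genes; keep edges above a chosen threshold
        X = np.random.default_rng(1).normal(size=(20, 6))
        R = partial_correlations(X)
        edges = [(i, j) for i in range(6) for j in range(i + 1, 6)
                 if abs(R[i, j]) > 0.3]
        print(edges)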

    Helmholtz Principle-Based Keyword Extraction

    In today's world of evolving technology, everybody wishes to accomplish tasks in the least time. As the information available online proliferates every day, it becomes very difficult to summarize more than 100 documents in acceptable time. Thus, "text summarization" is a challenging problem in the area of Natural Language Processing (NLP), especially in the context of global languages. In this thesis, we survey the taxonomy of text summarization from different aspects. The thesis briefly explains the different approaches to summarization and the evaluation parameters, and also presents thorough details and facts about more than fifty automatic text summarization systems, to ease the job of researchers and to serve as a short encyclopedia of the investigated systems. Keyword extraction methods play a vital role in text mining and document processing: keywords represent the essential content of a document, and text mining applications take advantage of them when processing documents. A quality keyword is a word that succinctly represents the content of the text. It is very difficult to process a large number of documents and obtain high-quality keywords in acceptable time. This thesis compares the most popular keyword extraction method, tf-idf, with a proposed method based on the Helmholtz principle, which builds on ideas from image processing and is derived from the Gestalt theory of human perception. We also investigate the run time of keyword extraction for both methods. Experimental results show that the keyword extraction method based on the Helmholtz principle outperforms tf-idf.
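
    For context, the meaningfulness score behind the Helmholtz-principle method has a compact closed form in the formulation of Balinsky et al.: a word occurring m times in one part of a collection split into N equal parts, and K times overall, has NFA = C(K, m) · N^-(m-1) and is considered meaningful when NFA < 1. A small Python sketch under that formulation (the thesis's exact variant may differ):

        from math import comb, log

        def meaning(m, K, N):
            # m: occurrences of the word in the part (document/paragraph),
            # K: occurrences in the whole collection,
            # N: number of equal-sized parts the collection is split into.
            nfa = comb(K, m) / N ** (m - 1)   # expected number of false alarms
            return -log(nfa) / m              # > 0 means meaningful (NFA < 1)

        # toy check: 5 of a word's 6 corpus occurrences fall in one of 50 parts
        print(meaning(5, 6, 50))              # large positive -> strong keyword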