
    Knowledge-rich Image Gist Understanding Beyond Literal Meaning

    We investigate the problem of understanding the message (gist) conveyed by images and their captions as found, for instance, on websites or in news articles. To this end, we propose a methodology to capture the meaning of image-caption pairs on the basis of large amounts of machine-readable knowledge that has previously been shown to be highly effective for text understanding. Our method identifies the connotation of objects beyond their denotation: whereas most approaches to image understanding focus on the denotation of objects, i.e., their literal meaning, our work addresses the identification of connotations, i.e., iconic meanings of objects, to understand the message of images. We view image understanding as the task of representing an image-caption pair on the basis of a wide-coverage vocabulary of concepts such as the one provided by Wikipedia, and cast gist detection as a concept-ranking problem with image-caption pairs as queries. To enable a thorough investigation of the problem of gist understanding, we produce a gold standard of over 300 image-caption pairs and over 8,000 gist annotations covering a wide variety of topics at different levels of abstraction. We use this dataset to experimentally benchmark the contribution of signals from heterogeneous sources, namely image and text. The best result, with a Mean Average Precision (MAP) of 0.69, indicates that by combining both dimensions we are able to better understand the meaning of our image-caption pairs than when using language or vision information alone. Finally, we test the robustness of our gist detection approach on automatically generated input, i.e., automatically generated image tags or captions, and demonstrate the feasibility of an end-to-end automated process.
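    As a rough illustration of the concept-ranking formulation described above, the sketch below scores each concept in a small vocabulary against both the caption text and the image tags, then ranks concepts by the combined score. The toy bag-of-words scoring, the example concept glosses, and the equal text/image weighting are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch of gist detection cast as concept ranking. The bag-of-words
# scoring, the concept glosses, and the 0.5/0.5 weighting are illustrative
# assumptions, not the paper's model.
from collections import Counter
from math import sqrt

def bow(text):
    """Lowercased bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a if t in b)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def rank_gists(caption, image_tags, concepts, w_text=0.5, w_image=0.5):
    """Rank candidate concepts for an image-caption pair by combining a
    text signal (caption similarity) with a vision signal (tag similarity)."""
    cap_vec, tag_vec = bow(caption), bow(" ".join(image_tags))
    scored = [
        (name, w_text * cosine(cap_vec, bow(gloss)) + w_image * cosine(tag_vec, bow(gloss)))
        for name, gloss in concepts.items()
    ]
    return sorted(scored, key=lambda pair: -pair[1])

# Toy Wikipedia-style concept inventory (hypothetical glosses).
concepts = {
    "Peace": "dove olive branch symbol of peace",
    "Bird": "dove pigeon feathered animal bird",
}
print(rank_gists("a white dove released over the crowd", ["dove", "sky"], concepts))
```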

    Locally Non-linear Embeddings for Extreme Multi-label Learning

    The objective in extreme multi-label learning is to train a classifier that can automatically tag a novel data point with the most relevant subset of labels from an extremely large label set. Embedding-based approaches make training and prediction tractable by assuming that the training label matrix is low-rank, so that the effective number of labels can be reduced by projecting the high-dimensional label vectors onto a low-dimensional linear subspace. Still, leading embedding approaches have been unable to deliver high prediction accuracies or scale to large problems, as the low-rank assumption is violated in most real-world applications. This paper develops the X-One classifier to address both limitations. The main technical contribution in X-One is a formulation for learning a small ensemble of local distance-preserving embeddings which can accurately predict infrequently occurring (tail) labels. This allows X-One to break free of the traditional low-rank assumption and boost classification accuracy by learning embeddings which preserve pairwise distances between only the nearest label vectors. We conducted extensive experiments on several real-world and benchmark datasets and compared our method against state-of-the-art methods for extreme multi-label classification. The experiments reveal that X-One makes significantly more accurate predictions than state-of-the-art methods, both embedding-based (by as much as 35%) and tree-based (by as much as 6%). X-One can also scale efficiently to datasets with a million labels, which are beyond the reach of leading embedding methods.
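    The following sketch illustrates the local-embedding idea in simplified form: cluster the training points, then predict a new point's labels from the label vectors of its nearest neighbours within the cluster. The real X-One formulation learns distance-preserving projections per cluster; the plain kNN step and the random toy data here are simplifying assumptions.

```python
# Simplified sketch of local distance-preserving prediction: cluster the
# training set, then score a test point's labels by averaging the label
# vectors of its nearest neighbours inside its cluster. X-One's learned
# per-cluster embeddings are replaced here by plain kNN as a simplification.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def fit_local_models(X, n_clusters=4, n_neighbors=5):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    models = {}
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        knn = NearestNeighbors(n_neighbors=min(n_neighbors, len(idx))).fit(X[idx])
        models[c] = (idx, knn)
    return km, models

def predict_labels(x, Y, km, models):
    c = km.predict(x.reshape(1, -1))[0]
    idx, knn = models[c]
    _, nbrs = knn.kneighbors(x.reshape(1, -1))
    return Y[idx[nbrs[0]]].mean(axis=0)   # averaged neighbour label vectors

rng = np.random.default_rng(0)
X = rng.random((100, 20))                          # toy features
Y = (rng.random((100, 500)) > 0.99).astype(float)  # sparse toy label matrix
km, models = fit_local_models(X)
scores = predict_labels(X[0], Y, km, models)
print(np.argsort(-scores)[:5])                     # top-5 predicted labels
```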

    Semi-Supervised Approach to Monitoring Clinical Depressive Symptoms in Social Media

    With the rise of social media, millions of people routinely express their moods, feelings, and daily struggles with mental health issues on platforms like Twitter. Unlike traditional observational cohort studies conducted through questionnaires and self-reported surveys, we explore the reliable detection of clinical depression from tweets obtained unobtrusively. Based on the analysis of tweets crawled from users with self-reported depressive symptoms in their Twitter profiles, we demonstrate the potential for detecting clinical depression symptoms that emulate the PHQ-9 questionnaire clinicians use today. Our study uses a semi-supervised statistical model to evaluate how the duration of these symptoms and their expression on Twitter (in terms of word usage patterns and topical preferences) align with the medical findings reported via the PHQ-9. Our proactive and automatic screening tool is able to identify clinical depressive symptoms with an accuracy of 68% and a precision of 72%.
    Comment: 8 pages; Advances in Social Networks Analysis and Mining (ASONAM), 2017 IEEE/ACM International Conference
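    Since the paper's semi-supervised statistical model is not spelled out in the abstract, the sketch below shows one generic way such a screening pipeline could be assembled: TF-IDF features with scikit-learn's self-training wrapper, where -1 marks unlabeled tweets. The tiny corpus and the binary symptom label are hypothetical stand-ins, not the paper's data or method.

```python
# Generic semi-supervised screening sketch (not the paper's model): TF-IDF
# features + self-training over labeled and unlabeled tweets. The corpus
# and the binary "symptom" label are hypothetical stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.semi_supervised import SelfTrainingClassifier

tweets = [
    "can't sleep again, everything feels pointless",  # labeled: symptom
    "had a great run this morning, feeling good",     # labeled: no symptom
    "no energy to get out of bed this week",          # unlabeled
    "lunch with friends, lovely afternoon",           # unlabeled
]
labels = [1, 0, -1, -1]   # -1 marks unlabeled examples

model = make_pipeline(
    TfidfVectorizer(),
    SelfTrainingClassifier(LogisticRegression(), threshold=0.6),
)
model.fit(tweets, labels)
print(model.predict(["so tired and hopeless lately"]))
```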

    Explicit diversification of event aspects for temporal summarization

    During major events, such as emergencies and disasters, a large volume of information is reported on newswire and social media platforms. Temporal summarization (TS) approaches are used to automatically produce concise overviews of such events by extracting text snippets from related articles over time. Current TS approaches rely on a combination of event relevance and textual novelty for snippet selection. However, for events that span multiple days, textual novelty is often a poor criterion for selecting snippets, since many snippets are textually unique but semantically redundant or non-informative. In this article, we propose a framework for the diversification of snippets using explicit event aspects, building on recent work in search result diversification. In particular, we first propose two techniques to identify explicit aspects that a user might want to see covered in a summary for different types of events. We then extend a state-of-the-art explicit diversification framework to maximize the coverage of these aspects when selecting summary snippets for unseen events. Through experimentation over the TREC TS 2013, 2014, and 2015 datasets, we show that explicit diversification for temporal summarization significantly outperforms classical novelty-based diversification, as the use of explicit event aspects reduces the number of redundant and off-topic snippets returned, while also increasing summary timeliness.
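    A minimal greedy sketch of aspect-based snippet diversification in the spirit described above: each step picks the snippet with the best trade-off between relevance and coverage of not-yet-covered aspects. The snippet scores, aspect sets, and the 0.5 trade-off weight are illustrative assumptions, not the paper's framework.

```python
# Greedy aspect-based diversification sketch. Each snippet carries a
# relevance score and a set of event aspects; the lam weight trading off
# relevance against aspect novelty is an illustrative assumption.
def diversify(snippets, k=2, lam=0.5):
    """snippets: list of (text, relevance, aspect_set) tuples."""
    pool, selected, covered = list(snippets), [], set()
    while pool and len(selected) < k:
        def gain(s):
            _, rel, aspects = s
            novelty = len(aspects - covered) / max(len(aspects), 1)
            return (1 - lam) * rel + lam * novelty
        best = max(pool, key=gain)
        pool.remove(best)
        selected.append(best[0])
        covered |= best[2]
    return selected

snippets = [
    ("Hurricane makes landfall near the coast", 0.90, {"impact"}),
    ("Storm hits coastal town overnight",       0.85, {"impact"}),  # redundant aspect
    ("Shelters open for 10,000 evacuees",       0.70, {"relief"}),  # new aspect
]
print(diversify(snippets))  # prefers the "relief" snippet over the redundant one
```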

    Using Learning to Rank Approach to Promoting Diversity for Biomedical Information Retrieval with Wikipedia

    Most traditional information retrieval (IR) models adopt the independent relevance assumption: the relevance of a document is assumed to be independent of other documents. The pitfall of this assumption is high redundancy and low diversity in the retrieval results. This arises in many scenarios, especially in biomedical IR, where the information need behind a single query may cover several different aspects. Promoting diversity in IR takes the relationships between documents into account. Unlike previous studies, we tackle this problem from a learning-to-rank perspective. The main challenges are how to find salient features for biomedical data and how to integrate dynamic features into the ranking model. To address these challenges, Wikipedia is used to detect the topics of documents and generate diversity-biased features. A combined model is proposed and studied to learn a diversified ranking result. Experimental results show that the proposed method outperforms the baseline models.
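    One example of the kind of dynamic, diversity-biased feature such a ranker could consume is sketched below: the fraction of a document's (Wikipedia-detected) topics not yet covered higher in the ranking. The hard-coded topic sets are stand-ins for actual Wikipedia topic detection, and the feature itself is an assumed illustration rather than the paper's feature set.

```python
# Sketch of one dynamic, diversity-biased feature: the share of a
# document's detected topics not yet covered above it in the ranking.
# The topic sets below are hard-coded stand-ins for Wikipedia-based
# topic detection.
def topic_novelty(doc_topics, covered_topics):
    """Fraction of this document's topics that are still uncovered."""
    if not doc_topics:
        return 0.0
    return len(doc_topics - covered_topics) / len(doc_topics)

ranking = [
    ("d1", {"Gene expression", "p53"}),
    ("d2", {"p53"}),            # fully redundant given d1
    ("d3", {"Apoptosis"}),      # introduces a new aspect
]
covered = set()
for doc_id, topics in ranking:
    print(doc_id, round(topic_novelty(topics, covered), 2))
    covered |= topics           # the feature updates as the ranking grows
```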

    Development of semantic semi-supervised algorithms for textual data mining

    Textual data mining is the process of extracting useful knowledge from large amounts of textual data, and text classification algorithms play an important role in automatically organizing large document collections. In this field, classification algorithms are called supervised and clustering algorithms unsupervised; between the two lie semi-supervised algorithms, which can improve classification accuracy by exploiting abundant unlabeled data alongside the labeled data. Traditionally, textual data mining algorithms use the bag-of-words model, which treats the words in a text as independent of each other and of their positions. Furthermore, traditional algorithms assume that texts are independent and identically distributed. As a result, this approach ignores the semantic relationships between words and between texts. In recent years there has been growing interest in work that exploits these semantic relationships, especially between words. Using semantic knowledge increases the performance of traditional machine learning algorithms, particularly when the available data are scarce, sparse, or noisy, as is typical in real-world settings; algorithms that can exploit semantic knowledge therefore have great potential in practical applications. In the first phase of this project, we developed semantic algorithms and methods for supervised textual data mining; these yield performance improvements in text classification and feature selection. In the second phase, building on these methods, we developed semi-supervised text classification algorithms that make use of both labeled and unlabeled data. During the project, five master's theses were completed and one PhD thesis reached the defense stage; two articles were published in SCI-indexed journals, and eight papers were presented at national and international conferences and symposia. Two further journal articles have been submitted and are under review, and one conference paper and two journal articles covering the findings of the final phase are in preparation. A university spin-off company was also founded based on the project. Funded by TÜBİTAK.
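    A small sketch of the general idea behind such semantic methods: instead of treating bag-of-words terms as independent, smooth document vectors with a term-term similarity matrix derived from corpus co-occurrence, yielding a semantic document-to-document similarity. The toy counts and the particular normalisation are illustrative; the project's actual algorithms are not specified in the abstract.

```python
# Toy semantic smoothing: derive a term-term similarity matrix S from
# co-occurrence counts and use it to let each document "activate" related
# terms, then compare documents through the smoothed vectors. The counts
# and normalisation are illustrative, not the project's actual methods.
import numpy as np

docs = np.array([
    [2, 1, 0, 0],   # term counts for document 1
    [0, 0, 1, 2],   # term counts for document 2
], dtype=float)

co = docs.T @ docs                                  # term co-occurrence
S = co / np.linalg.norm(co, axis=1, keepdims=True)  # row-normalised similarity

smoothed = docs @ S              # documents now weight related terms too
kernel = smoothed @ smoothed.T   # semantic document-document similarity
print(kernel.round(2))
```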

    A topic modeling based approach to novel document automatic summarization

    Most existing automatic text summarization algorithms target multiple documents of relatively short length and are therefore difficult to apply directly to novels, which are long and loosely structured. In this paper, aiming at novel documents, we propose a topic modeling based approach to extractive automatic summarization, so as to achieve a good balance among compression ratio, summarization quality, and machine readability. First, based on topic modeling, we extract the candidate sentences associated with topic words from a preprocessed novel document. Second, with the goals of compression ratio and topic diversity, we design an importance evaluation function to select the most important sentences from the candidates and thus generate an initial summary. Finally, we smooth the initial summary to overcome the semantic confusion caused by ambiguous or synonymous words, so as to improve its readability. We evaluate our approach experimentally on a real novel dataset. The results show that, compared to other candidate algorithms, each automatic summary generated by our approach has not only a higher compression ratio but also better summarization quality.
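    The sketch below walks through a toy version of the pipeline described above: fit a topic model over a novel's sentences, then greedily select topic-focused sentences while enforcing topic diversity. The four-sentence corpus, the topic count, and the selection rule are illustrative assumptions; the paper's importance evaluation function and smoothing step are not reproduced.

```python
# Toy version of the summarization pipeline: topic-model the sentences,
# then pick topic-focused sentences while covering distinct topics.
# Corpus, topic count, and selection rule are illustrative assumptions.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

sentences = [
    "the detective examined the letter under the lamp",
    "a letter arrived for the detective that morning",
    "the ship sailed into the harbour at dawn",
    "sailors shouted as the ship reached the harbour",
]
X = CountVectorizer(stop_words="english").fit_transform(sentences)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topic = lda.transform(X)                   # sentence-topic distribution

summary, covered = [], set()
for i in np.argsort(-doc_topic.max(axis=1)):   # most topic-focused first
    topic = int(doc_topic[i].argmax())
    if topic not in covered:                   # enforce topic diversity
        summary.append(sentences[i])
        covered.add(topic)
print(summary)
```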