
    10471 Abstracts Collection -- Scalable Visual Analytics

    From 21.11. to 26.11.2010, the Dagstuhl Seminar 10471 "Scalable Visual Analytics" was held in Schloss Dagstuhl -- Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    Application of Common Sense Computing for the Development of a Novel Knowledge-Based Opinion Mining Engine

    The ways people express their opinions and sentiments have radically changed in the past few years thanks to the advent of social networks, web communities, blogs, wikis and other online collaborative media. The distillation of knowledge from this huge amount of unstructured information can be a key factor for marketers who want to create an image or identity in the minds of their customers for their product, brand, or organisation. These online social data, however, remain hardly accessible to computers, as they are specifically meant for human consumption. The automatic analysis of online opinions, in fact, involves a deep understanding of natural language text by machines, from which we are still very far. Hitherto, online information retrieval has been mainly based on algorithms relying on the textual representation of web pages. Such algorithms are very good at retrieving texts, splitting them into parts, checking the spelling and counting their words. But when it comes to interpreting sentences and extracting meaningful information, their capabilities are known to be very limited. Existing approaches to opinion mining and sentiment analysis, in particular, can be grouped into three main categories: keyword spotting, in which text is classified into categories based on the presence of fairly unambiguous affect words; lexical affinity, which assigns arbitrary words a probabilistic affinity for a particular emotion; and statistical methods, which calculate the valence of affective keywords and word co-occurrence frequencies on the basis of a large training corpus. Early works aimed to classify entire documents as containing overall positive or negative polarity, or to predict the rating scores of reviews. Such systems were mainly based on supervised approaches relying on manually labelled samples, such as movie or product reviews where the opinionist’s overall positive or negative attitude was explicitly indicated. However, opinions and sentiments do not occur only at document level, nor are they limited to a single valence or target. Contrary or complementary attitudes toward the same topic or multiple topics can be present across the span of a document. In more recent works, text analysis granularity has been taken down to segment and sentence level, e.g., by using the presence of opinion-bearing lexical items (single words or n-grams) to detect subjective sentences, or by exploiting association rule mining for a feature-based analysis of product reviews. These approaches, however, are still far from being able to infer the cognitive and affective information associated with natural language, as they mainly rely on knowledge bases that are still too limited to efficiently process text at sentence level. In this thesis, common sense computing techniques are further developed and applied to bridge the semantic gap between word-level natural language data and the concept-level opinions conveyed by them. In particular, the ensemble application of graph mining and multi-dimensionality reduction techniques on two common sense knowledge bases was exploited to develop a novel intelligent engine for open-domain opinion mining and sentiment analysis. The proposed approach, termed sentic computing, performs a clause-level semantic analysis of text, which allows the inference of both the conceptual and emotional information associated with natural language opinions and, hence, a more efficient passage from (unstructured) textual information to (structured) machine-processable data.
The engine was tested on three different resources, namely a Twitter hashtag repository, a LiveJournal database and a PatientOpinion dataset, and its performance was compared both with results obtained using standard sentiment analysis techniques and with results obtained using different state-of-the-art knowledge bases such as Princeton’s WordNet, MIT’s ConceptNet and Microsoft’s Probase. Unlike most currently available opinion mining services, the developed engine does not base its analysis on a limited set of affect words and their co-occurrence frequencies, but rather on common sense concepts and the cognitive and affective valence conveyed by them. This allows the engine to be domain-independent and, hence, to be embedded in any opinion mining system for the development of intelligent applications in multiple fields such as the Social Web, HCI and e-health. Looking ahead, the novel combined use of different knowledge bases and of common sense reasoning techniques for opinion mining proposed in this work will eventually pave the way for the development of more bio-inspired approaches to the design of natural language processing systems capable of handling knowledge, retrieving it when necessary, making analogies and learning from experience.
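
    A minimal sketch of the keyword-spotting baseline that the abstract contrasts with the proposed concept-level engine is given below; the affect lexicon and scoring rule are hypothetical illustrations and do not reproduce the sentic computing engine itself.

# Keyword-spotting sentiment baseline (illustrative sketch only; the affect
# lexicon below is hypothetical and not part of the sentic computing engine).
from typing import Dict

AFFECT_LEXICON: Dict[str, float] = {
    "good": 1.0, "great": 2.0, "love": 2.0,
    "bad": -1.0, "awful": -2.0, "hate": -2.0,
}

def keyword_spotting_polarity(text: str) -> float:
    """Sum the valence of unambiguous affect words found in the text."""
    tokens = text.lower().split()
    return sum(AFFECT_LEXICON.get(tok, 0.0) for tok in tokens)

print(keyword_spotting_polarity("i love this phone but the battery is bad"))  # prints 1.0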

    Access to recorded interviews: A research agenda

    Recorded interviews form a rich basis for scholarly inquiry. Examples include oral histories, community memory projects, and interviews conducted for broadcast media. Emerging technologies offer the potential to radically transform the way in which recorded interviews are made accessible, but this vision will demand substantial investments from a broad range of research communities. This article reviews the present state of practice for making recorded interviews available and the state-of-the-art for key component technologies. A large number of important research issues are identified, and from that set of issues, a coherent research agenda is proposed.

    Pictures in words : indexing, folksonomy and representation of subject content in historic photographs

    Subject access to images is a major issue for image collections. Research is needed to understand how indexing and tagging contribute to making the subjects of historic photographs accessible. This thesis firstly investigates the evidence of cognitive dissonance between indexers and users in the way they attribute subjects to historic photographs, and, secondly, how indexers and users might work together to enhance subject description. It analyses how current indexing and social tagging represent the subject content of historic photographs. It also suggests a practical way indexers can work with taggers to deal with the classic problem of resource constraints and to enhance metadata to make photo collections more accessible. In an original application of the Shatford/Panofsky classification matrix within the application domain of historic images, patterns of subject attribution are explored between taggers and professional indexers. The study was conducted in two stages. The first stage (Studies A to D) investigated how professional indexers and taggers represent the subject content of historic photographs and revealed differences based on Shatford/Panofsky. The indexers (Study A) demonstrated a propensity for specific and generic subjects and almost complete avoidance of abstracts. In contrast, a pilot study with users (Study B) and with baseline taggers (Studies C and D) showed a propensity for generics and an equal inclination to specifics and abstracts. The evidence supports the conclusion that indexers and users approach the subject content of historic photographs differently, demonstrating cognitive dissonance, a conflict between how they appear to think about and interpret images. The second stage (Study E) demonstrated that an online training intervention affected tagging behaviour. The intervention resulted in increased tagging and fuller representation of all subject facets according to the Shatford/Panofsky classification matrix. The evidence showed that trained taggers tagged more generic and abstract facets than untrained taggers. Importantly, this suggests that training supports the annotation of the higher levels of subject content and so potentially provides enhanced intellectual access. The research demonstrated a practical way institutions can work with taggers to extend the representation of subject content in historic photographs. Improved subject description is critical for intellectual access and retrieval in the cultural heritage space. Through systematic application of the training method a richer corpus of descriptors might be created that enhances machine-based information retrieval via automatic extraction.
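
    The Shatford/Panofsky matrix referred to above crosses four facets (Who, What, Where, When) with three levels (Specific, Generic, Abstract). A minimal sketch of how tag codings against that matrix could be tallied for an indexer-versus-tagger comparison follows; the example codings are hypothetical and only illustrate the kind of analysis described.

# Tally tag codings against the Shatford/Panofsky matrix (Who/What/Where/When
# crossed with Specific/Generic/Abstract). Example codings are hypothetical.
from collections import Counter
from typing import List, Tuple

FACETS = ("who", "what", "where", "when")
LEVELS = ("specific", "generic", "abstract")

def tally(codings: List[Tuple[str, str]]) -> Counter:
    """Count (facet, level) cells, validating each coding against the matrix."""
    counts = Counter()
    for facet, level in codings:
        if facet not in FACETS or level not in LEVELS:
            raise ValueError(f"unknown cell: ({facet}, {level})")
        counts[(facet, level)] += 1
    return counts

# Hypothetical codings for tags assigned to one historic photograph.
indexer = tally([("who", "specific"), ("where", "specific"), ("what", "generic")])
tagger = tally([("what", "generic"), ("what", "abstract"), ("when", "generic")])
print(indexer)
print(tagger)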

    Emotion Embeddings -- Learning Stable and Homogeneous Abstractions from Heterogeneous Affective Datasets

    Human emotion is expressed in many communication modalities and media formats, and so its computational study is equally diversified into natural language processing, audio signal analysis, computer vision, etc. Similarly, the large variety of representation formats used in previous research to describe emotions (polarity scales, basic emotion categories, dimensional approaches, appraisal theory, etc.) has led to an ever-proliferating diversity of datasets, predictive models, and software tools for emotion analysis. Because of these two distinct types of heterogeneity, at the expressional and representational level, there is a dire need to unify previous work on increasingly diverging data and label types. This article presents such a unifying computational model. We propose a training procedure that learns a shared latent representation for emotions, so-called emotion embeddings, independent of different natural languages, communication modalities, media or representation label formats, and even disparate model architectures. Experiments on a wide range of heterogeneous affective datasets indicate that this approach yields the desired interoperability for the sake of reusability, interpretability and flexibility, without penalizing prediction quality. Code and data are archived under https://doi.org/10.5281/zenodo.7405327.
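
    A minimal sketch of the general idea of a shared emotion-embedding space, with modality-specific input encoders and label-format-specific output heads, is shown below. The architecture, dimensions and names are assumptions made for illustration only; the authors' actual code and data are archived at the Zenodo link above.

# Sketch of a shared emotion-embedding space: dataset/modality-specific
# encoders map inputs into one latent space, and label-format-specific heads
# decode that space into polarity, basic-emotion or dimensional labels.
# All sizes and names here are hypothetical.
import torch
import torch.nn as nn

EMB_DIM = 64  # dimensionality of the shared emotion embedding (assumed)

class SharedEmotionSpace(nn.Module):
    def __init__(self, encoders, heads):
        super().__init__()
        self.encoders = nn.ModuleDict(encoders)  # one per modality/dataset
        self.heads = nn.ModuleDict(heads)        # one per label format

    def forward(self, source, label_format, x):
        z = self.encoders[source](x)             # map input into shared space
        return z, self.heads[label_format](z)    # decode into the label format

model = SharedEmotionSpace(
    encoders={
        "text": nn.Sequential(nn.Linear(300, EMB_DIM), nn.ReLU()),
        "audio": nn.Sequential(nn.Linear(128, EMB_DIM), nn.ReLU()),
    },
    heads={
        "polarity": nn.Linear(EMB_DIM, 1),  # scalar polarity score
        "basic6": nn.Linear(EMB_DIM, 6),    # six basic emotion categories
        "vad": nn.Linear(EMB_DIM, 3),       # valence-arousal-dominance
    },
)

z, pred = model("text", "vad", torch.randn(4, 300))
print(z.shape, pred.shape)  # torch.Size([4, 64]) torch.Size([4, 3])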

    Web Scale Image Retrieval based on Image Text Query Pair and Click Data

    The growing importance of traditional text-based image retrieval is due to its popularity through web image search engines; Google, Yahoo, Bing, etc. are some of the search engines that use this technique. Text-based image retrieval is based on the assumption that the surrounding text describes the image. For text-based image retrieval systems, the input is a text query and the output is a ranked set of images in which the most relevant results appear first. The limitation of text-based image retrieval is that most of the time the query text is not able to describe the content of the image perfectly, since visual information is full of variety. The Microsoft Research Bing Image Retrieval Challenge aims to achieve cross-modal retrieval by ranking the relevance of the query text terms and the images. This thesis describes the approaches of our team MUVIS for the Microsoft Research Bing image retrieval challenge to measure the relevance of web images and the query given in text form. The challenge is to develop an image-query pair scoring system that assesses the effectiveness of query terms in describing the images. The provided dataset included a training set containing more than 23 million clicked image-query pairs collected from the web over one year. Also, a manually labelled development set was collected. For each image-query pair, a floating-point score was produced. The floating-point score reflected the relevancy of the query in describing the given image, with higher numbers indicating higher relevance and vice versa. For any query, sorting its associated images by their corresponding scores produced the retrieval ranking. The system developed by the MUVIS team consisted of five modules. The two main modules were the text processing module and the principal component analysis assisted perceptron regression with random sub-space selection module. To enhance evaluation accuracy, three complementary modules, i.e. a face bank, a duplicate image detector and optical character recognition, were also developed. Both the main modules and the complementary modules relied on results returned by the text processing module. OverFeat features extracted over the text processing module results acted as input for the principal component analysis assisted perceptron regression with random sub-space selection module, which further transformed the feature vectors. The relevance score for each query-image pair was obtained by comparing the features of the query image with those of the relevant training images. For feature extraction in the face bank and duplicate image detector modules, we used the CMUVIS framework, a distributed computing framework for big data developed by the MUVIS group. Three runs were submitted for evaluation: “Master”, “Sub2”, and “Sub3”. The cumulative similarity was returned as the requested image relevance. Using the proposed approach we reached a value of 0.5099 in terms of discounted cumulative gain on the development set; on the test set we obtained 0.5116. Our solution achieved fourth place in the Microsoft Research Bing grand challenge 2014 for the master submission and second place for the overall submission.
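
    The abstract above describes scoring each query-image pair by comparing features of the query image with those of training images clicked for the same query and returning the cumulative similarity. A minimal sketch of that scoring idea follows; the feature layout, the cosine measure and all names are assumptions for illustration, not the MUVIS team's actual pipeline.

# Cumulative-similarity scoring sketch: sum the cosine similarities between
# the query image's feature vector and features of training images clicked
# for the same query text. Data layout and feature source are hypothetical.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def relevance_score(query_image_feat: np.ndarray, clicked_train_feats: list) -> float:
    """Cumulative similarity of the query image to the clicked training images."""
    return sum(cosine(query_image_feat, f) for f in clicked_train_feats)

# Hypothetical precomputed deep features (e.g., OverFeat-style descriptors).
rng = np.random.default_rng(0)
query_feat = rng.standard_normal(4096)
train_feats = [rng.standard_normal(4096) for _ in range(5)]
print(relevance_score(query_feat, train_feats))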