
    Adding Context to Social Tagging Systems

    Many of the features of Web 2.0 encourage users to actively interact with each other, and social tagging systems are a good example of this trend on the Web. The primary purpose of social tagging systems is to facilitate shared access to resources. Our focus in this paper is on overcoming some of the limitations of social tagging systems, such as the flat structure of folksonomies and the absence of semantics, in terms of information retrieval. We propose and develop an integrated approach, a social tagging system with a directory facility, which can overcome the limitations of both traditional taxonomies and folksonomies. Our preliminary experiments indicate that this approach is promising and that the context provided by the directory facility improves the precision of information retrieval. In addition, our synonym detection algorithm is capable of finding synonyms in social tagging systems without any external inputs.
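    The abstract mentions a synonym detection algorithm that needs no external inputs but does not describe it. A minimal sketch, assuming only (user, resource, tag) assignments as input, could work as follows: two tags that co-occur with very similar sets of other tags, yet are rarely applied together to the same resource, are flagged as synonym candidates. The heuristic and its thresholds below are illustrative assumptions, not the authors' method.
```python
# Hypothetical co-occurrence-based synonym detection over folksonomy data.
from collections import defaultdict
from itertools import combinations
import math

def synonym_candidates(assignments, sim_threshold=0.7, cooc_threshold=0.05):
    """assignments: iterable of (user, resource, tag) triples."""
    tags_by_resource = defaultdict(set)
    resources_by_tag = defaultdict(set)
    for _user, resource, tag in assignments:
        tags_by_resource[resource].add(tag)
        resources_by_tag[tag].add(resource)

    # Tag-context vectors: how often each tag co-occurs with every other tag.
    context = defaultdict(lambda: defaultdict(int))
    for tags in tags_by_resource.values():
        for a, b in combinations(sorted(tags), 2):
            context[a][b] += 1
            context[b][a] += 1

    def cosine(a, b):
        va, vb = context[a], context[b]
        dot = sum(v * vb[k] for k, v in va.items() if k in vb)
        norm_a = math.sqrt(sum(v * v for v in va.values()))
        norm_b = math.sqrt(sum(v * v for v in vb.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    candidates = []
    for t1, t2 in combinations(resources_by_tag, 2):
        shared = len(resources_by_tag[t1] & resources_by_tag[t2])
        direct = shared / min(len(resources_by_tag[t1]), len(resources_by_tag[t2]))
        # Similar context but little direct co-occurrence suggests synonymy.
        if cosine(t1, t2) >= sim_threshold and direct <= cooc_threshold:
            candidates.append((t1, t2))
    return candidates
```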

    Towards Cleaning-up Open Data Portals: A Metadata Reconciliation Approach

    This paper presents an approach to metadata reconciliation, curation and linking for Open Governmental Data Portals (ODPs). ODPs have lately become the standard solution for governments wishing to make their public data available to society. Portal managers use several types of metadata to organize the datasets, one of the most important being tags. However, the tagging process is subject to many problems, such as synonyms, ambiguity and incoherence, among others. As our empirical analysis of ODPs shows, these issues are currently prevalent in most ODPs and effectively hinder the reuse of Open Data. To address these problems, we develop and implement an approach for tag reconciliation in Open Data Portals, encompassing local actions related to individual portals and global actions for adding a semantic metadata layer above individual portals. The local part aims to enhance the quality of tags in a single portal, and the global part is meant to interlink ODPs by establishing relations between tags.
    Comment: 8 pages, 10 figures - Under revision for ICSC201
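    The abstract does not detail the reconciliation pipeline itself. A minimal sketch of the "local" step, assuming it folds near-duplicate tag spellings within one portal into a canonical form, might look like the following; the normalisation rules, the string-similarity measure (Python's standard difflib) and the threshold are assumptions for illustration only.
```python
# Hypothetical local tag reconciliation for a single Open Data Portal.
from difflib import SequenceMatcher

def reconcile_tags(tags, similarity=0.85):
    """Map each raw tag to a canonical tag within one portal."""
    canonical = []   # canonical tags seen so far
    mapping = {}
    for raw in tags:
        # Basic normalisation: trim, lowercase, unify separators.
        norm = raw.strip().lower().replace("_", " ").replace("-", " ")
        # Merge with an existing canonical tag if the strings are near-duplicates.
        match = next(
            (c for c in canonical
             if SequenceMatcher(None, norm, c).ratio() >= similarity),
            None,
        )
        if match is None:
            canonical.append(norm)
            match = norm
        mapping[raw] = match
    return mapping

# Case, spacing and minor spelling variants collapse onto one canonical tag.
print(reconcile_tags(["Health", "health ", "healt", "transport", "Transports"]))
```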

    A Fuzzy-Based Multimedia Content Retrieval Method Using Mood Tags and Their Synonyms in Social Networks

    The preferences of Web information purchasers are rapidly evolving. Cost-effectiveness is becoming less important than cost-satisfaction, which emphasizes the purchaser’s psychological satisfaction. One way to improve a user’s cost-satisfaction in multimedia content retrieval is to exploit the mood inherent in multimedia items. An example of an application using this method is an SNS (Social Network Service) based on folksonomy, but such applications encounter problems caused by synonyms. To address the synonym problem, our previous study represented the mood of multimedia content by arousal and valence (AV) in Thayer’s two-dimensional model as an internal tag. Although this solved some of the synonym problems, the retrieval performance was lower than that of a keyword-based method. In this paper, a new method is proposed that solves the synonym problem while maintaining the same performance as the keyword-based approach. In the proposed method, the mood of multimedia content is represented as a fuzzy set over the 12 moods of the Thayer model. For the analysis, the proposed method is compared with two other methods, one based on AV values and the other on keywords. The results demonstrate that the proposed method is superior to both.
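    To make the kind of representation described above concrete, the sketch below models each item's mood as a fuzzy set over 12 mood labels and ranks items by a fuzzy (min/max) similarity to a query fuzzy set. The specific label names, membership values and similarity measure are assumptions; the abstract does not give the paper's exact formulation.
```python
# Hypothetical fuzzy mood retrieval over a 12-label mood vocabulary.
MOODS = ["excited", "happy", "pleased", "relaxed", "peaceful", "calm",
         "sleepy", "bored", "sad", "nervous", "angry", "annoyed"]

def fuzzy_similarity(a: dict, b: dict) -> float:
    # Jaccard-style similarity of two fuzzy sets: sum of mins / sum of maxes.
    num = sum(min(a.get(m, 0.0), b.get(m, 0.0)) for m in MOODS)
    den = sum(max(a.get(m, 0.0), b.get(m, 0.0)) for m in MOODS)
    return num / den if den else 0.0

def retrieve(query: dict, items: dict, top_k: int = 5):
    """items: {item_id: fuzzy mood set}; returns the top_k most similar items."""
    ranked = sorted(items.items(),
                    key=lambda kv: fuzzy_similarity(query, kv[1]),
                    reverse=True)
    return ranked[:top_k]

# A query tag such as "cheerful" could first be mapped (via a synonym list or
# AV coordinates) onto a fuzzy set like the one below before retrieval.
query = {"happy": 0.9, "excited": 0.6, "pleased": 0.4}
items = {"song_a": {"happy": 0.8, "pleased": 0.5},
         "song_b": {"sad": 0.7, "bored": 0.4}}
print(retrieve(query, items, top_k=1))
```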

    Aspect-Controlled Neural Argument Generation

    We rely on arguments in our daily lives to deliver our opinions, and we base them on evidence to make them more convincing. However, finding and formulating arguments can be challenging. In this work, we train a language model for argument generation that can be controlled at a fine-grained level to generate sentence-level arguments for a given topic, stance, and aspect. We define argument aspect detection as a necessary method to allow this fine-grained control and crowdsource a dataset of 5,032 arguments annotated with aspects. Our evaluation shows that our generation model is able to generate high-quality, aspect-specific arguments. Moreover, these arguments can be used to improve the performance of stance detection models via data augmentation and to generate counter-arguments. We publish all datasets and code to fine-tune the language model.
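    To illustrate the control setup, the sketch below shows control-code-style conditioning with the Hugging Face transformers API: topic, stance and aspect are prepended as a plain-text prefix and the model continues with an argument. The prefix format and the `gpt2` checkpoint are stand-in assumptions; the paper fine-tunes and releases its own model, data and code.
```python
# Illustrative aspect-controlled generation via a control-code prefix.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in checkpoint; not the authors' fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def generate_argument(topic: str, stance: str, aspect: str,
                      max_new_tokens: int = 40) -> str:
    # Condition generation on topic, stance and aspect as a plain-text prefix.
    prompt = f"{topic} {stance} {aspect} "
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Strip the prompt tokens and return only the generated continuation.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)

print(generate_argument("nuclear energy", "CON", "waste disposal"))
```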

    Web 2.0 and folksonomies in a library context

    Libraries have a societal purpose, and this role has become increasingly important as new technologies enable organizations to support, enable and enhance the participation of users in assuming an active role in the creation and communication of information. Folksonomies, a Web 2.0 technology, are such an example. Folksonomies result from individuals freely tagging resources available to them on a computer network. In a library environment, folksonomies have the potential to overcome certain limitations of traditional classification systems such as the Library of Congress Subject Headings (LCSH). Typical limitations of such classification systems include, for example, the rigidity of the underlying taxonomical structures and the difficulty of introducing change in the categories. Folksonomies are a supporting technology for existing classification systems, helping to describe library resources more flexibly, dynamically and openly. As a review of the current literature shows, the adoption of folksonomies in libraries is novel and limited research has been carried out in the area. This paper presents research into the adoption of folksonomies for a university library. A Web 2.0 system was developed, based on the requirements collected from library stakeholders, and integrated with the existing library computer system. An evaluation of the work was carried out in the form of a survey in order to understand the possible reactions of users to folksonomies as well as the effects on their behavior. The broad conclusion of this work is that folksonomies seem to have a beneficial effect on users’ involvement as active library participants and encourage users to browse the catalogue in more depth. (Post-print version; Copyright @ 2011 Elsevier.)

    A survey of data mining techniques for social media analysis

    Social networks have gained remarkable attention in the last decade. Accessing social network sites such as Twitter, Facebook, LinkedIn and Google+ through the internet and Web 2.0 technologies has become more affordable. People are becoming more interested in and reliant on social networks for information, news and the opinions of other users on diverse subject matters. The heavy reliance on social network sites causes them to generate massive data characterised by three computational issues, namely size, noise and dynamism. These issues often make social network data very complex to analyse manually, making computational means of analysis essential. Data mining provides a wide range of techniques for detecting useful knowledge from massive datasets, such as trends, patterns and rules [44]. Data mining techniques are used for information retrieval, statistical modelling and machine learning. These techniques employ data pre-processing, data analysis, and data interpretation processes in the course of data analysis. This survey discusses the different data mining techniques used in mining diverse aspects of social networks over the decades, from historical techniques to up-to-date models, including our novel technique named TRCM. All the techniques covered in this survey are listed in Table 1, including the tools employed as well as the names of their authors.

    Evaluation of Automatic Video Captioning Using Direct Assessment

    We present Direct Assessment, a method for manually assessing the quality of automatically generated captions for video. Evaluating the accuracy of video captions is particularly difficult because for any given video clip there is no definitive ground truth or correct answer against which to measure. Automatic metrics for comparing automatic video captions against a manual caption, such as BLEU and METEOR, drawn from techniques used in evaluating machine translation, were used in the TRECVid video captioning task in 2016, but these are shown to have weaknesses. The work presented here brings human assessment into the evaluation by crowdsourcing how well a caption describes a video. We automatically degrade the quality of some sample captions which are assessed manually, and from this we are able to rate the quality of the human assessors, a factor we take into account in the evaluation. Using data from the TRECVid video-to-text task in 2016, we show how our direct assessment method is replicable and robust and should scale to settings where there are many caption-generation techniques to be evaluated.
    Comment: 26 pages, 8 figures
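    A rough sketch of the quality-control idea mentioned above (scoring deliberately degraded captions to rate the assessors themselves), assuming per-assessor score standardisation and a simple mean-difference check; the paper's actual statistical test and thresholds are not stated in the abstract and the margin below is an assumption.
```python
# Hypothetical assessor quality control for crowdsourced caption ratings.
from statistics import mean, pstdev

def z_standardise(scores):
    # Per-assessor standardisation so harsh and lenient raters become comparable.
    mu, sigma = mean(scores), pstdev(scores)
    return [(s - mu) / sigma if sigma else 0.0 for s in scores]

def reliable_assessor(original_scores, degraded_scores, margin=0.5):
    # Standardise all of one assessor's raw 0-100 scores together, then keep
    # the assessor only if degraded captions score clearly lower on average.
    z = z_standardise(list(original_scores) + list(degraded_scores))
    z_orig, z_degr = z[:len(original_scores)], z[len(original_scores):]
    return mean(z_orig) - mean(z_degr) >= margin

print(reliable_assessor([78, 85, 90, 70], [40, 55, 38, 60]))  # expected: True
print(reliable_assessor([78, 85, 90, 70], [80, 82, 79, 88]))  # expected: False
```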