
    Fast & Confident Probabilistic Categorization

    We describe NRC's submission to the Anomaly Detection/Text Mining competition organised at the Text Mining Workshop 2007. This submission relies on a straightforward implementation of the probabilistic categoriser described by Gaussier et al. (ECIR'02). The categoriser is adapted to handle multiple labelling, and a piecewise-linear confidence estimation layer is added to provide an estimate of the labelling confidence. This technique achieves a score of 1.689 on the test data.
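    The abstract does not detail how the confidence layer works; purely as an illustration of the general idea, the sketch below maps raw categoriser scores to confidence estimates by piecewise-linear interpolation between anchor points and keeps every label whose confidence clears a threshold. The anchor values, labels and threshold are hypothetical.

```python
import numpy as np

# Hypothetical anchor points estimated on held-out data:
# raw categoriser scores and the empirical accuracy observed at each score level.
anchor_scores = np.array([0.0, 0.2, 0.5, 0.8, 1.0])
anchor_accuracy = np.array([0.05, 0.30, 0.60, 0.90, 0.98])

def confidence(raw_score: float) -> float:
    """Map a raw categoriser score to a confidence estimate by
    piecewise-linear interpolation between the anchor points."""
    return float(np.interp(raw_score, anchor_scores, anchor_accuracy))

def predict_labels(scores_per_label: dict[str, float], threshold: float = 0.5) -> list[str]:
    """Multi-label decision: keep every label whose estimated confidence clears the threshold."""
    return [label for label, s in scores_per_label.items() if confidence(s) >= threshold]

print(predict_labels({"sports": 0.85, "politics": 0.40, "finance": 0.10}))
```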

    SVM categorizer: a generic categorization tool using support vector machines

    Supervised text categorisation is a significant tool considering the vast amount of structured, unstructured, or semi-structured texts that are available from internal or external enterprise resources. The goal of supervised text categorisation is to assign text documents to finite pre-specified categories in order to extract and automatically organise information coming from these resources. This paper proposes the implementation of a generic application, SVM Categorizer, using the Support Vector Machines algorithm with an innovative statistical adjustment that improves its performance. The algorithm learns from a pre-categorised document corpus and is tested on an uncategorized one, based on a business intelligence case study. This paper discusses the requirements, design and implementation, and describes every aspect of the application to be developed. The final output of the SVM Categorizer is evaluated using commonly accepted metrics so as to measure its performance and contrast it with other classification tools.
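    The paper's statistical adjustment is not described in the abstract; the snippet below is only a generic sketch of supervised text categorisation with a linear SVM over TF-IDF features (scikit-learn), with a toy corpus standing in for the pre-categorised training data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-in for a pre-categorised training corpus.
train_docs = [
    "quarterly revenue and profit margins improved",
    "the striker scored twice in the second half",
    "central bank raises interest rates again",
    "the midfielder was transferred for a record fee",
]
train_labels = ["business", "sport", "business", "sport"]

# TF-IDF representation followed by a linear SVM categoriser.
categorizer = make_pipeline(TfidfVectorizer(), LinearSVC())
categorizer.fit(train_docs, train_labels)

# Categorise a previously unseen (uncategorised) document.
print(categorizer.predict(["shares fell after the earnings report"]))
```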

    Semantic user profiling techniques for personalised multimedia recommendation

    Due to the explosion of news materials available through broadcast and other channels, there is an increasing need for personalised news video retrieval. In this work, we introduce a semantic-based user modelling technique to capture users’ evolving information needs. Our approach exploits implicit user interaction to capture long-term user interests in a profile. The organised interests are used to retrieve and recommend news stories to the users. In this paper, we exploit the Linked Open Data Cloud to identify similar news stories that match the users’ interests. We evaluate various recommendation parameters by introducing a simulation-based evaluation scheme.
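    The profile representation is not specified in the abstract; as a rough illustration only, the sketch below accumulates a long-term profile of weighted interest concepts from implicit interactions and ranks candidate news stories by overlap with that profile. Concept names, weights and stories are hypothetical.

```python
from collections import Counter

# Long-term user profile: interest concepts weighted by implicit interactions
# (e.g. watching a story fully adds more weight than merely skimming it).
profile = Counter()

def record_interaction(story_concepts: list[str], weight: float) -> None:
    """Update the profile from one implicit interaction."""
    for concept in story_concepts:
        profile[concept] += weight

# Hypothetical implicit feedback.
record_interaction(["Climate_change", "European_Union"], weight=1.0)  # watched fully
record_interaction(["European_Union", "Trade_policy"], weight=0.3)    # skimmed

def score(story_concepts: list[str]) -> float:
    """Rank a candidate story by how much its concepts overlap with the profile."""
    return sum(profile[c] for c in story_concepts)

candidates = {
    "EU climate summit": ["European_Union", "Climate_change"],
    "Local election recap": ["Elections"],
}
recommended = max(candidates, key=lambda title: score(candidates[title]))
print(recommended)
```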

    Prototype/topic based Clustering Method for Weblogs

    In the last 10 years, the information generated on weblog sites has increased exponentially, resulting in a clear need for intelligent approaches to analyse and organise this massive amount of information. In this work, we present a methodology to cluster weblog posts according to the topics discussed therein, which we derive by text analysis. We call the methodology Prototype/Topic Based Clustering, an approach based on a generative probabilistic model in conjunction with a Self-Term Expansion methodology. The Self-Term Expansion methodology is used to improve the representation of the data, and the generative probabilistic model is employed to identify relevant topics discussed in the weblogs. We have modified the generative probabilistic model in order to exploit predefined initialisations of the model, and have performed our experiments on narrow and wide domain subsets. The results of our approach demonstrate a considerable improvement over the pre-defined baseline and alternative state-of-the-art approaches, achieving an improvement of up to 20% in many cases. The experiments were performed on both narrow and wide domain datasets, with the latter showing better improvement; in both cases, however, our results outperformed the baseline and state-of-the-art algorithms.

    The work of the third author was carried out in the framework of the WIQ-EI IRSES project (Grant No. 269180) within the FP7 Marie Curie programme, the DIANA APPLICATIONS Finding Hidden Knowledge in Texts: Applications project (TIN2012-38603-C02-01) and the VLC/CAMPUS Microcluster on Multimodal Interaction in Intelligent Systems.

    Perez-Tellez, F.; Cardiff, J.; Rosso, P.; Pinto Avendaño, D. E. (2016). Prototype/topic based Clustering Method for Weblogs. Intelligent Data Analysis, 20(1):47-65. https://doi.org/10.3233/IDA-150793
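    As a rough illustration of the Self-Term Expansion idea (enriching each post's representation with terms that co-occur with its original terms in the same corpus), here is a minimal sketch; the toy posts, whitespace tokenisation and expansion size are hypothetical simplifications, and the generative clustering model itself is not shown.

```python
from collections import defaultdict
from itertools import combinations

# Toy corpus of weblog posts.
posts = [
    "python clustering topic model",
    "topic model inference gibbs sampling",
    "football match premier league",
    "league fixtures football season",
]
tokenised = [p.split() for p in posts]

# Count how often pairs of terms co-occur within the same post.
cooc = defaultdict(lambda: defaultdict(int))
for tokens in tokenised:
    for a, b in combinations(set(tokens), 2):
        cooc[a][b] += 1
        cooc[b][a] += 1

def expand(tokens: list[str], top_k: int = 2) -> list[str]:
    """Self-term expansion: append the strongest co-occurring terms for each token."""
    expansion = []
    for t in tokens:
        related = sorted(cooc[t], key=cooc[t].get, reverse=True)[:top_k]
        expansion.extend(r for r in related if r not in tokens)
    return tokens + expansion

print(expand(tokenised[0]))
```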

    Towards Personalized and Human-in-the-Loop Document Summarization

    The ubiquitous availability of computing devices and the widespread use of the internet have continuously generated large amounts of data. As a result, the amount of available information on any given topic is far beyond humans' capacity to process it properly, causing what is known as information overload. To cope efficiently with large amounts of information and generate content with significant value to users, we need to identify, merge and summarise information. Data summaries can help gather related information and collect it into a shorter format that enables answering complicated questions, gaining new insight and discovering conceptual boundaries. This thesis focuses on three main challenges in alleviating information overload using novel summarisation techniques. It further intends to facilitate the analysis of documents to support personalised information extraction. This thesis separates the research issues into four areas, covering (i) feature engineering in document summarisation, (ii) traditional static and inflexible summaries, (iii) traditional generic summarisation approaches, and (iv) the need for reference summaries. We propose novel approaches to tackle these challenges by: i) enabling automatic intelligent feature engineering, ii) enabling flexible and interactive summarisation, and iii) utilising intelligent and personalised summarisation approaches. The experimental results demonstrate the efficiency of the proposed approaches compared to other state-of-the-art models. We further propose solutions to the information overload problem in different domains through summarisation, covering network traffic data, health data and business process data.

    Comment: PhD thesis
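    As a loose illustration of the human-in-the-loop, personalised summarisation theme (not the thesis's actual models), the sketch below scores sentences for a one-sentence extractive summary and adjusts the scores with user-supplied term preferences. Sentences, preference terms and weights are hypothetical.

```python
# Minimal sketch: extractive summarisation with a human-in-the-loop preference step.
sentences = [
    "Network traffic peaked at midnight due to a backup job.",
    "The health records pipeline failed validation twice last week.",
    "Business process logs show the approval step is the main bottleneck.",
]

# User feedback collected interactively: positive weight means "more of this".
user_preferences = {"health": 2.0, "backup": -1.0}

def sentence_score(sentence: str) -> float:
    """Base score (sentence length as a crude proxy for information content)
    adjusted by the user's stated term preferences."""
    base = len(sentence.split()) / 20.0
    adjustment = sum(w for term, w in user_preferences.items() if term in sentence.lower())
    return base + adjustment

# Pick the top-scoring sentence as a personalised one-line summary.
summary = max(sentences, key=sentence_score)
print(summary)
```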

    The Computer Science Ontology: A Large-Scale Taxonomy of Research Areas

    Ontologies of research areas are important tools for characterising, exploring, and analysing the research landscape. Some fields of research are comprehensively described by large-scale taxonomies, e.g., MeSH in Biology and PhySH in Physics. Conversely, current Computer Science taxonomies are coarse-grained and tend to evolve slowly. For instance, the ACM classification scheme contains only about 2K research topics and its last version dates back to 2012. In this paper, we introduce the Computer Science Ontology (CSO), a large-scale, automatically generated ontology of research areas, which includes about 26K topics and 226K semantic relationships. It was created by applying the Klink-2 algorithm to a very large dataset of 16M scientific articles. CSO presents two main advantages over the alternatives: i) it includes a very large number of topics that do not appear in other classifications, and ii) it can be updated automatically by running Klink-2 on recent corpora of publications. CSO powers several tools adopted by the editorial team at Springer Nature and has been used to enable a variety of solutions, such as classifying research publications, detecting research communities, and predicting research trends. To facilitate the uptake of CSO, we have developed the CSO Portal, a web application that enables users to download, explore, and provide granular feedback on CSO at different levels. Users can use the portal to rate topics and relationships, suggest missing relationships, and visualise sections of the ontology. The portal will support the publication of and access to regular new releases of CSO, with the aim of providing a comprehensive resource to the various communities engaged with scholarly data.
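    The actual CSO classification pipeline is not described in the abstract; the snippet below is only a toy illustration of assigning research topics to a publication by matching ontology topic labels, and their narrower topics, against its text. The miniature ontology and example abstract are hypothetical.

```python
# Hypothetical miniature slice of a research-area ontology:
# each topic maps to its narrower (child) topics.
ontology = {
    "machine learning": ["neural networks", "support vector machines"],
    "semantic web": ["ontologies", "linked data"],
}

def classify(abstract: str) -> set[str]:
    """Tag a publication with every topic whose label, or one of its
    child labels, appears in the abstract text."""
    text = abstract.lower()
    topics = set()
    for topic, children in ontology.items():
        if topic in text:
            topics.add(topic)
        for child in children:
            if child in text:
                topics.add(child)
                topics.add(topic)  # also assign the broader topic
    return topics

print(classify("We train neural networks on linked data for entity typing."))
```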

    The state of research on folksonomies in the field of Library and Information Science : a Systematic Literature Review

    Purpose – The purpose of this thesis is to provide an overview of all relevant peer-reviewed articles on folksonomies, social tagging and social bookmarking as knowledge organisation systems within the field of Library and Information Science, by reviewing the current state of research on these systems of managing knowledge.

    Method – I use the systematic literature review method to systematically and transparently review and synthesise data extracted from 39 articles found through the discovery system LUBsearch, in order to find out which methods, theories and systems are represented and to what degree, which subfields can be distinguished, how present research within these subfields is, and which larger conclusions can be drawn from research on folksonomies conducted between 2003 and 2013.

    Findings – Many of the studies are exploratory or take the form of literature discussions; other frequently used methods are questionnaires and surveys, although these are often combined with other methods. Of the 39 studies, 22 were quantitative, 15 were qualitative and 2 used mixed methods. I also found an underwhelming number of theories being used explicitly: merely 11 articles explicitly used theories, and only one theory was used twice. No key authors on the topic were identified, though Knowledge Organization, Information Processing & Management and the Journal of the American Society for Information Science and Technology were recognised as key journals for research on folksonomies. There have been many studies on how tags and folksonomies have affected other knowledge organisation systems, or on how pre-existing systems have been used to create new ones. Other well-represented subfields include studies on the quality or characteristics of tags or text, and studies aiming to improve folksonomies, search methods or tags.

    Value – I provide an overview of what has been researched and where the focus of said research has been during the last decade, present suggestions for future research, and identify possible dangers to be wary of, which I argue will benefit folksonomies and knowledge organisation as a whole.