
    An Ontology-Based Recommender System with an Application to the Star Trek Television Franchise

    Collaborative filtering-based recommender systems have proven to be extremely successful in settings where user preference data on items is abundant. However, collaborative filtering algorithms are hindered by their weakness against the item cold-start problem and their general lack of interpretability. Ontology-based recommender systems exploit hierarchical organizations of users and items to enhance browsing, recommendation, and profile construction. While ontology-based approaches address the shortcomings of their collaborative filtering counterparts, ontological organizations of items can be difficult to obtain for items that mostly belong to the same category (e.g., television series episodes). In this paper, we present an ontology-based recommender system that integrates the knowledge represented in a large ontology of literary themes to produce fiction content recommendations. The main novelty of this work is an ontology-based method for computing similarities between items and its integration with the classical Item-KNN (K-nearest neighbors) algorithm. As a case study, we evaluated the proposed method against other approaches by performing the classical rating prediction task on a collection of Star Trek television series episodes in an item cold-start scenario. This transverse evaluation provides insights into the utility of different information resources and methods for the initial stages of recommender system development. We found our proposed method to be a convenient alternative to collaborative filtering approaches for collections of mostly similar items, particularly when other content-based approaches are not applicable or otherwise unavailable. Aside from the new methods, this paper contributes a testbed for future research and an online framework to collaboratively extend the ontology of literary themes to cover other narrative content.
    Comment: 25 pages, 6 figures, 5 tables, minor revision
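    The abstract above combines an ontology-derived item similarity with the classical Item-KNN rating predictor. A minimal sketch of that combination, assuming a plain Jaccard overlap over per-episode theme sets as a stand-in for the paper's richer ontology-based similarity (the episode names, themes, and ratings below are invented toy data):

    ```python
    # Hedged sketch: theme-overlap item similarity plugged into Item-KNN
    # rating prediction. All data here is illustrative, not from the paper.

    def theme_similarity(themes_a, themes_b):
        """Jaccard overlap between two items' theme sets (a simple
        stand-in for an ontology-based similarity)."""
        if not themes_a and not themes_b:
            return 0.0
        return len(themes_a & themes_b) / len(themes_a | themes_b)

    def predict_rating(user_ratings, item_themes, target, k=2):
        """Item-KNN: similarity-weighted mean of the user's ratings on
        the k items most similar to `target`."""
        sims = sorted(
            ((theme_similarity(item_themes[target], item_themes[i]), r)
             for i, r in user_ratings.items() if i != target),
            reverse=True)[:k]
        num = sum(s * r for s, r in sims)
        den = sum(s for s, _ in sims)
        return num / den if den else None

    item_themes = {
        "ep1": {"sacrifice", "duty", "first contact"},
        "ep2": {"duty", "loyalty"},
        "ep3": {"first contact", "sacrifice"},
    }
    user_ratings = {"ep1": 5.0, "ep2": 3.0}
    print(round(predict_rating(user_ratings, item_themes, "ep3"), 2))
    ```

    Because similarity comes from theme sets rather than co-rating patterns, a brand-new episode with annotated themes can be recommended immediately, which is the cold-start advantage the abstract describes.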

    Toward better data veracity in mobile cloud computing: A context-aware and incentive-based reputation mechanism

    This is the author accepted manuscript. The final version is available from the publisher via the DOI in this record.
    As a promising next-generation computing paradigm, Mobile Cloud Computing (MCC) enables the large-scale collection and big data processing of personal private data. An important but often overlooked V of big data is data veracity, which ensures that the data used are trusted, authentic, accurate and protected from unauthorized access and modification. In order to realize the veracity of data in MCC, specific trust models and approaches must be developed. In this paper, a Category-based Context-aware and Recommendation incentive-based reputation Mechanism (CCRM) is proposed to defend against internal attacks and enhance data veracity in MCC. In the CCRM, innovative methods, including a data category and context sensing technology, a security relevance evaluation model, and a Vickrey-Clarke-Groves (VCG)-based recommendation incentive scheme, are integrated into the process of reputation evaluation. Cost analysis indicates that the CCRM has linear communication and computation complexity. Simulation results demonstrate the superior performance of the CCRM compared to existing reputation mechanisms under internal collusion attacks and bad-mouthing attacks.
    This work is supported by the National Natural Science Foundation of China (61363068, 61472083, 61671360), the Pilot Project of Fujian Province (formal industry key project) (2016Y0031), the Foundation of Science and Technology on Information Assurance Laboratory (KJ-14-109) and the Fujian Provincial Key Laboratory of Network Security and Cryptology Research Fund.
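    As a rough illustration of the kind of reputation evaluation the CCRM performs, the sketch below blends a node's prior reputation with new direct evidence and peer recommendations. The weights and update rule are illustrative assumptions, not the paper's formulas, and the VCG incentive computation is omitted entirely:

    ```python
    # Hedged sketch of a context-weighted reputation update. Scores are in
    # [0, 1]; all parameter values are invented for illustration.

    def update_reputation(old_rep, direct_score, recommendations,
                          context_weight=0.7, rec_weight=0.3, alpha=0.5):
        """Blend prior reputation (alpha) with new evidence: a direct
        interaction score weighted by its context, plus the average of
        recommendations received from other nodes."""
        if recommendations:
            rec_avg = sum(recommendations) / len(recommendations)
        else:
            rec_avg = direct_score  # no recommendations: rely on direct evidence
        evidence = context_weight * direct_score + rec_weight * rec_avg
        return alpha * old_rep + (1 - alpha) * evidence

    rep = update_reputation(0.8, direct_score=0.6, recommendations=[0.5, 0.7])
    print(round(rep, 3))
    ```

    A real mechanism along the paper's lines would additionally weight each recommendation by its sender's own reputation (to resist bad-mouthing and collusion) and price truthful recommendations via the VCG scheme.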

    Towards consensus-based group decision making for co-owned data sharing in online social networks


    Prioritization of the launch of ICT products and services through linguistic multi-criteria decision-making

    The market launch of new products and services is a basic pillar for large and medium-sized companies in the ICT (Information and Communications Technology) sector. Choosing the right moment for it is usually a differentiating factor, since it is a source of competitive advantage. There are several mechanisms and strategies to address this problem from the market perspective. However, the criteria of the different actors involved (managers, sales representatives, experts, etc.) coexist in the corporate sphere and often differ, causing difficulties in priority-setting processes for the launch of a product or service. The assessment of the prioritization of these criteria is usually expressed in natural language, thus adding a great deal of uncertainty. Fuzzy linguistic models have proved to be an efficient tool for managing the intrinsic uncertainty of this type of information. This paper presents a linguistic multi-criteria decision-making model able to reconcile the different requirements and viewpoints existing in the corporate sector when planning the launch of new products and services. The proposed model is based on the fuzzy 2-tuple linguistic model, aimed at managing linguistic data expressing different corporate criteria without compromising accuracy in the calculation of said data. In order to illustrate this, a practical case study is presented, in which the model is applied to scheduling the launch prioritization of several new products and services by a telecommunications company, within the deadlines set in its strategic planning.
    The authors would like to acknowledge the financial support received from the European Regional Development Fund (ERDF) for the Research Projects TIN2016-75850-R, TIN2016-79484-R and TIN2013-40658-P.
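    The fuzzy 2-tuple linguistic model the paper builds on represents an aggregated assessment as a pair (label, symbolic translation), so averaging linguistic labels loses no precision. A minimal sketch, with an invented five-label term set and toy expert ratings (the paper's actual term sets and aggregation operators are not reproduced here):

    ```python
    # Hedged sketch of the 2-tuple linguistic representation: a numeric
    # aggregation result beta in [0, g] is encoded as the closest label
    # plus a symbolic translation alpha, so no precision is lost.

    LABELS = ["none", "low", "medium", "high", "perfect"]  # g = 4

    def to_two_tuple(beta):
        """Map beta in [0, g] to (label, alpha) with alpha = beta - i,
        where i is the index of the closest label."""
        i = int(round(beta))
        return LABELS[i], beta - i

    def aggregate(indices):
        """Average several label indices and return the 2-tuple result."""
        beta = sum(indices) / len(indices)
        return to_two_tuple(beta)

    # Three experts rate a launch priority as medium (2), high (3), high (3):
    label, alpha = aggregate([2, 3, 3])
    print(label, round(alpha, 3))
    ```

    The result reads as "high, minus a third of a step toward medium": the alpha term is what lets the model rank alternatives that would collapse to the same label under plain rounding.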

    Fuzzy rule based profiling approach for enterprise information seeking and retrieval

    With the exponential growth of information available on the Internet and on organisational intranets, there is a need for profile-based information seeking and retrieval (IS&R) systems. These systems should be able to support users with their context-aware information needs. This paper presents a new approach for enterprise IS&R systems that uses fuzzy logic to develop task, user and document profiles to model user information-seeking behaviour. Relevance feedback was captured from real users engaged in IS&R tasks. The feedback was used to develop a linear regression model for predicting document relevancy based on implicit relevance indicators. Fuzzy relevance profiles were created using Term Frequency and Inverse Document Frequency (TF/IDF) analysis for the successful user queries. Fuzzy rule-based summarisation was used to integrate the three profiles into a unified index reflecting the semantic weight of the query terms related to the task, user and document. The unified index was used to select the most relevant documents and experts related to the query topic. The overall performance of the system was evaluated on standard precision and recall metrics, which show significant improvements in retrieving relevant documents in response to user queries.
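    The TF/IDF weighting used to build the relevance profiles can be sketched as follows; the toy corpus and the choice of raw term frequency with a natural-log IDF are assumptions, and the fuzzy rule base itself is not reproduced:

    ```python
    # Hedged sketch of TF/IDF term weighting of the kind used to build
    # relevance profiles. Corpus and weighting variant are illustrative.

    import math

    docs = [
        "enterprise search retrieval system",
        "fuzzy profiles for enterprise users",
        "document retrieval with relevance feedback",
    ]

    def tf_idf(term, doc, corpus):
        """Term frequency in `doc` times inverse document frequency
        over `corpus` (natural log, zero if the term never occurs)."""
        words = doc.split()
        tf = words.count(term) / len(words)
        df = sum(1 for d in corpus if term in d.split())
        idf = math.log(len(corpus) / df) if df else 0.0
        return tf * idf

    print(round(tf_idf("retrieval", docs[0], docs), 3))
    ```

    In a profiling system along these lines, each term's TF/IDF weight would then be fuzzified (e.g. mapped to low/medium/high relevance) before the rule-based summarisation step combines task, user and document profiles.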

    Design of an E-learning system using semantic information and cloud computing technologies

    Humanity currently faces many difficult problems that threaten the life and survival of the human race, and everyone can be affected by them, directly or indirectly. Education is a key solution for most of them. In this thesis we make use of current technologies to enhance and ease the learning process. We designed an e-learning system based on semantic information and cloud computing, together with other technologies that contribute to improving the educational process and raising the level of students. The design was built after extensive research into useful technologies, their types, and examples of actual systems previously discussed by other researchers. In addition to the proposed design, an algorithm was implemented to identify the topics found in large textual educational resources; it was tested against other methods and proved efficient. The algorithm can extract the main topics from textual learning resources, link related resources, and generate interactive dynamic knowledge graphs, and it accomplishes these tasks accurately and efficiently even for large books. We used Wikipedia Miner, TextRank, and Gensim within our algorithm, and its accuracy was evaluated against Gensim, showing a large improvement.
Augmenting the system design with the implemented algorithm will produce many useful services for improving the learning process, such as: automatically identifying the main topics of large textual learning resources and connecting them to well-defined concepts from Wikipedia; enriching current learning resources with semantic information from external sources; providing students with browsable, dynamic, interactive knowledge graphs; and making use of learning groups to encourage students to share their learning experiences and feedback with other learners.
    Doctoral Programme in Telematic Engineering, Universidad Carlos III de Madrid. Committee: Chair: Luis Sánchez Fernández; Secretary: Luis de la Fuente Valentín; Member: Norberto Fernández Garcí
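    The topic-identification step described above draws on TextRank. A toy stand-in for that pipeline (not the thesis implementation, and with an invented example sentence): build a word co-occurrence graph over a sliding window and rank words with a few PageRank iterations:

    ```python
    # Hedged sketch of TextRank-style keyword extraction: words that
    # co-occur within a window become graph neighbours, then PageRank
    # scores identify the most central (topical) words.

    from collections import defaultdict

    def textrank_keywords(words, window=2, iters=30, d=0.85):
        graph = defaultdict(set)
        for i, w in enumerate(words):
            for j in range(i + 1, min(i + window + 1, len(words))):
                if words[j] != w:
                    graph[w].add(words[j])
                    graph[words[j]].add(w)
        graph = dict(graph)
        score = {w: 1.0 for w in graph}
        for _ in range(iters):  # power-iteration PageRank on the word graph
            score = {
                w: (1 - d) + d * sum(score[v] / len(graph[v]) for v in graph[w])
                for w in graph
            }
        return sorted(score, key=score.get, reverse=True)

    tokens = ("semantic web technologies support semantic search "
              "and semantic annotation of learning resources").split()
    print(textrank_keywords(tokens)[0])
    ```

    A full pipeline like the one described would additionally filter tokens by part of speech, link the top-ranked terms to Wikipedia concepts (the Wikipedia Miner role), and connect resources sharing those concepts into the knowledge graph.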