363 research outputs found

    The Design of Trust Networks

    One can use trust networks to find trustworthy information, people, products, and services on public networks. Hence, they have the potential to combine the advantages of search, recommendation systems, and social networks, but proper design and correct incentives are critical to their success. In this paper, I propose a trust network architecture that emphasizes simplicity and robustness: a trust network with constrained trust relationships, together with a decentralized search and recommendation process. I create both informational and monetary incentives to encourage agents to join the network, to investigate and discover other trustworthy agents, and to make commitments to them by trusting them, insuring them, or even directly investing in them. I show that making correct judgments about the trustworthiness of others and reporting them truthfully are the optimal strategies, since they reward agents both with information, by providing access to more of the network, and with monetary payments for their services as information intermediaries. The substantial income potential from trust connections creates strong incentives to join the network, to create reliable trust connections, and to report them truthfully.
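    As a rough illustration of the kind of decentralized discovery such a network enables, the sketch below propagates trust multiplicatively along paths, damping each hop by a decay factor. The function name, the damping scheme, and the cut-off are hypothetical choices for illustration, not taken from the paper:

```python
def propagate_trust(edges, source, decay=0.9, min_trust=0.1):
    """Breadth-first trust propagation over a directed trust graph.

    edges: dict mapping an agent to a list of (neighbor, trust) pairs,
    where trust is a direct trust score in (0, 1].
    Returns the best discovered trust score for each reachable agent,
    multiplying scores along a path and damping each hop by `decay`;
    agents whose best score falls below `min_trust` are not explored.
    """
    best = {source: 1.0}
    frontier = [source]
    while frontier:
        nxt = []
        for agent in frontier:
            for neighbor, trust in edges.get(agent, []):
                score = best[agent] * trust * decay
                # Keep only the strongest trust path found so far.
                if score >= min_trust and score > best.get(neighbor, 0.0):
                    best[neighbor] = score
                    nxt.append(neighbor)
        frontier = nxt
    return best
```

    For example, an agent reachable only through a weakly trusted intermediary ends up with a low score and is never recommended, which mirrors the paper's point that reliable trust connections are what make the network valuable.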

    The Lowlands team at TRECVID 2007

    In this report we summarize our methods and results for the search tasks in TRECVID 2007. We employ two different kinds of search: purely ASR-based and purely concept-based search. However, there is no significant difference between the performance of the two systems. Using neighboring shots for the combination of two concepts appears to be beneficial. General preprocessing of queries increased performance, and choosing detector sources helped. However, all automatic search components require further investigation.

    Reducing semantic complexity in distributed Digital Libraries: treatment of term vagueness and document re-ranking

    The purpose of this paper is to propose models to reduce semantic complexity in heterogeneous digital libraries (DLs). The aim is to introduce value-added services (treatment of term vagueness and document re-ranking) that improve quality in DLs when combined with the heterogeneity components established in the project "Competence Center Modeling and Treatment of Semantic Heterogeneity". Empirical observations show that freely formulated user terms and terms from controlled vocabularies are often not the same, or match only by coincidence. Therefore, a value-added service, the Search Term Recommender (STR), will be developed which rephrases natural-language search terms into suggestions from the controlled vocabulary. Two methods derived from scientometrics and network analysis will be implemented with the objective of re-ranking result sets by the following structural properties: ranking results by core journals (so-called Bradfordizing) and ranking by the centrality of authors in co-authorship networks. Comment: 12 pages, 4 figures.
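    Bradfordizing can be illustrated with a small sketch: re-rank a result set so that documents from the journals contributing the most hits to that set (the "core" journals) come first. The `journal` field and the tie-breaking behavior are assumptions for illustration, not the paper's actual implementation:

```python
from collections import Counter

def bradfordize(results):
    """Re-rank a result set so that documents from 'core' journals
    (those contributing the most hits to this result set) come first.

    results: list of dicts, each with a 'journal' key.  Python's sort
    is stable, so the original order is preserved within each journal.
    """
    counts = Counter(doc["journal"] for doc in results)
    return sorted(results, key=lambda doc: -counts[doc["journal"]])
```

    The design choice here follows Bradford's law: a small core of journals accounts for a large share of the relevant literature, so surfacing those journals first tends to surface the most central documents.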

    Lightweight Tag-Aware Personalized Recommendation on the Social Web Using Ontological Similarity

    With the rapid growth of social tagging systems, many research efforts are being put into personalized search and recommendation using social tags (i.e., folksonomies). As users can freely choose their own vocabulary, social tags can be very ambiguous (for instance, due to the use of homonyms or synonyms). Machine learning techniques (such as clustering and deep neural networks) are usually applied to overcome this tag ambiguity problem. However, machine-learning-based solutions always need very powerful computing facilities to train recommendation models from a large amount of data, so they are inappropriate for lightweight recommender systems. In this work, we propose an ontological similarity to tackle the tag ambiguity problem, without the need for model training, by using contextual information. The novelty of this ontological similarity is that it first leverages external domain ontologies to disambiguate tag information, and then semantically quantifies the relevance between user and item profiles according to the semantic similarity of the matching concepts of tags in the respective profiles. Our experiments show that the proposed ontological similarity is semantically more accurate than state-of-the-art similarity metrics, and can thus be applied to improve the performance of content-based tag-aware personalized recommendation on the Social Web. Consequently, as a model-training-free solution, ontological similarity is a good disambiguation choice for lightweight recommender systems and a complement to machine-learning-based recommendation solutions.
    Fil: Xu, Zhenghua. University of Oxford; United Kingdom. Fil: Tifrea-Marciuska, Oana. Bloomberg; United Kingdom. Fil: Lukasiewicz, Thomas. University of Oxford; United Kingdom. Fil: Martinez, Maria Vanina. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Bahía Blanca. Instituto de Ciencias e Ingeniería de la Computación. Universidad Nacional del Sur. Departamento de Ciencias e Ingeniería de la Computación. Instituto de Ciencias e Ingeniería de la Computación; Argentina. Fil: Simari, Gerardo. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Bahía Blanca. Instituto de Ciencias e Ingeniería de la Computación. Universidad Nacional del Sur. Departamento de Ciencias e Ingeniería de la Computación. Instituto de Ciencias e Ingeniería de la Computación; Argentina. Fil: Chen, Cheng. China Academy of Electronics and Information Technology; China
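    The concept-level similarity described above can be sketched with a toy Wu-Palmer-style measure over an is-a hierarchy. This is a deliberate simplification: the paper's actual metric works over external domain ontologies, whereas here the hierarchy, the `parent` map, and a single shared root are all assumptions:

```python
def wu_palmer(concept_a, concept_b, parent):
    """Wu-Palmer-style similarity on a toy is-a hierarchy.

    parent: dict mapping each concept to its parent; the root has no
    entry.  Similarity is 2*depth(LCS) / (depth(a) + depth(b)), where
    LCS is the least common subsumer and depth counts nodes from the
    root (the root itself has depth 1).  Assumes a shared root.
    """
    def ancestors(c):
        path = []
        while c is not None:
            path.append(c)
            c = parent.get(c)
        return path

    path_a = ancestors(concept_a)
    in_b = set(ancestors(concept_b))
    # The first ancestor of a that also subsumes b is the LCS.
    lcs = next(c for c in path_a if c in in_b)
    depth = lambda c: len(ancestors(c))
    return 2.0 * depth(lcs) / (depth(concept_a) + depth(concept_b))
```

    Two tags anchored to sibling concepts (e.g., two kinds of animal) score higher than tags whose only common subsumer is the root, which is exactly the disambiguation signal the abstract describes.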

    The Ubiquity of Large Graphs and Surprising Challenges of Graph Processing: Extended Survey

    Graph processing is becoming increasingly prevalent across many application domains. In spite of this prevalence, there is little research about how graphs are actually used in practice. We performed an extensive study that consisted of an online survey of 89 users, a review of the mailing lists, source repositories, and whitepapers of a large suite of graph software products, and in-person interviews with 6 users and 2 developers of these products. Our online survey aimed at understanding: (i) the types of graphs users have; (ii) the graph computations users run; (iii) the types of graph software users use; and (iv) the major challenges users face when processing their graphs. We describe the participants' responses to our questions, highlighting common patterns and challenges. Based on our interviews and our review of the other sources, we were able to answer new questions raised by participants' responses to our online survey and to understand the specific applications that use graph data and software. Our study revealed surprising facts about graph processing in practice. In particular, real-world graphs represent a very diverse range of entities and are often very large; scalability and visualization are undeniably the most pressing challenges faced by participants; and data integration, recommendations, and fraud detection are very popular applications supported by existing graph software. We hope these findings can guide future research.

    From Keyword Search to Exploration: How Result Visualization Aids Discovery on the Web

    A key to the Web's success is the power of search. The elegant way in which search results are returned is usually remarkably effective. However, for exploratory search, in which users need to learn, discover, and understand novel or complex topics, there is substantial room for improvement. Human-computer interaction researchers and web browser designers have developed novel strategies to improve Web search by enabling users to conveniently visualize, manipulate, and organize their Web search results. This monograph offers fresh ways to think about search-related cognitive processes and describes innovative design approaches to browsers and related tools. For instance, while keyword search presents users with results for specific information (e.g., what is the capital of Peru), other methods may let users see and explore the contexts of their requests for information (related or previous work, conflicting information), or the properties that associate groups of information assets (group legal decisions by lead attorney). We also consider both the traditional and novel ways in which these strategies have been evaluated. From our review of cognitive processes, browser design, and evaluations, we reflect on future opportunities and new paradigms for exploring and interacting with Web search results.

    Using the Semantic Web in digital humanities : Shift from data publishing to data-analysis and serendipitous knowledge discovery

    This paper discusses a shift of focus in research on Cultural Heritage semantic portals based on Linked Data, and envisions and proposes new directions of research. Three generations of portals are identified. Ten years ago, the research focus in semantic portal development was on data harmonization, aggregation, search, and browsing ('first generation systems'). At the moment, the rise of Digital Humanities research has started to shift the focus to providing the user with integrated tools for solving research problems in interactive ways ('second generation systems'). This paper envisions and argues that the next step, to 'third generation systems', is based on Artificial Intelligence: future portals not only provide tools for the human to solve problems but are used for finding research problems in the first place, for addressing them, and even for solving them automatically under the constraints set by the human researcher. Such systems should preferably be able to explain their reasoning, which is an important aspect in the source-critical humanities research tradition. The second and third generation systems set new challenges for both computer scientists and humanities researchers.

    Proceedings of the 3rd Workshop on Social Information Retrieval for Technology-Enhanced Learning

    Get PDF
    Learning and teaching resources are available on the Web, both in terms of digital learning content and people resources (e.g. other learners, experts, tutors). They can be used to facilitate teaching and learning tasks. The remaining challenge is to develop, deploy, and evaluate social information retrieval (SIR) methods, techniques, and systems that provide learners and teachers with guidance through a potentially overwhelming variety of choices. The aim of the SIRTEL'09 workshop is to look beyond recent achievements and to discuss specific topics, emerging research issues, new trends, and endeavors in SIR for TEL. The workshop will bring together researchers and practitioners to present, and more importantly, to discuss the current status of research in SIR and TEL and its implications for science and teaching.

    Génération automatique d'alignements complexes d'ontologies (Automatic generation of complex ontology alignments)

    The Linked Open Data (LOD) cloud is composed of many data repositories. The data in these repositories are described by different vocabularies (ontologies). Each ontology has its own terminology and modeling, which makes them heterogeneous. To link the data of the LOD cloud and make it interoperable, ontology alignments establish correspondences between the entities of these ontologies. Many matching systems exist that generate simple correspondences, i.e., they link one entity to another. However, to overcome ontology heterogeneity, more expressive correspondences are sometimes needed. Finding this kind of correspondence is a tedious task that is worth automating. In this thesis, an automatic complex matching approach based on a user's knowledge needs and common instances is proposed. The field of complex alignment is relatively recent, and little work addresses the problem of evaluating such alignments. To fill this gap, an automatic complex alignment evaluation system based on instance comparison is proposed. This system is complemented by a synthetic dataset on the conference domain.
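    A minimal sketch of instance-based alignment evaluation, assuming each entity's instance set is known: a correspondence counts as valid when the aligned entities share enough common instances (here measured by Jaccard overlap). The threshold, entity names, and data layout are illustrative assumptions, not the thesis's actual protocol:

```python
def instance_confidence(instances_a, instances_b):
    """Jaccard overlap between the instance sets of two aligned entities."""
    a, b = set(instances_a), set(instances_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def evaluate_alignment(alignment, instances, threshold=0.5):
    """Instance-based precision of an alignment.

    alignment: list of (source_entity, target_entity) correspondences.
    instances: dict mapping each entity to its set of instances.
    A correspondence is counted as valid when the two entities share
    at least `threshold` Jaccard overlap of instances.
    """
    if not alignment:
        return 0.0
    valid = sum(
        1
        for src, tgt in alignment
        if instance_confidence(instances[src], instances[tgt]) >= threshold
    )
    return valid / len(alignment)
```

    The appeal of this style of evaluation is that it needs no hand-made reference alignment: the shared instances themselves serve as the ground truth, which is what makes the evaluation automatic.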

    Context-based matching revisited

    Matching ontologies can be achieved by first recontextualising ontologies and then using this context information to deduce the relations between ontology entities. In Deliverable 3.3.1, we introduced the Scarlet system, which uses ontologies on the web as context for matching ontologies. In this deliverable, we push this further by systematising the parameterisation of Scarlet. We develop a framework for expressing context-based matching parameters and implement most of them within Scarlet. This allows us to evaluate the impact of each of these parameters on the actual results of context-based matching.
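    The core idea of using a background ontology as context can be sketched as follows: anchor both terms in the background ontology and read a subsumption relation off its class hierarchy. This is a strong simplification for illustration only; Scarlet's actual procedure anchors terms in ontologies found on the web and derives more relation types than the two shown here:

```python
def match_via_context(term_a, term_b, background_subclass):
    """Derive a relation between two terms via a background ontology.

    background_subclass: dict mapping a class to its direct superclass
    in the background ontology.  If term_a's anchor is a (transitive)
    subclass of term_b's anchor, infer that term_a is subsumed by
    term_b, and vice versa; otherwise report no relation.
    Assumes each term anchors to the background class of the same name.
    """
    def superclasses(c):
        seen = set()
        while c in background_subclass:
            c = background_subclass[c]
            seen.add(c)
        return seen

    if term_b in superclasses(term_a):
        return "subsumed-by"  # term_a is more specific than term_b
    if term_a in superclasses(term_b):
        return "subsumes"     # term_a is more general than term_b
    return None
```

    Parameters such as which background ontologies to consult, how terms are anchored, and how derived relations are combined are exactly the kind of knobs the deliverable's parameterisation framework makes explicit.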
