4,395 research outputs found

    Users' trust in information resources in the Web environment: a status report

    This study has three aims: to provide an overview of the ways in which trust is either assessed or asserted in relation to the use and provision of resources in the Web environment for research and learning; to assess which solutions might be worth further investigation, and whether establishing ways to assert trust in academic information resources could assist the development of information literacy; and to help increase understanding of how perceptions of trust influence the behaviour of information users.

    Computing word-of-mouth trust relationships in social networks from Semantic Web and Web 2.0 data sources

    Social networks can serve both as a rich source of new information and as a filter to identify the information most relevant to our specific needs. In this paper we present a methodology and algorithms that, by exploiting existing Semantic Web and Web 2.0 data sources, help individuals identify who in their social network knows what, and who is the most trustworthy source of information on that topic. Our approach improves upon previous work in a number of ways, such as incorporating topic-specific rather than global trust metrics. This is achieved by generating topic experience profiles for each network member, based on data from Revyu and del.icio.us, to indicate who knows what. Identification of the most trustworthy sources is enabled by a rich trust model of information and recommendation seeking in social networks. Reviews and ratings created on Revyu provide source data for algorithms that generate topic expertise and person-to-person affinity metrics. Combining these metrics, we are implementing a user-oriented application for searching and automated ranking of information sources within social networks.
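    A minimal sketch (not the authors' implementation) of the kind of combination the abstract describes: each network member gets a topic-expertise score and a person-to-person affinity score, and the two are blended into a single ranking. The member names, scores, and weights below are invented for illustration.

```python
# Illustrative sketch only: rank members of a social network as information
# sources on a topic by combining topic expertise with affinity to the seeker.
from dataclasses import dataclass

@dataclass
class Member:
    name: str
    topic_expertise: float     # e.g. derived from Revyu reviews / del.icio.us tags (assumed)
    affinity_to_seeker: float  # e.g. derived from past interactions (assumed)

def trust_score(m: Member, w_expertise: float = 0.6, w_affinity: float = 0.4) -> float:
    """Blend the two topic-specific metrics into a single ranking score."""
    return w_expertise * m.topic_expertise + w_affinity * m.affinity_to_seeker

network = [
    Member("alice", topic_expertise=0.9, affinity_to_seeker=0.3),
    Member("bob",   topic_expertise=0.5, affinity_to_seeker=0.8),
    Member("carol", topic_expertise=0.2, affinity_to_seeker=0.9),
]

# Most trustworthy sources on this topic, best first.
for m in sorted(network, key=trust_score, reverse=True):
    print(f"{m.name}: {trust_score(m):.2f}")
```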

    Closing the loop: assisting archival appraisal and information retrieval in one sweep

    In this article, we examine the similarities between the concept of appraisal, a process that takes place within the archives, and the concept of relevance judgement, a process fundamental to the evaluation of information retrieval systems. More specifically, we revisit selection criteria proposed as a result of archival research and work within the digital curation communities, and compare them to relevance criteria as discussed within the information retrieval literature. We illustrate how closely these criteria relate to each other and discuss how understanding the relationships between these disciplines could form a basis for proposing automated selection for archival processes and for initiating multi-objective learning with respect to information retrieval.
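    As a rough illustration of the idea (not a method from the article), the sketch below scores a record against a handful of shared appraisal/relevance criteria and combines them with a weighted sum; the criteria names and weights are invented.

```python
# Toy multi-criteria score: the same criteria could drive both archival
# selection and relevance ranking. All names and weights are hypothetical.
CRITERIA_WEIGHTS = {
    "uniqueness": 0.3,        # appraisal-style criterion
    "evidential_value": 0.3,  # appraisal-style criterion
    "topicality": 0.25,       # relevance-style criterion
    "recency": 0.15,          # relevance-style criterion
}

def combined_score(record_scores: dict) -> float:
    """Weighted sum over per-criterion scores in [0, 1]."""
    return sum(w * record_scores.get(c, 0.0) for c, w in CRITERIA_WEIGHTS.items())

record = {"uniqueness": 0.8, "evidential_value": 0.6, "topicality": 0.9, "recency": 0.2}
print(f"combined appraisal/relevance score: {combined_score(record):.2f}")
```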

    From Data Fusion to Knowledge Fusion

    The task of data fusion is to identify the true values of data items (e.g., the true date of birth for Tom Cruise) among multiple observed values drawn from different sources (e.g., Web sites) of varying and unknown reliability. A recent survey [LDL+12] has provided a detailed comparison of various fusion methods on Deep Web data. In this paper, we study the applicability and limitations of different fusion techniques on a more challenging problem: knowledge fusion. Knowledge fusion identifies true subject-predicate-object triples extracted by multiple information extractors from multiple information sources. These extractors perform the tasks of entity linkage and schema alignment, thus introducing an additional source of noise that is quite different from that traditionally considered in the data fusion literature, which focuses only on factual errors in the original sources. We adapt state-of-the-art data fusion techniques and apply them to a knowledge base with 1.6B unique knowledge triples extracted by 12 extractors from over 1B Web pages, which is three orders of magnitude larger than the data sets used in previous data fusion papers. We show that data fusion approaches hold great promise for solving the knowledge fusion problem, and suggest interesting research directions through a detailed error analysis of the methods.
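    For readers unfamiliar with data fusion, the following toy sketch shows the general flavour of reliability-weighted truth discovery (it is not the paper's algorithm): per-item value estimates and source reliabilities are refined in alternation. All sources and claims below are made up.

```python
# Toy truth discovery: iteratively estimate source reliability and pick, for
# each data item, the value with the highest reliability-weighted vote.
from collections import defaultdict

# observations[item] = list of (source, claimed_value); all data invented.
observations = {
    ("Tom Cruise", "date_of_birth"): [("site_a", "1962-07-03"),
                                      ("site_b", "1962-07-03"),
                                      ("site_c", "1961-07-03")],
    ("Tom Cruise", "birthplace"):    [("site_a", "Syracuse"),
                                      ("site_c", "New York City")],
}

reliability = defaultdict(lambda: 0.5)  # start every source at 0.5

for _ in range(10):  # a few rounds suffice for this tiny example
    # 1. Pick the current best value per item by weighted vote.
    truth = {}
    for item, claims in observations.items():
        votes = defaultdict(float)
        for source, value in claims:
            votes[value] += reliability[source]
        truth[item] = max(votes, key=votes.get)
    # 2. Re-estimate each source's reliability as its agreement rate.
    agree, total = defaultdict(int), defaultdict(int)
    for item, claims in observations.items():
        for source, value in claims:
            total[source] += 1
            agree[source] += (value == truth[item])
    for source in total:
        reliability[source] = agree[source] / total[source]

print(truth)
print(dict(reliability))
```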

    Quality of Information in Mobile Crowdsensing: Survey and Research Challenges

    Smartphones have become the most pervasive devices in people's lives, and are clearly transforming the way we live and perceive technology. Today's smartphones benefit from almost ubiquitous Internet connectivity and come equipped with a plethora of inexpensive yet powerful embedded sensors, such as the accelerometer, gyroscope, microphone, and camera. This unique combination has enabled revolutionary applications based on the mobile crowdsensing paradigm, such as real-time road traffic monitoring, air and noise pollution monitoring, crime control, and wildlife monitoring, to name just a few. Unlike prior sensing paradigms, humans are now the primary actors of the sensing process, since they are fundamental to retrieving reliable and up-to-date information about the event being monitored. As humans may behave unreliably or maliciously, assessing and guaranteeing Quality of Information (QoI) becomes more important than ever. In this paper, we provide a new framework for defining and enforcing QoI in mobile crowdsensing, and analyze in depth the current state of the art on the topic. We also outline novel research challenges, along with possible directions for future work. Comment: to appear in ACM Transactions on Sensor Networks (TOSN).
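    As a hedged illustration of what a QoI score might look like (not the framework proposed in the paper), the sketch below discounts each crowdsensed report by its contributor's estimated reliability and by its age; the half-life, field names, and values are assumptions.

```python
# Toy QoI score: contributor reliability weighted by an exponential freshness
# decay, then used to aggregate reports. All parameters are assumed.
import math
import time

def qoi(report_time: float, contributor_reliability: float,
        now=None, half_life_s: float = 600.0) -> float:
    """QoI in [0, 1]: reliability weighted by how fresh the report is."""
    now = time.time() if now is None else now
    age = max(0.0, now - report_time)
    freshness = math.exp(-math.log(2) * age / half_life_s)
    return contributor_reliability * freshness

now = time.time()
reports = [
    {"value": 41.2, "t": now - 60,   "rel": 0.9},   # recent, trusted contributor
    {"value": 55.0, "t": now - 3600, "rel": 0.7},   # stale report
    {"value": 80.0, "t": now - 30,   "rel": 0.2},   # possibly unreliable contributor
]

# Aggregate the sensed value as a QoI-weighted average.
weights = [qoi(r["t"], r["rel"], now) for r in reports]
estimate = sum(w * r["value"] for w, r in zip(weights, reports)) / sum(weights)
print(f"QoI-weighted estimate: {estimate:.1f}")
```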

    A Framework for the Analysis and User-Driven Evaluation of Trust on the Semantic Web

    This project will examine the area of trust on the Semantic Web and develop a framework for publishing and verifying trusted Linked Data. Linked Data describes a method of publishing structured data, automatically readable by computers, which can be linked to other heterogeneous data in order to become more useful. Trust plays a significant role in the adoption of new technologies, and even more so in a sphere with such vast amounts of publicly created data. Trust is paramount to the effective sharing and communication of tacit knowledge (Hislop, 2013). Up to now, the area of trust in Linked Data has not been adequately addressed, despite the Semantic Web stack having included a trust layer from the very beginning (Artz and Gil, 2007). Some of the most accurate data on the Semantic Web lies practically unused, while some of the most used Linked Data has a high number of errors (Zaveri et al., 2013). Many of the datasets and links that exist on the Semantic Web are out of date and/or invalid, and this undermines the credibility, validity, and ultimately the trustworthiness of both the dataset and the data provider (Rajabi et al., 2012). This research will examine a number of datasets to determine the quality metrics that a dataset is required to meet to be considered 'trusted'. The key findings will be assessed and utilized in the creation of a learning tool and a framework for creating trusted Linked Data.
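    One concrete quality signal the abstract mentions is out-of-date or invalid links. The following sketch is an assumption about how such a check could look, not part of the project's framework: it tests whether the URIs referenced by a dataset still dereference. The URIs are placeholders.

```python
# Illustrative link-quality check for a Linked Data dataset: count how many
# referenced URIs still answer an HTTP HEAD request successfully.
import urllib.error
import urllib.request

def dereferences(uri: str, timeout: float = 5.0) -> bool:
    """Return True if the URI answers a HEAD request with a non-error status."""
    req = urllib.request.Request(uri, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False

dataset_links = [
    "http://dbpedia.org/resource/Dublin",  # example linked resource
    "http://example.org/ontology/term42",  # placeholder, likely broken
]

ok = sum(dereferences(u) for u in dataset_links)
print(f"dereferenceable links: {ok}/{len(dataset_links)}")
```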

    A Survey of Provenance Leveraged Trust in Wireless Sensor Networks

    A wireless sensor network (WSN) is a collection of self-organized sensor nodes. WSNs face many challenges, such as the lack of centralized network administration, absence of infrastructure, low data transmission capacity, low bandwidth, mobility, lack of connectivity, limited power supply, and dynamic network topology. Due to this vulnerable nature, WSNs need a trust architecture to keep the quality of the network data high over time. In this work, we aim to survey the trust architectures proposed for WSNs. Provenance can play a key role in assessing trust in these architectures; however, little research has leveraged provenance for trust in WSNs. We also aim to point out this gap in the field and to encourage researchers to invest in this topic. To our knowledge, our work is unique: provenance-leveraged trust in WSNs has not been surveyed before. Keywords: Provenance, Trust, Wireless Sensor Networks
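    A minimal sketch of one way provenance can feed a trust score (an assumed model, not taken from any particular surveyed architecture): the trust in a reading is bounded by the least-trusted node on its provenance path. Node names and scores are hypothetical.

```python
# Toy provenance-leveraged trust: a reading's trust is the weakest link along
# the path of nodes that sensed and forwarded it. All values are invented.
node_trust = {       # per-node trust scores, e.g. maintained by the sink (assumed)
    "sensor_7": 0.9,
    "relay_3": 0.8,
    "relay_1": 0.6,
}

def reading_trust(provenance_path: list) -> float:
    """Trust of a reading = minimum trust over the nodes on its path."""
    return min(node_trust.get(n, 0.0) for n in provenance_path)

# A reading sensed at sensor_7 and forwarded through relay_3 and relay_1.
print(reading_trust(["sensor_7", "relay_3", "relay_1"]))  # -> 0.6
```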