
    Trust in social machines: the challenges

    The World Wide Web has ushered in a new generation of applications constructively linking people and computers to create what have been called ‘social machines.’ The ‘components’ of these machines are people and technologies. It has long been recognised that for people to participate in social machines, they have to trust the processes. However, the notions of trust often used tend to be imported from agent-based computing, and may be too formal, objective and selective to describe human trust accurately. This paper applies a theory of human trust to social machines research, and sets out some of the challenges to system designers.

    Users' trust in information resources in the Web environment: a status report

    This study has three aims: to provide an overview of the ways in which trust is either assessed or asserted in relation to the use and provision of resources in the Web environment for research and learning; to assess which solutions might be worth further investigation and whether establishing ways to assert trust in academic information resources could assist the development of information literacy; and to help increase understanding of how perceptions of trust influence the behaviour of information users.

    The relationship of (perceived) epistemic cognition to interaction with resources on the internet

    Information seeking and processing are key literacy practices. However, they are activities that students, across a range of ages, struggle with. These information seeking processes can be viewed through the lens of epistemic cognition: beliefs regarding the source, justification, complexity, and certainty of knowledge. In the research reported in this article we build on established research in this area, which has typically used self-report psychometric and behavior data, and information seeking tasks involving closed-document sets. We take a novel approach in applying established self-report measures to a large-scale, naturalistic study environment, pointing to the potential of analysing dialogue, web-navigation (including sites visited) and other trace data to support more traditional self-report mechanisms. Our analysis suggests that the relationships between self-report indicators demonstrated in prior work are not paralleled in the hypothesized relationships between self-report and trace indicators. However, there are clear epistemic features in this trace data. The article thus demonstrates the potential of behavioral learning analytics data for understanding how epistemic cognition is brought to bear in rich information seeking and processing tasks.
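
    As a minimal illustration of the kind of analysis this abstract points to, relating a self-report measure to a web-navigation trace indicator, the sketch below correlates a hypothetical epistemic-cognition scale score with the number of distinct sites visited during a task. The variable names and data are invented for illustration and are not taken from the study.

```python
from statistics import mean, stdev

# Hypothetical data: one value per student.
self_report = [3.2, 4.1, 2.8, 3.9, 4.5, 3.0]  # e.g. score on a certainty-of-knowledge scale
sites_visited = [4, 9, 3, 7, 11, 5]           # distinct sites visited during the task (trace indicator)


def pearson(x: list[float], y: list[float]) -> float:
    """Sample Pearson correlation between a self-report measure and a trace indicator."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))


print(f"r = {pearson(self_report, sites_visited):.2f}")  # strongly positive for this toy data
```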

    The onus on us? Stage one in developing an i-Trust model for our users.

    This article describes a Joint Information Systems Committee (JISC)-funded project, conducted by a cross-disciplinary team, examining trust in information resources in the web environment by means of a literature review and an online Delphi study with follow-up community consultation. The project aimed to explain how users assess or assert trust in their use of resources in the web environment; to examine how perceptions of trust influence the behavior of information users; and to consider whether ways of asserting trust in information resources could assist the development of information literacy. A trust model was developed from the analysis of the literature and discussed in the consultation. The elements comprising the i-Trust model include external factors, internal factors, and the user's cognitive state. This article gives a brief overview of the JISC-funded project, which has now produced the i-Trust model (Pickard et al. 2010), and focuses on issues of particular relevance for information providers and practitioners.

    A collective intelligence approach for building student's trustworthiness profile in online learning

    Information and communication technologies have been widely adopted in most educational institutions to support e-Learning through different learning methodologies, such as computer supported collaborative learning, which has become one of the most influential learning paradigms. In this context, e-Learning stakeholders are increasingly demanding new requirements; among them, information security is considered a critical factor in on-line collaborative processes. Information security underpins the sound conduct of learning activities, especially when a group of students carries out an on-line assessment that leads to grades or certificates; in these cases, information security is an essential issue that has to be considered. To date, even the most advanced technological security solutions have drawbacks that impede the development of comprehensive e-Learning security frameworks. For this reason, this paper suggests enhancing technological security models with functional approaches; namely, we propose a functional security model based on trustworthiness and collective intelligence. Both of these topics are closely related to on-line collaborative learning and on-line assessment models. Therefore, the main goal of this paper is to discover how security can be enhanced with trustworthiness in an on-line collaborative learning scenario through the study of the collective intelligence processes that occur in on-line assessment activities. To this end, a peer-to-peer public student profile model based on trustworthiness is proposed, and the main collective intelligence processes involved in collaborative on-line assessment activities are presented.
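
    The abstract does not spell out how such a peer-to-peer trustworthiness profile would be structured, so the following is only an illustrative sketch: the class names, fields, and the trust-weighted aggregation rule are assumptions for illustration, not the authors' model.

```python
from dataclasses import dataclass, field


@dataclass
class PeerRating:
    """One peer's judgement of a student's contribution in an on-line assessment activity."""
    assessor_id: str
    score: float           # rating of the work, normalised to [0, 1]
    assessor_trust: float  # current trust level of the assessor, also in [0, 1]


@dataclass
class TrustworthinessProfile:
    """Public, peer-built trustworthiness profile for one student (illustrative only)."""
    student_id: str
    ratings: list[PeerRating] = field(default_factory=list)

    def add_rating(self, rating: PeerRating) -> None:
        self.ratings.append(rating)

    def trust_score(self) -> float:
        """Weight each rating by the assessor's own trust, so judgements from
        trusted peers count for more (a simple collective-intelligence rule)."""
        if not self.ratings:
            return 0.5  # neutral prior when no evidence is available yet
        weighted = sum(r.score * r.assessor_trust for r in self.ratings)
        total_weight = sum(r.assessor_trust for r in self.ratings)
        return weighted / total_weight if total_weight else 0.5


# Example: two peers rate a submission; the more trusted peer's view dominates.
profile = TrustworthinessProfile(student_id="s-001")
profile.add_rating(PeerRating(assessor_id="s-002", score=0.9, assessor_trust=0.8))
profile.add_rating(PeerRating(assessor_id="s-003", score=0.4, assessor_trust=0.2))
print(round(profile.trust_score(), 2))  # 0.8
```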

    The mechanics of trust: a framework for research and design

    With an increasing number of technologies supporting transactions over distance and replacing traditional forms of interaction, designing for trust in mediated interactions has become a key concern for researchers in human-computer interaction (HCI). While much of this research focuses on increasing users’ trust, we present a framework that shifts the perspective towards factors that support trustworthy behavior. In a second step, we analyze how the presence of these factors can be signalled. We argue that it is essential to take a systemic perspective for enabling well-placed trust and trustworthy behavior in the long term. For our analysis we draw on relevant research from sociology, economics, and psychology, as well as HCI. We identify contextual properties (motivation based on temporal, social, and institutional embeddedness) and the actor's intrinsic properties (ability, and motivation based on internalized norms and benevolence) that form the basis of trustworthy behavior. Our analysis provides a frame of reference for the design of studies on trust in technology-mediated interactions, as well as a guide for identifying trust requirements in design processes. We demonstrate the application of the framework in three scenarios: call centre interactions, B2C e-commerce, and voice-enabled on-line gaming.
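
    As one rough way of turning this framework into a design-time checklist, the sketch below records which trust-warranting properties an interaction scenario signals. The property names follow the abstract; the class itself and its use are assumptions for illustration, not tooling from the paper.

```python
from dataclasses import dataclass, fields


@dataclass
class TrustWarrantingProperties:
    """Checklist of trust-warranting properties for a mediated interaction (sketch only)."""
    # Contextual properties: motivation grounded in the setting of the interaction
    temporal_embeddedness: bool = False       # expectation of repeated future interactions
    social_embeddedness: bool = False         # reputation travels through a community
    institutional_embeddedness: bool = False  # contracts, escrow, regulation
    # Intrinsic properties of the trusted actor
    ability: bool = False                     # competence to fulfil what was promised
    internalized_norms: bool = False          # honesty, professional standards
    benevolence: bool = False                 # goodwill towards the trustor

    def missing(self) -> list[str]:
        """Return the properties this design does not yet signal."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]


# Example: a B2C e-commerce checkout with seller ratings and escrow, but an unknown first-time seller.
checkout = TrustWarrantingProperties(social_embeddedness=True, institutional_embeddedness=True)
print(checkout.missing())
# ['temporal_embeddedness', 'ability', 'internalized_norms', 'benevolence']
```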

    On the emergent Semantic Web and overlooked issues

    The emergent Semantic Web, despite being in its infancy, has already received a lot of attention from academia and industry. This has resulted in an abundance of prototype systems and discussion, most of which are centred around the underlying infrastructure. However, when we critically review the work done to date we realise that there is little discussion with respect to the vision of the Semantic Web. In particular, there is an observed dearth of discussion on how to deliver knowledge sharing in an environment such as the Semantic Web in an effective and efficient manner. There are many overlooked issues, ranging from agents and trust to hidden assumptions made with respect to knowledge representation and robust reasoning in a distributed environment. These issues could potentially hinder further development if not considered at the early stages of designing Semantic Web systems. In this perspectives paper, we aim to help engineers and practitioners of the Semantic Web by raising awareness of these issues.