
    Academic Panel: Can Self-Managed Systems be trusted?

    Trust can be defined as having confidence or faith in; a form of reliance or certainty based on past experience; allowing without fear; believing, hoping, expecting and wishing; and extending credit to. The issue of trust in computing has always been a hot topic, especially with the proliferation of services over the Internet, which has brought the issues of trust and security right into the ordinary home. Autonomic computing brings its own complexity to this: in systems that self-manage, the internal decision-making process is less transparent, and the ‘intelligence’ may evolve and become less tractable. Such systems may be used for anything from environment monitoring to looking after Granny in the home, so the issue of trust is imperative. To this end, we have organised this panel to examine some of the key aspects of trust. The first section discusses the issues of self-management when applied across organizational boundaries. The second section explores predictability in self-managed systems. The third part examines how trust is manifest in electronic service communities. The final discussion demonstrates how trust can be integrated into an autonomic system as the core intelligence on which to base adaptivity choices.

    Bounded rationality, value systems and time-inconsistency of preferences as rational foundations for the concept of trust

    This paper intends to contribute to the (bounded rationality) foundations of trust. After reviewing the extant definitions, I establish the formal structure of situations involving trust. In that context, I examine the paradoxical situation of (calculative) trust in simple settings. Then I show how bounded rationality provides a rationale for a concept of trust that goes beyond that calculative notion. Value systems and possible inconsistency of time preferences are shown to be crucial elements.
    Keywords: Trust; Bounded rationality; Value systems; Behavioral decision-making
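The paradox of purely calculative trust can be sketched with a one-shot trust game: if the trustee reasons only from payoffs, abuse dominates, so the trustor never extends trust in the first place. The sketch below is a minimal illustration; the payoff values and function names are assumptions for exposition, not taken from the paper.

```python
# Hypothetical one-shot trust game (payoffs are illustrative assumptions).
# The trustor chooses "trust" or "withhold"; if trusted, the trustee
# then chooses "honor" or "abuse". Payoffs are (trustor, trustee).
PAYOFFS = {
    ("withhold", None):  (1, 1),  # no interaction: safe outside option
    ("trust", "honor"):  (2, 2),  # mutual gain from cooperation
    ("trust", "abuse"):  (0, 3),  # trustee exploits the trustor
}

def trustee_best_response() -> str:
    # A purely calculative trustee compares only its own payoffs.
    honor = PAYOFFS[("trust", "honor")][1]
    abuse = PAYOFFS[("trust", "abuse")][1]
    return "honor" if honor >= abuse else "abuse"

def trustor_choice() -> str:
    # Anticipating the trustee's best response, calculative trust unravels.
    response = trustee_best_response()
    if PAYOFFS[("trust", response)][0] > PAYOFFS[("withhold", None)][0]:
        return "trust"
    return "withhold"

print(trustee_best_response(), trustor_choice())  # abuse withhold
```

Backward induction yields "abuse" and therefore "withhold", even though mutual trust would leave both players better off; this is the gap that the paper's bounded-rationality account (value systems, time-inconsistent preferences) is meant to fill.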

    Beyond the Hype: On Using Blockchains in Trust Management for Authentication

    Trust Management (TM) systems for authentication are vital to the security of online interactions, which are ubiquitous in our everyday lives. Various systems, like the Web PKI (X.509) and PGP's Web of Trust, are used to manage trust in this setting. In recent years, blockchain technology has been introduced as a panacea to our security problems, including that of authentication, without sufficient reasoning as to its merits. In this work, we investigate the merits of using open distributed ledgers (ODLs), such as the one implemented by blockchain technology, for securing TM systems for authentication. We formally model such systems, and explore how blockchain can help mitigate attacks against them. After formal argumentation, we conclude that in the context of Trust Management for authentication, blockchain technology, and ODLs in general, can offer considerable advantages compared to previous approaches. Our analysis is, to the best of our knowledge, the first to formally model and argue about the security of TM systems for authentication based on blockchain technology. To achieve this result, we first provide an abstract model for TM systems for authentication. Then, we show how this model can be conceptually encoded in a blockchain, by expressing it as a series of state transitions. As a next step, we examine five prevalent attacks on TM systems, and provide evidence that blockchain-based solutions can be beneficial to the security of such systems, by mitigating, or completely negating, such attacks.
    Comment: A version of this paper was published in IEEE Trustcom. http://ieeexplore.ieee.org/document/8029486
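The idea of encoding a TM system as a series of state transitions on an append-only log can be sketched as follows. This is a conceptual illustration under my own assumptions (the operation names and data shapes are not from the paper): any verifier that replays the same log reaches the same trust state, which is the property an ODL provides.

```python
# Minimal sketch: trust state derived by replaying an append-only log
# of transitions, as a blockchain-backed TM system might. All names
# here are illustrative assumptions, not the paper's formal model.
from dataclasses import dataclass, field

@dataclass
class TrustState:
    # identity -> public key bindings currently considered valid
    bindings: dict = field(default_factory=dict)

@dataclass(frozen=True)
class Transition:
    op: str        # "bind" or "revoke"
    identity: str
    key: str

def apply(state: TrustState, t: Transition) -> TrustState:
    """Pure state-transition function: returns a new state."""
    new = TrustState(dict(state.bindings))
    if t.op == "bind":
        new.bindings[t.identity] = t.key
    elif t.op == "revoke":
        new.bindings.pop(t.identity, None)
    return new

def replay(log) -> TrustState:
    """Every honest verifier replaying the same log computes the same state."""
    state = TrustState()
    for t in log:
        state = apply(state, t)
    return state

log = [Transition("bind", "alice", "pk1"),
       Transition("bind", "bob", "pk2"),
       Transition("revoke", "alice", "pk1")]
print(replay(log).bindings)  # {'bob': 'pk2'}
```

Because the ledger fixes the order and content of transitions, attacks that rely on showing different victims different trust states (e.g. equivocation by an authority) become detectable, which is the intuition behind the mitigations the paper argues for.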

    The onus on us? Stage one in developing an i-Trust model for our users.

    This article describes a Joint Information Systems Committee (JISC)-funded project, conducted by a cross-disciplinary team, examining trust in information resources in the web environment, employing a literature review and an online Delphi study with follow-up community consultation. The project aimed to explain how users assess or assert trust in their use of resources in the web environment; to examine how perceptions of trust influence the behavior of information users; and to consider whether ways of asserting trust in information resources could assist the development of information literacy. A trust model was developed from the analysis of the literature and discussed in the consultation. Elements comprising the i-Trust model include external factors, internal factors, and the user's cognitive state. This article gives a brief overview of the JISC-funded project, which has now produced the i-Trust model (Pickard et al. 2010), and focuses on issues of particular relevance for information providers and practitioners.

    IT Project Management from a Systems Thinking Perspective: A Position Paper

    We propose a Systems Thinking approach to the study of IT project management and show how this approach helps project managers in controlling their projects. To illustrate our proposal, we present an example model of the dynamics of IT outsourcing projects. The example model explains these dynamics in terms of feedback loops consisting of causal relations reported in the literature. The model provides insight into how coordination, trust, information exchange and possibilities for opportunistic behaviour influence each other and together influence delivery quality, which in turn influences trust. The integration of these insights provided by applying the Systems Thinking perspective helps project managers to reason about how their choices influence project outcome. The Systems Thinking perspective can serve as an additional tool in the academic study of IT project management. Applying the Systems Thinking perspective also calls for additional research in which this perspective is itself the object of study.
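The kind of feedback loop the abstract describes (trust raises information exchange, which raises delivery quality, which feeds back into trust) can be sketched as a toy system-dynamics simulation. The coefficients and functional forms below are my own illustrative assumptions, not the authors' model.

```python
# Toy reinforcing feedback loop (all coefficients are assumptions):
# trust -> information exchange -> delivery quality -> trust.
def simulate(steps: int = 20, trust: float = 0.3) -> list:
    history = [trust]
    for _ in range(steps):
        info_exchange = 0.5 + 0.5 * trust    # more trust, more sharing
        quality = 0.4 + 0.6 * info_exchange  # sharing improves delivery quality
        trust = 0.8 * trust + 0.2 * quality  # quality reinforces trust (with inertia)
        history.append(round(trust, 3))
    return history

print(simulate())
```

With these (assumed) coefficients the loop is reinforcing: trust climbs from its initial value toward a stable high level. Flipping a sign (e.g. letting opportunistic behaviour erode quality) would instead model the vicious cycle a project manager wants to avoid, which is the kind of reasoning the Systems Thinking perspective supports.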

    Trust-based model for privacy control in context aware systems

    In context-aware systems, there is a high demand for privacy solutions for users when they are interacting and exchanging personal information. Privacy in this context encompasses reasoning about the trust and risk involved in interactions between users. Trust, therefore, controls the amount of information that can be revealed, and risk analysis allows us to evaluate the expected benefit that would motivate users to participate in these interactions. In this paper, we propose a trust-based model for privacy control in context-aware systems that incorporates both trust and risk. Through this approach, it becomes clear how to reason about trust and risk when designing and implementing context-aware systems that provide mechanisms to protect users' privacy. Our approach also includes experiential learning mechanisms that use past observations to reach better decisions in future interactions. The model outlined in this paper serves as an attempt to address the concerns of privacy control in context-aware systems. To validate this model, we are currently applying it to a context-aware system that tracks users' location. We hope to report on the performance evaluation and the experience of implementation in the near future.
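The core idea that trust controls how much is revealed, weighed against risk, can be sketched as a simple disclosure rule. The scoring function, item names, and thresholds below are hypothetical illustrations, not the paper's model.

```python
# Hypothetical sketch: reveal an item only when the trust-weighted
# benefit outweighs the distrust-weighted risk of disclosure.
# All names and numbers are illustrative assumptions.
def expected_benefit(benefit: float, risk: float, trust: float) -> float:
    """Benefit scaled by trust, minus risk scaled by distrust."""
    return trust * benefit - (1.0 - trust) * risk

def items_to_reveal(items: dict, trust: float) -> list:
    """Items is a mapping: name -> (benefit, risk), each in [0, 1]."""
    return [name for name, (benefit, risk) in items.items()
            if expected_benefit(benefit, risk, trust) > 0]

# Location data at three granularities, with rising benefit and risk.
items = {"city": (0.6, 0.2), "street": (0.5, 0.6), "gps": (0.9, 1.0)}
print(items_to_reveal(items, trust=0.4))  # ['city']
print(items_to_reveal(items, trust=0.8))  # ['city', 'street', 'gps']
```

The sketch shows the qualitative behaviour the abstract describes: at low trust only coarse, low-risk information is disclosed, while at high trust the full set becomes worth revealing. An experiential-learning component, as the paper proposes, would adjust the trust value from the observed outcomes of past interactions.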