    Trustee: A Trust Management System for Fog-enabled Cyber Physical Systems

    In this paper, we propose a lightweight trust management system (TMS) for fog-enabled cyber physical systems (Fog-CPS). Trust computation is based on multi-factor and multi-dimensional parameters and is formulated as a statistical regression problem, which we solve with a random forest regression model. Additionally, because Fog-CPS systems may be deployed in open and unprotected environments, the CPS devices and fog nodes are vulnerable to numerous attacks, namely collusion, self-promotion, badmouthing, ballot-stuffing, and opportunistic service attacks. Compromised entities can degrade the accuracy of the trust computation model by inflating or deflating the trust of other nodes. We address these challenges by designing a generic trust credibility model that counters the compromise of both CPS devices and fog nodes. The credibility of each newly computed trust value is evaluated and subsequently adjusted by correlating it with a standard deviation threshold. The standard deviation is quantified by computing trust in two configurations of hostile environments and comparing it with the trust value in a legitimate/normal environment. Our results demonstrate that the credibility model successfully counters the malicious behaviour of all Fog-CPS entities, i.e. CPS devices and fog nodes. The multi-factor trust assessment and credibility evaluation enable accurate and precise trust computation and guarantee a dependable Fog-CPS system.
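
    As a rough illustration of the approach described in the abstract, the sketch below trains a random forest regressor on multi-factor trust evidence and damps any update that deviates from the previous trust value by more than a standard-deviation-style threshold. The feature names, synthetic data, threshold, and damping rule are all illustrative assumptions, not the paper's actual design.

        # Minimal sketch: regression-based trust with a credibility check.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)

        # Hypothetical multi-factor evidence per interaction:
        # [packet_delivery_ratio, response_latency, cooperation, uptime]
        X_train = rng.random((500, 4))
        # Synthetic trust labels for demonstration only.
        y_train = X_train.mean(axis=1) + rng.normal(0, 0.05, 500)

        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(X_train, y_train)

        def credible_trust(features, prev_trust, sigma_threshold=0.15):
            """Predict a trust value; if it jumps by more than the
            threshold (derived offline by comparing hostile and normal
            environments), damp the update instead of accepting it."""
            new_trust = float(model.predict(features.reshape(1, -1))[0])
            if abs(new_trust - prev_trust) > sigma_threshold:
                # Suspected manipulation (e.g. ballot-stuffing or
                # badmouthing): move only partway toward the new value.
                return prev_trust + 0.5 * (new_trust - prev_trust)
            return new_trust

        print(credible_trust(rng.random(4), prev_trust=0.6))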

    The mechanics of trust: a framework for research and design

    With an increasing number of technologies supporting transactions over distance and replacing traditional forms of interaction, designing for trust in mediated interactions has become a key concern for researchers in human-computer interaction (HCI). While much of this research focuses on increasing users' trust, we present a framework that shifts the perspective towards factors that support trustworthy behavior. In a second step, we analyze how the presence of these factors can be signalled. We argue that it is essential to take a systemic perspective to enable well-placed trust and trustworthy behavior in the long term. For our analysis we draw on relevant research from sociology, economics, and psychology, as well as HCI. We identify contextual properties (motivation based on temporal, social, and institutional embeddedness) and the actor's intrinsic properties (ability, and motivation based on internalized norms and benevolence) that form the basis of trustworthy behavior. Our analysis provides a frame of reference for the design of studies on trust in technology-mediated interactions, as well as a guide for identifying trust requirements in design processes. We demonstrate the application of the framework in three scenarios: call centre interactions, B2C e-commerce, and voice-enabled online gaming.

    Designing for Appropriate Trust in Automated Vehicles

    Automated vehicles (AVs) have become a popular area of research due to, among other things, claims of increased traffic safety and user comfort. However, before users can reap these benefits, they must first trust the AV. Trust in AVs has gained greater interest in recent years because it is a prerequisite for user acceptance and adoption, as well as important for a good user experience. The goal, however, is not simply to create trust in AVs, but to create an appropriate level of trust in relation to the actual performance of the AV. Little research has presented a systematic and holistic approach that can assist developers in the design process to understand what to focus on, and how, when developing AVs that help users form an appropriate level of trust.

    This thesis presents two mixed-method studies (Study I and II). The first study examines which factors affect users' trust in the AV and is primarily based on a literature review as well as a complementary user study. The second study, a user study, builds on Study I and uses a Wizard of Oz (WOz) approach to understand how the behaviour of an AV affects users' trust in a simulated but realistic context, including seven day-to-day traffic situations.

    The results show that trust is primarily affected by information from and about the AV. Furthermore, trust in AVs has four main phases: (i) before the user's first physical interaction with the AV, (ii) during usage and whilst learning how the AV performs, (iii) after the user has learned how the AV performs in a specific context, and (iv) after the user has learned how the AV performs in a specific context but that context changes. Driving behaviour was also found to affect the user's trust during usage and whilst learning how the AV performs, primarily through how well the driving behaviour communicated intentions that allowed users to predict upcoming AV actions. Users were further affected by the perceived benevolence of the AV, that is, how respectful the driving behaviour was interpreted to be. Finally, the results showed that the user's trust in the AV is also affected by aspects of different traffic situations, such as perceived task difficulty, perceived risk to oneself (and others), and how well the AV conformed to the user's expectations. Thus, what matters is not only how the AV performs but how it performs in relation to different traffic situations.

    Since design research considers not only how things are but also how things ought to be, a tentative explanatory and prescriptive model was developed from these results. The model of trust information exchange and gestalt explains how information affecting user trust travels from a trust information sender to a trust information receiver, and highlights the aspects developers should consider when designing for appropriate trust in AVs, such as the design space and related variables. The design variables are a) the message (the type and amount of information), b) the artefact (the AV, including communication channels and properties), and c) the information gestalt, which is based on the combination of signals communicated by the artefact's properties and communication channels. The gestalt is what the user ultimately perceives: the combined result of all signals. Developers therefore need to consider not only how individual signals are perceived and interpreted, but also how different signals are perceived and interpreted together, as a whole, as an information gestalt.
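
    The three design variables named above (message, artefact, information gestalt) could be loosely encoded as data structures; the sketch below is one hypothetical reading, with all class and field names assumed for illustration, since the thesis does not specify a formal schema.

        # Illustrative encoding of the design variables as plain data.
        from dataclasses import dataclass, field

        @dataclass
        class Message:            # (a) type and amount of information
            content: str
            modality: str         # e.g. "visual", "auditory", "motion"

        @dataclass
        class Artefact:           # (b) the AV's channels and properties
            channels: list[str]
            driving_style: str    # e.g. "defensive", "assertive"

        @dataclass
        class Gestalt:            # (c) the combined impression perceived
            signals: list[Message] = field(default_factory=list)

            def combined(self) -> str:
                # The user judges the whole, not individual signals.
                return " + ".join(f"{m.modality}:{m.content}"
                                  for m in self.signals)

        g = Gestalt([Message("slowing for pedestrian", "motion"),
                     Message("pedestrian detected", "visual")])
        print(g.combined())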

    Trust-based model for privacy control in context aware systems

    In context-aware systems, there is a high demand for privacy solutions when users interact and exchange personal information. Privacy in this context encompasses reasoning about the trust and risk involved in interactions between users. Trust, therefore, controls the amount of information that can be revealed, and risk analysis allows us to evaluate the expected benefit that would motivate users to participate in these interactions. In this paper, we propose a trust-based model for privacy control in context-aware systems that incorporates both trust and risk. This approach clarifies how to reason about trust and risk when designing and implementing context-aware systems that provide mechanisms to protect users' privacy. Our approach also includes experiential learning mechanisms that use past observations to reach better decisions in future interactions. The model outlined in this paper is an attempt to address the concerns of privacy control in context-aware systems. To validate the model, we are currently applying it to a context-aware system that tracks users' location. We hope to report on the performance evaluation and implementation experience in the near future.
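
    One way to make the trust/risk reasoning above concrete is a small expected-benefit rule combined with experiential trust updating. The sketch below is a minimal illustration under assumed linear utilities and update constants; it is not the authors' actual model.

        # Minimal sketch: trust/risk-gated disclosure with learning.

        def expected_benefit(trust, risk, gain_if_honest, loss_if_misused):
            """Expected utility of revealing information to a peer, with
            trust as a rough P(honest) and risk scaling the exposure."""
            return (trust * gain_if_honest
                    - (1 - trust) * risk * loss_if_misused)

        def should_reveal(trust, risk, gain_if_honest=1.0,
                          loss_if_misused=1.0):
            return expected_benefit(trust, risk, gain_if_honest,
                                    loss_if_misused) > 0

        def update_trust(trust, outcome_good, rate=0.1):
            """Experiential learning: nudge trust toward 1 after a good
            interaction and toward 0 after a bad one."""
            target = 1.0 if outcome_good else 0.0
            return trust + rate * (target - trust)

        trust = 0.5
        print(should_reveal(trust, risk=0.8))   # True: benefit 0.1 > 0
        trust = update_trust(trust, outcome_good=True)
        print(trust)                            # 0.55 after good outcome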

    Trust, distrust and the paradox of democracy

    According to the three-dimensional theory of trust which the author develops in his recent work, the measure of trust that people vest in their fellow citizens or institutions depends on three factors: the reflected trustworthiness as estimated by themselves in a more or less rational manner, the attitude of basic trustfulness deriving from socialization, and the culture of trust pervading their society and normatively constraining each member. The culture of trust is shaped by a society's historical experiences (the tradition of trust) and by the current structural context (the trust-inspiring milieu). The author presents a model of a structural context conducive to the emergence of the culture of trust, and then argues that democratic organization contributes to trust-generating conditions such as normative certainty, transparency, stability, and accountability. The mechanism of this influence is found to be doubly paradoxical. First, democracy breeds the culture of trust by institutionalizing distrust at many levels of democratic organization. Second, the strongest influence of democracy on the culture of trust may be expected when institutionalized distrust remains only a potential insurance of trustworthiness, a resource used sparingly and only when significant breaches of trust appear. Of the three components in the three-dimensional model of trust, the cultural dimension is the most susceptible to practical, political measures. And the most promising method to elicit the culture of trust is designing democratic institutions and safeguarding their viable functioning.

    Trust, Reciprocity and Institutional Design: Lessons from Behavioural Economics

    Trust and reciprocity are the bond of society (Locke), yet economic agents are assumed to be both self-interested and intrinsically untrustworthy. These assumptions severely impair economists' accounts of social relationships. The paper examines strategies to escape this paradox by enlarging our conception of rationality: the assumptions of self-interest and consequentialism are critically discussed, as are relational behavioural principles (e.g. trust and reciprocity). The implications of this enlarged kind of rationality are particularly important for agency theory. Within this framework, the paper analyses the working of two different kinds of incentive mechanisms, namely intra-personal and interpersonal, and discusses experimental results that emphasise the empirical relevance of the latter. Besides providing a more descriptively adequate picture of agency, such mechanisms have important normative implications. In this respect, some of the conditions that affect the accumulation and erosion of trust and social capital are explored. The tension between rules and trust turns out not to be inescapable, though it calls for a change in the design logic of institutions and contracts. I discuss what changes are needed in order to implement a trust-enhancing activity of institutional design.

    Keywords: incentives; reciprocity; trust; crowding-out; institutional design

    Can Intelligent Environments be Trustworthy? Designing for trust in Scientific Communication: Concept of Habitable Interfaces (Position Paper)

    In this paper we propose the concept of Habitable Interfaces. The concept builds on values-based trust. We hypothesise that representations and interactions built on scientific metaphors and concepts, and organised according to the knowledge within a particular scientific domain, will enable values-based trust. Habitable interfaces may provide better information exchange within scientific communities.