
    A Trust Model Based on Cloud Model and Bayesian Networks

    The Internet has become the most important infrastructure for distributed applications composed of online services. In such an open and dynamic environment, service selection becomes a challenge. Approaches based on subjective trust models are more adaptive and efficient than traditional binary-logic-based approaches. Most well-known trust models use probability theory or fuzzy set theory to capture randomness or fuzziness, respectively; only cloud-model-based approaches account for both aspects of uncertainty. Although the cloud model is well suited to representing trust degrees, it is not efficient for context-aware trust evaluation and dynamic updates. By contrast, a Bayesian network, as an uncertain reasoning tool, is more efficient for dynamic trust evaluation. This paper proposes an uncertain trust model that combines the cloud model and Bayesian networks.
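
    The abstract gives no equations, but the standard normal cloud model describes a qualitative concept by an expectation Ex, entropy En, and hyper-entropy He, so a trust degree can be rendered as a set of "cloud drops". The sketch below illustrates that representation, with a conjugate Beta update standing in for the paper's Bayesian-network machinery; the function names and the Beta choice are our assumptions, not the paper's method.

        import numpy as np

        def trust_cloud_drops(ex, en, he, n=1000, rng=None):
            """Sample cloud drops for a trust degree in the normal cloud model:
            Ex = expectation, En = entropy, He = hyper-entropy."""
            rng = rng or np.random.default_rng(0)
            sigma = np.maximum(np.abs(rng.normal(en, he, n)), 1e-9)  # per-drop entropy
            x = rng.normal(ex, sigma)                                # drop positions
            mu = np.exp(-(x - ex) ** 2 / (2 * sigma ** 2))           # certainty degrees
            return x, mu

        def update_trust_beta(successes, failures):
            """Minimal Bayesian evidence update: mean of a Beta posterior
            over the probability of a satisfactory interaction."""
            return (successes + 1) / (successes + failures + 2)

    For example, trust_cloud_drops(0.8, 0.05, 0.01) yields drops concentrated around a trust degree of 0.8, with the spread (En) capturing fuzziness and the variation of that spread (He) capturing its randomness.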

    Human-Robot Trust Integrated Task Allocation and Symbolic Motion Planning for Heterogeneous Multi-Robot Systems

    This paper presents a human-robot-trust-integrated task allocation and motion planning framework for multi-robot systems (MRS) performing a set of tasks concurrently. A set of parallel task specifications is combined with the MRS model to synthesize a task allocation automaton, and each transition of the automaton is associated with the total trust the human places in the corresponding robots. The human-robot trust model is constructed as a dynamic Bayesian network (DBN) that accounts for individual robot performance, a safety coefficient, human cognitive workload, and an overall evaluation of the task allocation. A task allocation path with maximum encoded human-robot trust can then be searched for, based on the current trust value of each robot in the automaton. Symbolic motion planning (SMP) is carried out for each robot once it obtains its sequence of actions, and the task allocation path can be intermittently updated using the DBN-based trust model. The overall strategy is demonstrated in a simulation with 5 robots and 3 parallel subtask automata.
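
    The paper's DBN structure is not given in the abstract; as a rough, hypothetical illustration, a single time slice can be written as a weighted combination of the previous trust value and the stated inputs (robot performance, safety coefficient, cognitive workload). The weights, ranges, and function names below are illustrative assumptions only.

        import numpy as np

        def dbn_trust_step(prev_trust, performance, safety, workload,
                           w=(0.6, 0.25, 0.1, 0.05), noise_std=0.02, rng=None):
            """One time slice of a simple DBN-style trust update.
            Inputs are assumed to lie in [0, 1]; higher workload lowers trust."""
            rng = rng or np.random.default_rng(0)
            a, b, c, d = w
            t = (a * prev_trust + b * performance + c * safety
                 + d * (1.0 - workload) + rng.normal(0.0, noise_std))
            return float(np.clip(t, 0.0, 1.0))

        def allocation_trust(trusts, assigned):
            """Total trust of a candidate allocation: the sum of the assigned
            robots' current trust values, used to weight automaton transitions."""
            return sum(trusts[r] for r in assigned)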

    Quantitative Measures of Regret and Trust in Human-Robot Collaboration Systems

    Human-robot collaboration (HRC) systems integrate the strengths of humans and robots to improve joint system performance. This thesis focuses on social human-robot interaction (sHRI) factors, in particular regret and trust. Humans experience regret during decision-making under uncertainty when they feel that a better result could have been obtained had they chosen differently. A framework to quantitatively measure regret is proposed in this thesis, and quantitative regret analysis is embedded into Bayesian sequential decision-making (BSD) algorithms for HRC shared-vision tasks in both domain search and assembly. The BSD method has been used for robot decision-making, but it has been shown to differ markedly from human decision-making patterns. Regret theory, in contrast, qualitatively models humans' rational decision-making behavior under uncertainty, and it has been shown that a team's joint performance improves if all members share the same decision-making logic. Trust plays a critical role in determining the level of a human's acceptance, and hence utilization, of a robot. A dynamic-network-based trust model combined with a time-series trust model is first implemented in a multi-robot motion planning task with a human in the loop. In this model, however, the trust estimate for each robot is independent, which fails to capture the correlated trust that arises in multi-robot collaboration. To address this issue, the model is extended to interdependent multi-robot dynamic Bayesian networks.
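
    The thesis abstract does not state its regret measure; in textbook regret theory, experienced regret grows with the utility shortfall of the chosen action relative to the best foregone alternative, often through a convex regret function. A minimal sketch under those textbook assumptions (the exponential form and coefficient k are illustrative, not from the thesis):

        import math

        def experienced_regret(u_chosen, u_best_alternative, k=1.0):
            """Convex regret penalty on the utility shortfall of the chosen action."""
            shortfall = max(0.0, u_best_alternative - u_chosen)
            return math.exp(k * shortfall) - 1.0

        def anticipated_regret(u_chosen, u_best, probs, k=1.0):
            """Expected regret over world states, given per-state utilities
            for the chosen action and for the best alternative."""
            return sum(p * experienced_regret(uc, ub, k)
                       for p, uc, ub in zip(probs, u_chosen, u_best))

    An agent that subtracts anticipated_regret from expected utility before deciding mimics the human decision pattern the thesis builds on, which is the motivation for embedding regret into the BSD loop.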

    An efficient and versatile approach to trust and reputation using hierarchical Bayesian modelling

    In many dynamic open systems, autonomous agents must interact with one another to achieve their goals. Such agents may be self-interested and, when trusted to perform an action, may betray that trust by not performing the action as required. Due to the scale and dynamism of these systems, agents will often need to interact with other agents with which they have little or no past experience. Each agent must therefore be capable of assessing and identifying reliable interaction partners, even if it has no personal experience with them. To this end, we present HABIT, a Hierarchical And Bayesian Inferred Trust model for assessing how much an agent should trust its peers based on direct and third-party information. This model is robust in environments in which third-party information is malicious, noisy, or otherwise inaccurate. Although existing approaches claim to achieve this, most rely on heuristics with little theoretical foundation. In contrast, HABIT is based exclusively on principled statistical techniques: it can cope with multiple discrete or continuous aspects of trustee behaviour; it does not restrict agents to using a single shared representation of behaviour; it can improve assessment by using any observed correlation between the behaviour of similar trustees or information sources; and it provides a pragmatic solution to the whitewasher problem (in which unreliable agents assume a new identity to avoid a bad reputation). In this paper, we describe the theoretical aspects of HABIT, and present experimental results that demonstrate its ability to predict agent behaviour in both a simulated environment and one based on data from a real-world webserver domain. In particular, these experiments show that HABIT can predict trustee performance based on multiple representations of behaviour, and is up to twice as accurate as BLADE, an existing state-of-the-art trust model that is both statistically principled and has previously been shown to outperform a number of other probabilistic trust models.
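
    HABIT's full hierarchy is specified in the paper; as a generic stand-in, per-trustee behaviour can be modelled as Beta-Bernoulli with a population-level prior fitted from all observed trustees, so that a newcomer with no direct history is assessed from the group. The empirical-Bayes shortcut and all names below are illustrative assumptions, not HABIT's actual formulation.

        import numpy as np

        def fit_population_prior(records):
            """Moment-match a Beta prior to per-trustee success rates.
            records: list of (successes, failures) for trustees with history."""
            rates = np.array([s / (s + f) for s, f in records if s + f > 0])
            m, v = rates.mean(), max(rates.var(), 1e-6)
            common = m * (1 - m) / v - 1        # method of moments for Beta
            return max(m * common, 1e-3), max((1 - m) * common, 1e-3)

        def posterior_trust(successes, failures, alpha0, beta0):
            """Posterior mean trust: population prior plus direct evidence."""
            return (alpha0 + successes) / (alpha0 + beta0 + successes + failures)

    A brand-new trustee gets posterior_trust(0, 0, alpha0, beta0), the population mean, which is one pragmatic way to blunt whitewashing: shedding an identity only resets an agent to the group prior, not to a blank slate.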

    Trust model for certificate revocation in Ad hoc networks

    In this paper we propose a distributed trust model for certificate revocation in ad hoc networks. The proposed model allows trust to be built over time as the number of interactions between nodes increases. Furthermore, trust in a node is defined not only in terms of its potential for maliciousness, but also in terms of the quality of the service it provides. Trust in nodes with little or no interaction history is determined by recommendations from other nodes, and if the nodes in the network are selfish, trust is obtained through an exchange of portfolios. Bayesian networks form the underlying basis for this model.
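
    The update rule is not detailed in the abstract; a common Bayesian pattern for this setting is a Beta reputation in which direct observations dominate when available and recommendations are otherwise discounted by how much the recommender is trusted. The sketch below shows that generic pattern; the fallback values and names are our assumptions.

        def beta_trust(successes, failures):
            """Direct trust: mean of a Beta(successes + 1, failures + 1) posterior."""
            return (successes + 1) / (successes + failures + 2)

        def combined_trust(direct_evidence, recommendations):
            """direct_evidence: (successes, failures) from our own interactions.
            recommendations: (recommender_trust, reported_trust) pairs, each
            weighted by how much we trust the recommender."""
            s, f = direct_evidence
            if s + f > 0:
                return beta_trust(s, f)
            weight = sum(rt for rt, _ in recommendations)
            if weight == 0:
                return 0.5  # no evidence at all: neutral prior mean
            return sum(rt * rep for rt, rep in recommendations) / weight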

    Sequential Decision Making with Untrustworthy Service Providers

    In this paper, we deal with the sequential decision-making problem of agents operating in computational economies, where there is uncertainty regarding the trustworthiness of the service providers populating the environment. Specifically, we propose a generic Bayesian trust model and formulate the optimal Bayesian solution to the exploration-exploitation problem agents face when repeatedly interacting with others in such environments. We then present a computationally tractable Bayesian reinforcement learning algorithm that approximates this solution by taking into account the expected value of perfect information of an agent's actions. Our algorithm is shown to dramatically outperform all previous finalists of the international Agent Reputation and Trust (ART) competition, including the winners from both years the competition has been run.
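
    The abstract names the key ingredient, the expected value of perfect information (VPI), without giving the algorithm. Assuming Beta posteriors over each provider's success rate, VPI can be estimated by Monte Carlo as in the hedged sketch below (the Beta-Bernoulli setting and all names are our assumptions, not the paper's model):

        import numpy as np

        def vpi_select(alphas, betas, n_samples=10_000, rng=None):
            """Choose a provider by posterior mean plus Monte Carlo VPI,
            given Beta(alpha, beta) posteriors over success probabilities."""
            rng = rng or np.random.default_rng(0)
            means = alphas / (alphas + betas)
            order = np.argsort(means)[::-1]
            best, second = order[0], order[1]
            scores = np.empty(len(alphas))
            for a in range(len(alphas)):
                q = rng.beta(alphas[a], betas[a], n_samples)   # posterior samples
                if a == best:
                    gain = np.maximum(means[second] - q, 0.0)  # best may prove worse
                else:
                    gain = np.maximum(q - means[best], 0.0)    # a may prove better
                scores[a] = means[a] + gain.mean()
            return int(np.argmax(scores))

    The VPI bonus is large exactly when a provider's posterior is uncertain enough that learning its true reliability could change which provider is chosen, which is how the exploration-exploitation trade-off is resolved without heuristics.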