8 research outputs found

    A Survey of Provenance Leveraged Trust in Wireless Sensor Networks

    A wireless sensor network (WSN) is a collection of self-organized sensor nodes. WSNs face many challenges, such as the lack of centralized network administration, absence of infrastructure, low data transmission capacity, low bandwidth, mobility, lack of connectivity, limited power supply, and dynamic network topology. Because of this vulnerability, WSNs need a trust architecture to keep the quality of the network data high for a longer time. In this work, we survey the proposed trust architectures for WSNs. Provenance can play a key role in assessing trust in these architectures; however, little research has leveraged provenance for trust in WSNs. We also aim to point out this gap in the field and encourage researchers to invest in this topic. To our knowledge, our work is unique, and provenance-leveraged trust work in WSNs has not been surveyed before. Keywords: Provenance, Trust, Wireless Sensor Networks
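
    As a minimal illustration of the idea behind provenance-leveraged trust (not taken from any specific surveyed architecture), the Python sketch below scores a sensor reading by aggregating per-node trust values along the reading's provenance chain; the node names, trust values, and product-based aggregation rule are all assumptions made for illustration.

        node_trust = {"n1": 0.9, "n2": 0.8, "n3": 0.95}  # hypothetical per-node trust values

        def data_trust(provenance_chain):
            # Trust of a reading: product of the trust of every node it passed through;
            # nodes not seen before get a neutral prior of 0.5.
            score = 1.0
            for node in provenance_chain:
                score *= node_trust.get(node, 0.5)
            return score

        print(data_trust(["n3", "n2", "n1"]))  # source n3, forwarded via n2 and n1 -> 0.684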

    Improving Recommender Systems Using Knowledge Obsolescence as a Predictor of Trust

    In the current context of the Social Web, trust has emerged as a concept and mechanism to differentiate users of the Social Web and the content they generate. Much effort has been devoted to studying trust predictors with the aim of providing some operational use of the concept. In this work we propose a new predictor for trust: knowledge obsolescence. We provide a characterization of the concept and a description of the relation between trust and knowledge obsolescence. We applied the concept to a generic recommender system. For this purpose, we developed a software simulator that allows us to test trust and knowledge obsolescence networks in the recommender systems context. We found that recommender system success increases and, in some cases, that the coverage of potentially recommendable items improves. We did not find a statistically significant benefit for the quality of recommendations. Sociedad Argentina de Informática e Investigación Operativa (SADIO)
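
    The sketch below illustrates, under assumptions of ours rather than the paper's definitions, how knowledge obsolescence could be turned into a trust weight in a recommender: each neighbor's rating is discounted by an exponential decay over the age of that neighbor's knowledge. The decay form and half-life are hypothetical.

        def obsolescence_trust(days_since_update, half_life_days=180.0):
            # Trust weight in [0, 1] that halves every half_life_days (hypothetical decay).
            return 0.5 ** (days_since_update / half_life_days)

        def weighted_rating(ratings):
            # Combine neighbor ratings, weighting each by its freshness-based trust.
            num = sum(r * obsolescence_trust(age) for r, age in ratings)
            den = sum(obsolescence_trust(age) for _, age in ratings)
            return num / den if den else None

        # (rating, days since that neighbor's knowledge was last refreshed)
        print(weighted_rating([(5.0, 30), (2.0, 400)]))  # fresh 5-star outweighs stale 2-star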

    A Meta-model for Trust and Reputation Adaptation in Dynamic Multi-agent Systems

    Doctoral thesis, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2013. Computational trust and reputation models are key elements in the design of open multi-agent systems. They offer a way of evaluating and reducing the risks of cooperation in the presence of uncertainty. However, the models proposed in the literature do not consider the costs they introduce or how they are affected by dynamic environments. In this work, a meta-model for trust and reputation adaptation in dynamic multi-agent systems is proposed. The meta-model complements existing trust and reputation models by allowing deliberative agents to reason about the components of the model being used and to react to changes in the environment. The adaptation process is performed by adjusting the model's configuration to better fit the current conditions. It is demonstrated how the meta-model can be applied to models proposed in the literature, and how adaptation plans can be used to adjust their components dynamically to improve performance. A learning mechanism, along with a proof-of-concept implementation based on genetic algorithms, is proposed to identify new adaptation plans for similar scenarios. Finally, the experimental evaluation of the meta-model and its learning mechanism shows significant improvements in comparison to the use of non-adaptable models, which contributes to improving the design of autonomous agents for dynamic multi-agent systems.
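
    The following toy sketch shows the kind of adaptation loop the abstract alludes to: tuning a trust model's configuration with a genetic algorithm against the current environment. It is not the thesis's meta-model; the single-weight configuration, the fitness function, and all parameters are illustrative assumptions.

        import random

        def fitness(weight, ideal=0.3):
            # Hypothetical fitness: configurations closer to the environment's
            # currently ideal weighting of direct experience vs. reputation score higher.
            return 1.0 - abs(weight - ideal)

        def evolve(pop_size=20, generations=30, mutation=0.1):
            # Each individual is one candidate configuration (a single weight in [0, 1]).
            population = [random.random() for _ in range(pop_size)]
            for _ in range(generations):
                population.sort(key=fitness, reverse=True)
                parents = population[: pop_size // 2]  # keep the fittest half
                children = [min(1.0, max(0.0, random.choice(parents) + random.gauss(0, mutation)))
                            for _ in range(pop_size - len(parents))]
                population = parents + children
            return max(population, key=fitness)

        print(evolve())  # best configuration found for the simulated environment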

    A Risk And Trust Security Framework For The Pervasive Mobile Environment

    A pervasive mobile computing environment is typically composed of multiple fixed and mobile entities that interact autonomously with each other with very little central control. Many of these interactions may occur between entities that have not interacted with each other previously. Conventional security models are inadequate for regulating access to data and services, especially when the identities of a dynamic and growing community of entities are not known in advance. To cope with this drawback, entities may rely on context data to make security and trust decisions. However, this process introduces risk due to the variability and uncertainty of context information. Moreover, by the time the decisions are made, the context data may already have changed, in which case the security decisions could become invalid. With this in mind, our goal is to develop mechanisms and models to aid trust decision-making by an entity or agent (the truster) when the consequences of its decisions depend on context information from other agents (the trustees). To achieve this, in this dissertation we have developed ContextTrust, a framework that not only computes the risk associated with a context variable but also derives a trust measure for context-data-producing agents. To compute the context data risk, ContextTrust uses a Monte Carlo based method to model the behavior of a context variable. Moreover, ContextTrust makes use of time series classifiers and other simple statistical measures to derive an entity trust value. We conducted empirical analyses to evaluate the performance of ContextTrust using two real-life data sets. The evaluation results show that ContextTrust can be effective in helping entities make security decisions.
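
    As a rough illustration of the Monte Carlo idea (not the actual ContextTrust models), the sketch below estimates the risk that a context variable drifts outside a tolerance band before a security decision takes effect, by simulating the variable as a random walk; the random-walk model and every parameter are assumptions.

        import random

        def context_risk(current_value, tolerance, steps=10, sigma=1.0, trials=10_000):
            # Fraction of simulated trajectories in which the context variable leaves
            # [current_value - tolerance, current_value + tolerance] within `steps` steps.
            violations = 0
            for _ in range(trials):
                value = current_value
                for _ in range(steps):
                    value += random.gauss(0, sigma)
                    if abs(value - current_value) > tolerance:
                        violations += 1
                        break
            return violations / trials

        # e.g. risk that a reported sensor value drifts by more than 5 units before use
        print(context_risk(current_value=0.0, tolerance=5.0))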

    High Quality P2P Service Provisioning via Decentralized Trust Management

    Trust management is essential to fostering cooperation and high-quality service provisioning in several peer-to-peer (P2P) applications. Among those applications are customer-to-customer (C2C) trading sites and markets of services implemented on top of centralized infrastructures, P2P systems, or online social networks. In these application contexts, existing work does not adequately address the heterogeneity of practical problem settings: the different approaches participants employ to evaluate the trustworthiness of their partners, the diversity of contextual factors that influence service provisioning quality, and the variety of possible behavioral patterns of the participants. This thesis presents the design and usage of appropriate computational trust models to enforce cooperation and ensure high-quality P2P service provisioning in view of these heterogeneity issues. First, I propose a graphical probabilistic framework for peers to model and evaluate the trustworthiness of others in a highly heterogeneous setting. The framework targets several important issues in the trust research literature: the multi-dimensionality of trust, the reliability of different rating sources, and the personalized modeling and computation of trust in a participant based on the quality of the services it provides. Next, I present an analysis of the effective use of computational trust models in environments where participants exhibit various behaviors, e.g., honest, rational, and malicious. I provide theoretical results showing the conditions under which cooperation emerges when using trust learning models with a given detection accuracy, and how cooperation can still be sustained while reducing the cost and accuracy of those models. As another contribution, I design and implement a general prototyping and simulation framework for reputation-based trust systems. The developed simulator can be used for many purposes, such as discovering new trust-related phenomena or evaluating the performance of a trust learning algorithm in complex settings. Two potential applications of computational trust models are then discussed: (1) the selection and ranking of (Web) services based on quality ratings from reputable users, and (2) the use of a trust model to choose reliable delegates in a key recovery scenario in a distributed online social network. Finally, I identify a number of issues in building next-generation, open reputation-based trust management systems and propose several future research directions based on the work in this thesis.
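
    One ingredient the thesis discusses, the reliability of different rating sources, can be illustrated with a minimal Beta-reputation style sketch in which each rating's evidence is discounted by the reliability of its source. This is an assumption-laden illustration, not the graphical probabilistic framework itself.

        def beta_trust(ratings):
            # Expected trustworthiness from (outcome, source_reliability) pairs, where
            # outcome is 1 for a positive rating and 0 for a negative one; each rating's
            # evidence is discounted by the reliability of the source reporting it.
            alpha, beta = 1.0, 1.0  # uniform prior
            for outcome, reliability in ratings:
                alpha += reliability * outcome
                beta += reliability * (1 - outcome)
            return alpha / (alpha + beta)

        # two positive ratings from a reliable source, one negative from an unreliable one
        print(beta_trust([(1, 0.9), (1, 0.9), (0, 0.3)]))  # ~0.68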