
    Partial observable update for subjective logic and its application for trust estimation

    Subjective Logic (SL) is a type of probabilistic logic suitable for reasoning about situations with uncertainty and incomplete knowledge. In recent years, SL has drawn a significant amount of attention from the multi-agent systems community, as it connects beliefs and uncertainty in propositions to a rigorous statistical characterization via Dirichlet distributions. However, one serious limitation of SL is that belief updates are based only on completely observable evidence. This work extends SL to incorporate belief updates from partially observable evidence. Normally, belief updates in SL presume that the current evidence for a proposition points to exactly one of its mutually exclusive attribute states. This work instead considers that the current attribute state may not be completely observable, so that one can only obtain a measurement that is statistically related to this state. In other words, the SL belief is updated based upon the likelihood that each of the attribute states was observed. The paper then characterizes properties of the partially observable updates as a function of the state likelihoods and illustrates the use of these likelihoods for a trust estimation application. Finally, the utility of the partially observable updates is demonstrated via various simulations, including the trust estimation case. (Funding: U.S. Army Research Laboratory; U.K. Ministry of Defence; TÜBİTAK. Pre-print.)
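    The update described in the abstract can be illustrated with a small sketch. In SL, evidence counts r_i map to beliefs b_i = r_i / (W + Σr) and uncertainty u = W / (W + Σr); a partially observable update then adds fractional evidence in proportion to the posterior probability of each state given the measurement likelihoods. The prior weight W = 2, the uniform base rates, and the exact weighting below are illustrative assumptions, not the paper's precise formulation.

```python
def posterior_state_probs(likelihoods, beliefs, uncertainty, base_rates):
    # Projected probability of each state: P_i = b_i + a_i * u
    projected = [b + a * uncertainty for b, a in zip(beliefs, base_rates)]
    joint = [l * p for l, p in zip(likelihoods, projected)]
    total = sum(joint)
    return [j / total for j in joint]

def partial_observable_update(r, likelihoods, W=2.0, base_rates=None):
    # Fractionally increment the evidence counts r by the posterior
    # probability of each state given the measurement likelihoods.
    k = len(r)
    base_rates = base_rates or [1.0 / k] * k
    s = sum(r)
    beliefs = [ri / (W + s) for ri in r]
    uncertainty = W / (W + s)
    probs = posterior_state_probs(likelihoods, beliefs, uncertainty, base_rates)
    return [ri + pi for ri, pi in zip(r, probs)]

# An ambiguous measurement that is twice as likely under the first state
# spreads one unit of evidence fractionally across both counts.
r = partial_observable_update([4.0, 1.0], likelihoods=[0.8, 0.4])
```

    Note that exactly one unit of evidence is added in total, so the update degenerates to the standard fully observable SL update when one likelihood dominates.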

    Decision support for choice of security solution: the Aspect-Oriented Risk Driven Development (AORDD) framework

    In security assessment and management there is no single correct solution to the identified security problems or challenges; there are only choices and trade-offs. The main reason for this is that modern information systems, and security-critical information systems in particular, must perform at the contracted or expected security level, make effective use of available resources and meet end-users' expectations. Balancing these needs while also fulfilling development, project and financial perspectives, such as budget and time-to-market (TTM) constraints, means that decision makers have to evaluate alternative security solutions.

    This work describes parts of an approach that supports decision makers in choosing one or a set of security solutions among alternatives. The approach, called the Aspect-Oriented Risk Driven Development (AORDD) framework, combines Aspect-Oriented Modeling (AOM) and Risk Driven Development (RDD) techniques and consists of seven components: (1) an iterative AORDD process; (2) a security solution aspect repository; (3) an estimation repository to store experience from estimation of the security risk and security solution variables involved in security solution decisions; (4) RDD annotation rules for security risk and security solution variable estimation; (5) the AORDD security solution trade-off analysis and trade-off tool Bayesian Belief Network (BBN) topology; (6) a rule set for transferring RDD information from the annotated UML diagrams into the trade-off tool BBN topology; and (7) a trust-based information aggregation schema to aggregate disparate information in the trade-off tool BBN topology. This work focuses on components 5 and 7, which are the two core components in the AORDD framework.

    On the Statistics of Trustworthiness Prediction

    Trust and trustworthiness facilitate interactions between human beings worldwide, every day. They enable the formation of friendships, the making of profits and the adoption of new technologies, making life not only more pleasant but furthering societal development. Trust, for lack of a better word, is good. When human beings trust, they rely on the trusted party to be trustworthy, that is, literally worthy of the trust that is being placed in them. If it turns out that the trusted party is unworthy of the trust placed in it, the truster has misplaced its trust, has relied unwarrantedly, and is liable to experience possibly unpleasant consequences. Human social evolution has equipped us with tools for determining another's trustworthiness through experience, cues and observations, with which we aim to minimise the risk of misplacing our trust. Social adaptation, however, is a slow process, and the cues that are helpful in real, physical environments where we can observe and hear our interlocutors are less helpful in interactions conducted over data networks with other humans or computers, or even between two computers. This presents a challenge in a world where the virtual and the physical intermesh increasingly; a challenge that computational trust models seek to address by applying computational evidence-based methods to estimate trustworthiness.

    In this thesis, the state-of-the-art in evidence-based trust models is extended and improved upon, in particular with regard to their statistical modelling. The statistics behind (Bayesian) trustworthiness estimation receive special attention; their extension brings about improvements in trustworthiness estimation that encompass the following aspects: (i.) statistically well-founded estimators for binomial and multinomial models of trust that can accurately estimate the trustworthiness of another party and express the inherent uncertainty of the trustworthiness estimate in a statistically meaningful way, (ii.) better integration of recommendations by third parties using advanced methods for determining the reliability of the received recommendations, (iii.) improved responsiveness to changes in the behaviour of trusted parties, and (iv.) increased generalisability of trust-relevant information over a set of trusted parties. Novel estimators, methods for combining recommendations and other trust-relevant information, change detectors, as well as a mapping for integrating stereotype-based trustworthiness estimates, are bundled in an improved Bayesian trust model, Multinomial CertainTrust.

    Specific scientific contributions are structured into three distinct categories: 1. A model for trustworthiness estimation: the statistics of trustworthiness estimation are investigated to design a fully multinomial trustworthiness estimation model. Leveraging the assumptions behind the Bayesian estimation of binomial and multinomial proportions, accurate trustworthiness and certainty estimators are presented, and the integration of subjectivity via informative and non-informative Bayesian priors is discussed. 2. Methods for trustworthiness information processing: methods for facilitating trust propagation and accounting for concept drift in the behaviour of trusted parties are introduced. All methods are applicable, by design, to both the binomial and the multinomial case of trustworthiness estimation. 3. Further extensions for trustworthiness estimation: two methods for addressing the potential lack of direct experiences with a new trustee in feedback-based trust models are presented. For one, the dedicated modelling of particular roles and the trust delegation between them is shown to be principally possible as an extension to existing feedback-based trust models. For another, a more general approach for feature-based generalisation using model-free, supervised machine learners is introduced.

    The general properties of the trustworthiness and certainty estimators are derived formally from the basic assumptions underlying binomial and multinomial estimation problems, harnessing fundamentals of Bayesian statistics. Desired properties for the introduced certainty estimators, first postulated by Wang & Singh, are shown to hold through formal argument. The general soundness and applicability of the proposed certainty estimators is founded on the statistical properties of interval estimation techniques discussed in the related statistics literature and formally and rigorously shown there. The core estimation system and additional methods, in their entirety constituting the Multinomial CertainTrust model, are implemented in R, along with competing methods from the related work, specifically for determining recommender trustworthiness and coping with changing behaviour through ageing. The performance of the novel methods introduced in this thesis was tested against established methods from the related work in simulations. Methods for hardcoding indicators of trustworthiness were implemented within a multi-agent framework and shown to be functional in an agent-based simulation. Furthermore, supervised machine learners were tested for their applicability by collecting a real-world data set of reputation data from a hotel booking site and evaluating their capabilities against this data set. The hotel data set exhibits properties, such as a high imbalance in the ratings, that appear typical of data generated by reputation systems, as they are also present in other data sets.
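    The kind of Bayesian multinomial estimation described above can be sketched with a Dirichlet posterior: outcome counts are added to a uniform prior, the posterior mean is the trustworthiness estimate, and certainty grows with accumulated evidence. The prior weight of 2 and the certainty formula n / (n + prior weight) below are illustrative assumptions in the style of such models, not the thesis's exact estimators.

```python
def multinomial_trust(counts, prior_weight=2.0):
    # Dirichlet posterior mean with a uniform prior of total mass prior_weight;
    # certainty grows from 0 (no evidence) toward 1 as evidence accumulates.
    k = len(counts)
    n = sum(counts)
    estimate = [(c + prior_weight / k) / (n + prior_weight) for c in counts]
    certainty = n / (n + prior_weight)
    return estimate, certainty

# e.g. counts of good / neutral / bad outcomes observed for a trustee
est, cert = multinomial_trust([8, 1, 1])
```

    With two categories this reduces to the binomial case, which is why methods built on it apply to both settings by design.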

    An Investigation into Trust & Reputation for Agent-Based Virtual Organisations

    Trust is a prevalent concept in human society. In essence, it concerns our reliance on the actions of our peers, and the actions of other entities within our environment. For example, we may rely on our car starting in the morning to get to work on time, and on the actions of our fellow drivers, so that we may get there safely. For similar reasons, trust is becoming increasingly important in computing, as systems, such as the Grid, require computing resources to work together seamlessly, across organisational and geographical boundaries (Foster et al., 2001). In this context, the reliability of resources in one organisation cannot be assumed from the point of view of another. Moreover, certain resources may fail more often than others, and for this reason, we argue that software systems must be able to assess the reliability of different resources, so that they may choose which resources to rely upon. With this in mind, our goal here is to develop a mechanism by which software entities can automatically assess the trustworthiness of a given entity (the trustee). In achieving this goal, we have developed a probabilistic framework for assessing trust based on observations of a trustee's past behaviour. Such observations may be accounted for either when they are made directly by the assessing party (the truster), or by a third party (reputation source). In the latter case, our mechanism can cope with the possibility that third party information is unreliable, either because the sender is lying, or because it has a different world view. In this document, we present our framework, and show how it can be applied to cases in which a trustee's actions are represented as binary events; for example, a trustee may cooperate with the truster, or it may defect. We place our work in context, by showing how it constitutes part of a system for managing coalitions of agents, operating in a grid computing environment. 
We then give an empirical evaluation of our method, which shows that it outperforms the most similar system in the literature in many important scenarios.
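    A minimal sketch of the kind of evidence-based assessment described: binary outcomes (cooperate/defect) update a Beta posterior whose mean is the trust estimate, and third-party reports are discounted before being merged. The reliability-weighting of counts below is an illustrative assumption, not necessarily the authors' exact mechanism for handling unreliable reputation sources.

```python
def beta_trust(successes, failures, alpha0=1.0, beta0=1.0):
    # Posterior mean probability of cooperation under a Beta(alpha0, beta0) prior
    return (successes + alpha0) / (successes + failures + alpha0 + beta0)

def discounted(successes, failures, reliability):
    # Weight a third party's reported counts by its judged reliability
    return successes * reliability, failures * reliability

direct = beta_trust(8, 2)                 # own observations: 8 cooperations, 2 defections
s, f = discounted(5, 5, reliability=0.3)  # a dubious reporter's mixed evidence
combined = beta_trust(8 + s, 2 + f)
```

    Because the reporter's counts are scaled down before merging, an unreliable source only mildly perturbs the estimate derived from direct experience.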

    From Wald to Savage: homo economicus becomes a Bayesian statistician

    Bayesian rationality is the paradigm of rational behavior in neoclassical economics. A rational agent in an economic model is one who maximizes her subjective expected utility and consistently revises her beliefs according to Bayes's rule. The paper raises the question of how, when and why this characterization of rationality came to be endorsed by mainstream economists. Though no definitive answer is provided, it is argued that the question is far from trivial and of great historiographic importance. The story begins with Abraham Wald's behaviorist approach to statistics and culminates with Leonard J. Savage's elaboration of subjective expected utility theory in his 1954 classic The Foundations of Statistics. It is the latter's acknowledged failure to achieve its planned goal, the reinterpretation of traditional inferential techniques along subjectivist and behaviorist lines, which raises the puzzle of how a failed project in statistics could turn into such a tremendous hit in economics. A couple of tentative answers are offered, involving the role of the consistency requirement in neoclassical analysis and the impact of the postwar transformation of US business schools.
    Keywords: Savage; Wald; rational behavior; Bayesian decision theory; subjective probability; minimax rule; statistical decision functions; neoclassical economics.

    Reputation Computation in Social Networks and Its Applications

    This thesis focuses on a quantification of reputation and presents models which compute reputation within networked environments. Reputation manifests the past behaviors of users and helps others to predict their future behaviors, thereby reducing risk in future interactions. There are two approaches to computing reputation on networks, namely the macro-level approach and the micro-level approach. A macro-level approach assumes that there exists a computing entity outside of a given network who can observe the entire network, including degree distributions and relationships among nodes. In a micro-level approach, the entity is one of the nodes in the network and can therefore only observe information local to itself, such as its own neighbors' behaviors. In particular, we study reputation computation algorithms in online distributed environments such as social networks and develop reputation computation algorithms that address limitations of existing models. We analyze and discuss properties of the reputation values of a large number of agents, including their power-law distribution and diffusion behavior. Computing the reputation of another node within a network requires knowledge of the degrees of its neighbors. We develop an algorithm for estimating the degree of each neighbor. The algorithm treats observations associated with neighbors as Bernoulli trials and repeatedly re-estimates the degrees of neighbors as new observations occur. We experimentally show that the algorithm can compute the degrees of neighbors more accurately than a simple counting of observations. Finally, we design a Bayesian reputation game where reputation is used as payoffs. The game-theoretic view of reputation computation reflects another level of reality in which all agents are rational in sharing reputation information of others. An interesting behavior of agents within such a game-theoretic environment is that cooperation, i.e., sharing true reputation information, emerges without an explicit punishment mechanism or a direct reward mechanism.
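    The neighbor-degree estimation step can be read as an online Bernoulli proportion estimate that is refined with each observation rather than computed once from a final count. The sketch below is purely illustrative: the Beta pseudo-counts and the scaling from the estimated rate to a degree are assumptions for the example, not the thesis's algorithm.

```python
class BernoulliDegreeEstimator:
    """Online Beta-Bernoulli estimate of how often a given neighbor is
    involved in observed interactions (illustrative sketch)."""

    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha = alpha  # pseudo-count: opportunities involving the neighbor
        self.beta = beta    # pseudo-count: opportunities that did not

    def observe(self, interacted):
        # Treat each observation as one Bernoulli trial
        if interacted:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def rate(self):
        # Posterior mean probability that an opportunity involves this neighbor
        return self.alpha / (self.alpha + self.beta)

    def degree(self, opportunities):
        # Illustrative scaling: expected active edges among `opportunities`
        return self.rate() * opportunities

est = BernoulliDegreeEstimator()
for hit in [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]:
    est.observe(bool(hit))
```

    Because the posterior mean is updated per observation, the estimate smooths over noisy early observations in a way a raw count does not.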

    The corporate-fund manager interface: objectives, information and valuation

    Get PDF
    Fund managers are the primary investment decision-makers in the stock market, and corporate executives are their primary sources of information. Meetings between the two are therefore central to stock market investment decisions but are surprisingly under-researched; there is little in the academic literature concerning their aims, content and outcomes. We report findings from interview research conducted with chief financial officers (CFOs) and investor relations managers from FTSE 100 companies, and with chief investment officers (CIOs) and fund managers (FMs) from large institutional investors. Of particular interest, we note that FMs place great reliance on discounted cash flow valuation models (despite informational asymmetry in favour of CFOs). This leads the former to seek to control encounters with the latter and to place great store on the clarity and consistency of corporate messages, ultimately relying on them for purposes other than estimating fundamental value. We consider some of the consequences of this usage.
    Keywords: valuation; institutional shareholders; investor relations.