
    Common learning

    Consider two agents who learn the value of an unknown parameter by observing a sequence of private signals. The signals are independent and identically distributed across time but not necessarily across agents. We show that when each agent's signal space is finite, the agents commonly learn the value of the parameter, that is, the true value of the parameter becomes approximate common knowledge. The essential step in this argument is to express the expectation of one agent's signals, conditional on those of the other agent, in terms of a Markov chain. This allows us to invoke a contraction mapping principle ensuring that if one agent's signals are close to those expected under a particular value of the parameter, then that agent expects the other agent's signals to be even closer to those expected under that parameter value. In contrast, if the agents' observations come from a countably infinite signal space, this contraction mapping property fails. We show by example that common learning can fail in this case.
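
    The contraction step lends itself to a small numerical illustration. The sketch below is our own construction with a hypothetical joint signal distribution, not the paper's model: it builds the Markov (row-stochastic) matrix of one agent's signals conditional on the other's and checks that total-variation distances shrink under it.

        # Minimal numerical sketch of the contraction property
        # (hypothetical joint distribution; illustrative only).
        import numpy as np

        # Entry [i, j] = P(agent 1 sees signal i, agent 2 sees signal j)
        # under one fixed value of the parameter.
        joint = np.array([[0.30, 0.10],
                          [0.15, 0.45]])

        marg1 = joint.sum(axis=1)   # agent 1's signal distribution
        marg2 = joint.sum(axis=0)   # agent 2's signal distribution

        # Markov matrix: M[i, j] = P(agent 2 sees j | agent 1 sees i).
        M = joint / marg1[:, None]

        # Dobrushin contraction coefficient; < 1 here because every
        # entry of M is positive.
        delta = 0.5 * max(np.abs(M[i] - M[k]).sum()
                          for i in range(len(M)) for k in range(len(M)))

        # If agent 1's empirical frequencies f are within eps of marg1
        # in total variation, the frequencies agent 1 expects for
        # agent 2, f @ M, are within delta * eps of marg2.
        f = marg1 + np.array([0.08, -0.08])
        print("agent 1 distance:         ", np.abs(f - marg1).sum() / 2)
        print("expected agent 2 distance:", np.abs(f @ M - marg2).sum() / 2)
        print("contraction coefficient:  ", delta)

    Because M is derived from the joint distribution, marg1 @ M equals marg2, so the comparison in the last lines is exactly the one described in the abstract: the expected distance for the other agent is the agent's own distance scaled by the contraction coefficient.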

    Rational Trust Modeling

    Trust models are widely used in various computer science disciplines. The main purpose of a trust model is to continuously measure the trustworthiness of a set of entities based on their behavior. In this article, the novel notion of "rational trust modeling" is introduced by bridging trust management and game theory. Note that trust models/reputation systems have long been used in game theory (e.g., in repeated games); however, game theory has not been utilized in the construction of trust models, and this is where the novelty of our approach lies. In our proposed setting, the designer of a trust model assumes that the players who intend to utilize the model are rational/selfish, i.e., they decide to become trustworthy or untrustworthy based on the utility they can gain. In other words, the players are incentivized (or penalized) by the model itself to act properly. The problem of trust management can then be approached by game-theoretic analyses and solution concepts such as Nash equilibrium. Although rationality might be built into some existing trust models, we intend to formalize the notion of rational trust modeling from the designer's perspective. This approach yields two fascinating outcomes. First, the designer of a trust model can incentivize trustworthiness in the first place by incorporating proper parameters into the trust function, which can later be utilized among selfish players in strategic trust-based interactions (e.g., e-commerce scenarios). Furthermore, using a rational trust model, we can prevent many well-known attacks on trust models. These two prominent properties also help us predict the behavior of the players in subsequent steps through game-theoretic analyses.
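
    To make the incentive mechanism concrete, here is a toy example; the payoffs and the trust bonus are hypothetical and not taken from the article. It enumerates the pure Nash equilibria of a two-player trustworthy/untrustworthy game, with and without a trust-model bonus added to the utility function.

        # Toy sketch: a trust function that rewards trustworthy play
        # shifts the pure Nash equilibrium (hypothetical payoffs).
        import itertools

        def payoff(a1, a2, trust_weight):
            # Base payoffs have a prisoner's-dilemma-style temptation to
            # be untrustworthy ("U"); the trust model pays a bonus for "T".
            base = {("T", "T"): (3, 3), ("T", "U"): (0, 4),
                    ("U", "T"): (4, 0), ("U", "U"): (1, 1)}
            u1, u2 = base[(a1, a2)]
            return (u1 + trust_weight * (a1 == "T"),
                    u2 + trust_weight * (a2 == "T"))

        def pure_nash(trust_weight):
            actions = ("T", "U")
            eqs = []
            for a1, a2 in itertools.product(actions, actions):
                u1, u2 = payoff(a1, a2, trust_weight)
                if all(payoff(d, a2, trust_weight)[0] <= u1 for d in actions) \
                   and all(payoff(a1, d, trust_weight)[1] <= u2 for d in actions):
                    eqs.append((a1, a2))
            return eqs

        print(pure_nash(trust_weight=0.0))  # [('U', 'U')]: defection dominates
        print(pure_nash(trust_weight=2.0))  # [('T', 'T')]: trustworthiness pays

    With no trust bonus the only equilibrium is mutual untrustworthiness; once the trust function makes trustworthy behavior sufficiently profitable, the equilibrium shifts to mutual trustworthiness, which is the design goal the abstract describes.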

    Time-to-birth prediction models and the influence of expert opinions

    Preterm birth is the leading cause of death among children under five years old. The pathophysiology and etiology of preterm labor are not yet fully understood. This leads to a large number of unnecessary hospitalizations due to high-sensitivity clinical policies, which has a significant psychological and economic impact. In this study, we present a predictive model, based on a new dataset containing information on 1,243 admissions, that predicts whether a patient will give birth within a given time after admission. Such a model could provide support in the clinical decision-making process. Predictions for birth within 48 h or within 7 days after admission both yield an Area Under the Receiver Operating Characteristic Curve (AUC) of 0.72. Furthermore, we show that by incorporating predictions made by experts at admission, which introduces a potential bias, the prediction effectiveness increases to AUC scores of 0.83 and 0.81 for these respective tasks.
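
    The evaluation can be sketched schematically. The code below runs on synthetic stand-in data, since the study's dataset and features are not reproduced here; it shows how one would compare the AUC of a model trained on admission features alone against one that also receives the expert's admission-time opinion as a feature.

        # Schematic sketch with synthetic data (not the study's dataset):
        # compare AUC with and without the expert opinion as a feature.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 1243                                    # reported cohort size
        X = rng.normal(size=(n, 5))                 # stand-in admission features
        y = (X[:, 0] + 0.5 * X[:, 1]
             + rng.normal(scale=1.5, size=n) > 0).astype(int)
        expert = y + rng.normal(scale=0.8, size=n)  # noisy (biased) expert opinion

        for features, label in ((X, "admission features only"),
                                (np.column_stack([X, expert]), "plus expert opinion")):
            Xtr, Xte, ytr, yte = train_test_split(features, y, random_state=0)
            model = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
            auc = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])
            print(f"AUC ({label}): {auc:.2f}")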

    Sequential two-player games with ambiguity

    Author's pre-print.
    If players' beliefs are strictly nonadditive, the Dempster–Shafer updating rule can be used to define beliefs off the equilibrium path. We define an equilibrium concept in sequential two-person games where players update their beliefs with the Dempster–Shafer updating rule. We show that, in the limit as uncertainty tends to zero, our equilibrium approximates Bayesian Nash equilibrium. We argue that our equilibrium can be used to define a refinement of Bayesian Nash equilibrium by imposing context-dependent constraints on beliefs under uncertainty.
    ESRC senior research fellowship scheme, H5242750259.
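
    The Dempster–Shafer updating rule is straightforward to state computationally: conditioning a capacity v on an observed event E gives v_E(A) = [v((A ∩ E) ∪ E^c) - v(E^c)] / [1 - v(E^c)]. The sketch below uses an epsilon-contaminated capacity as an illustrative nonadditive belief (our choice, not the paper's example) and shows the update approaching Bayesian conditioning as the ambiguity parameter tends to zero, mirroring the limit result above.

        # Dempster–Shafer conditioning of a nonadditive capacity
        # (epsilon-contaminated prior; illustrative example only).
        OMEGA = frozenset({"a", "b", "c"})
        P = {"a": 0.5, "b": 0.3, "c": 0.2}   # additive reference prior

        def v(A, eps):
            # epsilon-contamination: strictly nonadditive for eps > 0
            if A == OMEGA:
                return 1.0
            return (1 - eps) * sum(P[w] for w in A)

        def ds_update(A, E, eps):
            # Dempster–Shafer rule: condition the capacity v on event E
            Ec = OMEGA - E
            return (v((A & E) | Ec, eps) - v(Ec, eps)) / (1 - v(Ec, eps))

        E = frozenset({"a", "b"})            # observed event
        A = frozenset({"a"})
        for eps in (0.2, 0.05, 0.0):
            print(f"eps={eps}: updated belief in A = {ds_update(A, E, eps):.4f}")
        # As eps -> 0 this tends to the Bayesian posterior P(a | {a,b}) = 0.625.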