
    Buying Private Data without Verification

    We consider the problem of designing a survey to aggregate non-verifiable information from a privacy-sensitive population: an analyst wants to compute some aggregate statistic from the private bits held by each member of a population, but cannot verify the correctness of the bits reported by participants in his survey. Individuals in the population are strategic agents with a cost for privacy, i.e., they not only account for the payments they expect to receive from the mechanism, but also their privacy costs from any information revealed about them by the mechanism's outcome (the computed statistic as well as the payments) to determine their utilities. How can the analyst design payments to obtain an accurate estimate of the population statistic when individuals strategically decide both whether to participate and whether to truthfully report their sensitive information? We design a differentially private peer-prediction mechanism that supports accurate estimation of the population statistic as a Bayes-Nash equilibrium in settings where agents have explicit preferences for privacy. The mechanism requires knowledge of the marginal prior distribution on bits b_i, but does not need full knowledge of the marginal distribution on the costs c_i, instead requiring only an approximate upper bound. Our mechanism guarantees Δ-differential privacy to each agent i against any adversary who can observe the statistical estimate output by the mechanism, as well as the payments made to the n − 1 other agents j ≠ i. Finally, we show that with slightly more structured assumptions on the privacy cost functions of each agent, the cost of running the survey goes to 0 as the number of agents diverges. Comment: Appears in EC 201
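    The Δ-differential-privacy guarantee described above can be illustrated with the classic randomized-response mechanism; this is a standard textbook construction, not the paper's peer-prediction mechanism, and all names and parameters below are illustrative. Each agent reports her true bit with probability e^Δ / (1 + e^Δ) and the flipped bit otherwise, and the analyst debiases the noisy reports to recover an unbiased estimate of the population mean.

```python
import math
import random

def randomized_response(bit, epsilon, rng):
    # Report the true bit with probability e^eps / (1 + e^eps), else flip it.
    # The likelihood ratio of any report is at most e^eps, giving eps-DP.
    p_true = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if rng.random() < p_true else 1 - bit

def estimate_mean(reports, epsilon):
    # E[report] = mu * (2p - 1) + (1 - p), so invert to debias the raw mean.
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    raw = sum(reports) / len(reports)
    return (raw - (1.0 - p)) / (2.0 * p - 1.0)

rng = random.Random(0)
bits = [1 if rng.random() < 0.3 else 0 for _ in range(100_000)]
reports = [randomized_response(b, 1.0, rng) for b in bits]
est = estimate_mean(reports, 1.0)
```

With 100,000 agents and Δ = 1, the debiased estimate lands close to the true proportion even though every individual report is noisy.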

    Bayesian markets to elicit private information

    Financial markets reveal what investors think about the future, and prediction markets are used to forecast election results. Could markets also encourage people to reveal private information, such as subjective judgments (e.g., “Are you satisfied with your life?”) or unverifiable facts? This paper shows how to design such markets, called Bayesian markets. People trade an asset whose value represents the proportion of affirmative answers to a question. Their trading position then reveals their own answer to the question. The results of this paper are based on a Bayesian setup in which people use their private information (their “type”) as a signal. Hence, beliefs about others’ types are correlated with one’s own type
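    The correlation between one's own type and beliefs about others can be made concrete with a toy Bayesian computation (illustrative only; the paper's market design is richer than this): under a Beta prior on the proportion of "yes" types, conditioning on one's own answer shifts the posterior mean, so "yes" and "no" traders value the asset differently and naturally take opposite positions.

```python
def posterior_mean(a, b, own_answer):
    # Beta(a, b) prior over the proportion q of "yes" types in the population.
    # One's own binary answer is a single Bernoulli(q) draw, so the posterior
    # is Beta(a + answer, b + 1 - answer), whose mean is computed below.
    return (a + own_answer) / (a + b + 1)

# Uniform Beta(1, 1) prior: a "yes" trader expects 2/3 of answers to be
# "yes", while a "no" trader expects only 1/3 -- so they disagree about the
# asset's value (the proportion of affirmative answers) and trade against
# each other, revealing their own answers through their positions.
yes_belief = posterior_mean(1, 1, 1)
no_belief = posterior_mean(1, 1, 0)
```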

    Insurance based lie detection: Enhancing the verifiability approach with a model statement component

    Purpose - The Verifiability Approach (VA) is a verbal lie detection tool that has shown promise when applied to insurance claims settings. This study examined the effectiveness of incorporating a Model Statement comprising checkable information into the VA protocol to enhance the verbal differences between liars and truth tellers. Method - The study experimentally manipulated supplementing (or withholding) the VA with a Model Statement. It was hypothesised that such a manipulation would (i) encourage truth tellers to provide more verifiable details than liars and (ii) encourage liars to report more unverifiable details than truth tellers (compared to the no model statement control). As a result, it was hypothesised that (iii) the model statement would improve the classificatory accuracy of the VA. Participants reported 40 genuine and 40 fabricated insurance claim statements; half the liars and truth tellers were provided with a model statement as part of the VA procedure, and half were provided with no model statement. Results - All three hypotheses were supported. In terms of accuracy, the model statement considerably increased the classification rate of the VA, from 65.0% to 90.0%. Conclusion - Providing interviewees with a model statement prime consisting of checkable detail appears to be a useful refinement to the VA procedure

    Optimal Contracts for Lenient Supervisors

    We consider a situation where an agent's effort is monitored by a supervisor who cares about the agent's well-being. This is modeled by incorporating the agent's utility into the supervisor's utility function. The first-best solution can be implemented even if the supervisor's preferences are unknown. The corresponding optimal contract is similar to what we observe in practice: the supervisor's wage is constant and independent of his report. It induces one type of supervisor to report the agent's performance truthfully, while all others report favorably regardless of performance. This implies that overstated performance (leniency bias) may be the outcome of optimal contracts under informational asymmetries

    Extending the verifiability approach framework: The effect of initial questioning

    The verifiability approach (VA) is a lie‐detection tool that examines reported checkable details. Across two studies, we attempt to exploit liars' preferred strategy of repeating information by examining the effect of questioning adult interviewees before the VA. In Study 1, truth tellers (n = 34) and liars (n = 33) were randomly assigned to either an initial open or closed questioning condition. After initial questioning, participants were interviewed using the VA. In Study 2, truth tellers (n = 48) and liars (n = 48) were interviewed twice, with half of each veracity group randomly assigned to either the Information Protocol (an instruction describing the importance of reporting verifiable details) or a control condition. Only truth tellers revised their initial statement to include verifiable detail. This pattern was most pronounced when initial questioning was open (Study 1) and when the Information Protocol was employed (Study 2). Thus, liars' preferred strategy of maintaining consistency between statements appears exploitable using the VA

    Information Elicitation from Decentralized Crowd Without Verification

    Information Elicitation Without Verification (IEWV) refers to the problem of eliciting high-accuracy solutions from crowd members when the ground truth is unverifiable. A high-accuracy team solution (aggregated from members' solutions) requires members to exert effort, which must be incentivized properly. Previous research on IEWV mainly focused on scenarios where a central entity (e.g., the crowdsourcing platform) provides incentives to motivate crowd members, but the proposed designs do not apply to practical situations where no central entity exists. This paper studies the overlooked decentralized IEWV scenario, where crowd members act as both incentive contributors and task solvers. We model the interactions among members with heterogeneous valuations of team solution accuracy as a two-stage game, where each member decides her incentive contribution strategy in Stage 1 and her effort exertion strategy in Stage 2. We analyze members' equilibrium behaviors under three incentive allocation mechanisms: Equal Allocation (EA), Output Agreement (OA), and Shapley Value (SV). We show that at an equilibrium under any allocation mechanism, a low-valuation member exerts no more effort than a high-valuation member. Counter-intuitively, at an equilibrium under SV, a low-valuation member contributes incentives to the collaboration while a high-valuation member does not. This is because a high-valuation member, who values the aggregated team solution more, needs fewer incentives to exert effort. In addition, when members' valuations are sufficiently heterogeneous, SV leads to team solution accuracy and social welfare no smaller than under EA and OA
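    The Shapley Value allocation referenced above can be sketched for a two-member team; the characteristic function below is a toy of my own, not the paper's accuracy model. Each member's share is her marginal contribution averaged over all orders in which members could join the coalition, so any synergy from collaborating is split equally.

```python
from itertools import permutations

def shapley_values(players, value):
    # Average each player's marginal contribution over all join orders.
    shap = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            shap[p] += value(with_p) - value(coalition)
            coalition = with_p
    return {p: shap[p] / len(orders) for p in shap}

# Toy characteristic function: standalone contributions of 2.0 (the
# high-valuation member) and 1.0 (the low-valuation member), plus a
# synergy bonus of 1.0 realized only when both collaborate.
standalone = {"high": 2.0, "low": 1.0}

def team_value(coalition):
    bonus = 1.0 if len(coalition) == 2 else 0.0
    return sum(standalone[p] for p in coalition) + bonus

shares = shapley_values(["high", "low"], team_value)
```

In this toy game each member keeps her standalone contribution and the 1.0 synergy bonus is split evenly, so the shares are 2.5 and 1.5; the shares also sum exactly to the grand-coalition value (efficiency), one of the defining axioms of the Shapley value.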
    • 
