
    The effects of cheating on deception detection during a social dilemma

    Research by social psychologists and others consistently finds that people are poor at detecting attempted deception by others. However, Tooby and Cosmides (cognitive psychologists who favor evolutionary analyses of behavior) have argued and shown that humans have evolved a special “cognitive module” for detecting cheaters. Their research suggests that people are good at detecting cheating by group members. These two literatures seem to be at odds with one another. The hypothesis of this research was that when participants are told a lie by a fellow group member whose attempted deception involves cheating on a task that affects their outcomes, they will be good at detecting deception. In this experiment, participants played blackjack in groups using a social dilemma paradigm. Participants’ outcomes were either interdependent with or independent of a confederate’s outcomes. It was predicted that participants whose outcomes were interdependent with the confederate would be better at detecting deception by the confederate than participants whose outcomes were independent of the confederate’s outcomes. Results indicate that, when judging other participants’ lies, interdependent players were more successful at deception detection than independent players but were not more sensitive to the lies. This effect may be driven by the truth bias: people assume that their interaction partners are truthful, which would explain why sensitivity measures (which remove response biases) did not show the hypothesized effect. Interdependent players were not more successful or sensitive when judging the confederate’s lies. The failure to find the hypothesized effect may be due to methodological factors: both participants may have had their cheating-detection modules activated by the instructions for the experiment, which implied that cheating could occur.
Overall success rates support this idea, because they were significantly higher than the success rates reached in most deception detection research (around 50%), which may indicate that both participants’ cheating-detection modules were active. Results also indicate that as the number of lies told increases, overall success decreases but success at detecting lies and sensitivity increase. Thus, the more lies that are told, the better people are at catching them.
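The abstract's distinction between raw success rates and sensitivity rests on signal detection theory, where the sensitivity index d′ separates true discrimination ability from a response bias such as the truth bias. A minimal sketch of the standard d′ computation, using hypothetical counts (the function name and the example numbers are illustrative, not taken from the study):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' from signal detection theory.

    Here a "hit" is correctly calling a lie a lie, and a "false alarm"
    is calling a truth a lie. A log-linear correction (add 0.5 per cell)
    keeps rates of exactly 0 or 1 finite.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# A judge with strong truth bias who labels almost everything "truth" can
# score near 50% overall yet have d' near 0; a judge who catches 14 of 20
# lies while flagging only 6 of 20 truths shows genuine sensitivity:
print(d_prime(14, 6, 6, 14))
```

Because d′ subtracts the false-alarm tendency from the hit rate, a blanket bias toward judging statements as truthful raises neither quantity selectively, which is why success rates and sensitivity can diverge as the abstract reports.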

    Honesty Without Truth: Lies, Accuracy, and the Criminal Justice Process

    Focusing on “lying” is a natural response to uncertainty but too narrow a concern. Honesty and truth are not the same thing, and conflating them can actually inhibit accuracy. In several settings across investigations and trials, the criminal justice system elevates compliant statements, misguided beliefs, and confident opinions while excluding more complex evidence. Error often results. Some interrogation techniques, for example, privilege cooperation over information. Those interactions can yield incomplete or false statements, confessions, and even guilty pleas. Because of the impeachment rules that purportedly prevent perjury, the most knowledgeable witnesses may be precluded from taking the stand. The current construction of the Confrontation Clause right also excludes some reliable evidence—especially from victim witnesses—because it favors face-to-face conflict even though overrated demeanor cues can mislead. And courts permit testimony from forensic experts about pattern matches, such as bite marks and ballistics, if those witnesses find their own methodologies persuasive, despite recent studies discrediting their techniques. Exploring the points of disconnect between honesty and truth exposes some flaws in the criminal justice process and some opportunities to advance fact-finding, truth-seeking, and accuracy instead. At a time when “post-truth” challenges to shared baselines beyond the courtroom grow more pressing, scaffolding legal institutions so they can provide needed structure and helpful models seems particularly important. Assessing the legitimacy of legal outcomes and fostering the engagement necessary to reach just conclusions despite adversarial positions could also have an impact on declining facts and decaying trust in broader public life.

    On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection

    Humans are the final decision makers in critical tasks that involve ethical and legal concerns, ranging from recidivism prediction, to medical diagnosis, to fighting against fake news. Although machine learning models can sometimes achieve impressive performance in these tasks, these tasks are not amenable to full automation. To realize the potential of machine learning for improving human decisions, it is important to understand how assistance from machine learning models affects human performance and human agency. In this paper, we use deception detection as a testbed and investigate how we can harness explanations and predictions of machine learning models to improve human performance while retaining human agency. We propose a spectrum between full human agency and full automation, and develop varying levels of machine assistance along the spectrum that gradually increase the influence of machine predictions. We find that without showing predicted labels, explanations alone slightly improve human performance in the end task. In comparison, human performance is greatly improved by showing predicted labels (>20% relative improvement) and can be further improved by explicitly suggesting strong machine performance. Interestingly, when predicted labels are shown, explanations of machine predictions induce a similar level of accuracy as an explicit statement of strong machine performance. Our results demonstrate a tradeoff between human performance and human agency and show that explanations of machine predictions can moderate this tradeoff. Comment: 17 pages, 19 figures, in Proceedings of ACM FAT* 2019; dataset & demo available at https://deception.machineintheloop.co

    How private is private information? The ability to spot deception in an economic game

    We provide experimental evidence on the ability to detect deceit in a buyer-seller game with asymmetric information. Sellers have private information about the buyer's valuation of a good and sometimes have incentives to mislead buyers. We examine whether buyers can spot deception in face-to-face encounters. We vary (1) whether or not the buyer can interrogate the seller, and (2) the contextual richness of the situation. We find that the buyers' prediction accuracy is above chance levels, and that interrogation and contextual richness are important factors determining that accuracy. These results show that there are circumstances in which part of the information asymmetry is eliminated by people's ability to spot deception.
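The claim that "prediction accuracy is above chance levels" is typically established with an exact binomial test against a 50% chance baseline. A minimal sketch with purely hypothetical counts (the function name and the 27-of-40 figures are illustrative, not the paper's data):

```python
from math import comb

def binom_p_above_chance(successes, trials, p=0.5):
    """One-sided exact binomial p-value: P(X >= successes) when each
    of `trials` independent judgments is correct with probability p."""
    return sum(comb(trials, k) * p**k * (1 - p) ** (trials - k)
               for k in range(successes, trials + 1))

# E.g., a buyer judging 40 statements and getting 27 right: is that
# distinguishable from coin-flipping at the 5% level?
p_value = binom_p_above_chance(27, 40)
print(p_value < 0.05)
```

The test conditions on the number of judgments per subject, so pooling across buyers who saw different numbers of statements would require a different analysis (e.g., a mixed model); the sketch covers only the single-count case.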

    What science can teach us about “Enhanced Interrogation”

    No abstract available