
    Emotions and Deception Detection

    Humans have developed a complex social structure which relies heavily on communication between members. However, not all communication is honest. Distinguishing honest from deceptive information is clearly a useful skill, yet individuals possess only a weak ability to discriminate veracity. As others will not willingly admit they are lying, one must rely on other information to discern veracity. In deception detection, individuals are told to rely on behavioural indices to discriminate lies from truths. One source of such indices is the emotions displayed by another. This thesis focuses on the role that emotions play in the ability to detect deception, exploring the reasons for low judgemental accuracy when individuals focus on emotion information. I aim to demonstrate that emotion recognition does not aid the detection of deception and can result in decreased accuracy. This is attributed to the biasing effect of emotion recognition on veracity judgements, stemming from decoders' inability to discriminate the authenticity of emotional cues. To support my claims, I will demonstrate that decoders fail to make rational judgements regarding veracity, even when allowed to pool the knowledge of multiple decoders, and disprove the notion that decoders can utilise emotional cues, either innately or through training, to detect deception. I assert, and find, that decoders are poor at discriminating between genuine and deceptive emotional displays, and I advocate for a new conceptualisation of emotional cues in veracity judgements. Finally, I illustrate the importance of behavioural information in detecting deception using two approaches aimed at improving the process of separating lies from truths. First, I address the role of situational factors in detecting deception, demonstrating their impact on decoding ability. Second, I introduce a new technique for improving accuracy, passive lie detection, which utilises body postures that aid decoders in processing behavioural information. The research concludes by suggesting that deception detection should focus on improving information processing and the accurate classification of emotional information.

    Acting Surprised: Comparing Perceptions of Different Dynamic Deliberate Expressions

    People are accurate at classifying emotions from facial expressions but much poorer at determining whether such expressions are spontaneously felt or deliberately posed. We explored whether the method senders use to produce an expression influences decoders' ability to discriminate authenticity, drawing inspiration from two well-known acting techniques: the Stanislavski (internal) and Mimic (external) methods. We compared spontaneous surprise expressions in response to a jack-in-the-box (genuine condition) with posed displays of senders who focused either on their past affective state (internal condition) or on the outward expression (external condition). Although decoders performed better than chance at discriminating the authenticity of all expressions, their accuracy was lower when classifying external surprise than internal surprise. Decoders also found it harder to discriminate external surprise from spontaneous surprise and were less confident in these decisions, perceiving the external displays to be similarly intense but less genuine-looking. The findings suggest that senders are capable of voluntarily producing genuine-looking expressions of emotions with minimal effort, especially by mimicking a genuine expression. Implications for research on emotion recognition are discussed.

    Judgments in the Sharing Economy: The Effect of User-Generated Trust and Reputation Information on Decision-Making Accuracy and Bias

    The growing ecosystem of peer-to-peer enterprise – the Sharing Economy (SE) – has brought with it a substantial change in how we access and provide goods and services. Within the SE, individuals make decisions based mainly on user-generated trust and reputation information (TRI). Recent research indicates that the use of such information tends to produce a positivity bias in the perceived trustworthiness of fellow users. Across two experimental studies performed on an artificial SE accommodation platform, we test whether users’ judgments can be accurate when they are presented with diagnostic information about the quality of the profiles they see, or whether these overly positive perceptions persist. In Study 1, we find that users are quite accurate overall (70%) at determining the quality of a profile, both when presented with full profiles and when presented with profiles containing the three TRI elements they had selected as useful for their decision-making. However, users tended to exhibit an “upward quality bias” when making errors. In Study 2, we leveraged patterns of frequently vs. infrequently selected TRI elements to examine whether users have insight into which elements are more diagnostic, and we find that presenting frequently selected TRI elements improved users’ accuracy. Overall, our studies demonstrate that – positivity bias notwithstanding – users can be remarkably accurate in their online SE judgments.

    Bayesian generalized linear mixed effects models for deception detection analyses, or: How I learned to stop aggregating veracity judgments and embraced Bayesian Mixed Effects models

    Historically, deception detection research has relied on factorial analyses of response accuracy to make inferences. But this practice overlooks important sources of variability, resulting in potentially misleading estimates, and may conflate response bias with participants’ underlying sensitivity in discriminating lies from truths. We offer an alternative approach using Bayesian Generalized Linear Mixed Models (BGLMMs) within a Signal Detection Theory (SDT) framework to address these limitations. Our approach incorporates individual differences from both judges and senders, which are a principal source of spurious findings in deception research. By avoiding data transformations and aggregations, this methodology outperforms traditional methods and provides more informative and reliable effect estimates. The proposed framework offers researchers a powerful tool for analyzing deception data and advances our understanding of veracity judgments. All code and data are openly available.
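
    The abstract describes the modelling approach only at a high level. As an illustration, the sketch below shows one way such an SDT-style Bayesian probit mixed model could be specified in Python with PyMC on simulated judge-by-sender data; the library choice, variable names, priors, and simulated data are assumptions for this sketch, not the authors' openly available code.

```python
import numpy as np
from scipy.stats import norm
import pymc as pm

# Simulated stand-in data: each of 30 judges rates one statement from each of 20 senders.
rng = np.random.default_rng(1)
n_judges, n_senders = 30, 20
judge = np.repeat(np.arange(n_judges), n_senders)     # judge index per trial
sender = np.tile(np.arange(n_senders), n_judges)      # sender index per trial
is_lie = rng.integers(0, 2, size=judge.size)          # 1 = statement is actually a lie
# Generate "this is a lie" responses with modest sensitivity and a slight truth bias.
latent = -0.4 + 0.6 * is_lie + rng.normal(0.0, 0.3, size=judge.size)
said_lie = rng.binomial(1, norm.cdf(latent))

with pm.Model() as sdt_bglmm:
    # Population-level SDT parameters: response criterion (bias) and sensitivity (d').
    criterion = pm.Normal("criterion", 0.0, 1.0)
    dprime = pm.Normal("dprime", 0.0, 1.0)
    # Judge- and sender-level deviations: the individual differences that
    # aggregating responses over trials would otherwise hide.
    sd_judge = pm.HalfNormal("sd_judge", 1.0)
    sd_sender = pm.HalfNormal("sd_sender", 1.0)
    judge_dev = pm.Normal("judge_dev", 0.0, sd_judge, shape=n_judges)
    sender_dev = pm.Normal("sender_dev", 0.0, sd_sender, shape=n_senders)
    # Probit link: P("lie" response) = Phi(-criterion + d' * is_lie + deviations).
    eta = -criterion + dprime * is_lie + judge_dev[judge] + sender_dev[sender]
    p_lie = pm.Deterministic("p_lie", pm.math.invprobit(eta))
    pm.Bernoulli("said_lie", p=p_lie, observed=said_lie)
    idata = pm.sample(1000, tune=1000, target_accept=0.9, random_seed=1)
```

    Under this parameterisation the intercept maps onto response bias and the veracity slope onto sensitivity (d'), while the judge and sender terms absorb the between-person variability the abstract highlights; the authors' actual model structure, priors, and software may differ.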

    COS Ambassadors

    A collection of materials and resources for COS ambassadors