    Quantifying social performance: A review with implications for further work

    Human social performance has been a focus of theory and investigation for more than a century. Attempts to quantify social performance have focused on self-report and non-social performance measures grounded in intelligence-based theories. An expertise framework, when applied to individual differences in social interaction performance, offers novel insights and methods of quantification that could address limitations of prior approaches. The purposes of this review are threefold. First, to define the central concepts related to individual differences in social performance, with a particular focus on the intelligence-based framework that has dominated the field. Second, to make an argument for a revised conceptualization of individual differences in social–emotional performance as a form of social expertise. In support of this second aim, the putative components of social–emotional expertise and the potential means for their assessment will be outlined. To end, the implications of an expertise-based conceptual framework for the application of computational modeling approaches in this area will be discussed. Taken together, expertise theory and computational modeling methods have the potential to advance quantitative assessment of social interaction performance.

    Auditory communication in domestic dogs: vocal signalling in the extended social environment of a companion animal

    Domestic dogs produce a range of vocalisations, including barks, growls, and whimpers, which are shared with other canid species. The source–filter model of vocal production can be used as a theoretical and applied framework to explain how and why the acoustic properties of some vocalisations are constrained by physical characteristics of the caller, whereas others are more dynamic, influenced by transient states such as arousal or motivation. This chapter thus reviews how and why particular call types are produced to transmit specific types of information, and how such information may be perceived by receivers. As domestication is thought to have caused a divergence in the vocal behaviour of dogs as compared to the ancestral wolf, evidence of both dog–human and human–dog communication is considered. Overall, it is clear that domestic dogs have the potential to acoustically broadcast a range of information, which is available to conspecific and human receivers. Moreover, dogs are highly attentive to human speech and are able to extract speaker identity, emotional state, and even some types of semantic information.

    Laugh Like You Mean It: Authenticity Modulates Acoustic, Physiological and Perceptual Properties of Laughter

    Several authors have recently presented evidence for perceptual and neural distinctions between genuine and acted expressions of emotion. Here, we describe how differences in authenticity affect the acoustic and perceptual properties of laughter. In an acoustic analysis, we contrasted spontaneous, authentic laughter with volitional, fake laughter, finding that spontaneous laughter was higher in pitch, longer in duration, and had different spectral characteristics from volitional laughter produced under full voluntary control. In a behavioral experiment, listeners perceived spontaneous and volitional laughter as distinct in arousal, valence, and authenticity. Multiple regression analyses further revealed that acoustic measures could significantly predict these affective and authenticity judgements, with the notable exception of authenticity ratings for spontaneous laughter. The combination of acoustic predictors differed according to laughter type, with volitional laughter ratings uniquely predicted by harmonics-to-noise ratio (HNR). To better understand the role of HNR in terms of the physiological effects on vocal tract configuration as a function of authenticity during laughter production, we ran an additional experiment in which phonetically trained listeners rated each laugh for breathiness, nasality, and mouth opening. Volitional laughter was found to be significantly more nasal than spontaneous laughter, and the item-wise physiological ratings also significantly predicted the affective judgements obtained in the first experiment. Our findings suggest that, as an alternative to traditional acoustic measures, ratings of phonatory and articulatory features can be useful descriptors of the acoustic qualities of nonverbal emotional vocalizations, and of their perceptual implications.
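
    The multiple regression analyses described above can be pictured with a minimal sketch: item-wise ratings are regressed on acoustic measures such as mean F0, duration, and harmonics-to-noise ratio (HNR). The data below are randomly generated placeholders, not the study's stimuli or analysis pipeline.

        # Minimal sketch (placeholder data): item-wise multiple regression predicting
        # mean ratings from acoustic measures such as F0, duration, and HNR.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n_laughs = 40
        acoustics = np.column_stack([
            rng.normal(350, 60, n_laughs),   # mean F0 in Hz (placeholder values)
            rng.normal(1.5, 0.4, n_laughs),  # duration in s (placeholder values)
            rng.normal(12, 3, n_laughs),     # HNR in dB (placeholder values)
        ])
        ratings = rng.uniform(1, 7, n_laughs)  # placeholder mean authenticity ratings

        # Ordinary least squares with an intercept; the coefficients indicate how
        # each acoustic measure predicts the ratings across laughter items.
        model = sm.OLS(ratings, sm.add_constant(acoustics)).fit()
        print(model.summary())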

    Estimation of origin-destination matrix from traffic counts: the state of the art

    The estimation of an up-to-date origin-destination matrix (ODM) from obsolete trip data, using currently available information, is essential in transportation planning, traffic management and operations. Over the last two decades, researchers have explored various methods of estimating the ODM from traffic count data. ODMs fall into two categories: static and dynamic. This paper reviews studies on both static and dynamic ODM estimation, on reliability measures for the estimated matrix, and on determining the set of traffic link count stations required to acquire the maximum information for estimating a reliable matrix.
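
    As a concrete illustration of the static estimation problem reviewed above, the sketch below treats ODM estimation as a regularized non-negative least-squares fit: OD flows are chosen to reproduce the observed link counts while staying close to an outdated prior matrix. The assignment matrix, counts, prior flows, and the weight lam are illustrative placeholders rather than values or a method taken from any specific study.

        # Minimal sketch (illustrative data): static ODM estimation from link counts
        # as regularized non-negative least squares.
        import numpy as np
        from scipy.optimize import lsq_linear

        # A[i, j] = proportion of OD pair j's flow that traverses counted link i
        A = np.array([
            [1.0, 0.0, 0.5],
            [0.0, 1.0, 0.5],
        ])
        c = np.array([700.0, 900.0])               # observed link counts
        x_prior = np.array([500.0, 600.0, 300.0])  # outdated OD flows used as a prior
        lam = 0.1                                  # weight pulling the estimate toward the prior

        # Solve min ||A x - c||^2 + lam * ||x - x_prior||^2 subject to x >= 0 by
        # stacking the count-matching and prior-matching terms into one system.
        A_aug = np.vstack([A, np.sqrt(lam) * np.eye(len(x_prior))])
        b_aug = np.concatenate([c, np.sqrt(lam) * x_prior])
        result = lsq_linear(A_aug, b_aug, bounds=(0, np.inf))
        print("Estimated OD flows:", result.x)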

    Speaker Sex Perception from Spontaneous and Volitional Nonverbal Vocalizations.

    In two experiments, we explore how speaker sex recognition is affected by vocal flexibility, introduced by volitional and spontaneous vocalizations. In Experiment 1, participants judged speaker sex from two spontaneous vocalizations, laughter and crying, and volitionally produced vowels. Striking effects of speaker sex emerged: For male vocalizations, listeners' performance was significantly impaired for spontaneous vocalizations (laughter and crying) compared to a volitional baseline (repeated vowels), a pattern that was also reflected in longer reaction times for spontaneous vocalizations. Further, performance was less accurate for laughter than crying. For female vocalizations, a different pattern emerged. In Experiment 2, we largely replicated the findings of Experiment 1 using spontaneous laughter, volitional laughter and (volitional) vowels: here, performance for male vocalizations was impaired for spontaneous laughter compared to both volitional laughter and vowels, providing further evidence that differences in volitional control over vocal production may modulate our ability to accurately perceive speaker sex from vocal signals. For both experiments, acoustic analyses showed relationships between stimulus fundamental frequency (F0) and the participants' responses. The higher the F0 of a vocal signal, the more likely listeners were to perceive a vocalization as being produced by a female speaker, an effect that was more pronounced for vocalizations produced by males. We discuss the results in terms of the availability of salient acoustic cues across different vocalizations.
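
    The reported relationship between F0 and "female" responses can be sketched as a simple logistic regression of listener response on stimulus F0; the data below are fabricated for illustration and are not the study's stimuli or responses.

        # Minimal sketch (fabricated data): logistic regression relating stimulus F0
        # to the probability of a "female" response.
        import numpy as np
        import statsmodels.api as sm

        f0 = np.array([110, 130, 150, 180, 210, 240, 270, 300], dtype=float)  # Hz
        perceived_female = np.array([0, 0, 0, 1, 0, 1, 1, 1])                 # listener responses

        model = sm.Logit(perceived_female, sm.add_constant(f0)).fit(disp=False)
        # A positive F0 coefficient means higher-pitched stimuli are more likely
        # to be judged as produced by a female speaker.
        print(model.params)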

    Neural correlates of the affective properties of spontaneous and volitional laughter types

    Previous investigations of vocal expressions of emotion have identified acoustic and perceptual distinctions between expressions of different emotion categories, and between spontaneous and volitional (or acted) variants of a given category. Recent work on laughter has identified relationships between acoustic properties of laughs and their perceived affective properties (arousal and valence) that are similar across spontaneous and volitional types (Bryant & Aktipis, 2014; Lavan et al., 2016). In the current study, we explored the neural correlates of such relationships by measuring modulations of the BOLD response in the presence of itemwise variability in the subjective affective properties of spontaneous and volitional laughter. Across all laughs, and within spontaneous and volitional sets, we consistently observed linear increases in the response of bilateral auditory cortices (including Heschl's gyrus and superior temporal gyrus [STG]) associated with higher ratings of perceived arousal, valence and authenticity. Areas in the anterior medial prefrontal cortex (amPFC) showed negative linear correlations with valence and authenticity ratings across the full set of spontaneous and volitional laughs; in line with previous research (McGettigan et al., 2015; Szameitat et al., 2010), we suggest that this reflects increased engagement of these regions in response to laughter of greater social ambiguity. Strikingly, an investigation of higher-order relationships between the entire laughter set and the neural response revealed a positive quadratic profile of the BOLD response in right-dominant STG (extending onto the dorsal bank of the STS), where this region responded most strongly to laughs rated at the extremes of the authenticity scale. While previous studies claimed a role for right STG in bipolar representation of emotional valence, we instead argue that this region may in fact exhibit a relatively categorical response to emotional signals, whether positive or negative.
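
    One way to picture the linear and quadratic BOLD profiles described above is as parametric modulators built from item-wise ratings, where a positive quadratic weight captures stronger responses at both ends of the rating scale. The snippet below is an illustrative construction with placeholder ratings, not the study's actual GLM.

        # Minimal sketch (placeholder ratings): linear and quadratic parametric
        # modulators for an item-wise rating regressor.
        import numpy as np

        ratings = np.array([1.2, 2.5, 3.8, 4.9, 6.1, 6.8])  # placeholder item-wise ratings
        linear = ratings - ratings.mean()                    # mean-centred linear modulator
        quadratic = linear ** 2
        quadratic -= quadratic.mean()
        # Orthogonalise the quadratic term against the linear term so each
        # regressor explains unique variance in the GLM.
        quadratic -= (quadratic @ linear) / (linear @ linear) * linear
        print(np.column_stack([linear, quadratic]))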

    Domestic horses (Equus caballus) discriminate between negative and positive human nonverbal vocalisations

    The ability to discriminate between emotion in vocal signals is highly adaptive in social species. It may also be adaptive for domestic species to distinguish such signals in humans. Here we present a playback study investigating whether horses spontaneously respond in a functionally relevant way towards positive and negative emotion in human nonverbal vocalisations. We presented horses with positively and negatively valenced human vocalisations (laughter and growling, respectively) in the absence of all other emotional cues. Horses were found to adopt a freeze posture for significantly longer immediately after hearing negative versus positive human vocalisations, suggesting that negative voices promote vigilance behaviours and may therefore be perceived as more threatening. In support of this interpretation, horses held their ears forwards for longer and performed fewer ear movements in response to negative voices, further suggesting increased vigilance. In addition, horses showed a right-ear/left-hemisphere bias when attending to positive compared with negative voices, suggesting that horses perceive laughter as more positive than growling. These findings raise interesting questions about the potential for universal discrimination of vocal affect and the role of lifetime learning versus other factors in interspecific communication.

    Evaluation and training of Executive Functions in genocide survivors. The case of Yazidi children

    Executive Functions (EFs) development is critically affected by stress and trauma, as well as by the socio-economic context in which children grow up (Welsh, Nix, Blair, Bierman, & Nelson, 2010). Research in this field is surprisingly lacking in relation to war contexts. This study represents a first attempt at addressing this topic by evaluating EFs in Yazidi children. The Yazidi community is an ethnic and religious minority living in Iraq. From August 2014 onwards, the Yazidi community has been the target of several atrocities perpetrated by ISIS and described as genocide by the international community at large. The University of Trieste, thanks to a program financed by the Friuli Venezia Giulia Region, developed a study aimed at (a) evaluating hot and cool EFs in children living in a war context and (b) developing a specific training method to enhance hot and cool EFs in Yazidi children of preschool age (N = 53). Data related to this group of children were compared with a sample of typically developing Italian children randomly assigned to either an EFs training group (N = 55) or a passive control group (N = 51). Results indicate different baselines in hot EFs in the Yazidi and Italian samples and a significant effect of the program on both trained groups, especially in tasks measuring hot EFs. Data are discussed in terms of hot and cool EFs in children growing up in adverse environments, as well as the evaluation of educational and developmental opportunities to prevent children who survived genocide from becoming a "lost generation".

    An exploratory study of the relationship between four candidate genes and neurocognitive performance in adult ADHD

    Since neurocognitive performance is a possible endophenotype for Attention Deficit Hyperactivity Disorder (ADHD), we explored the relationship between four genetic polymorphisms and neurocognitive performance in adults with ADHD. We genotyped a sample of 45 adults with ADHD at four candidate polymorphisms for the disorder (DRD4 48 base pair (bp) repeat, DRD4 120 bp duplicated repeat, SLC6A3 (DAT1) 40 bp variable number of tandem repeats (VNTR), and COMT Val158Met). We then sub-grouped the sample for each polymorphism by genotype or by the presence of the (putative) ADHD risk allele and compared the performance of the subgroups on a large battery of neurocognitive tests. The COMT Val158Met polymorphism was related to differences in IQ and reaction time, both of the DRD4 polymorphisms (48 bp repeat and 120 bp duplication) showed an association with verbal memory skills, and the SLC6A3 40 bp VNTR polymorphism could be linked to differences in inhibition. Interestingly, the presence of the risk alleles in DRD4 and SLC6A3 was related to better cognitive performance. Our findings contribute to an improved understanding of the functional implications of risk genes for ADHD.