
    Enactivism, other minds, and mental disorders

    Although enactive approaches to cognition vary in character and scope, all endorse several core claims. The first is that cognition is tied to action. The second is that cognition is composed of more than just in-the-head processes; cognitive activities are externalized via features of our embodiment and in our ecological dealings with the people and things around us. I appeal to these two enactive claims to consider a view called "direct social perception" (DSP): the idea that we can sometimes perceive features of other minds directly in the character of their embodiment and environmental interactions. I argue that if DSP is true, we can probably also perceive certain features of mental disorders. I draw upon the developmental psychologist Daniel Stern's notion of "forms of vitality" (largely overlooked in these debates) to develop this idea, and I use autism as a case study. I argue further that an enactive approach to DSP can clarify some of the ways we play a regulative role in shaping the temporal and phenomenal character of the disorder in question, and that it may therefore have practical significance for both the clinical and therapeutic encounter.

    Neural Correlates of Intentional Communication

    We know a great deal about the neurophysiological mechanisms supporting instrumental actions, i.e., actions designed to alter the physical state of the environment. In contrast, little is known about our ability to select communicative actions, i.e., actions designed to modify the mental state of another agent. We have recently provided novel empirical evidence for a mechanism in which a communicator selects his actions on the basis of a prediction of the communicative intentions that an addressee is most likely to attribute to those actions. The main novelty of those findings was that this prediction of intention recognition is cerebrally implemented within the communicator's own intention recognition system and is modulated by the ambiguity in meaning of the communicative acts, not by their sensorimotor complexity. The characteristics of this predictive mechanism support the notion that human communicative abilities are distinct from both sensorimotor and linguistic processes.

    Associations Between Sympathetic Nervous System Synchrony, Movement Synchrony, and Speech in Couple Therapy

    Background: Research on interpersonal synchrony has mostly focused on a single modality, and hence little is known about the connections between different types of social attunement. In this study, the relationship between sympathetic nervous system synchrony, movement synchrony, and the amount of speech was studied in couple therapy. Methods: Data comprised 12 couple therapy cases (24 clients and 10 therapists working in pairs as co-therapists). Synchrony in electrodermal activity, head and body movement, and the amount of speech and simultaneous speech during the sessions were analyzed in 12 sessions at the start of couple therapy (all 72 dyads) and eight sessions at the end of therapy (48 dyads). Synchrony was calculated from cross-correlations using time lags and compared to segment-shuffled pseudo-synchrony. The associations between the synchrony modalities and speech were analyzed using complex modeling (Mplus). Findings: Couple therapy participants' synchrony mostly occurred in-phase (positive synchrony). Anti-phase (negative) synchrony was more common in movement than in sympathetic nervous system activity. Synchrony in sympathetic nervous system activity correlated with movement synchrony only in the client-therapist dyads (r = 0.66 body synchrony, r = 0.59 head synchrony). Movement synchrony and the amount of speech correlated negatively between spouses (r = −0.62 body synchrony, r = −0.47 head synchrony) and co-therapists (r = −0.39 body synchrony, r = −0.28 head synchrony), meaning that the more time the dyad members talked during the session, the less bodily synchrony they exhibited. Conclusion: The different roles and relationships in couple therapy were associated with the extent to which synchrony modalities were linked with each other. In the relationship between clients and therapists, synchrony in arousal levels and movement "walked hand in hand", whereas in the other relationships (spouse or colleague) they were not linked. Generally, more talk time by the therapy participants was associated with anti-phase movement synchrony. If, as suggested, emotions prepare us for motor action, an important finding of this study is that sympathetic nervous system activity can also synchronize with that of others independently of motor action.
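    For readers unfamiliar with the approach, the sketch below illustrates the general idea behind this kind of analysis: lag-based cross-correlation synchrony benchmarked against segment-shuffled pseudo-synchrony. It is not the authors' analysis code; the sampling rate, lag range, segment length, and function names are illustrative assumptions.

```python
# Minimal sketch of lagged cross-correlation synchrony with a segment-shuffled
# pseudo-synchrony baseline. Parameters and names are illustrative assumptions.
import numpy as np

def lagged_xcorr(x, y, max_lag):
    """Pearson correlations over lags in [-max_lag, max_lag]; returns the
    strongest in-phase (positive) and anti-phase (negative) values."""
    rs = []
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            a, b = x[:lag], y[-lag:]
        elif lag > 0:
            a, b = x[lag:], y[:-lag]
        else:
            a, b = x, y
        if len(a) > 2 and np.std(a) > 0 and np.std(b) > 0:
            rs.append(np.corrcoef(a, b)[0, 1])
    return max(rs), min(rs)

def pseudo_synchrony(x, y, max_lag, segment_len, n_shuffles=100, seed=0):
    """Baseline: shuffle y in segments to destroy temporal alignment, then recompute."""
    rng = np.random.default_rng(seed)
    n_seg = len(y) // segment_len
    segments = [y[i * segment_len:(i + 1) * segment_len] for i in range(n_seg)]
    in_phase = []
    for _ in range(n_shuffles):
        order = rng.permutation(n_seg)
        y_shuf = np.concatenate([segments[i] for i in order])
        in_phase.append(lagged_xcorr(x[:len(y_shuf)], y_shuf, max_lag)[0])
    return float(np.mean(in_phase))

# Example: electrodermal activity of two interaction partners, sampled at 4 Hz.
rng = np.random.default_rng(1)
eda_a, eda_b = rng.standard_normal(2400), rng.standard_normal(2400)   # placeholder signals
in_phase, anti_phase = lagged_xcorr(eda_a, eda_b, max_lag=20)          # lags of +/- 5 s at 4 Hz
pseudo = pseudo_synchrony(eda_a, eda_b, max_lag=20, segment_len=120)   # 30 s segments
print(f"in-phase={in_phase:.3f}  anti-phase={anti_phase:.3f}  pseudo baseline={pseudo:.3f}")
```

    Observed synchrony is considered meaningful only to the extent that it exceeds the pseudo-synchrony baseline, which controls for similarity that would arise even between unaligned signals.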

    Audio-as-Data Tools: Replicating Computational Data Processing

    The rise of audio-as-data in social science research accentuates a fundamental challenge: establishing reproducible and reliable methodologies to guide this emerging area of study. In this study, we focus on the reproducibility of audio-as-data preparation methods in computational communication research and evaluate the accuracy of popular audio-as-data tools. We analyze automated transcription and computational phonology tools applied to 200 episodes of conservative talk shows hosted by Rush Limbaugh and Alex Jones. Our findings reveal that the tools we tested are highly accurate. However, although different transcription and audio signal processing tools yield similar results, subtle yet significant variations could impact the findings' reproducibility. Specifically, we find that discrepancies in automated transcriptions and in auditory features such as pitch and intensity underscore the need for meticulous reproduction of data preparation procedures. These insights into the variability introduced by different tools stress the importance of detailed methodological reporting and consistent processing techniques to ensure the replicability of research outcomes. Our study contributes to the broader discourse on replicability and reproducibility by highlighting the nuances of audio data preparation and advocating for more transparent and standardized practices in this area.
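    As an illustration of the kind of auditory-feature extraction whose reproducibility the study examines, the sketch below derives per-frame pitch and an intensity proxy from an audio file using librosa. The study's actual toolchain and parameter settings are not reproduced here; the frame sizes, pitch bounds, and file path are assumptions for illustration.

```python
# Minimal sketch of per-frame pitch and intensity extraction with librosa.
# Parameter choices are illustrative, not the study's settings.
import librosa
import numpy as np

def pitch_and_intensity(path, frame_length=2048, hop_length=512):
    y, sr = librosa.load(path, sr=None)               # keep the native sample rate
    # Fundamental frequency (pitch) via the pYIN tracker; unvoiced frames come back as NaN.
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"),
        sr=sr, frame_length=frame_length, hop_length=hop_length)
    # Intensity proxy: frame-wise RMS energy converted to dB.
    rms = librosa.feature.rms(y=y, frame_length=frame_length, hop_length=hop_length)[0]
    intensity_db = librosa.amplitude_to_db(rms, ref=np.max)
    return f0, intensity_db

# Reproducibility check (hypothetical): rerun with a second tool or parameter set
# and compare aligned frames, e.g. np.nanmean(np.abs(f0_tool_a - f0_tool_b)).
f0, intensity_db = pitch_and_intensity("episode_001.wav")   # hypothetical file name
```

    Even small differences in frame length, hop size, or voicing thresholds across tools can shift these feature tracks, which is the kind of variation the study flags as a threat to reproducibility.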

    Gesture and Speech in Interaction - 4th edition (GESPIN 4)

    The fourth edition of Gesture and Speech in Interaction (GESPIN) was held in Nantes, France. With more than 40 papers, these proceedings show just what a flourishing field of enquiry gesture studies continues to be. The keynote speeches of the conference addressed three different aspects of multimodal interaction: gesture and grammar, gesture acquisition, and gesture and social interaction. In a talk entitled Qualities of event construal in speech and gesture: Aspect and tense, Alan Cienki presented an ongoing research project on narratives in French, German and Russian, a project that focuses especially on the verbal and gestural expression of grammatical tense and aspect in narratives in the three languages. Jean-Marc Colletta's talk, entitled Gesture and Language Development: towards a unified theoretical framework, described the joint acquisition and development of speech and early conventional and representational gestures. In Grammar, deixis, and multimodality between code-manifestation and code-integration or why Kendon's Continuum should be transformed into a gestural circle, Ellen Fricke proposed a revisited grammar of noun phrases that integrates gestures as part of the semiotic and typological codes of individual languages. From a pragmatic and cognitive perspective, Judith Holler explored the use of gaze and hand gestures as means of organizing turns at talk as well as establishing common ground in a presentation entitled On the pragmatics of multi-modal face-to-face communication: Gesture, speech and gaze in the coordination of mental states and social interaction. Among the talks and posters presented at the conference, the vast majority of topics related, quite naturally, to gesture and speech in interaction, understood both in terms of mapping of units in different semiotic modes and of the use of gesture and speech in social interaction. Several presentations explored the effects of impairments (such as diseases or the natural ageing process) on gesture and speech. The communicative relevance of gesture and speech and audience design in natural interactions, as well as in more controlled settings like television debates and reports, was another topic addressed during the conference. Some participants also presented research on first and second language learning, while others discussed the relationship between gesture and intonation. While most participants presented research on gesture and speech from an observer's perspective, be it in semiotics or pragmatics, some nevertheless focused on another important aspect: the cognitive processes involved in language production and perception. Last but not least, participants also presented talks and posters on the computational analysis of gestures, whether involving external devices (e.g. mocap, Kinect) or concerning the use of specially designed computer software for the post-treatment of gestural data. Importantly, new links were made between semiotics and mocap data.

    Developing Human-Robot Interaction Based on Multimodal Emotion Recognition

    Automatic multimodal emotion recognition is a fundamental subject of interest in affective computing, with its main applications in human-computer interaction. The systems developed for this purpose consider combinations of different modalities based on vocal and visual cues. This thesis takes both modalities into account in order to develop an automatic multimodal emotion recognition system. More specifically, it takes advantage of the information extracted from speech and face signals. From speech signals, Mel-frequency cepstral coefficients, filter-bank energies and prosodic features are extracted. Two different strategies are considered for analyzing the facial data. First, geometric relations between facial landmarks, i.e. distances and angles, are computed. Second, each emotional video is summarized into a reduced set of key-frames, which are fed to a convolutional neural network trained to visually discriminate between the emotions. Afterward, the output confidence values of all the classifiers from both modalities are used to define a new feature space, and these values are learned for the final emotion label prediction in a late fusion. The experiments are conducted on the SAVEE, Polish, Serbian, eNTERFACE'05 and RML datasets. The results show significant performance improvements by the proposed system in comparison to existing alternatives, defining the current state-of-the-art on all the datasets. Additionally, we provide a review of emotional body gesture recognition systems proposed in the literature. The aim of this review is to help identify possible future research directions for enhancing the performance of the proposed system; specifically, we suggest that incorporating gesture data, another major component of the visual modality, could result in a more effective framework.
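    To make the late-fusion step concrete, the sketch below shows how confidence outputs from one acoustic and two visual classifiers can be stacked into a new feature space on which a final-stage classifier is trained. The model choice, array shapes, and variable names are illustrative assumptions rather than the thesis implementation.

```python
# Minimal sketch of late fusion over per-modality classifier confidences.
# Shapes, names, and the fusion classifier are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

n_samples, n_classes = 500, 6   # e.g. six basic emotion categories

# Per-modality class-probability outputs, standing in for:
#  - an acoustic model over MFCC / filter-bank / prosodic features,
#  - a geometric model over facial-landmark distances and angles,
#  - a CNN applied to key-frames summarizing each video.
rng = np.random.default_rng(0)
conf_acoustic = rng.dirichlet(np.ones(n_classes), n_samples)
conf_geometric = rng.dirichlet(np.ones(n_classes), n_samples)
conf_cnn = rng.dirichlet(np.ones(n_classes), n_samples)
labels = rng.integers(0, n_classes, n_samples)

# Late fusion: concatenate the three confidence vectors into one feature space.
fused = np.hstack([conf_acoustic, conf_geometric, conf_cnn])

# A final-stage classifier learns the emotion label from the fused confidences.
fusion_clf = LogisticRegression(max_iter=1000).fit(fused, labels)
predictions = fusion_clf.predict(fused)
```

    Training the final stage on confidence values rather than raw features lets each modality be modeled with the method best suited to it while still letting the fusion layer learn how much to trust each one.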