
    Social signalling as a framework for second-person neuroscience

    Despite the recent increase in second-person neuroscience research, it is still hard to understand which neurocognitive mechanisms underlie real-time social behaviours. Here, we propose that social signalling can help us understand social interactions both at the single- and two-brain level in terms of social signal exchanges between senders and receivers. First, we show how subtle manipulations of being watched provide an important tool to dissect meaningful social signals. We then focus on how social signalling can help us build testable hypotheses for second-person neuroscience, with the examples of imitation and gaze behaviour. Finally, we suggest that linking neural activity to specific social signals will be key to fully understanding the neurocognitive systems engaged during face-to-face interactions.

    Judgment of the Humanness of an Interlocutor Is in the Eye of the Beholder

    Despite tremendous advances in artificial language synthesis, no machine has so far succeeded in deceiving a human. Most research has focused on analyzing the behavior of “good” machines. We here choose the opposite strategy, analyzing the behavior of “bad” humans, i.e., humans perceived as machines. The Loebner Prize in Artificial Intelligence features humans and artificial agents trying to convince judges of their humanness via computer-mediated communication. Using this setting as a model, we investigated whether the linguistic behavior of human subjects perceived as non-human would enable us to identify some of the core parameters involved in the judgment of an agent's humanness. We analyzed descriptive and semantic aspects of dialogues in which subjects succeeded or failed to convince judges of their humanness. Using cognitive and emotional dimensions in a global behavioral characterization, we demonstrate important differences in the patterns of behavioral expressiveness of the judges depending on whether they perceived their interlocutor as human or machine. Furthermore, the indicators of interest displayed by the judges were predictive of the final judgment of humanness. Thus, we show that the judgment of an interlocutor's humanness during a social interaction depends not only on the interlocutor's behavior, but also on the judge. Our results thus demonstrate that the judgment of humanness is in the eye of the beholder.

    From movement kinematics to social cognition: the case of autism

    The way in which we move influences our ability to perceive, interpret and predict the actions of others. Thus, movements play an important role in social cognition. This review article appraises the literature concerning movement kinematics and motor control in individuals with autism, and argues that movement differences between typical and autistic individuals may contribute to bilateral difficulties in reciprocal social cognition.

    Vocal accommodation in human-computer interaction: modeling and integration into spoken dialogue systems

    With the rapidly increasing usage of voice-activated devices worldwide, verbal communication with computers is steadily becoming more common. Although speech is the principal natural manner of human communication, it is still challenging for computers, and users have grown accustomed to adjusting their speaking style for computers. Such adjustments occur naturally, and typically unconsciously, in humans during an exchange to control the social distance between the interlocutors and improve the conversation's efficiency. This phenomenon is called accommodation, and it occurs across various modalities of human communication, such as hand gestures, facial expressions, eye gaze, and lexical and grammatical choices. Vocal accommodation deals with phonetic-level changes occurring in segmental and suprasegmental features. A decrease in the difference between the speakers' feature realizations results in convergence, while an increasing distance leads to divergence. The lack of such mutual adjustments, made naturally by humans, in computers' speech creates a gap between human-human and human-computer interactions. Moreover, voice-activated systems currently speak in exactly the same manner to all users, regardless of their speech characteristics or realizations of specific features. Detecting phonetic variations and generating adaptive speech output would enhance user personalization, offer more human-like communication, and should ultimately improve the overall interaction experience. Thus, investigating these aspects of accommodation will help us understand and improve human-computer interaction. This thesis provides a comprehensive overview of the building blocks required for a roadmap toward the integration of accommodation capabilities into spoken dialogue systems. These include human-human and human-computer interaction experiments examining differences in vocal behavior, approaches for modeling these empirical findings, methods for introducing phonetic variations in synthesized speech, and a way to combine all these components into an accommodative system. While each component is a wide research field by itself, they depend on each other and hence should be considered jointly. The overarching goal of this thesis is therefore not only to show how each of these aspects can be further developed, but also to demonstrate and motivate the connections between them. Special emphasis is put throughout the thesis on the temporal aspect of accommodation. Humans constantly change their speech over the course of a conversation, so accommodation processes should be treated as continuous, dynamic phenomena. Measuring differences at a few discrete points, e.g., the beginning and end of an interaction, may leave many accommodation events undiscovered or overly smoothed. To justify the effort of introducing accommodation into computers, it should first be shown that humans make phonetic adjustments when talking to a computer at all, as they do with other humans. As there is no definitive metric for measuring accommodation and evaluating its quality, it is important to study human productions empirically, for later use as references for possible behaviors. In this work, this investigation comprises different experimental configurations to achieve a better picture of accommodation effects. First, vocal accommodation was inspected where it naturally occurs, namely in spontaneous human-human conversations.
    For this purpose, a collection of real-world sales conversations, each with a different representative-prospect pair, was assembled and analyzed. These conversations offer a glance into accommodation effects in authentic, unscripted interactions with the common goal of negotiating a deal on the one hand, but with each side individually trying to get the best terms on the other. The conversations were analyzed using cross-correlation and time-series techniques to capture the change dynamics over time. It was found that successful conversations are distinguishable from failed ones by multiple measures. Furthermore, the sales representatives proved to be better at leading the vocal changes, i.e., making the prospect follow their speech style rather than the other way around. They also showed a stronger tendency to take that lead at an earlier stage, all the more so in successful conversations. The fact that accommodation is driven more by trained speakers and improves their performance fits the anecdotal best practices of sales experts, which now also receive scientific support. Following these results, the next experiment came closer to the final goal of this work and investigated vocal accommodation effects in human-computer interaction. This was done via a shadowing experiment, which offers a controlled setting for examining phonetic variations. As spoken dialogue systems with such accommodation capabilities (of the kind this work aims to achieve) do not yet exist, a simulated system was used to introduce these changes to the participants, who believed they were helping to test a language-learning tutoring system. After determining their preferences concerning three segmental phonetic features, participants listened to either natural or synthesized voices of male and female speakers, which produced the participants' dispreferred variants of the aforementioned features. Accommodation occurred in all cases, but the natural voices triggered stronger effects. Nevertheless, it can be concluded that participants accommodated toward synthetic voices as well, which means that humans apply social mechanisms also when speaking with computer-based interlocutors. The shadowing paradigm was also used to test whether accommodation is a phenomenon associated only with speech or with other vocal productions as well. To that end, accommodation in the singing of familiar and novel music was examined. Interestingly, accommodation was found in both cases, though in different ways. While participants seemed to use the familiar piece merely as a reference for singing more accurately, the novel piece became the target of complete replication. For example, one difference was that mostly pitch corrections were introduced in the former case, while in the latter, key and rhythmic patterns were also adopted. Some of these findings were expected, and they show that people's more salient features are also harder to modify through external auditory influence. Lastly, a multiparty experiment with spontaneous human-human-computer interactions was carried out to compare accommodation in human-directed and computer-directed speech. The participants solved tasks for which they needed to talk both with a confederate and with an agent. This allows a direct comparison of their speech based on the addressee within the same conversation, which has not been done so far.
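    To make the cross-correlation and time-series analysis described above concrete, here is a minimal, hypothetical Python sketch, not the thesis's actual analysis code. It computes a smoothed per-window distance between two speakers' realizations of a feature (falling values suggest convergence, rising values divergence) and a normalized lagged cross-correlation whose peak lag hints at which speaker leads the vocal changes. All names, window sizes, and lags are illustrative assumptions.

```python
import numpy as np

def convergence_series(a, b, win=10):
    """Smoothed absolute distance between two speakers' realizations of a
    phonetic feature (e.g., mean F0 per turn). A falling trend suggests
    convergence, a rising trend divergence."""
    d = np.abs(np.asarray(a, float) - np.asarray(b, float))
    kernel = np.ones(win) / win          # moving average over `win` turns
    return np.convolve(d, kernel, mode="valid")

def lagged_xcorr(a, b, max_lag=5):
    """Normalized cross-correlation of two feature tracks at several lags.
    A peak at a positive lag means speaker B echoes speaker A `lag` turns
    later, i.e., A leads the vocal changes."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        x = a[max(0, -lag): len(a) + min(0, -lag)]   # A at time t
        y = b[max(0, lag): len(b) + min(0, lag)]     # B at time t + lag
        out[lag] = float(np.mean(x * y))
    return out
```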
    Results show that some participants' vocal behavior changed similarly when talking to the confederate and to the agent, while others' speech varied only with the confederate. Further analysis found that the greatest factor behind this difference was the order in which the participants talked to the interlocutors. Apparently, those who first talked to the agent alone saw it more as a social actor in the conversation, while those who interacted with it after talking to the confederate treated it more as a means to achieve a goal, and thus behaved differently with it. In the latter case, the variations in the human-directed speech were much more prominent. Differences were also found between the analyzed features, but the task type did not influence the degree of accommodation effects. The results of these experiments lead to the conclusion that vocal accommodation does occur in human-computer interactions, even if often to a lesser degree. Having answered the question of whether people also accommodate to computer-based interlocutors, the next step is to describe accommodative behaviors in a computer-processable manner. Two approaches are proposed here: computational and statistical. The computational model aims to capture the presumed cognitive process associated with accommodation in humans. This comprises several steps, such as detecting the variable feature's sound, adding instances of it to the feature's mental memory, and determining how much the sound will change while taking into account both its current representation and the external input. Due to its sequential nature, this model was implemented as a pipeline. Each of the pipeline's five steps corresponds to a specific part of the cognitive process and can have one or more parameters to control its output (e.g., the size of the feature's memory or the accommodation pace). Using these parameters, precise accommodative behaviors can be crafted, with expert knowledge motivating the chosen parameter values. These advantages make this approach suitable for experimentation with pre-defined, deterministic behaviors where each step can be changed individually. Ultimately, this approach makes a system vocally responsive to users' speech input. The second approach allows more elaborate behaviors, by defining different core behaviors and adding non-deterministic variations on top of them. This resembles human behavioral patterns, as each person has a base way of accommodating (or not accommodating), which may change based on the specific circumstances. This approach offers a data-driven, statistical way to extract accommodation behaviors from a given collection of interactions. First, the target feature's values for each speaker in an interaction are converted into continuous interpolated lines by drawing one sample from the posterior distribution of a Gaussian process conditioned on the given values. Then, the gradients of these lines, which represent rates of mutual change, are used to define discrete levels of change based on their distribution. Finally, each level is assigned a symbol, which ultimately creates a symbol-sequence representation for each interaction. The sequences are clustered so that each cluster stands for a type of behavior. The sequences of a cluster can then be used to calculate n-gram probabilities that enable the generation of new sequences of the captured behavior. The specific output value is sampled from the range corresponding to the generated symbol.
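    As a rough illustration of this statistical approach, the sketch below follows the described steps with common Python libraries (scikit-learn for the Gaussian process). It is a hypothetical reconstruction rather than the thesis's implementation; the kernel, grid size, and number of levels are all assumed values.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def feature_track_to_symbols(times, values, grid_size=100, n_levels=5):
    """Turn one speaker's sparse feature measurements into a symbol sequence:
    draw one sample from the posterior of a GP conditioned on the values,
    take its gradients (rates of change), and quantize them into discrete
    levels, one symbol per level."""
    t = np.asarray(times, float).reshape(-1, 1)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=5.0)).fit(t, values)
    grid = np.linspace(t.min(), t.max(), grid_size).reshape(-1, 1)
    line = gp.sample_y(grid, n_samples=1, random_state=0).ravel()
    grads = np.gradient(line, grid.ravel())
    # Level boundaries taken from the empirical gradient distribution.
    edges = np.quantile(grads, np.linspace(0, 1, n_levels + 1)[1:-1])
    levels = np.digitize(grads, edges)
    return "".join(chr(ord("a") + int(lvl)) for lvl in levels)

def bigram_probs(sequences):
    """n-gram (here: bigram) probabilities estimated from the sequences of
    one cluster, usable to generate new sequences of that behavior type."""
    counts = {}
    for seq in sequences:
        for x, y in zip(seq, seq[1:]):
            counts.setdefault(x, {}).setdefault(y, 0)
            counts[x][y] += 1
    return {x: {y: c / sum(nxt.values()) for y, c in nxt.items()}
            for x, nxt in counts.items()}
```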
    With this approach, accommodation behaviors are extracted directly from data, as opposed to being manually crafted. However, it is harder to describe what exactly these behaviors represent and to motivate the use of one over another. To bridge the gap between these two approaches, it is also discussed how they can be combined to benefit from the advantages of both. Furthermore, to generate more structured behaviors, a hierarchy of accommodation complexity levels is suggested, ranging from direct adoption of users' realizations, via specified responsiveness, up to independent core behaviors with non-deterministic variational productions. Besides a way to track and represent vocal changes, an accommodative system also needs a text-to-speech component that is able to realize those changes in the system's speech output. Speech synthesis models are typically trained once on data with certain characteristics and do not change afterward. This prevents such models from introducing any variation in specific sounds and other phonetic features. Two methods for directly modifying such features are explored here. The first is based on signal modifications applied to the output signal after it has been generated by the system. The processing is done between the timestamps of the target features and uses pre-defined scripts that modify the signal to achieve the desired values. This method is more suitable for continuous features like vowel quality, especially in the case of subtle changes that do not necessarily lead to a categorical sound change. The second method aims to capture phonetic variations in the training data. To that end, a training corpus with phonemic representations is used, as opposed to the usual graphemic representations. This way, the model can learn more direct relations between phonemes and sound, instead of between surface forms and sound, which, depending on the language, may be more complex and context-dependent. The target variations themselves do not necessarily need to be explicitly present in the training data, as long as the different sounds are naturally distinguishable. At generation time, the current state of the target feature determines the phoneme used to generate the desired sound. This method is suitable for categorical changes, especially for contrasts that naturally exist in the language. While both methods have certain limitations, they provide a proof of concept for the idea that spoken dialogue systems can phonetically adapt their speech output in real time and without re-training their text-to-speech models. To combine the behavior definitions and the speech manipulations, a system is required that connects these elements into a complete accommodation capability. The architecture suggested here extends the standard spoken dialogue system with an additional module, which receives the transcribed speech signal from the speech recognition component without influencing the input to the language understanding component. While the language understanding component uses only the textual transcription to determine the user's intention, the added component processes the raw signal along with its phonetic transcription. In this extended architecture, the accommodation model is activated in the added module, and the information required for speech manipulation is sent to the text-to-speech component, which now has two inputs: the content of the system's response coming from the language generation component, and the states of the defined target features from the added component.
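    The data flow of this extended architecture can be summarized in a few skeletal interfaces. The sketch below is purely illustrative of the wiring described above (recognizer output feeding an added accommodation module, whose feature states form the second input to text-to-speech); all class and function names are invented for this example.

```python
from dataclasses import dataclass, field
from typing import Dict

def update_feature_states(raw_signal, phonetic_transcription) -> Dict[str, float]:
    """Hypothetical hook: run the accommodation model (the pipeline or the
    statistical variant) on the user's latest utterance."""
    return {}

@dataclass
class AccommodationModule:
    """Added module: consumes the recognizer's raw signal and phonetic
    transcription in parallel to language understanding, which keeps
    using only the textual transcription."""
    feature_states: Dict[str, float] = field(default_factory=dict)

    def observe(self, raw_signal, phonetic_transcription) -> Dict[str, float]:
        self.feature_states.update(
            update_feature_states(raw_signal, phonetic_transcription))
        return self.feature_states

class TextToSpeech:
    """Now receives two inputs: the response content from language
    generation and the target-feature states from the added module."""
    def synthesize(self, response_text: str,
                   feature_states: Dict[str, float]) -> bytes:
        raise NotImplementedError  # realize the requested phonetic variants
```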
    An implementation of a web-based system with this architecture is introduced here, and its functionality is showcased by demonstrating how it can be used to conduct a shadowing experiment automatically. This has two main advantages. First, since the system recognizes the participants' phonetic variations and automatically selects the appropriate variation to use in its response, the experimenter saves time and avoids manual annotation errors. The experimenter also automatically gains additional information, like exact timestamps of utterances, real-time visualization of the interlocutors' productions, and the possibility to replay and analyze the interaction after the experiment is finished. The second advantage is scalability. Multiple instances of the system can run on a server and be accessed by multiple clients at the same time. This not only saves the time and logistics of bringing participants into a lab, but also allows running the experiment with different configurations (e.g., other parameter values or target features) in a controlled and reproducible way. This completes a full cycle from examining human behaviors to integrating accommodation capabilities. Though each part of it can undoubtedly be further investigated, the emphasis here is on how they depend on and connect to each other. Measuring feature changes without showing how they can be modeled, or achieving flexible speech synthesis without considering the desired final output, might not lead to the final goal of introducing accommodation capabilities into computers. Treating accommodation in human-computer interaction as one large process rather than as isolated sub-problems lays the ground for more comprehensive and complete solutions in the future.

    Interactions en environnements virtuels: aspects multimodaux

    Dean's honour list (Tableau d'honneur) of the Faculté des études supérieures et postdoctorales, 2012-2013. Virtual environments are gaining importance in our modern societies day by day. Their applications are increasingly numerous, ranging from recreation and education to therapy and telemedicine. However, the exact parameters that influence social organization in virtual spaces remain largely unknown. The results presented in this thesis aim to deepen our knowledge of social interactions in virtual environments. The study of social interactions, a key factor of immersion in virtual environments, is divided into three aspects: linguistic aspects, visual aspects, and an extreme case of the visual aspects of virtual environments. The first chapter of this thesis, on the linguistic aspects of virtual environments, demonstrates that the judgment of an interlocutor's humanness during a social interaction depends not only on the interlocutor's behavior, but also on the behavior of the judge. Moreover, our results emphasize the collaborative aspect of dialogue, as well as the multidimensional and multifactorial nature of this process. The second chapter, on the visual aspects of virtual environments, suggests that social dynamics may matter more to the user than the immediate reward brought by optimal use of the interface, and may thus represent a major factor of immersion in virtual environments. The third chapter, on a particular case of the visual aspects of virtual environments, demonstrates that appearance-based social selection seems to play a considerable role in the social clustering of online communities. Presenting a similar appearance could therefore be a way to foster the emergence of strong and lasting social groups. In sum, the results presented in the chapters of this thesis are essential not only for understanding the cognitive foundations of social behavior in virtual contexts, but also for optimizing the use and design of future immersive virtual environments.

    Everyday reveries: recorded music, memory & emotion

    This thesis investigates recorded music in everyday life and its relationship to memory. It does this by establishing the social and historical context in which sound recording was invented and developed, and by formulating a theory of how recorded music signifies. It argues that musical recordings do not simply facilitate remembering but are equally bound up with processes of forgetting. Each chapter of the thesis examines a different aspect of the relationship between recorded music and memory. Chapter one analyses the origins of sound recording and charts its subsequent development in terms of a continuum between social and solitary listening. Chapter two interrogates common assumptions about what is meant by the ‘everyday’, and argues that music in everyday life tends to be consumed and remembered in fragmentary form. Chapter three investigates the significance of, and reasons for, involuntary musical memories. Chapter four analyses the relationship between recorded music and nostalgia. Chapter five examines recorded music’s role in pleasurable forms of forgetting or self-oblivion. Chapter six is a summation of the whole thesis, arguing that recorded music in everyday life contains utopian traces which, when reflected upon, yield insights into the nature of social reality. The thesis also contains two ‘interludes’ that deal with pertinent theoretical issues in the field of cultural studies. The first of these interludes argues that Peircean semiotics is better suited than Saussurean semiology to the task of analysing music and that, furthermore, it is able to contribute to the emerging field of affect theory. The second interlude continues this analysis by arguing that mimesis, or creative imitation, should become a key concept in cultural studies.

    Adults imitate to send a social signal

    Humans are prolific imitators, even when copying may not be efficient. A variety of explanations have been advanced for this phenomenon, ranging from imitation being a side-effect of learning, to a lack of understanding of causality, to imitation being a mechanism to boost affiliation. This thesis systematically develops the hypothesis that imitation is a social signal sent between interacting partners, which rests on testing whether our propensity to imitate is modulated by the social availability of the interaction partner (i.e., whether our interaction partner is watching us or not). I developed a dyadic block-moving paradigm that allowed us to test this hypothesis in a naturalistic manner in four behavioural and neuroimaging studies using functional near-infrared spectroscopy (fNIRS). I found that imitative fidelity was modulated by whether the interaction partner was watching the participant make their move or not, and this effect replicated across all four studies, in both neurotypical and autistic participants. I also examined the neural correlates of responding to irrational actions and of being watched. I found that being watched led to a robust deactivation in the right parietal cortex in both neurotypical (in two studies) and autistic participants (one study). Among autistic participants, we also found strong engagement of the left superior temporal sulcus (STS) when being watched. For responding to irrational actions, in one study of neurotypical participants we found greater deactivation in the right superior parietal lobule (SPL) when making more irrational responses. In another study of autistic and neurotypical participants, we found deactivation in the bilateral inferior parietal lobule (IPL) in neurotypical participants when responding to irrational actions, while this deactivation appeared confined to the left IPL in autistic participants. Autistic participants also showed differentially higher engagement of left occipitotemporal regions when responding to irrational actions. This thesis supports the social-signalling hypothesis of imitation and is accompanied by suggestions for future directions to explore this theory in more detail.