
    A pulse compression radar system for high-resolution ionospheric sounding

    A low frequency pulse compression radar system, capable of 0.75 km spatial resolution, has been developed. This system utilizes a linear frequency-modulated signal and yields an effective peak power enhancement of 14.5 dB over a conventional radar of equivalent resolution. The required instrumentation and the development of the necessary signal processing software are described in detail. It is shown that the resolution and peak power enhancement achieved by the system are consistent with those predicted by theory. The effectiveness of the pulse compression radar as a tool for ionospheric observation is demonstrated by comparing its performance to that of a conventional radar operating simultaneously.
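    As a rough illustration of the technique described in this abstract: the compression gain of a linear frequency-modulated (chirp) pulse is approximately the time-bandwidth product, 10·log10(T·B) dB, so the reported 14.5 dB enhancement corresponds to T·B ≈ 28. The Python/NumPy sketch below generates a chirp and compresses it with a matched filter; the pulse length, bandwidth, and sample rate are assumed values chosen only to reproduce that product, not the system's actual parameters.

        import numpy as np

        # Assumed, illustrative parameters (not the paper's actual system values).
        fs = 1e6          # sample rate, Hz
        T = 2.8e-4        # uncompressed pulse length, s
        B = 1e5           # swept bandwidth, Hz  ->  time-bandwidth product T*B ~ 28

        t = np.arange(0, T, 1 / fs)
        chirp = np.exp(1j * np.pi * (B / T) * t**2)   # linear FM (chirp) pulse

        # Pulse compression = matched filtering: correlate the received echo with
        # the transmitted waveform (here the "echo" is simply the chirp itself).
        compressed = np.correlate(chirp, chirp, mode="full")

        gain_db = 10 * np.log10(T * B)                # theoretical compression gain
        peak = np.abs(compressed).max()
        width = np.sum(np.abs(compressed) > peak / np.sqrt(2)) / fs
        print(f"time-bandwidth product : {T * B:.0f}")
        print(f"compression gain       : {gain_db:.1f} dB")
        print(f"compressed pulse width : {width * 1e6:.0f} us (uncompressed: {T * 1e6:.0f} us)")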

    Continuous Interaction with a Virtual Human

    Attentive Speaking and Active Listening require that a Virtual Human be capable of simultaneous perception/interpretation and production of communicative behavior. A Virtual Human should be able to signal its attitude and attention while it is listening to its interaction partner, and be able to attend to its interaction partner while it is speaking, modifying its communicative behavior on the fly based on what it perceives from its partner. This report presents the results of a four-week summer project that was part of eNTERFACE’10. The project resulted in progress on several aspects of continuous interaction, such as scheduling and interrupting multimodal behavior, automatic classification of listener responses, generation of response-eliciting behavior, and models for appropriate reactions to listener responses. A pilot user study was conducted with ten participants. In addition, the project yielded a number of deliverables that have been released for public access.

    Speaking and Listening with the Eyes: Gaze Signaling during Dyadic Interactions

    Cognitive scientists have long been interested in the role that eye gaze plays in social interactions. Previous research suggests that gaze acts as a signaling mechanism and can be used to control turn-taking behaviour. However, early research on this topic employed methods of analysis that aggregated gaze information across an entire trial (or trials), which masks any temporal dynamics that may exist in social interactions. More recently, attempts have been made to understand the temporal characteristics of social gaze, but little research has been conducted in a natural setting with two interacting participants. The present study combines a temporally sensitive analysis technique with modern eye tracking technology to 1) validate the overall results from earlier aggregated analyses and 2) provide insight into the specific moment-to-moment temporal characteristics of turn-taking behaviour in a natural setting. Dyads played two social guessing games (20 Questions and Heads Up) while their eyes were tracked. Our general results are in line with past aggregated data, and using cross-correlational analysis on the specific gaze and speech signals of both participants, we found that 1) speakers end their turn with direct gaze at the listener and 2) the listener in turn begins to speak with averted gaze. Convergent with theoretical models of social interaction, our data suggest that eye gaze can be used to signal both the end and the beginning of a speaking turn during a social interaction. The present study offers insight into the temporal dynamics of live dyadic interactions and also provides a new method of analysis for eye gaze data when temporal relationships are of interest.
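    The cross-correlational analysis referred to in this abstract can be sketched in a few lines: gaze and speech are coded as time series (for example, 1 = speaking or direct gaze, 0 = silent or averted gaze), the normalized cross-correlation is computed over a range of lags, and the lag at which the correlation peaks indicates which signal leads the other. The Python sketch below uses invented toy signals and an arbitrary 10 Hz sampling rate purely for illustration; it is not the study's analysis code.

        import numpy as np

        def cross_correlation(x, y, max_lag):
            # Normalized cross-correlation of two equally long time series.
            # Positive lags mean x leads y (x at time t is paired with y at t + lag).
            x = (x - x.mean()) / x.std()
            y = (y - y.mean()) / y.std()
            lags = np.arange(-max_lag, max_lag + 1)
            cc = []
            for lag in lags:
                if lag >= 0:
                    cc.append(np.mean(x[:len(x) - lag] * y[lag:]))
                else:
                    cc.append(np.mean(x[-lag:] * y[:len(y) + lag]))
            return lags, np.array(cc)

        # Toy binary signals (invented): speech on/off and direct gaze at the
        # partner, nominally sampled at 10 Hz. Gaze is constructed to lag speech.
        rng = np.random.default_rng(0)
        speech = (rng.random(600) > 0.5).astype(float)
        gaze = np.roll(speech, 5) * (rng.random(600) > 0.2)

        lags, cc = cross_correlation(speech, gaze, max_lag=20)
        print("peak correlation at lag:", lags[np.argmax(cc)], "samples (speech leads gaze)")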

    Increased pain intensity is associated with greater verbal communication difficulty and increased production of speech and co-speech gestures

    Effective pain communication is essential if adequate treatment and support are to be provided. Pain communication is often multimodal, with sufferers utilising speech, nonverbal behaviours (such as facial expressions), and co-speech gestures (bodily movements, primarily of the hands and arms, that accompany speech and can convey semantic information) to communicate their experience. Research suggests that the production of nonverbal pain behaviours is positively associated with pain intensity, but it is not known whether this is also the case for speech and co-speech gestures. The present study explored whether increased pain intensity is associated with greater speech and gesture production during face-to-face communication about acute, experimental pain. Participants (N = 26) were exposed to experimentally elicited pressure pain to the fingernail bed at high and low intensities and took part in video-recorded semi-structured interviews. Despite rating more intense pain as more difficult to communicate (t(25) = 2.21, p = .037), participants produced significantly longer verbal pain descriptions and more co-speech gestures in the high intensity pain condition (Words: t(25) = 3.57, p = .001; Gestures: t(25) = 3.66, p = .001). This suggests that spoken and gestural communication about pain is enhanced when pain is more intense. Thus, in addition to conveying detailed semantic information about pain, speech and co-speech gestures may provide a cue to pain intensity, with implications for the treatment and support received by pain sufferers. Future work should consider whether these findings are applicable within the context of clinical interactions about pain.
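    The comparisons reported in this abstract are paired, within-subject t-tests: each participant contributes one value per pain condition, and the per-participant difference scores are tested against zero. The Python sketch below shows the shape of such an analysis with invented word counts; the numbers are placeholders and do not reproduce the study's data.

        import numpy as np
        from scipy import stats

        # Invented per-participant word counts for 26 participants (placeholder
        # data only; the study reported t(25) = 3.57, p = .001 for words).
        rng = np.random.default_rng(1)
        low_pain_words = rng.poisson(lam=80, size=26)
        high_pain_words = low_pain_words + rng.poisson(lam=15, size=26)

        # Paired (within-subject) t-test on the two conditions.
        t_stat, p_value = stats.ttest_rel(high_pain_words, low_pain_words)
        print(f"t({len(low_pain_words) - 1}) = {t_stat:.2f}, p = {p_value:.3f}")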

    When Your Decisions Are Not (Quite) Your Own: Action Observation Influences Free Choices

    A growing number of studies have begun to assess how the actions of one individual are represented in an observer. Using a variant of an action observation paradigm, four experiments examined whether one person's behaviour can influence the subjective decisions and judgements of another. In Experiment 1, two observers sat adjacent to each other and took turns to freely select and reach to one of two locations. Results showed that participants were less likely to make a response to the same location as their partner. In three further experiments, observers were asked to decide which of two familiar products they preferred or which of two faces was more attractive. Results showed that participants were less likely to choose the product or face occupying the location of their partner's previous reaching response. These findings suggest that action observation can influence a range of free choice preferences and decisions. Possible mechanisms through which this influence occurs are discussed.

    Female Fertility Affects Men's Linguistic Choices

    We examined the influence of female fertility on the likelihood of male participants aligning their choice of syntactic construction with that of female confederates. Men interacted with women at different points across the women's menstrual cycles. On critical trials during the interaction, the confederate described a picture to the participant using particular syntactic constructions. Immediately thereafter, the participant described to the confederate a picture that could be described using either the same construction that was used by the confederate or an alternative form of the construction. Our data show that the likelihood of men choosing the same syntactic structure as the women was inversely related to the women's level of fertility: higher levels of fertility were associated with lower levels of linguistic matching. A follow-up study revealed that female participants do not show this same change in linguistic behavior as a function of changes in their conversation partner's fertility. We interpret these findings in the context of recent data suggesting that non-conforming behavior may be a means by which men display their fitness as a mate to women.

    Attention to Speech-Accompanying Gestures: Eye Movements and Information Uptake

    There is growing evidence that addressees in interaction integrate the semantic information conveyed by speakers’ gestures. Little is known, however, about whether and how addressees’ attention to gestures and the integration of gestural information can be modulated. This study examines the influence of a social factor (speakers’ gaze at their own gestures) and two physical factors (the gesture’s location in gesture space and gestural holds) on addressees’ overt visual attention to gestures (direct fixations of gestures) and their uptake of gestural information. It also examines the relationship between gaze and uptake. The results indicate that addressees’ overt visual attention to gestures is affected by both speakers’ gaze and holds, but for different reasons, whereas location in space plays no role. Addressees’ uptake of gesture information is influenced only by speakers’ gaze. There is little evidence of a direct relationship between addressees’ direct fixations of gestures and their uptake.

    The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions

    The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions are among the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on everyday scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, at two intensities, and from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, affective computing, and computer vision) to investigate the processing of a wider range of natural facial expressions.