
    Multiscale analysis of a spatially heterogeneous microscopic traffic model

    The microscopic Optimal Velocity (OV) model is posed on an inhomogeneous ring-road, consisting of two spatial regimes which differ by a scaled OV function. Parameters are chosen throughout for which all uniform flows are linearly stable. The large-time behaviour of this discrete system is stationary and exhibits three types of macroscopic traffic pattern, each consisting of plateaus joined together by sharp interfaces. At a coarse level, these patterns are determined by simple flow and density balances, which in some cases have non-unique solutions. The theory of characteristics for the classical Lighthill–Whitham PDE model is then applied to explain which pattern the OV model selects. A global analysis of a second-order PDE model is then performed in an attempt to explain some qualitative details of interface structure. Finally, the full microscopic model is analysed at the linear level to explain features which cannot be described by the present macroscopic approach.
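    The OV dynamics summarised above follow the standard car-following form, in which each vehicle's acceleration relaxes its speed toward an optimal velocity determined by its headway. The sketch below shows how such a two-regime ring-road might be simulated; the tanh-type OV function, the regime scalings, and all parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

def simulate_ov_ring(N=60, L=120.0, a=1.0, dt=0.05, steps=2000):
    """Euler integration of the OV model dv_n/dt = a*(V(dx_n) - v_n)
    on a ring of length L with two spatial regimes.

    The OV function is scaled by 1.0 on the first half of the ring and
    by 0.8 on the second half (hypothetical values for illustration).
    """
    x = np.sort(np.random.uniform(0.0, L, N))  # car positions on the ring
    v = np.zeros(N)                            # car velocities
    for _ in range(steps):
        dx = (np.roll(x, -1) - x) % L          # headway to the car ahead
        scale = np.where(x < L / 2, 1.0, 0.8)  # regime depends on position
        V = scale * (np.tanh(dx - 2.0) + np.tanh(2.0))  # optimal velocity
        v += dt * a * (V - v)                  # relax toward optimal velocity
        v = np.maximum(v, 0.0)                 # no reversing
        x = (x + dt * v) % L                   # advance around the ring
    return x, v
```

Running such a simulation to large times and histogramming density along the ring is one way to observe the plateau-and-interface patterns the abstract describes.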

    What's my line? glass versus paper for cold reading in duologues

    Part of an actor's job is being able to cold read: to take words directly from the page and to read them as if they were his or her own, often without the chance to read the lines beforehand. This is particularly difficult when two or more actors need to perform a dialogue cold. The need to hold a paper script in hand hinders the actor's ability to move freely. It also introduces a visual distraction between actors trying to engage with one another in a scene. This preliminary study uses cue cards displayed on Google Glass as an alternative to traditional scripts, and compares the two approaches through a series of two-person, cold-read performances. Each performance was judged by a panel of theatre experts. The study finds that Glass has the potential to aid performance by freeing actors to better engage with one another. However, it also found that by limiting the display to one line of script at a time, the Glass application used here makes it difficult for some actors to grasp the text. In a further study, when asked to later perform the text from memory, actors who had used Glass recalled only slightly fewer lines than when they had learned using paper.

    Wearables and the Brain

    The brain is the last frontier for wearable sensing. Commercially available wearables can monitor your vital signs and physical activity, but few have the ability to monitor what goes on inside your head. With the advent of new wearable and portable neuroimaging technologies, this situation might be about to change, with profound implications for neuroscience and for wearables. One of the main attractions of wearables, and wearable sensing, comes from the proximity of the devices to the human body and to the wealth of information that might be gathered from being so close. Yet when it comes to sensing the brain, and even more so our minds, significant difficulties arise. First among these is the inadequacy of available sensing technology. It is relatively easy to sense the movement of a person's arm, but much more difficult to gain access to the workings of their brain. Second, and perhaps more fundamentally, we still do not really know enough about how brains actually work in the real world, outside the restrained laboratory setting, and it is hard to sense and make use of what we do not quite understand.

    Effects of being watched on eye gaze and facial displays of typical and autistic individuals during conversation

    Communication with others relies on coordinated exchanges of social signals, such as eye gaze and facial displays. However, this can only happen when partners are able to see each other. Although previous studies report that autistic individuals have difficulties in planning eye gaze and making facial displays during conversation, evidence from real-life dyadic tasks is scarce and mixed. Across two studies, here we investigate how eye gaze and facial displays of typical and high-functioning autistic individuals are modulated by the belief in being seen and the potential to show true gaze direction. Participants were recorded with an eye-tracking and video-camera system while they completed a structured Q&A task with a confederate under three social contexts: pre-recorded video, video-call and face-to-face. Typical participants gazed less at the confederate and produced more facial displays when they were being watched and when they were speaking. Contrary to our hypotheses, eye gaze and facial motion patterns in autistic participants were overall similar to the typical group. This suggests that high-functioning autistic participants are able to use eye gaze and facial displays as social signals. Future studies will need to investigate to what extent this reflects spontaneous behaviour or the use of compensation strategies.

    Towards recognising collaborative activities using multiple on-body sensors

    This paper describes the initial stages of new work on recognising collaborative activities involving two or more people. In the experiment described, a physically demanding construction task is completed by a team of four volunteers. The task, to build a large video wall, requires communication, coordination, and physical collaboration between group members. Minimal outside assistance is provided to better reflect the ad-hoc and loosely structured nature of real-world construction tasks. On-body inertial measurement units (IMUs) record each subject's head and arm movements; a wearable eye-tracker records gaze and egocentric video; and audio is recorded from each person's head and dominant arm. A first look at the data reveals promising correlations between, for example, the movement patterns of two people carrying a heavy object. Also revealed are clues on how complementary information from different sensor types, such as sound and vision, might further aid collaboration recognition.
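    One simple way to quantify the kind of movement correlation mentioned above is a lag-searched, normalised cross-correlation between two 1-D IMU signals (e.g. the vertical acceleration of two people carrying the same object). The sketch below is an illustrative analysis under that assumption, not the paper's method; the lag window and signal choice are hypothetical.

```python
import numpy as np

def max_xcorr(a, b, max_lag=50):
    """Peak absolute normalised cross-correlation between two 1-D signals,
    searched over integer lags in [-max_lag, max_lag] samples.

    Both signals are standardised first, so a return value near 1 means
    the two movements track each other closely at some fixed delay.
    """
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    n = min(len(a), len(b))
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:n], b[:n - lag]
        else:
            x, y = a[:n + lag], b[-lag:n]
        if len(x):
            best = max(best, abs(np.dot(x, y)) / len(x))
    return best
```

Applied to the head- or arm-mounted IMU streams of two subjects, a high peak would flag windows where they are plausibly acting on the same object.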

    Nonverbal communication in virtual reality: Nodding as a social signal in virtual interactions

    Nonverbal communication is an important part of human communication, including head nodding, eye gaze, proximity and body orientation. Recent research has identified specific patterns of head nodding linked to conversation, namely mimicry of head movements at 600 ms delay and fast nodding when listening. In this paper, we implemented these head nodding behaviour rules in virtual humans, and we tested the impact of these behaviours and whether they lead to increases in trust and liking towards the virtual humans. We used Virtual Reality (VR) technology to simulate a face-to-face conversation, as VR provides a high level of immersiveness and social presence, very similar to face-to-face interaction. We then conducted a study with human-subject participants, in which the participants took part in conversations with two virtual humans, rated the virtual characters' social characteristics, and completed an evaluation of their implicit trust in the virtual humans. Results showed more liking for, and more trust in, the virtual human whose nodding behaviour was driven by realistic behaviour rules. This supports psychological models of nodding and advances our ability to build realistic virtual humans.
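    The two behaviour rules named above (mimicry at a 600 ms delay, and fast nodding while listening) lend themselves to a simple per-frame controller. The sketch below is a minimal illustration of that structure, not the paper's implementation; the frame rate, nod amplitude, and nod frequency are hypothetical values.

```python
import collections
import math

class NoddingController:
    """Drives a virtual human's head pitch from its partner's head pitch.

    Rule 1: mimic the partner's head movement at a fixed delay (600 ms),
    implemented as a frame-sized ring buffer.
    Rule 2: superimpose fast nodding while the partner is speaking
    (i.e. while the virtual human is listening).
    """
    def __init__(self, fps=60, delay_s=0.6, nod_hz=4.0, nod_amp=5.0):
        self.fps = fps
        self.nod_hz = nod_hz        # hypothetical fast-nod frequency (Hz)
        self.nod_amp = nod_amp      # hypothetical nod amplitude (degrees)
        self.buffer = collections.deque(maxlen=max(1, int(delay_s * fps)))
        self.frame = 0

    def update(self, partner_pitch, partner_speaking):
        """Return the virtual human's head pitch for this frame."""
        self.frame += 1
        self.buffer.append(partner_pitch)
        delayed = self.buffer[0]    # partner's pitch ~600 ms ago once full
        fast_nod = 0.0
        if partner_speaking:        # listening: add a fast nod oscillation
            phase = 2 * math.pi * self.nod_hz * self.frame / self.fps
            fast_nod = self.nod_amp * math.sin(phase)
        return delayed + fast_nod
```

Calling `update` once per rendered frame with the tracked partner pitch yields mimicry that lags the partner by the buffer length, plus overlaid listening nods.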

    The color of smiling: computational synaesthesia of facial expressions

    This note gives a preliminary account of the transcoding, or rechanneling, problem between different stimuli as it is of interest to the natural interaction and affective computing fields. By considering a simple example, namely the color response of an affective lamp to a sensed facial expression, we frame the problem within an information-theoretic perspective. A full justification in terms of the Information Bottleneck principle promotes a latent affective space, hitherto surmised as an appealing and intuitive solution, as a suitable mediator between the different stimuli.
    Comment: Submitted to the 18th International Conference on Image Analysis and Processing (ICIAP 2015), 7-11 September 2015, Genova, Italy.
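    The Information Bottleneck principle invoked above seeks a compressed representation T of an input X (here, facial-expression features) that retains information about a relevance variable Y (here, the lamp's color response). Its standard variational objective, which is general background rather than anything specific to this note, is

```latex
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y)
```

where the trade-off parameter \beta balances compression of X against preservation of information about Y; the latent affective space plays the role of T, mediating between the two stimuli.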