
    Ostensive signals support learning from novel attention cues during infancy

    Social attention cues (e.g., head turning, gaze direction) highlight which events young infants should attend to in a busy environment and have recently been shown to shape infants' likelihood of learning about objects and events. Although studies have documented which social cues guide attention and learning during early infancy, few have investigated how infants learn to learn from attention cues. Ostensive signals, such as a face addressing the infant, often precede social attention cues. Therefore, it is possible that infants can use ostensive signals to learn from other novel attention cues. In this training study, 8-month-olds were cued to the location of an event by a novel non-social attention cue (i.e., a flashing square) that was preceded by an ostensive signal (i.e., a face addressing the infant). At test, infants predicted the appearance of specific multimodal events cued by the flashing squares, which were previously shown to guide attention to, but not inform specific predictions about, the multimodal events (Wu and Kirkham, 2010). Importantly, during the generalization phase, the attention cue continued to guide learning of these events in the absence of the ostensive signal. Subsequent experiments showed that learning was less successful when the ostensive signal was absent, even if an interesting but non-ostensive social stimulus preceded the same cued events.

    Experimental Approaches to the Composition of Interactive Video Game Music

    This project explores experimental approaches and strategies to the composition of interactive music for the medium of video games. Whilst budgets expand and incomes from releases grow, music in video games has not enjoyed the technological progress that other aspects of the software have received. Music is now arguably less interactive than it was in the 1990s, and whilst graphics occupy large amounts of resources and development time, audio does not garner the same attention. This portfolio develops strategies and audio engines, creating music using the techniques of aleatoric composition, real-time remixing of existing work, and generative synthesisers. The project created music for three 'open-form' games: an example of the racing genre (Kart Racing Pro); an arena-based first-person shooter (Counter-Strike: Source); and a real-time strategy title (0 A.D.). These games represent a cross-section of 'sandbox'-type games on the market, as well as all being examples of games with open-ended or open-source code.

    People-selectivity, audiovisual integration and heteromodality in the superior temporal sulcus

    The functional role of the superior temporal sulcus (STS) has been implicated in a number of studies, including those investigating face perception, voice perception, and face-voice integration. However, the nature of the STS preference for these 'social stimuli' remains unclear, as does the location within the STS of specific types of information processing. The aim of this study was to directly examine properties of the STS in terms of selective response to social stimuli. We used functional magnetic resonance imaging (fMRI) to scan participants whilst they were presented with auditory, visual, or audiovisual stimuli of people or objects, with the intention of localising areas preferring both faces and voices (i.e., 'people-selective' regions) and audiovisual regions designed specifically to integrate person-related information. Results highlighted a 'people-selective', heteromodal region in the trunk of the right STS that was activated by both faces and voices, and a restricted portion of the right posterior STS (pSTS) with an integrative preference for information from people, as compared to objects. These results point towards the dedicated role of the STS as a 'social-information processing' centre.

    2016-2017 Course Catalog


    Attention, An Interactive Display Is Running! Integrating Interactive Public Display Within Urban Dis(At)tractors

    Display or interaction blindness is a known problem for interactive public displays: passers-by simply ignore them or pay them little attention. While previous research has created interventions to address this problem or reported on differences between experiences in the lab and in the real world, little attention has been given to examining the attractors surrounding an interactive public display, i.e., the people, artifacts, and stimuli that compete for people's attention in urban settings and distract them from interacting with public displays. This paper reports on a systematic examination of attractors around a case study of an interactive urban display in London. We outline an initial spatial exploration aimed at identifying suitable locations for the placement of the interactive public display within the urban setting, followed by a two-hour observation of attractors and stimuli around the urban display. We highlight the main attractors that compete for people's attention and distract them from potentially interacting with the public display. We also describe our attempt to reflect the environment and integrate the public display within its setting.

    2017-2018 Course Catalog


    2015-2016 Course Catalog


    Multimodality in VR: A Survey

    Virtual reality has the potential to change the way we create and consume content in our everyday life. Entertainment, training, design and manufacturing, communication, and advertising are all applications that already benefit from this new medium reaching consumer level. VR is inherently different from traditional media: it offers a more immersive experience and can elicit a sense of presence through the place and plausibility illusions. It also gives the user unprecedented capabilities to explore their environment, in contrast with traditional media. In VR, as in the real world, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. Therefore, the sensory cues available in a virtual environment can be leveraged to enhance the final experience. This may include increasing realism or the sense of presence; predicting or guiding the user's attention through the experience; or increasing their performance if the experience involves the completion of certain tasks. In this state-of-the-art report, we survey the body of work addressing multimodality in virtual reality and its role and benefits in the final user experience. The works reviewed here encompass several fields of research, including computer graphics, human-computer interaction, and psychology and perception. Additionally, we give an overview of applications that leverage multimodal input in areas such as medicine, training and education, and entertainment; we include works in which the integration of multiple sensory cues yields significant improvements, demonstrating how multimodality can play a fundamental role in the way VR systems are designed and VR experiences are created and consumed.

    2013-2014 Course Catalog
