13 research outputs found

    The nature of joint attention: perception and other minds


    A Multidimensional Sketching Interface for Visual Interaction with Corpus-Based Concatenative Sound Synthesis

    The present research sought to investigate the correspondence between auditory and visual feature dimensions and to use this knowledge to inform the design of audio-visual mappings for visual control of sound synthesis. The first stage of the research involved the design and implementation of Morpheme, a novel interface for interaction with corpus-based concatenative synthesis. Morpheme uses sketching as a model for interaction between the user and the computer. The purpose of the system is to facilitate the expression of sound design ideas by describing the qualities of the sound to be synthesised in visual terms, using a set of perceptually meaningful audio-visual feature associations. The second stage of the research involved the preparation of two multidimensional mappings for the association between auditory and visual dimensions. The third stage involved the evaluation of the audio-visual (A/V) mappings and of Morpheme’s user interface. The evaluation comprised two controlled experiments, an online study and a user study. Our findings suggest that the strength of the perceived correspondence between the A/V associations prevails over the timbre characteristics of the sounds used to render the complementary polar features. Hence, the empirical evidence gathered by previous research is generalizable and applicable to different contexts, and the overall dimensionality of the sound used for rendering should not have a significant effect on the comprehensibility and usability of an A/V mapping. However, the findings of the present research also show a non-linear interaction between the harmonicity of the corpus and the perceived correspondence of the audio-visual associations. For example, strongly correlated cross-modal cues such as size-loudness or vertical position-pitch are less affected by the harmonicity of the audio corpus than more weakly correlated dimensions (e.g. texture granularity-sound dissonance).
No significant differences were revealed as a result of musical/audio training. The third study consisted of an evaluation of Morpheme’s user interface, where participants were asked to use the system to design a sound for a given video footage. The usability of the system was found to be satisfactory. An interface for drawing visual queries was developed for high-level control of the retrieval and signal processing algorithms of concatenative sound synthesis. This thesis elaborates on previous research findings and proposes two methods for empirically driven validation of audio-visual mappings for sound synthesis. These methods could be applied to a wide range of contexts to inform the design of cognitively useful multimodal interfaces and the representation and rendering of multimodal data. Moreover, this research contributes to the broader understanding of multimodal perception by gathering empirical evidence about the correspondence between auditory and visual feature dimensions and by investigating which factors affect the perceived congruency between aural and visual structures.
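The abstract names three of the perceptual associations on which such A/V mappings rest (size-loudness, vertical position-pitch, texture granularity-dissonance). A minimal, hypothetical sketch of a mapping in that style follows; the function name, parameter names, and linear scaling are illustrative assumptions, not Morpheme's actual implementation:

```python
# Hypothetical sketch of a perceptually motivated audio-visual mapping.
# All names and the direct linear scaling are illustrative assumptions,
# not taken from the Morpheme system described in the abstract.
def visual_to_audio(size, y_position, granularity):
    """Map normalized sketch features (0..1) to audio descriptors (0..1)."""
    return {
        "loudness": size,           # size-loudness association
        "pitch": y_position,        # vertical position-pitch association
        "dissonance": granularity,  # texture granularity-dissonance association
    }

descriptors = visual_to_audio(size=0.8, y_position=0.3, granularity=0.6)
```

In a corpus-based concatenative synthesiser, descriptors like these would typically drive the selection of sound units from the corpus.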

    Practicing phonomimetic (conducting-like) gestures facilitates vocal performance of typically developing children and children with autism: an experimental study

    Every music teacher is likely to teach one or more children with autism, given that an average of one in 54 persons in the United States receives a diagnosis of Autism Spectrum Disorder (ASD). Persons with ASD often show tremendous interest in music, and some even become masterful performers; however, the combination of deficits and abilities associated with ASD can pose unique challenges for music teachers. This experimental study shows that phonomimetic (conducting-like) gestures can be used to teach the expressive qualities of music. Children were asked to watch video recordings of conducting-like gestures and to produce vocal sounds matching the gestures. The empirical findings indicate that motor training can strengthen visual-to-vocomotor couplings in both populations, suggesting that phonomimetic gesture may be a suitable approach for teaching musical expression in inclusive classrooms.

    Crossmodal displays : coordinated crossmodal cues for information provision in public spaces

    PhD thesis. This thesis explores the design of Crossmodal Displays, a new kind of display-based interface that aims to prevent information overload and support information presentation for multiple simultaneous users who share a physical space or situated interface but have different information needs and privacy concerns. By exploiting human multimodal perception and utilizing the synergy of existing public displays and personal displays, crossmodal displays avoid numerous drawbacks of previous approaches, including reliance on tracking technologies, weak protection of users' privacy, small user capacity and high cognitive load demands. The review of human multimodal perception in this thesis, especially multimodal integration and crossmodal interaction, has many implications for the design of crossmodal displays and constitutes the foundation for the proposed conceptual model. Two crossmodal display prototype applications were developed: CROSSFLOW, for indoor navigation, and CROSSBOARD, for information retrieval on a high-density information display; both utilize coordinated crossmodal cues to guide multiple simultaneous users' attention, in a timely manner, to the publicly visible information relevant to each user. Most of the results of single-user and multi-user lab studies on the prototype systems developed in this research demonstrate the effectiveness and efficiency of crossmodal displays and validate several significant advantages over previous solutions. However, the results also reveal that the detailed usability and user experience of crossmodal displays, as well as human perception of crossmodal cues, should be further investigated and improved. This thesis is the first exploration of the design of crossmodal displays.
A set of design suggestions and a lifecycle model of crossmodal display development have been produced; these can be used by designers or other researchers who wish to develop crossmodal displays for their applications or to integrate crossmodal cues into their interfaces.

    Image and Evidence: The Study of Attention through the Combined Lenses of Neuroscience and Art

    Levy, EK 2012, ‘An artistic exploration of inattention blindness’, Frontiers in Human Neuroscience, vol. 5, ISSN 1662-5161. Full version unavailable due to third-party copyright restrictions. This study proposed that new insights about attention, including its phenomena and pathologies, would be gained by combining perspectives from the neurobiological discourse on attention with analyses of artworks that exploit the constraints of the attentional system. To advance the central argument that art offers a training ground for the attentional system, a wide range of contemporary art was analysed in light of the specific tasks it invokes. The kinds of cognitive tasks these works initiate with respect to the attentional system have been particularly critical to this research. Attention was explored within the context of transdisciplinary art practices, varied circumstances of viewing, new neuroscientific findings, and new approaches to learning. Research for this dissertation required practical investigations in a gallery setting, and this original work was contextualised and correlated with pertinent neuroscientific approaches. It was also concluded that art can enhance public awareness of attention disorders and help the public discriminate between medical and social factors by questioning how norms of behaviour are defined and measured. This territory was examined through comparative analysis of several diagnostic tests for attention deficit hyperactivity disorder (ADHD), through the adaptation of a methodology from economics involving patent citation to show market incentives, and through examples of data visualisation. The construction of an installation and a collaborative animation allowed participants to experience first-hand the constraints on the attentional system, provoking awareness of our own “normal” physiological limitations.
The embodied knowledge of images, emotion, and social context that is deeply embedded in art practices appeared capable of supplementing neuroscience’s understanding of attention and its disorders.

    Attention Restraint, Working Memory Capacity, and Mind Wandering: Do Emotional Valence or Intentionality Matter?

    Attention restraint appears to mediate the relationship between working memory capacity (WMC) and mind wandering (Kane et al., 2016). Prior work has identified two dimensions of mind wandering: emotional valence and intentionality. However, less is known about how WMC and attention restraint correlate with these dimensions. The current study examined the relationship between WMC, attention restraint, and mind wandering by emotional valence and intentionality. A confirmatory factor analysis demonstrated that WMC and attention restraint were strongly correlated, but only attention restraint was related to overall mind wandering, consistent with prior findings. However, when examining the emotional valence of mind wandering, attention restraint and WMC were related to negatively and positively valenced, but not neutral, mind wandering. Attention restraint was also related to intentional but not unintentional mind wandering. These results suggest that WMC and attention restraint predict some, but not all, types of mind wandering.

    Influence of multimedia hints on conceptual physics problem solving and visual attention

    Doctor of Philosophy, Department of Physics. Brett D. DePaola; Nobel S. Rebello. Previous research has shown that visual cues can improve learners' problem solving performance on conceptual physics tasks. In this study we investigated the influence of multimedia hints that included visual, textual, and audio modalities, and all possible combinations thereof, on students' problem solving performance and visual attention. The participants (N = 162) were recruited from conceptual physics classes. Each participated in an individual interview containing four task sets; each set contained one initial task, six training tasks, one near-transfer task and one far-transfer task. We used a 2 (visual hint/no visual hint) x 2 (text hint/no text hint) x 2 (audio hint/no audio hint) between-participants quasi-experimental design. Participants were randomly assigned to one of the eight conditions and were provided hints on the training tasks corresponding to their assigned condition. Our results showed that problem solving performance on the training tasks was affected by hint modality. Contrary to what Mayer's modality principle predicts, we found evidence of a reverse modality effect, in which text hints helped participants solve the physics tasks better than audio hints. We then studied students’ visual attention as they solved these tasks. Participants preferentially attended to visual hints over text hints when the two were presented simultaneously, and this effect was unaffected by the inclusion of audio hints. Text hints also imposed less cognitive load than audio hints, as measured by fixation durations, and presenting visual hints increased cognitive load while participants fixated on expert-like interest areas relative to the intervals before and after the hints. A theoretical model is proposed to explain both the problem solving performance and the visual attention results.
According to the model, because visual hints integrate the functions of selection, organization, and integration, they impose a relatively heavy cognitive load yet improve problem solving performance. Furthermore, text hints are a better resource for complex linguistic information than transient audio hints. We also discuss limitations of the current study, which may have led to results contrary to Mayer's modality principle in some respects but consistent with it in others.
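The 2 x 2 x 2 between-participants design described above yields eight hint conditions (every combination of visual, text, and audio hints on or off). A small illustrative sketch of such a random assignment, assuming simple seeded random allocation; the names are hypothetical and not drawn from the study's materials:

```python
import itertools
import random

# The three binary factors of the design: visual hint, text hint, audio hint.
# itertools.product over two values, repeated three times, enumerates all
# 2 x 2 x 2 = 8 condition cells.
CONDITIONS = list(itertools.product([False, True], repeat=3))

def assign_conditions(participant_ids, seed=42):
    """Randomly assign each participant one (visual, text, audio) condition."""
    rng = random.Random(seed)  # seeded for reproducibility
    return {pid: rng.choice(CONDITIONS) for pid in participant_ids}

groups = assign_conditions(range(162))  # N = 162, as in the study
```

A fixed seed makes the allocation reproducible; the actual study's assignment procedure is not specified beyond "randomly assigned".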