185 research outputs found

    The magic words: Using computers to uncover mental associations for use in magic trick design

    This work was supported by EPSRC grant number EP/J50029X/1

    Towards an affect sensitive interactive companion


    Automatic image enhancement using intrinsic geometrical information


    Manufacturing Magic and Computational Creativity

    We thank the Engineering and Physical Sciences Research Council (EPSRC)

    Enthusing and inspiring with reusable kinaesthetic activities

    We describe the experiences of three university projects that use a style of physical, non-computer-based activity to enthuse and teach school students computer science concepts. We show that this kind of activity is effective as an outreach and teaching resource even when reused across different age/ability ranges, in lecture and workshop formats, and for delivery by different people. We introduce the concept of a Reusable Outreach Object (ROO) that extends Reusable Learning Objects, and argue for a community effort in developing a repository of such objects.

    Extending Human-Robot Relationships Based in Music With Virtual Presence

    Social relationships between humans and robots require both long-term engagement and a feeling of believability or social presence toward the robot. It is our contention that music can provide the extended engagement that other open-ended interaction studies have failed to achieve, and that, in combination with the engaging musical interaction, the addition of simulated social behaviors is necessary to trigger this sense of believability or social presence. Building on previous studies with our robot drummer Mortimer showing that including social behaviors can increase engagement and social presence, we present the results of a longitudinal study investigating the effect of extending weekly collocated musical improvisation sessions by making Mortimer an active member of each participant's virtual social network. Although we found the effects of extending the relationship into the virtual world to be less pronounced than the results we previously obtained by adding social modalities to human-robot musical interaction, interesting questions are raised about the interpretation of our automated behavioral metrics across different contexts. Further, we found repeated results of increasingly uninterrupted playing, and notable differences between responses to online posts by Mortimer and posts by participants' human friends.

    Learning gender from human gaits and faces

    Computer-vision-based gender classification is an important component in visual surveillance systems. In this paper, we investigate gender classification from human gaits in image sequences, a relatively understudied problem, using machine learning methods. Because each modality, face or gait, considered in isolation has its inherent weaknesses and limitations, we further propose to fuse gait and face for improved gender discrimination. We exploit Canonical Correlation Analysis (CCA), a powerful tool well suited for relating two sets of measurements, to fuse the two modalities at the feature level. Experiments on large datasets demonstrate that our multimodal gender recognition system achieves a superior recognition performance of 97.2%. Figure 1 shows the flow chart of our multimodal gender recognition system.
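    The feature-level fusion described above can be sketched as follows: project each modality's feature vectors into a shared correlated subspace with CCA, then concatenate the projections to form a fused representation for a downstream classifier. This is a minimal illustration, not the paper's implementation; the feature dimensions, the synthetic data, and the use of scikit-learn's CCA are assumptions for the sketch.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    n = 200

    # Hypothetical per-subject feature vectors (dimensions chosen arbitrarily):
    # gait features (e.g. from silhouette sequences) and face features.
    gait = rng.normal(size=(n, 30))
    face = gait[:, :20] + 0.5 * rng.normal(size=(n, 20))  # correlated with gait

    # CCA learns paired projections that maximize correlation between modalities.
    cca = CCA(n_components=10)
    cca.fit(gait, face)
    gait_c, face_c = cca.transform(gait, face)

    # Feature-level fusion: concatenate the two projected modalities.
    fused = np.hstack([gait_c, face_c])
    print(fused.shape)  # one 20-dimensional fused vector per subject
    ```

    The fused vectors would then be fed to any standard classifier (e.g. an SVM) for the final gender decision.
    
    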

    Illumination robust face representation based on intrinsic geometrical information

    Keywords: illumination robust face representation; intrinsic geometrical information; naturalistic human-robot interaction system; human-computer interaction system; binary non-subsampled contourlet transform; B-NSCT; multidirectional contour information; multiscale contour information; facial texture; CMU PIE databases; Yale B databases