
    Expressive feedback from virtual buttons

    The simple action of pressing a button is a multimodal interaction with an interesting depth of complexity. As the development of computer interfaces to support 3D tasks evolves, there is a need to better understand how users will interact with virtual buttons that generate feedback from multiple sensory modalities. This research examined the effects of visual, auditory, and haptic feedback from virtual buttons on task performance when dialing phone numbers and on the motion of individual button presses. It also presents a theoretical framework for virtual button feedback and a model of virtual button feedback that includes touch feedback hysteresis. The results suggest that although haptic feedback alone was not enough to prevent participants from pressing the button farther than necessary, bimodal and trimodal feedback combinations that included haptic feedback shortened the depth of the presses. However, the shallower presses observed during trimodal feedback may have led to a counterintuitive increase in the number of digits that participants omitted during the task. Even though interaction with virtual buttons may appear simple, it is important to understand the complexities behind the multimodal interaction, because users will seek out the multimodal interactions they prefer.
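    The touch feedback hysteresis mentioned above can be pictured as a pair of depth thresholds: the button activates at one depth but does not release until the finger retreats past a shallower one. A minimal sketch, assuming illustrative class, parameter, and threshold values that are not taken from the paper:

```python
class VirtualButton:
    """Minimal sketch of a virtual button with press/release hysteresis.

    The button activates once the press depth passes `press_depth`, but
    does not release until the depth retreats above the shallower
    `release_depth`, which prevents rapid on/off flicker when the finger
    hovers near a single threshold. Depths are in millimetres; all names
    and values here are illustrative assumptions, not the paper's model.
    """

    def __init__(self, press_depth=4.0, release_depth=2.0):
        assert release_depth < press_depth
        self.press_depth = press_depth
        self.release_depth = release_depth
        self.pressed = False

    def update(self, depth):
        """Feed the current finger depth; return True while the button is on."""
        if not self.pressed and depth >= self.press_depth:
            self.pressed = True   # activation: fire visual/audio/haptic feedback
        elif self.pressed and depth <= self.release_depth:
            self.pressed = False  # release: fire release feedback
        return self.pressed
```

The gap between the two thresholds is the hysteresis band: a press depth of 3 mm keeps an already-pressed button pressed, but never activates an unpressed one.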

    Piloting Multimodal Learning Analytics using Mobile Mixed Reality in Health Education

    © 2019 IEEE. Mobile mixed reality has been shown to increase achievement and lower cognitive load within spatial disciplines. However, traditional methods of assessment restrict examiners' ability to holistically assess spatial understanding. Multimodal learning analytics investigates how combinations of data types, such as spatial data and traditional assessment, can be brought together to better understand both the learner and the learning environment. This paper explores the pedagogical possibilities of a smartphone-enabled mixed reality multimodal learning analytics case study for health education, focused on learning the anatomy of the heart. The context for this study is the first loop of a design-based research study exploring the acquisition and retention of knowledge by piloting the proposed system with practicing health experts. Outcomes from the pilot study showed engagement with and enthusiasm for the method among the experts, but also revealed problems to overcome in the pedagogical method before deployment with learners.

    Transitioning Between Audience and Performer: Co-Designing Interactive Music Performances with Children

    Live interactions have the potential to meaningfully engage audiences during musical performances, and modern technologies promise unique ways to facilitate these interactions. This work presents findings from three co-design sessions with children that investigated how audiences might want to interact with live music performances, including design considerations and opportunities. Findings from these sessions also formed a Spectrum of Audience Interactivity in live musical performances, outlining ways to encourage interactivity in music performances from the child's perspective.

    Pro-active Meeting Assistants : Attention Please!

    This paper gives an overview of pro-active meeting assistants: what they are and when they can be useful. We explain how to develop such assistants with respect to requirement definitions, and elaborate on a set of Wizard of Oz experiments aiming to find out in which form a meeting assistant should operate to be accepted by participants, and whether meeting effectiveness and efficiency can be improved by an assistant at all.

    Sphericall: A Human/Artificial Intelligence interaction experience

    Multi-agent systems are now widespread in scientific work and in industrial applications, yet few applications deal with human/multi-agent system interaction. Multi-agent systems are characterized by individual entities, called agents, that interact with each other and with their environment. They are generally classified as complex systems, since the global emergent phenomenon cannot be predicted even when every component is well known. The systems developed in this paper are termed reactive because they behave according to simple interaction models. In the reactive approach, the issue of human/system interaction is hard to cope with and is scarcely addressed in the literature. This paper presents Sphericall, an application aimed at studying human/complex-system interactions and based on two physics-inspired multi-agent systems interacting together. The Sphericall device is composed of a tactile screen and a spherical world where agents evolve. This paper presents both the technical background of the Sphericall project and feedback from the demonstration performed during the OFFF Festival at La Villette (Paris).
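    A reactive agent update of the kind described, with simple physics-inspired local interactions and no planning or prediction, might look like the following sketch. The function name, spring-like force law, and parameter values are illustrative assumptions, not Sphericall's actual model:

```python
import math

def reactive_step(agents, dt=0.05, rest=1.0, k=0.5):
    """One update of a minimal physics-inspired reactive multi-agent system.

    Each agent is a dict with positions 'x' and 'y'. Every pair of agents
    pushes or pulls toward a rest distance, like a spring; no agent plans
    or predicts, so any global pattern that appears is emergent. All
    parameters are illustrative only.
    """
    forces = [[0.0, 0.0] for _ in agents]
    for i, a in enumerate(agents):
        for j, b in enumerate(agents):
            if i == j:
                continue
            dx, dy = b["x"] - a["x"], b["y"] - a["y"]
            d = math.hypot(dx, dy) or 1e-9   # avoid division by zero
            f = k * (d - rest)               # spring force toward rest distance
            forces[i][0] += f * dx / d
            forces[i][1] += f * dy / d
    for a, (fx, fy) in zip(agents, forces):
        a["x"] += fx * dt                    # overdamped update: force sets velocity
        a["y"] += fy * dt
    return agents
```

Two agents placed farther apart than the rest distance drift toward each other on each step; closer than it, they push apart. This is the sense in which "every component is well known" yet the global behaviour of many such agents is not predictable in advance.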

    Creating music in the classroom with tablet computers: An activity system analysis of two secondary school communities.

    Tablet computers are becoming inextricably linked with innovation and change in schools. Increasingly, therefore, music teachers must consider how tablet computers might influence creative musical development in their own classrooms. This qualitative research into two secondary school communities aims to develop understandings about what really happens when students and a music teacher-researcher compose music in partnership with a tablet computer. A sociocultural definition of creativity, theories of Activity, and the musicking argument inform a new systemic framework which guides fieldwork. This framework becomes the unit of analysis from which the research questions and a multi-case, multimodal methodology emerge. The methodology developed here honours the situated nature of the meanings which emerge in each of the two school communities. Consequently, research findings are presented as two separate case reports. Five mixed-ability pairs are purposively sampled from each community to represent the broad range of musical experience present in that setting. A video-enhanced, participant-observation method ensures that systemic, multimodal musicking behaviours are captured as they emerge over time. Naturalistic group interviewing at the end of the project reveals how students’ broader musical cultures, interests and experiences influence their tablet-mediated classroom behaviour. Findings develop new understandings about how tablet-mediated creative musical action champions inclusive musicking (musical experience notwithstanding) and better connects the music classroom and its institutional requirements with students’ informal music-making practices. The systems of classroom Activity which emerge also compensate for those moments when the tablet attempts to overtly determine creative behaviour or, conversely, does not do enough to ensure a creative outcome. In fact, all system dimensions (e.g. student partner/teacher/student/tablet) influence tablet-mediated action by feeding the system with musical and technological knowledge, which is also pedagogically conditioned. This musical, technological and pedagogical conditioning is mashed up, influencing action just in time, according to cultural, local and personal need. A new method of visual charting is developed to ‘peer inside’ these classroom-situated systems. Colour-coded charts evidence how classroom musicians make use of and synthesize different system dimensions to find, focus and fix their creative musical ideas over time. There are also implications for research, policy and practice going forward. In terms of researching digitally-mediated creativity, a new sociocultural Activity framework is presented which encourages researchers to revise their definition of creativity itself. Such a definition would emphasise the role of cultural, local and personal constraint in creative musical development. With reference to classroom practice, this research finds that when students partner with tablet computers, their own musical interests, experiences and desires are brought forward. Even though these desires become fused with institutional requirements, students take ownership of their learning and are rightfully proud of their creative products. This naturalistic, community-driven form of tablet-mediated creative musical development encourages policy makers and teachers to reposition the music classroom: to reconnect it with the local community it serves.

    An Augmented Interaction Strategy For Designing Human-Machine Interfaces For Hydraulic Excavators

    Lack of adequate information feedback and work visibility, and fatigue due to repetition, have been identified as the major usability gaps in the human-machine interface (HMI) design of modern hydraulic excavators; these gaps subject operators to undue mental and physical workload, resulting in poor performance. To address them, this work proposed an innovative interaction strategy, termed “augmented interaction”, for enhancing the usability of the hydraulic excavator. Augmented interaction involves the embodiment of heads-up display and coordinated control schemes into an efficient, effective and safe HMI. Augmented interaction was demonstrated using a framework consisting of three phases: Design, Implementation/Visualization, and Evaluation (D.IV.E). Guided by this framework, two alternative HMI design concepts (Design A: featuring heads-up display and coordinated control; and Design B: featuring heads-up display and joystick controls) in addition to the existing HMI design (Design C: featuring monitor display and joystick controls) were prototyped. A mixed reality seating buck simulator, named the Hydraulic Excavator Augmented Reality Simulator (H.E.A.R.S), was used to implement the designs and simulate a work environment along with a rock excavation task scenario. A usability evaluation was conducted with twenty participants to characterize the impact of the new HMI types using quantitative (task completion time, TCT; and operating error, OER) and qualitative (subjective workload and user preference) metrics. The results indicated that participants had a shorter TCT with Design A. For OER, there was a lower error probability due to collisions (PER1) with Design A, and a lower error probability due to misses (PER2) with Design B. The subjective measures showed a lower overall workload and a high preference for Design B. It was concluded that augmented interaction provides a viable solution for enhancing the usability of the HMI of a hydraulic excavator.

    Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web

    Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information; through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C’s Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied to a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers’ observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes, an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound.
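    As a rough illustration of how a human-generated report could be packaged for sharing on the Sensor Web, the sketch below builds a minimal record loosely following OGC Observations & Measurements vocabulary (procedure, observedProperty, featureOfInterest, phenomenonTime, result). The function name, field structure, and all URIs are assumptions for illustration, not the encoding actually used by the paper's framework:

```python
import json
from datetime import datetime, timezone

def make_driver_observation(driver_id, property_uri, value):
    """Build a minimal O&M-style observation for a human-generated report.

    The driver acts as the 'procedure' (the observing entity); the observed
    property and feature of interest are identified by URIs so that semantic
    tooling can link them to shared vocabularies. All identifiers here are
    hypothetical examples.
    """
    return {
        "procedure": f"urn:example:driver:{driver_id}",      # the human "sensor"
        "observedProperty": property_uri,
        "featureOfInterest": "urn:example:road:segment-42",  # hypothetical road segment
        "phenomenonTime": datetime.now(timezone.utc).isoformat(),
        "result": value,
    }

obs = make_driver_observation("d17", "urn:example:property:TrafficDensity", "heavy")
print(json.dumps(obs, indent=2))
```

Encoding the driver's report in the same shape as a machine-produced sensor observation is what lets it flow through ordinary Sensor Web infrastructure alongside instrument data.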