Constructing sonified haptic line graphs for the blind student: first steps
Line graphs are an established information visualisation and analysis technique, taught at various levels of difficulty in standard Mathematics curricula. It has been argued that blind individuals cannot use line graphs as a visualisation and analysis tool because such graphs currently exist primarily in the visual medium. The research described in this paper aims to make line graphs accessible to blind students through auditory and haptic media. We describe (1) our design space for representing line graphs, (2) the technology we use to develop our prototypes and (3) the insights from our preliminary work.
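The auditory side of such a design space can be illustrated with a minimal sonification sketch. This is an assumption on our part, not the paper's actual mapping: each y-value of the line graph is mapped linearly onto an audible pitch range, so a rising line is heard as a rising tone.

```python
# Hypothetical sonification mapping (not the paper's design): map each
# y-value of a line graph linearly onto a pitch range, so higher points
# sound as higher tones.

def sonify(points, f_low=220.0, f_high=880.0):
    """Map each (x, y) point to a tone frequency in [f_low, f_high] Hz."""
    ys = [y for _, y in points]
    y_min, y_max = min(ys), max(ys)
    span = (y_max - y_min) or 1.0        # avoid division by zero on flat lines
    return [f_low + (y - y_min) / span * (f_high - f_low) for y in ys]

line = [(0, 0.0), (1, 2.0), (2, 1.0), (3, 4.0)]
print(sonify(line))  # lowest y -> 220.0 Hz, highest y -> 880.0 Hz
```

The frequencies would then be rendered as short successive tones; the haptic channel could trace the same curve with a force-feedback device.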
From ‘hands up’ to ‘hands on’: harnessing the kinaesthetic potential of educational gaming
Traditional approaches to distance learning and the student learning journey have focused on closing the gap between the experience of off-campus students and their on-campus peers. While many initiatives have sought to embed a sense of community, create virtual learning environments and even build collaborative spaces for team-based assessment and presentations, they are limited by available technology in the types of learning styles they support and develop. Mainstream gaming development – such as the Xbox Kinect and Nintendo Wii – has a strong element of kinaesthetic learning, from early attempts to simulate impact, recoil, velocity and other environmental factors to the more sophisticated movement-based games which create a sense of almost total immersion and allow untethered (in a technical sense) interaction with the games' objects, characters and other players. Likewise, the gamification of learning has become a critical focus for learner engagement and commercialisation, especially through products such as the Wii Fit.
As this technology matures, there are strong opportunities for universities to use gaming consoles to embed kinaesthetic learning into the student experience – a learning style which has been largely neglected in the distance education sector. This paper will explore the potential impact of these technologies and broadly imagine the possibilities for future innovation in higher education.
Congestion Control for Network-Aware Telehaptic Communication
Telehaptic applications involve delay-sensitive multimedia communication
between remote locations with distinct Quality of Service (QoS) requirements
for different media components. These QoS constraints pose a variety of
challenges, especially when the communication occurs over a shared network,
with unknown and time-varying cross-traffic. In this work, we propose a
transport layer congestion control protocol for telehaptic applications
operating over shared networks, termed the dynamic packetization module (DPM).
DPM is a lossless, network-aware protocol which tunes the telehaptic
packetization rate based on the level of congestion in the network. To monitor
the network congestion, we devise a novel network feedback module, which
communicates the end-to-end delays encountered by the telehaptic packets to the
respective transmitters with negligible overhead. Via extensive simulations, we
show that DPM meets the QoS requirements of telehaptic applications over a wide
range of network cross-traffic conditions. We also report qualitative results
of a real-time telepottery experiment with several human subjects, which reveal
that DPM preserves the quality of telehaptic activity even under heavily
congested network scenarios. Finally, we compare the performance of DPM with
several previously proposed telehaptic communication protocols and demonstrate
that DPM outperforms these protocols.
Comment: 25 pages, 19 figures
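The core feedback loop can be sketched in a few lines. This is an illustrative sketch only, not the actual DPM algorithm from the paper: a transmitter that lowers its packetization rate when reported end-to-end delays exceed the haptic QoS deadline and probes upward otherwise. All constants (the 30 ms deadline, the rate bounds, the back-off and probe factors) are assumptions.

```python
# Illustrative delay-driven rate adaptation (assumed constants and update
# rule; the real DPM protocol is specified in the paper itself).

class DelayAwarePacketizer:
    def __init__(self, rate_hz=1000, min_rate=100, max_rate=1000,
                 deadline_ms=30.0):
        self.rate_hz = rate_hz          # current packetization rate (pkts/s)
        self.min_rate = min_rate        # floor: coarsest packetization
        self.max_rate = max_rate        # ceiling: one packet per haptic sample
        self.deadline_ms = deadline_ms  # assumed QoS delay bound for haptics

    def on_delay_feedback(self, delay_ms):
        """Adapt the rate from the receiver's end-to-end delay report."""
        if delay_ms > self.deadline_ms:   # congestion detected: back off
            self.rate_hz = max(self.min_rate, int(self.rate_hz * 0.5))
        else:                             # headroom available: probe upward
            self.rate_hz = min(self.max_rate, int(self.rate_hz * 1.1))
        return self.rate_hz

p = DelayAwarePacketizer()
print(p.on_delay_feedback(45.0))  # deadline missed -> rate halves to 500
print(p.on_delay_feedback(10.0))  # delay acceptable -> rate rises to 550
```

Because the adaptation changes how many haptic samples share a packet rather than dropping samples, such a scheme remains lossless, mirroring the property claimed for DPM.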
Beyond multimedia adaptation: Quality of experience-aware multi-sensorial media delivery
Multiple sensorial media (mulsemedia) combines multiple media elements which engage three or more human senses and, like most other media content, requires support for delivery over existing networks. This paper proposes an adaptive mulsemedia framework (ADAMS) for delivering scalable video and sensorial data to users. Unlike existing two-dimensional joint source-channel adaptation solutions for video streaming, the ADAMS framework includes three joint adaptation dimensions: video source, sensorial source, and network optimization. Using an MPEG-7 description scheme, ADAMS recommends the integration of multiple sensorial effects (e.g., haptic, olfaction, air motion) as metadata into multimedia streams. The ADAMS design includes both coarse- and fine-grained adaptation modules on the server side: mulsemedia flow adaptation and packet priority scheduling. Feedback from subjective quality evaluation and network conditions is used to develop the two modules. The subjective evaluation investigated users' enjoyment levels when exposed to mulsemedia and multimedia sequences, respectively, and users' preference levels for certain sensorial effects in the context of mulsemedia sequences with video components at different quality levels. Results of the subjective study inform guidelines for an adaptive strategy that selects the optimal combination of video segments and sensorial data for a given bandwidth constraint and user requirement. User perceptual tests show how ADAMS outperforms existing multimedia delivery solutions in terms of both user-perceived quality and user enjoyment during adaptive streaming of various mulsemedia content. In doing so, it highlights the case for tailored, adaptive mulsemedia delivery over traditional multimedia adaptive transport mechanisms.
Enabling audio-haptics
This thesis deals with possible solutions to facilitate orientation, navigation and overview of non-visual interfaces and virtual environments with the help of sound in combination with force-feedback haptics. Applications with haptic force-feedback …
Symmetric and asymmetric action integration during cooperative object manipulation in virtual environments
Cooperation between multiple users in a virtual environment (VE) can take place at one of three levels. These
are defined as where users can perceive each other (Level 1), individually change the scene (Level 2), or
simultaneously act on and manipulate the same object (Level 3). Despite representing the highest level of
cooperation, multi-user object manipulation has rarely been studied. This paper describes a behavioral
experiment in which the piano movers' problem (maneuvering a large object through a restricted space) was
used to investigate object manipulation by pairs of participants in a VE. Participants' interactions with the object
were integrated either symmetrically or asymmetrically. The former allowed only the common
component of participants' actions to take place, but the latter used the mean. Symmetric action integration was
superior for sections of the task when both participants had to perform similar actions, but if participants had to
move in different ways (e.g., one maneuvering themselves through a narrow opening while the other traveled
down a wide corridor) then asymmetric integration was superior. With both forms of integration, the extent to
which participants coordinated their actions was poor and this led to a substantial cooperation overhead (the
reduction in performance caused by having to cooperate with another person)
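The two integration rules described above can be sketched for a single axis of motion. The exact formulation used in the experiment may differ; here we take "common component" to mean the shared same-direction part of both users' inputs, and asymmetric integration to be their mean.

```python
# Sketch of the two action-integration rules, under the assumption that
# each user's input is a signed scalar along one axis of object motion.

def symmetric(a, b):
    """Keep only the component both users apply in the same direction."""
    if a * b <= 0:                      # opposing (or zero) inputs cancel
        return 0.0
    sign = 1.0 if a > 0 else -1.0
    return sign * min(abs(a), abs(b))   # only the shared magnitude survives

def asymmetric(a, b):
    """Average the two inputs, so either user can move the object alone."""
    return (a + b) / 2.0

print(symmetric(3.0, 1.0))    # both push forward -> 1.0 (common part)
print(symmetric(3.0, -1.0))   # opposing pushes -> 0.0
print(asymmetric(3.0, -1.0))  # mean of opposing pushes -> 1.0
```

The contrast matches the experimental finding: symmetric integration rewards matched actions, while asymmetric integration lets one participant act while the other does something different.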
Using Wii technology to explore real spaces via virtual environments for people who are blind
Purpose – Virtual environments (VEs) that represent real spaces (RSs) give people who are blind the opportunity to build a cognitive map in advance, which they can then use on arriving at the RS. Design – In this research study, Nintendo Wii-based technology was used to explore VEs via the Wiici application. The Wiimote allows the user to interact with VEs by simulating walking and scanning the space. Findings – Through haptic and auditory feedback, users learned to explore new spaces. We examined participants' abilities to explore new simple and complex places, construct a cognitive map, and perform orientation tasks in the RS. Originality – To our knowledge, this work presents the first virtual environment for people who are blind that allows participants to scan the environment and thereby construct map-model spatial representations.