The LAB@FUTURE Project - Moving Towards the Future of E-Learning
This paper presents Lab@Future, an advanced e-learning platform that uses novel Information and Communication Technologies to support and expand laboratory teaching practices. For this purpose, Lab@Future uses real and computer-generated objects that are interfaced using mechatronic systems, augmented reality, mobile technologies and 3D multi-user environments. The main aim is to develop and demonstrate technological support for practical experiments in the following subjects: Fluid Dynamics (a Science subject in Germany), Geometry (a Mathematics subject in Austria), and History and Environmental Awareness (Arts and Humanities subjects in Greece and Slovenia). In order to pedagogically enhance the design and functional aspects of this e-learning technology, we are investigating the dialogical operationalisation of learning theories so as to leverage our understanding of teaching and learning practices in the targeted context of deployment.
A user perspective of quality of service in m-commerce
This is the post-print version of the article. The official published version can be accessed from the link below. Copyright @ 2004 Springer Verlag. In an m-commerce setting, the underlying communication system will have to provide a Quality of Service (QoS) in the presence of two competing factors: network bandwidth and, as the pressure to add value to the business-to-consumer (B2C) shopping experience by integrating multimedia applications grows, increasing data sizes. In this paper, developments in the area of QoS-dependent multimedia perceptual quality are reviewed and integrated with recent work focusing on QoS for e-commerce. Based on previously identified user perceptual tolerance to varying multimedia QoS, we show that enhancing the m-commerce B2C user experience with multimedia, far from being an idealised scenario, is in fact feasible if perceptual considerations are employed.
Spatial audio in small display screen devices
Our work addresses the problem of (visual) clutter in mobile device interfaces. The solution we propose involves the translation of techniques, from the graphical to the audio domain, for exploiting space in information representation. This article presents an illustrative example in the form of a spatialised audio progress bar. In usability tests, participants performed background monitoring tasks significantly more accurately using this spatialised audio (as compared with a conventional visual) progress bar. Moreover, their performance in a simultaneously running, visually demanding foreground task was significantly improved in the eyes-free monitoring condition. These results have important implications for the design of multi-tasking interfaces for mobile devices.
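The core idea of representing task state as a position in auditory space can be sketched as a constant-power stereo pan: as progress grows, a sound appears to travel from the listener's left to their right. This is a hypothetical minimal illustration; the study itself used full spatialisation rather than simple panning, and the function name is an assumption:

```python
import numpy as np

def progress_pan(progress):
    """Map task progress in [0, 1] to constant-power stereo gains,
    so a monitoring sound appears to travel from left (start) to
    right (done). Illustrative sketch only, not the study's system."""
    theta = progress * (np.pi / 2)          # 0 -> hard left, pi/2 -> hard right
    return np.cos(theta), np.sin(theta)     # (left_gain, right_gain)
```

Constant-power panning keeps the summed energy of the two channels fixed, so perceived loudness stays roughly constant while the apparent position moves.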
Access Grid Nodes in Field Research
This article reports fieldwork with an Access Grid Node ('AGN') device, analogous to video teleconferencing but based on grid computational technology. The device enables research respondents to be interviewed at remote sites, with potential savings in travelling to conduct fieldwork. Practical, methodological and analytic aspects of the experimental fieldwork are reported. Findings include some distinctive features of AGN interviews relative to co-present interviews; overall, there were some benefits and some disadvantages to communication. The article concludes that this new research interview mode shows potential, particularly once the difficulties associated with a new research technology are resolved. Keywords: Social Research Methods, Interview Methods, New Technologies for Social Research, Access Grid Nodes, Interview Communication, Witnesses at Court
An Immersive Multi-Party Conferencing System for Mobile Devices Using 3D Binaural Audio
The use of mobile telephony, along with the widespread adoption of smartphones in the consumer market, is gradually displacing traditional telephony. Fixed-line telephone conference calls have been widely employed for carrying out distributed meetings around the world over the last decades. However, the powerful capabilities brought by modern mobile devices and data networks allow for new conferencing schemes based on immersive communication, one of the fields of major commercial and technical interest within the telecommunications industry today. In this context, adding spatial audio features to conventional conferencing systems is a natural way of creating a realistic communication environment. In fact, the human auditory system takes advantage of spatial audio cues to locate, separate and understand multiple speakers when they talk simultaneously. As a result, speech intelligibility is significantly improved if the speakers are simulated to be spatially distributed. This paper describes the development of a new immersive multi-party conference call service for mobile devices (smartphones and tablets) that substantially improves the identification and intelligibility of the participants. Headphone-based audio reproduction and binaural sound processing algorithms allow the user to locate the different speakers within a virtual meeting room. Moreover, the use of a large touch screen helps the user to identify and remember the participants taking part in the conference, with the possibility of changing their spatial location in an interactive way. This work has been partially supported by the government of Spain grant TEC-2009-14414-C03-01 and by the new technologies department of Telefónica. Aguilera Martí, E.; López Monfort, J.J.; Cobos Serrano, M.; Macià Pina, L.; Martí Guerola, A. (2012). An Immersive Multi-Party Conferencing System for Mobile Devices Using 3D Binaural Audio. Waves. 4:5-14. http://hdl.handle.net/10251/57918
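The binaural placement of talkers around a virtual meeting room can be illustrated with a deliberately simplified sketch: interaural time and level differences (ITD/ILD) alone are enough to push a mono talker to one side of the head. A production system would use measured HRTFs rather than this crude model, and all names and parameters below are assumptions for illustration:

```python
import numpy as np

def spatialise(mono, fs, azimuth_deg, head_radius=0.0875, c=343.0):
    """Pan a mono speech signal to a given azimuth using simple
    interaural time and level differences (Woodworth ITD model plus
    a crude broadband ILD), not a measured HRTF."""
    az = np.radians(azimuth_deg)
    itd = head_radius / c * (abs(az) + abs(np.sin(az)))  # seconds
    delay = int(round(itd * fs))                         # samples
    ild = 10 ** (-6.0 * abs(np.sin(az)) / 20.0)          # up to ~6 dB quieter
    near = mono
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)] * ild
    # positive azimuth = source on the right: right ear is the near ear
    left, right = (far, near) if azimuth_deg > 0 else (near, far)
    return np.stack([left, right], axis=1)

fs = 16000
t = np.arange(fs) / fs
voice = np.sin(2 * np.pi * 220 * t)        # stand-in for a talker signal
stereo = spatialise(voice, fs, azimuth_deg=45)
```

In a conferencing client, each participant's decoded stream would be run through such a renderer at a distinct azimuth before mixing, which is what lets listeners exploit spatial cues to separate simultaneous talkers.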
Localization and Rendering of Sound Sources in Acoustic Fields
This doctoral thesis deals with sound source localization and acoustic zooming. The primary goal of the dissertation is to design an acoustic zooming system that can zoom in on the sound of one speaker among multiple speakers, even when they speak simultaneously. The system is compatible with surround sound techniques. In particular, the main contributions of the doctoral thesis are as follows: 1. Design of a method for estimating multiple sound directions. 2. Proposal of a method for acoustic zooming using DirAC. 3. Design of a combined system using the previously mentioned steps, which can be used in teleconferencing.
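The zooming contribution, modifying DirAC parameters to emphasise one direction, can be caricatured as a per-tile gain that boosts non-diffuse energy arriving from near the target azimuth while leaving diffuse energy untouched. The Gaussian angular window, the gain law and all names here are illustrative assumptions, not the thesis's actual design:

```python
import numpy as np

def zoom_gains(doa_deg, diffuseness, target_deg, zoom=4.0, width_deg=30.0):
    """Per time-frequency-tile gain for a DirAC-style acoustic zoom.
    Tiles whose direct (non-diffuse) sound arrives from near
    target_deg are boosted up to `zoom`; fully diffuse tiles keep
    gain 1. Illustrative sketch only."""
    doa = np.asarray(doa_deg, dtype=float)
    psi = np.asarray(diffuseness, dtype=float)      # diffuseness in [0, 1]
    angular = np.exp(-0.5 * ((doa - target_deg) / width_deg) ** 2)
    direct_gain = 1.0 + (zoom - 1.0) * angular      # boost toward target
    return (1.0 - psi) * direct_gain + psi          # diffuse part unchanged
```

Because DirAC already carries a direction and a diffuseness estimate per time-frequency tile, a zoom of this kind slots in as a simple reweighting before resynthesis.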
The contrast effect: QoE of mixed video-qualities at the same time
In desktop multi-party video conferencing, the video streams of participants are delivered in different qualities, but we know little about how such a composition of the screen affects the quality of experience. Do the different video streams serve as indirect quality references, so that the perceived video quality of one stream depends on the other streams in the same session? What is the relation between the perceived quality of each stream and the perceived quality of the overall session? To answer these questions we conducted a crowdsourcing study, in which we gathered over 5000 perceived quality ratings of overall sessions and individual streams. Our results show a contrast effect: high quality streams are rated better when more low quality streams are co-present, and vice versa. In turn, the quality p
Refining personal and social presence in virtual meetings
Virtual worlds show promise for conducting meetings and conferences without the need for physical travel. Current experience suggests the major limitation to the more widespread adoption and acceptance of virtual conferences is the failure of existing environments to provide a sense of immersion and engagement, or of "being there". These limitations are largely related to the appearance and control of avatars, and to the absence of means to convey non-verbal cues of facial expression and body language. This paper reports on a study involving the use of a mass-market motion sensor (Kinect™) and the mapping of participant action in the real world to avatar behaviour in the virtual world. This is coupled with full-motion video representation of participants' faces on their avatars to resolve both identity and facial expression issues. The outcomes of a small-group trial meeting based on this technology show a very positive reaction from participants, and the potential for further exploration of these concepts.
Pervasive and standalone computing: The perceptual effects of variable multimedia quality.
The introduction of multimedia on pervasive and mobile communication devices raises a number of perceptual quality issues; however, limited work has examined the three-way interaction between use of equipment, quality of perception and quality of service. Our work measures levels of information transfer (objective) and user satisfaction (subjective) when users are presented with multimedia video clips at three different frame rates, using four different display devices, simulating variation in participant mobility. Our results show that variation in frame rate does not impact a user's level of information assimilation, but does impact a user's perception of multimedia video 'quality'. Additionally, increased visual immersion can be used to increase the transfer of video information, but can negatively affect the user's perception of 'quality'. Finally, we illustrate the significant effect of clip content on the transfer of video, audio and textual information, placing into doubt the use of purely objective quality definitions when considering multimedia presentations.
A Novel Combined System of Direction Estimation and Sound Zooming of Multiple Speakers
This article presents a new system for estimating the directions of multiple speakers and zooming in on the sound of one of them at a time. The proposed system is a combination of two stages, namely sound source direction estimation and acoustic zooming. The direction estimation stage uses the so-called energetic analysis method to estimate the directions of multiple speakers, whereas the acoustic zooming stage modifies the parameters of Directional Audio Coding (DirAC) in order to zoom in on the sound of a selected speaker among the others. Both listening tests and objective assessments are performed to evaluate the system using different time-frequency transforms.
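The energetic analysis step can be sketched as follows: for first-order B-format signals, the per-bin active intensity vector Re{W*·[X, Y]} points along the source direction, so its angle yields a per-frequency DOA estimate. This assumes the encoding convention stated in the docstring; the function name and single-frame STFT framing are illustrative assumptions, not the article's exact method:

```python
import numpy as np

def doa_per_bin(W, X, Y, nfft=512):
    """Per-frequency direction-of-arrival estimate for one frame of
    first-order B-format audio via the active intensity vector
    ("energetic analysis", as used in DirAC). Assumes the encoding
    W = s, X = cos(az) * s, Y = sin(az) * s for a plane wave from
    azimuth az, so the angle of Re{W* . [X, Y]} is the azimuth."""
    win = np.hanning(nfft)
    w = np.fft.rfft(W[:nfft] * win)
    x = np.fft.rfft(X[:nfft] * win)
    y = np.fft.rfft(Y[:nfft] * win)
    ix = np.real(np.conj(w) * x)            # intensity, x component
    iy = np.real(np.conj(w) * y)            # intensity, y component
    return np.degrees(np.arctan2(iy, ix))   # azimuth estimate per bin
```

With several simultaneous speakers, different frequency bins tend to be dominated by different talkers, so pooling these per-bin estimates over time (e.g. in a histogram) is one way to recover multiple directions at once.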