7 research outputs found

    3D audio and sound anchoring for the multimodal exploration of virtual environments

    This work presents an interactive audio-haptic system supporting orientation and mobility for blind users, together with a subjective experiment designed to study the cognitive mechanisms of spatial representation in the absence of visual information. In particular, an object-recognition experiment is presented that investigates the role of dynamic spatial auditory information integrated with haptic feedback in a simple virtual environment. This information is structured as a “sound anchor”, delivered through headphones by means of binaural 3D sound rendering techniques (in particular, suitably personalized Head-Related Transfer Functions, or HRTFs). The experimental results on the subjects' recognition times show a relationship between the position of the sound anchor and the shape of the recognized object. Furthermore, a qualitative analysis of the exploration trajectories suggests the emergence of behavioural changes between the unimodal and multimodal conditions.
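The sound anchor described above is rendered dynamically with respect to the listener. As a rough illustration of one step involved in such dynamic binaural rendering (a sketch, not the paper's implementation), the snippet below computes the azimuth and elevation of an anchor relative to the listener's head pose, i.e., the angles for which an HRTF pair would then be selected; the coordinate conventions and function names are assumptions.

```python
# Illustrative sketch: express a fixed "sound anchor" in the listener's
# head-centred frame so that an HRTF for the resulting direction can be applied.
# Coordinate convention (assumed): x to the right, y forward, z up.
import numpy as np

def anchor_direction(anchor_pos, head_pos, head_yaw_deg):
    """Return (azimuth_deg, elevation_deg) of the anchor in the head frame.

    head_yaw_deg: rotation about the vertical axis (0 = facing +y, positive = left).
    """
    v = np.asarray(anchor_pos, dtype=float) - np.asarray(head_pos, dtype=float)
    yaw = np.radians(head_yaw_deg)
    # Inverse yaw rotation: bring the world-frame vector into the head frame.
    rot = np.array([[np.cos(yaw), np.sin(yaw), 0.0],
                    [-np.sin(yaw), np.cos(yaw), 0.0],
                    [0.0, 0.0, 1.0]])
    x, y, z = rot @ v
    azimuth = np.degrees(np.arctan2(x, y))                 # 0 = ahead, positive = right
    elevation = np.degrees(np.arctan2(z, np.hypot(x, y)))
    return azimuth, elevation

# Example: an anchor 1 m ahead and 1 m to the left of a listener facing forward.
print(anchor_direction([-1.0, 1.0, 0.0], [0.0, 0.0, 0.0], 0.0))  # ~(-45.0, 0.0)
```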

    The Effect of Programmable Tactile Displays on Spatial Learning Skills in Children and Adolescents of Different Visual Disability

    Vision loss has severe impacts on physical, social and emotional well-being. The education of blind children poses issues because many school disciplines (e.g., geometry, mathematics) are normally taught by relying heavily on vision. Touch-based assistive technologies are potential tools to provide graphical content to blind users, improving learning possibilities and social inclusion. Raised-line drawings are still the gold standard, but their stimuli cannot be reconfigured or adapted, and the blind person constantly requires assistance. Although much research concerns technological development, little work has addressed the assessment of programmable tactile graphics in educational and rehabilitative contexts. Here we designed, on programmable tactile displays, tests aimed at assessing spatial memory skills and shape recognition abilities. The tests involved a group of blind and a group of low-vision children and adolescents in a four-week longitudinal schedule. After establishing subject-specific difficulty levels, we observed a significant enhancement of performance across sessions for both groups. Learning effects were comparable to those obtained with raised-paper control tests; however, our setup required minimal external assistance. Overall, our results demonstrate that programmable maps are an effective way to display graphical content in educational and rehabilitative contexts. They can be at least as effective as traditional paper tests while providing superior flexibility and versatility.
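As a purely illustrative aside (not the setup used in the study), the sketch below shows how a simple geometric stimulus might be encoded as a pattern of raised pins for a programmable tactile display; the grid resolution and the send_to_display placeholder are hypothetical, not a real device API.

```python
# Illustrative sketch: encode a simple shape as a pin pattern for a
# programmable tactile display. Grid size and output hook are placeholders.
import numpy as np

ROWS, COLS = 24, 32  # assumed pin-matrix resolution

def square_outline(size, top=2, left=2):
    """Return a ROWS x COLS boolean array with the outline of a square raised."""
    pins = np.zeros((ROWS, COLS), dtype=bool)
    pins[top, left:left + size] = True                # top edge
    pins[top + size - 1, left:left + size] = True     # bottom edge
    pins[top:top + size, left] = True                 # left edge
    pins[top:top + size, left + size - 1] = True      # right edge
    return pins

def send_to_display(pins):
    # Placeholder: a real system would push the pattern to the display driver.
    for row in pins:
        print("".join("#" if p else "." for p in row))

send_to_display(square_outline(10))
```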

    Assessments used in assistive technology research: State of the art.

    The purpose of the degree project "State of the art: Assessments used in assistive technology research" was to identify, analyze and interpret articles on assistive technology with respect to the assessments used, since no compilation existed that accounted for the contents and trends in this research topic, in order to guide professionals from different disciplines in the process of delivering assistive technology services. A qualitative, exploratory study with a twelve-month documentary research design was proposed, carried out on 8 databases available at the Universidad del Valle: EBSCO, DOAJ, SCIENCE, Springer Link, IEEE, Wiley Journals, PubMed, ISI Web of Science. A theoretical framework was developed covering the key concepts of the research topic: disability, assistive technology (AT), the Human Activity Assistive Technology (HAAT) model, the Person-Technology Model (MPT), the International Classification of Functioning, Disability and Health (ICF), the Assistive Technology Service Delivery Assessment Model (ATA), assistive technology service provision, assessments, and others. Research trends were identified and future needs were determined, consolidating a body of knowledge on the topic, through the analysis of 134 articles, most of them obtained from the EBSCO database, published in 97 journals spanning 50 areas of knowledge in 27 countries. From these articles, 274 assessments used in assistive technology research were compiled.

    THE EFFECT OF HAPTIC INTERACTION AND LEARNER CONTROL ON STUDENT PERFORMANCE IN AN ONLINE DISTANCE EDUCATION COURSE

    Today's learners are taking advantage of a whole new world of multimedia and hypermedia experiences to gain understanding and construct knowledge, while teachers and instructional designers are producing these experiences at a rapid pace. Many angles of interactivity with digital content continue to be researched, as is the case with this study. The purpose of this study is to determine whether there is a significant difference in performance between distance education students who exercise learner-control interactivity through a traditional input device and students who exercise it through haptic input methods. The study asks three main questions about the relationship between touch input and the interactivity sequence a learner chooses while participating in an online distance education course, and about its potential impact on that sequence. Effects were measured using criteria from logged assessments within one module of a distance education course. This study concludes that learner-control sequence choices did have significant effects on learner outcomes; input method, however, did not. The sequence that learners chose had positive effects on scores, on the number of attempts it took to pass assessments, and on the overall range of scores per assessment attempt. Touch-input learners performed as well as traditional-input learners, and summative-first sequence learners outperformed all other learners. These findings support the belief that new input methods are not detrimental and that, under certain conditions, learner-controlled options in digital online courses are valuable for learners.
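As a hedged illustration only, and not the study's actual analysis, the snippet below shows one common way to compare assessment scores between a touch-input group and a traditional-input group; the score lists are made-up placeholders.

```python
# Illustrative only: a two-sample comparison of assessment scores between two
# input-method groups. The scores below are placeholders, not study data.
from scipy import stats

touch_scores = [82, 75, 90, 68, 88, 79, 85]        # hypothetical
traditional_scores = [80, 77, 86, 70, 91, 74, 83]  # hypothetical

result = stats.ttest_ind(touch_scores, traditional_scores, equal_var=False)
print(f"Welch's t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
# A large p-value would be consistent with "input method had no significant effect".
```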

    Mixed Structural Models for 3D Audio in Virtual Environments

    In the world of ICT, strategies for innovation and development are increasingly focusing on applications that require spatial representation and real-time interaction with and within 3D media environments. One of the major challenges that such applications have to address is user-centricity, reflected for example in the development of complexity-hiding services that let people personalize their own delivery of services. In these terms, multimodal interfaces represent a key factor for enabling an inclusive use of new technology by everyone. To achieve this, realistic multimodal models that describe our environment are needed, and in particular models that accurately describe the acoustics of the environment and communication through the auditory modality. Examples of currently active research directions and application areas include 3DTV and the future internet, 3D visual-sound scene coding, transmission and reconstruction, and teleconferencing systems, to name but a few. The concurrent presence of multiple senses and activities makes multimodal virtual environments potentially flexible and adaptive, allowing users to switch between modalities as needed under continuously changing conditions of use. Augmentation through additional modalities and sensory substitution techniques are compelling ingredients for presenting information non-visually, when the visual bandwidth is overloaded, when data are visually occluded, or when the visual channel is not available to the user (e.g., for visually impaired people). Multimodal systems for the representation of spatial information will largely benefit from the implementation of audio engines that have extensive knowledge of spatial hearing and virtual acoustics. Models for spatial audio can provide accurate dynamic information about the relation between the sound source and the surrounding environment, including the listener and his/her body, which acts as an additional filter. Indeed, this information cannot be substituted by any other modality (i.e., visual or tactile). Nevertheless, today's spatial representation of audio within sonification tends to be simplistic, with poor interaction capabilities, since current multimedia systems focus mostly on graphics processing and integrate only simple stereo or multi-channel surround sound. On a quite different level lie binaural rendering approaches based on headphone reproduction, whose possible disadvantages (e.g., invasiveness, non-flat frequency responses) are counterbalanced by a number of desirable features. Indeed, these systems can control and/or eliminate reverberation and other acoustic effects of the real listening space, reduce background noise, and provide adaptable and portable audio displays, which are all relevant aspects especially in enhanced contexts. Most of the binaural sound rendering techniques currently exploited in research rely on the use of Head-Related Transfer Functions (HRTFs), i.e., filters that capture the acoustic effects of the human head and ears. HRTFs allow a faithful simulation of the audio signal that arrives at the entrance of the ear canal as a function of the sound source's spatial position. HRTF filters are usually provided in the form of acoustic responses acquired on dummy heads built according to mean anthropometric measurements.
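As a minimal illustration of HRTF-based binaural rendering in general (a sketch, not the thesis's specific models), the snippet below convolves a mono signal with a left/right pair of head-related impulse responses (HRIRs); the HRIR arrays are assumed to come from some measured or simulated HRTF set and are not provided here.

```python
# Minimal sketch of HRTF-based binaural rendering: convolve a mono signal with
# the left- and right-ear HRIRs measured for the desired source direction.
# hrir_left / hrir_right are assumed to come from an external HRTF database.
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Return a (samples, 2) array: the mono signal spatialized for one direction."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    out = np.stack([left, right], axis=1)
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out   # normalize to avoid clipping
```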
Nevertheless, the anthropometric features of the human body play a key role in HRTF shaping: several studies have shown that listening to non-individual binaural sounds results in evident localization errors. On the other hand, individual HRTF measurements on a significant number of subjects are both time- and resource-expensive. Several techniques for synthetic HRTF design have been proposed over the last two decades, and the most promising one relies on structural HRTF models. In this revolutionary approach, the most important effects involved in spatial sound perception (acoustic delays and shadowing due to head diffraction, reflections on pinna contours and shoulders, resonances inside the ear cavities) are isolated and modeled separately, each with a corresponding filtering element. HRTF selection and modeling procedures can be driven by physical interpretation: the parameters of each rendering block, or the selection criteria, can be estimated from real and simulated data and related to anthropometric geometries. Effective personal auditory displays represent an innovative breakthrough for a plethora of applications, and the structural approach also allows for effective scalability depending on the available computational resources or bandwidth. Scenes with multiple highly realistic audiovisual objects can be easily managed by exploiting the parallelism of increasingly ubiquitous GPUs (Graphics Processing Units). Building individual headphone equalization with perceptually robust inverse filtering techniques represents a fundamental step towards the creation of personal virtual auditory displays (VADs). In this regard, several applications might benefit from these considerations: multi-channel downmix over headphones, personal cinema, spatial audio rendering on mobile devices, computer-game engines, and individual binaural audio standards for movie and music production. This thesis presents a family of approaches that overcome the current limitations of headphone-based 3D audio systems, aiming at building personal auditory displays through structural binaural audio models for immersive sound reproduction. The resulting models allow for an interesting form of content adaptation and personalization, since they include parameters related to the user's anthropometry in addition to those related to the sound sources and the environment. The covered research directions converge into a novel framework for synthetic HRTF design and customization that combines the structural modeling paradigm with other HRTF selection techniques (inspired by non-individualized HRTF selection procedures) and represents the main novel contribution of this thesis: the Mixed Structural Modeling (MSM) approach considers the global HRTF as a combination of structural components, each of which can be chosen to be either synthetic or recorded. In both cases, customization is based on individual anthropometric data, which are used either to fit the model parameters or to select a measured/simulated component from a set of available responses. The definition and experimental validation of the MSM approach address several pivotal issues in the acquisition and delivery of binaural sound scenes and in the design of guidelines for personalized 3D audio virtual environments, which hold the potential for novel forms of customized communication and interaction with sound and music content. The thesis also presents a multimodal interactive system used to conduct subjective tests on multi-sensory integration in virtual environments.
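To make the structural-modeling idea concrete, the following sketch implements just one classic component: a first-order spherical-head shadowing filter plus an interaural delay, in the spirit of the well-known Brown-Duda structural model; the parameter values are textbook approximations, not the models developed in the thesis.

```python
# Illustrative structural-model component: spherical-head shadowing filter and
# interaural time delay (Brown-Duda style). Values are textbook approximations.
import numpy as np
from scipy.signal import bilinear, lfilter

C = 343.0    # speed of sound, m/s
FS = 44100   # sample rate, Hz

def head_shadow_filter(theta_deg, head_radius=0.0875):
    """Digital coefficients of a one-pole/one-zero head-shadow filter.

    theta_deg: angle between the source direction and the ear axis (0 = same side).
    """
    w0 = C / head_radius
    theta = np.radians(theta_deg)
    alpha = 1.05 + 0.95 * np.cos(theta * 180.0 / 150.0)   # boost near 0, shadow near 150 deg
    return bilinear([alpha / w0, 1.0], [1.0 / w0, 1.0], fs=FS)

def itd_samples(azimuth_deg, head_radius=0.0875):
    """Woodworth-style interaural delay for the far ear, in whole samples."""
    az = np.radians(abs(azimuth_deg))
    return int(round(head_radius / C * (az + np.sin(az)) * FS))

def render_far_ear(mono, theta_deg, azimuth_deg):
    """Apply head shadowing and the interaural delay to obtain the far-ear signal."""
    b, a = head_shadow_filter(theta_deg)
    shadowed = lfilter(b, a, mono)
    return np.concatenate([np.zeros(itd_samples(azimuth_deg)), shadowed])
```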
Four experimental scenarios are proposed in order to test the capabilities of auditory feedback jointly with the tactile or visual modalities. 3D audio feedback related to the user's movements during simple target-following tasks is tested as an applicative example of an audio-visual rehabilitation system. The perception of the direction of footstep sounds, interactively generated during walking and delivered through headphones, highlights how spatial information can clarify the semantic congruence between movement and multimodal feedback. A real-time, physically informed audio-tactile interactive system encodes spatial information in the context of virtual map presentation, with particular attention to orientation and mobility (O&M) learning processes aimed at visually impaired people. Finally, an experiment analyzes the haptic estimation of the size of a virtual 3D object (a stair-step) while the exploration is accompanied by real-time generated auditory feedback whose parameters vary as a function of the height of the interaction point. The data collected from these experiments suggest that well-designed multimodal feedback exploiting 3D audio models can indeed be used to improve performance in virtual reality and to support learning processes in orientation and complex motor tasks, thanks to the high level of attention, engagement, and presence provided to the user. The research framework, based on the MSM approach, serves as an important evaluation tool with the aim of progressively determining the relevant spatial attributes of sound for each application domain. From this perspective, such studies represent a novelty in the current literature on virtual and augmented reality, especially concerning the use of sonification techniques in several aspects of spatial cognition and the internal multisensory representation of the body. This thesis is organized as follows. An overview of spatial hearing and binaural technology through headphones is given in Chapter 1. Chapter 2 is devoted to the Mixed Structural Modeling formalism and philosophy. In Chapter 3, topics in structural modeling for each body component are studied; previous research and two new models, i.e., near-field distance dependency and external-ear spectral cues, are presented. Chapter 4 deals with a complete case study of the mixed structural modeling approach and provides insights into the main innovative aspects of this modus operandi. Chapter 5 gives an overview of a number of proposed tools for the analysis and synthesis of HRTFs. System architectural guidelines and constraints are discussed in terms of real-time issues, mobility requirements, and customized audio delivery. In Chapter 6, two case studies investigate the behavioral importance of the spatial attributes of sound and how continuous interaction with virtual environments can benefit from spatial audio algorithms. Chapter 7 describes a set of experiments aimed at assessing the contribution of binaural audio through headphones to the learning of spatial cognitive maps and the exploration of virtual objects. Finally, conclusions are drawn and new research horizons for further work are outlined in Chapter 8.
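As a purely illustrative sketch of the kind of parameter mapping mentioned for the stair-step experiment, the snippet below maps the height of the haptic interaction point to the pitch of a short feedback tone; the ranges and the exponential mapping are assumptions, not the actual experimental design.

```python
# Illustrative height-to-pitch sonification mapping. Ranges are assumed values.
import numpy as np

def height_to_frequency(height_m, h_min=0.0, h_max=0.3, f_min=200.0, f_max=1600.0):
    """Map interaction-point height (metres) to a feedback frequency (Hz)."""
    t = np.clip((height_m - h_min) / (h_max - h_min), 0.0, 1.0)
    return f_min * (f_max / f_min) ** t   # exponential mapping (perceptually smoother)

def feedback_tone(height_m, duration=0.05, fs=44100):
    """Generate a short sine burst whose pitch encodes the current height."""
    f = height_to_frequency(height_m)
    t = np.arange(int(duration * fs)) / fs
    return 0.5 * np.sin(2.0 * np.pi * f * t)
```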

    Predicting Successful Tactile Mapping of Virtual Objects

    No full text