
    Tooteko: A case study of augmented reality for an accessible cultural heritage. Digitization, 3D printing and sensors for an audio-tactile experience

    Tooteko is a smart ring that lets the wearer navigate any 3D surface with their fingertips and receive audio content relevant to the part of the surface being touched at that moment. Tooteko can be applied to any tactile surface, object or sheet. More specifically, however, it aims to make traditional art venues accessible to the blind, while also supporting everyone's reading of a work by recovering the tactile dimension and facilitating contact with art that is not only "under glass." The system is made of three elements: a high-tech ring, a tactile surface tagged with NFC sensors, and an app for tablet or smartphone. The ring detects and reads the NFC tags and, through the Tooteko app, communicates wirelessly with the smart device. During tactile navigation of the surface, when the finger reaches a hotspot, the ring identifies the NFC tag and activates, through the app, the audio track related to that specific hotspot; each hotspot is thus paired with relevant audio content. The production process of the tactile surfaces involves scanning, digitization of data and 3D printing. The first experiment was modelled on the facade of the church of San Michele in Isola, built by Mauro Codussi in the late fifteenth century, which marks the beginning of the Renaissance in Venice. Given the absence of recent documentation on the church, the Correr Museum asked the Laboratorio di Fotogrammetria to provide it, with the aim of setting up an exhibition about the order of the Camaldolesi, owners of the San Michele island and church. The Laboratorio surveyed the facade through laser scanning and UAV photogrammetry. The point clouds were the starting point for prototyping and 3D printing on different supports.
The idea of integrating a 3D-printed tactile surface with sensors was born as a final thesis project at the Postgraduate Master Course in Digital Architecture of the University of Venice (IUAV) in 2012. Tooteko is now a start-up company based in Venice, Italy.

    Testing a Shape-Changing Haptic Navigation Device With Vision-Impaired and Sighted Audiences in an Immersive Theater Setting

    Flatland was an immersive “in-the-wild” experimental theater and technology project, undertaken with the goal of developing systems that could assist “real-world” pedestrian navigation for both vision-impaired (VI) and sighted individuals, while also exploring inclusive and equivalent cultural experiences for VI and sighted audiences. A novel shape-changing handheld haptic navigation device, the “Animotus,” was developed. The device can modify its form in the user's grasp to communicate heading and proximity to navigational targets. Flatland provided a unique opportunity to comparatively study the use of novel navigation devices with a large group of individuals (79 sighted, 15 VI) who were primarily attending a theater production rather than an experimental study. In this paper, we present our findings comparing the navigation performance (measured in terms of efficiency, average pace, and time facing targets) and opinions of VI and sighted users of the Animotus as they negotiated the 112 m² production environment. Differences in navigation performance were nonsignificant across VI and sighted individuals, and a similar range of opinions on device function and engagement spanned both groups. We believe more structured device familiarization, particularly for VI users, could improve performance and correct mistaken technology expectations (such as assumed obstacle-avoidance capability), which influenced overall opinion. This paper is intended to aid the development of future inclusive technologies and cultural experiences.
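Since the Animotus communicates heading and proximity through its shape, the control logic reduces to mapping a navigation state onto two mechanical degrees of freedom. The sketch below is a hypothetical illustration of such a mapping; the parameter names, ranges, and clamping choices are assumptions, not the device's documented firmware.

```python
# Illustrative mapping from navigation state to two shape degrees of
# freedom: a twist that points toward the target's heading and an
# extension that grows with remaining distance. Ranges are invented.

def shape_for_target(heading_deg: float, distance_m: float,
                     max_twist_deg: float = 30.0,
                     max_extension_mm: float = 15.0) -> tuple[float, float]:
    """Return (twist_deg, extension_mm) for the given target.

    heading_deg: signed bearing to the target relative to the user's
                 facing direction; clamped to the mechanism's range.
    distance_m:  remaining distance; saturates at an assumed 10 m.
    """
    twist = max(-max_twist_deg, min(max_twist_deg, heading_deg))
    extension = max_extension_mm * min(1.0, distance_m / 10.0)
    return twist, extension

# A target 45 degrees to the right and 20 m away saturates both axes:
print(shape_for_target(45.0, 20.0))  # (30.0, 15.0)
```

Clamping both axes keeps the haptic signal readable in the hand even when the raw navigation values exceed what the mechanism can express.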

    e-Archeo. A pilot national project to valorize Italian archaeological parks through digital and virtual reality technologies

    Commissioned by the Ministry of Culture (MiC) to ALES spa, the e-Archeo project was born with the intention of enhancing and promoting knowledge of some Italian archaeological sites with a considerable narrative potential that has not yet been fully expressed. The main principle guiding the choice of sites and contents was to illustrate the various cultures and types of settlements present in the Italian territory. Eight sites were chosen, spread across the national territory from north to south, founded by Etruscans, Greeks, Phoenicians, natives and Romans. e-Archeo has developed multimedia, integrated and multi-channel solutions for various uses and types of audiences, adopting both scientific and narrative-emotional languages. Particular attention was paid to multimedia accessibility, technological sustainability and open science. The e-Archeo project was born from a strong synergy between public entities, research bodies and private industries, thanks to the collaboration of MiC and ALES with the CNR ISPC, 10 Italian universities, 12 creative industries and the Italian national television (RAI). This exceptional and unusual condition made it possible to realise all of the project's high-quality contents and several outputs in only one and a half years.

    A Sound Approach Toward a Mobility Aid for Blind and Low-Vision Individuals

    Reduced independent mobility of blind and low-vision individuals (BLVIs) causes considerable societal cost, burden on relatives, and reduced quality of life for the individuals, including increased anxiety, depression symptoms, need of assistance, risk of falls, and mortality. Despite the numerous electronic travel aids proposed since at least the 1940s, along with ever-advancing technology, the mobility issues persist. A substantial reason for this likely lies in several severe shortcomings of the field, in regard to both aid design and evaluation.

    In this work, these shortcomings are addressed with a generic design model called Desire of Use (DoU), which describes the desire of a given user to use an aid for a given activity. It is then applied to the mobility of BLVIs (DoU-MoB), to systematically illuminate and structure all the aspects that such an aid needs to aptly deal with in order to become adequate for the objective. These aspects can then guide both user-centered design and the choice of test methods and measures. One such measure is demonstrated in the Desire of Use Questionnaire for Mobility of Blind and Low-Vision Individuals (DoUQ-MoB), an aid-agnostic and comprehensive patient-reported outcome measure. The question construction originates from the DoU-MoB to ensure an encompassing focus on the mobility of BLVIs, something that has been missing in the field. Since it is aid-agnostic, it facilitates aid comparison, which it also actively promotes. To support its reliability, the DoUQ-MoB follows the best known practices of questionnaire design and has been validated once with eight orientation and mobility professionals and six BLVIs; based on this, the questionnaire has also been revised once.

    To allow for relevant and reproducible methodology, another tool presented herein is a portable virtual reality (VR) system called the Parrot-VR. It uses a hybrid control scheme: absolute rotation, obtained by tracking the user's head in reality, affords intuitive turning; relative movement, where simple button presses on a controller move the virtual avatar forward and backward, allows large-scale traversal without physical walking. VR provides excellent reproducibility, making various aggregate movement analyses feasible, and it is also inherently safe. Meanwhile, the portability of the system facilitates testing near the participants, substantially increasing the number of potential blind and low-vision recruits for user tests. The thesis also gives a short account of the state of long-term testing in the field; it is short mainly because there is not much to report. It then provides an initial investigation into possible outcome measures for such tests, taking instruments in use by Swedish orientation and mobility professionals as a starting point. Two of these were also piloted in an initial single-session trial with 19 BLVIs, and could plausibly be used for long-term tests after further evaluation.

    Finally, a discussion is presented regarding the Audomni project: the development of a primary mobility aid for BLVIs. Audomni is a visuo-auditory sensory supplementation device, which aims to take visual information and translate it to sound. A wide field-of-view, 3D-depth camera records the environment, which is then transformed to audio through Audomni's sonification algorithms and presented in a pair of open-ear headphones that do not block out environmental sounds. The design of Audomni leverages the DoU-MoB to ensure user-centric development and evaluation, aiming at an aid whose form and function grant users better mobility while they still want to use it. Audomni has been evaluated in user tests twice: once in pilot tests with two BLVIs, and once in VR with a heterogeneous set of 19 BLVIs, utilizing the Parrot-VR and the DoUQ-MoB. 76% of respondents (13/17) answered that it was very or extremely likely that they would want to use Audomni along with their current aid. This might be the first result in the field demonstrating a majority of blind and low-vision participants reporting that they actually want to use a new electronic travel aid. It shows promise that eventual long-term tests will demonstrate increased mobility of blind and low-vision users, the overarching project aim. Such results would ultimately mean that Audomni can become an aid that alleviates societal cost, reduces burden on relatives, and improves users' quality of life and independence.
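The Parrot-VR's hybrid control scheme described above can be sketched in a few lines: yaw is copied absolutely from the head tracker, while translation is a relative step along the current facing direction on each button press. The class, method names, and stride length below are assumptions for illustration, not the system's actual code.

```python
import math
from dataclasses import dataclass

# Sketch of the hybrid control scheme: absolute rotation from real head
# tracking, relative translation from controller button presses.
# Stride length and names are invented for this example.

@dataclass
class Avatar:
    x: float = 0.0
    y: float = 0.0
    yaw: float = 0.0  # radians; mirrors the tracked head orientation

    def update_heading(self, tracked_yaw: float) -> None:
        """Absolute rotation: the avatar faces wherever the head faces."""
        self.yaw = tracked_yaw

    def step(self, direction: int, stride: float = 0.5) -> None:
        """Relative movement: +1 = forward press, -1 = backward press."""
        self.x += direction * stride * math.cos(self.yaw)
        self.y += direction * stride * math.sin(self.yaw)

a = Avatar()
a.update_heading(math.pi / 2)  # the user physically turns 90 degrees
a.step(+1)                     # one forward button press moves 0.5 m
```

Keeping rotation physical preserves the vestibular cues blind and low-vision participants rely on for orientation, while button-driven translation lets them cover a virtual space far larger than the room they are standing in.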

    The audio-haptic virtual environment as an instrument for learning geometry: a study of shapes for blind students

    The development of skills for the objects of knowledge of geometry is related to the forms of organization of mathematical learning and to the didactic resources used to build competencies for mathematical thinking (BRASIL, 2017). This thesis investigates how an audio-haptic virtual environment can contribute to geometry learning for blind elementary school students, based on a sequence of tasks that address the knowledge objectives and skills foreseen in the Common National Curriculum Base (BNCC). The theoretical framework proposed for this research highlights the following themes: Piaget's (1995) Theory of Reflective Abstraction, which contributes to the understanding of the construction of knowledge; human haptic perception from the perspective of Lederman and Klatzky (1987), who propose a set of Exploratory Procedures (EPs) by which a person examines an object, with or without vision, to perceive its properties through touch, and who discuss how sensations are converted by the brain into cutaneous and kinesthetic information; haptic hardware technology, presenting the devices that let people interact with virtual environments through touch and force feedback; and the study of geometry, showing the importance of learning geometric concepts in elementary school, together with digital assistive technology focused on the tactile-kinesthetic sense for the inclusion of blind students in the study of geometry. The research takes a qualitative approach of an applied nature and was carried out at the Benjamin Constant Institute, an educational institution for the visually impaired located in the Urca neighborhood, in the city and state of Rio de Janeiro. For data collection, participant observation, video recording and Think Aloud techniques were used to explore the factors of efficacy and efficiency and the mechanism of reflective abstraction in the construction of geometric knowledge. The collected data were analyzed using the category analysis technique provided for in the content analysis method (BARDIN, 2016). It is believed that this thesis can contribute as an assistive resource supporting the learning of geometry, the study of shapes, for blind students in elementary school.

    SI-Lab Annual Research Report 2020

    The Signal & Images Laboratory (http://si.isti.cnr.it/) is an interdisciplinary research group working on computer vision, signal analysis, smart vision systems and multimedia data understanding. It is part of the Institute of Information Science and Technologies of the National Research Council of Italy. This report covers the research activities of the laboratory during the year 2020.

    Mixed Structural Models for 3D Audio in Virtual Environments

    In the world of ICT, strategies for innovation and development increasingly focus on applications that require spatial representation and real-time interaction with and within 3D media environments. One of the major challenges such applications have to address is user-centricity, reflected e.g. in the development of complexity-hiding services that let people personalize their own delivery of services. In these terms, multimodal interfaces represent a key factor for enabling an inclusive use of the new technology by everyone. Achieving this requires realistic multimodal models of our environment, and in particular models that accurately describe the acoustics of the environment and communication through the auditory modality. Examples of currently active research directions and application areas include 3DTV and the future internet, 3D visual-sound scene coding, transmission and reconstruction, and teleconferencing systems, to name but a few. The concurrent presence of multiple modalities and activities makes multimodal virtual environments potentially flexible and adaptive, allowing users to switch between modalities as needed under continuously changing conditions of use. Augmentation through additional modalities and sensory substitution techniques are compelling ingredients for presenting information non-visually: when the visual bandwidth is overloaded, when data are visually occluded, or when the visual channel is not available to the user (e.g., for visually impaired people). Multimodal systems for the representation of spatial information will benefit greatly from audio engines that embed extensive knowledge of spatial hearing and virtual acoustics. Models for spatial audio can provide accurate dynamic information about the relation between the sound source and the surrounding environment, including the listener and his/her body, which acts as an additional filter.
Indeed, this information cannot be substituted by any other modality (i.e., visual or tactile). Nevertheless, today's spatial representation of audio within sonification tends to be simplistic, with poor interaction capabilities, since current multimedia systems focus mostly on graphics processing and integrate only simple stereo or multi-channel surround sound. On a different level lie binaural rendering approaches based on headphone reproduction, whose possible disadvantages (e.g. invasiveness, non-flat frequency responses) are counterbalanced by a number of desirable features. Such systems can control and/or eliminate reverberation and other acoustic effects of the real listening space, reduce background noise, and provide adaptable and portable audio displays, all relevant aspects especially in enhanced contexts. Most binaural sound rendering techniques currently exploited in research rely on Head-Related Transfer Functions (HRTFs), i.e. filters that capture the acoustic effects of the human head and ears. HRTFs allow faithful simulation of the audio signal arriving at the entrance of the ear canal as a function of the sound source's spatial position. HRTF filters are usually presented in the form of acoustic signals acquired on dummy heads built according to mean anthropometric measurements. Nevertheless, anthropometric features of the human body play a key role in HRTF shaping: several studies have shown that listening to non-individual binaural sounds results in evident localization errors. On the other hand, individual HRTF measurements on a significant number of subjects are both time- and resource-expensive. Several techniques for synthetic HRTF design have been proposed during the last two decades, and the most promising relies on structural HRTF models. 
In this approach, the most important effects involved in spatial sound perception (acoustic delays and shadowing due to head diffraction, reflections on pinna contours and shoulders, resonances inside the ear cavities) are isolated and modeled separately, each with a corresponding filtering element. HRTF selection and modeling procedures can be guided by physical interpretation: the parameters of each rendering block, or the selection criteria, can be estimated from real and simulated data and related to anthropometric geometries. Effective personal auditory displays represent an innovative breakthrough for a plethora of applications, and the structural approach also allows for effective scalability depending on the available computational resources or bandwidth. Scenes with multiple highly realistic audiovisual objects are easily managed by exploiting the parallelism of increasingly ubiquitous GPUs (Graphics Processing Units). Building individual headphone equalization with perceptually robust inverse filtering techniques represents a fundamental step towards the creation of personal virtual auditory displays (VADs). In this regard, several applications might benefit from these considerations: multi-channel downmix over headphones, personal cinema, spatial audio rendering on mobile devices, computer-game engines, and individual binaural audio standards for movie and music production. This thesis presents a family of approaches that overcome the current limitations of headphone-based 3D audio systems, aiming at building personal auditory displays through structural binaural audio models for immersive sound reproduction. The resulting models allow for an interesting form of content adaptation and personalization, since they include parameters related to the user's anthropometry in addition to those related to the sound sources and the environment.
The covered research directions converge to a novel framework for synthetic HRTF design and customization that combines the structural modeling paradigm with other HRTF selection techniques (inspired by non-individualized HRTF selection procedures) and represents the main novel contribution of this thesis: the Mixed Structural Modeling (MSM) approach considers the global HRTF as a combination of structural components, each of which can be chosen to be either synthetic or recorded. In both cases, customization is based on individual anthropometric data, which are used either to fit the model parameters or to select a measured/simulated component within a set of available responses. The definition and experimental validation of the MSM approach addresses several pivotal issues in the acquisition and delivery of binaural sound scenes and in design guidelines for personalized 3D audio virtual environments, holding the potential for novel forms of customized communication and interaction with sound and music content. The thesis also presents a multimodal interactive system used to conduct subjective tests on multi-sensory integration in virtual environments. Four experimental scenarios are proposed to test the capabilities of auditory feedback jointly with tactile or visual modalities. 3D audio feedback related to the user's movements during simple target-following tasks is tested as an applicative example of an audio-visual rehabilitation system. Perception of the direction of footstep sounds, interactively generated during walking and delivered through headphones, highlights how spatial information can clarify the semantic congruence between movement and multimodal feedback. A real-time, physically informed audio-tactile interactive system encodes spatial information in the context of virtual map presentation, with particular attention to orientation and mobility (O&M) learning processes for visually impaired people. 
Finally, an experiment analyzes the haptic estimation of the size of a virtual 3D object (a stair-step) while the exploration is accompanied by real-time generated auditory feedback whose parameters vary as a function of the height of the interaction point. The data collected from these experiments suggest that well-designed multimodal feedback exploiting 3D audio models can indeed improve performance in virtual reality, as well as learning in orientation and complex motor tasks, thanks to the high level of attention, engagement, and presence provided to the user. The research framework, based on the MSM approach, serves as an important evaluation tool for progressively determining the relevant spatial attributes of sound for each application domain. In this perspective, such studies represent a novelty in the current literature on virtual and augmented reality, especially concerning the use of sonification techniques in several aspects of spatial cognition and internal multisensory representation of the body. This thesis is organized as follows. An overview of spatial hearing and binaural technology through headphones is given in Chapter 1. Chapter 2 is devoted to the Mixed Structural Modeling formalism and philosophy. In Chapter 3, topics in structural modeling for each body component are studied; previous research and two new models, i.e. near-field distance dependency and external-ear spectral cues, are presented. Chapter 4 deals with a complete case study of the mixed structural modeling approach and provides insights into the main innovative aspects of this modus operandi. Chapter 5 gives an overview of a number of proposed tools for the analysis and synthesis of HRTFs. System architectural guidelines and constraints are discussed in terms of real-time issues, mobility requirements and customized audio delivery. 
In Chapter 6, two case studies investigate the behavioral importance of the spatial attributes of sound and how continuous interaction with virtual environments can benefit from spatial audio algorithms. Chapter 7 describes a set of experiments aimed at assessing the contribution of binaural audio through headphones to the learning of spatial cognitive maps and the exploration of virtual objects. Finally, conclusions are drawn and new research horizons for further work are outlined in Chapter 8.
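One of the isolated effects a structural HRTF model captures is the interaural time difference (ITD) introduced by head diffraction. As a minimal sketch of such a component, the classic spherical-head (Woodworth) ITD approximation is shown below; it is a standard textbook formula used here for illustration, not the specific head model of this thesis, and the head radius and speed of sound are assumed typical values.

```python
import math

# Spherical-head ITD approximation (Woodworth): one structural component
# (the head-delay block) modeled in isolation. Constants are typical
# values, not individualized anthropometric data.

HEAD_RADIUS = 0.0875     # m, average adult head radius (assumed)
SPEED_OF_SOUND = 343.0   # m/s, air at ~20 °C

def itd_seconds(azimuth_rad: float) -> float:
    """Interaural time difference for a far-field source at the given
    azimuth (0 = straight ahead, pi/2 = directly to one side)."""
    theta = abs(azimuth_rad)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (math.sin(theta) + theta)

# The delay peaks for a source directly to the side, at roughly 0.66 ms:
print(round(itd_seconds(math.pi / 2) * 1000, 2))  # 0.66
```

In a mixed structural model, this analytic block could be swapped for a measured head response while pinna and torso components stay synthetic; personalizing it amounts to replacing `HEAD_RADIUS` with a value fitted to the listener's anthropometry.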

    Design and evaluation of auditory spatial cues for decision making within a game environment for persons with visual impairments

    An audio platform game was created and evaluated in order to answer the question of whether an audio game could be designed that effectively conveys the spatial information necessary for persons with visual impairments to successfully navigate the game levels and respond to audio cues in time to avoid obstacles. The game used several types of audio cues (sounds and speech) to convey the spatial setup (map) of the game world. Most audio-only players seemed able to create a workable mental map from the game's sound cues alone, pointing to potential for the further development of similar audio games for persons with visual impairments. The research also investigated the navigational strategies used by persons with visual impairments and the accuracy of the participants' mental maps as a consequence of their navigational strategy. A comparison of the maps created by visually impaired participants with those created by sighted participants playing the game with and without graphics showed no statistically significant difference in map accuracy between groups. However, there was a marked difference in the number of invented objects between the sighted audio-only group and the other groups, which could serve as an area for future research.
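A common way for an audio game to convey an object's horizontal position through sound alone is constant-power stereo panning. The pan law below is a standard audio technique sketched here as one plausible implementation; the source does not specify which cue-rendering method the game actually used.

```python
import math

# Constant-power (equal-power) stereo pan law: a standard technique for
# placing a sound cue left/right without a perceived loudness change.
# Its use here is illustrative, not the game's documented implementation.

def pan_gains(pan: float) -> tuple[float, float]:
    """pan in [-1, 1]: -1 = hard left, 0 = center, +1 = hard right.
    Returns (left_gain, right_gain) with left^2 + right^2 == 1."""
    pan = max(-1.0, min(1.0, pan))
    angle = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

left, right = pan_gains(0.0)
# At center both channels receive ~0.707, keeping total power constant
# as the cue sweeps across the stereo field.
```

Multiplying a cue's samples by these gains lets a player localize an approaching obstacle by ear; pairing the pan with distance-based volume or pitch would add the second spatial dimension.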