
    Emulating Perceptual Experience of Color Vision Deficiency with Virtual Reality

    Abstract. One of the major goals of Universal Design is to create experiences that are inclusive to all users, including those affected by Color Vision Deficiency (CVD). CVD can significantly alter a user's perception of content or of the environment. A range of tools is already available to aid or automate readability testing of digital interfaces and content with respect to CVD, and two broad approaches can be distinguished; this paper provides a brief review of both. The first (user-end) approach alters the mediation between the user and the content. The second (design-end) approach lets the designer view an image or color scheme altered to recreate the perceptual experience of a user affected by CVD, and thus assess the design from the perspective of a color-blind user. With an implemented proof of concept, we investigate the potential of Virtual Reality Head-Mounted Displays to apply a similar methodology to physical environments, allowing designers or interior decorators to experience a classroom, library, or cafeteria from the perspective of a color-blind person. Such tools might increase designers' empathy towards color-blind users, and also help them identify visual components in a physical environment, such as infographics or advertisements, that are poorly visible to color-blind users. They can be built on a modern Head-Mounted Display's six-degrees-of-freedom tracking, a 360° camera, and color-processing filters applied in post-processing at run-time, allowing a designer to switch easily between emulations of different types of color blindness.
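    The run-time color filtering described above amounts to a per-pixel matrix transform. A minimal Python sketch, assuming linear-RGB input and using a published protanopia simulation matrix (Machado et al., 2009); an actual HMD pipeline would run this as a GPU post-processing shader rather than on the CPU:

```python
import numpy as np

# Severity-1.0 protanopia simulation matrix from Machado et al. (2009).
# Swapping in a different matrix emulates a different type of CVD.
PROTANOPIA = np.array([
    [ 0.152286, 1.052583, -0.204868],
    [ 0.114503, 0.786281,  0.099216],
    [-0.003882, -0.048116, 1.051998],
])

def simulate_cvd(rgb: np.ndarray, matrix: np.ndarray = PROTANOPIA) -> np.ndarray:
    """Apply a CVD simulation matrix to an (..., 3) linear-RGB array."""
    out = rgb @ matrix.T           # per-pixel 3x3 transform
    return np.clip(out, 0.0, 1.0)  # keep values displayable

frame = np.random.rand(4, 4, 3)   # stand-in for one 360-camera frame
print(simulate_cvd(frame).shape)  # (4, 4, 3)
```

    Pure red input loses most of its red channel under this matrix, which is exactly what makes red/green contrasts unreliable for protanopic users.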

    Bioinspired Electronic White Cane Implementation Based on a LIDAR, a Tri-Axial Accelerometer and a Tactile Belt

    This work proposes a bioinspired electronic white cane for blind people that uses the whisker principle for short-range navigation and exploration. Whiskers are the coarse hairs on an animal's face that tell the animal, through the nerves of the skin, that it has touched something. In this work, raw data acquired from a small terrestrial LIDAR and a tri-axial accelerometer are converted into tactile information by several electromagnetic devices configured as a tactile belt. The LIDAR and the accelerometer are attached to the user's forearm and connected by wire to the control unit placed on the belt. Early validation experiments carried out in the laboratory are promising in terms of usability and description of the environment.
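    The whisker-like distance-to-vibration mapping could look like the sketch below; the 2 m "whisker reach" and the 8-bit motor duty cycle are illustrative assumptions, not values from the paper:

```python
# Near obstacles detected by the LIDAR drive the tactile belt harder,
# the way a touched whisker fires more strongly.

def range_to_intensity(distance_m: float, max_range_m: float = 2.0) -> int:
    """Map a LIDAR range reading to a 0-255 vibration-motor duty cycle."""
    if distance_m >= max_range_m:
        return 0                   # nothing within whisker reach
    closeness = 1.0 - distance_m / max_range_m
    return round(255 * closeness)  # closer obstacle -> stronger buzz

# One belt update: each LIDAR sector drives one electromagnetic actuator.
readings = [0.4, 1.5, 2.5]  # metres: left / centre / right sector
print([range_to_intensity(r) for r in readings])  # [204, 64, 0]
```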

    Converting SVG images to text and speech

    There are contents with a strong visual component, e.g., in the engineering field, such as technical drawings, charts, and diagrams, that are virtually inaccessible to people with visual disabilities (blind, visually impaired, etc.). These contents are mostly vector based and can generally be found on the Web in various formats, the recommended one being SVG. The authors created an online application to convert SVG images (containing simple geometric figures of various sizes, fill colours, and contours of varying thickness and colour) into textual and spoken descriptions via a client-side speech synthesizer, without browser plugins. The application also allows the user to navigate efficiently through the image description using four levels of detail and keyboard commands. In this paper, the authors propose a novel method for image description based on Gestalt theory, considering cognitive load and, for the first time, providing users with visual impairments access to the full content of SVG images. Application tests were carried out with 11 users (eight normally sighted, two blind, and one amblyopic), comparing descriptions made by the application and by humans. The authors concluded that, across all users, the application improved results by 9%; considering only the visually impaired, this figure rises to 18%.
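    The core SVG-to-text idea can be illustrated with a minimal sketch (not the authors' application), parsing two common SVG primitives into spoken-ready sentences:

```python
import xml.etree.ElementTree as ET

SVG = """<svg xmlns="http://www.w3.org/2000/svg">
  <circle cx="50" cy="50" r="20" fill="red"/>
  <rect x="10" y="10" width="30" height="15" fill="blue"/>
</svg>"""

def describe(svg_text: str) -> list[str]:
    """Turn top-level SVG shapes into one description line per shape."""
    ns = "{http://www.w3.org/2000/svg}"
    lines = []
    for el in ET.fromstring(svg_text):
        tag = el.tag.removeprefix(ns)  # strip the namespace prefix
        if tag == "circle":
            lines.append(f"a {el.get('fill')} circle of radius {el.get('r')} "
                         f"at ({el.get('cx')}, {el.get('cy')})")
        elif tag == "rect":
            lines.append(f"a {el.get('fill')} rectangle, "
                         f"{el.get('width')} by {el.get('height')}")
    return lines

print(describe(SVG))
```

    Each line would then be handed to a speech synthesizer; the multi-level navigation the authors describe would group such lines by detail level.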

    Layering the senses: exploring audio primacy in multisensory cinema

    This is an accepted manuscript of a paper published by the Institute of Acoustics in Proceedings of the Institute of Acoustics on 22/11/2021. The accepted version of the publication may differ from the final published version

    Aprendizado de variedades para a síntese de áudio espacial (Manifold learning for spatial audio synthesis)

    Orientadores (advisors): Luiz César Martini, Bruno Sanches Masiero. Tese de doutorado (doctoral thesis), Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
    Abstract: The objective of binaurally rendered spatial audio is to simulate a sound source in arbitrary spatial locations through Head-Related Transfer Functions (HRTFs), also called Anatomical Transfer Functions. HRTFs model the direction-dependent influence of the ears, head, and torso on the incident sound field. When an audio source is filtered through a pair of HRTFs (one for each ear), a listener perceives the sound as though it were reproduced at a specific location in space. Inspired by our successful results building a practical face-recognition application for visually impaired people that uses a spatial audio user interface, in this work we deepened our research to address several scientific aspects of spatial audio. In this context, this thesis explores the incorporation of prior knowledge of spatial audio through a novel nonlinear HRTF representation based on manifold learning, tackling three challenges of broad interest to the spatial audio community: HRTF personalization, HRTF interpolation, and the improvement of human sound localization. Exploring manifold learning for spatial audio rests on the assumption that the data (i.e., the HRTFs) lie on a low-dimensional manifold. This assumption has also been of interest among researchers in computational neuroscience, who argue that manifolds are crucial for understanding the nonlinear relationships underlying perception in the brain. For all of our contributions using manifold learning, the construction of a single manifold across subjects through an Inter-subject Graph (ISG) has proven to yield a powerful HRTF representation, capable of incorporating prior knowledge of HRTFs and capturing the underlying factors of spatial hearing. Moreover, using our ISG to construct a single manifold offers the advantage of employing information from other individuals to improve the overall performance of the techniques proposed herein. The results show that our ISG-based techniques outperform other linear and nonlinear methods in tackling the spatial audio challenges addressed by this thesis.
    Doutorado em Engenharia de Computação (Doutor em Engenharia Elétrica). Funding: 2014/14630-9, FAPESP, CAPE
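    The binaural rendering step described above (one HRTF per ear) can be sketched in a few lines; the impulse responses here are random stand-ins, not measured HRTFs:

```python
import numpy as np

rng = np.random.default_rng(0)
mono = rng.standard_normal(1024)      # source signal
hrir_left = rng.standard_normal(128)  # stand-in head-related impulse responses
hrir_right = rng.standard_normal(128)

def binauralize(signal, hl, hr):
    """Convolve a mono signal with an HRIR pair -> (N, 2) stereo array."""
    left = np.convolve(signal, hl)
    right = np.convolve(signal, hr)
    return np.stack([left, right], axis=1)

stereo = binauralize(mono, hrir_left, hrir_right)
print(stereo.shape)  # (1151, 2): 1024 + 128 - 1 samples, two ears
```

    The thesis's contribution sits one level up from this filtering step: choosing or interpolating the HRIR pair itself via the manifold representation.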

    Context Aware Computing or the Sense of Context

    Ubiquitous and pervasive systems, special categories of embedded systems, can be used to sense the context that surrounds them. In particular, context-aware systems are able to alter their internal state and behaviour based on the context they perceive. To help people perform their activities, such systems can use the knowledge gathered about that context. A large research and industrial effort, geared towards innovation in sensors, processors, operating systems, communication protocols, and frameworks, provides many "enabling" technologies, such as Wireless Sensor Networks or smartphones. Despite this significant effort, however, the adoption of pervasive systems to enhance sports monitoring, training, and assistive technologies is still rather limited. This thesis identifies two main issues behind this low usage of pervasive technologies, both mainly related to users: on one side, the attempt of computer science experts and researchers to drive the adoption of information-technology-based solutions while partially neglecting interaction with end users; on the other, scarce attention to the interaction between humans and computers. The first issue translates into a lack of attention to what is relevant in the context of the user's (special) needs. The second is reflected in the widespread use of graphical user interfaces to present information, which demands a high level of cognitive effort from users. While literature studies can provide knowledge about the user's context, only direct contact with users enriches that knowledge with awareness, providing a precise identification of the factors most relevant to the user. To successfully apply pervasive technologies to sports engineering and assistive technology, identifying the relevant factors is a necessary premise, and it represents the main methodological approach used throughout this thesis. The thesis analyses different sports (rowing, swimming, running) and a disability (blindness) to show how the proposed design methodology is put into practice. Relevant factors were identified through close collaboration with users and experts in the respective fields. The identification process is described, together with the solutions tailored to each field of use. The use of sonification, i.e., conveying information as sound, is proposed to address the second issue, concerning user interfaces. Sonification can ease the real-time exploitation of performance information in sport activities and can help to partially alleviate the disability of blind users.
    In rowing, the synchrony level of the team was identified as one of the relevant factors for effective propulsion of the boat. The problem of detecting the synchrony level is analysed by means of a network of wireless accelerometers, and two different solutions are proposed: the first based on Pearson's correlation index and the second on an emergent approach called stigmergy. Both approaches were successfully tested in the laboratory and in the field. Moreover, two applications, for smartphones and PCs, were developed to provide telemetry and sonification of a rowing boat's motion. In swimming, an investigation of the widespread belief that kinematics is the relevant factor in effective propulsion drew attention to the importance of studying the so-called "feel-for-water" experienced by elite swimmers. An innovative system was designed to sense and communicate the fluid-dynamic effects caused by moving water masses around a swimmer's hands. The system transforms water pressure, measured with piezo probes around the hands, into auditory biofeedback for swimmers and trainers, as the basis for a new way of communicating the "feel-for-water". The system was successfully tested in the field and proved to provide real-time information for the swimmer and the trainer. In running, two relevant parameters were identified: the flight time and contact time of the feet. An innovative system was designed to obtain these parameters from a single trunk-mounted accelerometer and was implemented on a smartphone. Achieving this required designing a method to virtually realign the accelerometer axes and to extract the flight and contact phases from the realigned signal. The complete smartphone application was successfully tested in the field, comparing its values against those of specialized equipment and proving its suitability as a pervasive training aid for runners.
    To explore the possibilities of sonification as a basis for assistive technology, we started a collaboration with a research group at the University of Applied Sciences, Geneva, Switzerland, focused on a project called SeeColOr (See Color with an Orchestra). In particular, we had the opportunity to implement the SeeColOr system on smartphones, enabling blind users to use that technology on low-cost, lightweight devices. The thesis also explores some issues related to environmental sensing in extreme environments, such as glaciers, using Wireless Sensor Network technology. Since this technology is similar to that used in the other contexts presented, the lessons learned can easily be reused. The main problems are related to the high difficulty and low reliability of this innovative technology compared with other commercially available "legacy" solutions, usually based on larger and more expensive devices called dataloggers. The thesis presents these problems and the proposed solutions to show the application of the design approach pursued and refined during the development of the experimental activities and the research that implemented them.
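    The first rowing-synchrony solution, Pearson's correlation between two rowers' accelerometer traces, can be sketched as follows; the stroke signals are synthetic stand-ins for what the wireless nodes would stream:

```python
import numpy as np

def synchrony(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation of two stroke-acceleration signals (1 = in sync)."""
    return float(np.corrcoef(a, b)[0, 1])

t = np.linspace(0, 4 * np.pi, 400)  # roughly two stroke cycles
rower1 = np.sin(t)
rower2 = np.sin(t - 0.3)            # second rower lags slightly

print(round(synchrony(rower1, rower1), 2))  # 1.0 (perfect sync)
print(synchrony(rower1, rower2))            # high, but below 1
```

    In a sonification front end, this scalar could directly drive pitch or tempo so the crew hears how closely their strokes align.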

    Image and Video Processing for Visually Handicapped People

    This paper reviews the state of the art in assistive devices for sight-handicapped people. It concentrates in particular on systems that use image and video processing to convert visual data into an alternative rendering modality appropriate for a blind user. Such alternative modalities can be auditory, haptic, or a combination of both; there is thus a need for modality conversion from the visual modality to another, and this is where image and video processing plays a crucial role. The possible alternative sensory channels are examined with the purpose of using them to present visual information to totally blind persons. Aids that either already exist or are still under development are then presented, distinguished according to their final output channel. Haptic encoding is the most often used, by means of either tactile or combined tactile/kinesthetic encoding of the visual data. Auditory encoding may lead to low-cost devices, but the high information loss incurred when transforming visual data into auditory data must be handled. Despite its higher technical complexity, audio/haptic encoding has the advantage of making use of all of the user's available sensory channels.
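    As an illustration of visual-to-auditory modality conversion (a generic vOICe-style column scan, not a specific system from the review), an image can be swept column by column, with pixel row controlling pitch and brightness controlling amplitude; all parameters below are illustrative assumptions:

```python
import numpy as np

def column_to_audio(column, sr=8000, dur=0.05, f_lo=200.0, f_hi=2000.0):
    """Mix one image column into a short audio frame (top row = high pitch)."""
    t = np.arange(int(sr * dur)) / sr
    freqs = np.linspace(f_hi, f_lo, len(column))  # row index -> sine frequency
    frame = sum(b * np.sin(2 * np.pi * f * t) for b, f in zip(column, freqs))
    return frame / max(len(column), 1)            # crude normalization

image = np.random.rand(16, 16)  # stand-in grayscale image
audio = np.concatenate([column_to_audio(c) for c in image.T])
print(audio.shape)  # (6400,) = 16 columns x 400 samples each
```

    The information loss the review highlights is visible here: a 16x16 image is squeezed into under a second of audio, so spatial detail must be traded against listening time.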
