313 research outputs found

    Exploring the Use of Wearables to Develop Assistive Technology for Visually Impaired People

    This thesis explores the use of two prominent wearable devices to develop assistive technology for users who are visually impaired. Specifically, the work aims to improve the quality of life of visually impaired users by improving their mobility and their ability to interact socially with others. We explore the use of a smartwatch for creating low-cost spatial haptic applications: haptic feedback delivered through a paired smartwatch and smartphone provides navigation instructions that let visually impaired people safely traverse a large open space. This spatial feedback guides them to walk a straight path from source to destination while avoiding veering. Exploiting the paired interaction between a smartphone and a smartwatch helped overcome the limitation that each smart device has only a single haptic actuator. We also explore the use of a head-mounted display to enhance social interaction by helping people with visual impairments align their head towards a conversation partner and maintain personal space during a conversation. Audio feedback guides users towards effective face-to-face communication. A qualitative study shows the effectiveness of the application and explains how it helps visually impaired people perceive non-verbal cues and feel more engaged and assertive in social interactions.
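
    The following is a minimal sketch of how such paired haptic guidance could be structured, assuming a compass-derived heading error and hypothetical vibrate_watch()/vibrate_phone() calls standing in for platform vibration APIs; it illustrates the idea rather than the thesis's implementation, and the threshold is arbitrary.

```python
# Illustrative sketch: steer a walker back onto a straight bearing using
# vibration on a paired smartwatch (one side) and smartphone (other side).
# vibrate_watch()/vibrate_phone() are hypothetical stand-ins for platform APIs.

VEER_THRESHOLD_DEG = 10.0  # tolerated deviation before feedback is given (assumed)

def heading_error(current_deg: float, target_deg: float) -> float:
    """Signed smallest angle from target to current heading, in degrees."""
    return (current_deg - target_deg + 180.0) % 360.0 - 180.0

def guidance_step(current_deg: float, target_deg: float,
                  vibrate_watch, vibrate_phone) -> str:
    err = heading_error(current_deg, target_deg)
    if err > VEER_THRESHOLD_DEG:      # drifted right -> cue the left-side device
        vibrate_watch(duration_ms=200)
        return "correct-left"
    if err < -VEER_THRESHOLD_DEG:     # drifted left -> cue the right-side device
        vibrate_phone(duration_ms=200)
        return "correct-right"
    return "on-course"

# Example: the compass reads 108 degrees while the straight-path bearing is 95.
buzz = lambda **kw: print("buzz", kw)
print(guidance_step(108.0, 95.0, buzz, buzz))  # -> "correct-left"
```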

    Enhancing perception for the visually impaired with deep learning techniques and low-cost wearable sensors

    As estimated by the World Health Organization, millions of people live with some form of vision impairment, and as a consequence some of them face mobility problems in outdoor environments. With the aim of helping them, we propose in this work a system capable of delivering the position of potential obstacles in outdoor scenarios. Our approach is based on non-intrusive wearable devices and also focuses on being low-cost. First, a depth map of the scene is estimated from a color image, which provides 3D information about the environment. Then, an urban object detector is in charge of detecting the semantics of the objects in the scene. Finally, the three-dimensional and semantic data are summarized in a simpler representation of the potential obstacles the users have in front of them, and this information is transmitted to the user through spoken or haptic feedback. Our system runs at about 3.8 fps and achieved an 87.99% mean accuracy in obstacle presence detection. Finally, we deployed our system in a pilot test involving an actual person with vision impairment, who validated the effectiveness of our proposal for improving their navigation capabilities outdoors. This work has been supported by the Spanish Government TIN2016-76515R Grant, supported with Feder funds, the University of Alicante project GRE16-19, and by the Valencian Government project GV/2018/022. Edmanuel Cruz is funded by a Panamanian grant for PhD studies IFARHU & SENACYT 270-2016-207. This work has also been supported by a Spanish grant for PhD studies ACIF/2017/243. Thanks also to Nvidia for the generous donation of a Titan Xp and a Quadro P6000.
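
    As a rough illustration of the kind of pipeline described above, the sketch below fuses a monocular depth map with object detections into short obstacle messages; estimate_depth() and detect_objects() are hypothetical stand-ins for the actual deep models, and the distance threshold is an arbitrary assumption.

```python
# Illustrative pipeline sketch: monocular depth estimation + urban object
# detection fused into a short obstacle summary for spoken/haptic feedback.
# estimate_depth() and detect_objects() are placeholders, not the paper's networks.
import numpy as np

def summarize_obstacles(rgb_image: np.ndarray, estimate_depth, detect_objects,
                        near_threshold_m: float = 3.0):
    depth = estimate_depth(rgb_image)            # HxW array of metres (assumed)
    messages = []
    for det in detect_objects(rgb_image):        # dicts with "label" and "box"
        x0, y0, x1, y1 = det["box"]              # pixel coordinates of the box
        median_depth = float(np.median(depth[y0:y1, x0:x1]))
        if median_depth > near_threshold_m:
            continue                             # ignore far-away objects
        center_x = (x0 + x1) / 2 / rgb_image.shape[1]
        side = "left" if center_x < 0.4 else "right" if center_x > 0.6 else "ahead"
        messages.append(f"{det['label']} {median_depth:.1f} metres {side}")
    return messages                              # e.g. ["bench 1.8 metres ahead"]
```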

    BRAILLESHAPES: efficient text input on smartwatches for blind people

    Master's thesis, Informatics Engineering, 2023, Universidade de Lisboa, Faculdade de Ciências. Mobile touchscreen devices like smartphones and smartwatches are a predominant part of our lives. They have evolved, and so have their applications. Due to the constant growth and advancement of technology, using such devices to accomplish a vast range of tasks has become common practice. Nonetheless, relying on touch-based interaction, requiring the good spatial ability and memorization inherent to mobile devices, and lacking sufficient tactile cues make these devices visually demanding, and thus a strenuous interaction modality for visually impaired people. This is even more apparent in movement-based contexts or scenarios where one-handed use is required. We believe devices like smartwatches can provide numerous advantages when addressing such issues. However, they lack accessible solutions for several tasks, with most existing solutions for mobile touchscreen devices targeting smartphones. With communication being of the utmost importance and intrinsic to humankind, one task for which it is imperative to address the surrounding accessibility concerns is text entry. Since Braille is a reading standard for blind people and produced positive results in prior work on accessible text entry, we believe using it as the basis for an accessible text entry solution can help solidify a standard for this type of interaction modality. It can also allow users to leverage previous knowledge, reducing possible extra cognitive load. Yet, even though Braille-based chording solutions have achieved good results, a tapping approach is not the most feasible given the reduced space of the smartwatch touchscreen. Hence, we found the best option to be a gesture-based solution. Therefore, with this thesis, we explored and validated the concept and feasibility of Braille-based shapes as the foundation for an accessible gesture-based smartwatch text entry method for visually impaired people.
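
    As a toy illustration of the underlying idea, the sketch below defines letters by their standard Braille dot patterns and matches a traced shape by the set of dots it visits; the matching rule is an assumption for illustration and is not the BrailleShapes recognizer itself.

```python
# Toy sketch of a Braille-derived gesture alphabet: each letter is defined by
# the raised dots of its Braille cell (numbered 1-6, two columns of three),
# and a traced shape is matched by the set of dot positions it passes through.

BRAILLE_DOTS = {           # dot numbers per letter (standard Braille, a-j)
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5}, "i": {2, 4},
    "j": {2, 4, 5},
}

def match_shape(visited_dots):
    """Return the letter whose Braille cell exactly matches the visited dots."""
    for letter, dots in BRAILLE_DOTS.items():
        if dots == visited_dots:
            return letter
    return None

# A stroke that passes through dot positions 1, 2 and 4 would be read as "f".
print(match_shape({1, 2, 4}))  # -> "f"
```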

    Manifold learning for spatial audio synthesis

    Advisors: Luiz César Martini, Bruno Sanches Masiero. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. The objective of binaurally rendered spatial audio is to simulate a sound source at arbitrary spatial locations through the Head-Related Transfer Functions (HRTFs). HRTFs model the direction-dependent influence of the ears, head, and torso on the incident sound field. When an audio source is filtered through a pair of HRTFs (one for each ear), a listener perceives the sound as though it were reproduced at a specific location in space. Inspired by our successful results building a practical face recognition application aimed at visually impaired people that uses a spatial audio user interface, in this work we have deepened our research to address several scientific aspects of spatial audio. In this context, this thesis explores the incorporation of prior knowledge about spatial audio using a novel nonlinear HRTF representation based on manifold learning, which tackles three challenges of broad interest to the spatial audio community: HRTF personalization, HRTF interpolation, and the improvement of human sound localization. Exploring manifold learning for spatial audio is based on the assumption that the data (i.e., the HRTFs) lie on a low-dimensional manifold. This assumption has also been of interest among researchers in computational neuroscience, who argue that manifolds are crucial for understanding the underlying nonlinear relationships of perception in the brain. For all of our contributions using manifold learning, the construction of a single manifold across subjects through an Inter-subject Graph (ISG) has proven to be a powerful HRTF representation, capable of incorporating prior knowledge of HRTFs and capturing the underlying factors of spatial hearing. Moreover, constructing a single manifold with our ISG offers the advantage of employing information from other individuals to improve the overall performance of the techniques proposed here. The results show that our ISG-based techniques outperform other linear and nonlinear methods on the spatial audio challenges addressed by this thesis. Doctorate, Computer Engineering; Doctor in Electrical Engineering; grant 2014/14630-9, FAPESP, CAPE.
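
    A minimal sketch of the general approach, assuming HRTF magnitude responses pooled across subjects and using scikit-learn's Isomap as a generic graph-based manifold learner in place of the thesis's ISG construction:

```python
# Illustrative sketch: stack HRTF magnitude responses from several subjects,
# build a k-NN neighbourhood graph over all of them, and embed them in a
# low-dimensional manifold. Isomap is a generic stand-in here; hrtf_sets is
# assumed to be a list of (directions x frequency_bins) arrays, one per subject.
import numpy as np
from sklearn.manifold import Isomap

def embed_hrtfs_across_subjects(hrtf_sets, n_neighbors=8, n_components=3):
    X = np.vstack(hrtf_sets)                      # pool all subjects' HRTFs
    subject_id = np.concatenate(
        [np.full(len(h), i) for i, h in enumerate(hrtf_sets)])
    embedding = Isomap(n_neighbors=n_neighbors,
                       n_components=n_components).fit_transform(X)
    return embedding, subject_id                  # shared manifold coordinates

# Example with random stand-in data: 3 subjects, 100 directions, 64 bins each.
rng = np.random.default_rng(0)
coords, ids = embed_hrtfs_across_subjects(
    [rng.normal(size=(100, 64)) for _ in range(3)])
print(coords.shape)  # (300, 3)
```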

    Accessible On-Body Interaction for People With Visual Impairments

    While mobile devices offer new opportunities for people with disabilities to gain independence in everyday activities, modern touchscreen-based interfaces can present accessibility challenges for low-vision and blind users. Even with state-of-the-art screen readers, it can be difficult or time-consuming to select specific items without visual feedback. The smooth surface of the touchscreen provides little tactile feedback compared to physical button-based phones. Furthermore, in a mobile context, hand-held devices present additional accessibility issues when both of the user's hands are not available for interaction (e.g., one hand may be holding a cane or a dog leash). To improve mobile accessibility for people with visual impairments, I investigate on-body interaction, which employs the user's own skin surface as the input space. On-body interaction may offer an alternative or complementary means of mobile interaction for people with visual impairments by enabling non-visual interaction with extra tactile and proprioceptive feedback compared to a touchscreen. In addition, on-body input may free users' hands and offer efficient interaction, as it can eliminate the need to pull out or hold the device. Despite this potential, little work has investigated the accessibility of on-body interaction for people with visual impairments. Thus, I begin by identifying needs and preferences for accessible on-body interaction. From there, I evaluate user performance in target acquisition and shape drawing tasks on the hand compared to on a touchscreen. Building on these studies, I focus on the design, implementation, and evaluation of an accessible on-body interaction system for visually impaired users. The contributions of this dissertation are: (1) identification of perceived advantages and limitations of on-body input compared to a touchscreen phone, (2) empirical evidence of the performance benefits of on-body input over touchscreen input in terms of speed and accuracy, (3) implementation and evaluation of an on-body gesture recognizer using finger- and wrist-mounted sensors, and (4) design implications for accessible non-visual on-body interaction for people with visual impairments.
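
    A minimal sketch of the kind of gesture recognizer described, assuming windows of finger- and wrist-mounted IMU samples and using simple statistical features with an off-the-shelf SVM; the dissertation's actual feature set and classifier may differ.

```python
# Illustrative sketch: classify on-body gestures from windows of finger- and
# wrist-mounted IMU samples using basic statistical features and an SVM.
import numpy as np
from sklearn.svm import SVC

def window_features(window: np.ndarray) -> np.ndarray:
    """window: (samples x channels) accelerometer/gyro data for one gesture."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

def train_gesture_classifier(windows, labels):
    X = np.stack([window_features(w) for w in windows])
    return SVC(kernel="rbf").fit(X, labels)      # assumed classifier choice

# Usage: clf = train_gesture_classifier(train_windows, train_labels)
#        predicted = clf.predict([window_features(new_window)])
```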

    Investigating Natural User Interfaces (NUIs): technologies and interaction in an accessibility context

    Advisor: Maria Cecília Calani Baranauskas. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação. Natural User Interfaces (NUIs) represent a new interaction paradigm, with the promise of being more intuitive and easier to use than its predecessor, which relies on mouse and keyboard. In a context where technology is becoming ever more invisible and pervasive, not only the number but also the diversity of people who participate in this context is increasing. It must therefore be studied how this new interaction paradigm can, in fact, be accessible to all the people who may use it in their daily routine. Furthermore, it is also necessary to characterize the paradigm itself, to understand what makes it, in fact, natural. Therefore, in this thesis we present the path we took in search of these two answers: how to characterize NUIs in the current technological context, and how to make NUIs accessible to all. To do so, we first present a systematic literature review covering the state of the art. Then, we show a set of heuristics for the design and evaluation of NUIs, which were applied in practical case studies. Afterwards, we structure the ideas of this research within the artifacts of Organizational Semiotics, and we obtain insights into how to design NUIs with accessibility, be it through Universal Design or by proposing Assistive Technologies. We then present three case studies with NUI systems that we designed. From these case studies, we expanded our theoretical framework and were finally able to identify three elements that sum up our characterization of NUIs: differences, affordances, and enaction. Doctorate, Computer Science; Doctor in Computer Science; grant 160911/2015-0, CAPES, CNP.

    Self-Powered Gesture Recognition with Ambient Light

    We present a self-powered module for gesture recognition that utilizes small, low-cost photodiodes for both energy harvesting and gesture sensing. Operating in photovoltaic mode, the photodiodes harvest energy from ambient light. In the meantime, the instantaneously harvested power from individual photodiodes is monitored and exploited as a clue for sensing finger gestures in proximity. Harvested power from all photodiodes is aggregated to drive the whole gesture-recognition module, including a micro-controller running the recognition algorithm. We design a robust, lightweight algorithm to recognize finger gestures in the presence of ambient light fluctuations. We fabricate two prototypes to facilitate the user's interaction with smart glasses and smartwatches. Results show 99.7%/98.3% overall precision/recall in recognizing five gestures on the glasses and 99.2%/97.5% precision/recall in recognizing seven gestures on the watch. The system consumes 34.6 µW/74.3 µW for the glasses/watch and thus can be powered by the energy harvested from ambient light. We also test the system's robustness under various light intensities, light directions, and ambient light fluctuations. The system maintains high recognition accuracy (> 96%) in all tested settings.
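
    A rough sketch of how shadow-induced dips in per-photodiode harvested power could be turned into a swipe detection; the baseline window, dip ratio, and swipe logic here are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch: detect a finger swipe over a row of photodiodes from the
# per-diode harvested-power traces. A slow moving average tracks ambient light;
# a finger shadowing a diode shows up as a short dip below that baseline.
import numpy as np

def detect_swipe(power: np.ndarray, baseline_window=50, dip_ratio=0.7):
    """power: (samples x n_photodiodes) instantaneous harvested power."""
    dips = []
    for ch in range(power.shape[1]):
        trace = power[:, ch]
        baseline = np.convolve(trace, np.ones(baseline_window) / baseline_window,
                               mode="same")      # slow ambient-light estimate
        below = np.where(trace < dip_ratio * baseline)[0]
        if below.size:
            dips.append((below[0], ch))          # time of first shadowing per diode
    if len(dips) < 2:
        return None                              # not enough diodes shadowed
    order = [ch for _, ch in sorted(dips)]       # channels in shadowing order
    return "swipe-right" if order[0] < order[-1] else "swipe-left"
```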

    SmartWheels: Detecting urban features for wheelchair users’ navigation

    People with mobility impairments have heterogeneous needs and abilities when moving in an urban environment, and hence they require personalized navigation instructions. Providing these instructions requires knowledge of urban features such as curb ramps, steps, or other obstacles along the way. Since these urban features are not available from maps and change over time, crowdsourcing this information from end users is a scalable and promising solution. However, it is inconvenient for wheelchair users to input data while on the move, so an automatic crowdsourcing mechanism is needed. In this contribution we present SmartWheels, a solution that detects urban features by analyzing inertial sensor data produced by wheelchair movements. Activity recognition techniques are used to process the sensor data stream. SmartWheels is evaluated on data collected from 17 real wheelchair users navigating in a controlled environment (10 users) and in the wild (7 users). Experimental results show that SmartWheels is a viable solution for detecting urban features, in particular when applying specific strategies based on the confidence the classifier assigns to its predictions.
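
    A minimal sketch of confidence-thresholded activity recognition over windows of inertial data, in the spirit of the strategy described above; the window length, features, classifier, and threshold are assumptions for illustration, not SmartWheels' actual pipeline.

```python
# Illustrative sketch: classify fixed-length windows of wheelchair IMU data and
# report an urban feature (e.g. "curb ramp", "step") only when the classifier
# is confident enough about its prediction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

WINDOW = 128  # samples per window (assumed)

def features(window: np.ndarray) -> np.ndarray:
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def detect_urban_features(stream: np.ndarray, clf: RandomForestClassifier,
                          min_confidence: float = 0.8):
    detections = []
    for start in range(0, len(stream) - WINDOW + 1, WINDOW):
        x = features(stream[start:start + WINDOW]).reshape(1, -1)
        probs = clf.predict_proba(x)[0]
        if probs.max() >= min_confidence:        # keep only confident predictions
            detections.append((start, clf.classes_[probs.argmax()]))
    return detections
```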

    Development of a mobile technology system to measure shoulder range of motion

    In patients with shoulder movement impairment, assessing and monitoring shoulder range of motion is important for determining the severity of impairments due to disease or injury and for evaluating the effects of interventions. Current clinical methods of goniometry and visual estimation require an experienced user and suffer from low inter-rater reliability. More sophisticated techniques such as optical or electromagnetic motion capture exist but are expensive and restricted to a specialised laboratory environment. Inertial measurement units (IMUs), such as those within smartphones and smartwatches, show promise as tools to bridge the gap between laboratory and clinical techniques and to accurately measure shoulder range of motion during both clinic assessments and daily life. This study aims to develop an Android mobile application for both a smartphone and a smartwatch to assess shoulder range of motion. Initial performance characterisation of the inertial sensing capabilities of a smartwatch and a smartphone running the application was conducted against an industrial inclinometer, a free-swinging pendulum and a custom-built servo-powered gimbal. An initial validation study comparing the smartwatch application with a universal goniometer for shoulder ROM assessment was conducted with twenty healthy participants. An impaired condition was simulated by applying kinesiology tape across the participants' shoulder girdle. Agreement and intra- and inter-day reliability were assessed in both the healthy and impaired states. Both the phone and the watch performed with acceptable accuracy and repeatability during static conditions (within ±1.1°) and during dynamic conditions, where they were strongly correlated with the pendulum and gimbal data (ICC > 0.9). Both devices performed accurately within the range of angular velocities typical of humerus movement during activities of daily living (frequency response of 377°/s and 358°/s for the phone and watch respectively). The concurrent agreement between the watch and the goniometer was high in both the healthy and impaired states (ICC > 0.8) and between measurement days (ICC > 0.8). The mean absolute difference between the watch and the goniometer was within the accepted minimal clinically important difference for shoulder movement (5.11° to 10.58°). The results show promise for the developed Android application to be used as a goniometry tool for the assessment of shoulder ROM. However, the limits of agreement across all the tests fell outside the acceptable margin, and further investigation is required to determine validity. Evaluation of validity in patients with clinical impairment is also required to assess the feasibility of using the application in clinical practice.
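
    As a simplified illustration of inclinometer-style angle estimation from a wrist-worn device, the sketch below derives a static elevation angle from the accelerometer's gravity vector; the axis convention is an assumption and this is not the thesis's Android application.

```python
# Illustrative sketch: estimate a static elevation angle from a wrist-worn
# accelerometer by measuring how much of gravity falls along the device's
# x-axis (assumed to lie along the forearm) versus the other two axes.
import numpy as np

def elevation_angle_deg(accel_xyz) -> float:
    """accel_xyz: mean accelerometer reading (in g) while the arm is held still.
    Returns the angle of the device's x-axis above the horizontal plane."""
    ax, ay, az = accel_xyz
    off_axis = np.hypot(ay, az)                   # gravity component off the x-axis
    return float(np.degrees(np.arctan2(ax, off_axis)))

# Example readings (in g): x-axis pointing straight down -> -90, x-axis level -> 0.
print(round(elevation_angle_deg([-1.0, 0.0, 0.0])))  # -> -90
print(round(elevation_angle_deg([0.0, 0.0, 1.0])))   # -> 0
```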