
    Prototype ultrasonic wayfinder with haptic feedback for an IoT environment

    Pervasive computing and the Internet of Things (IoT) have stimulated the development of many new assistive devices. Sensors such as acoustic, inductive, capacitive, temperature, humidity, pressure, and location sensors can all be incorporated. Haptic feedback provides a person with sensory information through the skin using vibration or force-feedback responses. Commercial organizations have moved quickly into this design space, notably Sunu (a smart watch) and HandSight (cameras on a glove), among others. Arduino and Raspberry Pi are examples of the computing platforms currently in use. Sonar or ultrasonic transducers enable the production of lighter equipment with improved functionality. Sonar as a means of assistive navigation has been used extensively in maritime environments to detect animals (D'Amico and Pittenger, 2009; Evans and Awbrey, 1988). As an assistive technology, there are projects for the blind that upgrade walking sticks with an ultrasonic sensor (Amemiya and Sugiyama, 2010). Similar projects have been undertaken worldwide, and most devices provide only one or two designated functions. The completed device is small enough to embed on a shoe, a walking stick, or a wheelchair. A sonar sensor can detect obstacles less than a meter from the user. This study attaches a sonar sensor driven by a Raspberry Pi Zero to a glove, whereas the Tacit glove (Hoefer, 2011) carries two sonar sensors with an Arduino controller actuating vibrating motors on a glove.
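    The sonar-to-haptic pipeline described above can be sketched as follows. This is a minimal illustration, assuming an HC-SR04-style sensor whose round-trip echo time has already been measured; the function names and the ~1 m useful range are taken from the abstract, but the linear distance-to-intensity mapping is an assumed design choice, not the paper's.

```python
def echo_to_distance_cm(echo_time_s, speed_of_sound_cm_s=34300):
    """Convert a sonar round-trip echo time (seconds) to distance in cm.

    The ultrasonic pulse travels to the obstacle and back, so the
    one-way distance is half the round trip at the speed of sound.
    """
    return (echo_time_s * speed_of_sound_cm_s) / 2


def vibration_duty_cycle(distance_cm, max_range_cm=100):
    """Map a measured distance to a vibration-motor PWM duty cycle in [0, 1].

    Obstacles at or beyond max_range_cm (the ~1 m sonar range noted in
    the abstract) produce no vibration; closer obstacles vibrate harder.
    This linear ramp is an assumed mapping for illustration.
    """
    if distance_cm >= max_range_cm:
        return 0.0
    return 1.0 - (distance_cm / max_range_cm)
```

    On real hardware the duty cycle would drive the glove's vibration motor via PWM; here the mapping is kept as pure logic so it can be tested off-device.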

    MAKEMESEE – AN AID TO HELP VISUALLY IMPAIRED PEOPLE

    Solutions involving artificial intelligence and computer vision have become increasingly common in recent years, owing to growing computational power and the development of new technologies. These solutions cover a large share of human needs, such as autonomous cars, medical image segmentation, and financial market forecasting. Since accessibility is also a very important area, and since artificial intelligence and computer vision techniques can provide solutions that assist people with disabilities, this work presents a solution that detects obstacles, calculates their distance, and narrates them to assist visually impaired people. Using hardware composed of two webcams, capable of capturing different images of the same scene, and software capable of processing the captured images, classifying and detecting obstacles, the solution aims to inform the user of what lies ahead.
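    The two-webcam arrangement estimates distance by stereo triangulation. The abstract does not state the exact method, so the sketch below assumes the standard pinhole-stereo relation Z = f·B/d, applied once a point's disparity (its horizontal pixel shift between the two images) is known; the function name is illustrative.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Estimate the distance in metres to a point seen by two
    horizontally offset cameras: Z = f * B / d, where f is the focal
    length in pixels, B the camera baseline in metres, and d the
    disparity in pixels. Larger disparity means a closer obstacle.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

    For example, with a 700 px focal length and a 6 cm baseline, a 21 px disparity corresponds to an obstacle about 2 m away; the computed distance would then feed the narration step.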

    Distributed and adaptive location identification system for mobile devices

    Indoor location identification and navigation need to be as simple, seamless, and ubiquitous as their outdoor GPS-based counterparts. It would be of great convenience to mobile users to continue navigating seamlessly as they move from a GPS-clear outdoor environment into an indoor environment or a GPS-obstructed outdoor environment such as a tunnel or forest. Existing infrastructure-based indoor localization systems lack this capability, and potentially face several critical technical challenges: increased installation cost, centralization, lack of reliability, poor localization accuracy, poor adaptation to the dynamics of the surrounding environment, latency, system-level and computational complexity, repetitive labor-intensive parameter tuning, and threats to user privacy. To this end, this paper presents a novel mechanism with the potential to overcome most (if not all) of the abovementioned challenges. The proposed mechanism is simple, distributed, adaptive, collaborative, and cost-effective. Based on the proposed algorithm, a mobile blind device can potentially utilize, as GPS-like reference nodes, either in-range location-aware compatible mobile devices or preinstalled low-cost infrastructure-less location-aware beacon nodes. The proposed approach is model-based and calibration-free, using the received signal strength to periodically and collaboratively measure and update the radio-frequency characteristics of the operating environment and estimate the distances to the reference nodes. Trilateration is then used by the blind device to identify its own location, as in GPS-based systems. Simulation and empirical testing ascertained that the proposed approach can potentially serve as the core of localization for future indoor and GPS-obstructed environments.
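    The two steps the abstract describes, converting received signal strength into a distance estimate and then trilaterating against reference nodes, can be sketched as follows. This is not the paper's algorithm; it assumes the common log-distance path-loss model, and all parameter values and function names are illustrative.

```python
import math

def rssi_to_distance(rssi_dbm, rssi_ref_dbm=-40.0, path_loss_exp=2.0):
    """Invert the log-distance path-loss model to estimate the distance
    in metres to a reference node. rssi_ref_dbm is the RSSI measured at
    1 m; path_loss_exp characterises the environment (~2 in free space).
    In the paper's scheme such parameters would be updated
    collaboratively rather than fixed, as they are here.
    """
    return 10 ** ((rssi_ref_dbm - rssi_dbm) / (10 * path_loss_exp))


def trilaterate(anchors, distances):
    """Locate the blind device from three reference nodes in 2D.

    anchors: [(x1, y1), (x2, y2), (x3, y3)] known node positions.
    distances: estimated distances to each anchor.
    Subtracting the first circle equation from the other two yields a
    2x2 linear system, solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

    With noisy RSSI-derived distances a least-squares fit over more than three nodes would be preferable; three nodes are used here to keep the closed-form solution short.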

    Helping the Blind to Get through COVID-19: Social Distancing Assistant Using Real-Time Semantic Segmentation on RGB-D Video

    The current COVID-19 pandemic is having a major impact on our daily lives. Social distancing is one of the measures implemented to slow the spread of the disease, but it is difficult for blind people to comply with. In this paper, we present a system that helps blind people maintain physical distance from other persons using a combination of RGB and depth cameras. We run a real-time semantic segmentation algorithm on the RGB stream to detect where persons are, use the depth camera to assess the distance to them, and then provide audio feedback through bone-conducting headphones if a person is closer than 1.5 m. Our system warns the user only if persons are nearby and does not react to non-person objects such as walls, trees, or doors; thus, it is not intrusive and can be used in combination with other assistive devices. We have tested our prototype system on one blind and four blindfolded persons, and found that the system is precise, easy to use, and imposes a low cognitive load.
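    The core per-frame decision, combining the person mask from the RGB segmentation with the depth image, can be sketched as below. This is a simplified illustration of the check described in the abstract (the actual system processes live camera frames and triggers audio feedback); the function name and the zero-as-invalid depth convention are assumptions.

```python
def person_too_close(person_mask, depth_m, threshold_m=1.5):
    """Return True if any pixel labelled 'person' by the semantic
    segmentation lies nearer than threshold_m in the depth image.

    person_mask: 2D list of bools from segmenting the RGB frame.
    depth_m: aligned 2D list of depths in metres (0.0 = no reading,
    a common convention for invalid depth pixels, ignored here).
    """
    for mask_row, depth_row in zip(person_mask, depth_m):
        for is_person, depth in zip(mask_row, depth_row):
            if is_person and 0.0 < depth < threshold_m:
                return True
    return False
```

    Because only person-labelled pixels are inspected, nearby walls, trees, or doors never trigger the warning, matching the non-intrusive behaviour the abstract describes.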

    Development of a Visuomotor Augmentative Sensory Aid for Visually Impaired Persons

    Vision is the primary source of information about the surrounding environment. Human beings rely heavily on the sense of sight to carry out most of the activities necessary for survival. Visual impairment takes away this principal source of information about the immediate environment of an affected individual; hence, it has been reported to limit independence and social inclusion, affecting the quality of life of affected persons. This work aims to develop a socially acceptable Sensory Substitution Device (SSD), a Visuomotor Augmentative Sensory Aid, to help blind and visually impaired people navigate safely and independently. This is achieved using four HC-SR04 ultrasonic sensors that feed distance readings to an Arduino UNO development board. The Arduino filters and processes the sensor data before conveying it to the user through customised vibrations. Evaluation of this work shows that the device is portable, user-friendly, lightweight, and socially acceptable, as indicated by the responses of the participants.
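    The filter-and-select step running on the Arduino can be illustrated in Python. The abstract does not specify which filter is used; a median over each sensor's recent readings is one common, assumed choice for suppressing the spurious echoes a single HC-SR04 ping can return, and the function name is illustrative.

```python
from statistics import median

def filtered_nearest_obstacle(sensor_history):
    """Median-filter recent readings from each ultrasonic sensor,
    then report which sensor sees the nearest filtered obstacle.

    sensor_history: dict mapping sensor name (e.g. one of the four
    HC-SR04 positions) to a list of its last few raw distance
    readings in cm. Returns (sensor_name, filtered_distance_cm),
    which would then select the vibration pattern to play.
    """
    filtered = {name: median(vals) for name, vals in sensor_history.items()}
    nearest = min(filtered, key=filtered.get)
    return nearest, filtered[nearest]
```

    For instance, a single spurious 500 cm echo on the front sensor is discarded by the median, so the device still vibrates for the genuinely nearer obstacle on the left.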