7 research outputs found

    An Obstacle Detection System Using Depth Information and Region Growing for Visually Impaired People

    This study proposes an obstacle detection method based on depth information to help visually impaired people avoid obstacles as they move through an unfamiliar environment. First, we apply morphological dilation and erosion to remove speckle noise from the depth image and use the Least Squares Method (LSM) with a quadratic polynomial to approximate the floor curve and determine the floor height threshold in the V-disparity image. Second, we search for abrupt changes in depth value relative to the floor height threshold to find candidate stair edge points. Third, we use the Hough Transform to locate the drop line; to strengthen the characteristics of different objects and overcome the drawbacks of the region growing method, we apply edge detection to remove edges. Fourth, we use the floor height threshold and features of the ground to remove the ground plane, and the system then uses the region growing method to label the different objects and analyzes each object to determine whether it is a stair. Fifth, if the result is neither an upward nor a downward stair, we use the K-SVD algorithm to determine whether the object is a person. Finally, the system helps users determine the stair direction and obstacle distance through a voice prompt generated by Text To Speech (TTS). Experimental results show that the proposed system offers strong robustness and convenience. Sponsorship: National Taipei University. Conference: international, 18-19 July 2015, Tokyo, Japan (electronic proceedings).
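    A minimal sketch of the floor-fitting step described above: morphological opening to suppress depth noise, then a quadratic least-squares fit to the floor profile in V-disparity space. This is not the authors' implementation; the function name, kernel size, and disparity range are illustrative assumptions.

        import cv2
        import numpy as np

        def estimate_floor_curve(depth_u8, num_disp=64):
            """Fit a quadratic floor curve in V-disparity space (illustrative sketch)."""
            # Morphological erosion followed by dilation (opening) removes speckle noise.
            kernel = np.ones((3, 3), np.uint8)
            cleaned = cv2.dilate(cv2.erode(depth_u8, kernel), kernel)

            # Build the V-disparity image: one disparity histogram per image row.
            rows = cleaned.shape[0]
            v_disp = np.zeros((rows, num_disp), np.float32)
            for r in range(rows):
                hist, _ = np.histogram(cleaned[r], bins=num_disp, range=(0, num_disp))
                v_disp[r] = hist

            # The dominant disparity per row approximates the floor profile; fit it
            # with a quadratic polynomial by least squares (the LSM step above).
            floor_disp = v_disp.argmax(axis=1)
            valid = floor_disp > 0
            coeffs = np.polyfit(np.arange(rows)[valid], floor_disp[valid], deg=2)
            return coeffs  # evaluate with np.polyval(coeffs, row) for the floor threshold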

    A Spot Reminder System for the Visually Impaired Based on a Smartphone Camera

    The present paper proposes a smartphone-camera-based system to assist visually impaired users in recalling memories related to important locations, called spots, that they have visited. The memories are recorded as voice memos, which can be played back when the users return to the spots. Spot-to-spot correspondence is determined by image matching based on the scale-invariant feature transform (SIFT). The main contribution of the proposed system is to allow visually impaired users to associate arbitrary voice memos with arbitrary spots. The users do not need any special devices or systems other than smartphones and do not need to remember the spots where the voice memos were recorded. In addition, the proposed system can identify spots in environments that are inaccessible to the global positioning system. The proposed system has been evaluated in two experiments: image matching tests and a user study. The experimental results suggest the effectiveness of the system in helping visually impaired individuals, including blind individuals, recall information about regularly visited spots.
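    The spot-to-spot matching can be pictured with a short sketch using OpenCV's SIFT features and Lowe's ratio test; the threshold values and the match-count criterion below are illustrative assumptions, not the authors' parameters.

        import cv2

        def spots_match(query_img, stored_img, min_good_matches=20):
            """Return True if the current camera view likely shows a stored spot."""
            sift = cv2.SIFT_create()
            _, des_query = sift.detectAndCompute(query_img, None)
            _, des_stored = sift.detectAndCompute(stored_img, None)
            if des_query is None or des_stored is None:
                return False

            # k-nearest-neighbour matching with Lowe's ratio test keeps reliable matches.
            matches = cv2.BFMatcher().knnMatch(des_query, des_stored, k=2)
            good = [pair[0] for pair in matches
                    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
            return len(good) >= min_good_matches

    When the match succeeds, the system would play back the voice memo associated with that spot.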

    Stereo Camera Based Virtual Cane System with Identifiable Distance Tactile Feedback for the Blind

    In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) using a stereo camera, and the addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and the 3D pointing direction of the finger is then estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and its distance is estimated in real time. In the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger pointing gesture and the tactile distance feedback is fully identifiable to blind users.
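    The final step, detecting the nearest object along the estimated pointing direction and reporting its distance, can be sketched as a simple ray query over the 3D points reconstructed from the disparity map. The ray radius and names below are assumptions for illustration, not the VIDA implementation.

        import numpy as np

        def distance_along_pointing_ray(points_xyz, origin, direction, ray_radius=0.05):
            """points_xyz: (N, 3) points from the stereo disparity map, in metres."""
            d = direction / np.linalg.norm(direction)
            rel = points_xyz - origin
            t = rel @ d                              # distance of each point along the ray
            perp = np.linalg.norm(rel - np.outer(t, d), axis=1)
            hits = (t > 0) & (perp < ray_radius)     # points in front of the finger, near the ray
            return float(t[hits].min()) if hits.any() else None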

    Obstacle detection display for visually impaired: Coding of direction, distance, and height on a vibrotactile waist band

    Electronic travel aids (ETAs) can potentially increase the safety and comfort of blind users by detecting and displaying obstacles outside the range of the white cane. In a series of experiments, we aim to balance the amount of information displayed and the comprehensibility of the information, taking into account the risk of information overload. In Experiment 1, we investigate perception of compound signals displayed on a tactile vest while walking. The results confirm that the threat of information overload is clear and present: tactile coding parameters that are sufficiently discriminable in isolation may not be so in compound signals and while walking and using the white cane. Horizontal tactor location is a strong coding parameter, and temporal pattern is the preferred secondary coding parameter. Vertical location is also possible as a coding parameter, but it requires additional tactors and makes the display hardware more complex, more expensive, and less user friendly. In Experiment 2, we investigate how we can off-load the tactile modality by shifting part of the information to an auditory display. Off-loading the tactile modality through auditory presentation is possible, but this off-loading is limited and may result in a new threat of auditory overload; in addition, taxing the auditory channel may in turn interfere with other auditory cues from the environment. In Experiment 3, we off-load the tactile sense by reducing the amount of displayed information using several filter rules. The resulting design was evaluated in Experiment 4 with visually impaired users. Although they acknowledge the potential of the display, the added value of the ETA as a whole also depends on its sensor and object-recognition capabilities. We recommend using no more than two coding parameters in a tactile compound message and applying filter rules to reduce the number of obstacles displayed in an obstacle-avoidance ETA.
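    The recommendation of at most two coding parameters plus filter rules can be illustrated with a small sketch: horizontal tactor location encodes direction, temporal pattern encodes distance, and only the nearest obstacles are displayed. The tactor count, distance bands, and pulse patterns below are invented examples, not the values studied in the experiments.

        from dataclasses import dataclass

        NUM_TACTORS = 8        # tactors spaced evenly around a waist belt
        MAX_DISPLAYED = 2      # filter rule: show only the two nearest obstacles

        @dataclass
        class Obstacle:
            bearing_deg: float  # 0 = straight ahead, positive = clockwise
            distance_m: float

        def encode(obstacle):
            # Primary parameter: horizontal tactor location encodes direction.
            tactor = round(obstacle.bearing_deg % 360 / (360 / NUM_TACTORS)) % NUM_TACTORS
            # Secondary parameter: temporal pattern encodes distance (faster = nearer).
            if obstacle.distance_m < 1.0:
                pattern = "fast"
            elif obstacle.distance_m < 2.5:
                pattern = "medium"
            else:
                pattern = "slow"
            return {"tactor": tactor, "pattern": pattern}

        def display(obstacles):
            # Filter rule: display only the nearest obstacles to limit information load.
            nearest = sorted(obstacles, key=lambda o: o.distance_m)[:MAX_DISPLAYED]
            return [encode(o) for o in nearest]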

    Virtual environments and spatial ability of people with special educational needs (SEN)/disabilities

    Note: supplementary material is available in a separate file

    Active-Proprioceptive-Vibrotactile and Passive-Vibrotactile Haptics for Navigation

    Navigation is a complex activity and an enabling skill that humans take for granted. It is vital for humans as it fosters spatial awareness, enables exploration, facilitates efficient travel, ensures safety, supports daily activities, promotes cognitive development, and provides a sense of independence. Humans have created tools for diverse activities, including navigation. These navigation tools are usually vision-based, but touch-based tools exist for situations where visual channels are obstructed, unavailable, or need to be complemented for immersion or multi-tasking. These touch-based tools or devices are called haptic displays. Many different types of haptic displays are employed across a range of fields, from telesurgery to education and navigation. In the context of navigation, certain classes of haptic displays are more popular than others, for example, passive multi-element vibrotactile haptic displays such as haptic belts. However, certain other classes of haptic displays, such as active proprioceptive vibrotactile and passive single-element vibrotactile, may be better suited to certain practical situations and may prove more effective and intuitive for navigational tasks than a popular option such as a haptic belt. Yet these other classes have not been evaluated and cross-compared in the context of navigation. This research project aims to contribute towards the understanding and, consequently, the improvement of designs and user experience of navigational haptic displays by thoroughly evaluating and cross-comparing the effectiveness and intuitiveness of three classes of haptic display (passive single-element vibrotactile, passive multi-element vibrotactile, and various active proprioceptive vibrotactile) for navigation. Evaluation and cross-comparisons take into account quantitative measures, for example, accuracy, response time, number of repeats taken, experienced mental workload, and perceived usability, as well as qualitative feedback collected through informal interviews during testing of the prototypes. Results show that the passive single-element vibrotactile and active proprioceptive vibrotactile classes can be used as effective and intuitive navigational displays. Furthermore, the results shed light on the multifaceted nature of haptic displays and their impact on user performance, preferences, and experiences. Quantitative findings related to performance, combined with qualitative findings, emphasise that one size does not fit all and that a tailored approach is necessary to address the varying needs and preferences of users.