12 research outputs found

    Sensory substitution information informs locomotor adjustments when walking through apertures

    The study assessed the ability of the central nervous system (CNS) to use echoic information from sensory substitution devices (SSDs) to rotate the shoulders and pass safely through apertures of different widths. Ten visually normal participants performed this task either with full vision or blindfolded, using an SSD to obtain information about the width of an aperture created by two parallel panels. Two SSDs were tested. Participants passed through apertures of +0%, +18%, +35%, and +70% of measured body width. Kinematic indices recorded movement time, shoulder rotation, average walking velocity across the trial, and peak walking velocities before crossing, after crossing, and over the whole trial. Analyses showed that participants used SSD information to regulate shoulder rotation, with greater rotation associated with narrower apertures. Rotations made using an SSD were greater than those made with vision, movement times were longer, average walking velocity was lower, and peak velocities before crossing, after crossing, and over the whole trial were smaller, suggesting greater caution. Collisions sometimes occurred when using an SSD but never with vision, indicating that substituted information did not always result in accurate shoulder rotation judgements. No differences were found between the two SSDs. The data suggest that spatial information provided by sensory substitution allows the relative position of the aperture panels to be internally represented, enabling the CNS to modify shoulder rotation according to aperture width. The increased buffer space indicated by the greater rotations (up to approximately 35% for apertures of +18% of body width) suggests that these spatial representations are not as accurate as those afforded by full vision.
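
    The abstract does not describe the analysis pipeline, so purely as an illustration of how the kinematic indices named above might be derived from motion-capture data, the sketch below computes transverse-plane shoulder rotation and walking velocities from left and right shoulder-marker trajectories. The marker layout, axis convention, and the `rate_hz` sampling parameter are assumptions, not details from the study.

    ```python
    # A minimal sketch, not the study's analysis: derive movement time,
    # peak shoulder rotation, and average/peak walking velocity from
    # shoulder-marker trajectories. Assumes x = walking direction,
    # y = mediolateral axis, positions in metres, sampled at rate_hz.
    import numpy as np

    def kinematic_indices(left_shoulder, right_shoulder, rate_hz=100.0):
        """left_shoulder, right_shoulder: (n_frames, 3) arrays in metres."""
        # Shoulder rotation: angle of the inter-shoulder line away from the
        # frontal (y) axis; 0 deg = shoulders square to the aperture.
        vec = right_shoulder - left_shoulder
        rotation_deg = np.degrees(np.arctan2(np.abs(vec[:, 0]), np.abs(vec[:, 1])))

        # Walking velocity from the shoulder midpoint trajectory.
        midpoint = (left_shoulder + right_shoulder) / 2.0
        speed = np.linalg.norm(np.gradient(midpoint, 1.0 / rate_hz, axis=0), axis=1)

        movement_time = len(midpoint) / rate_hz
        path_length = np.sum(np.linalg.norm(np.diff(midpoint, axis=0), axis=1))
        return {
            "movement_time_s": movement_time,
            "peak_shoulder_rotation_deg": float(np.max(rotation_deg)),
            "average_velocity_mps": float(path_length / movement_time),
            "peak_velocity_mps": float(np.max(speed)),
        }
    ```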

    A real-time obstacle detection algorithm for the visually impaired using binocular camera

    This paper presents a real-time, depth-based obstacle detection method to help visually impaired people avoid obstacles independently. Depth data obtained from a binocular camera are analysed to detect obstacles. With the proposed method, the distance between the binocular camera and an obstacle can be calculated at 30 fps, and the position and size of the obstacle can also be computed. The algorithm has been extensively tested on both real images and the public Laundry data set. Experimental results demonstrate that the proposed method not only detects obstacles correctly but is also fast, efficient, and stable.
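
    The paper's exact algorithm is not given in the abstract, so the sketch below is only a rough illustration of the general depth-based approach it describes: compute a disparity map from a rectified stereo pair (here with OpenCV's block matcher), convert disparity to metric depth, and report the distance, position, and size of nearby regions. The matcher choice, the parameter names (`focal_px`, `baseline_m`, `max_range_m`), and the thresholds are assumptions, not the authors' method.

    ```python
    # A minimal stereo-depth obstacle-detection sketch (not the paper's
    # implementation). Assumes rectified grayscale left/right images, a known
    # focal length in pixels, and a known baseline in metres.
    import cv2
    import numpy as np

    def detect_obstacles(left_gray, right_gray, focal_px, baseline_m,
                         max_range_m=2.0, min_area_px=500):
        # Dense disparity map; StereoBM returns fixed-point values scaled by 16.
        matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

        # Convert disparity to metric depth: Z = f * B / d (valid where d > 0).
        with np.errstate(divide="ignore"):
            depth_m = np.where(disparity > 0,
                               focal_px * baseline_m / disparity, np.inf)

        # Anything closer than max_range_m is a candidate obstacle region.
        mask = (depth_m < max_range_m).astype(np.uint8) * 255
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)

        obstacles = []
        for c in contours:
            if cv2.contourArea(c) < min_area_px:
                continue  # ignore speckle from matching noise
            x, y, w, h = cv2.boundingRect(c)
            region = depth_m[y:y + h, x:x + w]
            distance = float(np.min(region[np.isfinite(region)]))
            obstacles.append({"bbox": (x, y, w, h), "distance_m": distance})
        return obstacles
    ```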