    GPS NAVIGATOR FOR VISUALLY IMPAIRED

    The objective of this study is the development of a navigation system that supports the activities of visually impaired people without the help of others. The system navigates a visually impaired person using GPS (Global Positioning System) information: after the destination is set, the user's position is obtained via GPS and the user is guided along a predefined route.
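    The abstract gives only the outline of the guidance loop. Below is a minimal sketch of such a loop, assuming the predefined route is a list of GPS waypoints; `read_gps` and `speak` are hypothetical placeholders for the device's receiver and voice output.

    ```python
    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between two GPS fixes."""
        R = 6371000.0  # mean Earth radius in metres
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = (math.sin(dp / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
        return 2 * R * math.asin(math.sqrt(a))

    def guide_along_route(route, read_gps, speak, arrive_radius_m=5.0):
        """Announce progress toward each waypoint in turn.

        `route` is a list of (lat, lon) waypoints; `read_gps` and `speak`
        stand in for the device's GPS receiver and voice output.
        """
        for i, (wlat, wlon) in enumerate(route):
            while True:
                lat, lon = read_gps()
                d = haversine_m(lat, lon, wlat, wlon)
                if d <= arrive_radius_m:
                    speak(f"Waypoint {i + 1} reached")
                    break
                speak(f"{d:.0f} metres to next waypoint")
    ```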

    Recovering Heading for Visually-Guided Navigation

    We present a model for recovering the direction of heading of an observer who is moving relative to a scene that may contain self-moving objects. The model builds upon an algorithm proposed by Rieger and Lawton (1985), which is based on earlier work by Longuet-Higgins and Prazdny (1981). The algorithm uses velocity differences computed in regions of high depth variation to estimate the location of the focus of expansion, which indicates the observer's heading direction. We relate the behavior of the proposed model to psychophysical observations regarding the ability of human observers to judge their heading direction, and show how the model can cope with self-moving objects in the environment. We also discuss this model in the broader context of a navigational system that performs tasks requiring rapid sensing and response through the interaction of simple task-specific routines.
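    The key geometric step is recoverable from the abstract: after the Rieger-Lawton differencing, the residual velocity differences radiate from the focus of expansion, so the FOE is the point that best intersects the lines through those vectors. A small illustrative least-squares version of that step (not the paper's exact algorithm):

    ```python
    import numpy as np

    def estimate_foe(points, flow_diffs):
        """Least-squares focus-of-expansion estimate.

        Each difference vector d at image point p should point radially
        away from the FOE, so the line through p along d passes through
        it. We minimise the summed squared perpendicular distance to all
        such lines via the normal equations.
        """
        A = np.zeros((2, 2))
        b = np.zeros(2)
        for p, d in zip(points, flow_diffs):
            d = d / np.linalg.norm(d)
            n = np.array([-d[1], d[0]])  # unit normal to the line
            A += np.outer(n, n)
            b += np.outer(n, n) @ p
        return np.linalg.solve(A, b)

    # Toy check with purely radial flow from a known FOE at (10, 20):
    foe = np.array([10.0, 20.0])
    pts = np.random.rand(50, 2) * 100
    print(estimate_foe(pts, pts - foe))  # ~ [10. 20.]
    ```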

    A model of ant route navigation driven by scene familiarity

    In this paper we propose a model of visually guided route navigation in ants that captures the known properties of real behaviour whilst retaining mechanistic simplicity and thus biological plausibility. For an ant, the coupling of movement and viewing direction means that a familiar view specifies a familiar direction of movement. Since the views experienced along a habitual route will be more familiar, route navigation can be re-cast as a search for familiar views. This search can be performed with a simple scanning routine, a behaviour that ants have been observed to perform. We test this proposed route navigation strategy in simulation, by learning a series of routes through visually cluttered environments consisting of objects that are only distinguishable as silhouettes against the sky. In the first instance we determine view familiarity by exhaustive comparison with the set of views experienced during training. In further experiments we train an artificial neural network to perform familiarity discrimination using the training views. Our results indicate not only that the approach is successful, but also that the learnt routes show many of the characteristics of desert ant routes. As such, we believe the model represents the only detailed and complete model of insect route guidance to date. What is more, the model provides a general demonstration that visually guided routes can be produced with parsimonious mechanisms that do not specify when or what to learn, nor separate routes into sequences of waypoints.
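    The scanning routine described above is compact enough to sketch. A minimal version, assuming views are grayscale NumPy arrays and using the paper's exhaustive-comparison variant of familiarity; `get_view` stands in for the simulation's panoramic renderer:

    ```python
    import numpy as np

    def familiarity(view, training_views):
        """Familiarity as the negated minimum sum-of-squared-differences
        to the views stored while the route was trained."""
        return -min(float(np.sum((view - t) ** 2)) for t in training_views)

    def scan_for_heading(position, heading, get_view, training_views,
                         scan_range_deg=90.0, scan_step_deg=5.0):
        """One scanning cycle: sample views over a fan of candidate
        headings and return the most familiar direction; the caller
        then steps forward along it."""
        candidates = np.arange(heading - scan_range_deg / 2,
                               heading + scan_range_deg / 2 + scan_step_deg,
                               scan_step_deg)
        return max(candidates,
                   key=lambda h: familiarity(get_view(position, h),
                                             training_views))
    ```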

    Natural landmark detection for visually-guided robot navigation

    The main difficulty in attaining fully autonomous robot navigation outdoors is the fast detection of reliable visual references and their subsequent characterization as landmarks for immediate and unambiguous recognition. Aimed at speed, our strategy has been to track salient regions along image streams by performing only on-line pixel sampling. Persistent regions are considered good candidates for landmarks and are then characterized by a set of subregions with given color and normalized shape. They are stored in a database for later recognition during the navigation process. Some experimental results showing landmark-based navigation of the legged robot Lauron III in an outdoor setting are provided.
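    The abstract names the mechanism (on-line pixel sampling with persistence across frames) but not its parameters. A rough sketch of the persistence bookkeeping, with `is_salient` standing in for the paper's colour criterion and all constants invented for illustration:

    ```python
    import numpy as np

    def update_persistence(frame, counts, is_salient,
                           n_samples=500, cell=32, decay=0.9):
        """One frame of on-line pixel sampling.

        Randomly probe pixels, vote for the coarse grid cell of each
        salient hit, and decay old votes so that only regions persisting
        across frames accumulate. `counts` is a float array of shape
        (ceil(h / cell), ceil(w / cell)) reused across frames; cells
        above a threshold become landmark candidates.
        """
        h, w = frame.shape[:2]
        counts *= decay
        ys = np.random.randint(0, h, n_samples)
        xs = np.random.randint(0, w, n_samples)
        for y, x in zip(ys, xs):
            if is_salient(frame[y, x]):
                counts[y // cell, x // cell] += 1
        return counts
    ```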

    Navigating the Aural Web: Augmenting User Experience for Visually Impaired and Mobile Users

    The current web navigation paradigm structures interaction around vision and thus hampers users in two eyes-free scenarios: mobile computing and information access for the visually impaired. Users in both scenarios are unable to navigate complex information architectures efficiently because of the strictly linear perceptual bandwidth of the aural channel. To combat this problem, we are conducting a long-term research program aimed at establishing novel design strategies that can augment aural navigation while users browse the complex information architectures typical of the web. A pervasive problem in designing for web accessibility (especially for screen-reader users) is providing efficient access to a large collection of content, which is typically manifested in long lists indexing the underlying pages. Cognitively managing the interaction with long lists is cumbersome in the aural paradigm because users need to listen attentively to each list item to decide which link to follow, and then select it. For every non-relevant page selected, screen-reader users need to go back to the list to select another page. Our most recent study compared the performance of index-based web navigation with guided-tour navigation (navigation without lists) for screen-reader users. Guided-tour navigation allows users to move directly back and forth across the content pages of a collection, bypassing lists. An experiment (N=10), conducted at the Indiana School for the Blind and Visually Impaired (ISBVI), examined these web navigation strategies during fact-finding tasks. Guided tours significantly reduced time on task, number of pages visited, number of keystrokes, and perceived cognitive effort, while enhancing the navigational experience. By augmenting existing navigational methods for screen-reader users, our research offers web designers strategies to improve accessibility without costly site redesign. This research material is based upon work supported by the National Science Foundation under Grant #1018054.
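    Structurally, guided-tour navigation replaces index lists with a linear next/previous traversal over the collection's content pages. A toy sketch of that navigation structure:

    ```python
    class GuidedTour:
        """Linear next/previous traversal over a collection's content
        pages, so a screen-reader user never has to return to an index
        list between pages."""

        def __init__(self, pages):
            self.pages = pages
            self.i = 0

        def current(self):
            return self.pages[self.i]

        def next(self):
            self.i = min(self.i + 1, len(self.pages) - 1)
            return self.current()

        def previous(self):
            self.i = max(self.i - 1, 0)
            return self.current()
    ```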

    Perceptual impact of environmental factors in sighted and visually impaired individuals

    To a visually impaired individual, the physical world presents many challenges. For a person with impaired sight, wayfinding through a complex environment is fraught with dangers, both actual and imagined. The current generation of mobility aids has the potential to address a broad range of physical issues through technological solutions. The perception of difficulty, however, can mean that many visually impaired individuals are fearful of or uncomfortable about independent mobility or travel. In this context it becomes necessary to discover exactly what environments, environmental factors, or items constitute a ‘perception of difficulty’ in an individual’s mental landscape and may trigger a negative response before they interact with the physical environment. This paper reports on research that sought to ascertain what levels of perceptual difficulty specific environments and factors presented to individuals. The research was conducted with both visually impaired and sighted groups, and compared differences and similarities in perceptual difficulty between the two groups.

    Smart Path Guidance Mobile Aid for Visually Disabled Persons

    The traditional cane used by most visually impaired people has a narrow search area: while it warns of changes along the ground, it does not warn of walking hazards and objects above a person’s waist. Many electronic blind-navigation devices employ voice-guided GPS (Global Positioning System) and/or complex high-order processors, but their cost puts them beyond the reach of most visually impaired people. In addition, much of the prior art is difficult to handle because of its weight, volume, and functions beyond the basic purpose, so these advanced navigation systems are difficult to commercialize. The purpose of this research is to design and develop a smart path-guidance system for the blind and visually impaired: a hand-carried mobile aid containing a smart sensor logic system. A model embedding fuzzy-logic decision making was developed for the selected design, and the solution was tested with various input conditions to verify the system’s behavior. Through several experiments, the sensors were calibrated to increase decision accuracy. The prototype enables a blind person to walk freely in an unfamiliar environment.
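    The abstract does not give the rule base or membership functions, so the following is a toy Mamdani-style sketch of one fuzzy decision (an obstacle warning from a single ultrasonic range reading), with all breakpoints invented for illustration:

    ```python
    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzy_warning(distance_cm):
        """Map a range reading to a warning intensity in [0, 1] with a
        three-rule base: near -> strong (1.0), medium -> mild (0.5),
        far -> none (0.0). Breakpoints are illustrative only."""
        near = tri(distance_cm, -1, 0, 60)
        medium = tri(distance_cm, 40, 100, 160)
        far = tri(distance_cm, 120, 200, 1e9)
        total = near + medium + far
        return (near * 1.0 + medium * 0.5) / total if total else 0.0

    print(fuzzy_warning(30))   # close obstacle -> strong warning
    print(fuzzy_warning(150))  # farther away -> weak warning
    ```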

    Cortical Dynamics of Navigation and Steering in Natural Scenes: Motion-Based Object Segmentation, Heading, and Obstacle Avoidance

    Visually guided navigation through a cluttered natural scene is a challenging problem that animals and humans accomplish with ease. The ViSTARS neural model proposes how primates use motion information to segment objects and determine heading for purposes of goal approach and obstacle avoidance in response to video inputs from real and virtual environments. The model produces trajectories similar to those of human navigators. It does so by predicting how computationally complementary processes in cortical areas MT-/MSTv and MT+/MSTd compute object motion for tracking and self-motion for navigation, respectively. The model retina responds to transients in the input stream. Model V1 generates a local speed and direction estimate. This local motion estimate is ambiguous due to the neural aperture problem. Model MT+ interacts with MSTd via an attentive feedback loop to compute accurate heading estimates in MSTd that quantitatively simulate properties of human heading estimation data. Model MT- interacts with MSTv via an attentive feedback loop to compute accurate estimates of the speed, direction, and position of moving objects. This object information is combined with heading information to produce steering decisions wherein goals behave like attractors and obstacles behave like repellers. These steering decisions lead to navigational trajectories that closely match human performance.
    National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial Intelligence Agency (NMA201-01-1-2016)
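    The steering stage is described only qualitatively above: goals behave like attractors and obstacles like repellers. A minimal sketch of one such attractor/repeller steering law (functional forms and constants are illustrative, not the model's fitted ones):

    ```python
    import math

    def steering_rate(goal_angle, obstacles, k_g=1.0, k_o=2.0,
                      c_ang=1.0, c_dist=0.2):
        """Turn-rate command from a goal bearing and a set of obstacles.

        Angles are in radians relative to the current heading;
        `obstacles` is a list of (angle, distance) pairs. The goal pulls
        the heading error toward zero; each obstacle pushes the heading
        away, strongly when it is nearby and just off the heading,
        vanishing (unstably) when it is dead ahead.
        """
        turn = k_g * goal_angle
        for ang, dist in obstacles:
            turn -= (k_o * ang * math.exp(-c_ang * abs(ang))
                     * math.exp(-c_dist * dist))
        return turn

    # Goal 20 degrees left, obstacle 5 degrees left and 2 m away:
    print(steering_rate(math.radians(20), [(math.radians(5), 2.0)]))
    ```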

    Bi-directional route learning in wood ants

    Some ants and bees readily learn visually guided routes between their nests and feeding sites. They can learn the appearance of visual landmarks for the food-bound or homeward segment of the route when these landmarks are only present during that particular segment of their round trip. We show here that wood ants can also acquire landmark information for guiding their homeward path while running their food-bound path, and that this information may be picked up when ants briefly reverse direction and retrace their steps for a short distance. These short periods of looking back tend to occur early in route acquisition and are more frequent on homeward than on food-bound segments.

    Exploring haptic interfacing with a mobile robot without visual feedback

    Search and rescue scenarios are often complicated by low- or no-visibility conditions. The lack of visual feedback hampers orientation and causes significant stress for human rescue workers. The Guardians project [1] pioneered a group of autonomous mobile robots assisting a human rescue worker operating within close range. Trials were held with fire fighters of South Yorkshire Fire and Rescue. It became clear that the subjects were by no means prepared to give up their procedural routines and the feeling of security they provide: they simply ignored instructions that contradicted those routines.