
    Generalized Regressive Motion: a Visual Cue to Collision


    Exploring the Use of Wearables to Enable Indoor Navigation for Blind Users

    One of the challenges that people with visual impairments (VI) have to confront daily is navigating independently through foreign or unfamiliar spaces. Navigating through unfamiliar spaces without assistance is very time consuming and leads to lower mobility. Especially in indoor environments, where GPS cannot be used, the task becomes even harder. However, advancements in mobile and wearable computing pave the path to new, inexpensive assistive technologies that can make the lives of people with VI easier. Wearable devices have great potential for assistive applications for users who are blind, as they typically feature a camera and support hands-free and eyes-free interaction. Smart watches and heads-up displays (HUDs), in combination with smartphones, can provide a basis for the development of advanced algorithms capable of providing inexpensive solutions for navigation in indoor spaces. New interfaces are also introduced, making the interaction between users who are blind and mobile devices more intuitive. This work presents a set of new systems and technologies created to help users with VI navigate indoor environments. The first system presented is an indoor navigation system for people with VI that operates by using sensors found in mobile devices and virtual maps of the environment. The second system presented helps users navigate large open spaces with minimum veering. Next, a study is conducted to determine the accuracy of pedometry for different body placements of the accelerometer sensors. Finally, a gesture detection system is introduced that facilitates communication between the user and mobile devices by using sensors in wearable devices.
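
    The pedometry study mentioned above hinges on counting steps from body-worn accelerometer traces. As a minimal, hedged sketch of that idea (not the thesis' implementation), the snippet below counts peaks in the smoothed acceleration magnitude; the sampling rate, peak threshold, and minimum step interval are illustrative assumptions.

        import numpy as np

        def count_steps(accel_xyz, fs=50.0, threshold=1.2, min_step_interval=0.3):
            """Count steps from an (N, 3) accelerometer trace.

            accel_xyz: acceleration samples in units of g, one row per sample.
            fs: sampling rate in Hz (assumed).
            threshold: magnitude (in g) a peak must exceed to count as a step.
            min_step_interval: minimum time in seconds between consecutive steps.
            """
            magnitude = np.linalg.norm(accel_xyz, axis=1)
            # Smooth with a short moving average to suppress sensor jitter.
            window = max(1, int(0.1 * fs))
            smooth = np.convolve(magnitude, np.ones(window) / window, mode="same")

            steps = 0
            last_step_idx = -int(min_step_interval * fs)
            for i in range(1, len(smooth) - 1):
                is_peak = smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1]
                if is_peak and smooth[i] > threshold and i - last_step_idx >= min_step_interval * fs:
                    steps += 1
                    last_step_idx = i
            return steps

    Different body placements (wrist, pocket, chest) would mainly change the threshold and the amount of smoothing needed, which is what such an accuracy study would quantify.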

    Evolutionary robotics and neuroscience

    No description supplied.

    An Approach Based on Particle Swarm Optimization for Inspection of Spacecraft Hulls by a Swarm of Miniaturized Robots

    The remoteness and hazards that are inherent to the operating environments of space infrastructures promote their need for automated robotic inspection. In particular, micrometeoroid and orbital debris impact and structural fatigue are common sources of damage to spacecraft hulls. Vibration sensing has been used to detect structural damage in spacecraft hulls, as well as in structural health monitoring practices in industry, by deploying static sensors. In this paper, we propose using a swarm of miniaturized vibration-sensing mobile robots to realize a network of mobile sensors. We present a distributed inspection algorithm based on bio-inspired particle swarm optimization and evolutionary algorithm niching techniques to perform the enumeration and localization of an a priori unknown number of vibration sources on a simplified 2.5D spacecraft surface. Our algorithm is deployed on a swarm of simulated cm-scale wheeled robots. These are guided in their inspection task by sensing vibrations arising from failure points on the surface, which are detected by on-board accelerometers. We study three performance metrics: (1) proximity of the localized sources to the ground truth locations, (2) time to localize each source, and (3) time to finish the inspection task given a 75% inspection coverage threshold. We find that our swarm is able to successfully localize the present sources.
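
    As a hedged sketch of the particle-swarm idea described above (not the paper's algorithm), the snippet below treats each simulated robot as a PSO particle whose fitness is the vibration amplitude it senses at its current position; attraction to personal-best and swarm-best readings drives the robots toward a source. The 2D plane and the inverse-distance amplitude model are assumptions made for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        SOURCE = np.array([3.0, -2.0])          # hidden vibration source (unknown to robots)

        def sensed_amplitude(pos):
            """Assumed amplitude model: vibration decays with distance to the source."""
            return 1.0 / (1.0 + np.linalg.norm(pos - SOURCE))

        n_robots, n_steps = 10, 200
        pos = rng.uniform(-5, 5, size=(n_robots, 2))
        vel = np.zeros_like(pos)
        pbest_pos = pos.copy()
        pbest_val = np.array([sensed_amplitude(p) for p in pos])
        gbest_pos = pbest_pos[np.argmax(pbest_val)].copy()

        w, c1, c2 = 0.7, 1.5, 1.5               # standard PSO coefficients
        for _ in range(n_steps):
            r1, r2 = rng.random((n_robots, 1)), rng.random((n_robots, 1))
            vel = w * vel + c1 * r1 * (pbest_pos - pos) + c2 * r2 * (gbest_pos - pos)
            vel = np.clip(vel, -0.5, 0.5)       # cap speed as a crude mobility constraint
            pos = pos + vel
            vals = np.array([sensed_amplitude(p) for p in pos])
            improved = vals > pbest_val
            pbest_pos[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest_pos = pbest_pos[np.argmax(pbest_val)].copy()

        print("estimated source location:", gbest_pos)

    Enumerating an unknown number of sources, as the paper targets, would additionally require the niching techniques it mentions, so that sub-swarms converge on different optima rather than collapsing onto the strongest source.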

    Blindness enhances auditory obstacle circumvention: Assessing echolocation, sensory substitution, and visual-based navigation

    Performance for an obstacle circumvention task was assessed under conditions of visual, auditory-only (using echolocation), and tactile (using a sensory substitution device, SSD) guidance. A Vicon motion capture system was used to measure human movement kinematics objectively. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 x 2 m obstacle that was varied in position across trials, at the midline of the participant or 25 cm to the right or left. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals, with fewer collisions, lower movement times, fewer velocity corrections, and greater obstacle detection ranges. The blind expert echolocator displayed performance similar to or better than that of the other groups using audition, but performance comparable to that of the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, here shown by faster, more fluid, and more accurate navigation around obstacles using sound. This research was supported by the Vision and Eye Research Unit, Postgraduate Medical Institute at Anglia Ruskin University (awarded to SP), and the Medical Research Council (awarded to BCJM, Grant number G0701870).

    Mobile robots and vehicles motion systems: a unifying framework

    Robots perform many different activities in order to accomplish their tasks. The robot motion capability is one of the most important for autonomous behavior in a typical indoor-outdoor mission (without it, other tasks cannot be done), since it drastically determines the global success of a robotic mission. In this thesis, we focus on the main methods for mobile robot and vehicle motion systems and build a common framework in which similar components can be interchanged or even used together in order to increase the performance of the whole system.

    A Bio-inspired Collision Avoidance Model Based on Spatial Information Derived from Motion Detectors Leads to Common Routes

    Bertrand O, Lindemann JP, Egelhaaf M. A Bio-inspired Collision Avoidance Model Based on Spatial Information Derived from Motion Detectors Leads to Common Routes. PLoS Computational Biology. 2015;11(11): e1004339. Avoiding collisions is one of the most basic needs of any mobile agent, both biological and technical, when searching around or aiming toward a goal. We propose a model of collision avoidance inspired by behavioral experiments on insects and by properties of optic flow on a spherical eye experienced during translation, and test the interaction of this model with goal-driven behavior. Insects, such as flies and bees, actively separate the rotational and translational optic flow components via behavior, i.e. by employing a saccadic strategy of flight and gaze control. Optic flow experienced during translation, i.e. during intersaccadic phases, contains information on the depth structure of the environment, but this information is entangled with that on self-motion. Here, we propose a simple model to extract the depth structure from translational optic flow by using local properties of a spherical eye. On this basis, a motion direction of the agent is computed that ensures collision avoidance. Flying insects are thought to measure optic flow by correlation-type elementary motion detectors. Their responses depend, in addition to velocity, on the texture and contrast of objects and, thus, do not measure the velocity of objects veridically. Therefore, we initially used geometrically determined optic flow as input to a collision avoidance algorithm to show that depth information inferred from optic flow is sufficient to account for collision avoidance under closed-loop conditions. Then, the collision avoidance algorithm was tested with bio-inspired correlation-type elementary motion detectors providing its input. Even then, the algorithm successfully led to collision avoidance and, in addition, replicated the characteristics of collision avoidance behavior of insects. Finally, the collision avoidance algorithm was combined with a goal direction and tested in cluttered environments. The simulated agent then showed goal-directed behavior reminiscent of components of the navigation behavior of insects.
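
    The abstract rests on a geometric property of translational optic flow: for pure translation, the flow magnitude in a viewing direction is proportional to the nearness (inverse distance) of the object seen in that direction, scaled by the sine of the angle between that direction and the motion direction. The toy sketch below, written under those assumptions (it is not the published model and omits the elementary motion detectors), recovers relative nearness from geometric flow sampled on a ring of viewing directions and steers away from the nearness-weighted average direction.

        import numpy as np

        def translational_flow(distances, angles, speed=1.0):
            """Geometric optic-flow magnitude for pure translation along angle 0.

            distances: distance to the nearest object along each viewing direction.
            angles: viewing directions (radians) relative to the motion direction.
            """
            return speed * np.abs(np.sin(angles)) / distances

        def avoidance_direction(flow, angles, speed=1.0):
            """Steer away from the nearness-weighted average of viewing directions."""
            eps = 1e-6
            nearness = flow / (speed * np.abs(np.sin(angles)) + eps)  # invert the flow model
            # Vector sum of viewing directions weighted by nearness (akin to an
            # average-nearness vector); the agent should move opposite to it.
            vec = np.array([np.sum(nearness * np.cos(angles)),
                            np.sum(nearness * np.sin(angles))])
            return np.arctan2(-vec[1], -vec[0])

        # Example: an obstacle on the front-left produces higher nearness there,
        # so the suggested heading points away from it (to the rear-right here).
        angles = np.linspace(-np.pi, np.pi, 72, endpoint=False)
        distances = np.full_like(angles, 5.0)
        distances[(angles > 0.2) & (angles < 1.0)] = 1.0              # close object front-left
        flow = translational_flow(distances, angles)
        print("suggested heading (rad):", avoidance_direction(flow, angles))

    Note that nearness cannot be recovered along the motion axis itself, since the sine term vanishes there; this blind spot is inherent to purely translational flow.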

    From visuomotor control to latent space planning for robot manipulation

    Deep visuomotor control is emerging as an active research area for robot manipulation. Recent advances in learning sensory and motor systems in an end-to-end manner have achieved remarkable performance across a range of complex tasks. Nevertheless, a few limitations prevent visuomotor control from being more widely adopted as the de facto choice when facing a manipulation task on a real robotic platform. First, imitation learning-based visuomotor control approaches tend to suffer from the inability to recover from out-of-distribution states caused by compounding errors. Second, the lack of versatility in task definition limits skill generalisability. Finally, the training data acquisition process and domain transfer are often impractical. In this thesis, individual solutions are proposed to address each of these issues. In the first part, we find policy uncertainty to be an effective indicator of potential failure cases in which the robot is stuck in out-of-distribution states. On this basis, we introduce a novel uncertainty-based approach to detect potential failure cases and a recovery strategy based on action-conditioned uncertainty predictions. Then, we propose to employ visual dynamics approximation in our model architecture to capture the motion of the robot arm instead of the static scene background, making it possible to learn versatile skill primitives. In the second part, taking inspiration from recent progress in latent space planning, we propose a gradient-based optimisation method operating within the latent space of a deep generative model for motion planning. Our approach bypasses the traditional computational challenges encountered by established planning algorithms, and has the capability to specify novel constraints easily and handle multiple constraints simultaneously. Moreover, the training data comes from simple random motor-babbling of kinematically feasible robot states. Our real-world experiments further illustrate that our latent space planning approach can handle both open- and closed-loop planning in challenging environments such as heavily cluttered or dynamic scenes. This leads to the first, to our knowledge, closed-loop motion planning algorithm that can incorporate novel custom constraints, and lays the foundation for more complex manipulation tasks.
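
    The core mechanism the abstract describes, gradient-based optimisation inside the latent space of a generative model, can be illustrated with a toy example. Below, a small randomly initialised decoder stands in for a trained deep generative model of robot states, and a latent code is optimised so that the decoded state reaches a goal while a penalty keeps one joint within bounds; the decoder, loss weights, and constraint are illustrative assumptions, not the thesis' architecture.

        import torch

        torch.manual_seed(0)

        # Stand-in for a trained generative model: maps a latent code to a robot
        # state (here, 7 joint angles). The thesis would use a learned deep decoder.
        decoder = torch.nn.Sequential(
            torch.nn.Linear(4, 64), torch.nn.Tanh(), torch.nn.Linear(64, 7)
        )

        goal_state = torch.tensor([0.3, -0.5, 0.8, 0.0, 0.4, -0.2, 0.1])

        def constraint_penalty(state):
            # Illustrative constraint: keep joint 2 within [-0.5, 0.5].
            return torch.relu(state[2].abs() - 0.5) ** 2

        z = torch.zeros(4, requires_grad=True)   # latent code being optimised
        opt = torch.optim.Adam([z], lr=0.05)

        for step in range(300):
            opt.zero_grad()
            state = decoder(z)
            loss = torch.sum((state - goal_state) ** 2) + 10.0 * constraint_penalty(state)
            loss.backward()                      # gradients flow through the decoder
            opt.step()

        print("final loss:", float(loss))
        print("decoded state:", decoder(z).detach())

    Because gradients flow through the decoder, additional differentiable constraints can simply be added to the loss and handled simultaneously, which is the property the abstract highlights; a full planner would optimise a latent trajectory rather than a single state.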