
    Exploring haptic interfacing with a mobile robot without visual feedback

    Search and rescue scenarios are often complicated by low- or no-visibility conditions. The lack of visual feedback hampers orientation and causes significant stress for human rescue workers. The Guardians project [1] pioneered a group of autonomous mobile robots assisting a human rescue worker operating within close range. Trials were held with firefighters of South Yorkshire Fire and Rescue. It became clear that the subjects were by no means prepared to give up their procedural routines and the sense of security these provide: they simply ignored instructions that contradicted their routines.

    Near range path navigation using LGMD visual neural networks

    In this paper, we propose a method for near range path navigation of a mobile robot using a pair of biologically inspired visual neural networks – lobula giant movement detectors (LGMDs). In the proposed binocular-style visual system, each LGMD processes images covering part of the wide field of view and extracts relevant visual cues as its output. The outputs of the two LGMDs are compared and translated into executable motor commands that control the wheels of the robot in real time. A stronger signal from the LGMD on one side pushes the robot away from that side step by step; the robot can therefore navigate a visual environment naturally with the proposed vision system. Our experiments showed that this bio-inspired system worked well in different scenarios.
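
    As a rough illustration of the comparison step, the following Python sketch compares two LGMD-style outputs and maps their difference to wheel speeds. The summed frame difference used as lgmd_excitation is a crude stand-in (an assumption, not the layered LGMD network of the paper), and all gains are illustrative.

        import numpy as np

        def lgmd_excitation(frame, prev_frame):
            # Toy stand-in for an LGMD output: total luminance change
            # between frames, which grows as an object looms on that side.
            return float(np.sum(np.abs(frame.astype(float) - prev_frame.astype(float))))

        def wheel_commands(left_out, right_out, base_speed=0.2, gain=0.5):
            # Stronger signal on one side pushes the robot away from that
            # side: a larger left output speeds up the left wheel and slows
            # the right one, turning the robot rightwards.
            total = left_out + right_out + 1e-9
            diff = gain * (left_out - right_out) / total
            return base_speed + diff, base_speed - diff

        # Usage: split each camera frame into left/right halves.
        prev = np.zeros((64, 128)); cur = np.zeros((64, 128))
        cur[:, :64] = 50  # synthetic looming stimulus in the left half
        left = lgmd_excitation(cur[:, :64], prev[:, :64])
        right = lgmd_excitation(cur[:, 64:], prev[:, 64:])
        print(wheel_commands(left, right))  # left wheel faster -> veer right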

    Navigation without localisation: reliable teach and repeat based on the convergence theorem

    We present a novel concept for teach-and-repeat visual navigation. The proposed concept is based on a mathematical model which indicates that in teach-and-repeat navigation scenarios, mobile robots do not need to perform explicit localisation. Instead, a mobile robot that repeats a previously taught path can simply `replay' the learned velocities, using its camera information only to correct its heading relative to the intended path. To support this claim, we establish a position error model of a robot that traverses a taught path by correcting only its heading. We then outline a mathematical proof showing that this position error does not diverge over time. Based on the insights from the model, we present a simple monocular teach-and-repeat navigation method. The method is computationally efficient, does not require camera calibration, and can learn and autonomously traverse arbitrarily shaped paths. In a series of experiments, we demonstrate that the method can reliably guide mobile robots in realistic indoor and outdoor conditions and can cope with imperfect odometry, landmark deficiency, illumination variations and naturally occurring environment changes. Furthermore, we provide the navigation system and the datasets gathered at http://www.github.com/gestom/stroll_bearnav.
    Comment: The paper will be presented at IROS 2018 in Madrid.
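
    A minimal sketch of the repeat phase, assuming (hypothetically) that teaching stored a list of (duration, forward velocity) segments and that a helper returns the horizontal pixel offset between features in the current and stored views; as in the abstract, the camera corrects heading only and no explicit localisation is performed.

        import time

        def repeat_path(taught_segments, heading_offset_px, drive, k=0.005):
            # taught_segments:   list of (duration_s, forward_velocity) pairs
            #                    recorded during teaching (assumed format)
            # heading_offset_px: callable giving the horizontal pixel offset
            #                    of matched features, current vs. stored view
            # drive:             callable sending (v, omega) to the robot
            for duration_s, v in taught_segments:
                t0 = time.time()
                while time.time() - t0 < duration_s:
                    offset = heading_offset_px()  # >0: taught view lies to the right
                    omega = k * offset            # turn toward the taught view
                    drive(v, omega)               # forward speed is simply replayed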

    Emerging robot swarm traffic

    We discuss traffic patterns generated by swarms of robots while commuting to and from a base station. The overall question is whether to organise the traffic explicitly or whether a certain regularity develops `naturally'. Human-driven motorised traffic is rigidly structured in two lanes. Army ants, however, develop a three-lane pattern in their traffic, while human pedestrians generate a main trail and secondary trails in either direction. Our robot swarm approach is bottom-up: designing the individual agents, we first investigate the mathematics of the cases that occur when applying the artificial potential field method to three 'perfect' robots. We show that the traffic lane pattern is not disturbed by the internal system of forces. Next, we define models of sensor designs to account for the practical fact that robots (and ants) have limited visibility, and we compare the sensor models in groups of three robots. In the final step we define layouts of a highway: an unbounded open space, a trail with surpassable edges, and a hard-bounded (walled) highway. Having defined these preliminaries, we run swarm simulations and look for emerging traffic patterns. Depending on the initial situation, a variety of lane patterns occurs; however, high traffic densities considerably delay the emergence of traffic lanes. Overall, we conclude that regularities do emerge naturally and can be turned into an advantage to obtain efficient robot traffic.
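
    The artificial potential field step mentioned above can be sketched as follows; the gains and cut-off distance d0 are illustrative assumptions, and the repulsive term follows the classic formulation rather than the paper's exact system of forces.

        import numpy as np

        def apf_velocity(pos, goal, neighbours, k_att=1.0, k_rep=0.5, d0=1.0):
            # Attractive pull toward the goal (e.g. the base station).
            v = k_att * (goal - pos)
            for q in neighbours:
                d = np.linalg.norm(pos - q)
                if 0.0 < d < d0:
                    # Repulsion is active only within range d0 and grows
                    # steeply as two robots approach each other.
                    v += k_rep * (1.0 / d - 1.0 / d0) * (pos - q) / d**3
            return v

        # Usage: a robot at the origin, goal at (5, 0), one close neighbour.
        print(apf_velocity(np.array([0.0, 0.0]), np.array([5.0, 0.0]),
                           [np.array([0.3, 0.1])]))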

    Bayesian Optimisation for Safe Navigation under Localisation Uncertainty

    In outdoor environments, mobile robots are required to navigate through terrain with varying characteristics, some of which might significantly affect the integrity of the platform. Ideally, the robot should be able to identify areas that are safe for navigation based on its own percepts about the environment while avoiding damage to itself. Bayesian optimisation (BO) has been successfully applied to the task of learning a model of terrain traversability while guiding the robot through more traversable areas. An issue, however, is that localisation uncertainty can end up guiding the robot to unsafe areas and distorting the model being learnt. In this paper, we address this problem and present a novel method that allows BO to consider localisation uncertainty by applying a Gaussian process model for uncertain inputs as a prior. We evaluate the proposed method in simulation and in experiments with a real robot navigating over rough terrain, and compare it against standard BO methods.
    Comment: To appear in the proceedings of the 18th International Symposium on Robotics Research (ISRR 2017).
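
    For orientation, one step of a plain-GP Bayesian optimisation loop might look like the sketch below (scikit-learn); note that it does not reproduce the paper's uncertain-input GP prior, and the cautious upper-confidence-bound selection rule and all parameters are assumptions.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def next_waypoint(X_visited, roughness, candidates, kappa=2.0):
            # Fit a GP to terrain roughness measured at visited locations.
            gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3)
            gp.fit(X_visited, roughness)
            mu, sigma = gp.predict(candidates, return_std=True)
            # Safety-first rule: minimise a pessimistic roughness estimate,
            # so predictive uncertainty counts against a candidate.
            return candidates[np.argmin(mu + kappa * sigma)]

        # Usage on synthetic 2-D waypoints (higher roughness = worse).
        X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
        y = np.array([0.1, 0.8, 0.2])
        print(next_waypoint(X, y, np.random.rand(50, 2) * 2.0))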

    Reactive direction control for a mobile robot: A locust-like control of escape direction emerges when a bilateral pair of model locust visual neurons are integrated

    Locusts possess a bilateral pair of uniquely identifiable visual neurons that respond vigorously to the image of an approaching object. These neurons are called the lobula giant movement detectors (LGMDs). The locust LGMDs have been extensively studied, and this has led to the development of an LGMD model for use as an artificial collision detector in robotic applications. To date, robots have been equipped with only a single, central artificial LGMD sensor, which triggers a non-directional stop or rotation when a potentially colliding object is detected. Clearly, for a robot to behave autonomously, it must react differently to stimuli approaching from different directions. In this study, we implement a bilateral pair of LGMD models in Khepera robots equipped with normal and panoramic cameras. We integrate the responses of these LGMD models using methodologies inspired by research on escape direction control in cockroaches. Using ‘randomised winner-take-all’ or ‘steering wheel’ algorithms for LGMD model integration, the Khepera robots could escape an approaching threat in real time and with a distribution of escape directions similar to that of real locusts. We also found that, by optimising these algorithms, we could use them to integrate the left and right DCMD responses of real jumping locusts offline and reproduce the actual escape directions that the locusts took in a particular trial. Our results significantly advance the development of an artificial collision detection and evasion system based on the locust LGMD by giving it reactive control over robot behaviour. The success of this approach may also indicate some important areas to pursue in future biological research.
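
    A toy Python rendering of the two integration rules named above; the noise level and the mapping from LGMD outputs to turn angles are assumptions, not the paper's fitted parameters.

        import random

        def randomised_winner_take_all(left_lgmd, right_lgmd, noise=0.2):
            # Perturb each side's output and let the stronger side win;
            # the robot escapes away from the winner. The noise spreads
            # the escape directions, loosely mimicking locust variability.
            l = left_lgmd * (1.0 + random.uniform(-noise, noise))
            r = right_lgmd * (1.0 + random.uniform(-noise, noise))
            return 'right' if l > r else 'left'

        def steering_wheel(left_lgmd, right_lgmd, max_turn_deg=90.0):
            # Graded alternative: turn away from the threat by an angle
            # proportional to the normalised left/right difference
            # (negative = clockwise, i.e. away from a stronger left signal).
            total = left_lgmd + right_lgmd + 1e-9
            return -max_turn_deg * (left_lgmd - right_lgmd) / total

        print(randomised_winner_take_all(0.9, 0.4))  # escapes right
        print(steering_wheel(0.9, 0.4))              # about -34.6 degrees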