MOMA: Visual Mobile Marker Odometry
In this paper, we present a cooperative odometry scheme based on the
detection of mobile markers in line with the idea of cooperative positioning
for multiple robots [1]. To this end, we introduce a simple optimization scheme
that realizes visual mobile marker odometry via accurate fixed marker-based
camera positioning and analyse the characteristics of errors inherent to the
method compared to classical fixed marker-based navigation and visual odometry.
In addition, we provide a specific UAV-UGV configuration that allows for
continuous movement of the UAV without intermediate stops, as well as a minimal
caterpillar-like configuration that works with a single UGV. Finally, we
present a real-world implementation and evaluation of the proposed UAV-UGV
configuration.
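As a rough illustration of the geometry behind mobile-marker odometry, the following Python sketch chains 4x4 homogeneous transforms: the camera (e.g. on the UAV) is localised from a marker with a known world pose, and a newly placed mobile marker then receives a world pose that can serve as the reference for the next step. The function names and matrix conventions are illustrative assumptions, not the paper's actual optimization scheme.

    import numpy as np

    def invert(T):
        """Invert a 4x4 homogeneous rigid-body transform."""
        R, t = T[:3, :3], T[:3, 3]
        Ti = np.eye(4)
        Ti[:3, :3] = R.T
        Ti[:3, 3] = -R.T @ t
        return Ti

    def camera_pose_from_marker(T_world_marker, T_camera_marker):
        """World pose of the camera, given a marker's known world pose and
        the marker pose observed in the camera frame (fiducial detection)."""
        return T_world_marker @ invert(T_camera_marker)

    def marker_pose_from_camera(T_world_camera, T_camera_newmarker):
        """World pose of a newly observed mobile marker; in a
        caterpillar-like scheme this becomes the next fixed reference."""
        return T_world_camera @ T_camera_newmarker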
Application of Visual Servo Control in Autonomous Mobile Rescue Robots
Mobile robots that integrate visual servo control for facilitating autonomous grasping and manipulation are the focus of this paper. In view of their mobility, they have wider application than traditional fixed-base robots with visual servoing. Visual servoing is widely used in mobile robot navigation. However, there are few reports on applying it to mobile manipulation. In this paper, challenges and limitations of applying visual servoing in mobile manipulation are discussed. Next, two classical approaches (image-based visual servoing (IBVS) and position-based visual servoing (PBVS)) are introduced along with their advantages and disadvantages. Simulations in Matlab are carried out using the two methods, and their advantages and drawbacks are illustrated and discussed. On this basis, a system for mobile manipulation is proposed, comprising an IBVS with an eye-in-hand camera configuration. Simulations and experiments are carried out with this robot configuration in a search and rescue scenario, which show good performance.
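The classical IBVS law mentioned above steers the camera with the twist v = -lambda * L^+ * e, where e is the image-feature error and L the interaction matrix of the observed points. The Python sketch below is a textbook rendering for normalised point features, not the paper's exact controller; the gain and the assumption of known point depths Z are illustrative.

    import numpy as np

    LAMBDA = 0.5  # illustrative control gain

    def interaction_matrix(x, y, Z):
        """Interaction matrix of a normalised image point (x, y) at depth Z."""
        return np.array([
            [-1 / Z, 0, x / Z, x * y, -(1 + x * x), y],
            [0, -1 / Z, y / Z, 1 + y * y, -x * y, -x],
        ])

    def ibvs_velocity(points, desired, depths):
        """Camera twist (vx, vy, vz, wx, wy, wz) from v = -lambda * L^+ * e."""
        L = np.vstack([interaction_matrix(x, y, Z)
                       for (x, y), Z in zip(points, depths)])
        e = (np.asarray(points) - np.asarray(desired)).ravel()
        return -LAMBDA * np.linalg.pinv(L) @ e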
MGRO Recognition Algorithm-Based Artificial Potential Field for Mobile Robot Navigation
This paper describes a novel recognition algorithm which includes a mean filter, a Gaussian filter, a Retinex enhancement method, and the Otsu threshold segmentation method (MGRO) for the navigation of mobile robots with visual sensors. The approach includes obstacle visual recognition and navigation path planning. In the first part, a three-stage method for obstacle visual recognition is constructed. Stage 1 combines mean filtering and Gaussian filtering to remove random noise and Gaussian noise in the environmental image. Stage 2 increases image contrast by using the Retinex enhancement method. Stage 3 uses the Otsu threshold segmentation method to achieve obstacle feature extraction. A navigation method based on the auxiliary visual information is constructed in the second part. The method is based on the artificial potential field (APF) method and is able to avoid falling into a local minimum by changing the repulsion field function. Experimental results confirm that obstacle features can be extracted accurately and that the mobile robot can avoid obstacles safely and arrive at target positions correctly.
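The three recognition stages map naturally onto standard OpenCV calls. The sketch below is a minimal, hypothetical rendering of the pipeline on a single frame; the kernel sizes, the Retinex surround scale, and the choice of single-scale Retinex are assumptions rather than values from the paper.

    import cv2
    import numpy as np

    def mgro_segment(bgr):
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
        # Stage 1: mean filter for random noise, Gaussian filter for Gaussian noise.
        f = cv2.blur(gray, (5, 5))
        f = cv2.GaussianBlur(f, (5, 5), 0)
        # Stage 2: single-scale Retinex (log image minus log of blurred illumination).
        f32 = f.astype(np.float32) + 1.0
        retinex = np.log(f32) - np.log(cv2.GaussianBlur(f32, (0, 0), 30))
        retinex = cv2.normalize(retinex, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        # Stage 3: Otsu thresholding to extract obstacle features.
        _, mask = cv2.threshold(retinex, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return mask

The resulting binary obstacle mask would then feed the modified artificial potential field planner described above.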
Near range path navigation using LGMD visual neural networks
In this paper, we propose a method for near-range path navigation for a mobile robot using a pair of biologically inspired visual neural networks – lobula giant movement detectors (LGMDs). In the proposed binocular-style visual system, each LGMD processes images covering a part of the wide field of view and extracts relevant visual cues as its output. The outputs from the two LGMDs are compared and translated into executable motor commands to control the wheels of the robot in real time. A stronger signal from the LGMD on one side pushes the robot away from that side step by step; therefore, the robot can navigate a visual environment naturally with the proposed vision system. Our experiments showed that this bio-inspired system worked well in different scenarios.
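A minimal sketch of the described steering rule, assuming a differential-drive robot and scalar LGMD excitations; the gain and base speed are illustrative, not the paper's values.

    def steer_from_lgmd(left_excitation, right_excitation, base_speed=0.2, k=0.5):
        """A stronger LGMD output on one side pushes the robot away from
        that side: a positive turn slows the right wheel, turning right."""
        turn = k * (left_excitation - right_excitation)
        return base_speed + turn, base_speed - turn  # (left wheel, right wheel)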
A mosaic of eyes
Autonomous navigation is a traditional research topic in intelligent robotics and vehicles, which requires a robot to perceive its environment through onboard sensors such as cameras or laser scanners, to enable it to drive to its goal. Most research to date has focused on the development of a large and smart brain to gain autonomous capability for robots. There are three fundamental questions to be answered by an autonomous mobile robot: 1) Where am I going? 2) Where am I? and 3) How do I get there? To answer these basic questions, a robot requires a massive spatial memory and considerable computational resources to accomplish perception, localization, path planning, and control. It is not yet possible to deliver the centralized intelligence required for our real-life applications, such as autonomous ground vehicles and wheelchairs in care centers. In fact, most autonomous robots try to mimic how humans navigate, interpreting images taken by cameras and then taking decisions accordingly. They may encounter the following difficulties
Navigation without localisation: reliable teach and repeat based on the convergence theorem
We present a novel concept for teach-and-repeat visual navigation. The
proposed concept is based on a mathematical model, which indicates that in
teach-and-repeat navigation scenarios, mobile robots do not need to perform
explicit localisation. Instead, a mobile robot which repeats a
previously taught path can simply 'replay' the learned velocities, while using
its camera information only to correct its heading relative to the intended
path. To support our claim, we establish a position error model of a robot,
which traverses a taught path by only correcting its heading. Then, we outline
a mathematical proof which shows that this position error does not diverge over
time. Based on the insights from the model, we present a simple monocular
teach-and-repeat navigation method. The method is computationally efficient, it
does not require camera calibration, and it can learn and autonomously traverse
arbitrarily-shaped paths. In a series of experiments, we demonstrate that the
method can reliably guide mobile robots in realistic indoor and outdoor
conditions, and can cope with imperfect odometry, landmark deficiency,
illumination variations and naturally-occurring environment changes.
Furthermore, we provide the navigation system and the datasets gathered at
http://www.github.com/gestom/stroll_bearnav. Comment: The paper will be presented at IROS 2018 in Madrid.
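One plausible realisation of the heading-correction step is to match features between the taught and the current image and steer by the modal horizontal displacement; the histogram voting, bin count, and gain in the Python sketch below are illustrative assumptions, not necessarily the paper's exact mechanism.

    import numpy as np

    def heading_correction(taught_x, current_x, bins=63, span=320.0, gain=0.01):
        """Angular-velocity correction added to the replayed velocities,
        from the modal horizontal shift of matched feature x-coordinates."""
        shifts = np.asarray(current_x) - np.asarray(taught_x)
        hist, edges = np.histogram(shifts, bins=bins, range=(-span, span))
        mode = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
        return -gain * mode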
This Far, No Further: Introducing Virtual Borders to Mobile Robots Using a Laser Pointer
We address the problem of controlling the workspace of a 3-DoF mobile robot.
In a human-robot shared space, robots should navigate in a human-acceptable way
according to the users' demands. For this purpose, we employ virtual borders,
i.e. non-physical borders, that allow a user to restrict the robot's
workspace. To this end, we propose an interaction method based on a laser
pointer to intuitively define virtual borders. This interaction method uses a
previously developed framework based on robot guidance to change the robot's
navigational behavior. Furthermore, we extend this framework to increase the
flexibility by considering different types of virtual borders, i.e. polygons
and curves separating an area. We evaluated our method with 15 non-expert users
concerning correctness, accuracy and teaching time. The experimental results
revealed a high accuracy and linear teaching time with respect to the border
length while correctly incorporating the borders into the robot's navigational
map. Finally, our user study showed that non-expert users can employ our
interaction method. Comment: Accepted at 2019 Third IEEE International Conference on Robotic
Computing (IRC), supplementary video: https://youtu.be/lKsGp8xtyI
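To illustrate how a virtual border could be incorporated into a robot's navigational map, the sketch below rasterises the border's edges into a costmap-style occupancy grid so the planner treats them like walls; the grid convention, resolution, and cost value are assumptions, not the paper's framework.

    import numpy as np

    def rasterise_border(grid, border, resolution=0.05, occupied=100):
        """Mark grid cells under the border's edges as occupied.
        `border` is a list of (x, y) points in metres; repeat the first
        vertex at the end to close a polygon, or leave it open for a
        curve that merely separates an area."""
        pts = [(int(round(x / resolution)), int(round(y / resolution)))
               for x, y in border]
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            n = max(abs(x1 - x0), abs(y1 - y0), 1)
            for i in range(n + 1):  # simple DDA-style line interpolation
                cx = x0 + round(i * (x1 - x0) / n)
                cy = y0 + round(i * (y1 - y0) / n)
                if 0 <= cy < grid.shape[0] and 0 <= cx < grid.shape[1]:
                    grid[cy, cx] = occupied
        return grid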