    Mobile Robots Navigation

    Mobile robot navigation comprises several interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, the construction of a spatial representation from the perceived sensory information; (iv) localization, the strategy for estimating the robot's position within the spatial map; (v) path planning, the strategy for finding a path, optimal or not, towards a goal location; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors around the world. Research cases are documented in 32 chapters organized into 7 categories, described next.
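    To make the interplay of the six activities concrete, the sketch below composes them into a single sense-plan-act loop. This is a minimal Python illustration, not material from the book; every function is a hypothetical stub standing in for a real implementation.

```python
# Illustrative sketch (not from the book) of how the six interrelated
# navigation activities compose into one sense-plan-act loop.
# Every name below is a hypothetical placeholder, not a real API.

def perceive(sensors):                 # (i) obtain and interpret sensor data
    return sensors()                   # here: just call a sensor function

def explore(obs):                      # (ii) pick the next direction to go
    return max(obs["frontiers"], key=lambda f: f["gain"])

def update_map(world_map, obs):        # (iii) grow the spatial representation
    world_map.append(obs["scan"])
    return world_map

def localize(world_map, obs):          # (iv) estimate pose within the map
    return obs["odometry"]             # stand-in for a real estimator

def plan_path(pose, goal):             # (v) find a path (optimal or not)
    return [pose, goal]                # straight-line placeholder plan

def execute(path):                     # (vi) derive and adapt motor actions
    print("driving:", path)

def navigation_step(sensors, world_map, goal):
    obs = perceive(sensors)
    frontier = explore(obs)
    world_map = update_map(world_map, obs)
    pose = localize(world_map, obs)
    path = plan_path(pose, goal if goal is not None else frontier["pos"])
    execute(path)
    return world_map

# Toy run with fake sensor data.
fake = lambda: {"scan": [1.0, 2.0], "odometry": (0, 0, 0),
                "frontiers": [{"gain": 1.0, "pos": (5, 0)}]}
navigation_step(fake, [], goal=(10, 10))
```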

    Low-Resolution Vision for Autonomous Mobile Robots

    The goal of this research is to develop algorithms that use low-resolution images to perceive and understand a typical indoor environment, thereby enabling a mobile robot to navigate such an environment autonomously. We present techniques for three problems: autonomous exploration, corridor classification, and minimalistic geometric representation of an indoor environment for navigation.

    First, we present a technique for mobile robot exploration in unknown indoor environments using only a single forward-facing camera. Rather than processing all the data, the method intermittently examines only small 32×24 downsampled grayscale images. We show that for the task of indoor exploration the visual information is highly redundant, allowing successful navigation even when using only a small fraction (0.02%) of the available data. The method keeps the robot centered in the corridor by estimating two state parameters: the orientation within the corridor and the distance to the end of the corridor. The orientation is determined by combining the results of five complementary measures, while the estimated distance to the end combines the results of three complementary measures. These measures, which are predominantly information-theoretic, are analyzed independently, and the combined system is tested in corridors of several unknown buildings exhibiting a wide variety of appearances, showing the sufficiency of low-resolution visual information for mobile robot exploration. Because the algorithm discards such a large percentage (99.98%) of the information, both spatially and temporally, processing occurs at an average of 1000 frames per second, or equivalently consumes only a small fraction of the CPU.

    Second, we present an algorithm that uses image entropy to detect and classify corridor junctions from low-resolution images. Because entropy can be used to perceive depth, it can detect an open corridor in a set of images recorded while rotating a robot 360 degrees at a junction. Our algorithm detects peaks in the continuously measured entropy values and uses the angular distance between the detected peaks to determine the type of junction recorded (middle, L-junction, T-junction, dead-end, or cross junction); a sketch of this idea appears below. We show that the same algorithm can detect open corridors in both monocular and omnidirectional images.

    Third, we propose a minimalistic corridor representation consisting of the orientation line (center) and the wall-floor boundaries (lateral limits). The representation is extracted from low-resolution images using a novel combination of information-theoretic measures and gradient cues. Our study investigates the impact of image resolution on the accuracy of extracting this geometry, showing that the centerline and wall-floor boundaries can be estimated with reasonable accuracy even in texture-poor environments and with low-resolution images. In a database of 7 unique corridor sequences, orientation measurements showed less than 2% additional error as the image resolution decreased by 99.9%.
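    Since image entropy drives both the exploration and the junction-classification stages, a compact sketch may help. The following Python uses NumPy/SciPy; the prominence, distance, and angle thresholds are illustrative assumptions, not the thesis's tuned values.

```python
import numpy as np
from scipy.signal import find_peaks

def image_entropy(gray):
    """Shannon entropy (bits) of an 8-bit grayscale image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def classify_junction(entropies, angles_deg):
    """Classify a junction from entropies of frames captured during a
    360-degree turn; peaks in entropy indicate open corridors.
    All thresholds here are illustrative, not the thesis's values."""
    entropies = np.asarray(entropies)
    peaks, _ = find_peaks(entropies, prominence=0.3, distance=10)
    if len(peaks) <= 1:
        return "dead-end"
    gaps = np.diff(np.sort(np.asarray(angles_deg)[peaks]))
    if len(peaks) == 2:
        # Two openings roughly 180 deg apart: middle of a straight
        # corridor; any other separation suggests an L-junction.
        return "middle" if abs(gaps[0] - 180) < 30 else "L-junction"
    return "T-junction" if len(peaks) == 3 else "cross junction"
```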

    Navigation for automatic guided vehicles using omnidirectional optical sensing

    Thesis (M. Tech. (Engineering: Electrical)) -- Central University of Technology, Free State, 2013.
    Automatic Guided Vehicles (AGVs) are used increasingly often in manufacturing environments. These AGVs are navigated in many different ways, utilising multiple types of sensors to detect the environment: distance, obstacles, and a set route. Different algorithms or methods then use this environmental information for navigation and control of the AGV. One aim of the research was to develop a platform, using vision, that could be easily reconfigured for alternative route applications. In this research, such environment-detecting sensors were replaced and/or minimised by a single omnidirectional webcam picture stream, using a purpose-built mirror and Perspex tube setup. The area of interest in each frame was extracted, saving computational resources and time. By utilising image processing, the vehicle was navigated along a predetermined route. Different edge detection and segmentation methods were investigated on this vision signal for route and sign navigation. Prewitt edge detection was eventually implemented, with Hough transforms used for border detection and Kalman filtering for minimising detected-border noise to stay on the navigated route, as sketched below. Reconfigurability was added to the route layout through coloured signs incorporated in the navigation process. The result was the operation of a number of AGVs, each on its own designated coloured-sign route. This route could be reconfigured by the operator with no programming alteration or intervention. The YCbCr colour space was used to detect specific control signs for alternative colour route navigation. The result was used to generate commands controlling the AGV, sent as serial commands over a laptop's Universal Serial Bus (USB) port to a PIC microcontroller interface board driving the motors by means of pulse width modulation (PWM). A complete MATLAB® software development platform was utilised, implementing written M-files, Simulink® models, masked function blocks, and .mat files for sourcing workspace variables and generating executable files. This continuous development system lends itself to speedy evaluation and implementation of image processing options on the AGV. All the work done in the thesis was validated by simulations using actual data and by physical experimentation.
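    The thesis pipeline was built in MATLAB; for readers who want to experiment, a rough Python/OpenCV analogue of the Prewitt-plus-Hough border-detection stage might look like the following. The kernels and thresholds are illustrative assumptions, and the Kalman smoothing of the detected borders is omitted.

```python
import cv2
import numpy as np

# Rough Python/OpenCV analogue of the thesis's MATLAB stage: Prewitt edge
# detection followed by a probabilistic Hough transform for route-border
# line segments. Threshold values are illustrative assumptions.

def detect_route_borders(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], np.float32)  # Prewitt x
    ky = kx.T                                                        # Prewitt y
    gx = cv2.filter2D(gray, cv2.CV_32F, kx)
    gy = cv2.filter2D(gray, cv2.CV_32F, ky)
    mag = cv2.magnitude(gx, gy)
    edges = ((mag > 60) * 255).astype(np.uint8)   # assumed edge threshold
    # Candidate border line segments from the edge map.
    return cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=50, minLineLength=40, maxLineGap=10)
```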

    People tracking and following with a smart wheelchair using an omnidirectional camera and an RGB-D camera

    The project implements a new service for the smart wheelchair Jiaolong that enables the wheelchair user and an accompanying person to hold a normal conversation while freely strolling around the environment, without requiring any explicit interaction with the wheelchair.

    Real Time UAV Altitude, Attitude and Motion Estimation from Hybrid Stereovision

    Knowledge of altitude, attitude and motion is essential for an Unmanned Aerial Vehicle during critical maneuvers such as landing and take-off. In this paper we present a hybrid stereoscopic rig composed of a fisheye and a perspective camera for vision-based navigation. In contrast to classical stereoscopic systems based on feature matching, we propose methods which avoid matching between hybrid views. A plane-sweeping approach is proposed for estimating altitude and detecting the ground plane. Rotation and translation are then estimated by decoupling: the fisheye camera contributes to evaluating attitude, while the perspective camera contributes to estimating the scale of the translation. The motion can be estimated robustly at the correct scale thanks to knowledge of the altitude. We propose a robust, real-time, accurate, exclusively vision-based approach with an embedded C++ implementation. Although this approach removes the need for any non-visual sensors, it can also be coupled with an Inertial Measurement Unit.
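    The plane-sweeping idea can be sketched for the simplified case of two calibrated perspective views (the paper's fisheye handling is omitted here): each candidate altitude induces a plane homography between the views, and the altitude whose warp best explains the second image wins. Variable names, the validity mask, and the photoconsistency score below are assumptions for illustration.

```python
import cv2
import numpy as np

# Minimal plane-sweep sketch for ground-plane altitude, assuming two
# calibrated perspective views. K1, K2: intrinsics; R, t: motion from
# view 1 to view 2; n: ground-plane normal in view-1 coordinates.

def sweep_altitude(img1, img2, K1, K2, R, t, n, altitudes):
    best_alt, best_score = None, np.inf
    for d in altitudes:
        # Homography induced by the plane n.X = d between the two views.
        H = K2 @ (R - np.outer(t, n) / d) @ np.linalg.inv(K1)
        warped = cv2.warpPerspective(img1, H, img2.shape[1::-1])
        mask = warped > 0                 # crude validity mask (assumption)
        if not mask.any():
            continue
        # Photoconsistency: mean absolute difference on the overlap.
        score = np.abs(warped[mask].astype(np.float32)
                       - img2[mask].astype(np.float32)).mean()
        if score < best_score:
            best_alt, best_score = d, score
    return best_alt
```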

    Teleoperated visual inspection and surveillance with unmanned ground and aerial vehicles

    This paper introduces our robotic system named UGAV (Unmanned Ground-Air Vehicle), consisting of two semi-autonomous robot platforms: an Unmanned Ground Vehicle (UGV) and an Unmanned Aerial Vehicle (UAV). The paper focuses on three topics of inspection with the combined UGV and UAV: (A) teleoperated control by means of cell or smart phones, with a new concept of automatic configuration of the smart phone based on an RKI-XML description of the vehicle's control capabilities, (B) the camera and vision system, with a focus on real-time feature extraction, e.g. for tracking of the UAV, and (C) the architecture and hardware of the UAV.

    Innovative Mobile Manipulator Solution for Modern Flexible Manufacturing Processes

    There is a paradigm shift in current manufacturing needs, causing a change from the mass-production-based approach to a mass-customization approach in which production volumes are smaller and more variable. Current processes are well adapted to the previous paradigm but lack the flexibility required by the new production needs. To address this problem, an innovative industrial mobile manipulator is presented. The robot is equipped with a variety of sensors that allow it to perceive its surroundings and perform complex tasks in dynamic environments. Following the current needs of industry, the robot is capable of autonomous navigation, safely avoiding obstacles. It is flexible enough to perform a wide variety of tasks, with changes between tasks made easy by skill-based programming and the ability to change tools autonomously. In addition, its safety systems allow it to share the workspace with human operators. This prototype has been developed as part of the THOMAS European project, and it has been tested and demonstrated in real-world manufacturing use cases. This research was funded by the EC research project "THOMAS—Mobile dual arm robotic workers with embedded cognition for hybrid and dynamically reconfigurable manufacturing systems" (Grant Agreement: 723616) (www.thomas-project.eu/).

    Vision-based Assistive Indoor Localization

    An indoor localization system is of significant importance to the visually impaired in their daily lives, helping them localize themselves and navigate an indoor environment. In this thesis, a vision-based indoor localization solution is proposed and studied, with algorithms and implementations that maximize the use of the visual information surrounding the user for optimal localization across multiple stages. The contributions of the work include the following: (1) A novel combination of an everyday smartphone with a low-cost lens (GoPano) provides an economic, portable, and robust indoor localization service for visually impaired people. (2) New omnidirectional features (omni-features), extracted from 360-degree field-of-view images, are proposed to represent visual landmarks of indoor positions, and are then used as online query keys when a user asks for localization services. (3) A scalable and lightweight computation and storage solution is implemented by transferring the large database storage and the computationally heavy querying procedure to the cloud. (4) Real-time query performance of 14 fps is achieved over a Wi-Fi connection by identifying and implementing both data and task parallelism using many-core NVIDIA GPUs. (5) Refined localization via 2D-to-3D and 3D-to-3D geometric matching, and automatic path planning for efficient environmental modeling, utilize architectural AutoCAD floor plans. This dissertation first describes the assistive indoor localization problem, its detailed connotations, and the overall methodology. Related work in indoor localization and automatic path planning for environmental modeling is then surveyed. After that, the framework of omnidirectional-vision-based indoor assistive localization is introduced, followed by multiple refined localization strategies such as the 2D-to-3D and 3D-to-3D geometric matching approaches (a sketch of a 2D-to-3D pose solve appears below). Finally, conclusions and a few promising future research directions are provided.
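    As an illustration of the 2D-to-3D geometric-matching stage: once 2D image features have been matched against 3D points of the environmental model, the camera pose can be recovered with a RANSAC PnP solve. The sketch below uses OpenCV; the function name, thresholds, and calling convention are assumptions, not the dissertation's code.

```python
import cv2
import numpy as np

# Hypothetical 2D-to-3D refinement step: recover camera pose from
# feature matches between an image and a 3D environmental model.

def refine_pose(pts3d, pts2d, K):
    """pts3d: Nx3 model points; pts2d: Nx2 matched pixels; K: intrinsics."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(pts3d, np.float32),
        np.asarray(pts2d, np.float32),
        K, distCoeffs=None,
        reprojectionError=3.0, confidence=0.99)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)     # rotation matrix from axis-angle
    cam_pos = -R.T @ tvec          # camera position in the model frame
    return cam_pos.ravel(), R
```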

    Enhanced vision-based localization and control for navigation of non-holonomic omnidirectional mobile robots in GPS-denied environments

    New Zealand's economy relies to a great extent on primary production, where technological advances can have a significant impact on productivity. Robotics and automation can play a key role in increasing productivity in the primary sector, leading to a boost in the national economy. This thesis investigates novel methodologies for the design, control, and navigation of a mobile robotic platform aimed at field service applications, specifically in agricultural environments such as orchards, to automate agricultural tasks. The design process of this platform as a non-holonomic omnidirectional mobile robot includes an innovative integrated application of CAD, CAM, CAE, and RP for development and manufacturing. The Robot Operating System (ROS) is employed for the design and development of the embedded software that enables control, sensing, and navigation of the platform. 3D modelling and simulation of the robotic system are performed by interfacing ROS with the Gazebo simulator, aiming at off-line programming, optimal control system design, and system performance analysis. Gazebo provides 3D simulation of the robotic system, sensors, and control interfaces; it also enables simulation of the world environment, allowing the simulated robot to operate in a modelled environment.

    The model-based controller for kinematic control of the non-holonomic omnidirectional platform is tested and validated through experimental results obtained from the simulated and the physical robot. The challenges of the kinematic model-based controller, including the mathematical and kinematic singularities, are discussed, and a solution enabling an optimal kinematic model-based controller is presented. The kinematic singularity associated with non-holonomic omnidirectional robots is resolved using a novel fuzzy-logic-based approach, successfully validated through simulation and experimental results.

    A reliable localization system is developed to enable navigation of the platform in GPS-denied environments such as orchards. To this end, stereo visual odometry (SVO) is adopted as the core of the non-GPS localization system. The challenges of SVO are introduced, with accumulative drift, in the form of rotational and translational drift, identified as the main challenge to overcome. Sensor fusion integrating an IMU with SVO is employed to reduce the rotational drift (a simple fusion sketch is given below). A novel machine learning approach, based on a neuro-fuzzy system and an RBF neural network, is proposed to reduce the translational drift: the system is formulated as a drift estimator for each image frame, and a correction is applied at that frame to avoid accumulation of the drift over time. Experimental results and analyses validate the effectiveness of the methodology in improving SVO accuracy. An enhanced SVO is thus obtained by combining the sensor fusion and machine learning methods to address both rotational and translational drift. Furthermore, to achieve a robust non-GPS localization system, the wheel odometry and the enhanced SVO are fused, increasing the accuracy and robustness of the overall system. Experimental results and analyses are presented to support the methodology.
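    As a flavour of the rotational-drift correction, one common fusion scheme is a complementary filter on heading: the SVO yaw, which drifts slowly, is nudged toward the IMU yaw each frame. This is only a generic sketch, not the thesis's actual fusion (and its neuro-fuzzy/RBF translational-drift estimator is more elaborate); the blending gain below is an assumed value.

```python
import math

# Generic complementary-filter sketch for damping SVO rotational drift
# with an IMU heading estimate. Not the thesis's method; alpha is an
# assumed gain, not a value from the work.

def fuse_yaw(yaw_svo, yaw_imu, alpha=0.98):
    """Nudge the slowly drifting SVO yaw toward the IMU yaw estimate."""
    # Work on the wrapped angular difference so +/-pi crossings are safe.
    err = math.atan2(math.sin(yaw_imu - yaw_svo),
                     math.cos(yaw_imu - yaw_svo))
    fused = yaw_svo + (1.0 - alpha) * err
    return math.atan2(math.sin(fused), math.cos(fused))
```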