
    Positional estimation techniques for an autonomous mobile robot

    Techniques for positional estimation of a mobile robot navigating in an indoor environment are described. A comprehensive review of the positional estimation techniques studied in the literature is first presented; the techniques are divided into four types, each discussed briefly. Two kinds of environments are considered for positional estimation: mountainous natural terrain and an urban, man-made environment with polyhedral buildings. In both cases, the robot is assumed to be equipped with a single visual camera that can be panned and tilted, and a 3-D description (world model) of the environment is given. Such a description could be obtained from a stereo pair of aerial images or from the architectural plans of the buildings. Techniques for positional estimation using the camera input and the world model are presented.

    Simultaneous localization and map-building using active vision

    An active approach to sensing can provide the focused measurement capability over a wide field of view that allows correctly formulated Simultaneous Localization and Map-Building (SLAM) to be implemented with vision, permitting repeatable long-term localization using only naturally occurring, automatically detected features. In this paper, we present the first example of a general system for autonomous localization using active vision, enabled here by a high-performance stereo head, addressing such issues as uncertainty-based measurement selection, automatic map maintenance, and goal-directed steering. We present varied real-time experiments in a complex environment.

    Mobile Robot Range Sensing through Visual Looming

    This article describes and evaluates visual looming as a monocular range sensing method for mobile robots. The looming algorithm is based on the relationship between the displacement of a camera relative to an object and the resulting change in the size of the object's image on the focal plane of the camera. We have carried out systematic experiments to evaluate the ranging accuracy of the looming algorithm using a Pioneer I mobile robot equipped with a color camera. We have also performed a noise sensitivity analysis for the looming algorithm, obtaining theoretical error bounds on the range estimates for given levels of odometric and visual noise, which were verified through experimental data. Our results suggest that looming can be used as a robust, inexpensive range sensor complementing sonar. Funding: Defense Advanced Research Projects Agency; Office of Naval Research; Navy Research Laboratory (00014-96-1-0772, 00014-95-1-0409).
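The size-change relationship the abstract describes can be sketched directly. For a fronto-parallel object of fixed physical size, image size is inversely proportional to range (w = k / r), so two size measurements taken before and after a known forward displacement determine the range. The function name and values below are illustrative, not taken from the article:

```python
def looming_range(size_before, size_after, displacement):
    """Estimate range to an object from apparent image-size growth.

    Image size is inversely proportional to range (w = k / r), so after
    advancing `displacement` toward the object (r_after = r_before - displacement):

        size_before / size_after = r_after / r_before

    which solves to the range at the first measurement.
    """
    if size_after <= size_before:
        raise ValueError("object image must grow as the camera approaches")
    return displacement * size_after / (size_after - size_before)
```

For example, if an object's image width grows from 20 to 25 pixels while the robot advances 0.5 m, the object was 0.5 × 25 / 5 = 2.5 m away at the first measurement.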

    Intelligent Navigation Service Robot Working in a Flexible and Dynamic Environment

    Numerous sensor fusion techniques have been reported in the literature for a variety of robotics applications, involving different sensors in different configurations. However, the food-delivery setting has been largely overlooked. In restaurants and food delivery spots, the robot must transfer food reliably to the correct table without running into other robots or diners or toppling over. In this project, an algorithm module has been proposed and implemented to improve the robot's driving methodology and maximize functionality, accuracy, and the food transfer experience. The emphasis has been on movement accuracy in reaching the targeted table from start to end. Four major elements were designed to complete this project: mechanical, electrical, electronics, and programming. Since the floor condition greatly affects the wheels and the selection of turning angles, movement accuracy was improved during the project. The robot was able to receive a command from the restaurant and deliver the food to the customers' tables, avoiding any obstacles on the way. The robot is equipped with two trays to carry the food and with pre-configured voice messages to welcome and greet the customer. Performance was evaluated using routine robot-movement tests. The designed wheeled service robot required a high-performance real-time processor; with an adequate processor, the experimental results showed a highly effective robot navigation methodology. The study concluded that a minimum number of sensors suffices if they are placed appropriately and used effectively on the robot's body, since navigation can be performed with a small set of sensors. An Arduino Due was used to provide a real-time operating system, and it handled data processing and transfer successfully throughout regular operation. Furthermore, an easy-to-use application was developed to improve the user experience, so that the operator can interact directly with the robot via a dedicated settings screen; this makes it possible to modify advanced settings such as voice commands or the IP address without returning to the code.

    Image-guided Landmark-based Localization and Mapping with LiDAR

    Mobile robots must be able to determine their position to operate effectively in diverse environments. The presented work proposes a system that integrates LiDAR and camera sensors and utilizes the YOLO object detection model to identify objects in the robot's surroundings. The system, developed in ROS, groups detected objects into triangles and uses them as landmarks to determine the robot's position. A triangulation algorithm is employed to obtain the robot's position; it generates a set of nonlinear equations that are solved using the Levenberg-Marquardt algorithm. The presented work comprehensively discusses the proposed system's study, design, and implementation. The investigation begins with an overview of current SLAM techniques. Next, the system design considers the requirements for localization and mapping tasks, together with an analysis comparing the proposed approach to contemporary SLAM methods. Finally, we evaluate the system's effectiveness and accuracy through experimentation in the Gazebo simulation environment, which allows for controlling the various disturbances that a real scenario can introduce.
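The core numerical step, solving nonlinear range equations to known landmarks with Levenberg-Marquardt, can be sketched as follows. The landmark coordinates, ranges, and initial guess are hypothetical, and SciPy's generic LM solver stands in for whatever implementation the thesis uses:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical landmark map (world coordinates, metres).
landmarks = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])

def residuals(pose, landmarks, ranges):
    # Mismatch between predicted and measured landmark distances.
    return np.linalg.norm(landmarks - pose, axis=1) - ranges

# Simulate range measurements from a known true pose, then recover it.
true_pose = np.array([1.0, 1.0])
ranges = np.linalg.norm(landmarks - true_pose, axis=1)

sol = least_squares(residuals, x0=[2.0, 2.0], args=(landmarks, ranges), method="lm")
print(sol.x)  # ≈ [1.0, 1.0]
```

Three (or more) non-collinear landmarks give an overdetermined system in the two pose coordinates, which is exactly the shape of problem LM handles well.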

    Location of a Mobile Robot using Odometry in the DMF

    The University of Applied Sciences Technikum Wien has a digital miniature factory in which research, development, and implementation of the various Industry 4.0 technologies can be carried out. This miniature factory has several working stations and a mobile robot that moves between them to deliver the corresponding carabiner parts ordered by the customer. The main subject of this bachelor's thesis is the development of a procedure by which the localization of the mobile robot can be obtained by calculating its coordinates and heading angle, with the aim of integrating it into the university's miniaturized factory. The method used to determine the position and orientation of the robot is based on the wheel odometry of a differential-drive robot. The robot is controlled through the Arduino serial port or through ThingWorx, sending the commands needed to make it move. The pose (position in coordinates and orientation) of the robot is sent to the central server using IoT communication, where it can be visualized and used for other projects. Departamento de Tecnología Electrónica; Grado en Ingeniería en Electrónica Industrial y Automática.
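The differential-drive wheel odometry the abstract relies on reduces to a standard dead-reckoning update: each wheel's incremental travel gives the forward motion of the chassis midpoint and the change in heading. This is a minimal sketch of that textbook update, not the thesis's actual code; names and the wheel-base value are illustrative:

```python
import math

def odometry_update(x, y, theta, d_left, d_right, wheel_base):
    """Advance a differential-drive pose by incremental wheel travel.

    d_left / d_right: distance covered by each wheel since the last
    update (encoder ticks times metres per tick); wheel_base is the
    distance between the two wheels.
    """
    d_center = (d_left + d_right) / 2.0        # forward travel of the midpoint
    d_theta = (d_right - d_left) / wheel_base  # change in heading (radians)
    # Midpoint integration: apply half the rotation before translating.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta + math.pi) % (2.0 * math.pi) - math.pi  # wrap to (-pi, pi]
    return x, y, theta
```

Equal wheel travel moves the robot straight ahead; opposite travel spins it in place, which is how the coordinates and angle mentioned above accumulate between workstations.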

    A tesselated probabilistic representation for spatial robot perception and navigation

    The ability to recover robust spatial descriptions from sensory information, and to efficiently utilize these descriptions in appropriate planning and problem-solving activities, are crucial requirements for the development of more powerful robotic systems. Traditional approaches to sensor interpretation, with their emphasis on geometric models, are of limited use for autonomous mobile robots operating in and exploring unknown and unstructured environments. Here, researchers present a new approach to robot perception that addresses such scenarios using a probabilistic tesselated representation of spatial information called the Occupancy Grid. The Occupancy Grid is a multi-dimensional random field that maintains stochastic estimates of the occupancy state of each cell in the grid. The cell estimates are obtained by interpreting incoming range readings using probabilistic models that capture the uncertainty in the spatial information provided by the sensor. A Bayesian estimation procedure allows the incremental updating of the map using readings taken from several sensors over multiple points of view. An overview of the Occupancy Grid framework is given, and its application to a number of problems in mobile robot mapping and navigation is illustrated. It is argued that a number of robotic problem-solving activities can be performed directly on the Occupancy Grid representation. Some parallels are drawn between operations on Occupancy Grids and related image processing operations.
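The incremental Bayesian update the abstract describes is commonly implemented in log-odds form, where fusing independent readings reduces to adding a per-reading increment to each cell. The class below is an illustrative sketch of that scheme (the increment values are arbitrary), not the paper's original formulation:

```python
import numpy as np

class OccupancyGrid:
    """Minimal log-odds occupancy grid.

    Each cell stores the log odds of being occupied. Bayesian fusion of
    independent range readings reduces to adding a per-reading log-odds
    increment; the posterior probability is recovered with a sigmoid.
    """
    def __init__(self, width, height, l_occ=0.85, l_free=-0.4):
        self.logodds = np.zeros((height, width))  # prior p = 0.5 -> log odds 0
        self.l_occ, self.l_free = l_occ, l_free

    def update(self, cell, occupied):
        # Incorporate one interpreted range reading for one cell.
        self.logodds[cell] += self.l_occ if occupied else self.l_free

    def probability(self):
        # Posterior occupancy probability for every cell.
        return 1.0 / (1.0 + np.exp(-self.logodds))
```

Repeated "occupied" readings push a cell's probability toward 1 and "free" readings toward 0, regardless of which sensor or viewpoint produced them, which is what makes the multi-sensor, multi-view fusion in the abstract incremental.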