289 research outputs found

    Autonomous navigation of a wheeled mobile robot in farm settings

    This research addresses the autonomous navigation of an agricultural wheeled mobile robot in unstructured outdoor settings. The project has four distinct phases: (i) navigation and control of a wheeled mobile robot for point-to-point motion; (ii) navigation and control of a wheeled mobile robot following a given path (the path-following problem); (iii) navigation and control of a mobile robot keeping a constant proximity distance from given paths or plant rows (proximity-following); and (iv) navigation of the mobile robot in rut following in farm fields. A rut is a long, deep track formed by the repeated passage of wheeled vehicles over soft terrain such as mud, sand, and snow. To develop reliable navigation approaches for each part of the project, three main steps were carried out: a literature review, modeling and computer simulation of wheeled mobile robots, and experimental tests in outdoor settings. First, point-to-point motion planning of a mobile robot is studied; a fuzzy-logic-based (FLB) approach is proposed for real-time autonomous path planning in unstructured environments. Simulation and experimental evaluations show that the FLB approach is able to cope with different dynamic and unforeseen situations by tuning a safety margin. Comparison of FLB results with the vector field histogram (VFH) and preference-based fuzzy (PBF) approaches reveals that FLB produces shorter and smoother paths toward the goal in almost all of the test cases examined. Then, a novel human-inspired method (HIM) is introduced. HIM is inspired by human behavior in navigating from one point to a specified goal point: the robot is given a human-like ability to reason about situations so as to reach a predefined goal point while avoiding static, moving, and unforeseen obstacles. Comparison of HIM results with FLB suggests that HIM is more efficient and effective than FLB. Afterward, navigation strategies are built up for path-following, rut-following, and proximity-following control of a wheeled mobile robot in outdoor (farm) settings and on off-road terrain. The proposed system is composed of several modules: sensor data analysis, obstacle detection, obstacle avoidance, goal seeking, and path tracking. The capabilities of the proposed navigation strategies are evaluated in a variety of field experiments; the results show that the proposed approach detects and follows rows of bushes robustly. This capability is used for spraying plant rows in farm fields. Finally, obstacle detection and obstacle avoidance modules are developed within the navigation system. These modules enable the robot to detect, in real time and at a safe distance, holes or ground depressions (negative obstacles), which are inherent parts of farm settings, as well as above-ground obstacles (positive obstacles). Experimental tests were carried out on two mobile robots (PowerBot and Grizzly) outdoors and in real farm fields. Grizzly uses a 3D laser range-finder to detect objects and perceive the environment, and an RTK-DGPS unit for localization; PowerBot uses sonar sensors and a laser range-finder for obstacle detection. The experiments demonstrate the capability of the proposed technique to successfully detect and avoid different types of obstacles, both positive and negative, in a variety of scenarios.
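    As a hedged illustration of the flavor of reasoning a fuzzy-logic planner of this kind uses, the sketch below blends a goal-seeking rule with an obstacle-avoidance rule through fuzzy weights and a tunable safety margin. The membership shapes, rule base, and function names are assumptions for illustration, not the thesis' actual controller.

```python
import math

def clamp(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

def flb_steer(goal_bearing, obs_dist, obs_bearing, safety_margin=1.0):
    """Blend goal seeking and obstacle avoidance with fuzzy weights.

    Bearings are in radians (negative = left of the robot); obs_dist is
    the range to the nearest obstacle in metres. Returns a steering
    command in [-1, 1], negative meaning turn left.
    """
    # Fuzzy truth value for "obstacle is near", scaled by the margin.
    danger = clamp((2.0 * safety_margin - obs_dist) / (2.0 * safety_margin))
    # Rule 1: if the obstacle is near, turn away from its side.
    avoid = -math.copysign(1.0, obs_bearing)
    # Rule 2: otherwise, turn toward the goal.
    seek = clamp(goal_bearing / math.pi, -1.0, 1.0)
    # Weighted-average defuzzification of the two rules.
    return danger * avoid + (1.0 - danger) * seek

# Obstacle 1.5 m away, slightly right of centre; goal straight ahead.
print(flb_steer(goal_bearing=0.0, obs_dist=1.5, obs_bearing=0.2))  # -0.25
```

    Enlarging safety_margin makes the avoidance rule dominate earlier, which is one way a single tuning knob can trade path length against clearance.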

    Human-Robot Perception in Industrial Environments: A Survey

    Perception capability assumes significant importance for human–robot interaction. The forthcoming industrial environments will require a high level of automation to be flexible and adaptive enough to comply with increasingly fast-paced and low-cost market demands. Autonomous and collaborative robots able to adapt to varying and dynamic conditions of the environment, including the presence of human beings, will have an ever-greater role in this context. However, if the robot is not aware of the human position and intention, a shared workspace between robots and humans may decrease productivity and lead to human safety issues. This paper presents a survey on sensory equipment useful for human detection and action recognition in industrial environments. An overview of different sensors and perception techniques is presented. Various types of robotic systems commonly used in industry, such as fixed-base manipulators, collaborative robots, mobile robots, and mobile manipulators, are considered, analyzing the most useful sensors and methods to perceive and react to the presence of human operators in industrial cooperative and collaborative applications. The paper also introduces two proofs of concept, developed by the authors for future collaborative robotic applications that benefit from enhanced capabilities of human perception and interaction. The first concerns fixed-base collaborative robots and proposes a solution for human safety in tasks requiring human collision avoidance or moving-obstacle detection. The second proposes a collaborative behavior implementable on autonomous mobile robots, pursuing assigned tasks within an industrial space shared with human operators.

    Innovative Mobile Manipulator Solution for Modern Flexible Manufacturing Processes

    There is a paradigm shift in manufacturing needs that is causing a change from the mass-production approach to a mass-customization approach in which production volumes are smaller and more variable. Current processes are well adapted to the previous paradigm but lack the flexibility required by the new production needs. To address this problem, an innovative industrial mobile manipulator is presented. The robot is equipped with a variety of sensors that allow it to perceive its surroundings and perform complex tasks in dynamic environments. Following the current needs of industry, the robot is capable of autonomous navigation, safely avoiding obstacles. It is flexible enough to perform a wide variety of tasks, with changes between tasks made easy by skill-based programming and the ability to change tools autonomously. In addition, its safety systems allow it to share the workspace with human operators. This prototype has been developed as part of the THOMAS European project, and it has been tested and demonstrated in real-world manufacturing use cases. This research was funded by the EC research project "THOMAS—Mobile dual arm robotic workers with embedded cognition for hybrid and dynamically reconfigurable manufacturing systems" (Grant Agreement: 723616) (www.thomas-project.eu/).
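    To illustrate the skill-based programming idea mentioned above, here is a minimal, hypothetical sketch: tasks are plain data composed from a registry of named skills, so switching products only changes the recipe. All names are illustrative; the THOMAS project's actual skill framework is not shown here.

```python
# Hypothetical skill registry: each skill is a named, parameterized
# behavior; a task is just a list of (skill, argument) pairs.
SKILLS = {}

def skill(name):
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("move_to")
def move_to(pose):
    print(f"navigating to {pose}")

@skill("change_tool")
def change_tool(tool):
    print(f"docking tool '{tool}'")

@skill("pick")
def pick(part):
    print(f"picking {part}")

# Switching to a new product variant only means editing this recipe.
task = [("move_to", "station_A"), ("change_tool", "gripper_2"),
        ("pick", "housing")]
for name, arg in task:
    SKILLS[name](arg)
```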

    3D Perception Based Lifelong Navigation of Service Robots in Dynamic Environments

    Lifelong navigation of mobile robots is the ability to operate reliably over extended periods of time in dynamically changing environments. Historically, computational capacity and sensor capability have been the factors constraining the richness of the internal representation of the environment that a mobile robot could use for navigation tasks. With affordable contemporary sensing technology providing rich 3D information about the environment, and with increased computational power, we can increasingly make use of more semantic environmental information in navigation-related tasks. A navigation system has many subsystems, such as perception, localization, and path planning, that must operate in real time while competing for computational resources. The main thesis proposed in this work is that we can utilize 3D information from the environment in our systems to increase navigational robustness without making trade-offs in any of the real-time subsystems. To support these claims, this dissertation presents robust, real-world 3D-perception-based navigation systems in the domains of indoor doorway detection and traversal, sidewalk-level outdoor navigation in urban environments, and global localization in large-scale indoor warehouse environments. The discussion of these systems includes methods of 3D point-cloud-based object detection to find objects of semantic interest for the given navigation tasks, as well as the use of 3D information in the navigation systems for purposes such as localization and dynamic obstacle avoidance. Experimental results for each of these applications demonstrate the effectiveness of the techniques for robust long-term autonomous operation.
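    As one small, assumed example of using 3D information cheaply enough for real-time use, the following sketch separates ground from obstacle points in a point cloud by a simple height band; the dissertation's actual perception pipeline is more sophisticated, and all names and thresholds here are illustrative.

```python
# Illustrative height-band filter: split a 3D cloud into ground points
# and potential obstacle points in the robot frame (z up, metres).
import numpy as np

def split_ground_obstacles(cloud, ground_z=0.0, tol=0.05, max_z=2.0):
    """cloud: (N, 3) array of x, y, z points.
    Returns (ground_points, obstacle_points)."""
    z = cloud[:, 2]
    ground = cloud[np.abs(z - ground_z) <= tol]
    obstacles = cloud[(z > ground_z + tol) & (z <= max_z)]
    return ground, obstacles

cloud = np.random.uniform([-5, -5, 0], [5, 5, 2], size=(1000, 3))
ground, obstacles = split_ground_obstacles(cloud)
print(len(ground), "ground points,", len(obstacles), "obstacle points")
```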

    Design of an obstacle avoidance system for automated guided vehicles

    Most industrial Automated Guided Vehicles (AGVs) follow fixed guide paths embedded in the floor or bonded to the floor surface. Whilst reliable in their basic operation, these AGV systems fail if unexpected obstacles are placed in the vehicle path. This can be a particular problem in semi-automated factories where people and AGVs share the same environment. The performance of line-guided AGVs may therefore be enhanced with a capability to avoid unexpected obstructions in the guide path. The research described in this thesis addresses some fundamental problems associated with obstacle avoidance for automated vehicles. A new obstacle avoidance system has been designed which operates by detecting obstacles as they disturb a light pattern projected onto the floor ahead of the AGV. A CCD camera mounted under the front of the vehicle senses obstacles as they emerge into the projection area and reflect the light pattern. Projected light patterns have been used as an aid to static image analysis in the fields of Computer Aided Design and Engineering; this research extends these ideas to a real-time mobile application. A novel light coding system has been designed which simplifies the image analysis task and allows a low-cost embedded microcontroller to carry out the image processing, code recognition, and obstacle avoidance planning functions. An AGV simulation package has been developed as a design tool for obstacle avoidance algorithms. This enables potential strategies to be developed in a high-level language and tested via a graphical user interface. The algorithms designed using the simulation package were successfully translated to assembly language and implemented on the embedded system. An experimental automated vehicle has been designed and built as a test bed for the research, and the complete obstacle avoidance system was evaluated in the Flexible Manufacturing laboratory at the University of Huddersfield.
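    The core detection idea lends itself to a brief sketch: an obstacle entering the projected area changes what the camera sees, so differencing the live frame against a reference image of the undisturbed pattern flags it. The thresholds and the plain differencing scheme below are assumptions for illustration; the thesis uses a coded light pattern rather than simple frame differencing.

```python
# Hedged sketch: flag a disturbance of the projected floor pattern by
# comparing the live camera frame to a stored reference of the clean
# pattern. Threshold values are illustrative assumptions.
import numpy as np

def pattern_disturbed(frame, reference, diff_thresh=40, area_thresh=50):
    """frame, reference: 2-D uint8 grayscale images of the floor pattern.
    Returns True if enough pixels deviate from the clean pattern."""
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    changed = np.count_nonzero(diff > diff_thresh)
    return changed > area_thresh

ref = np.zeros((120, 160), dtype=np.uint8)
live = ref.copy()
live[40:60, 70:90] = 200            # an obstacle reflects the pattern
print(pattern_disturbed(live, ref))  # True
```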

    Designing Automated Guided Vehicle Using Image Sensor

    Automated guided vehicles (AGVs) are one of the greatest achievements in the field of mobile robotics. Without continuous guidance from a human, they navigate along a desired path, completing various tasks such as fork-lifting objects, towing, and product transportation inside a manufacturing firm. Their development could revolutionize fool-proof navigation and accurate maneuvering. Although most present AGVs work in a retrofitted workspace, since they require identification marks for tracing their guide path, work is ongoing to develop AGVs that are dynamic in their navigation and whose locomotion is not limited to a retrofitted workspace. The aim of this work was to develop such a natural-feature AGV, which takes visual input in the form of images and performs detailed object, obstacle, and landmark identification to decide its guide path. The AGV setup used a commercial electric car, the 'Reva i', as a chassis, fitted with a camera to take real-time input and process it using segmentation and image processing techniques to reach driving-control decisions. These controls were communicated to the vehicle through the parallel port of a computer to servo motors, which in turn controlled the motion of the vehicle. The work focused on dynamically controlling the vehicle by refining the driving mechanism (hardware); it could be further assisted by better segmentation and obstacle detection algorithms. All the retrofitting and code were developed so that they could be improved at any stage. The results could be enhanced if a better stereoscopic camera were used with a dedicated CPU with better graphics capability. Such a vision-based AGV could revolutionize the mobile robotics world, including systems where a human driver is required to make decisions based on visual conditions.
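    A minimal sketch of the segmentation-to-steering step described above might look as follows; the HSV color thresholds and the three-zone steering rule are assumptions for illustration, not the thesis' algorithm.

```python
# Illustrative pipeline: threshold the landmark color in HSV, take the
# centroid of the resulting mask, and steer toward it.
import cv2
import numpy as np

def steering_from_frame(bgr, lower=(0, 120, 120), upper=(30, 255, 255)):
    """Return 'left', 'right' or 'straight' from the landmark centroid."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return "straight"              # nothing detected, hold course
    cx = m["m10"] / m["m00"]           # centroid column of the blob
    third = bgr.shape[1] / 3.0
    if cx < third:
        return "left"
    if cx > 2 * third:
        return "right"
    return "straight"

frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[40:80, 110:150] = (0, 165, 255)   # orange patch on the right
print(steering_from_frame(frame))        # right
```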

    Robots learn to behave: improving human-robot collaboration in flexible manufacturing applications

    The abstract is in the attachment.

    Distance Estimation based on Color-Block: A Simple Big-O Analysis

    This paper explains the process of reading object detection results for objects of a certain color; in this case, the object is an orange tennis ball. We use a Pixy CMUcam5 connected to an Arduino Nano based on the ATmega328 microcontroller. Data from the Arduino Nano are then re-read through the USB port and displayed, to verify whether an orange object has been detected. Through this process, the number of detected object blocks is known exactly, including the X and Y coordinates of the object. Finally, the complexity of the algorithms used in the process of reading the detection results for the orange object is explained.
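    A hedged sketch of the host-side readout is shown below. It assumes the Arduino prints one comma-separated line per detected block ("sig,x,y,w,h"); that serial format, the port name, and the baud rate are assumptions, not taken from the paper. Note that scanning the n reported blocks is O(n) per frame, the kind of cost a Big-O analysis of the readout accounts for.

```python
# Assumed host-side readout over USB serial (requires the device to be
# attached). The "sig,x,y,w,h" line format is an illustrative assumption.
import serial  # pyserial

def read_blocks(port="/dev/ttyUSB0", baud=115200, max_lines=10):
    """Yield (signature, x, y, w, h) tuples parsed from the Arduino."""
    with serial.Serial(port, baud, timeout=1) as ser:
        for _ in range(max_lines):
            line = ser.readline().decode("ascii", errors="ignore").strip()
            parts = line.split(",")
            if len(parts) == 5:
                yield tuple(int(p) for p in parts)

# O(n) pass over the n blocks reported for one frame.
for sig, x, y, w, h in read_blocks():
    print(f"block sig={sig} at ({x}, {y}), size {w}x{h}")
```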

    2D position system for a mobile robot in unstructured environments

    Nowadays, several sensors and mechanisms are available to estimate a mobile robot's trajectory and location with respect to its surroundings. Absolute positioning mechanisms are usually the most accurate, but they are also the most expensive and require pre-installed equipment in the environment. Therefore, a system capable of measuring its own motion and location within the environment (relative positioning) has been a research goal since the beginning of autonomous vehicles. With increasing computational performance, computer vision has become faster, making it possible to incorporate it on a mobile robot. In feature-based visual odometry approaches, model estimation requires the absence of feature-association outliers for accurate motion estimation. Outlier rejection is a delicate process, as there is always a trade-off between the speed and the reliability of the system. This dissertation proposes an indoor 2D positioning system using visual odometry. The mobile robot has a camera pointed at the ceiling for image analysis; as requirements, the ceiling and the floor (where the robot moves) must be planar. In the literature, RANSAC is a widely used method for outlier rejection; however, it can be slow in critical circumstances. Therefore, a new algorithm is proposed that accelerates RANSAC while maintaining its reliability. The algorithm, called FMBF, consists of comparing image texture patterns between pictures and preserving the most similar ones. There are several types of comparison, with different computational costs and reliability; FMBF manages these comparisons to optimize the trade-off between speed and reliability.
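    For context, the sketch below shows the plain RANSAC baseline that FMBF accelerates: estimating a 2D rigid motion from matched ceiling features while rejecting outlier associations. The iteration count, threshold, and two-point minimal sampling are assumptions for the sketch; FMBF itself is not reproduced here.

```python
# Illustrative RANSAC for 2D rigid motion (rotation + translation)
# between two sets of matched feature points with outliers.
import numpy as np

def rigid_from_pairs(p, q):
    """Least-squares R, t mapping points p onto q (Kabsch in 2D)."""
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    H = (p - cp).T @ (q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def ransac_motion(p, q, iters=200, thresh=0.05, seed=0):
    rng = np.random.default_rng(seed)
    best = np.zeros(len(p), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(p), size=2, replace=False)  # minimal sample
        R, t = rigid_from_pairs(p[idx], q[idx])
        err = np.linalg.norm((p @ R.T + t) - q, axis=1)
        inliers = err < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return rigid_from_pairs(p[best], q[best])  # refit on all inliers

rng = np.random.default_rng(1)
p = rng.uniform(0, 1, (50, 2))
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
q = p @ R_true.T + np.array([0.2, -0.1])
q[:5] += rng.uniform(0.5, 1.0, (5, 2))   # inject outlier associations
R, t = ransac_motion(p, q)
print(np.round(t, 3))                     # close to [0.2, -0.1]
```

    The cost driver is the per-iteration scoring over all matches, which is why reducing the number of hypotheses that must be scored, as FMBF does by pre-filtering on texture similarity, pays off.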

    Modeling Robotic Systems with Activity Flow Graphs

    Autonomous robotic systems are becoming increasingly common in our society, with research efforts toward automated goods transportation, service robots, and autonomous cars. These complex systems have to solve many different problems in order to function robustly. Two especially important areas of interest are perception and high-level control. Intelligent systems have to perceive their surroundings in order to act autonomously. With an understanding of the environment, they can then make their own decisions based on high-level control policies defined by the developers. Robotic systems differ drastically in their sensory capabilities, their computational power, and their designated tasks. When developing algorithms, however, we need a common modeling framework that enables us to generalize and reuse existing solutions. A modular approach that is coherent across different platforms also allows faster prototyping of new systems. In this dissertation, we develop a modeling framework based on data flow that achieves this goal. We first extend the existing Synchronous Data Flow (SDF) model and combine it with reactive programming ideas and finite-state machines. Together, these existing frameworks enable us to model many aspects of complex robotic systems. We apply this model to a robot in a warehouse scenario to demonstrate the viability of the approach. Using three disjoint formalisms to model a robotic system has many downsides. In a first unification step, we merge SDF and reactive programming into Hybrid Flow Graphs (HFGs), where we explicitly model synchronous and asynchronous data flow. We then apply the HFG model to the perception system of an autonomous transportation robot. In a last step, we eliminate the need for separate finite-state machines by introducing the concept of activity into the data flow. We thereby unify the different aspects into a single, coherent framework which we call Activity Flow Graphs (AFGs). The flow of activity enables us to model high-level state directly in the data flow graph. The result is a single computation graph that can express both the perception and the high-level control aspects of any robotic system. We demonstrate this with multiple high-level robotic system models. Finally, we make use of the uniform AFG model to provide a single graphical user interface that allows a developer to rapidly prototype complete robotic systems. Since all aspects of a robot can be implemented using the same theoretical framework, there is no need to switch between different paradigms. The user interface is designed to give immediate feedback, which speeds up prototyping, testing, and evaluation, as well as debugging when working with real robots.
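    To make the data-flow idea concrete, here is a tiny, assumed sketch of synchronous data-flow actors in the spirit of the SDF model the dissertation starts from; the actual AFG framework, its activity concept, and its API are not reproduced here.

```python
# Minimal synchronous data-flow sketch: nodes consume one token per
# firing and emit the result to their successors.
from collections import deque

class Node:
    """One data-flow actor with an input queue and downstream edges."""
    def __init__(self, fn):
        self.fn = fn
        self.inbox = deque()
        self.successors = []

    def push(self, token):
        self.inbox.append(token)

    def fire(self):
        # Fire once if a token is available (consumption rate of 1).
        if not self.inbox:
            return False
        out = self.fn(self.inbox.popleft())
        for s in self.successors:
            s.push(out)
        return True

# A toy perception-to-decision pipeline: sensor -> smooth -> decide.
sensor = Node(lambda x: x)
smooth = Node(lambda x: round(x, 1))
decide = Node(lambda x: print("stop" if x < 0.5 else "go"))
sensor.successors = [smooth]
smooth.successors = [decide]

sensor.push(0.4242)                     # a new range measurement arrives
while any(n.fire() for n in (sensor, smooth, decide)):
    pass                                # run until quiescent; prints "stop"
```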