8,338 research outputs found

    Mobile robot guidance

    Get PDF
    Assistive robotics is a rapidly growing field with many applications. In an assisted living setting, there may be instances in which patients experience compromised mobility and are therefore left either temporarily or permanently restricted to wheelchairs or beds. The use of assistive robotics in these settings could revolutionize treatment for immobile individuals by promoting effective patient-environment interaction and increasing the independence and overall morale of affected individuals.

    Currently, there are two primary classes of assistive robots: service robots and social robots. Service robots assist with tasks that individuals would normally complete themselves but cannot due to impairment or temporary restriction. Assistive social robots include companion robots, which stimulate mental activity and intellectually engage their users. Current service robots may have depth sensors and visual recognition software integrated into one self-contained unit. The depth sensors are used for obstacle avoidance, while the vision systems may serve many purposes, including obstacle avoidance, gesture recognition, and object recognition. Gestures may be interpreted by the unit as commands to move in the indicated direction.

    Assistive mobile robots have included devices such as laser pointers or vision systems to determine a user's object of interest and its location, and others have used video cameras for gesture recognition, as noted above. Approaches to mobile robot guidance built around these devices may be difficult to use for individuals with impaired manual dexterity, and an immobile individual would find it difficult to operate them at all.

    The objective of this research was to integrate a method that allows the user to command a robotic agent to traverse to an object of interest using eye gaze. This approach allowed the individual to command the robot by eyesight through a head-worn gaze tracking device. Once the object was recognized, the robot was given the coordinates retrieved from the gaze tracker. The unit then proceeded to the object of interest, using multiple sensors to avoid obstacles.

    In this research, the participant was asked to don a head-worn eye gaze tracker. The device gathered multiple points in the x, y, and z coordinate planes. MATLAB was used to assess the accuracy of the collected data and to compute the set of x, y, and z coordinates needed as input for the mobile robot. Analysis of the results showed that the eye gaze tracker could provide x and y coordinates suitable as inputs for the mobile robot to reach the object of interest. The z coordinate was found to be unreliable, as it would either fall short of or overshoot the object of interest.
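
    As a rough illustration of the final step described above, the sketch below reduces a stream of noisy gaze fixations to a single 2D navigation goal, keeping only x and y and discarding the unreliable z estimate. This is a minimal sketch under assumed conditions, not the study's MATLAB analysis; the sample format, outlier rule, and units are illustrative.

        import numpy as np

        def gaze_to_goal(samples, mad_thresh=3.0):
            """Reduce noisy (x, y, z) gaze fixations to a single 2D goal.

            samples: array-like of shape (N, 3) holding gaze points in the robot's
            frame (units assumed to be metres). The z column is discarded, mirroring
            the finding that the depth estimate was unreliable.
            """
            pts = np.asarray(samples, dtype=float)
            xy = pts[:, :2]

            # Reject outliers per axis using the median absolute deviation (MAD).
            med = np.median(xy, axis=0)
            mad = np.median(np.abs(xy - med), axis=0) + 1e-9
            keep = np.all(np.abs(xy - med) / mad < mad_thresh, axis=1)

            # The navigation goal is the mean of the remaining fixations.
            return xy[keep].mean(axis=0)

        # Example with synthetic fixations: tight in x and y, noisy in z.
        rng = np.random.default_rng(0)
        fixations = rng.normal([1.5, 0.3, 0.9], [0.02, 0.02, 0.4], size=(50, 3))
        goal_x, goal_y = gaze_to_goal(fixations)
        print(f"navigation goal: x={goal_x:.2f} m, y={goal_y:.2f} m")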

    A reconfigurable hybrid intelligent system for robot navigation

    Get PDF
    Soft computing has come of age to offer us a wide array of powerful and efficient algorithms that matured independently and influenced our approach to solving problems in robotics, search and optimisation. The steady progress of technology, however, has induced a flux of new real-world applications that demand more robust and adaptive computational paradigms tailored specifically to the problem domain. This gave rise to hybrid intelligent systems; to name a few of the successful ones, we have the integration of fuzzy logic, genetic algorithms and neural networks. As noted in the literature, these hybrids are significantly more powerful than the individual algorithms, and have therefore been the subject of research activity over the past decades. There are problems, however, that have not succumbed to traditional hybridisation approaches, pushing the limits of current intelligent systems design and calling into question their guarantees of optimality, real-time execution and self-calibration. This work presents an improved hybrid solution to the problem of integrated dynamic target pursuit and obstacle avoidance, comprising a cascade of fuzzy logic systems, a genetic algorithm, the A* search algorithm and a Voronoi diagram generation algorithm.
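
    As one concrete piece of such a cascade, the sketch below shows a plain A* search on a 4-connected occupancy grid, the kind of global planning stage the abstract names. The grid, unit step costs and Manhattan heuristic are illustrative assumptions, not the thesis's implementation.

        import heapq

        def astar(grid, start, goal):
            """A* search on a 4-connected occupancy grid (0 = free, 1 = obstacle)."""
            rows, cols = len(grid), len(grid[0])
            h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
            open_set = [(h(start), 0, start, None)]   # (f, g, node, parent)
            came_from, g_cost = {}, {start: 0}
            while open_set:
                _, g, node, parent = heapq.heappop(open_set)
                if node in came_from:                 # already expanded with a better cost
                    continue
                came_from[node] = parent
                if node == goal:                      # reconstruct the path back to start
                    path = []
                    while node is not None:
                        path.append(node)
                        node = came_from[node]
                    return path[::-1]
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nxt = (node[0] + dr, node[1] + dc)
                    if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                        ng = g + 1
                        if ng < g_cost.get(nxt, float("inf")):
                            g_cost[nxt] = ng
                            heapq.heappush(open_set, (ng + h(nxt), ng, nxt, node))
            return None  # no path found

        grid = [[0, 0, 0, 0],
                [1, 1, 0, 1],
                [0, 0, 0, 0]]
        print(astar(grid, (0, 0), (2, 0)))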

    Machine Vision-based Obstacle Avoidance for Mobile Robot

    Get PDF
    Obstacle avoidance is an essential ability for mobile robots, especially humanoid robots, to operate in their environment. It relies on recognising the colours of the barrier (obstacle) object and the field, and on performing avoiding movements when the robot detects an obstacle in its path. This research develops a detection system for barrier objects and the field using a colour range in HSV format, and extracts the edges of barrier objects with the findContours method at a threshold filter value. The filter results are then processed with the boundingRect method to obtain the coordinates of the detected object. In testing, the colour of the barrier object was detected with OpenCV 100% of the time. In the movement tests, which used the object's colour image and the robot's heading, the robot performed an edging motion past the red barrier object in 80% of trials when the contour area exceeded 12,500 pixels, and moved forward towards the barrier object in 70% of trials when the contour area was below 12,500 pixels.
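
    The pipeline described above maps closely onto standard OpenCV calls; the sketch below strings them together (HSV thresholding, findContours, boundingRect and the 12,500-pixel contour-area threshold). The HSV range used for the red barrier and the camera index are assumptions, and the printed messages stand in for robot-specific motion commands.

        import cv2

        AREA_THRESHOLD = 12500  # contour area (pixels) that triggers avoidance, per the abstract

        def detect_barrier(frame, hsv_low=(0, 120, 70), hsv_high=(10, 255, 255)):
            """Return (x, y, w, h, area) of the largest red barrier region, or None."""
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, hsv_low, hsv_high)   # threshold the assumed red range
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return None
            largest = max(contours, key=cv2.contourArea)
            x, y, w, h = cv2.boundingRect(largest)       # coordinates of the detected object
            return x, y, w, h, cv2.contourArea(largest)

        cap = cv2.VideoCapture(0)                        # camera index is an assumption
        ok, frame = cap.read()
        if ok:
            hit = detect_barrier(frame)
            if hit and hit[4] > AREA_THRESHOLD:
                print("barrier close: start edging manoeuvre")
            else:
                print("path clear or barrier far: keep moving forward")
        cap.release()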

    Development of a bio-inspired vision system for mobile micro-robots

    Get PDF
    In this paper, we present a new bio-inspired vision system for mobile micro-robots. The processing method takes inspiration from the vision of locusts in detecting fast-approaching objects. Research suggests that locusts use a wide-field visual neuron, called the lobula giant movement detector (LGMD), to respond to imminent collisions. We employed the locusts' vision mechanism for motion control of a mobile robot. The selected image processing method is implemented on a purpose-built extension module using a low-cost and fast ARM processor. The vision module is placed on top of a micro-robot to control its trajectory and to avoid obstacles. The results of several experiments demonstrate that the developed extension module and the bio-inspired vision system are feasible as a vision module for obstacle avoidance and motion control.
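
    The sketch below is a heavily simplified LGMD-style looming detector, loosely following published LGMD models rather than the module described in the paper: excitation comes from frame differences, inhibition from a blurred copy of the previous excitation, and the summed residual drives a sigmoid "membrane" output. All parameters are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def lgmd_step(prev_frame, frame, prev_excitation, threshold=0.7):
            """One step of a simplified LGMD-style looming detector.

            Excitation is the absolute frame difference; inhibition is a blurred,
            delayed copy of the previous excitation, so steady wide-field motion
            tends to cancel while rapidly expanding edges (approaching objects) do not.
            """
            excitation = np.abs(frame.astype(float) - prev_frame.astype(float)) / 255.0
            inhibition = uniform_filter(prev_excitation, size=5)
            s_layer = np.clip(excitation - 0.6 * inhibition, 0.0, None)

            # Membrane potential: normalised sum of the residual, passed through a sigmoid.
            k = s_layer.sum() / s_layer.size
            membrane = 1.0 / (1.0 + np.exp(-k * 50.0))
            return membrane > threshold, membrane, excitation

        # Example: feed two consecutive grayscale frames and raise a collision warning.
        rng = np.random.default_rng(1)
        prev = rng.integers(0, 255, (64, 64)).astype(np.uint8)
        exc = np.zeros((64, 64))
        frame = prev.copy()
        frame[16:48, 16:48] = 255    # crude stand-in for a rapidly expanding object
        spike, membrane, exc = lgmd_step(prev, frame, exc)
        print("collision warning" if spike else "clear", f"(membrane={membrane:.2f})")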

Bayesian Robot Programming

    Get PDF
    We propose a new method to program robots based on Bayesian inference and learning. The capacities of this programming method are demonstrated through a succession of increasingly complex experiments: starting from the learning of simple reactive behaviors, we present instances of behavior combination, sensor fusion, hierarchical behavior composition, situation recognition and temporal sequencing. This series of experiments comprises the steps in the incremental development of a complex robot program. The advantages and drawbacks of this approach are discussed along with these different experiments and summed up in the conclusion. These different robot programs may be seen as an illustration of probabilistic programming, applicable whenever one must deal with problems based on uncertain or incomplete knowledge. The scope of possible applications is obviously much broader than robotics.
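
    As a tiny illustration of one of the capacities listed above, the sketch below fuses two independent Gaussian range readings of the same obstacle by Bayesian inference (with a flat prior, the posterior is the precision-weighted combination of the two likelihoods). It is not the paper's formalism, just a minimal example of Bayesian sensor fusion; the sensor values and variances are assumed.

        import numpy as np

        def fuse_gaussians(mu_a, var_a, mu_b, var_b):
            """Posterior over an obstacle distance given two independent Gaussian readings.

            With a flat prior, the posterior is proportional to the product of the two
            likelihoods, which is again Gaussian with the precision-weighted mean below.
            """
            var_post = 1.0 / (1.0 / var_a + 1.0 / var_b)
            mu_post = var_post * (mu_a / var_a + mu_b / var_b)
            return mu_post, var_post

        # Example: a coarse sonar reading and a finer infrared reading of the same obstacle.
        mu, var = fuse_gaussians(mu_a=1.10, var_a=0.04, mu_b=0.95, var_b=0.01)
        print(f"fused distance: {mu:.3f} m  (std {np.sqrt(var):.3f} m)")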

    Overcoming barriers and increasing independence: service robots for elderly and disabled people

    Get PDF
    This paper discusses the potential for service robots to overcome barriers and increase the independence of elderly and disabled people. It includes a brief overview of existing uses of service robots by disabled and elderly people, and of advances in technology that will make new uses possible, and offers suggestions for some of these new applications. The paper also considers the design and other conditions to be met for user acceptance, discusses the complementarity of assistive service robots and personal assistance, and considers the types of applications and users for which service robots are and are not suitable.

    Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots

    Full text link
    Safety is paramount for mobile robotic platforms such as self-driving cars and unmanned aerial vehicles. This work is devoted to a task that is indispensable for safety yet was largely overlooked in the past: detecting obstacles with very thin structures, such as wires, cables and tree branches. This is a challenging problem, as thin objects can be problematic for active sensors such as lidar and sonar, and even for stereo cameras. In this work, we propose to use video sequences for thin obstacle detection. We represent obstacles with edges in the video frames and reconstruct them in 3D using efficient edge-based visual odometry techniques. We provide both a monocular camera solution and a stereo camera solution. The former incorporates Inertial Measurement Unit (IMU) data to resolve scale ambiguity, while the latter enjoys a novel, purely vision-based solution. Experiments demonstrate that the proposed methods are fast and able to detect thin obstacles robustly and accurately under various conditions. (Appeared at the IEEE CVPR 2017 Workshop on Embedded Vision.)
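
    The sketch below is not the paper's edge-based visual odometry; it only illustrates the underlying observation that thin obstacles survive as image edges even when they occupy very few pixels. It checks stereo depth (Z = f*B/d) at Canny edge pixels and flags those closer than a range limit; the disparity map, camera parameters and thresholds are all illustrative assumptions.

        import cv2
        import numpy as np

        def thin_obstacle_mask(gray, disparity, focal_px, baseline_m, max_range_m=3.0):
            """Flag Canny edge pixels whose stereo depth is within max_range_m.

            Thin structures (wires, cables, branches) tend to survive as image edges
            even when they cover only a few pixels, so depth is checked only at edges.
            disparity: dense disparity map in pixels (e.g. from cv2.StereoSGBM).
            """
            edges = cv2.Canny(gray, 50, 150)
            valid = (edges > 0) & (disparity > 0)
            depth = np.full(disparity.shape, np.inf, dtype=np.float32)
            depth[valid] = focal_px * baseline_m / disparity[valid]   # Z = f * B / d
            return depth < max_range_m

        # Synthetic usage: a one-pixel-wide vertical "wire" seen at 16 px of disparity.
        gray = np.zeros((120, 160), np.uint8)
        gray[:, 80] = 255
        disparity = np.zeros((120, 160), np.float32)
        disparity[:, 78:83] = 16.0                                    # near the wire only
        mask = thin_obstacle_mask(gray, disparity, focal_px=300.0, baseline_m=0.1)
        print("close thin-obstacle pixels:", int(mask.sum()))         # roughly height x edge columns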

    Neural Network Local Navigation of Mobile Robots in a Moving Obstacles Environment

    Get PDF
    IFAC Intelligent Components and Instruments for Control Applications, Budapest, Hungary, 1994.

    This paper presents a local navigation method based on generalized predictive control. A modified cost function to avoid moving and static obstacles is presented. An Extended Kalman Filter is proposed to predict the motions of the obstacles. A Neural Network implementation of this method is analysed, and simulation results are shown.

    Ministerio de Ciencia y Tecnología TAP93-0408
    Ministerio de Ciencia y Tecnología TAP93-058
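
    To make the obstacle-prediction step concrete, the sketch below runs a constant-velocity Kalman filter on noisy obstacle position measurements and extrapolates the state over a short horizon, as a predictive controller would need. The paper's EKF and motion model are not detailed in the abstract, so the linear model, noise levels and sampling period here are assumptions.

        import numpy as np

        dt = 0.1                                   # control period (s), an assumed value
        F = np.array([[1, 0, dt, 0],               # constant-velocity motion model:
                      [0, 1, 0, dt],               # state = [x, y, vx, vy]
                      [0, 0, 1,  0],
                      [0, 0, 0,  1]], dtype=float)
        H = np.array([[1, 0, 0, 0],                # only the obstacle position is measured
                      [0, 1, 0, 0]], dtype=float)
        Q = 0.01 * np.eye(4)                       # process noise covariance (assumed)
        R = 0.05 * np.eye(2)                       # measurement noise covariance (assumed)

        def kf_step(x, P, z):
            """One predict/update cycle; returns the filtered state and covariance."""
            x_pred = F @ x
            P_pred = F @ P @ F.T + Q
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)
            x_new = x_pred + K @ (z - H @ x_pred)
            P_new = (np.eye(4) - K @ H) @ P_pred
            return x_new, P_new

        def predict_ahead(x, steps):
            """Extrapolate the obstacle state over the controller's prediction horizon."""
            return [np.linalg.matrix_power(F, k) @ x for k in range(1, steps + 1)]

        # Feed a few noisy position measurements of an obstacle moving along +x.
        x, P = np.zeros(4), np.eye(4)
        for t in range(20):
            z = np.array([0.5 * t * dt, 1.0]) + np.random.normal(0, 0.05, 2)
            x, P = kf_step(x, P, z)
        print("predicted positions:", [p[:2].round(2) for p in predict_ahead(x, 5)])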