
    Neural network controller against environment: A coevolutive approach to generalize robot navigation behavior

    In this paper, a new coevolutionary method, called Uniform Coevolution, is introduced to learn the weights of a neural-network controller in autonomous robots. An evolutionary strategy is used to learn high-performance reactive behavior for navigation and collision avoidance. Introducing coevolution on top of the evolutionary strategy allows the environment itself to evolve, so that a general behavior is learned that solves the problem across different environments. Using a traditional evolutionary strategy without coevolution, the learning process obtains a specialized behavior instead. All the behaviors obtained, with and without coevolution, have been tested in a set of environments, and the generalization capability of each learned behavior is shown. A simulator based on the Khepera mini-robot was used to learn each behavior. The results show that Uniform Coevolution obtains better-generalized solutions to example-based problems.
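    The coevolutionary scheme the abstract describes can be sketched as two populations evolving against each other: controllers are scored across the whole environment set (favoring generalists), while the environments that prove hardest survive and mutate. The fitness function below is a toy stand-in, not the paper's Khepera simulator, and all names are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def navigate_fitness(weights, environment):
        # Stand-in for a simulator rollout (e.g. a Khepera navigation run
        # scored by distance covered minus collision penalties).
        return -float(np.sum((weights - environment) ** 2))

    def coevolve(n_weights=10, pop=20, n_envs=8, generations=30, sigma=0.1):
        controllers = rng.normal(size=(pop, n_weights))
        environments = rng.normal(size=(n_envs, n_weights))
        for _ in range(generations):
            # Score every controller against the whole environment set, so
            # selection favours general behaviour rather than specialists.
            scores = np.array([[navigate_fitness(c, e) for e in environments]
                               for c in controllers]).mean(axis=1)
            elite = controllers[np.argsort(scores)[-pop // 2:]]
            controllers = np.vstack(
                [elite, elite + rng.normal(scale=sigma, size=elite.shape)])
            # Coevolution step: keep and mutate the environments that were
            # hardest for the elite, gradually raising task difficulty.
            hardness = -np.array([[navigate_fitness(c, e) for c in elite]
                                  for e in environments]).mean(axis=1)
            hard = environments[np.argsort(hardness)[-n_envs // 2:]]
            environments = np.vstack(
                [hard, hard + rng.normal(scale=sigma, size=hard.shape)])
        final = np.array([[navigate_fitness(c, e) for e in environments]
                          for c in controllers]).mean(axis=1)
        return controllers[np.argmax(final)]
    ```

    Dropping the environment-mutation step reduces this to a plain evolutionary strategy, which is the "specialized behavior" baseline the abstract contrasts against.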

    Air vehicle simulator: an application for a cable array robot

    The development of autonomous air vehicles can be an expensive research pursuit. To alleviate some of the financial burden of this process, we have constructed a system consisting of four winches, each attached to a central pod (the simulated air vehicle) via a cable: a cable-array robot. The system is capable of precisely controlling the three-dimensional position of the pod, allowing effective testing of sensing and control strategies before experimentation on a free-flying vehicle. In this paper, we present a brief overview of the system and provide a practical control strategy for such a system. ©2005 IEEE
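    The geometric core of positioning such a pod is the inverse kinematics: each winch must pay out a cable length equal to the distance from its anchor to the commanded pod position. A minimal sketch, with hypothetical anchor coordinates and ignoring cable sag and tension distribution:

    ```python
    import numpy as np

    # Hypothetical winch anchor positions (corners of a 10 m ceiling, metres).
    ANCHORS = np.array([[0, 0, 10], [10, 0, 10],
                        [0, 10, 10], [10, 10, 10]], dtype=float)

    def cable_lengths(pod_xyz):
        """Inverse kinematics: the cable length each winch must pay out
        so the pod sits at pod_xyz (sag and tension are ignored here)."""
        pod = np.asarray(pod_xyz, dtype=float)
        return np.linalg.norm(ANCHORS - pod, axis=1)

    # Commanding a trajectory is then just streaming length set-points:
    setpoints = [cable_lengths(p)
                 for p in np.linspace([5, 5, 2], [5, 5, 6], num=3)]
    ```

    With the pod directly below the workspace centre, symmetry gives four equal lengths, which is a quick sanity check on any anchor calibration.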

    On Advanced Mobility Concepts for Intelligent Planetary Surface Exploration

    Surface exploration by wheeled rovers on Earth's Moon (the two Lunokhods) and on Mars (NASA's Sojourner and the two MERs) has been pursued very successfully for many years, particularly with regard to long-duration operations. Despite this success, however, the explored surface area has been very small: the total driving distance was about 8 km (Spirit) and 21 km (Opportunity) over six years of operation. ESA will send its ExoMars rover to Mars in 2018, and NASA its MSL rover probably this year. All of these rovers, however, lack sufficient on-board intelligence to cover longer distances, drive much faster, and decide autonomously on path planning for the best trajectory to follow. To increase the scientific output of a rover mission, it seems necessary to explore much larger surface areas reliably in much less time. This is the main driver for a robotics institute to combine mechatronic functionalities into an intelligent mobile wheeled rover with four or six wheels, with kinematics and locomotion suspension tailored to the terrain in which the rover is to operate. DLR's Robotics and Mechatronics Center has a long tradition of developing advanced components in the fields of lightweight motion actuation; intelligent and soft manipulation, skilled hands, and tools; perception and cognition; and increasing the autonomy of all kinds of mechatronic systems. The whole design is supported by, and based upon, detailed modeling, optimization, and simulation. We have developed efficient software tools to simulate rover driveability on various terrain types, such as soft sandy and hard rocky terrain as well as inclined planes, where wheel and grouser geometry plays a dominant role. Moreover, rover optimization is performed to support the best engineering intuitions: it optimizes structural and geometric parameters, compares various kinematic suspension concepts, and uses realistic cost functions such as mass and energy minimization, static stability, and more. For self-localization and safe navigation through unknown terrain, we use fast 3D stereo algorithms that have been applied successfully, e.g., in unmanned-air-vehicle applications and on terrestrial mobile systems. The advanced rover design approach is applicable to both lunar and Martian surface exploration. A first mobility concept for a lunar vehicle will be presented

    Robust navigation control and headland turning optimization of agricultural vehicles

    Autonomous agricultural robots have experienced rapid development during the last decade. They are capable of automating numerous field operations such as data collection, spraying, weeding, and harvesting. Because of the increasing field-workload demand and, conversely, the diminishing labor force, it is expected that more and more autonomous agricultural robots will be utilized in future farming systems. A four-wheel-steering (4WS) and four-wheel-driving (4WD) robotic vehicle, AgRover, was developed at the Agricultural Automation and Robotics Lab at Iowa State University. As a 4WS/4WD robotic vehicle, AgRover can operate in four steering modes: crabbing, front steering, rear steering, and coordinated steering. These steering modes provide extraordinary flexibility for off-road path tracking and turning. AgRover can be manually controlled by a remote joystick, with each motor under its own PID controller. Socket-based software, written in Visual C#, was developed on both the AgRover side and the remote-PC side to manage bi-directional data communication. Safety redundancy was also considered and implemented during the software development. One of the prominent challenges in automated navigation control for off-road vehicles is overcoming the inaccuracy of vehicle modeling and the complexity of soil-tire interactions. Further, the robotic vehicle is a multiple-input, multiple-output (MIMO) high-dimensional nonlinear system, which is difficult to control with conventional linearization methods. To this end, a robust nonlinear navigation controller was developed based on Sliding Mode Control (SMC) theory, and AgRover was used as the test platform to validate the controller's performance. Based on this theoretical framework, a series of field experiments on robust trajectory-tracking control was carried out, and promising results were achieved. Another vitally important component in automated agricultural field-equipment navigation is automatic headland turning. Automated headland turning still remains a challenging task for most auto-steer agricultural vehicles. This is particularly true after planting, when precise alignment between crop rows and the tractor or tractor-implement combination is critical as the equipment enters the next path. Given the motion constraints originating from nonholonomic agricultural vehicles and the allowable headland turning space, an optimized headland-turning trajectory planner is highly desirable for realizing automated headland turning. In this dissertation research, an optimization scheme was developed that incorporates vehicle system models, a minimum turning-time objective, and a set of associated motion constraints through a direct collocation nonlinear programming (DCNLP) optimization approach. The optimization algorithms were implemented using Matlab scripts and the TOMLAB/SNOPT toolboxes. Various case studies, including tractor and tractor-trailer combinations under different headland constraints, were conducted. To validate the soundness of the developed optimization algorithm, the planner-generated turning trajectory was compared with the hand-calculated trajectory where an analytical approach was possible. The overall trajectory-planning results clearly demonstrate the great potential of DCNLP methods for headland-turning trajectory optimization for a tractor with or without towed implements
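    The robustness argument behind the SMC tracking controller can be illustrated on a toy model: define a sliding surface over the lateral path error, drive the state onto it with a switching term whose gain exceeds the disturbance bound, and smooth the switch with a boundary layer to limit chattering. This is a generic SMC sketch on a double-integrator error model, not the dissertation's AgRover controller; the gains and disturbance are illustrative.

    ```python
    import numpy as np

    def smc_steering(e, e_dot, lam=1.0, k=2.0, phi=0.05):
        """Sliding-mode command for lateral path error e.
        s = e_dot + lam*e defines the sliding surface; tanh(s/phi) is a
        boundary-layer smoothing of sign(s) that limits chattering."""
        s = e_dot + lam * e
        return -lam * e_dot - k * np.tanh(s / phi)

    # Toy rollout: lateral-error dynamics as a double integrator with a
    # bounded unmodelled disturbance (standing in for soil-tire effects).
    # Convergence holds because the switching gain k exceeds the
    # disturbance bound (2.0 > 0.3).
    dt, e, e_dot = 0.01, 1.0, 0.0
    for i in range(2000):
        d = 0.3 * np.sin(0.05 * i)      # bounded disturbance
        u = smc_steering(e, e_dot)
        e_dot += (u + d) * dt
        e += e_dot * dt
    ```

    On the surface, the error obeys e_dot = -lam*e, so lam sets the convergence rate regardless of the exact plant model, which is the robustness property that makes SMC attractive for poorly modeled soil-tire interactions.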

    NASA Center for Intelligent Robotic Systems for Space Exploration

    NASA's program for the civilian exploration of space is a challenge to scientists and engineers to help maintain and further develop the United States' position of leadership in a focused sphere of space activity. Such an ambitious plan requires the contribution and further development of many scientific and technological fields. One research area essential for the success of these space exploration programs is Intelligent Robotic Systems. These systems represent a class of autonomous and semi-autonomous machines that can perform human-like functions with or without human interaction. They are fundamental for activities too hazardous for humans, or too distant or complex for remote telemanipulation. To meet this challenge, Rensselaer Polytechnic Institute (RPI) has established an Engineering Research Center for Intelligent Robotic Systems for Space Exploration (CIRSSE). The Center was created with a five-year, $5.5 million NASA grant awarded to a proposal submitted by a team from the Robotics and Automation Laboratories. The Robotics and Automation Laboratories of RPI are the result of the 1987 merger of the Robotics and Automation Laboratory of the Department of Electrical, Computer, and Systems Engineering (ECSE) and the Research Laboratory for Kinematics and Robotic Mechanisms of the Department of Mechanical Engineering, Aeronautical Engineering, and Mechanics (ME,AE,&M). This report is an examination of the activities that are centered at CIRSSE

    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicle(s) in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time, while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting the nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest.
    Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. The revision includes a description of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API
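    The vehicle-in-the-loop pattern the abstract describes can be reduced to a single cycle: take the real pose from motion capture, render a synthetic image at that pose, run perception on the rendered image, and send the resulting command back to the real vehicle. The sketch below is a generic illustration of that data flow; all four callables are hypothetical stand-ins, not FlightGoggles API names.

    ```python
    def vehicle_in_the_loop_step(mocap_pose, render, perceive, control):
        """One cycle of vehicle-in-the-loop simulation: real dynamics come
        from the motion-capture pose, while exteroceptive sensing is
        rendered synthetically at that pose."""
        image = render(mocap_pose)        # synthetic exteroceptive sensing
        state_estimate = perceive(image)  # perception runs on rendered data
        return control(state_estimate)    # command sent to the real vehicle

    # Demo with trivial stand-in callables on a scalar "pose":
    cmd = vehicle_in_the_loop_step(1.0,
                                   render=lambda p: p + 1.0,
                                   perceive=lambda im: im * 2.0,
                                   control=lambda s: s - 1.0)
    ```

    The point of the pattern is that aerodynamics, motor mechanics, and battery behavior never need modeling: they enter the loop through the real vehicle's motion, while only the sensing is simulated.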