
    Real-time automated road, lane and car detection for autonomous driving

    In this paper, we discuss a vision-based system for autonomous guidance of vehicles. An autonomous intelligent vehicle has to perform a number of functions: segmenting the road, determining the boundaries to drive within, and recognizing the surrounding vehicles and obstacles are the main tasks for vision-guided vehicle navigation. In this article we propose a set of algorithms that solve road and vehicle segmentation using data from a color camera. The algorithms described here combine gray-value difference and texture analysis techniques to segment the road from the image; several geometric transformations and contour-processing algorithms are used to segment lanes; and moving cars are extracted with the help of background modeling and estimation. The techniques developed have been tested on real road images, and the results are presented.
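    The moving-car extraction step above rests on background modeling: maintain a slowly varying estimate of the static background and flag pixels that differ from it. The abstract does not specify the estimator, so the running-average model, blending factor, and threshold below are illustrative, not the paper's method:

    ```python
    import numpy as np

    def update_background(background, frame, alpha=0.05):
        """Running-average background estimate: blend each new frame in slowly."""
        return (1 - alpha) * background + alpha * frame

    def foreground_mask(background, frame, threshold=25):
        """Pixels far from the background estimate are treated as moving objects."""
        return np.abs(frame.astype(float) - background) > threshold

    # Toy 8x8 grayscale scene: static road at intensity 100, plus a bright
    # 2x2 "car" patch entering the frame.
    background = np.full((8, 8), 100.0)
    frame = background.copy()
    frame[2:4, 3:5] = 200.0

    mask = foreground_mask(background, frame)
    print(mask.sum())                          # 4 foreground pixels
    background = update_background(background, frame)
    ```

    The slow update rate lets parked cars eventually merge into the background while fast-moving vehicles stay in the foreground mask.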

    Autonomous proximity operations using machine vision for trajectory control and pose estimation

    A machine vision algorithm was developed that permits guidance control to be maintained during autonomous proximity operations. At present the algorithm exists as a simulation, running on an 80386-based personal computer and using the ModelMATE CAD package to render the target vehicle. However, the algorithm is simple enough that, following off-line training on a known target vehicle, it should run in real time with existing vision hardware. The basis of the algorithm is a sequence of single-camera images of the target vehicle, on which radial transforms are performed. Selected points of the resulting radial signatures are fed through a decision tree to determine whether a signature matches the known reference signatures for a particular view of the target. Based on the recognized scenes, the position of the maneuvering vehicle with respect to the target vehicle can be calculated and adjustments made in the former's trajectory. In addition, the pose and spin rates of the target satellite can be estimated using this method.
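    The radial-signature idea can be sketched as follows: sample the centroid-to-contour distance as a function of angle and normalise for scale, then match against stored reference signatures. This is an illustrative numpy reconstruction, not the original 80386 implementation; the bin count, normalisation, and nearest-signature matcher (standing in for the decision tree) are all assumptions:

    ```python
    import numpy as np

    def radial_signature(points, n_bins=16):
        """Max centroid-to-point distance per angle bin, scaled to [0, 1]."""
        pts = np.asarray(points, dtype=float)
        d = pts - pts.mean(axis=0)
        r = np.hypot(d[:, 0], d[:, 1])
        theta = np.arctan2(d[:, 1], d[:, 0])
        bins = ((theta + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
        sig = np.zeros(n_bins)
        np.maximum.at(sig, bins, r)
        return sig / sig.max()                 # normalising gives scale invariance

    def best_match(sig, references):
        """Nearest stored signature (a stand-in for the decision tree)."""
        return min(references, key=lambda name: np.linalg.norm(sig - references[name]))

    square = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]])
    refs = {"square": radial_signature(square),
            "bar": radial_signature(np.array([[2, 0], [-2, 0], [0, 0.5], [0, -0.5]]))}
    print(best_match(radial_signature(3 * square), refs))   # matches "square"
    ```

    Because the signature is normalised by its maximum, a view of the target at a different range (the scaled square) still matches the same reference.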

    Development of a tabletop guidance system for educational robots

    The guidance of a vehicle in an outdoor setting is typically implemented using a Real-Time Kinematic Global Positioning System (RTK-GPS), potentially enhanced by auxiliary sensors such as electronic compasses, rotation encoders, gyroscopes, and vision systems. Since GPS does not function in the indoor settings where educational competitions are often held, an alternative guidance system was developed. This article describes a guidance method built around a laser-based localization system, which uses a robot-borne single laser transmitter spinning in a horizontal plane at an angular velocity of up to 81 radians per second. Sensor arrays positioned in the corners of a flat rectangular table measuring 1.22 m × 1.83 m detected the laser beam passages. The relative time differences among the detections of the laser passages indicated the angles of the sensors with respect to the laser transmitter on the robot; these angles were translated into Cartesian coordinates. The guidance of the robot was implemented using a unidirectional wireless serial connection and position feedback from the localization system. Three experiments were conducted to test the system: 1) the accuracy of the static localization system was determined while the robot stood still; in this test the average error among valid measurements was smaller than 0.3%, although up to 3.7% of the measurements were invalid due to several causes. 2) The accuracy of the guidance system was assessed while the robot followed a straight line; the average deviation from the line was 3.6 mm over a path approximately 0.9 m long. 3) The overall performance of the guidance system was studied while the robot followed a complex path consisting of 33 sub-paths. The conclusion was that the system worked reasonably accurately, except when the robot came into close proximity
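    The angles-to-coordinates step can be illustrated with a two-beacon version of the geometry: once the spin timing has been converted to world-frame bearings from the robot toward two corner sensors, the robot position is the intersection of the two bearing rays. The conversion from passage-time differences to bearings (and the handling of robot heading) is omitted, so this is only a sketch of the triangulation step:

    ```python
    import numpy as np

    def locate(beacon_a, beacon_b, bearing_a, bearing_b):
        """Intersect the two bearing rays from the robot toward the beacons.

        Ray toward beacon A: A = P + t * (cos a, sin a), similarly for B.
        """
        A = np.asarray(beacon_a, float)
        B = np.asarray(beacon_b, float)
        da = np.array([np.cos(bearing_a), np.sin(bearing_a)])
        db = np.array([np.cos(bearing_b), np.sin(bearing_b)])
        # P + t*da = A and P + s*db = B  =>  t*da - s*db = A - B
        t, s = np.linalg.solve(np.column_stack([da, -db]), A - B)
        return A - t * da

    # Corner sensors of the 1.22 m x 1.83 m table; true robot position (0.5, 0.5).
    A, B, P = (0.0, 0.0), (1.22, 1.83), np.array([0.5, 0.5])
    ba = np.arctan2(A[1] - P[1], A[0] - P[0])   # bearings the laser timing yields
    bb = np.arctan2(B[1] - P[1], B[0] - P[0])
    print(locate(A, B, ba, bb))                 # recovers approximately (0.5, 0.5)
    ```

    In practice more than two sensors are used, which makes the system overdetermined and lets invalid detections (the 3.7% mentioned above) be rejected.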

    Three-Dimensional, Vision-Based Proportional Navigation for UAV Collision Avoidance

    As the number of potential applications for Unmanned Aerial Vehicles (UAVs) rises steadily, the chances that these devices will operate in close proximity to static or dynamic obstacles also increase. Collision avoidance is therefore an important challenge for UAV operations. Electro-optical devices have several advantages, such as light weight, low cost, modest computational requirements, and possible night-vision capability, so vision-based UAV collision avoidance has received considerable attention. Although much progress has been made in collision avoidance systems (CAS), most approaches focus on two-dimensional environments; operation in complex three-dimensional urban environments requires three-dimensional collision avoidance systems. This thesis develops a three-dimensional, vision-based collision avoidance system to provide sense-and-avoid capabilities for UAVs operating in complex urban environments with multiple static and dynamic collision threats. The system is based on the principle of proportional navigation (Pro-Nav), which states that a collision will occur when the line-of-sight (LOS) angles to another object remain constant. Under this guidance law, monocular electro-optical devices mounted on UAVs can provide measurements of the line-of-sight angles, indicating potential collision threats. In this thesis, the guidance laws were applied to a nonlinear, six-degree-of-freedom UAV model in two-dimensional and three-dimensional simulation environments with varying numbers of static and dynamic obstacles.
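    The Pro-Nav collision cue stated above (constant LOS angle together with closing range) is easy to express in code. A minimal two-dimensional sketch, assuming position histories are available for both vehicles (in the thesis the LOS angles come from the monocular camera instead), with an illustrative rate tolerance:

    ```python
    import numpy as np

    def collision_warning(own_positions, tgt_positions, rate_tol=1e-3):
        """True when the line-of-sight angle stays constant while range closes."""
        rel = np.asarray(tgt_positions, float) - np.asarray(own_positions, float)
        los = np.unwrap(np.arctan2(rel[:, 1], rel[:, 0]))   # LOS angle history
        ranges = np.hypot(rel[:, 0], rel[:, 1])
        closing = ranges[-1] < ranges[0]
        return bool(closing and np.all(np.abs(np.diff(los)) < rate_tol))

    # Two vehicles on straight tracks that intersect at the origin: the LOS
    # angle is constant and the range shrinks, so a collision is predicted.
    t = np.arange(5.0)
    own = np.column_stack([10.0 - t, np.zeros(5)])      # inbound along the x-axis
    colliding = np.column_stack([np.zeros(5), 10.0 - t])  # inbound along the y-axis
    safe = np.tile([0.0, 10.0], (5, 1))                  # stationary, LOS drifts
    print(collision_warning(own, colliding), collision_warning(own, safe))
    ```

    The same test drives the avoidance law: a maneuver succeeds precisely when it makes the LOS angle rate nonzero again.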

    Guidance and control of an autonomous underwater vehicle

    A cooperative project between the Universities of Plymouth and Cranfield aimed to design and develop an autonomous underwater vehicle named Hammerhead. The work presented herein formulates an advanced guidance and control system and implements it in Hammerhead; this involves describing the Hammerhead hardware from a control-system perspective. In addition to the control system, an intelligent navigation scheme and a state-of-the-art vision system were also developed, although those submodules are outside the scope of this thesis. The traditional way to model an underwater vehicle is to derive painstaking mathematical models from the laws of physics and then simplify and linearise them about some operating point. One of the principal novelties of this research is the use of system identification techniques on actual vehicle data obtained from full-scale in-water experiments. Two new guidance mechanisms have also been formulated for cruising-type vehicles. The first is a modification of the proportional navigation guidance used for missiles, whilst the other is a hybrid law combining several guidance strategies employed during different phases of the flight. In addition to the modelling process and guidance systems, a number of robust control methodologies have been conceived for Hammerhead. A discrete-time linear quadratic Gaussian with loop transfer recovery (LQG/LTR) based autopilot is formulated and integrated with the conventional and more advanced guidance laws proposed. A model predictive controller (MPC) has also been devised, constructed using artificial intelligence techniques such as genetic algorithms (GAs) and fuzzy logic: a GA is employed as an online optimisation routine, whilst fuzzy logic is exploited as an objective function in the MPC framework.
    The GA-MPC autopilot has been implemented in Hammerhead in real time, and the results demonstrate excellent robustness despite the presence of disturbances and ever-present modelling uncertainty. To the author's knowledge, this is the first successful application of a GA to real-time optimisation for controller tuning in the marine sector, and the thesis thus makes a novel and useful contribution to control system design in general. The controllers are also integrated with the proposed guidance laws, which is likewise considered a valuable contribution to knowledge. Moreover, the autopilots are used in conjunction with a vision-based altitude information sensor, and simulation results demonstrate the efficacy of the controllers in coping with uncertain altitude demands.
    Funding: J&S MARINE LTD., QINETIQ, SUBSEA 7 AND SOUTH WEST WATER PL
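    The GA-as-online-optimiser idea can be sketched generically: at each control step the GA searches the admissible control range for the input minimising the MPC objective. Everything below is a hypothetical stand-in; the thesis's fuzzy objective is not reproduced, the quadratic cost and bounds are invented for illustration, and the GA operators are deliberately minimal:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ga_minimize(cost, bounds, pop=30, gens=40, sigma=0.1):
        """Tiny real-coded GA: keep the best half, mutate copies of them."""
        lo, hi = np.asarray(bounds, float).T
        x = rng.uniform(lo, hi, size=(pop, len(lo)))
        for _ in range(gens):
            fitness = np.array([cost(ind) for ind in x])
            elite = x[np.argsort(fitness)[: pop // 2]]            # truncation selection
            children = elite + rng.normal(0, sigma, elite.shape)  # Gaussian mutation
            x = np.clip(np.vstack([elite, children]), lo, hi)
        return x[np.argmin([cost(ind) for ind in x])]

    # Hypothetical one-step cost for a normalised rudder command u:
    # tracking error toward a 0.3 setpoint plus a control-effort penalty.
    cost = lambda u: (u[0] - 0.3) ** 2 + 0.1 * u[0] ** 2
    best = ga_minimize(cost, [(-1.0, 1.0)])
    print(best)        # close to the analytic optimum 0.3 / 1.1 = 0.2727...
    ```

    Running such a search inside each sampling interval is what makes the approach an *online* optimisation, and its modest population and generation counts are what make real-time execution plausible.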

    Autonomous Quadrotor Navigation and Guidance

    This project involves the design of a vision-based navigation and guidance system for a quadrotor unmanned aerial vehicle (UAV), enabling the UAV to follow a planned route specified by navigational markers, such as brightly colored squares, on the ground. A commercially available UAV is modified by attaching a camera and an embedded computer, a Raspberry Pi. An image-processing algorithm is designed using the open-source software library OpenCV to capture streaming video data from the camera and recognize the navigational markers. A guidance algorithm, also executed on the Raspberry Pi, is designed to command the UAV autopilot to move from the currently recognized marker to the next. Laboratory bench tests and flight tests are performed to validate the designs.
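    The marker-recognition step can be illustrated with a colour-threshold-and-centroid sketch. This numpy version mirrors what an OpenCV pipeline typically does with cv2.inRange followed by cv2.moments; the colour tolerance and marker geometry below are illustrative, not taken from the project:

    ```python
    import numpy as np

    def find_marker(rgb, target=(255, 0, 0), tol=60):
        """Centre of the brightly coloured marker, or None if absent.

        Threshold on Euclidean colour distance, then take the centroid
        of the matching pixels.
        """
        dist = np.linalg.norm(rgb.astype(float) - np.asarray(target, float), axis=-1)
        ys, xs = np.nonzero(dist < tol)
        if len(xs) == 0:
            return None
        return xs.mean(), ys.mean()      # (x, y) pixel centre of the marker

    # Synthetic 20x20 frame with a red 4x4 marker at columns 8-11, rows 5-8.
    frame = np.zeros((20, 20, 3), dtype=np.uint8)
    frame[5:9, 8:12] = (255, 0, 0)
    print(find_marker(frame))            # (9.5, 6.5)
    ```

    The guidance loop then reduces to steering so that the detected centre moves toward the image centre until the next marker is acquired.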