
    Featureless visual processing for SLAM in changing outdoor environments

    Vision-based SLAM is mostly a solved problem provided that clear, sharp images can be obtained. However, in outdoor environments a number of factors such as rough terrain, high speeds and hardware limitations can prevent these conditions from being met. High-speed transit on rough terrain can lead to image blur and under- or over-exposure, problems that cannot easily be dealt with using low-cost hardware. Furthermore, there has recently been growing interest in lifelong autonomy for robots, which in outdoor environments brings the challenge of dealing with a moving sun and a lack of constant artificial lighting. In this paper, we present a lightweight approach to visual localization and visual odometry that addresses the challenges posed by perceptual change and low-cost cameras. The approach combines low-resolution imagery with the SLAM algorithm RatSLAM. We test the system using a cheap consumer camera mounted on a small vehicle in a mixed urban and vegetated environment, at times ranging from dawn to dusk and in conditions ranging from sunny weather to rain. We first show that the system is able to provide reliable mapping and recall over the course of the day and to incrementally incorporate new visual scenes from different times into an existing map. We then restrict the system to learning visual scenes at only one time of day, and show that it is still able to localize and map at other times of day. The results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features.
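
    As a rough, hypothetical sketch of the featureless, low-resolution scene matching that this kind of approach relies on (illustrative only, using OpenCV and NumPy; it is not the paper's RatSLAM implementation), frames can be downsampled to tiny, brightness-normalized templates and compared by mean absolute difference:

        # Minimal sketch of featureless, low-resolution scene matching
        # (illustrative only; not the paper's RatSLAM implementation).
        import cv2
        import numpy as np

        TEMPLATE_SIZE = (32, 24)  # very low resolution, as in whole-image matching

        def make_template(frame_bgr):
            """Downsample to a tiny grayscale template and normalize brightness."""
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            small = cv2.resize(gray, TEMPLATE_SIZE, interpolation=cv2.INTER_AREA)
            small = small.astype(np.float32)
            return (small - small.mean()) / (small.std() + 1e-6)

        def best_match(current, stored_templates):
            """Return index and score of the stored scene most similar to `current`,
            using mean absolute difference (lower is better)."""
            scores = [np.mean(np.abs(current - t)) for t in stored_templates]
            idx = int(np.argmin(scores))
            return idx, scores[idx]

        # Usage: build templates from training frames, then match a query frame.
        # stored = [make_template(f) for f in training_frames]
        # idx, score = best_match(make_template(query_frame), stored)
        # A score above some threshold would indicate a new, unlearned scene.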

    Deep Learning Localization for Self-driving Cars

    Identifying the location of an autonomous car with the help of visual sensors can be a good alternative to traditional approaches like Global Positioning Systems (GPS), which are often inaccurate or absent due to insufficient network coverage. Recent research in deep learning has produced excellent results in different domains, leading to the proposition of this thesis, which uses deep learning to solve the problem of localization in smart cars with visual data. Deep Convolutional Neural Networks (CNNs) were used to train models on visual data corresponding to unique locations throughout a geographic area. In order to evaluate the performance of these models, multiple datasets were created from Google Street View as well as manually, by driving a golf cart around the campus while collecting GPS-tagged frames. The efficacy of the CNN models was also investigated across different weather and lighting conditions. Validation accuracies as high as 98% were obtained from some of these models, showing that this method has the potential to act as an alternative or aid to traditional GPS-based localization for cars. The root mean square (RMS) precision of Google Maps is often between 2-10 m, whereas the precision required for the navigation of self-driving cars is between 2-10 cm. Empirically, this precision has been achieved with the help of different error-correction systems on GPS feedback. The proposed method was able to achieve an approximate localization precision of 25 cm without the help of any external error-correction system.
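
    As a hedged illustration of the CNN-based place classification described above (the layer sizes and architecture here are assumptions for the sketch, not the thesis's actual model), a small PyTorch network can map GPS-tagged frames to discrete location classes:

        # Minimal sketch of CNN place classification
        # (illustrative architecture; not the thesis's actual model).
        import torch
        import torch.nn as nn

        class PlaceCNN(nn.Module):
            def __init__(self, num_locations):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.classifier = nn.Linear(128, num_locations)

            def forward(self, x):           # x: (batch, 3, H, W) RGB frames
                z = self.features(x).flatten(1)
                return self.classifier(z)   # logits over discrete location classes

        # Each training image is labeled with the index of its nearest GPS waypoint,
        # so localization reduces to classification.
        model = PlaceCNN(num_locations=200)          # 200 is an arbitrary example
        logits = model(torch.randn(1, 3, 224, 224))  # dummy frame
        predicted_location = logits.argmax(dim=1)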

    Visual navigation and path tracking using street geometry information for image alignment and servoing

    Single-camera navigation systems need information from other sensors or from the work environment to produce reliable and accurate position measurements, so providing such trustworthy, accurate, and available information in the environment is very important. This work highlights that the well-described streets of urban environments can be exploited by drones for navigation and path-tracking purposes; benefitting from such structures is thus not limited to automated driving cars. While the drone position is continuously computed using visual odometry, scene matching is used to correct the position drift based on landmarks. The drone path is defined by several waypoints, and landmarks centered on those waypoints are carefully chosen at street intersections. The known street geometry and dimensions are used to estimate the image scale and orientation needed for image alignment, to compensate for the visual odometry drift, and to pass closer to the landmark center through the visual servoing process. A probabilistic Hough transform is used to detect and extract the street borders. The system is realized in a simulation environment consisting of the Robot Operating System (ROS), the 3D dynamic simulator Gazebo, and the IRIS drone model. The results demonstrate the efficiency of the suggested system, with a position RMS error of 1.4 m.
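
    To make the street-border detection step concrete, the following sketch applies OpenCV's probabilistic Hough transform to an edge image (the Canny and Hough parameters are assumptions for illustration, not the paper's tuned values):

        # Minimal sketch of street-border detection with a probabilistic Hough
        # transform (illustrative parameters; not the paper's exact pipeline).
        import cv2
        import numpy as np

        def detect_street_borders(frame_bgr):
            """Return line segments likely to belong to street borders."""
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(gray, 50, 150)
            # Probabilistic Hough transform: returns endpoints of detected segments.
            segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                       threshold=60, minLineLength=80, maxLineGap=20)
            return [] if segments is None else [s[0] for s in segments]

        # The segment orientations, together with the known street width, could then
        # be used to estimate image scale and heading for alignment and visual servoing.
        # borders = detect_street_borders(camera_frame)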

    System of Terrain Analysis, Energy Estimation and Path Planning for Planetary Exploration by Robot Teams

    NASA’s long-term plans involve a return to manned Moon missions and, eventually, sending humans to Mars. The focus of this project is the use of autonomous mobile robotics to enhance these endeavors. This research details the creation of a system of terrain classification, energy-of-traversal estimation, and low-cost path planning for teams of inexpensive and potentially expendable robots. The first stage of this project was the creation of a model which estimates the energy requirements of traversing varying terrain types for a six-wheel rocker-bogie rover. The wheel/soil interaction model uses Shibly’s modified Bekker equations and incorporates a new simplified rocker-bogie model for estimating wheel loads. In all but a single trial, the relative energy requirements for each soil type were correctly predicted by the model. A complete-coverage path planner intended to minimize energy consumption was designed and tested. It accepts as input terrain maps detailing the energy consumption required to move to each adjacent location, and exploration is performed via a cost function which determines the robot’s next move. This system was successfully tested for multiple robots by means of a shared exploration map. At peak efficiency, the energy consumed by our path planner was only 56% of that used by the best-case back-and-forth coverage pattern. After performing a sensitivity analysis of Shibly’s equations to determine which soil parameters most affected energy consumption, a neural-network terrain classifier was designed and tested. The terrain classifier assigns all traversable terrain to one of three soil types and then applies an assumed set of soil parameters. The classifier performed well overall, but had some difficulty distinguishing large rocks from sand. This work presents a system which successfully classifies terrain imagery into one of three soil types, assesses the energy requirements of terrain traversal for these soil types, and plans efficient paths of complete coverage for the imaged area. While further efforts can be made in all areas, the work achieves its stated goals.
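
    As a rough sketch of the cost-function-driven exploration described above (the cost weights, grid map format, and revisit penalty are assumptions for illustration, not the thesis's actual planner), a single greedy coverage step over an energy map could look like this:

        # Minimal sketch of greedy, energy-aware complete-coverage exploration
        # (illustrative cost function; not the thesis's actual planner).
        import numpy as np

        def next_move(pos, energy_map, visited, revisit_penalty=5.0):
            """Pick the cheapest adjacent cell, penalizing already-covered cells.
            energy_map[r, c] is the estimated energy to move into cell (r, c)."""
            rows, cols = energy_map.shape
            best, best_cost = None, np.inf
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                r, c = pos[0] + dr, pos[1] + dc
                if not (0 <= r < rows and 0 <= c < cols):
                    continue
                cost = energy_map[r, c] + (revisit_penalty if visited[r, c] else 0.0)
                if cost < best_cost:
                    best, best_cost = (r, c), cost
            return best

        # Usage: multiple robots can share the same `visited` map, so cells already
        # covered by one robot are penalized for the others.
        # energy_map = np.random.rand(20, 20)
        # visited = np.zeros_like(energy_map, dtype=bool)
        # pos = (0, 0)
        # pos = next_move(pos, energy_map, visited); visited[pos] = True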

    Automated Visual Database Creation For A Ground Vehicle Simulator

    This research focuses on extracting road models from stereo video sequences taken from a moving vehicle. The proposed method combines color-histogram-based segmentation, active contours (snakes), and morphological processing to extract road boundary coordinates for conversion into Matlab- or Multigen OpenFlight-compatible polygonal representations. Color segmentation uses an initial truth frame to develop a color probability density function (PDF) of the road versus the terrain. Subsequent frames are segmented using a Maximum A Posteriori (MAP) criterion, and the resulting templates are used to update the PDFs. Color segmentation worked well where there was minimal shadowing and occlusion by other cars. A snake algorithm was used to find the road edges, which were converted to 3D coordinates using stereo disparity and vehicle position information. The resulting 3D road models were accurate to within 1 meter.
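
    To illustrate the MAP color-segmentation step described above (the bin counts, color space, and prior are assumptions for the sketch, not the paper's settings), each pixel can be assigned to whichever class, road or terrain, gives it the higher posterior under the learned color PDFs:

        # Minimal sketch of MAP road/terrain color segmentation
        # (illustrative bins and prior; not the paper's exact method).
        import numpy as np

        BINS = 16  # histogram bins per RGB channel

        def color_pdf(pixels):
            """Estimate a normalized 3D color histogram (PDF) from Nx3 uint8 pixels."""
            hist, _ = np.histogramdd(pixels, bins=(BINS,) * 3, range=((0, 256),) * 3)
            return hist / max(hist.sum(), 1)

        def map_segment(image, road_pdf, terrain_pdf, road_prior=0.5):
            """Label each pixel as road (True) or terrain (False) by the MAP rule."""
            idx = (image // (256 // BINS)).reshape(-1, 3)
            p_road = road_pdf[idx[:, 0], idx[:, 1], idx[:, 2]] * road_prior
            p_terr = terrain_pdf[idx[:, 0], idx[:, 1], idx[:, 2]] * (1 - road_prior)
            return (p_road > p_terr).reshape(image.shape[:2])

        # Usage: build the PDFs from a labeled truth frame, segment the next frame,
        # then use the resulting template to update the PDFs over time.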

    Exploiting the Internet Resources for Autonomous Robots in Agriculture

    The number of autonomous robots in the agri-food sector is increasing yearly, promoting the application of precision agriculture techniques. The same applies to online services and techniques implemented over the Internet, such as the Internet of Things (IoT) and cloud computing, which make big data, edge computing, and digital twin technologies possible. Developers of autonomous vehicles understand that autonomous robots for agriculture must take advantage of these Internet techniques to strengthen their usability. This integration can be achieved using different strategies, but existing tools can facilitate it by providing benefits for developers and users. This study presents an architecture to integrate the different components of an autonomous robot that provides access to the cloud, taking advantage of the services offered for data storage, scalability, accessibility, data sharing, and data analytics. In addition, the study highlights the advantages of integrating new technologies into autonomous robots that can bring significant benefits to farmers. The architecture is based on the Robot Operating System (ROS), a collection of software applications for communication among subsystems, and FIWARE (Future Internet WARE), a framework of open-source components that accelerates the development of intelligent solutions. To validate and assess the proposed architecture, this study focuses on a specific example of an innovative weeding application with laser technology in agriculture. The robot controller is distributed between the robot hardware, which provides real-time functions, and the cloud, which provides access to online resources. Analyzing the resulting characteristics, such as transfer speed, latency, response and processing time, and response status based on requests, enabled a positive assessment of the use of ROS and FIWARE for integrating autonomous robots and the Internet.
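
    As a hedged sketch of how a ROS component might push robot state to a FIWARE context broker (the entity id, attribute name, and broker URL below are assumptions for illustration, not the study's actual configuration), an NGSI-v2 attribute update from a ROS 1 node could look like this:

        # Minimal sketch of bridging ROS odometry to a FIWARE Orion Context Broker
        # via NGSI-v2 (entity id, attribute name, and broker URL are assumptions).
        import requests
        import rospy
        from nav_msgs.msg import Odometry

        ORION_URL = "http://localhost:1026/v2/entities/urn:ngsi-ld:Robot:001/attrs"

        def on_odom(msg):
            """Forward the latest robot pose to the cloud context broker."""
            p = msg.pose.pose.position
            payload = {
                "position": {"type": "geo:json",
                             "value": {"type": "Point", "coordinates": [p.x, p.y]}},
            }
            try:
                requests.post(ORION_URL, json=payload, timeout=1.0)
            except requests.RequestException as exc:
                rospy.logwarn("Context broker update failed: %s", exc)

        if __name__ == "__main__":
            rospy.init_node("fiware_bridge")
            rospy.Subscriber("/odom", Odometry, on_odom)
            rospy.spin()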

    A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes

    Animals avoid obstacles and approach goals in novel cluttered environments using visual information, notably optic flow, to compute heading, or direction of travel, with respect to objects in the environment. We present a neural model of how heading is computed that describes interactions among neurons in several visual areas of the primate magnocellular pathway, from retina through V1, MT+, and MSTd. The model produces outputs which are qualitatively and quantitatively similar to human heading estimation data in response to complex natural scenes. The model estimates heading to within 1.5° in random-dot or photo-realistically rendered scenes and within 3° in video streams from driving in real-world environments. Simulated rotations of less than 1° per second do not affect model performance, but faster simulated rotation rates degrade performance, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments. National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016).
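
    For a concrete feel for heading-from-optic-flow, the following sketch (a standard least-squares focus-of-expansion estimate under pure translation, not the neural model described in the abstract) recovers the image point from which flow vectors radiate, which corresponds to the direction of travel:

        # Minimal sketch of heading estimation as the focus of expansion (FOE) of
        # optic flow under pure translation (standard least squares, not the
        # neural model from the abstract).
        import numpy as np

        def focus_of_expansion(points, flow):
            """points: Nx2 image coordinates; flow: Nx2 flow vectors (u, v).
            For translational motion every flow vector points away from the FOE,
            so the cross product (p - foe) x flow = 0 gives one linear equation
            per point: fx*v - fy*u = x*v - y*u."""
            x, y = points[:, 0], points[:, 1]
            u, v = flow[:, 0], flow[:, 1]
            A = np.stack([v, -u], axis=1)
            b = x * v - y * u
            foe, *_ = np.linalg.lstsq(A, b, rcond=None)
            return foe  # (fx, fy) in image coordinates

        # Synthetic check: flow radiating from (160, 120) should be recovered.
        # pts = np.random.rand(200, 2) * [320, 240]
        # flw = (pts - [160, 120]) * 0.05
        # print(focus_of_expansion(pts, flw))   # approximately [160, 120]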