60 research outputs found

    Computer Vision-Based Traffic Sign Detection and Extraction: A Hybrid Approach Using GIS and Machine Learning

    Traffic sign detection and positioning have drawn considerable attention because of the recent development of autonomous driving and intelligent transportation systems. To detect and pinpoint traffic signs accurately, this research proposes two methods. In the first, geo-tagged Google Street View images and road networks were used to locate traffic signs. In the second, both traffic sign categories and locations were identified and extracted from location-based GoPro video. TensorFlow was the machine learning framework used to implement both methods. Using the first method (the Google Street View image-based approach), 363 stop signs were detected and mapped accurately. Using the second method (the GoPro video-based approach), 32 traffic signs were recognized and pinpointed with better location accuracy, to within 10 meters; the average distance from the observation points to the 32 ground-truth references was 7.78 meters. The trade-offs between the methods are discussed: the GoPro video-based approach has higher location accuracy, while the Google Street View image-based approach is more accessible in most major cities around the world. The proposed traffic sign detection workflow can thus extract and locate traffic signs in other cities. For further development of this research, IMU (Inertial Measurement Unit) and SLAM (Simultaneous Localization and Mapping) methods could be integrated to incorporate more data and improve location prediction accuracy.
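    The abstract names TensorFlow as the detection framework. As a minimal, hedged sketch of the detection step, the following assumes a TensorFlow Object Detection API SavedModel; the model path, image file, and confidence threshold are illustrative assumptions, not details from the paper.

```python
# Sketch: running an object detector over a street-level image to find
# traffic signs. Paths, the 0.5 threshold, and class IDs are assumptions.
import numpy as np
import tensorflow as tf
from PIL import Image

detect_fn = tf.saved_model.load("exported_model/saved_model")  # hypothetical path

image = np.array(Image.open("street_view_tile.jpg"))           # hypothetical image
input_tensor = tf.convert_to_tensor(image)[tf.newaxis, ...]    # add batch dimension

detections = detect_fn(input_tensor)
scores = detections["detection_scores"][0].numpy()
boxes = detections["detection_boxes"][0].numpy()   # [ymin, xmin, ymax, xmax], normalized
classes = detections["detection_classes"][0].numpy().astype(int)

for box, score, cls in zip(boxes, scores, classes):
    if score >= 0.5:                               # assumed confidence threshold
        print(f"sign class {cls} at {box}, score {score:.2f}")
```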

    A unified vision and inertial navigation system for planetary hoppers

    Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (pages 139-146). In recent years, considerable attention has been paid to hopping as a novel mode of planetary exploration. Hopping vehicles provide advantages over traditional surface exploration vehicles, such as wheeled rovers, by enabling in-situ measurements in otherwise inaccessible terrain. However, significant development over previously demonstrated vehicle navigation technologies is required to overcome the inherent challenges involved in navigating a hopping vehicle, especially in adverse terrain. While hoppers are in many ways similar to traditional landers and surface explorers, they incorporate additional, unique motions that must be accounted for beyond those of conventional planetary landing and surface navigation systems. This thesis describes a unified vision and inertial navigation system for propulsive planetary hoppers and provides demonstration of this technology. An architecture for a navigation system specific to the motions and mission profiles of hoppers is presented, incorporating unified inertial and terrain-relative navigation solutions. A modular sensor testbed, including a stereo vision package and inertial measurement unit, was developed to act as a proof-of-concept for this navigation system architecture. The system is shown to be capable of real-time output of an accurate navigation state estimate for motions and trajectories similar to those of planetary hoppers. by Theodore J. Steiner, III. S.M.
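    A standard building block of any such unified inertial/terrain-relative system is strapdown propagation of the IMU between vision updates. The sketch below illustrates one first-order propagation step; it is a generic illustration under assumed frame conventions, not the thesis's implementation.

```python
# Sketch: first-order strapdown IMU propagation (generic, not from the thesis).
import numpy as np

def propagate(p, v, R, accel_b, gyro_b, dt, g=np.array([0.0, 0.0, -9.81])):
    """One integration step of position p, velocity v, attitude R.

    accel_b, gyro_b are body-frame specific force and angular rate;
    R is the body-to-world rotation; g is world-frame gravity (z up).
    """
    a_w = R @ accel_b + g                 # world-frame acceleration
    p = p + v * dt + 0.5 * a_w * dt**2
    v = v + a_w * dt
    wx, wy, wz = gyro_b * dt              # small-angle attitude update
    R = R @ np.array([[1.0, -wz,  wy],
                      [ wz, 1.0, -wx],
                      [-wy,  wx, 1.0]])
    return p, v, R
```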

    Pose estimation and data fusion algorithms for an autonomous mobile robot based on vision and IMU in an indoor environment

    Thesis (PhD (Computer Engineering))--University of Pretoria, 2021. Autonomous mobile robots have become an active research direction in recent years, and they are emerging in sectors such as industry, hospitals, institutions, agriculture, and homes to improve services and daily activities. Owing to technological advancement, demand for mobile robots has increased because of the tasks they perform and the services they render, such as carrying heavy objects, monitoring, delivering goods, search-and-rescue missions, and performing dangerous tasks in places like underground mines. Instead of workers being exposed to hazardous chemicals or environments that could affect their health and put lives at risk, humans are being replaced by mobile robot services. For these reasons, enhancing mobile robot operation is necessary, and the process is assisted through sensors. Sensors are used as instruments to collect the data or information that aid the robot in navigating and localising in its environment. Each sensor type has inherent strengths and weaknesses, so an inappropriate combination of sensors can result in high deployment cost with low performance. Despite the potential and prospects of autonomous mobile robots, they have yet to attain optimal performance because of the integral challenges they face, most especially localisation. Localisation is one of the fundamental issues encountered in mobile robotics, and the challenging part is estimating the robot's position and orientation, information that can be acquired from sensors and other relevant systems. To tackle the issue of localisation, a good technique is needed to deal with errors, degrading factors, and improper measurements and estimations. Different approaches have been recommended for estimating the position of a mobile robot. Some studies estimated the trajectory of the mobile robot and reconstructed the indoor scene using monocular visual odometry; this approach is not feasible for large zones and complex environments. Radio frequency identification (RFID) technology, on the other hand, provides accuracy and robustness, but the method depends on the distance between the tags and the distance between the tags and the reader. To increase localisation accuracy, the number of RFID tags per unit area has to be increased, so this technique may not yield an economical and easily scalable solution because of the growing number of required tags and the associated cost of their deployment. The Global Positioning System (GPS) is another approach that offers proven results in most scenarios; however, indoor localisation is one of the settings in which GPS cannot be used, because the signal strength is not reliable inside a building. Most approaches are unable to precisely localise an autonomous mobile robot even with high equipment cost and complex implementation, and most of the devices and sensors either require additional infrastructure or are not suitable for use in an indoor environment. Therefore, this study proposes using data from vision and inertial sensors, comprising a 3-axis accelerometer and a 3-axis gyroscope (six degrees of freedom, 6-DOF), to estimate the pose of a mobile robot. Inertial measurement unit (IMU) based tracking provides a fast response, so it can assist vision whenever vision fails due to loss of visual features.
    The use of a vision sensor helps to overcome the characteristic limitations of acoustic sensors for simultaneous multiple-object tracking; with this merit, vision is capable of estimating pose with respect to the object of interest. A single sensor or system is not reliable enough to estimate the pose of a mobile robot due to its limitations, so data acquired from multiple sensors and sources are combined using a data fusion algorithm to estimate position and orientation within a specific environment. The resulting model is more accurate because it balances the strengths of the different sensors, and the information provided through sensor or data fusion can be used to support more intelligent actions. The proposed algorithms combine data from each of the sensor types to provide the most comprehensive and accurate environmental model possible; they use a set of mathematical equations that provide an efficient computational means to estimate the state of a process. This study investigates state estimation methods that determine the state of a continuously changing system given a set of observations or measurements. The performance evaluation of the system shows that integrating these sources of information and sensors is necessary. This thesis provides viable solutions to the challenging problem of localisation in autonomous mobile robots through its adaptability, accuracy, robustness, and effectiveness. NRF. University of Pretoria. Electrical, Electronic and Computer Engineering. PhD (Computer Engineering). Unrestricted.
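    The "set of mathematical equations" described above is characteristic of the Kalman filter. As a minimal, hedged sketch of the fusion step, the following shows a generic linear predict/update cycle in which the IMU drives the prediction and a vision pose fix supplies the measurement; the models F, H, Q, R are placeholders, not the thesis's formulation.

```python
# Sketch: generic linear Kalman filter predict/update (illustrative only).
import numpy as np

def kf_predict(x, P, F, Q):
    """Propagate the state x and covariance P through the motion model F."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    """Correct the prediction with a measurement z, e.g. a vision pose fix."""
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```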

    DEVELOPMENT OF A NOVEL VEHICLE GUIDANCE SYSTEM: VEHICLE RISK MITIGATION AND CONTROL

    Over half of fatal vehicular crashes occur because vehicles leave their designated travel lane and enter other lanes or leave the roadway. Lane departure accidents also cost society billions of dollars. Recent vehicle technology research into driver assistance and vehicle autonomy has produced systems that assume various driving tasks; however, these systems do not work for all roads and travel conditions. The purpose of this research study was to begin development of a novel vehicle guidance approach, specifically studying how the vehicle interacts with the system to detect departures and control the vehicle. A literature review was conducted, covering topics such as vehicle sensors, control methods, environment recognition, driver assistance methods, vehicle autonomy methods, communication, positioning, and regulations. Researchers identified environment independence, recognition accuracy, computational load, and industry collaboration as areas of need in intelligent transportation. A novel method of vehicle guidance, known as the MwRSF Smart Barrier, was conceptualized. The vision of this method is to send verified road path data, based on AASHTO design and vehicle dynamics aspects, to guide the vehicle. To further development, research was done to determine which aspects of vehicle dynamics and trajectory trends can be used to predict departures and control the vehicle. Losses of tire-to-road friction capacity and roll stability were identified as events that can be prevented with future road path knowledge. Road departure characteristics were mathematically developed: it was shown that lateral departure, orientation error, and curvature error are parametrically linked, and these metrics were discussed as the basis for departure prediction. A controller with three parallel PID loops for modulating steering inputs to a virtual vehicle so that it remains on the path was developed. The controller was informed by a matrix of XY road coordinates, road curvature, and future road curvature, and it kept the simulated vehicle within 1 in of the centerline target path. Recommendations were made for the creation of warning modules, threshold levels, improvements to the vehicle controller, and, ultimately, full-scale testing. Advisor: Cody S. Stoll
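    The controller structure described (three parallel PID loops on lateral departure, orientation error, and curvature error, summed into one steering command) can be sketched as below; the gains are illustrative placeholders, not the tuned values from the study.

```python
# Sketch: three parallel PID loops summed into a steering command
# (structure per the abstract; gains are illustrative placeholders).
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err, dt):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

lateral_pid = PID(0.8, 0.01, 0.2)     # lateral departure loop
heading_pid = PID(1.2, 0.0, 0.1)      # orientation error loop
curvature_pid = PID(0.5, 0.0, 0.05)   # curvature error loop

def steering_command(lat_err, heading_err, curv_err, dt):
    """Sum the three parallel loop outputs into one steering input."""
    return (lateral_pid.step(lat_err, dt)
            + heading_pid.step(heading_err, dt)
            + curvature_pid.step(curv_err, dt))
```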

    A Camera-Only Based Approach to Traffic Parameter Estimation Using Mobile Observer Methods

    As vehicles become more modern, a large majority of vehicles on the road will have the sensors required to interact smoothly with other vehicles and with road infrastructure. This new connectivity will bring many benefits, but one of the most profound improvements will be in road accident prevention: vehicles will be able to share safety-critical information with oncoming vehicles and with vehicles that are occluded and therefore lack a direct line of sight to a pedestrian or another vehicle on the road. Another advantage of these modern connected vehicles is that different traffic parameters can be more easily estimated using their onboard sensors and technologies. For many decades, traffic engineers have estimated traffic parameters such as flow, density, and velocity from how many vehicles the primary vehicle passes and how many vehicles pass the primary vehicle; for much of that time, this has been done using manual and tedious methods. In this paper, a novel approach to determining these traffic parameters is used. One known problem with traffic parameter estimation is that results are sometimes inaccurate because occluded vehicles go uncounted; this paper also puts forward a proposal for how this can be remedied using the connected vehicle's framework.
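    The mobile observer method the abstract builds on is classically attributed to Wardrop and Charlesworth; a minimal sketch of its flow, speed, and density estimates follows, with variable names chosen for illustration rather than taken from the paper.

```python
# Sketch: classical moving-observer traffic estimates (Wardrop & Charlesworth).
def moving_observer(x, y, t_a, t_w, length):
    """Estimate flow, space-mean speed, and density on a road segment.

    x      : vehicles met while driving against the traffic stream
    y      : net vehicles overtaking the observer while driving with it
    t_a    : travel time against the stream (hours)
    t_w    : travel time with the stream (hours)
    length : segment length
    """
    q = (x + y) / (t_a + t_w)   # flow, vehicles per hour
    t_bar = t_w - y / q         # mean stream travel time over the segment
    v = length / t_bar          # space-mean speed
    k = q / v                   # density, vehicles per unit length
    return q, v, k
```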

    Visual and Camera Sensors

    This book includes 13 papers published in the Special Issue ("Visual and Camera Sensors") of the journal Sensors. The goal of this Special Issue was to invite high-quality, state-of-the-art research papers dealing with challenging issues in visual and camera sensors.

    Frequency Modulated Continuous Wave Radar and Video Fusion for Simultaneous Localization and Mapping

    There has been a recent push to develop technology that enables the use of UAVs in GPS-denied environments. As UAVs become smaller, there is a need to reduce the number and size of the sensor systems on board. A video camera on a UAV can serve multiple purposes: it can return imagery for processing by human users, and the highly accurate bearing information it provides makes it a useful tool in a navigation and tracking system. Radars can provide information about the types of objects in a scene and can operate in adverse weather conditions; the range and velocity measurements they provide make them a good tool for navigation. In this work, FMCW radar and color video were fused to perform SLAM in an outdoor environment. A radar SLAM solution provided the basis for the fusion. Correlations between radar returns were used to estimate dead-reckoning parameters and obtain an estimate of the platform location. A new constraint was added to the radar detection process to avoid detecting poorly observable reflectors while maintaining a large number of measurements on highly observable reflectors. The radar measurements were mapped as landmarks, further improving the platform location estimates. As images were received from the video camera, changes in platform orientation were estimated, further improving the platform orientation estimates. The expected locations of radar measurements, whose uncertainty was modeled as Gaussian, were projected onto the images and used to estimate the location of each radar reflector in the image. The colors of the most likely reflector were saved and used to detect the reflector in subsequent images. The azimuth angles obtained from the image detections were used to improve the estimates of the landmarks in the SLAM map over previous estimates in which only the radar was used.
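    The projection step described (mapping a radar landmark and its Gaussian uncertainty into the image) can be sketched with a pinhole camera model and first-order covariance propagation; the frames, intrinsics, and function names below are assumptions, not the work's implementation.

```python
# Sketch: project a radar landmark and its Gaussian uncertainty into an image.
import numpy as np

def project_landmark(p_world, P_world, R_cw, t_cw, K):
    """Return the pixel mean and 2x2 pixel covariance of a 3D landmark.

    p_world, P_world : landmark mean and covariance from the radar SLAM map
    R_cw, t_cw       : world-to-camera rotation and translation
    K                : 3x3 camera intrinsic matrix
    """
    x, y, z = R_cw @ p_world + t_cw       # landmark in the camera frame
    u = K[0, 0] * x / z + K[0, 2]         # pinhole projection to pixels
    v = K[1, 1] * y / z + K[1, 2]
    # First-order (Jacobian) propagation of the Gaussian into pixel space.
    J = np.array([[K[0, 0] / z, 0.0, -K[0, 0] * x / z**2],
                  [0.0, K[1, 1] / z, -K[1, 1] * y / z**2]]) @ R_cw
    return np.array([u, v]), J @ P_world @ J.T
```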

    Probabilistic Models for 3D Urban Scene Understanding from Movable Platforms

    This work is a contribution to understanding multi-object traffic scenes from video sequences. All data is provided by a camera system mounted on top of the autonomous driving platform AnnieWAY. The proposed probabilistic generative model reasons jointly about the 3D scene layout and the 3D location and orientation of objects in the scene. In particular, the scene topology, geometry, and traffic activities are inferred from short video sequences.

    Integrated vehicle-based safety systems first annual report

    The IVBSS program is a four-year, two-phase cooperative research program conducted by an industry team led by the University of Michigan Transportation Research Institute (UMTRI). The program began in November 2005 and will continue through December 2009 if results from vehicle verification tests conducted in the second year of the program indicate that the prototype system meets its performance guidelines and is safe for use by lay drivers in a field operational test planned for July 2008. The decision to execute Phase II of the program will take place in December 2007. The goal of the IVBSS program is to assess the safety benefits and driver acceptance associated with a prototype integrated crash warning system designed to address rear-end, road-departure, and lane-change/merge crashes on light vehicles and heavy commercial trucks. This report describes accomplishments and progress made during the first year of the program (November 2005-December 2006). Activities during the first year focused on system specification, design, and development, and on construction of the prototype vehicles. National Highway Traffic Safety Administration. http://deepblue.lib.umich.edu/bitstream/2027.42/57183/1/99863.pd

    Guided Autonomy for Quadcopter Photography

    Photographing small objects with a quadcopter is non-trivial with many common user interfaces, especially when it requires maneuvering an Unmanned Aerial Vehicle (UAV) to difficult angles in order to shoot from high perspectives. The aim of this research is to employ machine learning to support better user interfaces for quadcopter photography. Human-Robot Interaction (HRI) is supported by visual servoing, a specialized vision system for real-time object detection, and control policies acquired through reinforcement learning (RL). Two investigations of guided autonomy were conducted. In the first, the user directed the quadcopter with a sketch-based interface, and periods of user direction were interspersed with periods of autonomous flight. In the second, the user directs the quadcopter by taking a single photo with a handheld mobile device, and the quadcopter autonomously flies to the requested vantage point. This dissertation focuses on the following problems: 1) evaluating different user interface paradigms for dynamic photography in a GPS-denied environment; 2) learning better Convolutional Neural Network (CNN) object detection models to achieve higher precision in detecting human subjects than currently available state-of-the-art fast models; 3) transferring learning from the Gazebo simulation into the real world; and 4) learning robust control policies using deep reinforcement learning to maneuver the quadcopter to multiple shooting positions with minimal human interaction.
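    The visual servoing idea (keeping a detected subject centered and at a desired apparent size) can be sketched as a simple proportional mapping from a detection box to velocity commands; the gains, target size, and function names are illustrative assumptions, not the dissertation's controller.

```python
# Sketch: image-based visual servoing from a detection bounding box
# (gains and target size are illustrative placeholders).
def servo_command(bbox, img_w, img_h, target_area=0.1,
                  k_yaw=1.0, k_alt=1.0, k_fwd=2.0):
    """Map a box (x_min, y_min, x_max, y_max) to yaw/climb/forward commands."""
    x_min, y_min, x_max, y_max = bbox
    cx = (x_min + x_max) / 2.0 / img_w - 0.5    # horizontal offset of center
    cy = (y_min + y_max) / 2.0 / img_h - 0.5    # vertical offset of center
    area = (x_max - x_min) * (y_max - y_min) / (img_w * img_h)
    yaw_rate = -k_yaw * cx                      # rotate to center the subject
    climb = -k_alt * cy                         # climb/descend to center it
    forward = k_fwd * (target_area - area)      # approach until desired size
    return yaw_rate, climb, forward
```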