12 research outputs found

    Towards Monocular Vision based Obstacle Avoidance through Deep Reinforcement Learning

    Full text link
    Obstacle avoidance is a fundamental requirement for autonomous robots which operate in, and interact with, the real world. When perception is limited to monocular vision, avoiding collisions becomes significantly more challenging due to the lack of 3D information. Conventional path planners for obstacle avoidance require tuning a number of parameters and cannot directly benefit from large datasets and continuous use. In this paper, a dueling-architecture-based deep double-Q network (D3QN) is proposed for obstacle avoidance using only monocular RGB vision. Based on the dueling and double-Q mechanisms, D3QN can efficiently learn how to avoid obstacles in a simulator, even with very noisy depth information predicted from RGB images. Extensive experiments show that D3QN learns twice as fast as a standard deep Q network, and that models trained solely in virtual environments can be transferred directly to real robots, generalizing well to various new environments with previously unseen dynamic objects. Comment: Accepted by the RSS 2017 workshop New Frontiers for Deep Learning in Robotics
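    The dueling and double-Q mechanisms named in the abstract are standard and can be illustrated compactly. Below is a minimal sketch (not the paper's implementation; the function names and the use of NumPy are assumptions of mine) of how a dueling head combines a state value with mean-subtracted advantages, and how a double-Q target lets the online network select the action while the target network evaluates it:

```python
import numpy as np

def dueling_q_values(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a').

    Subtracting the mean advantage keeps V and A identifiable."""
    advantages = np.asarray(advantages, dtype=float)
    return value + advantages - advantages.mean()

def double_q_target(reward, gamma, q_online_next, q_target_next):
    """Double-Q target: the online network picks the next action, while
    the slower-moving target network evaluates it, which reduces the
    overestimation bias of vanilla Q-learning."""
    best_action = int(np.argmax(q_online_next))
    return reward + gamma * q_target_next[best_action]
```

    In the full D3QN these quantities would be produced by convolutional networks operating on depth predicted from monocular RGB; the aggregation and target computation are unchanged.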

    Improving Collision Avoidance Behavior of a Target-Searching Algorithm for Kilobots

    Get PDF
    Collision avoidance is very important in the area of swarm robotics. A lack of collision avoidance is cited as one important reason for the sparse distribution of the small test robots named Kilobots. In this research paper, two new collision avoidance algorithms are presented and compared with previous research results. The first algorithm uses randomness to decide which of several approaching Kilobots is stopped for a defined time before it starts to move again. The second algorithm tries to estimate the position of approaching Kilobots from their radio signal strength and then rotates to move away in the opposite direction. The results, especially for the second algorithm, are promising, as the number of collisions can be significantly reduced.
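    As a rough illustration of the second strategy (turn away from the apparently nearest neighbor), here is a hedged sketch. The function name is mine, and the assumption that a bearing estimate per neighbor is available is a simplification, since Kilobots natively measure only signal strength, not direction:

```python
import math

def flee_heading(neighbors):
    """Pick the neighbor with the strongest signal (assumed nearest)
    and return the opposite heading, wrapped to (-pi, pi].

    `neighbors` is a list of (bearing_radians, rssi) pairs."""
    bearing, _ = max(neighbors, key=lambda p: p[1])
    opposite = bearing + math.pi
    # wrap the angle back into the principal range
    return math.atan2(math.sin(opposite), math.cos(opposite))
```

    A robot would then rotate toward the returned heading before resuming its target search.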

    Position and Orientation Tracking System three-dimensional graphical user interface. CRADA final report

    Full text link

    Advanced compact laser scanning system enhancements for gear and thread measurements. Final CRADA report

    Full text link

    Obstacle avoidance using stereo vision and deep reinforcement learning in an animal-like robot

    Get PDF
    Obstacle avoidance is a fundamental behavior required to achieve safety and stability in both animals and robots. Many animals perceive and safely navigate their environment using two eyes with overlapping visual fields, allowing the use of stereopsis to compute distances to surfaces and to support collision avoidance. In this paper we develop an obstacle avoidance behavior for the biomimetic robot MiRo that combines stereo vision with deep reinforcement learning. We further show that avoidance strategies learned for a simulated robot and environment can be effectively transferred to a physical robot.
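    The distances that stereopsis provides come from the standard pinhole stereo relation Z = f * B / d. A minimal sketch of that relation, with illustrative parameter names not taken from the paper:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo relation Z = f * B / d: depth is focal length
    (pixels) times camera baseline (metres) over disparity (pixels).
    Larger disparity means a closer surface."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

    For example, a 35-pixel disparity with a 700-pixel focal length and a 10 cm baseline corresponds to a surface 2 m away; depth maps like this can feed a reinforcement-learning policy directly.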

    Acquisition and modeling of 3D irregular objects.

    Get PDF
    by Sai-bun Wong. Thesis (M.Phil.)--Chinese University of Hong Kong, 1994. Includes bibliographical references (leaves 127-131).
    Contents: Abstract (p.v); Acknowledgment (p.vii).
    Chapter 1, Introduction (pp.1-8): 1.1 Overview (p.2); 1.2 Survey (p.4); 1.3 Objectives (p.6); 1.4 Thesis Organization (p.7).
    Chapter 2, Range Sensing (pp.9-30): 2.1 Alternative Approaches to Range Sensing (p.9); 2.1.1 Size Constancy (p.9); 2.1.2 Defocusing (p.11); 2.1.3 Deconvolution (p.14); 2.1.4 Binocular Vision (p.18); 2.1.5 Active Triangulation (p.20); 2.1.6 Time-of-Flight (p.22); 2.2 Transmitter and Detector in Active Sensing (p.26); 2.2.1 Acoustics (p.26); 2.2.2 Optics (p.28); 2.2.3 Microwave (p.29); 2.3 Conclusion (p.29).
    Chapter 3, Scanning Mirror (pp.31-47): 3.1 Scanning Mechanisms (p.31); 3.2 Advantages of Scanning Mirror (p.32); 3.3 Feedback of Scanning Mirror (p.33); 3.4 Scanning Mirror Controller (p.35); 3.5 Point-to-Point Scanning (p.39); 3.6 Line Scanning (p.39); 3.7 Specifications and Measurements (p.41).
    Chapter 4, The Rangefinder with Reflectance Sensing (pp.48-58): 4.1 Ambient Noises (p.49); 4.2 Occlusion/Shadow (p.49); 4.3 Accuracy and Precision (p.50); 4.4 Optics (p.53); 4.5 Range/Reflectance Crosstalk (p.56); 4.6 Summary (p.58).
    Chapter 5, Computer Generation of Range Map (pp.59-75): 5.1 Homogeneous Transformation (p.61); 5.2 From Global to Viewer Coordinate (p.63); 5.3 Z-buffering (p.65); 5.4 Generation of Range Map (p.66); 5.5 Experimental Results (p.68).
    Chapter 6, Characterization of Range Map (pp.76-90): 6.1 Mean and Gaussian Curvature (p.76); 6.2 Methods of Curvature Generation (p.78); 6.2.1 Convolution (p.78); 6.2.2 Local Surface Patching (p.81); 6.3 Feature Extraction (p.84); 6.4 Conclusion (p.85).
    Chapter 7, Merging Multiple Characteristic Views (pp.91-119): 7.1 Rigid Body Model (p.91); 7.2 Sub-rigid Body Model (p.94); 7.3 Probabilistic Relaxation Matching (p.95); 7.4 Merging the Sub-rigid Body Model (p.99); 7.5 Illustration (p.101); 7.6 Merging Multiple Characteristic Views (p.104); 7.7 Mislocation of Feature Extraction (p.105); 7.7.1 The Transform Matrix for Perfect Matching (p.106); 7.7.2 Introducing the Errors in Feature Set (p.108); 7.8 Summary (p.113).
    Chapter 8, Conclusion (pp.120-126).
    References (pp.127-131). Appendix A, Projection of Object (pp.A1-A2). Appendix B, Performance Analysis on Rangefinder System (pp.B1-B16). Appendix C, Matching of Two Characteristic Views (pp.C1-C)

    Doctor of Philosophy

    Get PDF
    This dissertation solves the collision avoidance problem for single- and multi-robot systems where dynamic effects are significant. In many robotic systems (e.g., highly maneuverable and agile unmanned aerial vehicles) the dynamics cannot be ignored, and collision avoidance schemes based on kinematic models can result in collisions or provide limited performance, especially at high operating speeds. Herein, real-time, model-based collision avoidance algorithms that explicitly consider the robots' dynamics and perform real-time input changes to alter the trajectory and steer the robot away from potential collisions are developed, implemented, and verified in simulations and physical experiments. Such algorithms are critical in applications where a high degree of autonomy and performance are needed, for example in robot-assisted first response, where aerial and/or mobile ground robots are required to maneuver quickly through cluttered and dangerous environments in search of survivors. Firstly, the research extends reciprocal collision avoidance to robots with dynamics by unifying previous approaches to reciprocal collision avoidance under a single, generalized representation using control obstacles. In fact, it is shown how velocity obstacles, acceleration velocity obstacles, continuous control obstacles, and linear quadratic regulator (LQR)-obstacles are special instances of the generalized framework. Furthermore, an extension of control obstacles to general reciprocal collision avoidance for nonlinear, nonhomogeneous systems, where the robots may have different state spaces and different nonlinear equations of motion from one another, is described.
    Both simulations and physical experiments are provided for a combination of differential-drive, differential-drive with a trailer, and car-like robots to demonstrate that the approach is capable of letting a nonhomogeneous group of robots with nonlinear equations of motion safely avoid collisions at real-time computation rates. Secondly, the research develops a stochastic collision avoidance algorithm for a tele-operated unmanned aerial vehicle (UAV) that considers uncertainty in the robot's dynamics model and in the obstacles' positions as measured by sensors. The model-based automatic collision avoidance algorithm is implemented on a custom-designed quadcopter UAV system with on-board computation, and the sensor data are processed using a split-and-merge segmentation algorithm and an approximate Minkowski difference. Flight tests are conducted to validate the algorithm's capability to provide tele-operated, collision-free operation. Finally, a set of human subject studies is performed to quantitatively compare the performance of the model-based algorithm, the basic risk field algorithm (a variant on potential fields), and full manual control. The results show that the model-based algorithm performs significantly better than manual control in both the number of collisions and the UAV's average speed, both of which are vital, for example, for UAV-assisted search and rescue applications. Compared to the potential-field-based algorithm, the model-based algorithm allowed the pilot to operate the UAV with higher average speeds.
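    Velocity obstacles, which the dissertation generalizes via control obstacles, admit a compact geometric test: a future collision is predicted if the robot's velocity relative to an obstacle points inside the cone subtended by the obstacle inflated by the combined radii. A simplified 2D sketch (the function name and the disc-robot assumption are mine, not the dissertation's formulation):

```python
import math

def in_velocity_obstacle(rel_pos, rel_vel, combined_radius):
    """True if `rel_vel` (obstacle-relative velocity of the robot)
    lies inside the collision cone toward a disc obstacle at
    `rel_pos` with radius `combined_radius` (sum of both radii)."""
    dist = math.hypot(*rel_pos)
    if dist <= combined_radius:
        return True  # already overlapping
    # half-angle of the cone tangent to the inflated obstacle
    half_angle = math.asin(combined_radius / dist)
    speed = math.hypot(*rel_vel)
    if speed == 0.0:
        return False  # not moving relative to the obstacle
    # angle between rel_vel and the line of sight to the obstacle
    cos_theta = (rel_pos[0] * rel_vel[0] + rel_pos[1] * rel_vel[1]) / (dist * speed)
    cos_theta = max(-1.0, min(1.0, cos_theta))
    return math.acos(cos_theta) < half_angle
```

    Reciprocal and control-obstacle variants replace this purely kinematic cone with sets derived from the robots' actual dynamics, which is the dissertation's contribution; the membership test above only illustrates the underlying geometry.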

    Matching Points to Lines: Sonar-based Localization for the PSUBOT

    Get PDF
    The PSUBOT (pronounced pea-es-you-bought) is an autonomous wheelchair robot for persons with certain disabilities. Its use of voice recognition and autonomous navigation enables it to carry out high-level commands with little or no user assistance. We first describe the goals, constraints, and capabilities of the overall system, including path planning and obstacle avoidance. We then focus on localization: the ability of the robot to locate itself in space. Odometry, a compass, and an algorithm which matches points to lines are each employed to accomplish this task. The matching algorithm is the main contribution of this work. The points are acquired from a rotating sonar device, and the lines are extracted from a user-entered line-segment model of the building. The algorithm assumes that only small corrections are necessary to compensate for odometry errors, which inherently accumulate, and makes a correction by shifting and rotating the sonar image so that the data points are as close as possible to the lines. A modification of the basic algorithm to accommodate parallel lines was developed, as well as an improvement to the basic noise removal algorithm. We found that the matching algorithm was able to determine the location of the robot to within one foot, even when required to correct for as much as five feet of simulated odometry error. Finally, the algorithm's complexity was found to be well within the processing power of currently available hardware.
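    The core correction step described above (shifting the sonar image so the points fall onto the model lines) can be sketched for the simplest case of a single wall. This is an illustrative reconstruction, not the thesis code; it returns the translation along the wall's normal that zeroes the mean point-to-line distance:

```python
import math

def wall_correction(points, wall_point, wall_dir):
    """Translation (dx, dy) along the wall's normal that moves the
    sonar `points` onto the infinite line through `wall_point` with
    direction `wall_dir`; rotation is omitted for brevity."""
    # unit normal of the wall line
    nx, ny = -wall_dir[1], wall_dir[0]
    norm = math.hypot(nx, ny)
    nx, ny = nx / norm, ny / norm
    # mean signed distance of the points from the wall
    d = sum((px - wall_point[0]) * nx + (py - wall_point[1]) * ny
            for px, py in points) / len(points)
    # shift by -d along the normal so the mean distance becomes zero
    return (-d * nx, -d * ny)
```

    The full algorithm solves jointly for shift and rotation against several matched line segments, with the parallel-line and noise-removal refinements mentioned in the abstract.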