3 research outputs found

    An embarrassingly simple approach for visual navigation of forest environments

    Navigation in forest environments is a challenging and open problem in the area of field robotics. Rovers in forest environments are required to infer the traversability of a priori unknown terrain, comprising a number of different types of compliant and rigid obstacles, under varying lighting and weather conditions. The challenges are further compounded for inexpensive, small-sized (portable) rovers. While such rovers may be useful for collaboratively monitoring large tracts of forest as a swarm, with low environmental impact, their small size affords them only a low viewpoint of their proximal terrain. Moreover, their limited view may frequently be partially occluded by nearby compliant obstacles such as shrubs and tall grass. Perhaps consequently, most studies on off-road navigation use large rovers equipped with expensive exteroceptive navigation sensors. We design a low-cost navigation system tailored for small-sized forest rovers. For navigation, a lightweight convolutional neural network predicts depth images from the RGB images of a low-viewpoint monocular camera. A simple coarse-grained navigation algorithm then aggregates the predicted depth information to steer our mobile platform towards open traversable areas in the forest while avoiding obstacles. In this study, the steering commands output by our navigation algorithm direct an operator pushing the mobile platform. Our navigation algorithm has been extensively tested in high-fidelity forest simulations and in field trials. Using no more than a 16 × 16 pixel depth prediction image derived from a 32 × 32 pixel RGB image, our algorithm running on a Raspberry Pi successfully navigated a total of over 750 m of real-world forest terrain comprising shrubs, dense bushes, tall grass, fallen branches, fallen tree trunks, small ditches and mounds, and standing trees, under five different weather conditions and at four different times of day. Furthermore, our algorithm is robust to changes in the mobile platform's camera pitch angle, motion blur, low lighting at dusk, and high-contrast lighting conditions.
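The coarse-grained aggregation step can be sketched as follows. The three-sector split, mean-depth scoring, and left/straight/right output are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

def steer_from_depth(depth16, n_sectors=3):
    """Pick a steering direction from a coarse predicted depth image.

    Splits the 16x16 depth prediction into vertical sectors, scores each
    sector by its mean predicted depth, and steers toward the most open
    (deepest) sector. Sector count and the steering mapping are assumptions.
    """
    h, w = depth16.shape
    sector_w = w // n_sectors
    scores = [depth16[:, i * sector_w:(i + 1) * sector_w].mean()
              for i in range(n_sectors)]
    best = int(np.argmax(scores))
    return ("left", "straight", "right")[best] if n_sectors == 3 else best

# Toy depth map that is deepest (most open) on the right-hand third.
depth = np.ones((16, 16))
depth[:, 11:] = 5.0
print(steer_from_depth(depth))  # "right"
```

Ties resolve to the leftmost sector here; a real controller would likely add hysteresis so the platform does not oscillate between equally open sectors.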

    Master of Science

    There has been much research on how to get unmanned aerial vehicles (UAVs) to perch on many different types of surfaces and objects, including flat surfaces, ramps, tree branches, and power lines. Many of these surfaces are static, and falls from them are easy to detect using inertial sensors such as accelerometers or gyroscopes. However, some perches, such as tree branches and power lines, are not static. A UAV perched on them moves with them, making fall detection much more difficult than simply sensing motion. This thesis proposes two methods for detecting the fall of a UAV from such a dynamic perch. The first method uses computer vision on the feed from a camera mounted on the bottom of the UAV. Optical flow is used in conjunction with a filter that segments the perch from the background in the image to estimate the relative motion between the UAV and the perch. If the motion exceeds certain bounds, the UAV is considered to be falling. The second method estimates the instantaneous center of rotation (ICR) of the UAV using accelerometers and a gyroscope mounted on the UAV frame. Two approaches are proposed: one integrates the accelerometers to find the velocity at a point; the other finds the distances between the ICR and three points on the rigid frame of the UAV. The ICR estimates from these two approaches are compared to an ICR estimate derived from an external Vicon motion capture system. The estimated ICR is then compared to the ICR of the perch; if the two diverge enough, the UAV is considered to be falling. The proposed methods were tested experimentally by placing a test quadrotor fitted with the appropriate sensors on three different test perches: a painted PVC pipe, a PVC pipe with a swirl pattern, and a tree branch. The quadrotor and perch were then actuated in three different tests. In the first test, the quadrotor rotates about the perch while the perch is static. In the second, the quadrotor swings on the perch without slipping. In the final test, the swing angle of the perch is increased until the quadrotor falls off. Each of the 9 tests (three perches by three motions) is performed 5 times. Accelerometer, gyroscope, and vision data are gathered during these tests and analyzed using the methods described in this thesis. These experiments show that the vision method works fairly well and that the ICR method works to a degree, but more work remains in that area.
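The vision method's decision rule can be sketched as below, assuming a dense per-pixel optical-flow field (e.g. from a dense optical-flow routine on the downward camera feed) and a precomputed perch segmentation mask; the threshold value is an illustrative assumption:

```python
import numpy as np

def relative_motion(flow, perch_mask):
    """Mean optical-flow vector of the perch region minus the background.

    flow:       HxWx2 array of per-pixel (dx, dy) optical flow
    perch_mask: HxW boolean array, True where the perch was segmented
    """
    perch_flow = flow[perch_mask].mean(axis=0)
    bg_flow = flow[~perch_mask].mean(axis=0)
    return perch_flow - bg_flow

def is_falling(flow, perch_mask, bound=2.0):
    # Declare a fall when UAV motion relative to the perch exceeds the bound.
    # The bound is a hypothetical tuning parameter, not the thesis's value.
    return float(np.linalg.norm(relative_motion(flow, perch_mask))) > bound

# Perch and background moving together (a swinging perch): no fall declared.
flow = np.full((8, 8, 2), 3.0)
mask = np.zeros((8, 8), dtype=bool)
mask[3:5, :] = True
print(is_falling(flow, mask))  # False
```

The key property this captures is that common motion of perch and background (the perch swaying, carrying the UAV with it) cancels out, so only motion of the UAV relative to the perch triggers a fall.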

    A Probabilistic Treatment To Point Cloud Matching And Motion Estimation

    Probabilistic and efficient motion estimation is paramount in many robotic applications, including state estimation and position tracking. Iterative closest point (ICP) is a popular algorithm that provides ego-motion estimates for mobile robots by matching pairs of point clouds. Estimating motion efficiently with ICP is challenging due to the large size of point clouds. Further, sensor noise and environmental uncertainties result in uncertain motion and state estimates. Probabilistic inference is a principled approach to quantifying uncertainty but is computationally expensive and thus difficult to use in complex real-time robotics tasks. In this thesis, we address these challenges by leveraging recent advances in optimization and probabilistic inference, and present four core contributions. The first is SGD-ICP, which employs stochastic gradient descent (SGD) to align two point clouds efficiently. The second is Bayesian-ICP, which combines SGD-ICP with stochastic gradient Langevin dynamics to obtain distributions over transformations efficiently. We also propose an adaptive motion model that employs Bayesian-ICP to produce environment-aware proposal distributions for state estimation. The third is Stein-ICP, a probabilistic ICP technique that exploits GPU parallelism for speed gains. Stein-ICP uses the Stein variational gradient descent framework to provide non-parametric estimates of the transformation and can model complex multi-modal distributions. The fourth contribution is the Stein particle filter, capable of filtering non-Gaussian, high-dimensional dynamical systems. This method can be seen as a deterministic flow of particles from an initial state to the desired state. The transport of particles is embedded in a reproducing kernel Hilbert space, where particles interact through a repulsive force that maintains diversity among them.
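The SGD-ICP idea can be sketched in a simplified form: sample a minibatch of source points each step, match them to nearest target points, and take a gradient step on the alignment parameters. The 2-D, translation-only setting, brute-force matching, and hyperparameters below are assumptions for illustration; the actual method also estimates rotation on full 3-D clouds:

```python
import numpy as np

def sgd_icp(source, target, iters=200, batch=16, lr=0.1, seed=0):
    """Translation-only SGD-ICP sketch for 2-D point clouds."""
    rng = np.random.default_rng(seed)
    t = np.zeros(2)  # current translation estimate
    for _ in range(iters):
        idx = rng.integers(0, len(source), size=batch)
        moved = source[idx] + t
        # Nearest-neighbour correspondences (brute force, fine for a sketch).
        d2 = ((moved[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        nn = target[d2.argmin(axis=1)]
        # Gradient of the minibatch mean squared distance w.r.t. t.
        grad = 2.0 * (moved - nn).mean(axis=0)
        t -= lr * grad
    return t

rng = np.random.default_rng(1)
target = rng.normal(size=(100, 2))
source = target - np.array([0.5, -0.3])  # target = source + [0.5, -0.3]
t_est = sgd_icp(source, target)
```

Because each step touches only a minibatch rather than the full cloud, the per-iteration cost stays low; replacing the plain SGD step with a stochastic gradient Langevin dynamics step is what yields the distribution over transformations in Bayesian-ICP.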