
    Water Detection Based on Color Variation

    This software has been designed to detect water bodies that are out in the open on cross-country terrain at close range (out to 30 meters), using imagery acquired from a stereo pair of color cameras mounted on a terrestrial, unmanned ground vehicle (UGV). The detector exploits the fact that the color variation across a water body is generally larger and more uniform than that of other naturally occurring terrain types, such as soil and vegetation. Non-traversable water bodies, such as large puddles, ponds, and lakes, are detected based on color variation, image intensity variance, image intensity gradient, size, and shape. At ranges beyond 20 meters, water bodies out in the open can be indirectly detected by detecting reflections of the sky below the horizon in color imagery. But at closer range, the color coming out of a water body dominates sky reflections, and the water cue from sky reflections is of marginal use. Since there may be times during UGV autonomous navigation when a water body does not come into a perception system's field of view until it is at close range, the ability to detect water bodies at close range is critical. Factors that influence the perceived color of a water body at close range are the amount and type of sediment in the water, the water's depth, and the angle of incidence to the water body. Developing a single model of the mixture ratio of light reflected off the water surface (to the camera) to light coming out of the water body (to the camera) for all water bodies would be fairly difficult. Instead, this software detects close water bodies based on local terrain features and the natural, uniform change in color that occurs across the surface from the leading edge to the trailing edge.
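    The sketch below illustrates the leading-to-trailing-edge cue in a minimal form: within a candidate region, water tends to show low intensity texture and a smooth, near-uniform row-to-row color change. The scoring function and its constants are illustrative assumptions, not the actual detector.

        import numpy as np

        def close_range_water_score(rgb_region):
            """Score how water-like a candidate region is.

            rgb_region: H x W x 3 float array in [0, 1], with rows ordered
            from the leading (near) edge to the trailing (far) edge.
            """
            intensity = rgb_region.mean(axis=2)                  # grayscale intensity
            texture = np.var(np.gradient(intensity, axis=1))     # local intensity variation

            row_color = rgb_region.mean(axis=1)                  # mean color per row (H x 3)
            steps = np.diff(row_color, axis=0)                   # row-to-row color change
            uniformity = 1.0 / (1.0 + np.var(steps))             # high when change is uniform

            smoothness = 1.0 / (1.0 + 100.0 * texture)           # high when texture is low
            return smoothness * uniformity                       # higher = more water-like

        # Example: a synthetic region whose color ramps smoothly from near to far
        ramp = np.linspace(0.2, 0.6, 64)[:, None, None] * np.ones((64, 48, 3))
        print(close_range_water_score(ramp))                     # close to 1.0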

    Single-Frame Terrain Mapping Software for Robotic Vehicles

    This software is a component in an unmanned ground vehicle (UGV) perception system that builds compact, single-frame terrain maps for distribution to other systems, such as a world model or an operator control unit, over a local area network (LAN). Each cell in the map encodes an elevation value, terrain classification, object classification, terrain traversability, terrain roughness, and a confidence value into four bytes of memory. The input to this software component is a range image (from a lidar or stereo vision system), and optionally a terrain classification image and an object classification image, both registered to the range image. The software generates estimates of the support surface elevation, ground cover elevation, and minimum canopy elevation; generates terrain traversability cost; detects low overhangs and high-density obstacles; and can perform geometry-based terrain classification (ground, ground cover, unknown). A new origin is automatically selected for each single-frame terrain map in global coordinates such that it coincides with the corner of a world map cell. That way, single-frame terrain maps correctly line up with the world map, facilitating the merging of map data into the world map. Instead of using 32 bits to store a floating-point elevation for each map cell, the vehicle elevation is assigned to the map origin elevation, and each cell reports its change in elevation (from the origin elevation) as a number of discrete steps. The single-frame terrain map elevation resolution is 2 cm. At that resolution, terrain elevation from -20.5 to +20.5 m (with respect to the vehicle's elevation) is encoded into 11 bits. For each four-byte map cell, bits are assigned to encode elevation, terrain roughness, terrain classification, object classification, terrain traversability cost, and a confidence value. The vehicle's current position and orientation, the map origin, and the map cell resolution are all included in a header for each map. The map is compressed into a vector prior to delivery to another system.
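    A minimal sketch of this four-byte cell encoding follows. The 2 cm resolution and 11-bit elevation field come from the description above; the bit widths chosen for the remaining fields are illustrative assumptions, not the actual layout.

        import struct

        ELEV_RES_M = 0.02          # 2 cm elevation steps (from the description)
        ELEV_BITS = 11             # covers roughly -20.5 to +20.5 m about the origin

        def pack_cell(elev_m, roughness, terrain_cls, object_cls, cost, conf):
            """Pack one map cell into four bytes (field widths are assumptions)."""
            steps = int(round(elev_m / ELEV_RES_M)) + (1 << (ELEV_BITS - 1))
            assert 0 <= steps < (1 << ELEV_BITS), "elevation out of encodable range"
            word = steps                        # bits 0-10: elevation steps
            word |= (roughness & 0x1F) << 11    # bits 11-15: roughness (assumed 5 bits)
            word |= (terrain_cls & 0x7) << 16   # bits 16-18: terrain class (assumed 3 bits)
            word |= (object_cls & 0x7) << 19    # bits 19-21: object class (assumed 3 bits)
            word |= (cost & 0x3F) << 22         # bits 22-27: traversability cost (assumed 6 bits)
            word |= (conf & 0xF) << 28          # bits 28-31: confidence (assumed 4 bits)
            return struct.pack("<I", word)      # exactly four bytes per cell

        def unpack_elevation(cell):
            """Recover the elevation (meters, relative to the map origin)."""
            word, = struct.unpack("<I", cell)
            steps = word & ((1 << ELEV_BITS) - 1)
            return (steps - (1 << (ELEV_BITS - 1))) * ELEV_RES_M

        cell = pack_cell(1.50, roughness=7, terrain_cls=1, object_cls=0, cost=12, conf=9)
        print(unpack_elevation(cell))           # -> 1.5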

    Water Detection Based on Sky Reflections

    This software has been designed to detect water bodies that are out in the open on cross-country terrain at mid- to far-range (approximately 20 to 100 meters), using imagery acquired from a stereo pair of color cameras mounted on a terrestrial, unmanned ground vehicle (UGV). Non-traversable water bodies, such as large puddles, ponds, and lakes, are indirectly detected by detecting reflections of the sky below the horizon in color imagery. The appearance of water bodies in color imagery largely depends on the ratio of light reflected off the water surface to the light coming out of the water body. When a water body is far away, the angle of incidence is large, and the light reflected off the water surface dominates. We have exploited this behavior to detect water bodies out in the open at mid- to far-range. When a water body is detected at far range, a UGV's path planner can begin to look for alternate routes to the goal position sooner rather than later. As a result, detecting water hazards at far range generally reduces the time required to reach a goal position during autonomous navigation. This software implements a new water detector based on sky reflections that geometrically locates the exact pixel in the sky that is reflecting on a candidate water pixel on the ground, and predicts whether the ground pixel is water based on color similarity and local terrain features.
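    A minimal sketch of the reflection geometry follows. For a level water surface viewed by a level pinhole camera, image rows map approximately linearly to elevation angle at small angles, so the sky pixel reflecting onto a ground pixel sits at roughly the ground row mirrored about the horizon row. The linear row-to-angle approximation and the color similarity threshold are simplifying assumptions.

        import numpy as np

        def reflected_sky_row(ground_row, horizon_row):
            """Row whose sky content reflects onto ground_row (small-angle approximation).

            For a level water surface, the reflected sky point lies as far above
            the horizon as the ground pixel's line of sight points below it.
            """
            return 2 * horizon_row - ground_row

        def is_water_candidate(image, ground_row, col, horizon_row, thresh=0.08):
            """Flag a ground pixel as water if it matches its mirrored sky pixel.

            image: H x W x 3 float array in [0, 1]; thresh is an assumed tolerance.
            """
            sky_row = reflected_sky_row(ground_row, horizon_row)
            if sky_row < 0:
                return False                                  # reflected point is off-frame
            diff = np.linalg.norm(image[ground_row, col] - image[sky_row, col])
            return diff < thresh                              # similar color -> candidate

        # Example: a frame whose ground pixel mirrors the sky color exactly
        frame = np.zeros((100, 10, 3))
        frame[20, 5] = frame[80, 5] = [0.5, 0.6, 0.9]         # sky row 20, ground row 80
        print(is_water_candidate(frame, ground_row=80, col=5, horizon_row=50))  # True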

    Multi-Sensor Mud Detection

    Robust mud detection is a critical perception requirement for unmanned ground vehicle (UGV) autonomous off-road navigation. A military UGV stuck in a mud body during a mission may have to be sacrificed or rescued, both of which are unattractive options. There are several characteristics of mud that may be detectable with appropriate UGV-mounted sensors. For example, mud only occurs on the ground surface, is cooler than surrounding dry soil during the daytime under nominal weather conditions, is generally darker than surrounding dry soil in visible imagery, and is highly polarized. However, none of these cues are definitive on their own. Dry soil also occurs on the ground surface; shadows, snow, ice, and water can also be cooler than surrounding dry soil; shadows are also darker than surrounding dry soil in visible imagery; and cars, water, and some vegetation are also highly polarized. Shadows, snow, ice, water, cars, and vegetation can all be disambiguated from mud by using a suite of sensors that span multiple bands in the electromagnetic spectrum. Because there are military operations when it is imperative for UGVs to operate without emitting strong, detectable electromagnetic signals, passive sensors are desirable. JPL has developed a daytime mud detection capability using multiple passive imaging sensors. Cues for mud from multiple passive imaging sensors are fused into a single mud detection image using a rule base, and the resultant mud detection is localized in a terrain map using range data generated from a stereo pair of color cameras.
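    A minimal sketch of rule-based cue fusion in the spirit of this description follows. The specific rule (ground-surface cue plus at least two supporting cues) is an illustrative assumption; the actual JPL rule base is not reproduced here.

        import numpy as np

        def fuse_mud_cues(on_ground, cooler_than_soil, darker_than_soil, polarized):
            """Fuse boolean H x W cue images from different passive sensors.

            Requiring the ground-surface cue plus at least two of the remaining
            cues rejects distractors that share only one cue with mud (e.g.,
            shadows are dark and cool but not highly polarized).
            """
            support = (cooler_than_soil.astype(int)
                       + darker_than_soil.astype(int)
                       + polarized.astype(int))
            return on_ground & (support >= 2)                 # single mud detection image

        # Example with random cue images
        rng = np.random.default_rng(0)
        cues = [rng.random((120, 160)) > 0.5 for _ in range(4)]
        mud = fuse_mud_cues(*cues)
        print(mud.mean())                                     # fraction of pixels flagged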

    Using Thermal Radiation in Detection of Negative Obstacles

    A method of automated detection of negative obstacles (potholes, ditches, and the like) ahead of ground vehicles at night involves processing of imagery from thermal-infrared cameras aimed at the terrain ahead of the vehicles. The method is being developed as part of an overall obstacle-avoidance scheme for autonomous and semi-autonomous off-road robotic vehicles. The method could also be applied to help human drivers of cars and trucks avoid negative obstacles, a development that may entail only modest additional cost, inasmuch as some commercially available passenger cars are already equipped with infrared cameras as aids for nighttime operation.
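    A minimal sketch of one plausible processing step follows. It assumes the commonly cited thermal cue that the interiors of negative obstacles stay warmer than the surrounding terrain at night; the description above does not spell out the cue, and the margin used here is illustrative.

        import numpy as np

        def warm_region_mask(thermal, margin_kelvin=2.0):
            """Flag pixels notably warmer than the local terrain background.

            thermal: H x W array of scene temperatures (kelvin). At night,
            pixels well above the median terrain temperature are candidate
            negative-obstacle interiors under the assumed cue.
            """
            background = np.median(thermal)
            return thermal > background + margin_kelvin

        # Example: a ditch interior 3 K warmer than the surrounding terrain
        scene = np.full((80, 120), 283.0)
        scene[50:60, 40:70] += 3.0
        print(warm_region_mask(scene).sum())                  # pixels flagged: 300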

    Dig Hazard Assessment Using a Stereo Pair of Cameras

    This software evaluates the terrain within reach of a lander's robotic arm for dig hazards using a stereo pair of cameras that are part of the lander's sensor system. A relative level of risk is calculated for a set of dig sectors. There are two versions of this software: one is designed to run onboard a lander as part of the flight software, and the other runs on a PC under Linux as a ground tool that produces the same results generated on the lander, given stereo images acquired by the lander and downlinked to Earth. Onboard dig hazard assessment is accomplished by executing a workspace panorama command sequence. This sequence acquires a set of stereo pairs of images of the terrain the arm can reach, generates a set of candidate dig sectors, and assesses the dig hazard of each candidate dig sector. The 3D perimeter points of candidate dig sectors are generated using configurable parameters. A 3D reconstruction of the terrain in front of the lander is generated using a set of stereo images acquired from the mast cameras. The 3D reconstruction is used to evaluate the dig goodness of each candidate dig sector based on a set of eight metrics:
    1. The maximum change in elevation in each sector,
    2. The elevation standard deviation in each sector,
    3. The forward tilt of each sector with respect to the payload frame,
    4. The side tilt of each sector with respect to the payload frame,
    5. The maximum size of missing data regions in each sector,
    6. The percentage of a sector that has missing data,
    7. The roughness of each sector, and
    8. The monochrome intensity standard deviation of each sector.
    Each of the eight metrics forms a goodness image layer where the goodness value of each sector ranges from 0 to 1. Goodness values of 0 and 1 correspond to high and low risk, respectively. For each dig sector, the eight goodness values are merged by selecting the lowest one. Including the merged goodness image layer, there are nine goodness image layers for each stereo pair of mast images.
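    A minimal sketch of the per-sector goodness computation and min-merge follows. Only three of the eight metrics are shown, and the normalization caps are illustrative assumptions.

        import numpy as np

        def sector_goodness(elev):
            """Per-metric goodness (1 = low risk, 0 = high risk) for one sector.

            elev: 2D array of elevations (meters) inside one dig sector;
            NaN marks missing stereo data.
            """
            valid = elev[~np.isnan(elev)]
            g = {}
            # Metric 1: maximum change in elevation (assumed 0.10 m cap)
            g["max_delta"] = 1.0 - min(np.ptp(valid) / 0.10, 1.0)
            # Metric 2: elevation standard deviation (assumed 0.03 m cap)
            g["std_dev"] = 1.0 - min(valid.std() / 0.03, 1.0)
            # Metric 6: percentage of the sector that has missing data
            g["missing_pct"] = 1.0 - np.isnan(elev).mean()
            return g

        def merged_goodness(elev):
            # As in the description: merge the per-metric layers by taking the
            # lowest (i.e., highest-risk) goodness value for the sector.
            return min(sector_goodness(elev).values())

        # Example: a nearly flat sector with no missing data scores well
        sector = np.random.default_rng(0).normal(0.0, 0.01, (32, 32))
        print(merged_goodness(sector))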

    Predictive Sea State Estimation for Automated Ride Control and Handling - PSSEARCH

    PSSEARCH provides predictive sea state estimation, coupled with closed-loop feedback control, for automated ride control. It enables a manned or unmanned watercraft to determine the 3D map and sea state conditions in its vicinity in real time. Adaptive path-planning/replanning software and a control surface management system then use this information to choose the best settings and heading relative to the seas for the watercraft. PSSEARCH looks ahead, anticipates the potential impact of waves on the boat, and is used in a tight control loop to adjust trim tabs, course, and throttle settings. The software uses sensory inputs including an IMU (inertial measurement unit), stereo cameras, and radar to determine the sea state and wave conditions (wave height, frequency, and direction) in the vicinity of a rapidly moving boat. This information can then be used to plot a safe path through the oncoming waves. The main issues in determining a safe path for sea surface navigation are: (1) deriving a 3D map of the surrounding environment, (2) extracting hazards and the sea surface state from the imaging sensors/map, and (3) planning a path and control surface settings that avoid the hazards, accomplish the mission navigation goals, and mitigate crew injuries from excessive heave, pitch, and roll accelerations, while taking into account the dynamics of the sea surface state. The first part is solved using a wide-baseline stereo system, where 3D structure is determined from two calibrated pairs of visual imagers. Once the 3D map is derived, anything above the sea surface is classified as a potential hazard, and a surface analysis gives a static snapshot of the waves. The dynamics of the wave features are obtained from a frequency analysis of motion vectors derived from the orientation of the waves over a sequence of inputs. Fusion of the dynamic wave patterns with the 3D maps and the IMU outputs is used for efficient safe path planning.
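    A minimal sketch of the frequency-analysis step follows: estimating the dominant wave frequency from a time series of surface elevation at a fixed point in the 3D map. The sample rate and signal are illustrative.

        import numpy as np

        def dominant_wave_frequency(elevation_series, sample_rate_hz):
            """Return the peak frequency (Hz) of a detrended elevation time series."""
            x = elevation_series - elevation_series.mean()
            spectrum = np.abs(np.fft.rfft(x))
            freqs = np.fft.rfftfreq(x.size, d=1.0 / sample_rate_hz)
            return freqs[spectrum[1:].argmax() + 1]           # skip the DC bin

        # Example: a 0.25 Hz swell sampled at 10 Hz for 60 seconds
        t = np.arange(0, 60, 0.1)
        swell = 0.8 * np.sin(2 * np.pi * 0.25 * t)
        print(dominant_wave_frequency(swell, 10.0))           # ~0.25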

    Systems and Methods for Automated Vessel Navigation Using Sea State Prediction

    Systems and methods for sea state prediction and autonomous navigation in accordance with embodiments of the invention are disclosed. One embodiment of the invention includes a method of predicting a future sea state including generating a sequence of at least two 3D images of a sea surface using at least two image sensors, detecting peaks and troughs in the 3D images using a processor, identifying at least one wavefront in each 3D image based upon the detected peaks and troughs using the processor, characterizing at least one propagating wave based upon the propagation of wavefronts detected in the sequence of 3D images using the processor, and predicting a future sea state from the at least one characterized propagating wave using the processor. Another embodiment includes a method of autonomous vessel navigation based upon a predicted sea state and a target location.
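    A minimal sketch of the wavefront characterization and prediction steps follows: fit a constant-velocity model to crest positions tracked across the sequence of 3D images, then extrapolate. The constant-velocity model and data layout are simplifying assumptions, not the claimed method itself.

        import numpy as np

        def characterize_wavefront(times, crest_positions):
            """Least-squares fit of position = p0 + v * t for one tracked crest.

            times: (N,) seconds; crest_positions: (N, 2) x/y crest centroids (meters).
            """
            A = np.column_stack([np.ones_like(times), times])
            coeffs, *_ = np.linalg.lstsq(A, crest_positions, rcond=None)
            p0, velocity = coeffs[0], coeffs[1]
            return p0, velocity                      # meters, meters/second

        def predict_position(p0, velocity, t_future):
            """Extrapolate the crest position to a future time."""
            return p0 + velocity * t_future

        # Example: a crest advancing toward the vessel at about 2 m/s
        t = np.array([0.0, 0.5, 1.0, 1.5])
        crests = np.array([[10.0, 0.0], [9.0, 0.1], [8.1, 0.0], [7.0, 0.1]])
        p0, v = characterize_wavefront(t, crests)
        print(predict_position(p0, v, 3.0))          # crest roughly 4 m out at t = 3 s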

    Daytime Water Detection Based on Color Variation

    Robust water detection is a critical perception requirement for unmanned ground vehicle (UGV) autonomous navigation. This is particularly true in wide open areas, where water can collect in naturally occurring terrain depressions during periods of heavy precipitation and form large water bodies (such as ponds). At far range, reflections of the sky provide a strong cue for water. But at close range, the color coming out of a water body dominates sky reflections, and the water cue from sky reflections is of marginal use. We model this behavior by using water body intensity data from multiple frames of RGB imagery to estimate the total reflection coefficient contribution from surface reflections and the combination of all other factors. We then describe an algorithm that uses one of the color cameras in a forward-looking, UGV-mounted stereo vision perception system to detect water bodies in wide open areas. This detector exploits the knowledge that the change in the saturation-to-brightness ratio across a water body, from the leading to the trailing edge, is uniform and distinct from that of other terrain types. In test sequences approaching a pond under clear, overcast, and cloudy sky conditions, the true positive and false positive water detection rates were (95.76%, 96.71%, 98.77%) and (0.45%, 0.60%, 0.62%), respectively. This software has been integrated on an experimental unmanned vehicle and field tested at Ft. Indiantown Gap, PA.
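    A minimal sketch of the saturation-to-brightness cue follows: compute the mean S/V ratio per image row across a candidate region and test whether the row-to-row change is near-constant from the leading to the trailing edge. The uniformity tolerance is an illustrative assumption, not the published detector's test.

        import colorsys
        import numpy as np

        def sv_ratio_profile(rgb_region):
            """Mean saturation-to-brightness ratio per row, leading edge first.

            rgb_region: H x W x 3 float array in [0, 1].
            """
            profile = []
            for row in rgb_region:
                ratios = [colorsys.rgb_to_hsv(*px)[1] / max(colorsys.rgb_to_hsv(*px)[2], 1e-6)
                          for px in row]
                profile.append(np.mean(ratios))
            return np.asarray(profile)

        def has_uniform_sv_change(profile, tol=0.005):
            """True when the row-to-row change in S/V is near-constant."""
            return np.diff(profile).std() < tol

        # Example: a bluish region whose saturation fades smoothly toward the far edge
        region = np.empty((64, 32, 3))
        for i, s in enumerate(np.linspace(0.6, 0.2, 64)):
            region[i] = colorsys.hsv_to_rgb(0.6, s, 0.8)
        print(has_uniform_sv_change(sv_ratio_profile(region)))   # True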