6 research outputs found

    Some Useful Results for Closed-Form Propagation of Error in Vehicle Odometry

    Odometry can be modelled as a nonlinear dynamical system. The linearized error propagation equations for both deterministic and random errors in the odometry process have time-varying coefficients and therefore may not be easy to solve. However, the odometry process exhibits a property, here called “commutable dynamics”, which makes the transition matrix easy to compute. As a result, an essentially closed-form solution to both deterministic and random linearized error propagation is available. Examination of the general solution indicates that error expressions depend on a few simple path functionals, which are analogous to the moments of mechanics and equal to the first two coefficients of the power and Fourier series of the path followed. The resulting intuitive understanding of error dynamics is a valuable tool for many problems of mobile robotics: required sensor performance can be computed from tolerable error; trajectories can be designed to minimize error for operation or to maximize it for calibration and evaluation purposes; and optimal estimation algorithms can be implemented in nearly closed form for small-footprint embedded applications.
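    The closed-form results build on the standard linearized propagation recursion, which is easy to state concretely. The sketch below, in Python with NumPy, propagates a pose covariance through one step of planar odometry; the state Jacobian F carries the time-varying (heading-dependent) coefficients the abstract refers to. The unicycle-style inputs (distance travelled, heading change) and all names are our own illustration, not the paper's notation, and the paper's closed-form path functionals are not reproduced here.

    ```python
    import numpy as np

    def propagate_odometry_covariance(x, P, d, dphi, Q):
        """One step of linearized error propagation for planar odometry.

        x    : state [px, py, theta]
        P    : 3x3 state covariance
        d    : distance travelled this step (e.g. from wheel encoders)
        dphi : heading change this step
        Q    : 2x2 covariance of the (d, dphi) input noise
        """
        theta = x[2]
        # Nominal (nonlinear) motion update.
        x_next = x + np.array([d * np.cos(theta), d * np.sin(theta), dphi])
        # Jacobian w.r.t. the state: these are the time-varying
        # coefficients, since they depend on heading along the path.
        F = np.array([[1.0, 0.0, -d * np.sin(theta)],
                      [0.0, 1.0,  d * np.cos(theta)],
                      [0.0, 0.0,  1.0]])
        # Jacobian w.r.t. the inputs (d, dphi).
        G = np.array([[np.cos(theta), 0.0],
                      [np.sin(theta), 0.0],
                      [0.0,           1.0]])
        # Standard linearized (EKF-style) covariance recursion.
        P_next = F @ P @ F.T + G @ Q @ G.T
        return x_next, P_next
    ```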

    Photogeometric Sensing for Mobile Robot Control and Visualisation Tasks

    Photogeometric sensing is a relatively new sensor modality that tightly integrates geometry and appearance sensing into a single package. Such a sensor produces imagery that encodes both the appearance of and the range to every sensed point in the scene. This new type of sensor enables much higher-fidelity virtualized reality displays that can be produced in real time from data gathered by a moving robot. Such displays exhibit several ideal characteristics for human-robot interaction (HRI) tasks and enable new approaches to supervisory control and remote visualization. Photogeometric sensors suitable for HRI applications cannot yet be purchased, but they can be constructed by co-locating ranging and appearance sensors and combining the data at the pixel level. This paper outlines our approach to the construction of such sensors as well as their successful use in several applications.
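    The pixel-level combination the abstract describes can be sketched as projecting each lidar return into a co-located camera image and keeping the colour at the hit pixel. The function below is a minimal illustration assuming known extrinsic calibration T_cam_lidar and camera intrinsics K (both our own names); it is not the construction from the paper.

    ```python
    import numpy as np

    def colorize_points(points_lidar, T_cam_lidar, K, image):
        """Attach an appearance value to each range point by projecting
        lidar points into a co-located camera image.

        points_lidar : (N, 3) points in the lidar frame
        T_cam_lidar  : 4x4 rigid transform, lidar frame -> camera frame
        K            : 3x3 camera intrinsic matrix
        image        : (H, W, 3) RGB image
        """
        n = points_lidar.shape[0]
        pts_h = np.hstack([points_lidar, np.ones((n, 1))])
        pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
        in_front = pts_cam[:, 2] > 0.0            # keep points ahead of the camera
        uv = (K @ pts_cam[in_front].T).T
        uv = (uv[:, :2] / uv[:, 2:3]).astype(int)  # perspective divide
        h, w = image.shape[:2]
        valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        colors = image[uv[valid, 1], uv[valid, 0]]
        # Each surviving 3D point now carries the colour it projects onto.
        return pts_cam[in_front][valid], colors
    ```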

    Integrated Air/Ground Vehicle System for Semi-Autonomous Off-Road Navigation

    Current unmanned vehicle systems enable exploration of and travel through remote areas, but they demand significant communications resources and constant human operation. DARPA and the US Army have recognized these limitations and are now pursuing semi-autonomous vehicle systems in the Future Combat Systems (FCS) program. FCS places high demands on robotic systems, which must assess mobility hazards under all weather conditions, day and night, in the presence of smoke and other airborne obscurants, and with the possibility of communications dropouts. Perhaps the most challenging mobility hazard is the "negative obstacle", such as a hole or ditch. These hazards are difficult to see from the ground, limiting maximum vehicle speed; from the air, however, they are often far easier to detect. In this context, we present a novel semi-autonomous unmanned ground vehicle (UGV) that includes a dedicated unmanned air vehicle, a "Flying Eye" (FE), that flies ahead of the UGV to detect holes and other hazards before the onboard UGV sensors would otherwise be able to detect them. This concept is shown in Figure 1. The FE can be considered a longer-range “scout” that explores terrain before the UGV must traverse it, allowing the UGV to avoid certain areas entirely because of hazards or cul-de-sacs detected by the FE. The UGV itself carries an extensive sensor suite to address the broad range of operating conditions at slower speeds and to confirm the hazards seen from the air. In the fully developed system, the UGV will deploy the FE from a landing bay on its back; for covert operations, the FE will return to that landing bay. This paper presents the current prototype system as well as initial field experiments on its performance.
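    For intuition about why negative obstacles are so hard to see from the ground, consider the common range-based cue sketched below: on flat ground, a downward-looking beam should intersect the surface at a predictable range, so a return that travels much farther than expected hints at a hole or ditch ahead. This is a generic heuristic under a flat-ground assumption, with hypothetical names; it is not the Flying Eye pipeline described in the paper.

    ```python
    import numpy as np

    def negative_obstacle_scores(ranges, elevations, sensor_height):
        """Flag lidar returns whose range is much longer than the range a
        flat ground plane would produce at the same downward-looking angle.

        ranges        : measured range per beam, metres (float array)
        elevations    : beam elevation angles, radians (negative = below horizon)
        sensor_height : sensor height above local ground, metres
        """
        down = elevations < 0.0
        expected = np.full_like(ranges, np.inf)
        # On flat ground, a beam at elevation e < 0 hits at range h / sin(-e).
        expected[down] = sensor_height / np.sin(-elevations[down])
        # Score > 1 means the beam overshot the expected ground intersection,
        # consistent with a hole or ditch; upward beams score 0 (no evidence).
        return ranges / expected
    ```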

    Real-Time, Multi-Perspective Perception for Unmanned Ground Vehicles

    The most challenging technical problems facing successful autonomous UGV operation in off-road environments are reliable sensing and perception. In this paper, we describe our progress over the last year toward solving these problems in Phase II of DARPA’s PerceptOR program. We have developed a perception system that combines laser, camera, and proprioceptive sensing elements on both ground and air platforms to detect and avoid obstacles in natural terrain environments. The perception system has been rigorously tested in a variety of environments and has improved over time as problems have been identified and systematically solved. The paper describes the perception system and the autonomous vehicles, presents results from some experiments, and summarizes the current capabilities and limitations.
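    One simple way to combine evidence from multiple sensing elements is a weighted fusion of per-sensor traversal-cost grids, as sketched below. The weighting scheme and all names are our own assumption for illustration, not necessarily the fusion rule used in the PerceptOR system.

    ```python
    import numpy as np

    def fuse_cost_maps(layers, weights):
        """Combine per-sensor traversal-cost grids into one cost map.

        layers  : list of (H, W) arrays, np.nan where that sensor has no data
        weights : one relative weight per layer
        """
        stack = np.stack(layers)
        w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
        valid = ~np.isnan(stack)
        # Weighted average over whichever layers actually observed each cell.
        weighted = np.where(valid, stack * w, 0.0)
        norm = np.where(valid, np.broadcast_to(w, stack.shape), 0.0).sum(axis=0)
        fused = weighted.sum(axis=0) / np.maximum(norm, 1e-9)
        fused[norm == 0.0] = np.nan   # no sensor saw this cell
        return fused
    ```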

    Real-Time Photorealistic Virtualized Reality Interface for Remote Mobile Robot Control

    The task of teleoperating a robot over a wireless video link is known to be very difficult. Teleoperation becomes even more difficult when the robot is surrounded by dense obstacles, when speed requirements are high, when video quality is poor, or when wireless links are subject to latency. Thanks to high-quality lidar data and improvements in computing and video compression, virtualized reality now has the capacity to dramatically improve teleoperation performance, even in high-speed situations that were formerly impossible. In this paper, we demonstrate the conversion of dense geometry and appearance data, generated on the move by a mobile robot, into a photorealistic rendering model that gives the user a synthetic exterior line-of-sight view of the robot, including the context of its surrounding terrain. This technique converts teleoperation into virtual line-of-sight remote control. The underlying metrically consistent environment model also introduces the capacity to remove latency and enhance video compression. Display quality is sufficiently high that the user experience is similar to a driving video game in which the surfaces are textured with live video.
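    The core data transformation, turning co-registered range and appearance data into a renderable model, can be illustrated by triangulating a depth image into a mesh whose texture coordinates index the live video frame. The sketch below shows one minimal version under a pinhole camera model; the names and conventions are ours, not the paper's rendering pipeline.

    ```python
    import numpy as np

    def range_image_to_mesh(depth, K):
        """Triangulate a depth image into a textured mesh.

        One vertex per pixel, two triangles per pixel quad; texture
        coordinates come straight from pixel position so the co-registered
        video frame can be applied as the texture.

        depth : (H, W) depth along the optical axis, 0 where invalid
        K     : 3x3 camera intrinsic matrix
        """
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        # Back-project every pixel through the pinhole model.
        x = (u - K[0, 2]) * depth / K[0, 0]
        y = (v - K[1, 2]) * depth / K[1, 1]
        vertices = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
        # UVs in [0, 1]; the vertical flip matches typical GL texture layout.
        uvs = np.stack([u / (w - 1), 1.0 - v / (h - 1)], axis=-1).reshape(-1, 2)
        faces = []
        for r in range(h - 1):
            for c in range(w - 1):
                quad = depth[r:r + 2, c:c + 2]
                if np.all(quad > 0):      # skip quads touching invalid pixels
                    i00 = r * w + c
                    i01, i10, i11 = i00 + 1, i00 + w, i00 + w + 1
                    faces.append((i00, i01, i11))
                    faces.append((i00, i11, i10))
        return vertices, uvs, np.array(faces)
    ```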

    Tartan Racing: A Multi-Modal Approach to the DARPA Urban Challenge

    The Urban Challenge represents a technological leap beyond the previous Grand Challenges. The challenge encompasses three primary behaviors: driving on roads, handling intersections, and maneuvering in zones. In implementing urban driving, we have decomposed the problem into five components. Mission planning determines an efficient route through an urban network of roads. A behavioral layer executes the route through the environment, adapting to local traffic and exceptional situations as necessary. A motion planning layer safeguards the robot by considering the feasible trajectories available and selecting the best option. Perception combines data from lidar, radar, and vision systems to estimate the location of other vehicles, the location of static obstacles, and the shape of the road. Finally, the robot is a mechatronic system engineered to provide the power, sensing, and mobility necessary to navigate an urban course. Rigorous component and system testing evaluates progress using standardized tests; observations from these experiments shape the design of subsequent development spirals and enable the rapid detection and correction of bugs. The system described in this paper exhibits a majority of the basic navigation and traffic skills required for the Urban Challenge, and from these building blocks more advanced capabilities will quickly develop.
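    The five-component decomposition suggests a layered software architecture, with each layer narrowing the decision from route to maneuver to trajectory. The sketch below expresses one plausible set of interfaces in Python; every name is our own illustration of the decomposition, not Tartan Racing's actual code.

    ```python
    from dataclasses import dataclass, field
    from typing import List, Protocol

    @dataclass
    class WorldModel:
        """Fused perception estimate: other vehicles, static obstacles, road shape."""
        vehicles: List[object] = field(default_factory=list)
        obstacles: List[object] = field(default_factory=list)
        road: object = None

    class MissionPlanner(Protocol):
        def plan_route(self, road_network, goal): ...

    class Behavior(Protocol):
        def select_maneuver(self, route, world: WorldModel): ...

    class MotionPlanner(Protocol):
        def best_trajectory(self, maneuver, world: WorldModel): ...

    def drive_cycle(perceive, mission, behavior, motion, road_network, goal):
        """One pass through the layered pipeline the abstract describes."""
        world = perceive()                            # fuse lidar/radar/vision
        route = mission.plan_route(road_network, goal)
        maneuver = behavior.select_maneuver(route, world)
        return motion.best_trajectory(maneuver, world)
    ```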