3 research outputs found

    Knowledge modelling for the motion detection task

    In this article, knowledge modelling at the knowledge level for the task of detecting moving objects in image sequences is introduced. Three items have been the focus of the approach: (1) the convenience of modelling the knowledge of tasks and methods as a library of reusable components, in advance of the operationalisation of the primitive inferences; (2) the potential utility of looking to biology for inspiration; and (3) the convenience of using these biologically inspired problem-solving methods (PSMs) to solve motion-detection tasks. After a survey of the methods used to solve the motion-detection task, the task of detecting moving targets in indefinite sequences of images is approached by means of the algorithmic lateral inhibition (ALI) PSM. The task is decomposed into four subtasks: (a) thresholded segmentation; (b) motion detection; (c) obtaining silhouette parts; and (d) fusing moving-object silhouettes. For each of these subtasks, the inferential scheme is first obtained, and each of its inferences is then operationalised. Finally, some experimental results are presented, along with comments on the potential value of our approach.
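    As an illustration only (not the authors' ALI implementation, which the abstract does not detail), the four-subtask decomposition can be sketched in Python with NumPy; every function name here is hypothetical:

```python
import numpy as np

def threshold_segment(frame, levels=8):
    # (a) thresholded segmentation: quantise grey levels into bands
    edges = np.linspace(0, 256, levels + 1)[1:-1]
    return np.digitize(frame, edges)

def detect_motion(seg_prev, seg_curr):
    # (b) motion detection: a pixel "moves" when its band changes
    return seg_prev != seg_curr

def silhouette_parts(motion):
    # (c) silhouette parts: 4-connected components of moving pixels
    labels = np.zeros(motion.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(motion)):
        if labels[start]:
            continue
        current += 1
        stack = [start]
        while stack:
            y, x = stack.pop()
            if (0 <= y < motion.shape[0] and 0 <= x < motion.shape[1]
                    and motion[y, x] and not labels[y, x]):
                labels[y, x] = current
                stack += [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    return labels, current

def fuse_silhouettes(labels, n, gap=1):
    # (d) fusion: merge parts whose bounding boxes touch or overlap
    boxes = []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)
        boxes.append([ys.min(), xs.min(), ys.max(), xs.max()])
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                a, b = boxes[i], boxes[j]
                if (a[0] <= b[2] + gap and b[0] <= a[2] + gap and
                        a[1] <= b[3] + gap and b[1] <= a[3] + gap):
                    boxes[i] = [min(a[0], b[0]), min(a[1], b[1]),
                                max(a[2], b[2]), max(a[3], b[3])]
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    return boxes
```

    Feeding two consecutive frames through the chain yields fused bounding boxes of moving objects; the actual ALI method additionally applies lateral-inhibition dynamics within each subtask.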

    Visual target tracking for rover-based planetary exploration

    To command a rover to go to a location of scientific interest on a remote planet, the rover must be capable of reliably tracking the target designated by a scientist from about ten rover lengths away. The rover must maintain lock on the target while traversing rough terrain and avoiding obstacles, without the need for communication with Earth. Among the challenges of tracking targets from a rover are the large changes in the appearance and shape of the selected target as the rover approaches it, the limited frame rate at which images can be acquired and processed, and the sudden changes in camera pointing as the rover goes over rocky terrain. We have investigated various techniques for combining 2D and 3D information in order to increase the reliability of visually tracking targets under Mars-like conditions. We present the approaches that we have examined on simulated data and tested onboard the Rocky 8 rover in the JPL Mars Yard and the K9 rover in the ARC Marscape. These techniques include results for 2D trackers, ICP, visual odometry, and 2D/3D trackers.
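    For intuition only, a minimal sketch of the kind of 2D correlation tracker mentioned above, using zero-mean normalised cross-correlation over a search window around the previous target location (the fielded JPL/ARC trackers are far more sophisticated; all names here are illustrative):

```python
import numpy as np

def ncc_track(frame, template, center, radius):
    # Search a window of +/- radius pixels around the previous target
    # location; return the position with the highest zero-mean
    # normalised cross-correlation score. The window must stay inside
    # the frame; no bounds clamping is done in this sketch.
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t * t).sum()) or 1.0
    best, best_pos = -2.0, center
    cy, cx = center
    for y in range(cy - radius, cy + radius + 1):
        for x in range(cx - radius, cx + radius + 1):
            patch = frame[y - th // 2 : y - th // 2 + th,
                          x - tw // 2 : x - tw // 2 + tw]
            if patch.shape != template.shape:
                continue
            p = patch - patch.mean()
            pn = np.sqrt((p * p).sum()) or 1.0
            score = (p * t).sum() / (pn * tn)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best
```

    A 2D/3D tracker of the kind the abstract describes would additionally re-project the tracked point through stereo range data and rover pose (e.g. from visual odometry) to predict the search window and rescale the template as the target grows in the image.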

    Initial Results from Vision-based Control of the Ames Marsokhod Rover

    No full text
    A terrestrial geologist investigates an area by systematically moving among and inspecting surface features, such as outcrops, boulders, contacts, and faults. A planetary geologist must explore remotely, using a robot to approach and image surface features. To date, position-based control has been developed to accomplish this task. This method requires an accurate estimate of the feature position and frequent updates of the robot’s position. In practice it is error prone, since it relies on interpolation and on continuous integration of data from inertial or odometric sensors or other position-determination techniques. The development of vision-based control of robot manipulators suggests an alternative approach for mobile robots. We have developed a vision-based control system that enables our Marsokhod mobile robot to drive autonomously to within sampling distance of a visually designated natural feature. This system uses a robust correlation technique based on matching the sign of the difference-of-Gaussians of images. We describe our system and our initial results from using it during a field experiment in the Painted Desert of Arizona.
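    A toy sketch of sign-of-difference-of-Gaussians matching, assuming the simplest reading of the technique (band-pass filter each image with two Gaussians, keep only the sign of the difference, then score candidate positions by the fraction of agreeing signs); function names and parameters are illustrative, not from the paper:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    # 1D Gaussian, normalised to sum to 1
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def smooth(img, sigma):
    # separable Gaussian blur with edge padding
    r = max(1, int(3 * sigma))
    k = gaussian_kernel(sigma, r)
    pad = np.pad(img, r, mode='edge').astype(float)
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, 'valid'), 1, pad)
    return np.apply_along_axis(lambda col: np.convolve(col, k, 'valid'), 0, tmp)

def sign_dog(img, s1=1.0, s2=2.0):
    # sign (+1/0/-1) of the difference of two Gaussian-blurred copies
    return np.sign(smooth(img, s1) - smooth(img, s2))

def match_signs(sign_ref, sign_img):
    # slide the reference sign template over the image; the score at
    # each position is the fraction of pixels whose signs agree
    th, tw = sign_ref.shape
    H, W = sign_img.shape
    best, pos = -1.0, (0, 0)
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            score = (sign_ref == sign_img[y:y + th, x:x + tw]).mean()
            if score > best:
                best, pos = score, (y, x)
    return pos, best
```

    Matching on signs rather than raw intensities is what makes this style of correlation robust to the illumination changes encountered in outdoor field work.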