20 research outputs found

    Lasers for robot control : navigation, telecommands and sensor needs for the future

    No full text
    Summary form only given. In designing robots, nature is a large source of inspiration. Since lasers do not exist in nature, they offer new and challenging possibilities for robot control. We outline two cases: laser-based beacon navigation as used in industry, and current efforts to fuse laser and vision so as to obtain reliable navigation in unstructured environments. Approved; 2000.

    Active uncertainty reduction during gripping using range cameras-dual control

    No full text
    This paper concerns sensor-based control for guiding a robot to a correct grip on objects whose position is highly uncertain. An eye-in-hand-mounted range camera is considered. A probabilistic problem formulation based on the requested posture at gripping and the corresponding tolerances is presented. The problem is solved approximately using dynamic programming for a 1-degree-of-freedom manipulator. A five-step dual control law is studied in more detail. A typical case is that in the first part of the control sequence the robot steers towards the optimal sensing position, and in the last part the error with respect to the gripping posture is minimized. Since range-camera sensing introduces both range-dependent noise and occlusion, there is a need for 'exploratory moves'. This behavior is formalized and amounts to 'dual control'. Approved; 1995.
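The five-step control law above can be caricatured with a brute-force search in place of dynamic programming. Everything here is an illustrative assumption, not the paper's model: a 1-D manipulator, a scalar Kalman variance update, and a `measurement_noise` profile that is lowest at a hypothetical optimal sensing position. The search still reproduces the qualitative behavior of steering to the best sensing position first and to the gripping posture last.

```python
import itertools

def measurement_noise(x, sensing_opt=0.5):
    # Assumed noise profile: range-camera noise is lowest at a
    # hypothetical optimal sensing position and grows with distance.
    return 0.05 + 0.5 * abs(x - sensing_opt)

def plan_five_steps(grip=0.0, sigma2=1.0,
                    candidates=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Brute-force stand-in for the dynamic program: try every 5-step
    position sequence; at each visited position a measurement shrinks
    the posture variance (scalar Kalman update with position-dependent
    noise), and the cost trades final variance against the final
    distance to the gripping posture."""
    best, best_cost = None, float("inf")
    for seq in itertools.product(candidates, repeat=5):
        var = sigma2
        for x in seq:
            r = measurement_noise(x) ** 2
            var = var * r / (var + r)        # Kalman variance update
        cost = var + (seq[-1] - grip) ** 2   # uncertainty + terminal error
        if cost < best_cost:
            best, best_cost = seq, cost
    return best

plan = plan_five_steps()
# Early steps gather information near the best sensing position;
# the last step moves to the gripping posture.
```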

    On covariances for fusing laser rangers and vision with sensors onboard a moving robot

    No full text
    Consider a robot that measures or operates on man-made objects randomly located in its workspace. The optronic sensing onboard the robot consists of a scanning, range-measuring time-of-flight laser and a CCD camera. The goal of the paper is to give explicit covariance matrices for the geometric primitives extracted from the surrounding workspace. Emphasis is on the correlation properties of the stochastic error models during motion. Topics studied include: (i) covariance of Radon/Hough peaks for plane surfaces; (ii) covariances for the intersection of two planes; (iii) equations for combining vision features, plane surfaces and range discontinuities; and (iv) explicit equations for how the covariance matrices are transformed during robot motion. Typical applications are models for verification and updating of CAD models when navigating inside buildings and industrial plants, and accumulation of sensor readings for a telecommanded robot. Approved; 1998.
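Item (iv), transforming covariance matrices under robot motion, reduces in the planar, rotation-only case to the congruence Sigma' = R Sigma R^T. A minimal sketch of that single case (illustrative only; the paper treats full geometric primitives, not this toy 2-D point feature):

```python
import math

def mat_mul(a, b):
    # Plain 2x2 matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(m):
    return [[m[0][0], m[1][0]], [m[0][1], m[1][1]]]

def propagate_cov(cov, theta):
    """First-order propagation of a 2x2 feature covariance through a
    rotation of the robot frame by theta: Sigma' = R Sigma R^T."""
    c, s = math.cos(theta), math.sin(theta)
    R = [[c, -s], [s, c]]
    return mat_mul(mat_mul(R, cov), transpose(R))
```

A 90-degree rotation, for instance, swaps the variances of the two axes while leaving the total uncertainty (the trace) unchanged.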

    Specular objects in range cameras : reducing ambiguities by motion

    No full text
    Range cameras using structured light and triangulation essentially rest on the assumption of a single diffuse reflection from the measured surfaces. Specular and transparent objects usually give multiple reflections, and direct triangulation can then produce different types of 'ghosts' in the range images. These 'ghosts' are likely to cause serious errors during gripping operations. As the robot moves, some of the 'ghosts' move in an inconsistent way. In this paper, the authors study, experimentally and theoretically, how the range measurements can be integrated in a consistent way during the motion of the robot. Emphasis is on parts with 'optical complications', including multiple scattering. For a scene with one planar mirror, the 'ghosts' are shown to lie in a plane separated from the laser plane; in this case the orientation and position of the mirror can be estimated. Approved; 1994.
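The motion-consistency idea can be sketched as follows: a real surface point maps to the same world coordinates from every robot pose, while a mirror 'ghost' generally does not. This is a simplified 2-D check with an assumed pose format (x, y, theta) and an arbitrary tolerance, not the paper's method:

```python
import math

def to_world(p, pose):
    """Transform a point from the robot frame to the world frame,
    given pose = (x, y, theta)."""
    x, y, th = pose
    c, s = math.cos(th), math.sin(th)
    return (x + c * p[0] - s * p[1], y + s * p[0] + c * p[1])

def motion_consistent(p1, pose1, p2, pose2, tol=0.05):
    """True if two measurements of the same candidate point, taken
    from two robot poses, agree in world coordinates. A specular
    'ghost' typically fails this test as the robot moves."""
    w1, w2 = to_world(p1, pose1), to_world(p2, pose2)
    return math.hypot(w1[0] - w2[0], w1[1] - w2[1]) < tol
```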

    Model based fusion of laser and camera : range discontinuities and motion consistency

    No full text
    Consider a robot that measures or operates on man-made objects randomly located in its workspace. The optronic sensing onboard the robot consists of a scanning, range-measuring time-of-flight laser and a CCD camera. Plane surfaces are modeled and their parameters extracted using the Radon/Hough transform. This extraction is very robust, and motion is included in a natural way. This paper gives additional results for range discontinuities. A multiple-model framework is suggested for fusing sensor information from laser and camera using parametric models of planar and cylindrical surfaces. An important issue is the mutual consistency between the motion, the range discontinuities, occlusion and the properties of the sensor combination. Typical applications are robust features for use during navigation in cluttered areas, models for verification and updating of CAD models when navigating inside buildings and industrial plants, and accumulation of sensor readings into a map during operation of a telecommanded robot. Approved; 2000.
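As a hedged illustration of the range-discontinuity part only: a jump-edge detector on a 1-D range scan can be as simple as thresholding the difference between consecutive readings. The threshold value here is an arbitrary assumption, and real scans would need outlier handling:

```python
def range_discontinuities(scan, jump=0.3):
    """Return the indices where consecutive range readings differ by
    more than `jump` (in the scan's units), indicating a depth
    discontinuity such as an object edge or an occlusion boundary."""
    return [i for i in range(1, len(scan))
            if abs(scan[i] - scan[i - 1]) > jump]
```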

    On motion estimation for a mobile robot navigating in natural environment : matching laser range measurements using the distance transform

    No full text
    The goal behind this paper is to find a generic method for controlling the motion of a robot relative to an object of arbitrary shape. In this paper we study: modelling laser range measurements for different types of objects and surface properties, with emphasis on outdoor scenes; and testing the distance transform on measurements for estimating the motion of a robot relative to an object of arbitrary shape. The emphasis of the present study is on gaining experience with the error mechanisms of the distance transform when tested on cluttered laser measurements. Both natural and man-made objects are used and compared in the tests. Approved; 1992.
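The matching step can be sketched in miniature. The sketch below evaluates the quantity a distance transform tabulates by brute force (nearest-neighbour distances) and searches a small grid of candidate translations; the paper's grid-based distance transform and full motion estimation are not reproduced here:

```python
import math

def chamfer_score(offset, model_pts, scene_pts):
    """Sum over model points of the distance to the nearest scene point
    after translating the model by `offset`. A precomputed distance
    transform would make this lookup O(1) per point; here it is done
    by brute force for clarity."""
    dx, dy = offset
    return sum(min(math.hypot(mx + dx - sx, my + dy - sy)
                   for sx, sy in scene_pts)
               for mx, my in model_pts)

def estimate_translation(model_pts, scene_pts, search):
    # Pick the candidate offset with the lowest matching cost.
    return min(search, key=lambda o: chamfer_score(o, model_pts, scene_pts))
```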

    Remote CAN operations in Matlab over the Internet

    No full text
    This paper describes the implementation of a CAN server that acts as a CAN tool for a client. It can be used to monitor, observe and send messages to a distant CAN network over IEEE 802.11b (Wave-LAN). The CAN server is controlled by one or several clients that connect to it over TCP/IP. Since the client software is written in Java, CAN messages can be sent and received over the Internet from a MATLAB environment. The CAN server collects CAN messages and stores them in a ring buffer. The messages in the ring buffer are classified by their identifier and stored in a database. The CAN tool has been used in a demonstrative application: a remotely controlled wheelchair programmed to drive in a square. The positions obtained from odometric CAN messages are compared with the position from the navigation system onboard the wheelchair. Approved; 2004.
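The ring buffer with classification by identifier can be sketched as below. The class name, buffer size and tuple message format are assumptions for illustration; the actual server also speaks TCP/IP and persists messages to a database, which is omitted here:

```python
from collections import deque, defaultdict

class CanBuffer:
    """Fixed-size ring buffer of CAN messages; messages are also
    indexed by their identifier, mirroring the server's
    classification step."""
    def __init__(self, size=1024):
        self.ring = deque(maxlen=size)   # oldest messages fall out
        self.by_id = defaultdict(list)

    def store(self, can_id, data):
        msg = (can_id, bytes(data))
        self.ring.append(msg)
        self.by_id[can_id].append(msg)

    def latest(self, can_id):
        """Most recent message with this identifier, or None."""
        msgs = self.by_id.get(can_id)
        return msgs[-1] if msgs else None
```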

    Mobile robot localization: integrating measurements from a time-of-flight laser

    No full text
    This paper presents an algorithm for environment mapping that integrates scans from a time-of-flight laser with odometer readings from a mobile robot. The range weighted Hough transform (RWHT) is used as a robust method to extract lines from the range data. The resulting peaks in the RWHT serve as feature coordinates when these lines/walls are used as landmarks during navigation. The associations between observations over the time sequence are made in a systematic way using a decision directed classifier. Natural geometrical landmarks are described in the robot frame together with a covariance matrix representing the spatial uncertainty. The map is thus built up incrementally as the robot moves. If the map is given in advance, the robot can find its location and navigate relative to this a priori given map. Experimental results are presented for a mobile robot with a scanning range-measuring laser having 2 cm resolution. The algorithm was also used for an autonomous plastering robot on a construction site. The sensor fusion algorithm makes few erroneous associations. Approved; 1996.
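A minimal version of RWHT-style line extraction might look as follows. The discretization (180 angle bins, 5 cm rho bins) and the choice of the raw range as the vote weight are assumptions for illustration, not the paper's exact formulation:

```python
import math
from collections import Counter

def rwht(points, n_theta=180, rho_step=0.05):
    """Range weighted Hough transform sketch: each (x, y) point votes
    for the lines x*cos(theta) + y*sin(theta) = rho passing through
    it, weighted by its range, which compensates for the lower point
    density a scanning laser returns from distant walls."""
    acc = Counter()
    for x, y in points:
        w = math.hypot(x, y)                 # range weight
        for i in range(n_theta):
            th = math.pi * i / n_theta
            rho = x * math.cos(th) + y * math.sin(th)
            acc[(i, round(rho / rho_step))] += w
    return acc

def strongest_line(points, n_theta=180, rho_step=0.05):
    """(theta, rho) of the accumulator peak."""
    (i, k), _ = rwht(points, n_theta, rho_step).most_common(1)[0]
    return math.pi * i / n_theta, k * rho_step
```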

    On robot navigation using identical landmarks : integrating measurements from a time-of-flight laser

    No full text
    This paper presents an algorithm for fusing scans from a time-of-flight laser with odometer readings from the robot. The range weighted Hough transform is used as a robust method to extract lines from the range data. The resulting peaks serve as feature coordinates when these lines/walls are used as landmarks during navigation. The associations between observations over the time sequence are made in a systematic way using a decision directed classifier. Natural geometrical landmarks are described in the robot frame together with a covariance matrix representing the spatial uncertainty. The map is thus built incrementally as the robot moves. If the map is given in advance, the robot can find its location and navigate relative to it. Experimental results and simulations are presented for a mobile robot with a scanning range-measuring laser with 2 cm resolution. Approved; 1994.
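The association step can be approximated by gated nearest-neighbour matching of observations against the current landmark set. The isotropic (scalar) landmark covariance and the chi-square-style gate value below are simplifying assumptions, not the paper's decision directed classifier:

```python
def associate(observations, landmarks, gate=9.0):
    """Match each (x, y) observation to the landmark minimizing the
    squared Mahalanobis distance, but only inside the validation gate;
    otherwise return None for it (i.e. spawn a new landmark).
    Landmarks are ((x, y), variance) with isotropic covariance."""
    matches = []
    for ox, oy in observations:
        best_j, best_d2 = None, gate
        for j, ((lx, ly), var) in enumerate(landmarks):
            d2 = ((ox - lx) ** 2 + (oy - ly) ** 2) / var
            if d2 < best_d2:
                best_j, best_d2 = j, d2
        matches.append(best_j)
    return matches
```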

    Docking to pallets with feedback from a sheet-of-light range camera

    No full text
    The problem studied is feedback for docking a mobile robot or AGV to a pallet. The pallet is of known size but carries an essentially unknown load. The pallet's initial pose (position and orientation) is uncertain to the order of ±15 cm and ±20 degrees; the docking error is required to be within ±1 cm and ±1 degree with a "very low" failure rate. For the docking, a combination of a range camera and a video camera is used; the paper emphasizes the range camera. Experimental results from this work in progress are presented: successful docking has been achieved with typical errors of ±5 mm. Currently one weak part is the integration with the control system onboard the robot. Our persistent experience from this and earlier tests is that the weak part when using non-contact sensing for robot feedback is the association problem. It should also be mentioned that the resolution of a range camera is strongly distance dependent. One finding of the paper is that this type of docking is feasible and can be made self-monitoring. Approved; 2000.
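The range-camera feedback can be illustrated with a least-squares line fit to sheet-of-light points on the pallet front face, yielding the heading error and the stand-off distance a docking controller would servo on. The frame convention (x forward, y lateral) and the chosen outputs are assumptions for illustration, not the paper's formulation:

```python
import math

def pallet_pose(points):
    """Least-squares fit of the face line x = a + b*y to (x, y) points
    from a sheet-of-light range camera (robot frame: x forward,
    y lateral). Returns (heading_error, distance): the heading error
    is 0 when the face is square to the robot, and the distance is
    the mean stand-off along x."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    b = sxy / syy                  # tilt of the face in the robot frame
    return math.atan(b), mx
```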