    Seeing the Fruit for the Leaves: Robotically Mapping Apple Fruitlets in a Commercial Orchard

    Aotearoa New Zealand has a strong and growing apple industry but struggles to access workers for skilled, seasonal tasks such as thinning. To ensure effective thinning and make informed decisions on a per-tree basis, it is crucial to accurately measure the crop load of individual apple trees. However, this task is challenging because dense foliage hides the fruitlets within the tree structure. In this paper, we introduce the vision system of an automated apple fruitlet thinning robot, developed to tackle the labour shortage issue, and present the initial design, implementation, and evaluation of the system. The platform straddles the 3.4 m tall 2D apple canopy structures to create an accurate map of the fruitlets on each tree. We show that this platform can measure the fruitlet load on an apple tree by scanning both sides of the branch. The requirement for an overarching platform was justified, since two-sided scans achieved a higher counting accuracy (81.17%) than one-sided scans (73.7%). The system was also demonstrated to produce size estimates within 5.9% RMSE of the true size.
    Comment: Accepted at the International Conference on Intelligent Robots and Systems (IROS 2023)
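The abstract does not specify how detections from the two opposing scans are associated, so the following is only a toy sketch of the idea that a fruitlet visible from both sides should be counted once: detections from each side are expressed in a shared tree frame, and a detection from the second side is discarded if it lies within a small merge radius of one already counted. The function name, the greedy nearest-point rule, and the 2 cm radius are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def merge_two_sided_count(side_a, side_b, merge_radius=0.02):
    """Count fruitlets from two opposing scans of the same branch.

    side_a, side_b: lists of 3D fruitlet centroids (metres) in a shared
    tree frame. Detections from side_b within merge_radius of an already
    counted fruitlet are treated as repeat sightings, not new fruit.
    """
    merged = [np.asarray(p, dtype=float) for p in side_a]
    for p in side_b:
        p = np.asarray(p, dtype=float)
        if all(np.linalg.norm(p - q) > merge_radius for q in merged):
            merged.append(p)
    return len(merged)

# Two fruitlets seen from side A; side B re-detects the first (5 mm away)
# and additionally sees one fruitlet occluded from side A.
side_a = [[0.00, 0.0, 0.0], [0.10, 0.0, 0.0]]
side_b = [[0.005, 0.0, 0.0], [0.30, 0.0, 0.0]]
count = merge_two_sided_count(side_a, side_b)  # → 3
```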

    Automatic Robot Hand-Eye Calibration Enabled by Learning-Based 3D Vision

    Hand-eye calibration, a fundamental task in vision-based robotic systems, aims to estimate the transformation matrix between the coordinate frame of the camera and the robot flange. Most approaches to hand-eye calibration rely on external markers or human assistance. We propose Look at Robot Base Once (LRBO), a novel methodology that addresses the hand-eye calibration problem without external calibration objects or human support, using only the robot base. From point clouds of the robot base, a transformation matrix from the coordinate frame of the camera to the robot base is established as I=AXB. To this end, we exploit learning-based 3D detection and registration algorithms to estimate the location and orientation of the robot base. The robustness and accuracy of the method are quantified by ground-truth-based evaluation, and the accuracy is compared with other 3D vision-based calibration methods. To assess the feasibility of our methodology, we carried out experiments using a low-cost structured-light scanner across varying joint configurations and groups of experiments. The proposed hand-eye calibration method achieved a translation deviation of 0.930 mm and a rotation deviation of 0.265 degrees. Additionally, the 3D reconstruction experiments demonstrated a rotation error of 0.994 degrees and a position error of 1.697 mm. Moreover, our method can be completed in about 1 second, the fastest among the compared 3D hand-eye calibration methods. Code is released at github.com/leihui6/LRBO.
    Comment: 17 pages, 19 figures, 6 tables, submitted to MSS
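Given the closure constraint I = AXB stated in the abstract, where A (camera → base) comes from point-cloud registration of the base and B (base → flange) comes from the robot's forward kinematics, the unknown X (flange → camera) follows in closed form per measurement. The sketch below only illustrates that algebra on synthetic homogeneous transforms; the helper names are our own, and it does not reproduce LRBO's learning-based detection and registration of the base.

```python
import numpy as np

def rot_z(theta):
    """3x3 rotation about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def solve_hand_eye(A, B):
    """Solve I = A @ X @ B for X, i.e. X = A^-1 @ B^-1."""
    return np.linalg.inv(A) @ np.linalg.inv(B)

# Synthetic check: choose a ground-truth X, derive the consistent A, recover X.
X_true = make_T(rot_z(0.3), [0.05, 0.02, 0.10])   # flange -> camera (assumed)
B = make_T(rot_z(-1.1), [0.40, -0.10, 0.55])      # base -> flange (kinematics)
A = np.linalg.inv(X_true @ B)                     # camera -> base (registration)
X_est = solve_hand_eye(A, B)                      # recovers X_true
```

In practice the registration estimate of A is noisy, so multiple joint configurations would be collected and the per-measurement solutions averaged or jointly optimised.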

    Modeling and Control of Flexible Link Manipulators

    Autonomous maritime navigation and offshore operations have gained wide attention with the aim of reducing operational costs and increasing reliability and safety. Offshore operations, such as wind farm inspection, sea farm cleaning, and ship mooring, could be carried out autonomously or semi-autonomously by mounting one or more long-reach robots on the ship/vessel. In addition to offshore applications, long-reach manipulators can be used in many other engineering applications, such as construction automation, the aerospace industry, and space research. Some applications require long and slender mechanical structures, which possess some degree of flexibility and deflection because of the material used and the length of the links. The link elasticity causes deflection, leading to problems in precise position control of the end-effector. It is therefore necessary to compensate for the deflection of the long-reach arm to fully utilize long-reach lightweight flexible manipulators. This thesis aims to present a unified understanding of the modeling, control, and application of long-reach flexible manipulators. State-of-the-art dynamic modeling techniques and control schemes for flexible link manipulators (FLMs) are discussed along with their merits, limitations, and challenges. The kinematics and dynamics of a planar multi-link flexible manipulator are presented. The effects of robot configuration and payload on the mode shapes and eigenfrequencies of the flexible links are discussed. A method to estimate and compensate for the static deflection of multi-link flexible manipulators under gravity is proposed and experimentally validated. The redundant degree of freedom of the planar multi-link flexible manipulator is exploited to minimize vibrations. The application of a long-reach arm in autonomous mooring operation based on sensor fusion using camera and light detection and ranging (LiDAR) data is proposed.
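The thesis's own estimation and compensation method is not detailed in the abstract. As a minimal sketch of the underlying idea, standard Euler-Bernoulli beam theory gives the static tip deflection of a uniformly loaded cantilever as delta = w*L^4 / (8*E*I), and a small joint-angle offset of roughly atan(delta/L) can lift the tip back toward its nominal position. All numerical values below (aluminium link, 2 m length, assumed second moment of area) are illustrative assumptions.

```python
import math

def static_tip_deflection(w, L, E, I):
    """Tip deflection (m) of a cantilever under uniform load w (N/m):
    delta = w * L^4 / (8 * E * I), from Euler-Bernoulli beam theory."""
    return w * L**4 / (8 * E * I)

def compensation_angle(delta, L):
    """Small joint-angle offset (rad) that raises the tip by ~delta."""
    return math.atan2(delta, L)

E = 70e9          # Pa, Young's modulus of aluminium
I = 2.0e-8        # m^4, second moment of area of a slender tube (assumed)
L = 2.0           # m, link length
w = 1.5 * 9.81    # N/m, self-weight of a 1.5 kg/m link under gravity

delta = static_tip_deflection(w, L, E, I)   # ~0.021 m of droop at the tip
dtheta = compensation_angle(delta, L)       # ~0.011 rad joint correction
```

For a multi-link arm, such a correction would be applied per joint with the gravity load of the distal links included, which is presumably where the thesis's estimation scheme comes in.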

    Semantic models of scenes and objects for service and industrial robotics

    What may seem straightforward for the human perception system is still challenging for robots. Automatically segmenting the elements with the highest relevance or salience, i.e. the semantics, is non-trivial given the high level of variability in the world and the limits of vision sensors. This is especially true when multiple ambiguous sources of information are available, as is the case with moving robots. This thesis leverages the availability of contextual cues and multiple points of view to make the segmentation task easier. Four robotic applications are presented, two designed for service robotics and two for an industrial context. Semantic models of indoor environments are built by enriching geometric reconstructions with semantic information about objects, structural elements, and humans. Our approach leverages the importance of context and the availability of multiple sources of information and view points, showing through extensive experiments on several datasets that these are all crucial elements for boosting state-of-the-art performance. Furthermore, moving to applications in which robots analyze object surfaces rather than their surroundings, semantic models of Carbon Fiber Reinforced Polymers are built by augmenting geometric models with accurate measurements of superficial fiber orientations and inner defects invisible to the human eye. We succeeded in reaching industrial-grade accuracy, making these models useful for autonomous quality inspection and process optimization. In all applications, special attention is paid to fast methods suitable for real robots, such as the two prototypes presented in this thesis.