
    Robotic Crop Interaction in Agriculture for Soft Fruit Harvesting

    Autonomous tree crop harvesting has been a seemingly attainable, but elusive, robotics goal for the past several decades. Limiting grower reliance on uncertain seasonal labour is an economic driver of this, but the ability of robotic systems to treat each plant individually also has environmental benefits, such as reduced emissions and fertiliser use. Over the same time period, effective grasping and manipulation (G&M) solutions to warehouse product handling, and more general robotic interaction, have been demonstrated. Despite research progress in general robotic interaction and harvesting of some specific crop types, a commercially successful robotic harvester has yet to be demonstrated. Most crop varieties, including soft-skinned fruit, have not yet been addressed. Soft fruit, such as plums, present problems for many of the techniques employed for their more robust relatives and require special focus when developing autonomous harvesters. Adapting existing robotics tools and techniques to new fruit types, including soft-skinned varieties, is not well explored. This thesis aims to bridge that gap by examining the challenges of autonomous crop interaction for the harvesting of soft fruit. Aspects which are known to be challenging include mixed obstacle planning with both hard and soft obstacles present, poor outdoor sensing conditions, and the lack of proven picking motion strategies. Positioning an actuator for harvesting requires solving these problems and others specific to soft-skinned fruit. Doing so effectively means addressing these in the sensing, planning and actuation areas of a robotic system. Such areas are also highly interdependent for grasping and manipulation tasks, so solutions need to be developed at the system level. In this thesis, soft robotics actuators, with simplifying assumptions about hard obstacle planes, are used to solve mixed obstacle planning. Persistent target tracking and filtering is used to overcome challenging object detection conditions, while multiple stages of object detection are applied to refine these initial position estimates. Several picking motions are developed and tested for plums, with varying degrees of effectiveness. These various techniques are integrated into a prototype system which is validated in lab testing and extensive field trials on a commercial plum crop. Key contributions of this thesis include:
    I. The examination of grasping & manipulation tools, algorithms, techniques and challenges for harvesting soft-skinned fruit.
    II. Design, development and field-trial evaluation of a harvester prototype to validate these concepts in practice, with specific design studies of the gripper type, object detector architecture and picking motion.
    III. Investigation of specific G&M module improvements, including:
        o Application of the autocovariance least squares (ALS) method to noise covariance matrix estimation for visual servoing tasks, where both simulated and real experiments demonstrated a 30% improvement in state estimation error using this technique.
        o Theory and experimentation showing that a single range measurement is sufficient for disambiguating scene scale in monocular depth estimation for some datasets.
        o Preliminary investigations of stochastic object completion and sampling for grasping, active perception for visual servoing based harvesting, and multi-stage fruit localisation from RGB-Depth data.
    Several field trials were carried out with the plum harvesting prototype. Testing on an unmodified commercial plum crop, in all weather conditions, showed promising results with a harvest success rate of 42%. While a significant gap between prototype performance and commercial viability remains, the use of soft robotics with carefully chosen sensing and planning approaches allows for robust grasping & manipulation under challenging conditions, with both hard and soft obstacles.
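
    To make the scale-disambiguation contribution concrete: if a monocular depth network outputs depth only up to an unknown scale factor, a single metric range reading at a known pixel fixes that factor for the whole map. The sketch below is a minimal illustration under that assumption (a pure scale ambiguity, no affine offset); the function and variable names are ours, not the thesis's.

    import numpy as np

    def rescale_depth(relative_depth, range_measurement, pixel):
        """Recover a metric depth map from an up-to-scale monocular depth
        estimate using one absolute range reading at a known pixel."""
        row, col = pixel
        d_rel = relative_depth[row, col]
        if d_rel <= 0:
            raise ValueError("relative depth at the measured pixel must be positive")
        scale = range_measurement / d_rel      # one range reading fixes the scale
        return scale * relative_depth          # metric depth map

    # Example: a synthetic relative depth map and one 2.5 m range reading at the image centre
    rel = np.random.uniform(0.5, 2.0, size=(480, 640))
    metric = rescale_depth(rel, range_measurement=2.5, pixel=(240, 320))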

    Vision-based methods for state estimation and control of robotic systems with application to mobile and surgical robots

    For autonomous systems that need to perceive the surrounding environment in order to accomplish a given task, vision is a highly informative exteroceptive sensory source. Among the available sensors, the richness of visual data makes it possible to build a complete description of the environment, collecting geometrical and semantic information (e.g., object pose, distances, shapes, colors, lights). The large amount of collected data allows one to consider both methods exploiting the totality of the data (dense approaches) and methods using a reduced set obtained from feature extraction procedures (sparse approaches). This manuscript presents dense and sparse vision-based methods for the control and sensing of robotic systems. First, a safe navigation scheme for mobile robots, moving in unknown environments populated by obstacles, is presented. For this task, dense visual information is used to perceive the environment (i.e., detect the ground plane and obstacles) and, in combination with other sensory sources, to provide an estimate of the robot motion with a linear observer. On the other hand, sparse visual data are extracted in terms of geometric primitives in order to implement a visual servoing control scheme satisfying proper navigation behaviours. This controller relies on visually estimated information and is designed to guarantee safety during navigation. In addition, redundant structures are taken into account to re-arrange the internal configuration of the robot and reduce its encumbrance when the workspace is highly cluttered. Vision-based estimation methods are relevant also in other contexts. In the field of surgical robotics, having reliable data about unmeasurable quantities is both highly important and critical. In this manuscript, we present a Kalman-based observer to estimate the 3D pose of a suturing needle held by a surgical manipulator for robot-assisted suturing. The method exploits images acquired by the endoscope of the robot platform to extract relevant geometrical information and obtain projected measurements of the tool pose. This method has also been validated with a novel simulator designed for the da Vinci robotic platform, with the purpose of easing interfacing and employment in ideal conditions for testing and validation. The Kalman-based observers mentioned above are classical passive estimators, whose system inputs used to produce the estimate are theoretically arbitrary. This leaves no possibility to actively adapt the input trajectories in order to optimize specific requirements on the estimation performance. For this purpose, the active estimation paradigm is introduced and some related strategies are presented. More specifically, a novel active sensing algorithm employing dense visual information is described for a typical Structure-from-Motion (SfM) problem. The algorithm generates an optimal estimate of a scene observed by a moving camera, while minimizing the maximum uncertainty of the estimation. This approach can be applied to any robotic platform and has been validated with a manipulator arm equipped with a monocular camera.
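
    As a structural illustration of the Kalman-based observers discussed above, the following is a generic linear predict/update step; the manuscript's needle-pose observer works with projected endoscope measurements and its own state parameterisation, which this sketch makes no attempt to reproduce, and all matrix names are placeholders.

    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        """One predict/update cycle of a linear Kalman observer.
        x, P : previous state estimate and covariance
        z    : measurement extracted from the current image
        F, H : state-transition and measurement matrices
        Q, R : process- and measurement-noise covariances"""
        x_pred = F @ x                         # predict
        P_pred = F @ P @ F.T + Q
        S = H @ P_pred @ H.T + R               # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)  # update with the image measurement
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new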

    Unifying Foundation Models with Quadrotor Control for Visual Tracking Beyond Object Categories

    Visual control enables quadrotors to adaptively navigate using real-time sensory data, bridging perception with action. Yet challenges persist, including generalizing across scenarios, maintaining reliability, and ensuring real-time responsiveness. This paper introduces a perception framework grounded in foundation models for universal object detection and tracking, moving beyond specific training categories. Integral to our approach is a multi-layered tracker integrated with the foundation detector, ensuring continuous target visibility even when faced with motion blur, abrupt light shifts, and occlusions. Complementing this, we introduce a model-free controller tailored for resilient quadrotor visual tracking. Our system operates efficiently on limited hardware, relying solely on an onboard camera and an inertial measurement unit. Through extensive validation in diverse and challenging indoor and outdoor environments, we demonstrate our system's effectiveness and adaptability. In conclusion, our research represents a step forward in quadrotor visual tracking, moving from task-specific methods to more versatile and adaptable operations.
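
    The pipeline described above can be read as a detect-track-control loop. The sketch below only illustrates that structure under our own assumptions; every class, method and parameter name is a hypothetical placeholder, not the authors' API.

    def tracking_loop(camera, imu, detector, tracker, controller, drone, target_prompt):
        """Detect -> track -> control loop (all objects are illustrative placeholders)."""
        while True:
            frame = camera.read()
            # Open-vocabulary detection: the foundation detector is queried with a
            # free-form text prompt rather than a fixed category list.
            detections = detector.detect(frame, prompt=target_prompt)
            # The multi-layered tracker keeps the target locked through blur, light
            # changes and short occlusions, re-initialising from the detector as needed.
            bbox = tracker.update(frame, detections)
            if bbox is None:
                drone.hover()
                continue
            # Model-free controller: map the bounding-box error in the image to
            # velocity/attitude commands using only camera and IMU feedback.
            cmd = controller.compute(bbox, frame.shape, imu.read())
            drone.send(cmd)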

    Visual Servoing in Robotics

    Visual servoing is a well-known approach to guiding robots using visual information. Image processing, robotics, and control theory are combined in order to control the motion of a robot depending on the visual information extracted from the images captured by one or several cameras. On the vision side, a number of issues are currently being addressed by ongoing research, such as the use of different types of image features (or different types of cameras, such as RGB-D cameras), image processing at high velocity, and convergence properties. As shown in this book, the use of new control schemes allows the system to behave more robustly, efficiently, or compliantly, with fewer delays. Related issues such as optimal and robust approaches, direct control, path tracking, and sensor fusion are also addressed. Additionally, visual servoing systems are currently being applied in a number of different domains. This book considers various aspects of visual servoing systems, such as the design of new strategies for their application to parallel robots, mobile manipulators, and teleoperation, and the application of this type of control system in new areas.
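
    Most of the schemes discussed in the book build on the classical image-based visual servoing law v = -lambda * pinv(L) * (s - s*), where s are the current image features, s* their desired values, and L the interaction (image Jacobian) matrix. The sketch below is a minimal statement of that law; the gain value and variable names are illustrative.

    import numpy as np

    def ibvs_velocity(s, s_star, L, gain=0.5):
        """Classical IBVS law: camera twist v = -gain * pinv(L) @ (s - s_star)."""
        return -gain * np.linalg.pinv(L) @ (s - s_star)   # 6-vector (v, omega)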

    Robust and Multi-Objective Model Predictive Control Design for Nonlinear Systems

    The multi-objective trade-off paradigm has become a very valuable design tool in engineering problems that have conflicting objectives. Recently, many control designers have worked on design methods which satisfy multiple design specifications, known as multi-objective control design. However, the main challenge posed for MPC design lies in the high computational load, which prevents its application to the real-time control of fast dynamic systems. To meet this challenge, this thesis proposes several methods covering nonlinear system modeling, on-line MPC design and multi-objective optimization. First, the thesis proposes a robust MPC to control the shimmy vibration of a landing gear with probabilistic uncertainty. Then, an on-line MPC method is proposed for image-based visual servoing control of a 6 DOF Denso robot. Finally, a multi-objective MPC is introduced to allow designers to consider multiple objectives in MPC design. In this thesis, Tensor Product (TP) model transformation, a powerful tool for modeling complex nonlinear systems, is used to find linear parameter-varying (LPV) models of the nonlinear systems. The higher-order singular value decomposition (HOSVD) technique is used to obtain a minimal order of the model tensor. Furthermore, to design a robust MPC for nonlinear systems in the presence of uncertainties, which degrade the system performance and can lead to instability, we consider the parameters of the nonlinear systems with probabilistic uncertainties in the TP-transformation-based modeling. In this thesis, a computationally efficient method for MPC design of image-based visual servoing, i.e. a fast dynamic system, is proposed. The controller is designed considering the robotic visual servoing system's input and output constraints, such as the robot's physical limitations and visibility constraints. The main contributions of this thesis are: (i) the design of an MPC for nonlinear systems with probabilistic uncertainties that guarantees robust stability and performance of the systems; (ii) the development of a real-time MPC method for a fast dynamical system; (iii) a new multi-objective MPC for nonlinear systems using game theory. A diverse range of systems with nonlinearities and uncertainties, including a landing gear system and a 6 DOF Denso robot, is studied in this thesis. Simulation and real-time experimental results are presented and discussed to verify the effectiveness of the proposed methods.
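
    For readers unfamiliar with the receding-horizon structure underlying these designs, the sketch below sets up a generic constrained linear MPC step. It is only a toy illustration under our own assumptions (a fixed linear model, quadratic costs, a cvxpy-based solve); the thesis itself works with TP/LPV models and fast on-line formulations that this example does not capture.

    import cvxpy as cp

    def mpc_step(A, B, x0, x_ref, N=10, u_max=1.0):
        """One receding-horizon step: minimise tracking error and control effort over
        N steps subject to x_{k+1} = A x_k + B u_k and input bounds, then return
        only the first control move."""
        nx, nu = B.shape
        x = cp.Variable((nx, N + 1))
        u = cp.Variable((nu, N))
        cost = 0
        constraints = [x[:, 0] == x0]
        for k in range(N):
            cost += cp.sum_squares(x[:, k] - x_ref) + 0.1 * cp.sum_squares(u[:, k])
            constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                            cp.abs(u[:, k]) <= u_max]        # e.g. actuator limits
        cp.Problem(cp.Minimize(cost), constraints).solve()
        return u[:, 0].value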

    Robust Position-based Visual Servoing of Industrial Robots

    Recently, researchers have tried to use dynamic pose correction methods to improve the accuracy of industrial robots. Dynamic path tracking aims at adjusting the end-effector's pose by using a photogrammetry sensor and an eye-to-hand PBVS scheme. In this study, the research aims to enhance the accuracy of industrial robots by designing a chattering-free digital sliding mode controller integrated with a novel adaptive robust Kalman filter (ARKF), validated on a Puma 560 model in simulation. This study includes Gaussian noise generation, pose estimation, the design of an adaptive robust Kalman filter, and the design of a chattering-free sliding mode controller. The designed control strategy has been validated and compared with other control strategies in Matlab 2018a Simulink on a 64-bit PC. The main contributions of the research work are summarized as follows. First, noise removal in the pose estimation is carried out by the novel ARKF. The proposed ARKF deals with experimental noise generated by the photogrammetry observation sensor, a C-track 780. It exploits the advantages of an adaptive estimation method for the state noise covariance (Q), least-squares identification for the measurement noise covariance (R), and a robust mechanism for the state error covariance (P). The Gaussian noise generation is based on data collected from the C-track while the robot is stationary. A novel method for estimating the covariance matrix R, considering the effects of both velocity and pose, is suggested. Next, a robust PBVS approach for industrial robots based on a fast discrete sliding mode controller (FDSMC) and the ARKF is proposed. The FDSMC takes advantage of a nonlinear reaching law, which results in faster and more accurate trajectory tracking compared to standard DSMC. Substituting the switching function with a continuous nonlinear reaching law leads to a continuous output and thus eliminates chattering. Additionally, the sliding surface dynamics are taken to be nonlinear, which increases the convergence speed and accuracy. Finally, analysis techniques related to various types of sliding mode controllers have been used for comparison. Also, kinematic and dynamic models with revolute joints for the Puma 560 are built for simulation validation. Based on the computed performance indicators, it is shown that, after tuning the parameters of the designed controller, the chattering-free FDSMC integrated with the ARKF can substantially reduce the effect of uncertainties on the robot dynamic model and improve the tracking accuracy of the 6 degree-of-freedom (DOF) robot.
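
    As an illustration of how replacing the switching term with a continuous nonlinear reaching law removes chattering, the following is a generic power-rate discrete reaching-law step; it is not the exact law used in this study, and the parameter values are illustrative.

    import numpy as np

    def reaching_step(s, T=1e-3, q=50.0, alpha=0.5, delta=1e-3):
        """One step of a chattering-free discrete reaching law: the discontinuous
        sign(s) term is replaced by the smooth s / (|s| + delta), and the |s|^alpha
        power term speeds convergence far from the sliding surface."""
        return s - T * q * np.abs(s) ** alpha * s / (np.abs(s) + delta)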

    Autonomous Visual Servo Robotic Capture of Non-cooperative Target

    This doctoral research develops and experimentally validates a vision-based control scheme for the autonomous capture of a non-cooperative target by robotic manipulators for active space debris removal and on-orbit servicing. It is focused on the final capture stage by robotic manipulators, after the orbital rendezvous and proximity maneuvers have been completed. Two challenges have been identified and investigated in this stage: the dynamic estimation of the non-cooperative target and autonomous visual servo robotic control. First, an integrated algorithm combining photogrammetry and an extended Kalman filter is proposed for the dynamic estimation of the non-cooperative target, because the target is unknown in advance. To improve the stability and precision of the algorithm, the extended Kalman filter is enhanced by dynamically correcting the distribution of the process noise of the filter. Second, the concept of incremental kinematic control is proposed to avoid the multiple solutions that arise in solving the inverse kinematics of robotic manipulators. The proposed target motion estimation and visual servo control algorithms are validated experimentally on a custom-built visual servo manipulator-target system. The electronic hardware for the robotic manipulator and the computer software for the visual servoing were custom designed and developed. The experimental results demonstrate the effectiveness and advantages of the proposed vision-based robotic control for the autonomous capture of a non-cooperative target. Furthermore, a preliminary study is conducted for a future extension of the robotic control that considers flexible joints.
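
    Incremental kinematic control is described here only at the concept level; one standard way to realise the idea is a resolved-rate update, sketched below (the gain, step limit and variable names are our own illustrative assumptions).

    import numpy as np

    def incremental_ik_step(q, pose_error, jacobian, gain=0.5, dq_max=0.05):
        """Resolved-rate style increment: dq = gain * pinv(J) @ e, clipped so each
        joint moves only a small step, which sidesteps the multiple closed-form
        inverse-kinematics solutions while tracking the estimated target motion."""
        dq = gain * np.linalg.pinv(jacobian) @ pose_error
        return q + np.clip(dq, -dq_max, dq_max)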

    Biomimetic Manipulator Control Design for Bimanual Tasks in the Natural Environment

    As robots become more prolific in the human environment, it is important that safe operational procedures are introduced at the same time; typical robot control methods are often very stiff to maintain good positional tracking, but this makes contact (purposeful or accidental) with the robot dangerous. In addition, if robots are to work cooperatively with humans, natural interaction between agents will make tasks easier to perform with less effort and learning time. Stability of the robot is particularly important in this situation, especially as outside forces are likely to affect the manipulator when in a close working environment; for example, a user leaning on the arm, or task-related disturbance at the end-effector. Recent research has discovered the mechanisms of how humans adapt the applied force and impedance during tasks. Studies have been performed to apply this adaptation to robots, with promising results showing an improvement in tracking and effort reduction over other adaptive methods. The basic algorithm is straightforward to implement, and allows the robot to be compliant most of the time and only stiff when required by the task. This allows the robot to work in an environment close to humans, but also suggests that it could create a natural work interaction with a human. In addition, no force sensor is needed, which means the algorithm can be implemented on almost any robot. This work develops a stable control method for bimanual robot tasks, which could also be applied to robot-human interactive tasks. A dynamic model of the Baxter robot is created and verified, which is then used for controller simulations. The biomimetic control algorithm forms the basis of the controller, which is developed into a hybrid control system to improve both task-space and joint-space control when the manipulator is disturbed in the natural environment. Fuzzy systems are implemented to remove the need for repetitive and time-consuming parameter tuning, and also allow the controller to actively improve performance during the task. Simulation experiments are performed, and demonstrate how the hybrid task/joint-space controller performs better than either of the component parts under the same conditions. The fuzzy tuning method is then applied to the hybrid controller, which is shown to slightly improve performance as well as automating the gain tuning process. In summary, a novel biomimetic hybrid controller is presented, with a fuzzy mechanism to avoid the gain tuning process, finalised with a demonstration of task-suitability in a bimanual-type situation.
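
    The biomimetic adaptation at the core of this controller can be caricatured as simultaneous feedforward and impedance adaptation driven by tracking error, so the arm stays compliant when tracking is good and stiffens only under disturbance. The scalar sketch below is purely illustrative; the gains, forgetting term and damping rule are our assumptions, not the thesis's algorithm.

    import math

    def biomimetic_update(u_ff, K, e, de, alpha=0.1, beta=0.5, gamma=0.01):
        """Illustrative scalar adaptation step: feedforward effort and stiffness grow
        with the combined tracking error and decay (forgetting term) when tracking
        is good, so the joint is compliant by default and stiff only when disturbed."""
        eps = e + 0.1 * de                       # combined position/velocity error
        u_ff_new = u_ff + alpha * eps            # feedforward adaptation
        K_new = max(K + beta * abs(eps) - gamma * K, 0.0)   # stiffness adaptation
        D_new = 2.0 * math.sqrt(K_new)           # keep roughly critical damping
        tau = u_ff_new + K_new * e + D_new * de  # commanded joint torque
        return tau, u_ff_new, K_new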