
    Towards Autonomous Selective Harvesting: A Review of Robot Perception, Robot Design, Motion Planning and Control

    This paper provides an overview of the current state of the art in selective harvesting robots (SHRs) and their potential for addressing the challenges of global food production. SHRs have the potential to increase productivity, reduce labour costs, and minimise food waste by selectively harvesting only ripe fruits and vegetables. The paper discusses the main components of SHRs, including perception, grasping, cutting, motion planning, and control, and highlights the challenges in developing SHR technologies, particularly in the areas of robot design, motion planning and control. It also discusses the potential benefits of integrating AI, soft robotics, and data-driven methods to enhance the performance and robustness of SHR systems. Finally, the paper identifies several open research questions in the field and highlights the need for further research and development efforts to advance SHR technologies to meet the challenges of global food production. Overall, this paper provides a starting point for researchers and practitioners interested in developing SHRs and highlights the need for more research in this field. Comment: Preprint; to appear in the Journal of Field Robotics.

    NASA Automated Rendezvous and Capture Review. Executive summary

    In support of the Cargo Transfer Vehicle (CTV) Definition Studies in FY-92, the Advanced Program Development division of the Office of Space Flight at NASA Headquarters conducted an evaluation and review of United States capabilities and the state of the art in Automated Rendezvous and Capture (AR&C). The review was held in Williamsburg, Virginia on 19-21 November 1991 and included over 120 attendees from U.S. government organizations, industry, and universities. One hundred abstracts were submitted to the organizing committee for consideration, of which forty-two were selected for presentation. The review was structured into five technical sessions, and the forty-two papers addressed topics in the following five categories: (1) hardware systems and components; (2) software systems; (3) integrated systems; (4) operations; and (5) supporting infrastructure.

    Actuators and sensors for application in agricultural robots: A review

    In recent years, with the rapid development of science and technology, agricultural robots have gradually begun to replace humans in various agricultural operations, changing traditional agricultural production methods. This not only reduces labour input but also improves production efficiency, contributing to the development of smart agriculture. This paper reviews the core technologies used by agricultural robots in unstructured environments, covering the technological progress of drive systems, control strategies, end-effectors, robotic arms, environmental perception, and other related systems. The review shows that in an unstructured agricultural environment, by combining cameras, light detection and ranging (LiDAR), ultrasonic sensors, and satellite navigation equipment, and by integrating sensing, transmission, control, and operation, different types of actuators can be designed and developed to drive the advance of agricultural robots and to meet the delicate and complex requirements of agricultural products as operational objects, so that better productivity and standardization of agriculture can be achieved. In summary, agricultural production is developing toward a data-driven, standardized, and unmanned approach, with smart agriculture supported by actuator-driven agricultural robots. The paper concludes with a summary of the main existing technologies and challenges in the development of actuators for agricultural robots, and an outlook on the primary development directions of agricultural robots in the near future.

    Autonomous Grasping Using Novel Distance Estimator

    This paper introduces a novel distance estimator using monocular vision for autonomous underwater grasping. The presented method is also applicable to topside grasping operations. The estimator is developed for robot manipulators with a monocular camera placed near the gripper. Because the camera is attached near the gripper, images can be captured from different positions whose relative displacement can be measured. The presented system can estimate the relative distance to an object of unknown size with good precision. The manipulator applied in the presented work is the SeaArm-2, a fully electric, small, modular underwater manipulator. The manipulator is unique in having a monocular camera integrated in its end-effector module, and its design facilitates the use of different end-effector tools. The camera is used for supervision, object detection, and tracking. The distance estimator was validated in a laboratory setting through autonomous grasping experiments, in which the manipulator was able to search for, find, estimate the relative distance to, grasp, and retrieve the relevant object in 12 out of 12 trials.
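
    The abstract describes estimating the relative distance to an object of unknown size from images taken at measurably different camera positions. A minimal sketch of that idea, assuming a simple pinhole-camera model and a pure approach along the optical axis, is given below; the function name and example values are illustrative assumptions, not the paper's actual estimator.

```python
# Illustrative sketch, not the paper's implementation: recover the distance to
# an object of unknown physical size from two images taken before and after a
# known camera displacement dz towards the object. With a pinhole camera the
# apparent size s (pixels) of an object of size S at depth Z is s = f*S/Z, so
# s1*Z1 = s2*(Z1 - dz), which gives Z1 = s2*dz / (s2 - s1).

def estimate_distance(s1_px: float, s2_px: float, dz_m: float) -> float:
    """Depth (metres) at the first camera pose.

    s1_px: apparent object size (pixels) before moving the camera
    s2_px: apparent object size (pixels) after moving dz_m closer
    dz_m:  known camera displacement towards the object (metres)
    """
    if s2_px <= s1_px:
        raise ValueError("object should appear larger after moving closer")
    return s2_px * dz_m / (s2_px - s1_px)


if __name__ == "__main__":
    # A target growing from 40 px to 50 px after a 0.10 m approach is
    # estimated to have been 0.5 m away at the first pose.
    print(estimate_distance(40.0, 50.0, 0.10))
```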

    Real-Time Stereo Visual Servoing of a 6-DOF Robot for Tracking and Grasping Moving Objects

    Robotic systems have been increasingly employed in various industrial, urban, military, and exploratory applications during the last few decades. To enhance robot control performance, vision data are integrated into the robot control system. Using visual feedback has great potential for increasing the flexibility of conventional robotic and mechatronic systems when dealing with changing and less-structured environments, and how to use visual information in control systems has always been a major research area in robotics and mechatronics. Visual servoing methods, which use direct feedback from image features for motion control, have been proposed to handle many stability and reliability issues in vision-based control systems. This thesis introduces a stereo Image-Based Visual Servoing (IBVS) scheme (as opposed to Position-Based Visual Servoing (PBVS)) with an eye-in-hand configuration that is able to track and grasp a moving object in real time. The robustness of the control system is increased by means of accurate 3-D information extracted from binocular images. First, an image-based visual servoing approach based on stereo vision is proposed for 6-DOF robots. A classical proportional control strategy is designed, and the stereo image interaction matrix, which relates the image feature velocity to the cameras' velocity screw, is developed for the two cases of parallel and non-parallel cameras installed on the end-effector of the robot. The properties of tracking a moving target, and the corresponding time-varying feature points, in a visual servoing system are then investigated. Second, a method for position prediction and trajectory estimation of the moving target, for use in the proposed image-based stereo visual servoing during a real-time grasping task, is developed through linear and nonlinear modeling of the system dynamics. Three trajectory estimation algorithms, the Kalman Filter, Recursive Least Squares (RLS), and the Extended Kalman Filter (EKF), are applied to predict the position of the moving object in the image planes. Finally, computer simulations and a real implementation are carried out to verify the effectiveness of the proposed method for the task of tracking and grasping a moving object using a 6-DOF manipulator.
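
    The thesis builds on the classical image-based visual servoing law, in which the camera velocity screw is computed from the image-feature error through the pseudo-inverse of the interaction matrix. A minimal single-camera sketch of that law follows; the function names, the proportional gain and the assumption of known point depths are illustrative, and the thesis itself derives a stereo variant of the interaction matrix.

```python
# Minimal sketch of classical IBVS: v = -lambda * pinv(L) * (s - s*), with the
# standard 2x6 interaction matrix for a point feature at normalised image
# coordinates (x, y) and depth Z. Illustrative, single-camera version only.
import numpy as np

def interaction_matrix(points, depths):
    """Stack 2x6 interaction-matrix blocks for normalised image points."""
    rows = []
    for (x, y), Z in zip(points, depths):
        rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y])
        rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x])
    return np.array(rows)

def ibvs_velocity(s, s_star, depths, gain=0.5):
    """Camera velocity screw (vx, vy, vz, wx, wy, wz) driving s towards s*."""
    error = (np.asarray(s) - np.asarray(s_star)).reshape(-1)
    L = interaction_matrix(np.asarray(s).reshape(-1, 2), depths)
    return -gain * np.linalg.pinv(L) @ error

# Example: four feature points slightly offset from their desired positions.
s      = [(0.12, 0.10), (-0.10, 0.11), (-0.11, -0.09), (0.10, -0.10)]
s_star = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
print(ibvs_velocity(s, s_star, depths=[1.0] * 4))
```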

    Near-Minimum Time Visual Servo Control Of An Underactuated Robotic Arm

    In industrial robotics, grasping an object is required to happen quickly, since the position and orientation of the object are known a priori. However, if such information is unavailable and objects are spread randomly on a conveyor, it can be challenging to maintain the dexterity and speed at which the task is carried out. Nowadays, vision sensors are used to compute the position and orientation of an object and to reposition the robotic system accordingly. This technology has indirectly introduced a disparity in time that varies according to the nature of the control technique.

    Toward Robots with Peripersonal Space Representation for Adaptive Behaviors

    The abilities to adapt and act autonomously in an unstructured, human-oriented environment are vital for the next generation of robots, which aim to cooperate safely with humans. While this adaptability is natural and feasible for humans, it is still very complex and challenging for robots. Observations and findings from psychology and neuroscience on the development of the human sensorimotor system can inform the development of novel approaches to adaptive robotics. Among these is the formation of the representation of the space closely surrounding the body, the Peripersonal Space (PPS), from multisensory sources such as vision, hearing, touch and proprioception, which helps to facilitate human activities within their surroundings. Taking inspiration from the virtual safety margin formed by the PPS representation in humans, this thesis first constructs an equivalent model of the safety zone for each body part of the iCub humanoid robot. This PPS layer serves as a distributed collision predictor, which translates visually detected objects approaching a robot's body parts (e.g., arm, hand) into probabilities of a collision between those objects and the body parts. This leads to adaptive avoidance behaviors in the robot via an optimization-based reactive controller. Notably, this visual reactive control pipeline can also seamlessly incorporate tactile input to guarantee safety in both the pre- and post-collision phases of physical Human-Robot Interaction (pHRI). Concurrently, the controller is able to take into account multiple targets (of manipulation reaching tasks) generated by a multiple Cartesian point planner. All components, namely the PPS, the multi-target motion planner (for manipulation reaching tasks), the reaching-with-avoidance controller and the human-centred visual perception, are combined harmoniously to form a hybrid control framework designed to provide safety for robots' interactions in a cluttered environment shared with human partners. Later, motivated by the development of manipulation skills in infants, in which multisensory integration is thought to play an important role, a learning framework is proposed to allow a robot to learn the processes of forming sensory representations, namely visuomotor and visuotactile, from its own motor activities in the environment. Both multisensory integration models are constructed with Deep Neural Networks (DNNs) in such a way that their outputs are represented in motor space to facilitate the robot's subsequent actions.
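
    The PPS layer described above maps visually detected objects approaching a body part into collision probabilities that a reactive controller can act on. A toy sketch of that mapping is given below, assuming a hand-tuned margin and a constant-velocity look-ahead rather than the learned, multisensory representation the thesis actually builds; all names and values are illustrative.

```python
# Toy sketch (not the thesis' learned PPS model): convert the distance and
# approach speed of a detected object into a collision probability for one
# body part, which a reactive controller could use to weight avoidance.
import math

def collision_probability(distance_m: float,
                          approach_speed_mps: float,
                          margin_m: float = 0.3,
                          horizon_s: float = 1.0) -> float:
    """Value in [0, 1]; rises as the object is predicted to enter the
    safety margin within the look-ahead horizon."""
    # Predicted distance after the horizon, assuming constant approach speed.
    predicted = distance_m - approach_speed_mps * horizon_s
    # Smoothly map "how far inside the margin" to a probability-like score.
    return 1.0 / (1.0 + math.exp((predicted - margin_m) / 0.05))

# Example: an object 0.5 m from the forearm approaching at 0.4 m/s is treated
# as an imminent collision (score close to 1).
print(collision_probability(0.5, 0.4))
```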