
    Alignment control using visual servoing and mobilenet single-shot multi-box detection (SSD): a review

    Alignment control is critical for robotic technologies that rely on visual feedback: robot systems that depend on pre-programmed trajectories and paths tend to be unresponsive when the environment changes or an expected object is absent. This review paper provides a comprehensive survey of recent applications of visual servoing and deep neural networks (DNNs). Position-based visual servoing (PBVS) and MobileNet-SSD were the algorithms chosen for alignment control of the film handler mechanism of a portable x-ray system. The paper also discusses the theoretical framework of feature extraction and description, visual servoing, and MobileNet-SSD. Likewise, the latest applications of visual servoing and DNNs are summarized, including a comparison of MobileNet-SSD with other state-of-the-art models. The studies reviewed show that visual servoing and MobileNet-SSD provide reliable tools and models for manipulating robotic systems, including where occlusion is present. Furthermore, effective alignment control relies significantly on the reliability of the visual servoing and the deep neural network, which is shaped by parameters such as the type of visual servoing, the feature extraction and description method, and the DNN used to construct a robust state estimator. Visual servoing and MobileNet-SSD are therefore parameterized approaches that require further optimization to achieve a specific purpose with distinct tools.
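    As a rough, hedged illustration of how such a detection-driven alignment loop can be wired together (a generic sketch, not the pipeline of the reviewed paper), the snippet below runs a MobileNet-SSD model through OpenCV's dnn module and turns the offset of the best detection from the image centre into a proportional alignment command. The model file names, target class id, and gain are placeholder assumptions.

```python
# Hedged sketch: MobileNet-SSD detection feeding a proportional alignment command.
# Model files, class id, and gain are placeholders, not taken from the reviewed paper.
import cv2
import numpy as np

net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb",  # hypothetical model
                                    "ssd_mobilenet.pbtxt")        # hypothetical config

def alignment_command(frame, target_class_id=1, gain=0.5, conf_thresh=0.5):
    """Return a (vx, vy) proportional command that centres the best detection."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, size=(300, 300), swapRB=True, crop=False)
    net.setInput(blob)
    detections = net.forward()                 # SSD output shape: (1, 1, N, 7)
    best = None
    for det in detections[0, 0]:
        conf, cls = float(det[2]), int(det[1])
        if cls == target_class_id and conf > conf_thresh:
            if best is None or conf > best[0]:
                best = (conf, det[3:7])
    if best is None:
        return None                            # no target: hold position or search
    x1, y1, x2, y2 = best[1] * np.array([w, h, w, h])
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    ex, ey = cx - w / 2.0, cy - h / 2.0        # pixel error from image centre
    return (-gain * ex / w, -gain * ey / h)    # normalized proportional command
```

    In a full PBVS scheme the detected region would instead be lifted to a 3-D pose estimate and the command computed from the pose error, but the proportional error-to-command structure is the same.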

    Robotic Crop Interaction in Agriculture for Soft Fruit Harvesting

    Autonomous tree crop harvesting has been a seemingly attainable, but elusive, robotics goal for the past several decades. Limiting grower reliance on uncertain seasonal labour is an economic driver of this, but the ability of robotic systems to treat each plant individually also has environmental benefits, such as reduced emissions and fertiliser use. Over the same time period, effective grasping and manipulation (G&M) solutions to warehouse product handling, and more general robotic interaction, have been demonstrated. Despite research progress in general robotic interaction and harvesting of some specific crop types, a commercially successful robotic harvester has yet to be demonstrated. Most crop varieties, including soft-skinned fruit, have not yet been addressed. Soft fruit, such as plums, present problems for many of the techniques employed for their more robust relatives and require special focus when developing autonomous harvesters. Adapting existing robotics tools and techniques to new fruit types, including soft-skinned varieties, is not well explored.

    This thesis aims to bridge that gap by examining the challenges of autonomous crop interaction for the harvesting of soft fruit. Aspects which are known to be challenging include mixed obstacle planning with both hard and soft obstacles present, poor outdoor sensing conditions, and the lack of proven picking motion strategies. Positioning an actuator for harvesting requires solving these problems and others specific to soft-skinned fruit. Doing so effectively means addressing these in the sensing, planning and actuation areas of a robotic system. Such areas are also highly interdependent for grasping and manipulation tasks, so solutions need to be developed at the system level. In this thesis, soft robotics actuators, with simplifying assumptions about hard obstacle planes, are used to solve mixed obstacle planning. Persistent target tracking and filtering is used to overcome challenging object detection conditions, while multiple stages of object detection are applied to refine these initial position estimates. Several picking motions are developed and tested for plums, with varying degrees of effectiveness. These various techniques are integrated into a prototype system which is validated in lab testing and extensive field trials on a commercial plum crop. Key contributions of this thesis include:
    I. The examination of grasping & manipulation tools, algorithms, techniques and challenges for harvesting soft-skinned fruit.
    II. Design, development and field-trial evaluation of a harvester prototype to validate these concepts in practice, with specific design studies of the gripper type, object detector architecture and picking motion.
    III. Investigation of specific G&M module improvements, including:
        o Application of the autocovariance least squares (ALS) method to noise covariance matrix estimation for visual servoing tasks, where both simulated and real experiments demonstrated a 30% improvement in state estimation error using this technique.
        o Theory and experimentation showing that a single range measurement is sufficient for disambiguating scene scale in monocular depth estimation for some datasets.
        o Preliminary investigations of stochastic object completion and sampling for grasping, active perception for visual servoing based harvesting, and multi-stage fruit localisation from RGB-Depth data.
    Several field trials were carried out with the plum harvesting prototype. Testing on an unmodified commercial plum crop, in all weather conditions, showed promising results with a harvest success rate of 42%. While a significant gap between prototype performance and commercial viability remains, the use of soft robotics with carefully chosen sensing and planning approaches allows for robust grasping & manipulation under challenging conditions, with both hard and soft obstacles.
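    The scale-disambiguation contribution above lends itself to a minimal sketch: if a monocular network predicts depth that is correct only up to a global scale, one metric range reading at a known pixel fixes that scale. This is a generic illustration under that single-global-scale assumption; the function and values below are hypothetical, not taken from the thesis.

```python
# Minimal sketch: recover the metric scale of a monocular depth map from one range reading.
# Assumes the predicted depth is correct up to a single global scale factor.
import numpy as np

def rescale_depth(pred_depth, range_m, pixel):
    """pred_depth: HxW relative depth map; range_m: metric range measured at `pixel` (row, col)."""
    r, c = pixel
    scale = range_m / pred_depth[r, c]   # single scale factor from one measurement
    return scale * pred_depth            # metric depth map

# Hypothetical usage: a 480x640 relative depth map and a 1.25 m rangefinder reading
# aimed at pixel (240, 320).
relative = np.random.rand(480, 640) + 0.5
metric = rescale_depth(relative, range_m=1.25, pixel=(240, 320))
```

    In practice one would average the predicted depth over a small window around the measured pixel to reduce sensitivity to noise, but the single-scale idea is unchanged.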

    Vision-based Autonomous Tracking of a Non-cooperative Mobile Robot by a Low-cost Quadrotor Vehicle

    The goal of this thesis is the detection and tracking of a ground vehicle, in particular a car-like robot, by a quadrotor. The first challenge to address in any pursuit or tracking scenario is the detection and unique identification of the target. From this first challenge comes the need to precisely localize the target in a coordinate system that is common to the tracking and tracked vehicles. In most real-life scenarios, the tracked vehicle does not directly communicate information such as its position to the tracking one, which gives rise to a non-cooperative constraint. The autonomous tracking aspect of the mission requires robust pose estimation for both the aerial and ground vehicles throughout the mission. The primary functions needed to achieve autonomous behaviors are control and navigation. With the quadrotor as the principal agent, this thesis derives and analyzes in detail the equations of motion that govern its natural behavior, along with the control methods that allow it to achieve the desired performance. The analysis of these equations reveals a naturally unstable system subject to nonlinearities. Therefore, we explored three control methods capable of guaranteeing stability while mitigating the nonlinearities. The first two operate in the linear region: the intuitive Proportional-Integral-Derivative (PID) controller and an optimal Linear Quadratic Regulator (LQR). The third is a nonlinear controller designed using sliding mode control theory. In addition to the in-depth analysis, we discuss the advantages and limitations of each control method. To achieve the tracking mission, we address the detection and localization problems using visual servoing and frame-transform techniques, respectively. The pose estimation challenge for the aerial robot is addressed using Kalman filtering methods, which are also explored in depth, and the same estimation approach is used for the ground vehicle's real-time pose estimation and tracking. Analysis results are illustrated using Matlab, and both a simulation and a real implementation using the Robot Operating System are used to support the obtained results.
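    As a hedged illustration of the Kalman filtering mentioned above (a generic constant-velocity formulation, not the thesis's exact filter), the sketch below fuses noisy planar position measurements of the ground vehicle into position and velocity estimates. The state layout, noise covariances, and time step are assumptions.

```python
# Minimal constant-velocity Kalman filter for tracking a ground vehicle from noisy
# planar position measurements. All tuning values below are placeholders.
import numpy as np

dt = 0.05                                   # assumed update period [s]
F = np.array([[1, 0, dt, 0],                # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],                 # only position is measured
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)                        # process noise (assumed)
R = 0.05 * np.eye(2)                        # measurement noise (assumed)

x = np.zeros(4)                             # initial state estimate
P = np.eye(4)                               # initial covariance

def kf_step(z):
    """One predict/update cycle given a position measurement z = [x, y]."""
    global x, P
    x = F @ x                               # predict
    P = F @ P @ F.T + Q
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y                           # update
    P = (np.eye(4) - K @ H) @ P
    return x.copy()
```

    Each detection of the car-like robot, once expressed in the common coordinate frame, would be passed to kf_step as the measurement z.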

    Robust Image-Based Visual Servo Control of an Uncertain Missile Airframe

    A nonlinear vision-based guidance law is presented for a missile-target scenario in the presence of model uncertainty and unknown target evasive maneuvers. To ease the readability of this thesis, detailed explanations of the relevant mathematical tools are provided, including stability definitions, the procedure for Lyapunov-based stability analysis, sliding mode control fundamentals, the basics of visual servo control, and other basic nonlinear control tools. To develop the vision-based guidance law, projective geometric relationships are utilized to combine the image kinematics with the missile dynamics in an integrated visual dynamic system. The guidance law is designed using an image-based visual servo control method in conjunction with a sliding-mode control strategy, which is shown to achieve asymptotic target interception in the presence of the aforementioned uncertainties. A Lyapunov-based stability analysis is presented to prove the theoretical result, and numerical simulation results are provided to demonstrate the performance of the proposed robust controller for both stationary and non-stationary targets.
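    For context on the image-based visual servoing building block mentioned above, the relations below are the classical form from the visual servoing literature (not the specific robust guidance law developed in this work): the image-feature error is driven to zero by commanding a camera velocity through the pseudoinverse of an estimated interaction matrix.

```latex
% Classical IBVS relations (standard literature form, not this thesis's robust law).
%   e = s - s^*          image-feature error
%   L_s                  interaction matrix (image Jacobian)
%   \widehat{L}_s^{+}    Moore-Penrose pseudoinverse of an estimate of L_s
%   v_c                  commanded camera velocity, with gain \lambda > 0
\begin{aligned}
  \dot{e} &= L_s \, v_c, \\
  v_c     &= -\lambda \, \widehat{L}_s^{+} \, e .
\end{aligned}
```

    Sliding-mode designs of the kind described in the abstract typically augment such a nominal law with a robustifying term to reject bounded model uncertainty and target maneuvers.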

    Grasping, Perching, And Visual Servoing For Micro Aerial Vehicles

    Micro Aerial Vehicles (MAVs) have seen dramatic growth in the consumer market because of their ability to provide new vantage points for aerial photography and videography. However, there is little consideration for physical interaction with the environment surrounding them. Onboard manipulators are absent, and onboard perception, if present, is used to avoid obstacles and maintain a minimum distance from them. There are many applications, however, which would benefit greatly from aerial manipulation or flight in close proximity to structures. This work is focused on facilitating these types of close interactions between quadrotors and surrounding objects. We first explore high-speed grasping, enabling a quadrotor to quickly grasp an object while moving at a high relative velocity. Next, we discuss planning and control strategies, empowering a quadrotor to perch on vertical surfaces using a downward-facing gripper. Then, we demonstrate that such interactions can be achieved using only onboard sensors by incorporating vision-based control and vision-based planning. In particular, we show how a quadrotor can use a single camera and an Inertial Measurement Unit (IMU) to perch on a cylinder. Finally, we generalize our approach to consider objects in motion, and we present relative pose estimation and planning, enabling tracking of a moving sphere using only an onboard camera and IMU.
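    As a small, hedged illustration of relative pose estimation for a moving sphere (a generic pinhole-camera sketch, not the estimator used in this work), a sphere of known radius observed with known focal length gives an approximate depth from its apparent image radius, which can then be back-projected to a 3-D point in the camera frame. All numeric values below are placeholders.

```python
# Hedged sketch: approximate 3-D position of a sphere of known radius from its image,
# under a pinhole camera model (Z ~= f * R / r). Values are placeholders.
import numpy as np

def sphere_depth(f_px, R_m, r_px):
    """f_px: focal length [px]; R_m: true sphere radius [m]; r_px: apparent image radius [px]."""
    return f_px * R_m / r_px

def sphere_center_camera_frame(f_px, cx, cy, R_m, u, v, r_px):
    """Back-project the image centre (u, v) of the sphere to a point in the camera frame."""
    Z = sphere_depth(f_px, R_m, r_px)
    X = (u - cx) * Z / f_px
    Y = (v - cy) * Z / f_px
    return np.array([X, Y, Z])

# Example: 600 px focal length, principal point (320, 240), a 0.10 m radius sphere
# seen as a 25 px blob centred at (350, 260).
print(sphere_center_camera_frame(600.0, 320.0, 240.0, 0.10, 350.0, 260.0, 25.0))
```

    Fusing a sequence of such measurements with IMU propagation (for example in a Kalman filter) would yield the smoothed relative state needed for planning.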

    Image-Guided Robot-Assisted Techniques with Applications in Minimally Invasive Therapy and Cell Biology

    There are several situations where tasks can be performed better robotically rather than manually. Among these are situations (a) where high accuracy and robustness are required, (b) where difficult or hazardous working conditions exist, and (c) where very large or very small motions or forces are involved. Recent advances in technology have resulted in smaller size robots with higher accuracy and reliability. As a result, robotics is finding more and more applications in Biomedical Engineering. Medical Robotics and Cell Micro-Manipulation are two of these applications involving interaction with delicate living organs at very different scales. Availability of a wide range of imaging modalities, from ultrasound and X-ray fluoroscopy to high magnification optical microscopes, makes it possible to use imaging as a powerful means to guide and control robot manipulators. This thesis includes three parts focusing on three applications of Image-Guided Robotics in biomedical engineering, including:
    Vascular Catheterization: a robotic system was developed to insert a catheter through the vasculature and guide it to a desired point via visual servoing. The system provides shared control with the operator to perform a task semi-automatically or through master-slave control. The system provides control of a catheter tip with high accuracy while reducing X-ray exposure to the clinicians and providing a more ergonomic situation for the cardiologists.
    Cardiac Catheterization: a master-slave robotic system was developed to perform accurate control of a steerable catheter to touch and ablate faulty regions on the inner walls of a beating heart in order to treat arrhythmia. The system facilitates touching and making contact with a target point in a beating heart chamber through master-slave control with coordinated visual feedback.
    Live Neuron Micro-Manipulation: a microscope image-guided robotic system was developed to provide shared control over multiple micro-manipulators to touch cell membranes in order to perform patch clamp electrophysiology.
    Image-guided robot-assisted techniques with master-slave control were implemented for each case to provide shared control between a human operator and a robot. The results show increased accuracy and reduced operation time in all three cases.
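    As a hedged illustration of the master-slave shared control that recurs in all three applications above (a generic clutch-and-scale mapping, not the controllers implemented in this thesis), the sketch below maps operator hand motion into scaled-down slave motion, which is one common way fine catheter-tip or micro-manipulator positioning is achieved. The class, scale factor, and poses are hypothetical.

```python
# Hedged sketch of a basic master-slave position mapping with motion scaling and a clutch,
# a common structure in teleoperation (not the specific controllers of this thesis).
import numpy as np

class MasterSlaveMapping:
    def __init__(self, scale=0.1):
        self.scale = scale                 # e.g. 10:1 scaling for fine manipulation
        self.master_ref = None
        self.slave_ref = None

    def clutch_in(self, master_pos, slave_pos):
        """Latch reference positions when the operator engages the clutch."""
        self.master_ref = np.asarray(master_pos, dtype=float)
        self.slave_ref = np.asarray(slave_pos, dtype=float)

    def slave_target(self, master_pos):
        """Map the master displacement to a scaled slave position target."""
        delta = np.asarray(master_pos, dtype=float) - self.master_ref
        return self.slave_ref + self.scale * delta

# Hypothetical usage: engage the clutch, then stream master positions (metres).
m = MasterSlaveMapping(scale=0.1)
m.clutch_in(master_pos=[0.0, 0.0, 0.0], slave_pos=[0.020, 0.000, 0.050])
print(m.slave_target([0.010, 0.000, -0.020]))   # 1 cm master motion -> 1 mm slave motion
```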

    Cooperative and Multimodal Capabilities Enhancement in the CERNTAURO Human–Robot Interface for Hazardous and Underwater Scenarios

    The use of remote robotic systems for inspection and maintenance in hazardous environments is a priority for all tasks potentially dangerous for humans. However, currently available robotic systems lack the level of usability that would allow inexperienced operators to accomplish complex tasks. Moreover, the task’s complexity increases drastically when a single operator is required to control multiple remote agents (for example, when picking up and transporting big objects). In this paper, a system allowing an operator to prepare and configure cooperative behaviours for multiple remote agents is presented. The system is part of a human–robot interface that was designed at CERN, the European Organization for Nuclear Research, to perform remote interventions in its particle accelerator complex, as part of the CERNTAURO project. In this paper, the modalities of interaction with the remote robots are presented in detail. The multimodal user interface enables the user to activate assisted cooperative behaviours according to a mission plan. The multi-robot interface has been validated at CERN in its Large Hadron Collider (LHC) mockup using a team of two mobile robotic platforms, each one equipped with a robotic manipulator. Moreover, great similarities were identified between the CERNTAURO and the TWINBOT projects, which aim to create usable robotic systems for underwater manipulation. Therefore, the cooperative behaviours were also validated within a multi-robot pipe transport scenario in a simulated underwater environment, experimenting with more advanced vision techniques. The cooperative teleoperation can be coupled with additional assistance tools such as vision-based tracking and grasp determination for metallic objects, and communication protocol design. The results show that the cooperative behaviours enable a single user to carry out a robotic intervention with more than one robot in a safer way.
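    As a hedged illustration of one ingredient of such cooperative behaviours (generic homogeneous-transform bookkeeping, not CERNTAURO code), two manipulators jointly carrying a rigid object can be kept in a consistent grasp by commanding each end-effector pose from the commanded object pose through a fixed grasp transform captured at grasp time. The frames and offsets below are hypothetical.

```python
# Hedged sketch: rigid-grasp constraint for cooperative transport of one object by two arms.
# Generic homogeneous-transform bookkeeping; frames and offsets are placeholders.
import numpy as np

def make_T(R, p):
    """Assemble a 4x4 homogeneous transform from rotation R and translation p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

# Fixed grasp transforms (object frame -> each end-effector), captured when grasping
# an assumed 0.8 m pipe at its two ends.
T_obj_ee1 = make_T(np.eye(3), [0.0, +0.4, 0.0])
T_obj_ee2 = make_T(np.eye(3), [0.0, -0.4, 0.0])

def end_effector_targets(T_world_obj):
    """Given the desired object pose, return the pose each arm must track."""
    return T_world_obj @ T_obj_ee1, T_world_obj @ T_obj_ee2

# Hypothetical usage: the operator commands the object 5 cm forward and 2 cm up.
T_world_obj = make_T(np.eye(3), [0.05, 0.0, 0.02])
T1, T2 = end_effector_targets(T_world_obj)
```

    Tying both arms' targets to a single commanded object pose is what lets one operator drive the pair as if it were a single tool.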