    Generic decoupled image-based visual servoing for cameras obeying the unified projection model

    In this paper, a generic decoupled image-based control scheme for calibrated cameras obeying the unified projection model is proposed. The proposed decoupled scheme is based on the surface of object projections onto the unit sphere. Such features are invariant to rotational motions, which allows the translational motion to be controlled independently from the rotational motion. Finally, the proposed scheme is validated with experiments using a classical perspective camera as well as a fisheye camera mounted on a 6-DOF robot platform.
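
    For readers unfamiliar with the unified projection model, a minimal Python sketch of the lifting step it implies is given below: back-projecting a normalized image point onto the unit sphere, on which features such as the projected surface are then defined. The mirror parameter xi and the sample coordinates are placeholders, not values from the paper.

        import numpy as np

        def lift_to_sphere(x, y, xi):
            """Back-project a normalized image point (x, y) onto the unit sphere
            under the unified projection model with mirror parameter xi
            (xi = 0 recovers a classical perspective camera)."""
            r2 = x * x + y * y
            lam = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
            s = np.array([lam * x, lam * y, lam - xi])
            return s / np.linalg.norm(s)  # already unit norm up to rounding

        s = lift_to_sphere(0.1, -0.2, 0.8)
        print(s, np.linalg.norm(s))  # a point on the unit sphere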

    Distance-based and Orientation-based Visual Servoing from Three Points

    This paper is concerned with the use of a spherical-projection model for visual servoing from three points. We propose a new set of six features to control a 6-degree-of-freedom (DOF) robotic system with good decoupling properties. The first part of the set consists of three invariants to camera rotations. These invariants are built using the Cartesian distances between the spherical projections of the three points. The second part of the set corresponds to the angle-axis representation of a rotation matrix measured from the image of two points. In a theoretical comparison with the classical perspective coordinates of points, the new set does not present more singularities. In addition, using the new set inside its nonsingular domain, a classical control law is proven to be optimal for pure rotational motions. The theoretical results and the robustness of the new control scheme to errors in point range are validated through simulations and experiments on a 6-DOF robot arm.
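
    The central invariance claim is easy to verify numerically: spherical projection commutes with rotation (x_s = X/||X||, so a pure camera rotation R maps x_s to R x_s), hence the Cartesian distances between the spherical projections of the three points are unchanged. In the sketch below, the points and the rotation are arbitrary example values, not data from the paper.

        import numpy as np
        from scipy.spatial.transform import Rotation

        # Three 3D points expressed in the camera frame (arbitrary example values).
        P = np.array([[0.3, 0.1, 1.2], [-0.2, 0.4, 1.5], [0.1, -0.3, 0.9]])

        def sphere(P):
            """Spherical projection of each row of P."""
            return P / np.linalg.norm(P, axis=1, keepdims=True)

        def distances(S):
            """The three Cartesian distances between spherical projections."""
            return [np.linalg.norm(S[i] - S[j]) for i, j in [(0, 1), (0, 2), (1, 2)]]

        R = Rotation.from_rotvec([0.2, -0.4, 0.1]).as_matrix()  # a pure camera rotation
        d_before = distances(sphere(P))
        d_after = distances(sphere(P @ R.T))  # same points seen from the rotated frame
        print(np.allclose(d_before, d_after))  # True: the three features are invariant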

    Visual servoing from three points using a spherical projection model

    This paper deals with visual servoing from three points. Using the geometric properties of the spherical projection of points, a new decoupled set of six visual features is proposed. The main originality lies in the use of the distances between the spherical projections of the points to define three features that are invariant to camera rotations. The three other features present a linear link with respect to camera rotations. In comparison with the classical perspective coordinates of points, the new decoupled set does not present more singularities. In addition, using the new set in its non-singular domain, a classical control law is proven to be ideal for rotational motions. These theoretical results, as well as the robustness of the new decoupled control scheme to errors, are illustrated through simulation results.
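
    The non-invariant half of such a feature set rests on the angle-axis representation θu of a rotation matrix, which varies (near-)linearly with the camera's rotational motion. A minimal extraction routine is sketched below; the paper measures the underlying rotation from the image of two points, a step omitted here.

        import numpy as np

        def angle_axis(R):
            """Return theta * u from a rotation matrix R (theta near pi would
            need special handling, omitted in this sketch)."""
            theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
            if np.isclose(theta, 0.0):
                return np.zeros(3)
            u = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
            return theta * u / (2.0 * np.sin(theta))

        Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
        print(angle_axis(Rz))  # ~ [0, 0, pi/2]: a 90-degree rotation about z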

    Robust image-based visual servoing using invariant visual information

    This paper deals with the use of invariant visual features for visual servoing. New features are proposed to control the 6 degrees of freedom of a robotic system with better linearizing properties and robustness to noise than the state of the art in image-based visual servoing. We show in this paper that by using these features the behavior of image-based visual servoing in task space can be significantly improved. Several experimental results are provided that validate our proposal.

    Visual Servoing

    This book chapter deals with visual servoing, or vision-based control.

    Fast and robust image feature matching methods for computer vision applications

    Service robotic systems are designed to solve tasks such as recognizing and manipulating objects, understanding natural scenes, and navigating in dynamic and populated environments. It is immediately evident that such tasks cannot be modeled in all necessary detail as easily as industrial robot tasks; therefore, a service robotic system has to be able to sense and interact with the surrounding physical environment through a multitude of sensors and actuators. Environment sensing is one of the core problems that limit the deployment of mobile service robots, since existing sensing systems are either too slow or too expensive. Visual sensing is the most promising way to provide a cost-effective solution to the mobile robot sensing problem. It is usually achieved using one or several digital cameras placed on the robot or distributed in its environment. Digital cameras are information-rich sensors, are relatively inexpensive, and can be used to solve a number of key problems for robotics and other autonomous intelligent systems, such as visual servoing, robot navigation, object recognition, pose estimation, and much more. The key challenge in taking advantage of this powerful and inexpensive sensor is to come up with algorithms that can reliably and quickly extract and match the useful visual information necessary to automatically interpret the environment in real time. Although considerable research has been conducted in recent years on the development of algorithms for computer and robot vision problems, there are still open research challenges regarding reliability, accuracy, and processing time. The Scale Invariant Feature Transform (SIFT) is one of the most widely used methods and has recently attracted much attention in the computer vision community, because SIFT features are highly distinctive and invariant to scale, rotation, and illumination changes. In addition, SIFT features are relatively easy to extract and to match against a large database of local features. Generally, the SIFT algorithm has two main drawbacks: first, its computational complexity increases rapidly with the number of keypoints, especially at the matching step, due to the high dimensionality of the SIFT feature descriptor; second, SIFT features are not robust to large viewpoint changes. These drawbacks limit the practical use of the SIFT algorithm for robot vision applications, since such applications often require real-time performance and must cope with large viewpoint changes. This dissertation proposes three new approaches to address the constraints faced when using SIFT features for robot vision applications: speeded-up SIFT feature matching, robust SIFT feature matching, and the inclusion of a closed-loop control structure in object recognition and pose estimation systems. The proposed methods are implemented and tested on the FRIEND II/III service robotic system. The achieved results are valuable for adapting the SIFT algorithm to robot vision applications.
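
    For reference, the baseline SIFT pipeline that the two drawbacks above refer to looks roughly as follows with OpenCV (this is plain SIFT with Lowe's ratio test, not the dissertation's speeded-up or robust variants; the image file names are placeholders):

        import cv2

        img1 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)   # placeholder images
        img2 = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # Brute-force matching with Lowe's ratio test: the step whose cost grows
        # quickly with the number of keypoints and the 128-D descriptor size.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                if m.distance < 0.75 * n.distance]
        print(len(good), "putative matches")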

    Novel estimation and control techniques in micromanipulation using vision and force feedback

    With the recent advances in the fields of micro- and nanotechnology, there has been growing interest in complex micromanipulation and microassembly strategies. Despite the fact that many commercially available micro devices, such as the key components in automobile airbags, ink-jet printers, and projection display systems, are currently produced with a batch technique involving little assembly, many other products, such as read/write heads for hard disks and fiber-optic assemblies, require flexible precision assembly. Furthermore, many biological micromanipulations, such as in-vitro fertilization, cell characterization, and treatment, rely on the ability of human operators. The requirement of high-precision, repeatable, and financially viable operation in these tasks has motivated the elimination of direct human involvement and a move toward autonomy in micromanipulation and microassembly. In this thesis, a fully automated dexterous micromanipulation strategy based on vision and force feedback is developed. More specifically, a robust vision-based control architecture is proposed and implemented to compensate for errors due to the uncertainties about the position, behavior, and shape of the micro-objects to be manipulated. Moreover, novel estimators are designed to identify the system and to characterize the mechanical properties of biological structures through a synthesis of concepts from computer vision, estimation, and control theory. The estimated mechanical parameters are utilized to reconstruct the force imposed on a biomembrane and to provide adequate information to control the position, velocity, and acceleration of the probe without damaging the cell/tissue during an injection task.
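
    As a toy illustration of the final step, once mechanical parameters have been identified, the imposed force can be reconstructed from vision alone. The sketch below fits a plain linear elastic model by least squares; the data values are hypothetical, and the thesis's estimators and biomembrane models are considerably richer.

        import numpy as np

        # Hypothetical samples: vision-measured deflection d and sensed force f.
        d = np.array([0.0, 0.5, 1.0, 1.5, 2.0])        # micrometres
        f = np.array([0.0, 0.26, 0.49, 0.77, 1.02])    # micronewtons

        # Identify the stiffness k in the simplified model f = k * d.
        k = np.linalg.lstsq(d.reshape(-1, 1), f, rcond=None)[0][0]

        # With k identified, force is reconstructed from the visual measurement only.
        print("estimated stiffness:", k)
        print("reconstructed force at d = 1.2:", k * 1.2)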

    Map-Based Localization for Unmanned Aerial Vehicle Navigation

    Unmanned Aerial Vehicles (UAVs) require precise pose estimation when navigating in indoor and GNSS-denied or GNSS-degraded outdoor environments. The possibility of crashing in these environments is high, as spaces are confined, with many moving obstacles. There are many solutions for localization in GNSS-denied environments, and many different technologies are used. Common solutions involve setting up or using existing infrastructure, such as beacons, Wi-Fi, or surveyed targets. These solutions were avoided because the cost should be proportional to the number of users, not to the coverage area. Heavy and expensive sensors, for example a high-end IMU, were also avoided. Given these requirements, a camera-based localization solution was selected for the sensor pose estimation. Several camera-based localization approaches were investigated. Map-based localization methods were shown to be the most efficient because they close loops using a pre-existing map; thus the amount of data and the time spent collecting data are reduced, as there is no need to re-observe the same areas multiple times. This dissertation proposes a solution to the task of fully localizing a monocular camera onboard a UAV with respect to a known environment (i.e., it is assumed that a 3D model of the environment is available) for the purpose of UAV navigation in structured environments. Incremental map-based localization involves tracking a map through an image sequence. When the map is a 3D model, this task is referred to as model-based tracking. A by-product of the tracker is the relative 3D pose (position and orientation) between the camera and the object being tracked. State-of-the-art solutions advocate that tracking geometry is more robust than tracking image texture, because edges are more invariant to changes in object appearance and lighting. However, model-based trackers have been limited to tracking small, simple objects in small environments. An assessment was performed on tracking larger, more complex building models in larger environments. A state-of-the-art model-based tracker called ViSP (Visual Servoing Platform) was applied to tracking outdoor and indoor buildings using a UAV's low-cost camera. The assessment revealed weaknesses at large scales. Specifically, ViSP failed when tracking was lost and needed to be manually re-initialized. Failure occurred when there was a lack of model features in the camera's field of view, and because of rapid camera motion. Experiments revealed that ViSP achieved positional accuracies similar to single-point positioning solutions obtained from single-frequency (L1) GPS observations, with standard deviations around 10 metres. These errors were considered large, given that the geometric accuracy of the 3D model used in the experiments was 10 to 40 cm. The first contribution of this dissertation is to increase the performance of the localization system by combining ViSP with map-building incremental localization, also referred to as simultaneous localization and mapping (SLAM). Experimental results in both indoor and outdoor environments show that sub-metre positional accuracies were achieved, while the number of tracking losses throughout the image sequence was reduced. It is shown that by integrating model-based tracking with SLAM, not only does SLAM improve model-tracking performance, but the model-based tracker alleviates the computational expense of SLAM's loop-closing procedure, improving runtime performance.
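
    The core output of any such model-based tracker is the camera pose that aligns projected model features with their image observations. As a simplified, point-based stand-in (ViSP's tracker minimizes distances to projected model edges instead), the sketch below recovers a pose from hypothetical 2D-3D correspondences with OpenCV's PnP solver; the model points, pixel coordinates, and intrinsics are all illustrative.

        import cv2
        import numpy as np

        # Hypothetical building-corner coordinates from a 3D model (metres) and
        # their tracked image locations (pixels).
        model_pts = np.array([[0, 0, 0], [4, 0, 0], [4, 0, 3], [0, 0, 3],
                              [0, 6, 0], [4, 6, 0]], dtype=np.float64)
        image_pts = np.array([[320, 410], [520, 405], [515, 220], [325, 230],
                              [300, 450], [540, 448]], dtype=np.float64)
        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])                # assumed camera intrinsics

        ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, None)
        print(ok, rvec.ravel(), tvec.ravel())          # camera pose w.r.t. the model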
Experiments also revealed that ViSP was unable to handle occlusions when a complete 3D building model was used, resulting in large errors in its pose estimates. The second contribution of this dissertation is a novel map-based incremental localization algorithm that improves tracking performance and increases pose estimation accuracy compared with ViSP. The novelty of this algorithm is an efficient matching process that identifies corresponding linear features between the UAV's RGB image data and a large, complex, and untextured 3D model. The proposed model-based tracker improved positional accuracies from 10 m (obtained with ViSP) to 46 cm in outdoor environments, and from an unattainable result using ViSP to 2 cm positional accuracies in large indoor environments. The main disadvantage of any incremental algorithm is that it requires the camera pose of the first frame, and initialization is often a manual process. The third contribution of this dissertation is a map-based absolute localization algorithm that automatically estimates the camera pose when no prior pose information is available. The method benefits from vertical line matching to accomplish a registration procedure of the reference model views with a set of initial input images via geometric hashing. Results demonstrate that sub-metre positional accuracies were achieved, and a proposed enhancement of conventional geometric hashing produced more correct matches: 75% of the correct matches were identified, compared to 11%. Further, the number of incorrect matches was reduced by 80%.
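
    To make the initialization idea concrete, the following is a minimal point-based 2D geometric hashing sketch: a table of basis-invariant coordinates is built offline from the model, and recognition votes for the basis that best explains the scene. The dissertation's method hashes vertical line features against reference views of a 3D model; this toy variant only shows the table-and-vote mechanism.

        import numpy as np
        from collections import defaultdict
        from itertools import permutations

        def invariant_coords(pts, b0, b1):
            """Coordinates of pts in the frame defined by basis pair (b0, b1);
            invariant to translation, rotation, and scale."""
            e1 = b1 - b0
            e2 = np.array([-e1[1], e1[0]])             # perpendicular axis
            B = np.stack([e1, e2], axis=1)
            return np.linalg.solve(B, (pts - b0).T).T

        def build_table(model, q=0.1):
            """Offline stage: hash quantized invariant coords for every basis."""
            table = defaultdict(list)
            for i, j in permutations(range(len(model)), 2):
                for c in invariant_coords(model, model[i], model[j]):
                    table[tuple(np.round(c / q))].append((i, j))
            return table

        def vote(scene, table, q=0.1):
            """Online stage: one trial scene basis (real systems try many)."""
            votes = defaultdict(int)
            for c in invariant_coords(scene, scene[0], scene[1]):
                for basis in table.get(tuple(np.round(c / q)), []):
                    votes[basis] += 1
            return max(votes.items(), key=lambda kv: kv[1]) if votes else None

        model = np.array([[0.0, 0.0], [1.0, 0.0], [0.4, 0.8], [1.2, 0.9]])
        scene = 2.0 * model + np.array([3.0, -1.0])    # scaled and translated view
        print(vote(scene, build_table(model)))         # model basis (0, 1) wins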

    A Hybrid Visual Control Scheme to Assist the Visually Impaired with Guided Reaching Tasks

    In recent years, numerous researchers have been working towards adapting technology developed for robotic control for use in high-technology assistive devices for the visually impaired. Such devices have been proven to help visually impaired people live with a greater degree of confidence and independence. However, most prior work has focused primarily on a single problem from mobile robotics, namely navigation in an unknown environment. In this work we address the design and performance of an assistive device application to aid the visually impaired with a guided reaching task. The device follows an eye-in-hand, IBLM visual servoing configuration with a single camera and vibrotactile feedback to the user to direct guided tracking during the reaching task. We present a model for the system that employs a hybrid control scheme based on a Discrete Event System (DES) approach. This approach avoids significant problems inherent in the competing classical control or conventional visual servoing models for upper limb movement found in the literature. The proposed hybrid model parameterizes the partitioning of the image state space, producing a variable-size targeting window for compensatory tracking in the reaching task. The partitioning is created by positioning hypersurface boundaries within the state space which, when crossed, trigger events that cause DES-controller state transitions enabling differing control laws. A set of metrics encompassing accuracy ($D$), precision ($\theta_e$), and overall tracking performance ($\psi$) is also proposed to quantify system performance, so that the effect of parameter variations and alternate controller configurations can be compared. To this end, a prototype called aiReach was constructed, and experiments were conducted with participant volunteers testing the functional use of the system and other supporting aspects of the system's behaviour. Results are presented validating the system design and demonstrating effective use of a two-parameter partitioning scheme that utilizes a targeting window with an additional hysteresis region to filter perturbations due to natural proprioceptive limitations, enabling precise control of upper limb movement. Results from the experiments show that accuracy performance increased with the dual-parameter hysteresis target window model ($0.91 \leq D \leq 1$, $\mu(D)=0.9644$, $\sigma(D)=0.0172$) over the single-parameter fixed window model ($0.82 \leq D \leq 0.98$, $\mu(D)=0.9205$, $\sigma(D)=0.0297$), while the precision metric, $\theta_e$, remained relatively unchanged. In addition, the overall tracking performance metric produces scores which correctly rank the performance of the guided reaching tasks from most difficult to easiest.
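
    A toy analogue of the dual-parameter hysteresis targeting window may help fix ideas: the controller changes state when the image error crosses an inner hypersurface and switches back only when the error leaves a larger outer one, so jitter near the boundary cannot cause chattering. The radii below are hypothetical, and the per-state control laws and vibrotactile feedback are omitted.

        import numpy as np

        def make_des_controller(r_in=0.05, r_out=0.12):
            """Two-state event-driven switch with a hysteresis band between the
            inner and outer targeting-window radii."""
            state = {"mode": "tracking"}
            def step(image_error):                     # image_error: target - hand
                e = np.linalg.norm(image_error)
                if state["mode"] == "tracking" and e < r_in:
                    state["mode"] = "on_target"        # event: crossed inner boundary
                elif state["mode"] == "on_target" and e > r_out:
                    state["mode"] = "tracking"         # event: left outer boundary
                return state["mode"]
            return step

        step = make_des_controller()
        for e in [0.20, 0.08, 0.04, 0.07, 0.10, 0.15]:
            print(step(np.array([e, 0.0])))            # 'on_target' persists through jitter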