
    Motion from Fixation

    We study the problem of estimating rigid motion from a sequence of monocular perspective images obtained by navigating around an object while fixating a particular feature point. The motivation comes from the mechanics of the human eye, which either smoothly pursues some fixation point in the scene or "saccades" between different fixation points. In particular, we are interested in understanding whether fixation helps the process of estimating motion, in the sense that it makes it more robust, better conditioned, or simpler to solve. We cast the problem in the framework of "dynamic epipolar geometry" and propose an implicit dynamical model for recursively estimating motion from fixation. This allows us to compare directly the quality of the motion estimates obtained by imposing the fixation constraint or by assuming a general rigid motion, simply by changing the geometry of the parameter space while maintaining the same structure of the recursive estimator. We also present a closed-form static solution from two views and a recursive estimator of the absolute attitude between the viewer and the scene. One important issue is how the estimates degrade in the presence of disturbances in the tracking procedure. We describe a simple fixation control that converges exponentially, complemented by an image shift-registration step for achieving sub-pixel accuracy, and assess how small deviations from perfect tracking affect the motion estimates.
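    The "dynamic epipolar geometry" mentioned above builds on the classical two-view epipolar constraint. As a minimal sketch (not the authors' recursive estimator), the following verifies the constraint x2ᵀ E x1 = 0 with E = [t]× R for a hypothetical rigid motion and scene point; all numbers are illustrative assumptions.

    ```python
    import numpy as np

    def skew(t):
        # Skew-symmetric matrix: skew(t) @ x == np.cross(t, x)
        return np.array([[0.0, -t[2], t[1]],
                         [t[2], 0.0, -t[0]],
                         [-t[1], t[0], 0.0]])

    def essential_matrix(R, t):
        # E = [t]_x R encodes the epipolar constraint between two views
        return skew(t) @ R

    # Hypothetical rigid motion: small rotation about the y axis plus translation
    theta = 0.1
    R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(theta), 0.0, np.cos(theta)]])
    t = np.array([0.5, 0.0, 0.1])
    E = essential_matrix(R, t)

    # One scene point viewed in both cameras (normalized homogeneous coordinates)
    X1 = np.array([0.3, -0.2, 4.0])     # point in the first camera frame
    x1 = X1 / X1[2]
    X2 = R @ X1 + t                     # same point in the second camera frame
    x2 = X2 / X2[2]
    residual = x2 @ E @ x1              # epipolar constraint: ~0 for a true match
    ```

    Imposing the fixation constraint amounts to restricting (R, t) to the subset of motions that keep the fixated point on the optical axis, which shrinks the parameter space the estimator has to search.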

    Estimating Body Segment Orientation by Applying Inertial and Magnetic Sensing Near Ferromagnetic Materials

    Inertial and magnetic sensors are very suitable for ambulatory monitoring of human posture and movement. However, ferromagnetic materials near the sensor disturb the local magnetic field and, therefore, the orientation estimation. A Kalman-based fusion algorithm was used to obtain dynamic orientation estimates and to minimize the effect of magnetic disturbances. This paper compares the orientation output of the sensor fusion using three-dimensional inertial and magnetic sensors against a laboratory-bound opto-kinetic system (Vicon) in a simulated work environment. With the tested methods, the difference between the optical reference system and the output of the algorithm was 2.6° root mean square (RMS) when no metal was near the sensor module. Near a large metal object, instantaneous errors of up to 50° were measured when no compensation was applied. Using a magnetic disturbance model, the error was reduced significantly, to 3.6° RMS.
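    One simple way to suppress magnetometer corruption near ferromagnetic material, sketched here as an assumption rather than the paper's actual disturbance model, is to gate the magnetometer's heading correction on the measured field magnitude: away from metal, the norm should match the local Earth field. The field strength and tolerance below are hypothetical.

    ```python
    import numpy as np

    EARTH_FIELD_NORM = 48.0   # assumed local geomagnetic field strength (uT)
    NORM_TOLERANCE = 5.0      # band of acceptable magnitude deviation (uT)

    def magnetometer_weight(mag_sample):
        # Down-weight the magnetometer heading correction when the measured
        # field magnitude departs from the expected Earth field, as happens
        # near ferromagnetic material.
        deviation = abs(np.linalg.norm(mag_sample) - EARTH_FIELD_NORM)
        return max(0.0, 1.0 - deviation / NORM_TOLERANCE)

    clean = np.array([30.0, 10.0, 36.0])       # |B| close to 48 uT
    disturbed = np.array([60.0, 40.0, 50.0])   # |B| far off: metal nearby
    ```

    A weight near 1 lets the magnetometer correct heading drift; a weight of 0 leaves orientation to the gyroscope and accelerometer until the disturbance passes.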

    Aerial-Ground collaborative sensing: Third-Person view for teleoperation

    Rapid deployment and operation are key requirements in time-critical applications such as Search and Rescue (SaR). Efficiently teleoperated ground robots can support first responders in such situations. However, first-person-view teleoperation is sub-optimal in difficult terrain, while a third-person perspective can drastically increase teleoperation performance. Here, we propose a Micro Aerial Vehicle (MAV)-based system that can autonomously provide a third-person perspective to ground robots. While our approach is based on local visual servoing, it further leverages the global localization of several ground robots to seamlessly transfer between these ground robots in GPS-denied environments. Thus, one MAV can support multiple ground robots on demand. Furthermore, our system enables different visual detection regimes, enhanced operability, and return-home functionality. We evaluate our system in real-world SaR scenarios. Comment: Accepted for publication in the 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR).

    Compensation of Magnetic Disturbances Improves Inertial and Magnetic Sensing of Human Body Segment Orientation

    This paper describes a complementary Kalman filter design to estimate the orientation of human body segments by fusing gyroscope, accelerometer, and magnetometer signals from miniature sensors. Ferromagnetic materials or other magnetic fields near the sensor module disturb the local earth magnetic field and, therefore, the orientation estimation, which impedes many (ambulatory) applications. The filter estimates the gyroscope bias error, the orientation error, and the magnetic disturbance error. It was tested under quasi-static and dynamic conditions with ferromagnetic materials close to the sensor module. The quasi-static experiments involved static positions and rotations around the three axes. In the dynamic experiments, three-dimensional rotations were performed near a metal tool case. The orientation estimated by the filter was compared with the orientation obtained with an optical reference system (Vicon). Results show accurate and drift-free orientation estimates. Compensation of magnetic disturbances yields a significant difference (p < 0.01) in the orientation estimates compared with no compensation or gyroscopes alone. The average static error was 1.4° (standard deviation 0.4°) in the magnetically disturbed experiments; the dynamic error was 2.6° root mean square (RMS).
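    The core fusion idea, blending short-term gyroscope integration with a drift-free but noisy inclination reference, can be illustrated with a plain complementary filter. This is a much simpler stand-in for the paper's complementary Kalman filter; the sample rate, bias, and blending gain below are hypothetical.

    ```python
    def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
        # Blend integrated gyro rate (accurate short-term, drifts long-term)
        # with the accelerometer inclination angle (noisy but drift-free).
        angle = accel_angles[0]
        estimates = []
        for rate, acc in zip(gyro_rates, accel_angles):
            angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc
            estimates.append(angle)
        return estimates

    # Stationary sensor with a 0.5 deg/s gyro bias: pure integration drifts,
    # while the fused estimate stays bounded near zero.
    n, dt, bias = 1000, 0.01, 0.5
    gyro = [bias] * n          # biased rate readings (deg/s)
    accel = [0.0] * n          # true inclination is 0 deg throughout
    fused = complementary_filter(gyro, accel, dt)
    drift_only = bias * n * dt # pure integration: 5 deg of drift after 10 s
    ```

    The Kalman formulation in the paper goes further by explicitly estimating the gyroscope bias and the magnetic disturbance as filter states rather than relying on a fixed blending gain.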

    Enhanced Image-Based Visual Servoing Dealing with Uncertainties

    Robots are increasingly used in industrial automation, and there is growing demand for dexterous, intelligent robots that can work in unstructured environments. Visual servoing was developed to meet this need by integrating vision sensors into robotic systems. Although visual servoing has advanced significantly, challenges remain in making it fully functional in industrial environments. The nonlinear nature of visual servoing and system uncertainties are among the problems affecting its control performance. The projection of the 3D scene onto the 2D image plane in the camera creates one source of uncertainty; another lies in the parameters of the camera and the robot manipulator. Moreover, the limited field of view (FOV) of the camera also influences control performance. There are two main types of visual servoing: position-based and image-based. This project aims to develop a series of new image-based visual servoing (IBVS) methods that address the nonlinearity and uncertainty issues and improve the visual servoing performance of industrial robots. The first method is an adaptive switch IBVS controller for industrial robots in which the adaptive law deals with the uncertainties of the monocular camera in an eye-in-hand configuration. The proposed switch control algorithm decouples the rotational and translational camera motions and decomposes the IBVS control into three separate stages with different gains. This method increases the system response speed and improves the tracking performance of IBVS while dealing with camera uncertainties. The second method is an image feature reconstruction algorithm based on the Kalman filter, proposed to handle situations where image features leave the camera's FOV. The combination of the switch controller and the feature reconstruction algorithm not only improves the system response speed and tracking performance of IBVS but also ensures successful servoing in the case of feature loss. Next, to deal with external disturbances and uncertainties due to feature depth, a third control method combines proportional-derivative (PD) control with sliding mode control (SMC) on a 6-DOF manipulator. The properly tuned PD controller ensures fast tracking performance, and the SMC handles the external disturbances and depth uncertainties. In the last stage of the thesis, a fourth method, a semi-off-line trajectory planning approach, is developed to perform IBVS tasks on a 6-DOF robotic manipulator system. In this method, the camera's velocity screw is parametrized using time-based profiles, and the profile parameters are determined so that the velocity profile takes the robot to its desired position, by minimizing the error between the initial and desired features. The algorithm for planning the robot's orientation is decoupled from its position planning, which yields a convex optimization problem and leads to a faster and more efficient algorithm. The merit of the proposed method is that it respects all of the system constraints, including the limitation imposed by the camera's FOV. All the algorithms developed in the thesis are validated via tests on a 6-DOF Denso robot in an eye-in-hand configuration.
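    The IBVS law these methods build on is classical: the camera velocity screw is computed from the image-feature error through the pseudo-inverse of the stacked interaction matrix. A minimal sketch with hypothetical point features, depths, and gain (not the thesis's adaptive switch controller):

    ```python
    import numpy as np

    def interaction_matrix(x, y, Z):
        # Classical interaction (image-Jacobian) matrix of a point feature
        # (x, y) in normalized image coordinates at depth Z.
        return np.array([
            [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
            [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
        ])

    def ibvs_velocity(features, desired, depths, gain=0.5):
        # v = -lambda * L^+ * e: camera velocity screw that drives the
        # image-feature error e exponentially toward zero.
        e = (np.asarray(features) - np.asarray(desired)).reshape(-1)
        L = np.vstack([interaction_matrix(x, y, Z)
                       for (x, y), Z in zip(features, depths)])
        return -gain * np.linalg.pinv(L) @ e

    # Three hypothetical point features slightly off their desired locations
    features = [(0.10, 0.00), (0.00, 0.20), (-0.10, -0.10)]
    desired = [(0.00, 0.00), (0.00, 0.00), (0.00, 0.00)]
    depths = [1.0, 1.2, 0.8]
    v = ibvs_velocity(features, desired, depths)
    ```

    The depth Z appearing in every translational column is exactly the uncertain quantity the thesis's third method (PD plus sliding mode control) is designed to be robust against.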

    A 3D stereo camera system for precisely positioning animals in space and time

    PLT was supported by the Scottish Funding Council (grant HR09011) through the Marine Alliance for Science and Technology for Scotland. Here, we describe a portable stereo camera system that integrates a GPS receiver, an attitude sensor, and 3D stereo photogrammetry to rapidly estimate the positions of multiple animals in space and time. We demonstrate the performance of the system during a field test by simultaneously tracking the individual positions of six long-finned pilot whales, Globicephala melas. In shore-based accuracy trials, a system with a 50-cm stereo baseline had an average range estimation error of 0.09 m at a 5-m distance, increasing up to 3.2 m at 50 m. The system is especially useful in field situations where it is necessary to follow groups of animals travelling over relatively long distances and time periods whilst obtaining individual positions with high spatial and temporal resolution (up to 8 Hz). These positions provide quantitative estimates of a variety of key parameters and indicators for behavioural studies, such as inter-animal distances, group dispersion, speed, and heading. The system can additionally be integrated with other techniques, such as archival tags, photo-identification methods, or acoustic playback experiments, to facilitate fieldwork investigating topics ranging from natural social behaviour to how animals respond to anthropogenic disturbance. By grounding observations in quantitative metrics, the system can characterize fine-scale behaviour or detect changes resulting from disturbance that might otherwise be difficult to observe.
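    The sharp growth of range error with distance reported above (0.09 m at 5 m versus 3.2 m at 50 m) follows from stereo geometry: for a rectified pair, Z = fB/d, so a fixed disparity error produces a range error that grows with Z². A sketch with a hypothetical focal length and disparity error (only the 0.5 m baseline comes from the abstract):

    ```python
    def stereo_range(focal_px, baseline_m, disparity_px):
        # Range of a point from a rectified stereo pair: Z = f * B / d
        return focal_px * baseline_m / disparity_px

    def range_error(focal_px, baseline_m, range_m, disparity_err_px=0.5):
        # First-order propagation of a disparity error dd into range:
        # dZ = Z**2 * dd / (f * B), i.e. error grows quadratically with range.
        return range_m ** 2 * disparity_err_px / (focal_px * baseline_m)

    # Hypothetical rig: 2000 px focal length, 0.5 m baseline
    z = stereo_range(2000.0, 0.5, 200.0)        # a 200 px disparity -> 5 m
    err_near = range_error(2000.0, 0.5, 5.0)    # small error at 5 m
    err_far = range_error(2000.0, 0.5, 50.0)    # 100x larger at 50 m
    ```

    Going from 5 m to 50 m multiplies the range uncertainty by (50/5)² = 100, which is the same order of degradation the shore-based trials show.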

    Design and Implementation of Moving Object Visual Tracking System using μ-Synthesis Controller

    Given the increasing use of security and surveillance systems, moving object tracking is an active research topic in computer vision. In general, a moving object tracking system consists of two integrated parts: a video tracking part that predicts the position of the target in the image plane, and a visual servo part that controls the movement of the camera to follow the movement of objects in the image plane. For tracking purposes, the camera is used as a visual sensor and mounted on a 2-DOF (yaw-pitch) manipulator platform in an eye-in-hand configuration. Although its operation is relatively simple, the yaw-pitch camera platform still needs a good control method to improve its performance. In this study, we propose a moving object tracking system on a prototype yaw-pitch platform. A μ-synthesis controller was used to control the movement of the visual servo part and keep the target in the center of the image plane. Experimental results showed that the proposed system works well in real time, with high tracking accuracy in both indoor and outdoor environments.

    Development of a Computer Vision-Based Three-Dimensional Reconstruction Method for Volume-Change Measurement of Unsaturated Soils during Triaxial Testing

    Problems associated with unsaturated soils are ubiquitous in the U.S., where expansive and collapsible soils are some of the most widely distributed and costly geologic hazards. Solving these widespread geohazards requires a fundamental understanding of the constitutive behavior of unsaturated soils. Over the past six decades, the suction-controlled triaxial test has been established as the standard approach to characterizing the constitutive behavior of unsaturated soils. However, this type of test requires costly equipment and time-consuming testing procedures. To overcome these limitations, a photogrammetry-based method was recently developed to measure the global and localized volume changes of unsaturated soils during triaxial testing. However, this method relies on software to detect coded targets, which often requires tedious manual correction of incorrect target detections. To address this limitation, this study developed a photogrammetric computer vision-based approach for automatic target recognition and 3D reconstruction for volume-change measurement of unsaturated soils in triaxial tests. A deep learning method was used to improve the accuracy and efficiency of coded target recognition, and a photogrammetric computer vision method with a ray tracing technique was then developed and validated to reconstruct three-dimensional models of the soil specimen.
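    The abstract does not detail the ray tracing step, but in a triaxial setup rays from the specimen typically pass through the cell wall and confining fluid before reaching the camera, so refraction must be modeled. A plausible building block, stated here as an assumption rather than the study's actual optical model, is Snell's law in vector form:

    ```python
    import numpy as np

    def refract(d, n, n1, n2):
        # Snell's law in vector form: bend unit direction d at a surface with
        # unit normal n (pointing toward the incoming ray), going from a
        # medium of refractive index n1 into one of index n2.
        d = d / np.linalg.norm(d)
        n = n / np.linalg.norm(n)
        cos_i = -np.dot(n, d)
        r = n1 / n2
        k = 1.0 - r * r * (1.0 - cos_i * cos_i)
        if k < 0.0:
            return None  # total internal reflection: no transmitted ray
        return r * d + (r * cos_i - np.sqrt(k)) * n
    ```

    Tracing each image ray through the successive interfaces with this rule, then triangulating the corrected rays from multiple views, yields specimen coordinates free of refraction bias.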

    Visual Servoing

    The goal of this book is to introduce current vision applications by leading researchers worldwide and to offer knowledge that can also be applied broadly to other fields. The book collects the main current studies on machine vision and makes a persuasive case for its applications, with contents demonstrating how machine vision theory is realized in different fields. Beginners will find it easy to understand developments in visual servoing, while engineers, professors, and researchers can study the chapters and then apply the methods in other settings.