517 research outputs found

    Towards binocular active vision in a robot head system

    This paper presents the first results of an investigation and pilot study into an active, binocular vision system that combines binocular vergence, object recognition and attention control in a unified framework. The prototype developed is capable of identifying, targeting, verging on and recognizing objects in a highly-cluttered scene without the need for calibration or other knowledge of the camera geometry. This is achieved by implementing all image analysis in a symbolic space without creating explicit pixel-space maps. The system structure is based on the ‘searchlight metaphor’ of biological systems. We present results of a first pilot investigation that yield a maximum vergence error of 6.4 pixels, while seven of nine known objects were recognized in a highly-cluttered environment. Finally, a “stepping stone” visual search strategy was demonstrated, taking a total of 40 saccades to find two known objects in the workspace, neither of which appeared simultaneously within the Field of View resulting from any individual saccade.

    Vergence control system for stereo depth recovery

    This paper describes a vergence control algorithm for a 3D stereo recovery system. This work has been developed within the framework of the ROBTET project, whose purpose is to design a teleoperated robotic system for live power-line maintenance. The tasks involved require the automatic calculation of paths for standard tasks, collision detection to avoid electrical shocks, force feedback, accurate visual data, and the generation of collision-free real paths. To accomplish these tasks the system needs an exact model of the environment, which is acquired through an active stereoscopic head. A cooperative algorithm using vergence and stereo correlation is shown. The proposed system is implemented through an algorithm based on phase correlation that tries to keep vergence on the object of interest. The sharp vergence changes produced by the variation of the objects of interest are controlled through an estimate of the depth distance generated by a stereo correspondence system. Some elements of the scene, those aligned with the epipolar plane, produce large errors in both the depth estimation and the phase correlation. To minimize these errors, a laser lighting system is used to help fixation, assuring adequate vergence and depth extraction. The work presented in this paper has been supported by the electric utility IBERDROLA, S.A. under project PIE No. 132.198.
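The phase-correlation cue named in this abstract can be illustrated with a short, self-contained sketch. The 1-D simplification and function name below are assumptions for illustration, not the ROBTET implementation: the shift between corresponding left/right signals appears as an impulse in the inverse FFT of the normalised cross-power spectrum.

```python
import numpy as np

def phase_correlation_shift(left, right):
    """Estimate the integer shift between two 1-D signals via phase
    correlation: the normalised cross-power spectrum's inverse FFT
    peaks at the displacement."""
    F_l = np.fft.fft(left)
    F_r = np.fft.fft(right)
    cross = F_l * np.conj(F_r)
    cross /= np.abs(cross) + 1e-12      # keep only the phase term
    corr = np.fft.ifft(cross).real      # impulse at the shift
    shift = int(np.argmax(corr))
    if shift > len(left) // 2:          # fold wrapped negative shifts
        shift -= len(left)
    return shift
```

In a vergence loop, the recovered shift would drive the cameras' vergence angle so the object of interest stays at zero disparity.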

    General Dynamic Scene Reconstruction from Multiple View Video

    This paper introduces a general approach to dynamic scene reconstruction from multiple moving cameras without prior knowledge or limiting constraints on the scene structure, appearance, or illumination. Existing techniques for dynamic scene reconstruction from multiple wide-baseline camera views primarily focus on accurate reconstruction in controlled environments, where the cameras are fixed and calibrated and the background is known. These approaches are not robust for general dynamic scenes captured with sparse moving cameras. Previous approaches for outdoor dynamic scene reconstruction assume prior knowledge of the static background appearance and structure. The primary contributions of this paper are twofold: an automatic method for initial coarse dynamic scene segmentation and reconstruction without prior knowledge of background appearance or structure; and a general robust approach for joint segmentation refinement and dense reconstruction of dynamic scenes from multiple wide-baseline static or moving cameras. Evaluation is performed on a variety of indoor and outdoor scenes with cluttered backgrounds and multiple dynamic non-rigid objects such as people. Comparison with state-of-the-art approaches demonstrates improved accuracy in both multiple view segmentation and dense reconstruction. The proposed approach also eliminates the requirement for prior knowledge of scene structure and appearance.

    Analyses of stone surfaces by optical methods

    Ornamental stone products are generally used for decorative cladding. A major quality parameter is their aesthetic appearance, which directly impacts their commercial value. The surface quality of stone products depends on the presence of defects, due both to the unpredictability of natural materials and to the actual manufacturing process. This work starts by reviewing the literature on optical methods for stone surface inspection. A classification is then proposed, focusing on their industrial applicability, in order to provide a guideline for future investigations. Three innovative systems are proposed and described in detail: a vision system, an optical profilometer and a reflectometer for the inspection of polished, bush-hammered, sand-blasted, flame-finished, waterjet-processed, and laser-engraved surfaces.

    Scene understanding by robotic interactive perception

    This thesis presents a novel and generic visual architecture for scene understanding by robotic interactive perception. The proposed visual architecture is fully integrated into autonomous systems performing object perception and manipulation tasks, and uses interaction with the scene in order to improve scene understanding substantially over non-interactive models. Specifically, this thesis presents two experimental validations of an autonomous system interacting with the scene: firstly, an autonomous gaze control model is investigated, where the vision sensor directs its gaze to satisfy a scene exploration task; secondly, autonomous interactive perception is investigated, where objects in the scene are repositioned by robotic manipulation. The proposed visual architecture for scene understanding involving perception and manipulation tasks has four components: 1) a reliable vision system; 2) camera and hand-eye calibration to integrate the vision system into an autonomous robot’s kinematic frame chain; 3) a visual model performing perception tasks and providing the knowledge required for interaction with the scene; and finally, 4) a manipulation model which, using knowledge received from the perception model, chooses an appropriate action (from a set of simple actions) to satisfy a manipulation task. This thesis presents contributions for each of the aforementioned components. Firstly, a portable active binocular robot vision architecture that integrates a number of visual behaviours is presented. This active vision architecture has the ability to verge, localise, recognise and simultaneously identify multiple target object instances. The portability and functional accuracy of the proposed vision architecture are demonstrated by carrying out both qualitative and comparative analyses using different robot hardware configurations, feature extraction techniques and scene perspectives. 
Secondly, a camera and hand-eye calibration methodology for integrating an active binocular robot head within a dual-arm robot is described. For this purpose, the forward kinematic model of the active robot head is derived and the methodology for calibrating and integrating the robot head is described in detail. A rigid calibration methodology has been implemented to provide a closed-form hand-to-eye calibration chain, and this has been extended with a mechanism that allows the camera external parameters to be updated dynamically for optimal 3D reconstruction, to meet the requirements of robotic tasks such as grasping and manipulating rigid and deformable objects. Experimental results show that the robot head achieves an overall accuracy of less than 0.3 millimetres while recovering the 3D structure of a scene. In addition, a comparative study between current RGB-D cameras and our active stereo head within two dual-arm robotic test-beds is reported, demonstrating the accuracy and portability of our proposed methodology. Thirdly, this thesis proposes a visual perception model for the task of category-wise object sorting, based on Gaussian Process (GP) classification, that is capable of recognising object categories from point cloud data. In this approach, Fast Point Feature Histogram (FPFH) features are extracted from point clouds to describe the local 3D shape of objects, and a Bag-of-Words coding method is used to obtain an object-level vocabulary representation. Multi-class Gaussian Process classification is employed to provide a probability estimate of the identity of the object and serves the key role of modelling perception confidence in the interactive perception cycle. The interaction stage is responsible for invoking the appropriate action skills as required to confirm the identity of an observed object with high confidence as a result of executing multiple perception-action cycles. 
The recognition accuracy of the proposed perception model has been validated on simulated input data using both Support Vector Machine (SVM) and GP-based multi-class classifiers. Results obtained during this investigation demonstrate that by using a GP-based classifier, it is possible to obtain true positive classification rates of up to 80%. Experimental validation of the above semi-autonomous object sorting system shows that the proposed GP-based interactive sorting approach outperforms random sorting by up to 30% when applied to scenes comprising configurations of household objects. Finally, a fully autonomous visual architecture is presented that has been developed to accommodate manipulation skills, allowing an autonomous system to interact with the scene by object manipulation. This proposed visual architecture consists of two stages: 1) a perception stage, which is a modified version of the aforementioned visual interaction model, and 2) an interaction stage, which performs a set of ad-hoc actions relying on the information received from the perception stage. More specifically, the interaction stage reasons over the information (class label and associated probabilistic confidence score) received from the perception stage to choose one of two actions: 1) if an object class has been identified with high confidence, the object is removed from the scene and placed in the basket/bin designated for that particular class; 2) if an object class has been identified with lower confidence then, inspired by the human behaviour of inspecting doubtful objects, an action is chosen to investigate that object further and confirm its identity by capturing more images from different views in isolation. The perception stage then processes these views, so that multiple perception-action/interaction cycles take place. 
From an application perspective, the task of autonomous category-based object sorting is performed, and the experimental design for the task is described in detail.
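The confidence-gated interaction stage described in this abstract can be reduced to a small decision rule. The sketch below is a hypothetical simplification (the labels, action names and the 0.8 threshold are assumptions, not the thesis code): the interaction stage reasons only over the class label and its probabilistic confidence.

```python
def choose_action(class_probs, threshold=0.8):
    """Confidence-gated interaction step.

    class_probs maps each object class label to the classifier's
    probabilistic confidence for the observed object."""
    label = max(class_probs, key=class_probs.get)
    if class_probs[label] >= threshold:
        # High confidence: remove the object and place it in its bin.
        return ("sort_to_bin", label)
    # Low confidence: isolate the object and capture more views,
    # triggering another perception-action cycle.
    return ("inspect_in_isolation", label)
```

Repeated inspection narrows the posterior over classes until the threshold is met, at which point the object is sorted.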

    The Active Stereo Probe: The Design and Implementation of an Active Videometrics System

    This thesis describes research leading to the design and development of the Active Stereo Probe (ASP): an active vision based videometrics system. The ASP espouses both definitions of active vision by integrating structured illumination with a steerable binocular camera platform (or head). However, the primary function of the ASP is to recover quantitative 3D surface models of a scene from stereo images captured from the system's stereo pair of CCD video cameras. Stereo matching is performed using a development of Zhengping and Mowforth's Multiple Scale Signal Matcher (MSSM) stereo matcher. The performance of the original MSSM algorithm was dramatically improved, both in terms of speed of execution and dynamic range, by completely re-implementing it using an efficient scale space pyramid image representation. A range of quantitative performance tests for stereo matchers was developed, and these were applied to the newly developed MSSM stereo matcher to verify its suitability for use in the ASP. The performance of the stereo matcher is further improved by employing the ASP's structured illumination device to bathe the imaged scene in textured light. Few previously reported dynamic binocular camera heads have been able to perform any type of quantitative vision task. It is argued here that this failure has arisen mainly from the rudimentary nature of the design process applied to previous heads. Therefore, in order to address this problem, a new rigorous approach, suitable for the design of both dynamic and static stereo vision systems, was devised. This approach relies extensively upon system modelling as part of the design process. In order to support this new design approach, a general mathematical model of stereo imaging systems was developed and implemented within a software simulator. This simulator was then applied to the analysis of the requirements of the ASP and the MSSM stereo matcher. 
A specification for the imaging and actuation components of the ASP was hence obtained which was predicted to meet its performance requirements. This led directly to the fabrication of the completed ASP sensor head. The developed approach and model have subsequently been used successfully for the design of several other quantitative stereo vision systems. A vital requirement of any vision system that is intended to perform quantitative measurement is calibration. A novel calibration scheme was devised for the ASP by adopting advanced techniques from the field of photogrammetry and adapting them for use in the context of a dynamic computer vision system. The photogrammetric technique known as the Direct Linear Transform was used successfully in the implementation of the first, static stage of this calibration scheme. A significant aspect of the work reported in this thesis is the importance given to integrating the components developed for the ASP, i.e. the sensor head, the stereo matching software and the calibration software, into a complete videometric system. The success of this approach is demonstrated by the high quality of 3D surface models obtained using the integrated videometric system that was developed.
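The Direct Linear Transform used in the static calibration stage admits a compact sketch (the synthetic camera and points below are illustrative, not the ASP's calibration data): each 3D-2D correspondence contributes two linear equations in the twelve entries of the 3x4 projection matrix, and the SVD null-space vector gives the solution up to scale.

```python
import numpy as np

def dlt_projection_matrix(points_3d, points_2d):
    """Estimate a 3x4 projection matrix P from >= 6 world/image point
    correspondences via the Direct Linear Transform: stack two linear
    equations per point and take the SVD null-space as vec(P)."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 4)     # last right-singular vector

def project(P, point_3d):
    """Project a 3D point with P and dehomogenise to pixel coordinates."""
    x = P @ np.append(point_3d, 1.0)
    return x[:2] / x[2]
```

With exact, non-degenerate correspondences the recovered matrix reprojects the calibration points essentially perfectly; with noisy image measurements it is the least-squares solution in the algebraic error.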

    Contributions to virtual reality

    The thesis contributes to three Virtual Reality areas:
    • Visual perception: a calibration algorithm is proposed to estimate stereo projection parameters in head-mounted displays, so that correct shapes and distances can be perceived, and calibration and control procedures are proposed to obtain the desired accommodation stimuli at different virtual distances.
    • Immersive scenarios: the thesis analyzes several use cases demanding varying degrees of immersion, and special, innovative visualization solutions are proposed to fulfil their requirements. Contributions focus on machinery simulators, weather radar volumetric visualization and manual arc welding simulation.
    • Ubiquitous visualization: contributions are presented for scenarios where users access interactive 3D applications remotely. The thesis follows the evolution of Web3D standards and technologies to propose original visualization solutions for volume rendering of weather radar data, e-learning on energy efficiency, virtual e-commerce and visual product configurators.

    Wide baseline pose estimation from video with a density-based uncertainty model

    Robust wide baseline pose estimation is an essential step in the deployment of smart camera networks. In this work, we highlight some current limitations of conventional strategies for relative pose estimation in difficult urban scenes. Then, we propose a solution which relies on an adaptive search of corresponding interest points in synchronized video streams, which allows us to converge robustly toward a high-quality solution. The core idea of our algorithm is to build across the image space a nonstationary mapping of the local pose estimation uncertainty, based on the spatial distribution of interest points. Subsequently, the mapping guides the selection of new observations from the video stream in order to prioritize the coverage of areas of high uncertainty. With an additional step in the initial stage, the proposed algorithm may also be used for refining an existing pose estimation based on the video data; this mode allows for performing a data-driven self-calibration task for stereo rigs for which accuracy is critical, such as onboard medical or vehicular systems. We validate our method on three different datasets which cover typical scenarios in pose estimation. The results show a fast and robust convergence of the solution, with a significant improvement, compared to single image-based alternatives, of the RMSE of ground-truth matches, and of the maximum absolute error.
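A toy version of the density-based uncertainty idea can make the mechanism concrete (the grid size, the 1/(1+n) weighting and the function names are assumptions for illustration, not the paper's model): interest points are binned into a coarse grid, sparsely covered cells receive high uncertainty, and the most uncertain cell is prioritised when selecting new observations from the video stream.

```python
import numpy as np

def uncertainty_map(points, img_w, img_h, grid=4):
    """Bin interest points into a grid x grid map; cells with few
    points get uncertainty near 1, densely covered cells near 0."""
    counts = np.zeros((grid, grid))
    for x, y in points:
        i = min(int(y / img_h * grid), grid - 1)   # row from y
        j = min(int(x / img_w * grid), grid - 1)   # column from x
        counts[i, j] += 1
    return 1.0 / (1.0 + counts)

def next_region(points, img_w, img_h, grid=4):
    """(row, col) of the grid cell to prioritise for new matches."""
    u = uncertainty_map(points, img_w, img_h, grid)
    return np.unravel_index(np.argmax(u), u.shape)
```

As new correspondences arrive from the stream, the map is updated and the search keeps shifting toward whichever region remains under-constrained.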