    Virtual Reality to Simulate Visual Tasks for Robotic Systems

    Virtual reality (VR) can be used as a tool to analyze the interactions between the visual system of a robotic agent and the environment, with the aim of designing algorithms to solve the visual tasks needed to behave properly in the 3D world. The novelty of our approach lies in using VR to simulate the behavior of vision systems. The visual system of a robot (e.g., an autonomous vehicle, an active vision system, or a driving assistance system) and its interplay with the environment can be modeled through the geometrical relationships between the virtual stereo cameras and the virtual 3D world. Unlike conventional applications, where VR is used for the perceptual rendering of visual information to a human observer, in the proposed approach a virtual world is rendered to simulate the actual projections onto the cameras of a robotic system. In this way, machine vision algorithms can be quantitatively validated against the ground-truth data provided by knowledge of both the structure of the environment and the vision system.
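    As a rough sketch of this idea, the Python snippet below (all camera parameters are illustrative assumptions, not values from the paper) projects a known 3D point into a pair of virtual pinhole cameras; because the virtual scene is fully known, the resulting disparity serves as ground truth against which a stereo algorithm under test could be checked:

```python
import numpy as np

# Assumed virtual stereo rig: identical pinhole cameras with a
# horizontal baseline (values are illustrative, not from the paper).
FOCAL_PX = 800.0       # focal length in pixels
BASELINE_M = 0.10      # distance between camera centres, metres
CX, CY = 320.0, 240.0  # principal point (640x480 image)

def project(point_cam):
    """Project a 3D point (camera coordinates, metres) to pixel coordinates."""
    x, y, z = point_cam
    return np.array([FOCAL_PX * x / z + CX, FOCAL_PX * y / z + CY])

# A point in the virtual 3D world, expressed in the left camera frame.
p_left = np.array([0.25, -0.05, 2.0])

# The right camera is translated along +x by the baseline, so the same
# point in the right camera frame has its x coordinate shifted.
p_right = p_left - np.array([BASELINE_M, 0.0, 0.0])

uv_left = project(p_left)
uv_right = project(p_right)

# Ground-truth disparity follows from the known geometry (d = f * B / Z),
# available here only because the virtual scene is known exactly.
disparity = uv_left[0] - uv_right[0]
depth = FOCAL_PX * BASELINE_M / disparity

print(f"disparity: {disparity:.2f} px, recovered depth: {depth:.3f} m")
```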

    Multisensorial Active Perception for Indoor Environment Modeling

    Robot Arms with 3D Vision Capabilities

    Sensor development for estimation of biomass yield applied to Miscanthus Giganteus

    Precision Agriculture technologies such as yield monitoring have been available for traditional field crops for decades, but none are currently available for energy crops such as Miscanthus Giganteus (MxG), switchgrass, and sugar cane. The availability of yield monitors would allow better organization and scheduling of harvesting operations, and real-time yield data would allow adaptive speed control of a harvester to optimize performance. A yield monitor estimates the total amount of biomass per coverage area, in kg/m², as a function of location. However, for herbaceous crops such as MxG and switchgrass, directly measuring the biomass entering a harvester in the field is complicated and impractical, so a novel yield monitoring system was proposed. The approach taken was an indirect measure: determine the volume of biomass entering the harvester as a function of time by multiplying the diameter-related cross-sectional area, the stem height, and the crop density of the MxG, then multiply this volume by an assumed constant material density of the crop to obtain a mass flow per unit of time. The coverage area per unit of time is the width of the cutting device multiplied by the machine speed. The ratio of mass flow to coverage area is the yield per unit area, and adding GPS data geo-references the yield.
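    As a rough numerical sketch of this indirect estimate (all parameter values below are invented placeholders, not numbers from the thesis):

```python
# Illustrative sketch of the indirect yield estimate described above.
# Every number here is an assumed placeholder, not a thesis value.

stem_area_m2 = 7.9e-5     # cross-sectional area of one stem (~10 mm diameter)
stem_height_m = 3.0       # from the LIDAR height sensor
stems_per_m2 = 40.0       # crop density from the vision sensor
material_density = 300.0  # assumed constant crop material density, kg/m^3

cut_width_m = 3.0         # width of the harvester cutting device
speed_m_s = 1.5           # harvester ground speed

# Ground area covered per second, and biomass volume entering the
# harvester per second: (area per stem) * height * (stems/m^2) * coverage.
coverage_m2_s = cut_width_m * speed_m_s
volume_m3_s = stem_area_m2 * stem_height_m * stems_per_m2 * coverage_m2_s

# Mass flow, and yield per unit area; GPS would geo-reference the latter.
mass_flow_kg_s = volume_m3_s * material_density
yield_kg_m2 = mass_flow_kg_s / coverage_m2_s

print(f"mass flow: {mass_flow_kg_s:.2f} kg/s, yield: {yield_kg_m2:.2f} kg/m^2")
```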
    To measure the height of MxG stems, a light detection and ranging (LIDAR) based height measurement approach was developed, in which the LIDAR scans the MxG vertically. Two measurement modes, static and dynamic, were designed and tested. A geometrical MxG height measurement model was developed and analyzed to obtain the resolution of the height measurement, and an inclination correction method was proposed to correct errors caused by uneven ground. The relationship between yield and stem height was analyzed and found to be linear.
    To estimate the MxG stem diameter, two types of sensors were developed and evaluated. First, a LIDAR-based diameter sensor was designed and tested, with the LIDAR scanning the MxG stems horizontally. A measurement geometry model of the LIDAR was developed to determine the region of interest, and an angle-continuity-based pre-grouping algorithm was applied to group the raw LIDAR data. Based on analysis of how MxG stems appear in the LIDAR data, a fuzzy clustering technique was developed to identify the stems within the clusters, and the diameter was estimated from the clustering result. Four clustering techniques were compared, and the Gustafson-Kessel algorithm was selected on the basis of its performance. A drawback of the LIDAR-based diameter sensor was that it could only be used for static diameter measurement.
    An alternative machine-vision-based diameter sensor, which supported dynamic measurement, was therefore developed. A binocular stereo vision diameter sensor and a structured-lighting-based monocular vision diameter estimation system were developed and evaluated in sequence. Both systems used structured lighting from a downward-slanted laser sheet to provide detectable features in the images, and an image-segmentation-based algorithm was developed to detect these features and identify the MxG stems in both systems. A model of the horizontally covered length per pixel was built and validated to extract diameter information from the images. The key difference between the binocular and monocular systems was the approach to depth estimation: in the binocular system, depth was obtained from the disparities of matched features in image pairs, with features matched by pixel similarity using both one-dimensional and two-dimensional image matching algorithms; in the monocular system, depth was obtained from a geometric perspective model of the sensor unit. The relationship between yield and stem diameter was analyzed; the results showed that yield depends more strongly on stem height than on diameter, and that the relationship between yield and stem volume is linear.
    Crop density estimation was also based on the monocular vision system. To predict the crop density, the geometric perspective model of the sensor unit was further analyzed to calculate the coverage area of the sensor, and a Monte Carlo model was designed to predict the number of occluded MxG stems from the number of visible stems in the images. The results indicated that yield is linearly related to the number of stems, with a zero intercept and the average individual stem mass as the coefficient.
    All sensors were evaluated in the field during the 2009, 2010, and 2011 growing seasons, using manually measured parameters (height, diameter, and crop density) as references. The LIDAR-based height sensor achieved accuracies of 92% (0.30 m error) to 98.2% (0.06 m error) in static height measurements and 93.5% (0.22 m error) to 98.5% (0.05 m error) in dynamic height measurements. For diameter, the machine-vision-based sensors were more accurate than the LIDAR-based sensor: the binocular and monocular systems achieved accuracies of 93.1% and 93.5% for individual stem diameter estimation and 99.8% and 99.9% for average stem diameter estimation, while the LIDAR-based sensor achieved 92.5% for average stem diameter estimation. Among the three stem diameter sensors, the monocular vision sensor was recommended for its higher accuracy and lower device and computation cost. The machine-vision-based crop density measurement achieved an accuracy of 92.2%.
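    A minimal sketch of the Monte Carlo occlusion idea mentioned above, under a deliberately simplified geometry (random stem placement in a rectangular footprint, with a stem counted as occluded when a nearer stem subtends the same viewing angle; none of these details are taken from the thesis):

```python
import random

# Hedged sketch of a Monte Carlo occlusion model: randomly place stems in
# the sensor footprint and count how many are hidden behind nearer stems.
# Geometry and parameters are illustrative assumptions, not thesis values.

random.seed(42)

def visible_fraction(true_stems, trials=2000, fov_m=1.0, depth_m=1.0,
                     stem_d_m=0.01):
    """Mean fraction of stems visible from a sensor at the origin."""
    total = 0.0
    for _ in range(trials):
        stems = [(random.uniform(0.0, fov_m), random.uniform(0.5, 0.5 + depth_m))
                 for _ in range(true_stems)]
        visible = 0
        for x, z in stems:
            # Occluded if a nearer stem lies within one stem diameter of
            # the same line of sight (angular offset scaled back to depth z).
            hidden = any(abs(x2 / z2 - x / z) * z < stem_d_m and z2 < z
                         for x2, z2 in stems if (x2, z2) != (x, z))
            visible += not hidden
        total += visible / true_stems
    return total / trials

# Invert the simulated relationship: scale the visible count in an image
# up by the predicted visible fraction to estimate the true stem count.
frac = visible_fraction(true_stems=40)
print(f"visible fraction for 40 stems: {frac:.2f}")
print(f"estimated total from 30 visible stems: {30 / frac:.1f}")
```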

    Measurement of crosstalk in stereoscopic display systems used for vision research

    Studying binocular vision requires precise control over the stimuli presented to the left and right eyes. A popular technique is to segregate the two eyes' signals temporally (frame interleaving), spectrally (using coloured filters), or through light polarization. None of these segregation methods achieves perfect isolation, so a degree of ‘crosstalk’ is usually apparent, in which signals intended for one eye are faintly visible to the other eye. Previous studies have reported crosstalk values mostly for consumer-grade systems. Here we measure crosstalk for eight systems, many of which are intended for use in vision research. We provide benchmark crosstalk values, report a negative crosstalk effect in some LCD-based systems, and give guidelines for dealing with crosstalk in different experimental paradigms.
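    A common definition of crosstalk for stereoscopic displays, sketched below with invented photometer readings as a hedged illustration (the paper's exact metric may differ), is the leakage luminance reaching the unintended eye relative to the intended signal, both corrected for the display's black level:

```python
# Hedged sketch of a common crosstalk metric: luminance leaking into the
# unintended eye, relative to the intended signal, black-level corrected.
# The photometer readings below are made-up example values (cd/m^2).

def crosstalk_percent(leak, signal, black):
    """Crosstalk = (leakage - black) / (signal - black) * 100."""
    return 100.0 * (leak - black) / (signal - black)

L_black = 0.15   # both eyes' images black
L_signal = 48.0  # luminance through the correct eye's filter, white shown
L_leak = 1.10    # luminance through the blocked eye's filter, white shown
                 # to the other eye only

print(f"crosstalk: {crosstalk_percent(L_leak, L_signal, L_black):.2f} %")
# Negative values (as reported for some LCD systems in the paper) occur
# when the measured leakage falls below the black level.
```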

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for registering multi-modal patient-specific data, both to enhance the surgeon's navigation capabilities by revealing structures beyond exposed tissue surfaces and to provide intelligent control of robot-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion of technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.