
    Development of Moire machine vision

    Three-dimensional perception is essential to the development of versatile robotic systems that can handle complex manufacturing tasks in future factories and provide the high-accuracy measurements needed in flexible manufacturing and quality control. A program is described which will develop the potential of Moire techniques to provide this capability in vision systems and automated measurements, and will demonstrate artificial intelligence (AI) techniques that take advantage of the strengths of Moire sensing. Moire techniques provide a means of optically manipulating the complex visual data in a three-dimensional scene into a form which can be easily and quickly analyzed by computers. This type of optical data manipulation provides high productivity through integrated automation, producing a high-quality product while reducing computer and mechanical manipulation requirements, and thereby the cost and time of production. This nondestructive evaluation technique is developed to enable full-field range measurement and three-dimensional scene analysis.
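    The core Moire relation is simple: each fringe corresponds to a fixed height increment set by the grating pitch and the projection angle. A minimal sketch of how an unwrapped fringe-phase map converts to a full-field range map, assuming a standard projection-Moire geometry (the function name and parameter values are illustrative, not taken from the program described above):

```python
import numpy as np

def moire_height_map(phase, pitch, proj_angle_rad):
    # Standard projection-Moire contouring relation: one full fringe
    # (2*pi of phase) corresponds to a height step of pitch / tan(angle).
    return phase * pitch / (2.0 * np.pi * np.tan(proj_angle_rad))

# Example: a synthetic 4-fringe ramp over a 100x100 image,
# 1 mm grating pitch, 30-degree projection angle.
phase = np.tile(np.linspace(0.0, 8.0 * np.pi, 100), (100, 1))
height = moire_height_map(phase, pitch=1.0, proj_angle_rad=np.radians(30.0))
print(height.min(), height.max())  # full-field range map, in mm
```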

    Active Estimation of Distance in a Robotic Vision System that Replicates Human Eye Movement

    Many visual cues, both binocular and monocular, provide 3D information. When an agent moves with respect to a scene, an important cue is the different motion of objects located at various distances. While motion parallax is evident for large translations of the agent, in most head/eye systems a small parallax also occurs during rotations of the cameras. A similar parallax is present in the human eye: during a relocation of gaze, the shift in the retinal projection of an object depends not only on the amplitude of the movement, but also on the distance of the object with respect to the observer. This study proposes a method for estimating distance on the basis of the parallax that emerges from rotations of a camera. A pan/tilt system specifically designed to reproduce the oculomotor parallax present in the human eye was used to replicate the oculomotor strategy by which humans scan visual scenes. We show that the oculomotor parallax provides accurate estimation of distance during sequences of eye movements. In a system that actively scans a visual scene, challenging tasks such as image segmentation and figure/ground segregation greatly benefit from this cue. National Science Foundation (BIC-0432104, CCF-0130851)
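    The abstract does not spell out the estimator, but the underlying geometry is straightforward: because the nodal point of the lens sits in front of the rotation center, a gaze rotation also translates the nodal point, and the resulting residual image shift scales inversely with target distance. A hedged small-angle sketch (the function, the offset value, and the simplified geometry are assumptions, not the paper's method):

```python
import numpy as np

def distance_from_rotational_parallax(alpha, observed_shift, r):
    # A pure rotation by alpha (rad) would shift an object's image
    # direction by exactly alpha. Because the nodal point sits a
    # distance r (m) in front of the rotation axis, it translates
    # laterally by about t = r * sin(alpha), adding a parallax of
    # roughly t / d for a target at distance d. Solving:
    #     d ~= r * sin(alpha) / (alpha - observed_shift)
    parallax = alpha - observed_shift  # residual after removing rotation
    return r * np.sin(alpha) / parallax

# Example: a 10-degree gaze shift with the nodal point 5.5 mm in front
# of the rotation axis; the measured shift is slightly under 10 degrees.
alpha = np.radians(10.0)
d = distance_from_rotational_parallax(alpha, observed_shift=alpha - 0.002,
                                      r=0.0055)
print(f"estimated distance: {d:.3f} m")
```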

    Multi-Scale 3D Scene Flow from Binocular Stereo Sequences

    Scene flow methods estimate the three-dimensional motion field for points in the world, using multi-camera video data. Such methods combine multi-view reconstruction with motion estimation. This paper describes an alternative formulation for dense scene flow estimation that provides reliable results using only two cameras, by fusing stereo and optical flow estimation into a single coherent framework. Internally, the proposed algorithm generates probability distributions for optical flow and disparity. Taking into account the uncertainty in these intermediate stages allows for more reliable estimation of the 3D scene flow than previous methods allow. To handle the aperture problems inherent in the estimation of optical flow and disparity, a multi-scale method along with a novel region-based technique is used within a regularized solution. This combined approach both preserves discontinuities and prevents over-regularization, two problems commonly associated with basic multi-scale approaches. Experiments with synthetic and real test data demonstrate the strength of the proposed approach. National Science Foundation (CNS-0202067, IIS-0208876); Office of Naval Research (N00014-03-1-0108)
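    As a point of reference for how stereo and flow jointly constrain 3D motion, here is a minimal per-pixel sketch: triangulate a point from its disparity at time t, follow the optical flow to its position at t+1, triangulate again, and difference the two 3D points. This illustrates the geometry only, not the paper's probabilistic multi-scale fusion; rectified cameras and all parameter names are assumptions:

```python
import numpy as np

def scene_flow_point(x, y, d_t, u, v, d_t1, f, B):
    # x, y   : pixel coordinates relative to the principal point
    # d_t    : disparity at time t;  u, v : optical flow (px)
    # d_t1   : disparity at the flow-displaced pixel at time t+1
    # f, B   : focal length (px) and stereo baseline (m)
    def backproject(px, py, disp):
        Z = f * B / disp                     # depth from disparity
        return np.array([px * Z / f, py * Z / f, Z])

    P_t = backproject(x, y, d_t)
    P_t1 = backproject(x + u, y + v, d_t1)
    return P_t1 - P_t                        # 3D scene flow (m/frame)

# Example: a point moving toward the camera (disparity grows).
print(scene_flow_point(x=50, y=-20, d_t=10.0, u=1.5, v=0.0,
                       d_t1=11.0, f=700.0, B=0.12))
```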

    Aerospace medicine and biology: A continuing bibliography with indexes (supplement 338)

    This bibliography lists 139 reports, articles, and other documents introduced into the NASA Scientific and Technical Information System during June 1990. Subject coverage includes: aerospace medicine and psychology, life support systems and controlled environments, safety equipment, exobiology and extraterrestrial life, and flight crew behavior and performance.

    Sensor development for estimation of biomass yield applied to Miscanthus Giganteus

    Precision agriculture technologies such as yield monitoring have been available for traditional field crops for decades. However, none are currently available for energy crops such as Miscanthus Giganteus (MxG), switchgrass, and sugar cane. The availability of yield monitors would allow better organization and scheduling of harvesting operations, and real-time yield data would allow adaptive speed control of a harvester to optimize performance. A yield monitor estimates the total amount of biomass per coverage area, in kg/m2, as a function of location. However, for herbaceous crops such as MxG and switchgrass, directly measuring the biomass entering a harvester in the field is complicated and impractical. Therefore, a novel yield monitoring system was proposed. The approach taken was to employ an indirect measure, determining the volume of biomass entering the harvester as a function of time. This volume is obtained by multiplying the diameter-related cross-sectional area, the height, and the crop density of MxG; it is then multiplied by an assumed constant material density of the crop, which yields a mass flow per unit of time. The coverage area per unit of time is the width of the cutting device multiplied by the machine speed. The ratio between the mass flow and the coverage area is the yield per area, and adding GPS data geo-references the yield (see the sketch after this abstract section).
    To measure the height of MxG stems, a light detection and ranging (LIDAR) based height measurement approach was developed, in which the LIDAR scans the MxG vertically. Two measurement modes, static and dynamic, were designed and tested. A geometrical MxG height measurement model was developed and analyzed to obtain the resolution of the height measurement, and an inclination correction method was proposed to correct errors caused by the uneven ground surface. The relationship between yield and stem height was discussed and analyzed, resulting in a linear relationship.
    To estimate the MxG stem diameter, two types of sensors were developed and evaluated. First, a LIDAR-based diameter sensor was designed and tested, with the LIDAR scanning the MxG stems horizontally. A measurement geometry model of the LIDAR was developed to determine the region of interest, and an angle-continuity-based pre-grouping algorithm was applied to group the raw LIDAR data. Based on an analysis of how MxG stems appear in the LIDAR data, a fuzzy clustering technique was developed to identify the stems within the clusters, and the diameter was estimated from the clustering result. Four clustering techniques were compared; based on their performance, the Gustafson-Kessel clustering algorithm was selected. A drawback of the LIDAR-based diameter sensor was that it could only be used for static diameter measurement. An alternative machine vision based diameter sensor, which supported dynamic measurement, was therefore applied: a binocular stereo vision based diameter sensor and a structured-lighting-based monocular vision diameter estimation system were developed and evaluated in sequence. Both systems used structured lighting from a downward-slanted laser sheet to provide detectable features in the images, and an image segmentation based algorithm was developed to detect these features, which were used to identify the MxG stems in both the binocular and monocular systems.
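    The indirect yield computation described at the start of this abstract reduces to a few multiplications. A minimal sketch, assuming a cylindrical stem model; the function names, parameter names, and example values are illustrative, not taken from the thesis:

```python
import math

def mxg_yield_kg_per_m2(stem_diameter_m, stem_height_m,
                        stems_per_m2, material_density_kg_m3):
    # Per-stem volume (cylinder from measured diameter and height)
    # times crop density (stems/m^2) times an assumed constant
    # material density gives biomass per unit ground area.
    stem_area = math.pi * stem_diameter_m ** 2 / 4.0   # cross-section, m^2
    stem_volume = stem_area * stem_height_m            # m^3 per stem
    return stem_volume * stems_per_m2 * material_density_kg_m3

def coverage_rate_m2_per_s(cut_width_m, speed_m_s):
    # Coverage area per unit time: header width times ground speed.
    return cut_width_m * speed_m_s

# Example: 10 mm stems, 2.5 m tall, 40 stems/m^2, 300 kg/m^3 density,
# 3 m header at 1.5 m/s.
y = mxg_yield_kg_per_m2(0.010, 2.5, 40, 300.0)
rate = coverage_rate_m2_per_s(cut_width_m=3.0, speed_m_s=1.5)
print(f"yield: {y:.2f} kg/m^2, mass flow: {y * rate:.2f} kg/s")
```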
    A horizontally-covered-length-per-pixel model was built and validated to extract the diameter information from the images. The key difference between the binocular and monocular stereo vision systems was the approach used to estimate depth. In the binocular system, depth information was obtained from the disparities of matched features in image pairs, where features were matched on pixel similarity using both one-dimensional and two-dimensional image matching algorithms. In the monocular system, depth was obtained from a geometric perspective model of the diameter sensor unit. The relationship between yield and stem diameter was discussed and analyzed; the results showed that yield depends more strongly on stem height than on diameter, and that the relationship between yield and stem volume is linear.
    The crop density estimation was also based on the monocular stereo vision system. To predict the crop density, the geometric perspective model of the sensor unit was further analyzed to calculate the coverage area of the sensor, and a Monte Carlo model based method was designed to predict the number of occluded MxG stems from the number of visible MxG stems in the images (illustrated below). The results indicated that yield has a linear relationship with the number of stems, with a zero intercept and the average individual stem mass as the coefficient.
    All sensors were evaluated in the field during the growing seasons of 2009, 2010, and 2011, using manually measured parameters (height, diameter, and crop density) as references. The LIDAR based height sensor achieved an accuracy of 92% (0.3 m error) to 98.2% (0.06 m error) in static height measurements, and 93.5% (0.22 m error) to 98.5% (0.05 m error) in dynamic height measurements. For the diameter measurements, the machine vision based sensors were more accurate than the LIDAR based sensor: the binocular and monocular vision based diameter measurements achieved accuracies of 93.1% and 93.5% for individual stem diameter estimation, and 99.8% and 99.9% for average stem diameter estimation, while the LIDAR based sensor achieved 92.5% for average stem diameter estimation. Among the three stem diameter sensors, the monocular vision based sensor was recommended due to its higher accuracy and lower cost in both hardware and computation. The machine vision based crop density measurement achieved an accuracy of 92.2%.
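    The Monte Carlo occlusion correction mentioned above can be illustrated with a simple simulation: scatter candidate stems over the sensor's coverage area, count how many remain unoccluded from the sensor's viewpoint, and invert that relationship to recover the true count from the visible count. The geometry, parameter values, and inversion strategy below are assumptions, not the thesis implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_visible(n_stems, width_m=3.0, depth_m=1.0,
                     stem_diam=0.01, trials=40):
    # Average number of unoccluded stems when n_stems are scattered
    # uniformly over a width_m x depth_m patch viewed from one point.
    # A stem is hidden if any nearer stem covers its bearing within
    # half a stem diameter (small-angle approximation).
    visible = 0
    for _ in range(trials):
        x = rng.uniform(-width_m / 2, width_m / 2, n_stems)  # lateral
        z = rng.uniform(0.5, 0.5 + depth_m, n_stems)         # range
        bearing = x / z
        half_ang = (stem_diam / 2) / z
        order = np.argsort(z)                                # nearest first
        for i, idx in enumerate(order):
            nearer = order[:i]
            if not np.any(np.abs(bearing[nearer] - bearing[idx])
                          < half_ang[nearer]):
                visible += 1
    return visible / trials

def corrected_count(n_visible, n_max=120):
    # Invert the occlusion model: pick the true count whose expected
    # visible count best matches the observation.
    return min(range(n_visible, n_max),
               key=lambda n: abs(expected_visible(n) - n_visible))

# Example: 35 stems visible in the image; estimate the true count.
print(corrected_count(35))
```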

    Computing motion in the primate's visual system

    Computing motion on the basis of the time-varying image intensity is a difficult problem for both artificial and biological vision systems. We show how one well-known gradient-based computer algorithm for estimating visual motion can be implemented within the primate's visual system. This relaxation algorithm computes the optical flow field by minimizing a variational functional of a form commonly encountered in early vision, and is performed in two stages. In the first stage, local motion is computed; in the second stage, spatial integration occurs. Neurons in the second stage represent the optical flow field via a population-coding scheme, such that the vector sum of all neurons at each location codes for the direction and magnitude of the velocity at that location. The resulting network maps onto the magnocellular pathway of the primate visual system, in particular onto cells in the primary visual cortex (V1) as well as onto cells in the middle temporal area (MT). Our algorithm mimics a number of psychophysical phenomena and illusions (perception of coherent plaids, motion capture, motion coherence) as well as electrophysiological recordings. Thus, a single unifying principle, 'the final optical flow should be as smooth as possible' (except at isolated motion discontinuities), explains a large number of phenomena and links single-cell behavior with perception and computational theory.
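    The smoothness principle quoted above is the one made famous by the Horn and Schunck formulation, and the two-stage relaxation can be sketched in a few lines: each iteration pulls the flow toward its local neighborhood average, then corrects it along the brightness-constancy constraint. A minimal illustration (periodic borders and the parameter values are simplifications, not the paper's neural implementation):

```python
import numpy as np

def horn_schunck(I1, I2, lam=10.0, n_iter=200):
    # Minimize  E = sum (Ix*u + Iy*v + It)^2 + lam*(|grad u|^2 + |grad v|^2)
    # by relaxation: move (u, v) toward the neighborhood average, then
    # project back toward the brightness-constancy constraint line.
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)

    def avg(f):
        # 4-neighbor average; np.roll gives periodic borders (a simplification)
        return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                       np.roll(f, 1, 1) + np.roll(f, -1, 1))

    for _ in range(n_iter):
        u_bar, v_bar = avg(u), avg(v)
        t = (Ix * u_bar + Iy * v_bar + It) / (lam + Ix ** 2 + Iy ** 2)
        u = u_bar - Ix * t
        v = v_bar - Iy * t
    return u, v

# Example: a bright square shifted one pixel to the right between frames;
# the recovered horizontal flow inside the square should be positive.
I1 = np.zeros((32, 32))
I1[12:20, 12:20] = 1.0
I2 = np.roll(I1, 1, axis=1)
u, v = horn_schunck(I1, I2)
print(u[14:18, 14:18].mean())
```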