
    The association between retinal vein ophthalmodynamometric force change and optic disc excavation

    Aim: Retinal vein ophthalmodynamometric force (ODF) is predictive of future optic disc excavation in glaucoma, but it is not known whether variation in ODF affects prognosis. We aimed to assess whether a change in ODF provides additional prognostic information. Methods: 135 eyes of 75 patients with glaucoma, or who were glaucoma suspects, had intraocular pressure (IOP), visual fields, stereo optic disc photography and ODF measured on an initial visit and on a subsequent visit a mean of 82 (SD 7.3) months later. Corneal thickness and blood pressure were recorded at the latter visit. When venous pulsation was spontaneous, the ODF was recorded as 0 g. Change in ODF was calculated. Flicker stereochronoscopy was used to determine the occurrence of optic disc excavation, which was modelled against the measured variables using multiple mixed-effects logistic regression. Results: Change in ODF (p=0.046) was associated with increased excavation. Average IOP (p=0.66) and the other variables were not. The odds ratio for increased optic disc excavation was 1.045 per gram of ODF change (95% CI 1.001 to 1.090). Conclusion: Change in retinal vein ODF may provide additional information to assist with glaucoma prognostication and implies a significant relationship between venous change and glaucoma pathophysiology.
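    Because logistic regression is linear in the log-odds, the reported per-gram odds ratio scales multiplicatively with the size of the ODF change. A minimal sketch of that arithmetic (the function name is illustrative, not from the paper):

```python
import math

def odds_ratio_for_change(or_per_unit: float, delta: float) -> float:
    """Scale a per-unit odds ratio to an arbitrary change in the predictor.

    Logistic regression models log-odds linearly in the predictor, so the
    odds ratio for a change of `delta` units is or_per_unit ** delta,
    equivalently exp(beta * delta) where beta = log(or_per_unit).
    """
    return or_per_unit ** delta

# Reported OR: 1.045 per gram of ODF change (95% CI 1.001 to 1.090).
or_10g = odds_ratio_for_change(1.045, 10.0)
print(round(or_10g, 2))  # a 10 g ODF change corresponds to OR of about 1.55
```

The same scaling applies to the confidence limits, which is why even a modest per-unit odds ratio can imply a substantial effect over a large ODF change.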

    General Dynamic Scene Reconstruction from Multiple View Video

    This paper introduces a general approach to dynamic scene reconstruction from multiple moving cameras without prior knowledge or limiting constraints on the scene structure, appearance, or illumination. Existing techniques for dynamic scene reconstruction from multiple wide-baseline camera views primarily focus on accurate reconstruction in controlled environments, where the cameras are fixed and calibrated and the background is known. These approaches are not robust for general dynamic scenes captured with sparse moving cameras. Previous approaches for outdoor dynamic scene reconstruction assume prior knowledge of the static background appearance and structure. The primary contributions of this paper are twofold: an automatic method for initial coarse dynamic scene segmentation and reconstruction without prior knowledge of background appearance or structure; and a general robust approach for joint segmentation refinement and dense reconstruction of dynamic scenes from multiple wide-baseline static or moving cameras. Evaluation is performed on a variety of indoor and outdoor scenes with cluttered backgrounds and multiple dynamic non-rigid objects such as people. Comparison with state-of-the-art approaches demonstrates improved accuracy in both multiple-view segmentation and dense reconstruction. The proposed approach also eliminates the requirement for prior knowledge of scene structure and appearance.
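    The key idea of segmenting dynamic content without a prior background model can be illustrated with a stand-in: estimate an implicit static background from the sequence itself (here a temporal median) and mark deviating pixels as dynamic. This is only a sketch of the concept, not the paper's actual algorithm:

```python
import numpy as np

def coarse_dynamic_mask(frames: np.ndarray, thresh: float = 25.0) -> np.ndarray:
    """Coarse dynamic/static segmentation with no *prior* background model:
    the temporal median of the sequence serves as an implicit static
    estimate, and pixels that ever deviate strongly from it are dynamic.

    frames: (T, H, W) grayscale stack.
    returns: (H, W) boolean mask, True where motion was detected.
    """
    background = np.median(frames, axis=0)       # implicit static estimate
    deviation = np.abs(frames - background).max(axis=0)
    return deviation > thresh

# Toy sequence: a static ramp with a bright "object" sweeping along one row.
T, H, W = 9, 8, 8
frames = np.tile(np.linspace(0, 100, W, dtype=np.float32), (T, H, 1))
for t in range(T):
    frames[t, 3, t % W] += 120.0                 # moving object in row 3
mask = coarse_dynamic_mask(frames)
print(mask[3].all(), mask[0].any())  # True False: only the swept row is dynamic
```

A real system would refine such a coarse mask jointly with the multi-view reconstruction, as the paper proposes.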

    Multi-Scale 3D Scene Flow from Binocular Stereo Sequences

    Scene flow methods estimate the three-dimensional motion field for points in the world, using multi-camera video data. Such methods combine multi-view reconstruction with motion estimation. This paper describes an alternative formulation for dense scene flow estimation that provides reliable results using only two cameras by fusing stereo and optical flow estimation into a single coherent framework. Internally, the proposed algorithm generates probability distributions for optical flow and disparity. Taking into account the uncertainty in the intermediate stages allows for more reliable estimation of the 3D scene flow than previous methods allow. To handle the aperture problem inherent in the estimation of optical flow and disparity, a multi-scale method along with a novel region-based technique is used within a regularized solution. This combined approach both preserves discontinuities and prevents over-regularization – two problems commonly associated with the basic multi-scale approaches. Experiments with synthetic and real test data demonstrate the strength of the proposed approach. National Science Foundation (CNS-0202067, IIS-0208876); Office of Naval Research (N00014-03-1-0108).
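    The geometric core of stereo-plus-flow scene flow can be sketched as follows: back-project a pixel at time t using its disparity, follow the optical flow to its pixel at t+1, back-project again with the disparity there, and difference the two 3D points. The calibration values and point estimates below are illustrative assumptions (the paper works with full probability distributions rather than point estimates):

```python
import numpy as np

def backproject(x, y, d, f=500.0, b=0.1, cx=320.0, cy=240.0):
    """Pinhole back-projection of pixel (x, y) with disparity d (px) into
    3D camera coordinates. f (focal length, px), b (baseline, m) and
    (cx, cy) (principal point) are assumed calibration values."""
    Z = f * b / d
    X = (x - cx) * Z / f
    Y = (y - cy) * Z / f
    return np.array([X, Y, Z])

def scene_flow(x, y, d_t, flow, d_t1):
    """3D motion of one point from stereo disparities at t and t+1
    linked by the optical flow (u, v)."""
    u, v = flow
    P_t = backproject(x, y, d_t)
    P_t1 = backproject(x + u, y + v, d_t1)
    return P_t1 - P_t

# A point whose disparity halves between frames has doubled its depth,
# so the Z component of its scene flow is positive.
dP = scene_flow(320.0, 240.0, 50.0, (5.0, 0.0), 25.0)
print(dP[2] > 0)  # True: the point moved away from the camera
```

Estimating this densely is exactly where the flow/disparity uncertainty the paper models becomes important: errors in either measurement propagate directly into the 3D motion.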

    Cross-Scale Cost Aggregation for Stereo Matching

    Human beings process stereoscopic correspondence across multiple scales. However, this bio-inspiration is ignored by state-of-the-art cost aggregation methods for dense stereo correspondence. In this paper, a generic cross-scale cost aggregation framework is proposed to allow multi-scale interaction in cost aggregation. We first reformulate cost aggregation from a unified optimization perspective and show that different cost aggregation methods essentially differ in the choices of similarity kernels. Then, an inter-scale regularizer is introduced into the optimization, and solving this new optimization problem leads to the proposed framework. Since the regularization term is independent of the similarity kernel, various cost aggregation methods can be integrated into the proposed general framework. We show that the cross-scale framework is important as it effectively and efficiently expands state-of-the-art cost aggregation methods and leads to significant improvements when evaluated on the Middlebury, KITTI and New Tsukuba datasets. Comment: To appear in the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014) (poster, 29.88%).
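    The inter-scale regularizer admits a small closed-form illustration: given per-scale aggregated costs for one pixel/disparity, coupling scales with a quadratic penalty reduces to one linear solve. This is a minimal sketch of the optimization view only; the framework couples it with arbitrary per-scale similarity-kernel aggregation:

```python
import numpy as np

def cross_scale_combine(costs: np.ndarray, lam: float) -> np.ndarray:
    """Combine per-scale matching costs c_s for one pixel/disparity by

        v* = argmin_v  sum_s (v_s - c_s)^2 + lam * sum_s (v_s - v_{s-1})^2

    Setting the gradient to zero gives (I + lam * L) v = c, where L is
    the graph Laplacian of a chain over scales, so the combined costs
    follow from a single small linear system."""
    S = len(costs)
    L = np.zeros((S, S))
    for s in range(S - 1):            # chain Laplacian over adjacent scales
        L[s, s] += 1.0; L[s + 1, s + 1] += 1.0
        L[s, s + 1] -= 1.0; L[s + 1, s] -= 1.0
    return np.linalg.solve(np.eye(S) + lam * L, costs)

c = np.array([4.0, 1.0, 3.0])         # noisy costs across three scales
print(cross_scale_combine(c, 0.0))    # lam = 0: scales stay independent
v = cross_scale_combine(c, 10.0)      # strong coupling pulls scales together
print(np.ptp(v) < np.ptp(c))          # True: inter-scale spread shrinks
```

With lam = 0 the solve returns the input costs unchanged, which is why the regularizer can wrap any existing aggregation method without altering its single-scale behaviour.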

    Shape from Lambertian Photometric Flow Fields

    A new idea for the analysis of shape from reflectance maps is introduced in this paper. It is shown that local surface orientation and curvature constraints can be obtained at points on a smooth surface by computing the instantaneous rate of change of reflected scene radiance caused by angular variations in illumination geometry. The resulting instantaneous changes in image irradiance values across an optic sensing array of pixels constitute what is termed a photometric flow field. Unlike optic flow fields, which are instantaneous changes in position across an optic array of pixels caused by relative motion, there is no correspondence problem with respect to obtaining the instantaneous change in image irradiance values between successive image frames. This is because the object and camera remain static relative to one another as the illumination geometry changes. There are a number of advantages to using photometric flow fields. One advantage is that local surface orientation and curvature at a point on a smooth surface can be uniquely determined by only slightly varying the incident orientation of an illuminator within a small local neighborhood about a specific incident orientation. Robot manipulators and rotation/positioning jigs can be accurately varied within small ranges of motion. Conventional implementation of photometric stereo requires the use of three vastly different incident orientations of an illuminator, requiring either extensive calibration and/or gross and inaccurate robot arm motions. Another advantage of using photometric flow fields is the duality that exists between determining unknown local surface orientation from a known incident illuminator orientation and determining an unknown incident illuminator orientation from a known local surface orientation. The equations for photometric flow fields allow the quantitative determination of the incident orientation of an illuminator from an object having a known calibrated surface orientation.
    Computer simulations are shown depicting photometric flow fields on a Lambertian sphere. The simulations show how photometric flow fields quantitatively determine local surface orientation from a known incident orientation of an illuminator, as well as how they determine incident illuminator orientation from a known local surface orientation.
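    For a Lambertian point, the photometric flow value is the derivative of irradiance with respect to the illuminant rotation angle: with I = rho * (n . s) and the light direction rotating about an axis w, dI/d(angle) = rho * n . (w x s), a linear constraint on the unknown normal. A sketch verifying this against a finite difference (rho, n, s and w are made-up illustrative inputs):

```python
import numpy as np

def rotate(v, axis, angle):
    """Rodrigues rotation of vector v about a unit axis."""
    axis = axis / np.linalg.norm(axis)
    return (v * np.cos(angle)
            + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

def lambertian(n, s, albedo=1.0):
    """Lambertian image irradiance (illuminated side only)."""
    return albedo * max(np.dot(n, s), 0.0)

rho = 0.8                                      # assumed albedo
n = np.array([0.0, 0.0, 1.0])                  # surface normal
s = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)   # illuminant direction
w = np.array([0.0, 1.0, 0.0])                  # illuminant rotation axis

analytic = rho * np.dot(n, np.cross(w, s))     # predicted dI/d(angle)
eps = 1e-6                                      # central finite difference
numeric = (lambertian(n, rotate(s, w, eps), rho)
           - lambertian(n, rotate(s, w, -eps), rho)) / (2.0 * eps)
print(abs(analytic - numeric) < 1e-6)  # True: the flow matches the derivative
```

Because each small illuminant rotation yields one such linear constraint on n, a few nearby rotations about different axes suffice to pin down the orientation locally, which is the advantage over widely separated photometric stereo illuminant positions described above.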