
    Generalized Boundaries from Multiple Image Interpretations

    Boundary detection is essential for a variety of computer vision tasks such as segmentation and recognition. In this paper we propose a unified formulation and a novel algorithm that are applicable to the detection of different types of boundaries, such as intensity edges, occlusion boundaries, or object category-specific boundaries. Our formulation leads to a simple method with state-of-the-art performance and significantly lower computational cost than existing methods. We evaluate our algorithm on different types of boundaries, from low-level boundaries extracted in natural images, to occlusion boundaries obtained using motion cues and RGB-D cameras, to boundaries from soft-segmentation. We also propose a novel method for figure/ground soft-segmentation that can be used in conjunction with our boundary detection method to improve its accuracy at almost no extra computational cost.
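    The paper's unified formulation is not reproduced here, but the underlying idea of fusing several image interpretations can be illustrated with a minimal sketch: compute gradient responses on each interpretation layer (e.g., raw intensity and a soft-segmentation map) and combine them into one boundary map. The function name, weighting scheme, and normalization below are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch: fuse gradient magnitudes from several interpretation
# layers into a single boundary map. Illustrative only; not the paper's
# closed-form generalized-boundary computation.
import numpy as np
from scipy import ndimage

def boundary_map(layers, weights=None):
    """layers: list of 2-D float arrays of identical shape,
    one per image interpretation (intensity, soft segmentation, ...)."""
    weights = weights if weights is not None else [1.0] * len(layers)
    acc = np.zeros_like(layers[0], dtype=float)
    for w, layer in zip(weights, layers):
        gx = ndimage.sobel(layer, axis=1)  # horizontal derivative
        gy = ndimage.sobel(layer, axis=0)  # vertical derivative
        acc += w * np.hypot(gx, gy)        # weighted gradient magnitude
    return acc / (acc.max() + 1e-12)       # normalize to [0, 1]
```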

    Strain Analysis by a Total Generalized Variation Regularized Optical Flow Model

    In this paper we deal with the important problem of estimating the local strain tensor from a sequence of micro-structural images realized during deformation tests of engineering materials. Since the strain tensor is defined via the Jacobian of the displacement field, we propose to compute the displacement field by a variational model which accounts for properties of the Jacobian of the displacement field. In particular we are interested in areas of high strain. The data term of our variational model relies on the brightness invariance property of the image sequence. As prior we choose the second-order total generalized variation of the displacement field. This prior splits the Jacobian of the displacement field into a smooth and a non-smooth part. The latter reflects the material cracks. An additional constraint is incorporated to handle physical properties of the non-smooth part for tensile tests. We prove that the resulting convex model has a minimizer and show how a primal-dual method can be applied to find one. The corresponding algorithm has the advantage that the strain tensor is directly computed within the iteration process. Our algorithm is further equipped with a coarse-to-fine strategy to cope with larger displacements. Numerical examples with simulated and experimental data demonstrate the very good performance of our algorithm. In comparison to state-of-the-art engineering software for strain analysis, our method resolves local phenomena much better.
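    As a rough schematic of such a model (notation assumed here, not taken from the paper): for an image pair f_1, f_2 and displacement field u, a brightness-invariance data term combined with a second-order total generalized variation prior can be written as

```latex
\min_{u}\; \int_\Omega \bigl| f_2\bigl(x + u(x)\bigr) - f_1(x) \bigr| \,\mathrm{d}x
  \;+\; \mathrm{TGV}_{\alpha}^{2}(u),
\qquad
\mathrm{TGV}_{\alpha}^{2}(u) \;=\; \min_{w}\;
  \alpha_1 \,\| \nabla u - w \|_{1} + \alpha_0 \,\| \mathcal{E} w \|_{1},
```

    where \mathcal{E} w denotes the symmetrized derivative of the auxiliary field w. The splitting \nabla u = w + (\nabla u - w) is exactly the decomposition of the Jacobian into a smooth part and a non-smooth (crack) part mentioned above, which is why the strain tensor is available directly within the iterations.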

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those requiring low latency, high speed, or high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, through the sensors that are actually available, to the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
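    The event-generation principle described above admits a compact sketch. The simplification below samples brightness changes at frame times rather than truly asynchronously, and the contrast threshold C = 0.2 is an illustrative assumption:

```python
# Minimal sketch of the event-camera measurement model: a pixel emits an
# event with polarity +/-1 when its log-brightness has changed by at
# least the contrast threshold C. This is a frame-sampled approximation
# of an asynchronous sensor; the threshold value is illustrative.
import numpy as np

def events_from_frames(prev, curr, t, C=0.2):
    """prev, curr: 2-D intensity frames; returns events (x, y, t, polarity)."""
    dlog = np.log(curr + 1e-6) - np.log(prev + 1e-6)   # log-brightness change
    ys, xs = np.nonzero(np.abs(dlog) >= C)             # pixels over threshold
    return [(int(x), int(y), t, int(np.sign(dlog[y, x])))
            for y, x in zip(ys, xs)]
```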

    Multi-Scale 3D Scene Flow from Binocular Stereo Sequences

    Scene flow methods estimate the three-dimensional motion field for points in the world, using multi-camera video data. Such methods combine multi-view reconstruction with motion estimation. This paper describes an alternative formulation for dense scene flow estimation that provides reliable results using only two cameras, by fusing stereo and optical flow estimation into a single coherent framework. Internally, the proposed algorithm generates probability distributions for optical flow and disparity. Taking into account the uncertainty in the intermediate stages allows for more reliable estimation of the 3D scene flow than previous methods. To handle the aperture problems inherent in the estimation of optical flow and disparity, a multi-scale method along with a novel region-based technique is used within a regularized solution. This combined approach both preserves discontinuities and prevents over-regularization, two problems commonly associated with basic multi-scale approaches. Experiments with synthetic and real test data demonstrate the strength of the proposed approach.
    Funding: National Science Foundation (CNS-0202067, IIS-0208876); Office of Naval Research (N00014-03-1-0108).
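    The geometry such a two-camera formulation builds on can be sketched deterministically (the paper itself works with probability distributions rather than point estimates, and all parameter names below are assumptions): once optical flow and disparity are estimated, scene flow follows by differencing triangulated 3D points.

```python
# Sketch: 3-D scene flow at one pixel from disparity, optical flow, and
# disparity change, for a rectified stereo pair. A deterministic stand-in
# for the paper's probabilistic fusion; parameter names are assumptions.
import numpy as np

def scene_flow_point(x, y, d, u, v, dd, f, B, cx, cy):
    """f: focal length (px), B: baseline (m), (cx, cy): principal point.
    Returns the 3-D motion vector (dX, dY, dZ) between frames."""
    def backproject(px, py, disp):
        Z = f * B / disp                         # depth from disparity
        return np.array([(px - cx) * Z / f,      # X
                         (py - cy) * Z / f,      # Y
                         Z])                     # Z
    return backproject(x + u, y + v, d + dd) - backproject(x, y, d)
```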

    A Primal-Dual Framework for Real-Time Dense RGB-D Scene Flow

    This paper presents the first method to compute dense scene flow in real time for RGB-D cameras. It is based on a variational formulation where brightness constancy and geometric consistency are imposed. Accounting for the depth data provided by RGB-D cameras, regularization of the flow field is imposed on the 3D surface (or set of surfaces) of the observed scene instead of on the image plane, leading to more geometrically consistent results. The minimization problem is efficiently solved by a primal-dual algorithm implemented on a GPU, achieving previously unseen temporal performance. Several tests have been conducted to compare our approach with a state-of-the-art work (RGB-D flow), with quantitative and qualitative results evaluated. Moreover, an additional set of experiments has been carried out to show the applicability of our work to estimating motion in real time. Results demonstrate the accuracy of our approach, which outperforms RGB-D flow and is able to estimate heterogeneous and non-rigid motions at a high frame rate.
    Funding: Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech; Spanish Government project DPI1011-25483; Spanish grant program FPI-MICINN 2012.
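    In its generic (Chambolle-Pock) form, a primal-dual algorithm of this kind iterates on a saddle-point problem \min_x \max_y \langle Kx, y \rangle + G(x) - F^*(y); the paper's particular splitting of data and regularization terms may differ from this schematic:

```latex
y^{n+1} = \operatorname{prox}_{\sigma F^{*}}\!\bigl( y^{n} + \sigma K \bar{x}^{n} \bigr), \qquad
x^{n+1} = \operatorname{prox}_{\tau G}\!\bigl( x^{n} - \tau K^{*} y^{n+1} \bigr), \qquad
\bar{x}^{n+1} = x^{n+1} + \theta \bigl( x^{n+1} - x^{n} \bigr).
```

    Both proximal steps decompose per pixel, which is why such schemes map well onto GPUs.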

    An optical flow based technique for the non-invasive measurement of microfluidic flows

    A new approach for estimating motion in microfluidic flows is presented. It is based on an extension of the brightness change constraint equation (BCCE) that incorporates Taylor dispersion. This extended BCCE is then used for accurately estimating fluid flows in a two-dimensional Molecular Tagging Velocimetry (2D-MTV) framework. Reference measurements were conducted to validate the accuracy and applicability of the novel technique. Given the excellent agreement between measurement and ground truth, the method was also applied to inhomogeneous flows in a mixing chamber.
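    For reference, the standard BCCE states that brightness is conserved along the flow; the extension adds a term for the apparent diffusion caused by Taylor dispersion. Schematically (the exact form used in the paper may differ, and D_eff is an assumed effective dispersion coefficient):

```latex
\underbrace{I_x u + I_y v + I_t = 0}_{\text{standard BCCE}}
\qquad\longrightarrow\qquad
I_t + u\, I_x + v\, I_y = D_{\mathrm{eff}}\, \Delta I .
```

    With the dispersion term, blurring of the tagged tracer is explained by the model rather than misinterpreted as motion.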