
    A massively parallel multi-level approach to a domain decomposition method for the optical flow estimation with varying illumination

    We consider a variational method to solve the optical flow problem with varying illumination. We apply an adaptive control of the regularization parameter, which allows us to preserve the edges and fine features of the computed flow. To reduce the complexity of the estimation for high-resolution images and the computation time, we implement a multi-level parallel approach based on domain decomposition with the overlapping Schwarz method. The second level of parallelism uses the massively parallel solver MUMPS. We perform numerical simulations to show the efficiency of our approach and validate it on classical and real-world image sequences.
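
    As an illustration of the domain-decomposition idea behind this approach, below is a minimal sketch of a damped additive overlapping Schwarz iteration on a 1D Poisson model problem. It is not the authors' optical-flow solver; the grid size, overlap width and damping factor are illustrative choices, and the paper's implementation additionally uses MUMPS as a second level of parallelism.

```python
# Minimal sketch of an overlapping (additive) Schwarz iteration on a 1D
# Poisson problem -u'' = f, u(0) = u(1) = 0.  Illustrative only: each
# subdomain solve is done serially with numpy here, whereas the paper runs
# the subdomain solves in parallel and backs them with the MUMPS solver.
import numpy as np

n = 101                       # interior grid points
h = 1.0 / (n + 1)
f = np.ones(n)                # right-hand side (hypothetical)

# Global 1D Laplacian (dense for brevity).
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

# Two overlapping subdomains.
overlap = 10
dom1 = np.arange(0, n // 2 + overlap)
dom2 = np.arange(n // 2 - overlap, n)

u = np.zeros(n)
for it in range(50):
    r = f - A @ u                        # global residual
    du = np.zeros(n)
    for dom in (dom1, dom2):             # these solves are the parallel part
        A_loc = A[np.ix_(dom, dom)]
        du[dom] += np.linalg.solve(A_loc, r[dom])
    u += 0.5 * du                        # damped additive Schwarz update

print("residual norm:", np.linalg.norm(f - A @ u))
```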

    Detection and localization of specular surfaces using image motion cues

    Successful identification of specularities in an image can be crucial for an artificial vision system when extracting the semantic content of an image or while interacting with the environment. We developed an algorithm that relies on scale- and rotation-invariant feature extraction techniques and uses motion cues to detect and localize specular surfaces. Appearance change in feature vectors is used to quantify the appearance distortion on specular surfaces, which has previously been shown to be a powerful indicator of specularity (Doerschner et al. in Curr Biol, 2011). The algorithm combines epipolar deviations (Swaminathan et al. in Lect Notes Comput Sci 2350:508-523, 2002) and appearance distortion, succeeds in localizing specular objects in computer-rendered and real scenes across a wide range of camera motions and speeds, object sizes and shapes, and performs well under image noise and blur. © 2014 Springer-Verlag Berlin Heidelberg.
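
    A rough sketch of the appearance-distortion cue is given below, assuming two grayscale frames from a moving camera: ORB features are matched across frames and keypoints whose descriptors change strongly are flagged as specularity candidates. The file names and the percentile threshold are hypothetical, and the epipolar-deviation cue that the paper combines with this one is omitted.

```python
# Rough sketch of the "appearance distortion" cue: match features across two
# frames of a moving camera and measure how much their descriptors change.
# Specular surfaces tend to produce large appearance changes.  This omits the
# epipolar-deviation cue used in the paper; file names are hypothetical.
import cv2
import numpy as np

img0 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img1 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp0, des0 = orb.detectAndCompute(img0, None)
kp1, des1 = orb.detectAndCompute(img1, None)

# Match descriptors; the Hamming distance of a matched pair quantifies the
# appearance change of that feature between the two frames.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des0, des1)

# Keypoints whose appearance changed strongly are specularity candidates.
distances = np.array([m.distance for m in matches])
threshold = np.percentile(distances, 80)          # ad-hoc threshold
candidates = [kp1[m.trainIdx].pt for m in matches if m.distance > threshold]

print(f"{len(candidates)} specular-candidate keypoints out of {len(matches)} matches")
```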

    Bioinspired symmetry detection on resource limited embedded platforms

    This work is inspired by the vision of flying insects, which enables them to detect and locate a set of relevant objects with remarkable effectiveness despite very limited brainpower. The bioinspired approach worked out here focuses on the detection of symmetric objects by resource-limited embedded platforms such as micro air vehicles. Symmetry detection is posed as a pattern-matching problem, which is solved by an approach based on composite correlation filters. Two variants of the approach are proposed, analysed and tested, in which symmetry detection is cast as (1) a static and (2) a dynamic pattern-matching problem. In the static variant, images of objects are input to two-dimensional spatial composite correlation filters. In the dynamic variant, a video (resulting from platform motion) is input to a composite correlation filter whose peak response is used to define symmetry. In both cases, a novel method is used for designing the composite filter templates for symmetry detection. This method significantly reduces the level of detail which needs to be matched to achieve good detection performance. The resulting performance is systematically quantified using ROC analysis; it is demonstrated that the bioinspired detection approach outperforms the best state-of-the-art solution available to date, at a lower computational cost.
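
    The sketch below illustrates correlation-filter-style symmetry detection in its simplest form: a patch is correlated with its mirrored copy in the Fourier domain, and the peak-to-sidelobe ratio of the response is used as a symmetry score. It is a toy stand-in, not the composite-filter design of the paper; the exclusion window and test patterns are illustrative.

```python
# Toy illustration of symmetry detection posed as correlation-filter matching:
# correlate a patch with its mirrored copy in the Fourier domain and measure
# the peak sharpness (peak-to-sidelobe ratio).  The paper instead designs
# composite correlation filter templates; this only captures the matched-filter idea.
import numpy as np


def psr(correlation):
    """Peak-to-sidelobe ratio: sharpness of the correlation peak."""
    peak = correlation.max()
    py, px = np.unravel_index(correlation.argmax(), correlation.shape)
    mask = np.ones_like(correlation, dtype=bool)
    mask[max(py - 5, 0):py + 6, max(px - 5, 0):px + 6] = False   # exclude peak area
    sidelobe = correlation[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-9)


def symmetry_score(patch):
    """Correlate the patch with its left-right mirror via the FFT."""
    patch = (patch - patch.mean()) / (patch.std() + 1e-9)
    mirrored = patch[:, ::-1]
    corr = np.real(np.fft.ifft2(np.fft.fft2(patch) * np.conj(np.fft.fft2(mirrored))))
    return psr(np.fft.fftshift(corr))


# Symmetric pattern vs. random noise (hypothetical inputs).
y, x = np.mgrid[-32:32, -32:32]
symmetric = np.exp(-(x**2 + y**2) / 200.0)
noise = np.random.rand(64, 64)
print("symmetric patch:", symmetry_score(symmetric))
print("noise patch:    ", symmetry_score(noise))
```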

    A local algorithm for the computation of image velocity via constructive interference of global Fourier components

    Get PDF
    A novel Fourier-based technique for local motion detection from image sequences is proposed. In this method, the instantaneous velocities of local image points are inferred directly from the global 3D Fourier components of the image sequence. This is done by selecting those velocities for which the superposition of the corresponding Fourier gratings leads to constructive interference at the image point. Hence, image velocities can be assigned locally even though position is computed from the phases and amplitudes of global Fourier components (spanning the whole image sequence) that have been filtered based on the motion-constraint equation, reducing certain aperture effects that typically arise from windowing in other methods. Regularization is introduced for sequences having smooth flow fields. Aperture effects and their influence on optic-flow regularization are investigated in this context. The algorithm is tested on both synthetic and real image sequences, and the results are compared to those of other local methods. Finally, we show that other motion features, such as motion direction, can be computed using the same algorithmic framework without requiring an intermediate representation of local velocity, which is an important characteristic of the proposed method.
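
    A loose sketch of the underlying idea follows: for a given pixel, candidate velocities are scored by how much energy their motion-constraint plane (w + u*kx + v*ky = 0) gathers in the 3D Fourier domain, and the superposition of the selected gratings is evaluated at that pixel. The candidate grid, plane tolerance and synthetic sequence are illustrative choices, not the paper's algorithm.

```python
# Loose sketch of the core idea: for a chosen pixel, pick the velocity whose
# motion-constraint plane (w + u*kx + v*ky = 0) in the 3D Fourier domain
# gathers the most constructively interfering energy at that pixel.
# Candidate grid, tolerance and the synthetic sequence are illustrative.
import numpy as np

T, H, W = 16, 64, 64
yy, xx = np.mgrid[0:H, 0:W]
blob = np.exp(-((xx - W // 2) ** 2 + (yy - H // 2) ** 2) / (2 * 3.0**2))
true_u, true_v = 2, -1                                # pixels per frame

# Sequence in which the blob translates (circularly) and sits at the image
# centre in the middle frame.
seq = np.stack([np.roll(blob, ((t - T // 2) * true_v, (t - T // 2) * true_u),
                        axis=(0, 1)) for t in range(T)])
seq -= seq.mean()                                     # drop the DC component

F = np.fft.fftn(seq)
w = np.fft.fftfreq(T)[:, None, None]                  # temporal frequencies
ky = np.fft.fftfreq(H)[None, :, None]
kx = np.fft.fftfreq(W)[None, None, :]

candidates = [(u, v) for u in range(-3, 4) for v in range(-3, 4)]
scores = []
for u, v in candidates:
    on_plane = np.abs(w + u * kx + v * ky) < 0.04     # components near the plane
    partial = np.fft.ifftn(F * on_plane)              # superpose those gratings
    scores.append(np.abs(partial[T // 2, H // 2, W // 2]))

print("estimated velocity:", candidates[int(np.argmax(scores))],
      "truth:", (true_u, true_v))
```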

    A Fusion Approach for Multi-Frame Optical Flow Estimation

    To date, top-performing optical flow estimation methods only take pairs of consecutive frames into account. While elegant and appealing, the idea of using more than two frames has not yet produced state-of-the-art results. We present a simple yet effective fusion approach for multi-frame optical flow that benefits from longer-term temporal cues. Our method first warps the optical flow from previous frames to the current one, thereby yielding multiple plausible estimates. It then fuses the complementary information carried by these estimates into a new optical flow field. At the time of writing, our method ranks first among published results on the MPI Sintel and KITTI 2015 benchmarks. Our models will be available at https://github.com/NVlabs/PWC-Net. Comment: Work accepted at the IEEE Winter Conference on Applications of Computer Vision (WACV 2019).
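
    A minimal stand-in for the fusion step is sketched below: the previous frame's flow is carried to the current frame under a constant-velocity assumption to give a second hypothesis, and the two hypotheses are fused per pixel by brightness-constancy error. The paper instead fuses warped network flows with a learned fusion module; the function names and the photometric fusion rule here are assumptions made for illustration.

```python
# Minimal stand-in for the fusion step.  flow_prev maps frame t-1 -> t,
# flow_cur is the current two-frame estimate t -> t+1; both are (H, W, 2)
# float arrays, frames are grayscale (H, W).  All names are illustrative.
import cv2
import numpy as np


def warp(img, flow):
    """Backward-warp: out(x) = img(x + flow(x))."""
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(img.astype(np.float32), map_x, map_y, cv2.INTER_LINEAR)


def fuse_flows(frame_t, frame_t1, flow_cur, flow_prev):
    """Fuse flow_cur with a second hypothesis carried over from flow_prev."""
    # Hypothesis 2: look up the previous flow at the position this pixel had
    # in frame t-1 (constant-velocity approximation of the backward flow).
    flow_carried = warp(flow_prev, -flow_cur)

    # Brightness-constancy error of each hypothesis against frame t+1.
    err_cur = np.abs(frame_t.astype(np.float32) - warp(frame_t1, flow_cur))
    err_carried = np.abs(frame_t.astype(np.float32) - warp(frame_t1, flow_carried))

    # Per-pixel winner-takes-all fusion (the paper learns this step instead).
    keep_carried = (err_carried < err_cur)[..., None]
    return np.where(keep_carried, flow_carried, flow_cur)
```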