2,344 research outputs found

    Detection of dirt impairments from archived film sequences: survey and evaluations

    Film dirt is the most commonly encountered artifact in archive restoration applications. Since dirt usually appears as a temporally impulsive event, motion-compensated interframe processing is widely applied for its detection. However, motion-compensated prediction is computationally complex and can be unreliable when motion estimation fails. Consequently, many techniques using spatial or spatiotemporal filtering without motion compensation have also been proposed as alternatives. A comprehensive survey and evaluation of existing methods is presented, in which both qualitative and quantitative performance is compared in terms of accuracy, robustness, and complexity. After analyzing these algorithms and identifying their limitations, we conclude with guidance on choosing among these algorithms and promising directions for future research.
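The "temporally impulsive event" idea above can be sketched without motion compensation: a dirt blotch exists in one frame only, so an affected pixel deviates from *both* its temporal neighbours in the same direction. A minimal sketch of such a spike-detector rule (the threshold value is an assumption for illustration, not taken from any surveyed method):

```python
import numpy as np

def detect_dirt(prev, curr, nxt, threshold=30.0):
    """Flag temporally impulsive pixels in `curr`.

    A dirt blotch appears in a single frame, so a dirty pixel differs
    strongly from BOTH the previous and next frames, with the same sign.
    All arguments are 2-D grayscale frames of equal shape; returns a
    boolean mask of suspected dirt pixels.
    """
    d_prev = curr.astype(float) - prev.astype(float)
    d_next = curr.astype(float) - nxt.astype(float)
    same_sign = np.sign(d_prev) == np.sign(d_next)          # spike, not motion edge
    strong = np.minimum(np.abs(d_prev), np.abs(d_next)) > threshold
    return same_sign & strong
```

On static content this rule is reliable; the survey's point is precisely that, without motion compensation, moving objects can also trigger it, which is why the trade-off between complexity and robustness matters.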

    Learning to Extract Motion from Videos in Convolutional Neural Networks

    This paper shows how to extract dense optical flow from videos with a convolutional neural network (CNN). The proposed model constitutes a potential building block for deeper architectures that use motion without resorting to an external algorithm, e.g., for recognition in videos. We derive our network architecture from signal processing principles to provide the desired invariances to image contrast, phase and texture. We constrain weights within the network to enforce strict rotation invariance and substantially reduce the number of parameters to learn. We demonstrate end-to-end training on only 8 sequences of the Middlebury dataset, orders of magnitude less data than competing CNN-based motion estimation methods use, and obtain performance comparable to classical methods on the Middlebury benchmark. Importantly, our method outputs a distributed representation of motion that can represent multiple, transparent motions and dynamic textures. Our contributions on network design and rotation invariance offer insights not specific to motion estimation.
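The contrast invariance the network is designed for can be illustrated with a classical building block rather than the paper's CNN: normalized cross-correlation matching, which is unaffected by affine intensity changes because each patch is zero-meaned and scaled to unit norm. A minimal sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def best_match(template, image):
    """Locate `template` in `image` by normalized cross-correlation.

    Zero-meaning and unit-normalizing each window makes the score
    invariant to brightness offset and contrast scaling, mirroring the
    invariances the CNN above is constrained to have.
    Returns ((row, col), score) of the best-matching window.
    """
    th, tw = template.shape
    t = template.astype(float) - template.mean()
    t /= (np.linalg.norm(t) + 1e-9)
    best, pos = -np.inf, (0, 0)
    H, W = image.shape
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            w = image[y:y + th, x:x + tw].astype(float)
            w = w - w.mean()
            score = float((t * w).sum() / (np.linalg.norm(w) + 1e-9))
            if score > best:
                best, pos = score, (y, x)
    return pos, best
```

Applied per pixel between consecutive frames, this yields a (slow, dense) displacement field; the paper's contribution is to obtain such invariances inside a learnable, end-to-end network instead.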

    Tomographic Study of Internal Erosion of Particle Flows in Porous Media

    In particle-laden flows through porous media, porosity and permeability are significantly affected by the deposition and erosion of particles. Experiments show that the permeability evolution of a porous medium exposed to a particle suspension is not smooth, but rather exhibits significant jumps followed by longer periods of continuous permeability decrease. Their origin appears to be related to internal flow-path reorganization by avalanches of deposited material eroded inside the porous medium. We apply neutron tomography to resolve the spatio-temporal evolution of the pore space during clogging and unclogging, confirming the hypothesis that flow-path reorganization underlies the permeability jumps. This mechanistic understanding of clogging phenomena is relevant for a number of applications, from oil production to filtration, and to suffosion, the mechanism behind sinkhole formation.

    Bio-Inspired Computer Vision: Towards a Synergistic Approach of Artificial and Biological Vision

    To appear in CVIU.
    Studies in biological vision have always been a great source of inspiration for the design of computer vision algorithms. In the past, several successful methods were designed with varying degrees of correspondence with biological vision studies, ranging from purely functional inspiration to methods that utilise models primarily developed to explain biological observations. Even though it is well recognised that computational models of biological vision can help in the design of computer vision algorithms, it is a non-trivial exercise for a computer vision researcher to mine relevant information from the biological vision literature, as very few studies in biology are organised at a task level. In this paper we aim to bridge this gap by providing a computer-vision-task-centric presentation of models primarily originating in biological vision studies. Not only do we revisit some of the main features of biological vision and discuss the foundations of existing computational studies modelling biological vision, but we also consider three classical computer vision tasks from a biological perspective: image sensing, segmentation and optical flow. Using this task-centric approach, we discuss well-known biological functional principles and compare them with approaches taken by computer vision. Based on this comparative analysis, we present some recent models in biological vision and highlight a few that we consider promising for future investigations in computer vision. To this end, this paper provides new insights and a starting point for investigators interested in the design of biology-based computer vision algorithms, and paves the way for much-needed interaction between the two communities, leading to the development of synergistic models of artificial and biological vision.
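A canonical example of the biology-to-computer-vision transfer discussed above is the difference-of-Gaussians (DoG) model of retinal centre-surround receptive fields, widely reused in image sensing pipelines. A minimal sketch (kernel sizes and sigmas are illustrative choices):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 2-D Gaussian kernel of shape (size, size)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def dog_kernel(size=9, sigma_center=1.0, sigma_surround=2.0):
    """Difference-of-Gaussians: the classic model of retinal
    centre-surround cells. An excitatory centre minus an inhibitory
    surround responds to local contrast and (because the kernel sums
    to ~0) gives no response to uniform illumination.
    """
    return gaussian_kernel(size, sigma_center) - gaussian_kernel(size, sigma_surround)
```

Convolving an image with this kernel implements the kind of luminance-discounting, edge-enhancing "sensing" stage that the paper traces from retina models into computer vision front-ends.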

    A computational analysis of separating motion signals in transparent random dot kinematograms

    When multiple motion directions are presented simultaneously within the same region of the visual field, human observers perceive motion transparency. This phenomenon requires the visual system to separate different motion signal distributions, which are characterised by distinct means, corresponding to the different dot directions, and by variances determined by the signal and processing noise. Averaging of local motion signals can be employed to reduce noise components, but such pooling could at the same time average together different directional signal components arising from spatially adjacent dots moving in different directions, which would reduce the visibility of transparent directions. To study the theoretical limitations of encoding transparent motion with a biologically plausible motion detector network, we analysed the distributions of motion directions signalled by a motion detector model (2DMD) for random dot kinematograms (RDKs). In sparse-dot RDKs with two randomly interleaved motion directions, the angular separation at which two directions can still be separated is limited by the internal noise in the system; under the present conditions, direction differences down to 30 deg could be separated. Correspondingly, in a transparent motion stimulus containing multiple motion directions, more than eight directions could be separated. When this computational analysis is compared with published psychophysical data, it appears that the experimental results do not reach the predicted limits. Whereas the computer simulations demonstrate that even an unsophisticated motion detector network can represent a considerable number of motion directions simultaneously within the same region, human observers are usually restricted to seeing no more than two or three directions under comparable conditions. This raises the question of why human observers do not make full use of information that could easily be extracted from the representation of motion signals at the early stages of the visual system.
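The noise-limited separability argument can be reproduced with a toy stand-in for the 2DMD model: treat each dot population as contributing direction estimates corrupted by Gaussian noise, pool them into one direction histogram, and ask whether the histogram stays bimodal. The 5-deg noise value below is an assumption for illustration, not the model's fitted parameter:

```python
import numpy as np

def resolvable(separation_deg, noise_deg=5.0, n=20000, seed=0):
    """Toy separability test for two transparent motion directions.

    Two populations of local direction signals (means 0 and
    `separation_deg`, Gaussian direction noise `noise_deg`) are pooled
    into one histogram. The two directions count as resolvable when the
    trough midway between the modes falls well below both mode counts.
    """
    rng = np.random.default_rng(seed)
    a = rng.normal(0.0, noise_deg, n)
    b = rng.normal(separation_deg, noise_deg, n)
    hist, edges = np.histogram(np.concatenate([a, b]),
                               bins=90, range=(-45.0, 90.0))
    centers = 0.5 * (edges[:-1] + edges[1:])
    count_at = lambda d: hist[np.argmin(np.abs(centers - d))]
    trough = count_at(separation_deg / 2.0)
    return trough < 0.5 * min(count_at(0.0), count_at(separation_deg))
```

With 5 deg of direction noise this toy version separates a 30-deg difference but not a 10-deg one, qualitatively matching the limit reported above; the paper's point is that human observers fall well short of even this simple bound.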

    Rain Removal in Traffic Surveillance: Does it Matter?

    Varying weather conditions, including rainfall and snowfall, are generally regarded as a challenge for computer vision algorithms. One proposed solution to the challenges induced by rain and snowfall is to artificially remove the rain from images or video using rain removal algorithms. The promise of these algorithms is that rain-removed image frames will improve the performance of subsequent segmentation and tracking algorithms. However, rain removal algorithms are typically evaluated only on their ability to remove synthetic rain from a small subset of images, and their behaviour on real-world videos, integrated with a typical computer vision pipeline, is unknown. In this paper, we review the existing rain removal algorithms and propose a new dataset of 22 traffic surveillance sequences under a broad variety of weather conditions that all include either rain or snowfall. We propose a new evaluation protocol that evaluates rain removal algorithms on their ability to improve the performance of subsequent segmentation, instance segmentation, and feature tracking algorithms under rain and snow. If successful, the de-rained frames of a rain removal algorithm should improve segmentation performance and increase the number of accurately tracked features. The results show that a recent single-frame-based rain removal algorithm increases segmentation performance by 19.7% on our proposed dataset, but decreases feature tracking performance and shows mixed results with recent instance segmentation methods. The best video-based rain removal algorithm, however, improves feature tracking accuracy by 7.72%. (Published in IEEE Transactions on Intelligent Transportation Systems.)
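The core of such a downstream-task protocol is simple to state in code: run the same segmenter on rainy and de-rained frames and report the relative change in overlap with ground truth. A minimal sketch of the comparison (the function names are illustrative; the paper's actual protocol covers several metrics and trackers):

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-union of two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: trivially perfect agreement
    return np.logical_and(pred, gt).sum() / union

def derain_gain(iou_rainy, iou_derained):
    """Relative segmentation improvement (%) attributable to de-raining:
    positive means the rain removal step helped the downstream task."""
    return 100.0 * (iou_derained - iou_rainy) / iou_rainy
```

Evaluating rain removal this way, rather than by pixel fidelity against synthetic rain, is exactly what lets the paper expose cases where de-raining helps segmentation yet hurts feature tracking.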