    Point-wise mutual information-based video segmentation with high temporal consistency

    In this paper, we tackle the problem of temporally consistent boundary detection and hierarchical segmentation in videos. While finding the best high-level reasoning of region assignments in videos is the focus of much recent research, temporal consistency in boundary detection has so far only rarely been tackled. We argue that temporally consistent boundaries are a key component of temporally consistent region assignment. The proposed method is based on the point-wise mutual information (PMI) of spatio-temporal voxels. Temporal consistency is established by an evaluation of PMI-based point affinities in the spectral domain over space and time. Thus, the proposed method is independent of any optical flow computation or previously learned motion models. The proposed low-level video segmentation method outperforms the learning-based state of the art in terms of standard region metrics.
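
    To make the affinity construction concrete, here is a minimal Python sketch of point-wise mutual information computed from co-occurrence statistics of quantised voxel features. The histogram input, the exponent rho, and the function name are assumptions for illustration and do not reproduce the paper's exact estimator.

        import numpy as np

        def pmi_affinity(joint_counts, rho=1.0, eps=1e-12):
            # joint_counts[i, j]: how often feature bins i and j co-occur in
            # nearby spatio-temporal voxel pairs (hypothetical input format).
            p_joint = joint_counts / joint_counts.sum()
            p_i = p_joint.sum(axis=1, keepdims=True)  # marginal of the first voxel
            p_j = p_joint.sum(axis=0, keepdims=True)  # marginal of the second voxel
            # PMI_rho(i, j) = log( P(i, j)**rho / (P(i) * P(j)) ): large values mean
            # the two features co-occur more often than chance, i.e. no boundary.
            return rho * np.log(p_joint + eps) - np.log(p_i * p_j + eps)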

    Local picture-repetition mode detector for video de-interlacing

    The de-interlacing of video material converted from film can be perfect, provided it is possible to recognize the field pairs that originate from the same film image. Various so-called film detectors have been proposed for this purpose, mainly in the patent literature. Typically, these detectors fail in cases where video overlays are merged with film material, or when nonstandard repetition patterns are used. Both problems occur frequently in television broadcasts. For these hybrid and/or irregular cases, we propose a detector that can detect different picture-repetition patterns locally in the image. This detector combines fuzzy logic rules and spatio-temporal prediction to arrive at a highly robust decision signal, suitable for pixel-accurate de-interlacing of hybrid and irregular video material. In addition to an evaluation of the performance, the paper also provides a complexity analysis.
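
    The abstract does not give the rule base, but the flavour of a local fuzzy repetition test can be sketched as follows; the function name, block size, and thresholds are hypothetical, and the real detector additionally combines several such memberships with spatio-temporal prediction.

        import numpy as np

        def repetition_membership(curr_field, prev_field, block=8, t_low=2.0, t_high=10.0):
            # Hypothetical illustration: per-block mean absolute difference between
            # two same-parity fields, mapped to a fuzzy "difference is small"
            # membership (1.0 = field pair looks repeated, 0.0 = clearly new content).
            diff = np.abs(curr_field.astype(np.float32) - prev_field.astype(np.float32))
            h, w = diff.shape
            h, w = h - h % block, w - w % block
            mad = diff[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))
            # Piecewise-linear membership function between the two thresholds.
            return np.clip((t_high - mad) / (t_high - t_low), 0.0, 1.0)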

    XFVHDL4: A hardware synthesis tool for fuzzy systems

    This paper presents a design technique that allows the automatic synthesis of fuzzy inference systems and accelerates the exploration of their design space. It is based on generic VHDL code generation, which can be implemented on a programmable device (FPGA) or an application-specific integrated circuit (ASIC). The set of CAD tools supporting this technique includes a specific environment for designing fuzzy systems, in combination with commercial VHDL simulation and synthesis tools. As demonstrated by the analyzed design examples, the described development strategy speeds up the stages of description, synthesis, and functional verification of fuzzy inference systems. Funding: Comunidad Europea FP7-IST-248858; Ministerio de Ciencia e Innovación TEC2008-04920; Junta de Andalucía P08-TIC-0367.
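
    As a toy illustration of the underlying idea, template-driven generation of generic VHDL from a few high-level parameters, the sketch below emits a skeleton entity declaration; the template, entity name, and parameters are invented for illustration and are unrelated to the actual XFVHDL4 templates.

        ENTITY_TEMPLATE = """\
        entity {name} is
          generic (N : integer := {bit_width});
          port (
            x0, x1 : in  std_logic_vector(N-1 downto 0);
            y      : out std_logic_vector(N-1 downto 0)
          );
        end entity {name};
        """

        def generate_entity(name="fuzzy_controller", bit_width=8):
            # Fill the VHDL template from high-level parameters (toy example only).
            return ENTITY_TEMPLATE.format(name=name, bit_width=bit_width)

        print(generate_entity())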

    Rapid prototyping of video processing systems based on the Xilinx VFBC

    This paper develops hardware modules for the rapid prototyping of video processing systems based on the Xilinx Video Frame Buffer Controller (VFBC). The implementation allows video frames to be stored in memory external to the programmable device and handled correctly when designing spatio-temporal processing systems with the Xilinx System Generator model-based design flow. The hardware modules are responsible for configuring and controlling the VFBC write and read interfaces, as well as for handling the video synchronization signals used to interconnect input and output peripherals. The article also describes the developed modules and analyzes the results of using them to build a temporal video processing demonstrator, based on a simple motion detector, on a Spartan-6 SP605 Evaluation Platform board. Funding: Agencia Española de Cooperación Internacional para el Desarrollo (AECID) PCI D/024124/09, PCI D/030769/10, PCI A1/039607/1.
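
    A behavioural software model of the temporal-processing demonstrator (frame differencing against a stored previous frame, the role played by the external frame buffer) might look like the sketch below; the function name and threshold are assumptions, and nothing here reflects the actual VFBC interface.

        import numpy as np

        def motion_detect(curr_frame, prev_frame, threshold=20):
            # Software model of a simple temporal motion detector: compare the
            # incoming 8-bit frame with the previously buffered frame.
            diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
            return (diff > threshold).astype(np.uint8) * 255  # binary motion mask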

    Dense Motion Estimation for Smoke

    Motion estimation for highly dynamic phenomena such as smoke is an open challenge for computer vision. Traditional dense motion estimation algorithms have difficulties with non-rigid and large motions, both of which are frequently observed in smoke motion. We propose an algorithm for dense motion estimation of smoke. Our algorithm is robust, fast, and performs better across different types of smoke than other dense motion estimation algorithms, including state-of-the-art and neural network approaches. The key to our contribution is the use of skeletal flow, without explicit point matching, to provide a sparse flow. This sparse flow is then upgraded to a dense flow. In this paper we describe our algorithm in greater detail and provide experimental evidence to support our claims.
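
    The sparse-to-dense step can be approximated with off-the-shelf scattered-data interpolation. The sketch below, with hypothetical variable names, linearly interpolates flow vectors given at skeleton sample points; it is only a stand-in for the paper's actual upgrade procedure.

        import numpy as np
        from scipy.interpolate import griddata

        def densify_flow(skel_points, skel_flow, height, width):
            # skel_points: (N, 2) array of (y, x) skeleton sample locations.
            # skel_flow:   (N, 2) array of flow vectors measured at those points.
            ys, xs = np.mgrid[0:height, 0:width]
            dense = np.empty((height, width, 2), dtype=np.float32)
            for c in range(2):  # interpolate u and v components separately
                lin = griddata(skel_points, skel_flow[:, c], (ys, xs), method="linear")
                near = griddata(skel_points, skel_flow[:, c], (ys, xs), method="nearest")
                dense[..., c] = np.where(np.isnan(lin), near, lin)  # fill outside hull
            return dense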