
    A Dynamic Programming Solution to Bounded Dejittering Problems

    We propose a dynamic programming solution to image dejittering problems with bounded displacements and obtain efficient algorithms for the removal of line jitter, line pixel jitter, and pixel jitter.
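
    The abstract describes the approach only at a high level. Purely to illustrate the general idea of choosing one bounded integer shift per scan line by dynamic programming over a row-to-row matching cost, here is a minimal NumPy sketch; the function name, the L1 data cost, and the smoothness weight are assumptions for illustration, not the authors' formulation.

    import numpy as np

    def dejitter_line_dp(img, max_disp=5, smooth=0.1):
        """Estimate one integer horizontal shift per row with a Viterbi-style DP
        over displacement states in [-max_disp, max_disp], then undo the shifts.
        Only relative shifts are recovered; any global offset stays unresolved."""
        img = np.asarray(img, dtype=float)
        H, W = img.shape
        states = np.arange(-max_disp, max_disp + 1)
        S = len(states)

        def pair_cost(r, d_prev, d_cur):
            # L1 mismatch between row r-1 shifted by d_prev and row r shifted by
            # d_cur, evaluated away from the wrapped-around borders.
            a = np.roll(img[r - 1], d_prev)
            b = np.roll(img[r], d_cur)
            return np.abs(a[max_disp:W - max_disp] - b[max_disp:W - max_disp]).mean()

        cost = np.zeros((H, S))              # accumulated path cost per state
        back = np.zeros((H, S), dtype=int)   # best predecessor state per row
        for r in range(1, H):
            for j, dj in enumerate(states):
                trans = [cost[r - 1, i] + pair_cost(r, di, dj) + smooth * abs(di - dj)
                         for i, di in enumerate(states)]
                back[r, j] = int(np.argmin(trans))
                cost[r, j] = trans[back[r, j]]

        # Backtrack the minimum-cost displacement sequence and realign the rows.
        shifts = np.empty(H, dtype=int)
        j = int(np.argmin(cost[H - 1]))
        shifts[H - 1] = states[j]
        for r in range(H - 1, 0, -1):
            j = back[r, j]
            shifts[r - 1] = states[j]
        restored = np.stack([np.roll(img[r], -shifts[r]) for r in range(H)])
        return restored, shifts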

    Adaptive deinterlacing of video sequences using motion data

    In this work, an efficient motion-adaptive deinterlacing method with considerable improvement in picture quality is proposed. A temporal deinterlacing method performs well in static regions, while a spatial method performs better in dynamic parts. In the proposed method, a motion-adaptive interpolator combines the results of a spatial method and a temporal method based on the motion activity level of the video sequence. A high-performance, low-complexity motion detection algorithm is introduced. This algorithm uses five consecutive interlaced video fields and is able to capture a wide range of motion, from slow to fast. It benefits from a hierarchical structure: it starts by detecting motion in large partitions of a given field and, depending on the motion activity level detected for a partition, may recursively be applied to its sub-blocks. Two different low-pass filters are used during motion detection to increase the algorithm's accuracy. The result of motion detection is then used in the proposed motion-adaptive interpolator. The performance of the proposed deinterlacing algorithm is compared to previous methods in the literature. In experiments on several standard video sequences, the proposed method shows excellent motion detection and deinterlacing performance.
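
    The abstract specifies the overall design, a spatial and a temporal interpolator blended according to a motion measure, but not the exact filters or thresholds. Purely as a rough sketch of that blending idea, the code below fills the missing lines of one field from its two neighbouring fields; it omits the hierarchical five-field motion detector and the low-pass filters described above, and the function name and threshold are illustrative assumptions.

    import numpy as np

    def deinterlace_motion_adaptive(prev_f, cur_f, next_f, parity=0, thresh=12.0):
        """Fill the missing rows of the current field by blending temporal and
        spatial interpolation according to a per-pixel motion measure.

        prev_f, cur_f, next_f: full-height (H, W) arrays; cur_f carries valid
        lines on rows with index % 2 == parity, prev_f and next_f on the others.
        """
        prev_f = np.asarray(prev_f, dtype=float)
        next_f = np.asarray(next_f, dtype=float)
        out = np.asarray(cur_f, dtype=float).copy()
        H, _ = out.shape
        for r in (r for r in range(H) if r % 2 != parity):
            above = out[r - 1] if r > 0 else out[r + 1]
            below = out[r + 1] if r < H - 1 else out[r - 1]
            spatial = 0.5 * (above + below)             # intra-field line average
            temporal = 0.5 * (prev_f[r] + next_f[r])    # inter-field average
            motion = np.abs(prev_f[r] - next_f[r])      # same-parity field difference
            alpha = np.clip(motion / thresh, 0.0, 1.0)  # 0 = static, 1 = moving
            out[r] = alpha * spatial + (1.0 - alpha) * temporal
        return out

    In static regions the motion measure stays near zero and the temporal (inter-field) estimate dominates, while in moving regions the spatial estimate takes over, mirroring the motivation given in the abstract.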

    08291 Abstracts Collection -- Statistical and Geometrical Approaches to Visual Motion Analysis

    From 13.07.2008 to 18.07.2008, the Dagstuhl Seminar 08291 "Statistical and Geometrical Approaches to Visual Motion Analysis" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general.

    A variational method for dejittering large fluorescence line scanner images

    We propose a variational method dedicated to jitter correction of large fluorescence scanner images. Our method consists of minimizing a global energy functional to estimate a dense displacement field representing the spatially varying jitter. The computational approach is based on a half-quadratic splitting of the energy functional, which decouples the realignment data term and the dedicated differential-based regularizer. The resulting problem amounts to alternately solving a convex and a nonconvex optimization subproblem with appropriate algorithms. Experimental results on artificial and large real fluorescence images demonstrate that our method not only handles large displacements but also achieves subpixel precision without introducing additional intensity artifacts.
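
    The abstract states the optimization structure (half-quadratic splitting with alternating subproblems) but not the exact energy. Purely as an illustration of that splitting pattern, the generic sketch below alternates two proximal-style updates while the coupling weight is increased; prox_data and prox_reg are placeholders standing in for the paper's realignment data-term and regularizer subproblems, not its actual solvers.

    import numpy as np

    def half_quadratic_splitting(x0, prox_data, prox_reg, beta=1.0, beta_max=1e3,
                                 beta_scale=2.0, inner_iters=1):
        """Generic half-quadratic splitting loop for  min_x  D(x) + R(x).

        An auxiliary variable z decouples the two terms; the relaxed problem
            min_{x,z}  D(x) + (beta/2)*||x - z||^2 + R(z)
        is solved by alternating the two subproblems while beta grows, so that
        x and z are progressively forced to agree.
        """
        x = np.array(x0, dtype=float, copy=True)
        z = x.copy()
        while beta < beta_max:
            for _ in range(inner_iters):
                x = prox_data(z, beta)   # argmin_x D(x) + (beta/2)*||x - z||^2
                z = prox_reg(x, beta)    # argmin_z R(z) + (beta/2)*||x - z||^2
            beta *= beta_scale
        return x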

    DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs

    In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
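
    The paper's released implementation is not reproduced here. As a hedged PyTorch sketch of how atrous (dilated) convolution realizes the multi-rate sampling described above, the module below builds a simplified ASPP head; the dilation rates follow the ASPP-L setting reported in the paper, while the single-convolution branches and sum fusion are simplifications rather than the published architecture.

    import torch
    import torch.nn as nn

    class ASPP(nn.Module):
        """Simplified atrous spatial pyramid pooling: parallel 3x3 convolutions
        with different dilation (atrous) rates probe the same feature map at
        several effective fields of view; their score maps are summed."""

        def __init__(self, in_ch, out_ch, rates=(6, 12, 18, 24)):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
                for r in rates
            ])

        def forward(self, x):
            # padding == dilation keeps every branch at the input resolution,
            # so the per-rate outputs can be fused by summation.
            return torch.stack([branch(x) for branch in self.branches]).sum(dim=0)

    For example, ASPP(2048, 21) would map a ResNet-style 2048-channel feature map to 21 class score maps (the PASCAL VOC class count); both numbers are illustrative assumptions here, not the published configuration.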