
    Strong edge features for image coding

    A two-component model is proposed for perceptual image coding. For the first component of the model, the watershed operator is used to detect strong edge features. An efficient morphological interpolation algorithm then reconstructs the smooth areas of the image from the extracted edge information, also known as sketch data. The residual component, containing fine textures, is coded separately by a subband coding scheme. The morphological operators involved in coding the primary component perform very efficiently compared with conventional techniques such as the LoG operator used for edge extraction, or the diffusion filters applied iteratively to interpolate smooth areas in previously reported sketch-based coding schemes.
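    The two-component split described above can be illustrated with a toy sketch. The paper's watershed edge detection and morphological interpolation are replaced here by much simpler stand-ins: a plain gradient threshold for the strong-edge "sketch data", and an iterated-averaging (diffusion) interpolation of the kind the abstract cites as the conventional alternative. The function name and all parameters are invented for illustration.

    ```python
    import numpy as np
    from scipy.ndimage import sobel, uniform_filter

    def two_component_split(img, edge_frac=0.1, iters=200):
        """Toy two-component decomposition in the spirit of the abstract.

        Keep intensities along strong edges ('sketch data'), rebuild the
        smooth primary component by interpolating from those samples, and
        return the residual (fine textures) for separate coding.
        """
        # Strong edges via a simple gradient-magnitude threshold
        # (a stand-in for the paper's watershed operator).
        gx, gy = sobel(img, 0), sobel(img, 1)
        grad = np.hypot(gx, gy)
        mask = grad >= np.quantile(grad, 1.0 - edge_frac)

        # Diffusion-style interpolation: repeatedly smooth while pinning
        # the retained edge samples (a stand-in for the paper's
        # morphological interpolation).
        smooth = np.where(mask, img, img.mean())
        for _ in range(iters):
            smooth = uniform_filter(smooth, 3)
            smooth[mask] = img[mask]

        residual = img - smooth   # textures, coded separately in the paper
        return smooth, residual, mask
    ```

    By construction the two components sum back to the input, so the split is lossless before the components are quantised by their respective coders.
    
    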

    Image interpolation using Shearlet based iterative refinement

    This paper proposes an image interpolation algorithm exploiting sparse representation of natural images. It involves three main steps: (a) obtaining an initial estimate of the high-resolution image using linear methods such as FIR filtering, (b) promoting sparsity in a selected dictionary through iterative thresholding, and (c) extracting high-frequency information from the approximation to refine the initial estimate. For the sparse modelling, a shearlet dictionary is chosen to yield a multiscale directional representation. The proposed algorithm is compared to several state-of-the-art methods to assess its objective as well as subjective performance. Compared to cubic spline interpolation, an average PSNR gain of around 0.8 dB is observed over a dataset of 200 images.
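    The three steps above can be sketched as an iterative-thresholding loop. A 2-D DCT stands in for the shearlet dictionary (a real shearlet transform would need a dedicated library), and a data-consistency step pins the pixels that correspond to the observed low-resolution samples. The function name, threshold, and iteration count are assumptions, not the paper's settings.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn
    from scipy.ndimage import zoom

    def interpolate_sparse(lowres, factor=2, iters=20, thresh=0.05):
        """Upscale `lowres` by `factor` via iterative thresholding.

        Structure follows the abstract's three steps:
        (a) linear initial estimate, (b) sparsify in a transform domain,
        (c) refine while staying consistent with the observed samples.
        A DCT replaces the paper's shearlet dictionary in this sketch.
        """
        # (a) initial estimate by linear upsampling
        est = zoom(lowres, factor, order=1)
        for _ in range(iters):
            # (b) promote sparsity: hard-threshold small coefficients
            coeffs = dctn(est, norm='ortho')
            coeffs[np.abs(coeffs) < thresh] = 0.0
            est = idctn(coeffs, norm='ortho')
            # (c) data consistency: pin the observed low-res pixels
            est[::factor, ::factor] = lowres
        return est
    ```

    The final consistency step guarantees the output agrees exactly with the low-resolution input on the sampling grid, whatever dictionary is used.
    
    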

    XAFS spectroscopy. I. Extracting the fine structure from the absorption spectra

    Three independent techniques are used to extract the fine structure from absorption spectra, in which the background function is approximated by (i) a smoothing spline, for which we propose a new reliable criterion for determining the smoothing parameter and a method for improving stability with respect to variation of k_min; (ii) an interpolation spline with varied knots; (iii) the line obtained from Bayesian smoothing. The Bayesian method takes various prior information into account and includes a natural way to determine the errors of XAFS extraction. Particular attention has been given to the estimation of uncertainties in XAFS data. Experimental noise is shown to be essentially smaller than the errors of the background approximation, and it is the latter that determines the variances of structural parameters in subsequent fitting.
    Comment: 16 pages, 7 figures; for a freeware XAFS analysis program, see http://www.crosswinds.net/~klmn/viper.htm
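    Technique (i) above, background removal by smoothing spline, can be sketched in a few lines: a spline approximates the slowly varying atomic background mu0(E), and the normalised residual is the fine structure. The paper's criterion for choosing the smoothing parameter is not reproduced here; this sketch simply exposes it as a knob, and the function name is invented.

    ```python
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    def extract_fine_structure(energy, mu, smooth=None):
        """Separate the XAFS fine structure from an absorption spectrum.

        A smoothing spline (technique (i) of the abstract) approximates
        the smooth background mu0(E); the normalised residual
        chi = (mu - mu0) / mu0 is the extracted fine structure.
        `smooth` is the spline smoothing parameter, chosen here by hand
        rather than by the paper's criterion.
        """
        spline = UnivariateSpline(energy, mu, s=smooth)
        mu0 = spline(energy)
        chi = (mu - mu0) / mu0   # oscillatory part, normalised to mu0
        return chi, mu0
    ```

    With the smoothing parameter too small the spline chases the oscillations themselves (killing the signal), which is exactly why the choice of this parameter gets its own criterion in the paper.
    
    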

    Automated Markerless Extraction of Walking People Using Deformable Contour Models

    We develop a new automated markerless motion capture system for the analysis of walking people. We employ global evidence-gathering techniques guided by biomechanical analysis to robustly extract articulated motion. This forms a basis for new deformable contour models that use local image cues to capture shape and motion at a more detailed level. We extend the greedy snake formulation to include temporal constraints and occlusion modelling, increasing the capability of this technique when dealing with cluttered and self-occluding extraction targets. The approach is evaluated on a large database of indoor and outdoor video data, demonstrating fast and autonomous motion capture for walking people.
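    The greedy snake formulation that the abstract extends can be sketched in its basic form: each contour point moves, in turn, to the neighbouring pixel that minimises a weighted sum of continuity, curvature, and an external image energy. The paper's temporal constraints and occlusion modelling are omitted from this sketch, and the function name, weights, and neighbourhood are assumptions.

    ```python
    import numpy as np

    def greedy_snake(ext, pts, alpha=1.0, beta=1.0, gamma=1.0, iters=50):
        """Minimal greedy snake: `ext` is an external energy image
        (lower = more attractive, e.g. negative edge strength) and
        `pts` is a closed contour as an (n, 2) array of (row, col).
        Each sweep moves every point to the 8-neighbour position with
        the lowest continuity + curvature + external energy.
        """
        h, w = ext.shape
        pts = pts.astype(int)
        offs = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        for _ in range(iters):
            # mean spacing, so the continuity term favours even spacing
            mean_d = np.mean(np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1))
            for i in range(len(pts)):
                prev, nxt = pts[i - 1], pts[(i + 1) % len(pts)]
                best, best_e = pts[i].copy(), np.inf
                for dy, dx in offs:
                    cand = pts[i] + np.array([dy, dx])
                    if not (0 <= cand[0] < h and 0 <= cand[1] < w):
                        continue
                    cont = (np.linalg.norm(cand - prev) - mean_d) ** 2
                    curv = float(np.sum((prev - 2 * cand + nxt) ** 2))
                    e = alpha * cont + beta * curv + gamma * ext[cand[0], cand[1]]
                    if e < best_e:
                        best_e, best = e, cand
                pts[i] = best
        return pts
    ```

    The paper's extension adds terms to this per-point energy so that a point's move is also scored against the contour's position in neighbouring frames and against an occlusion model, which is what makes the greedy scheme usable on cluttered, self-occluding walkers.
    
    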