6,824 research outputs found

    Improvements on "Fast space-variant elliptical filtering using box splines"

    It is well known that box filters can be computed efficiently using pre-integration and local finite differences [Crow1984, Heckbert1986, Viola2001]. By generalizing this idea and combining it with a non-standard variant of the Central Limit Theorem, a constant-time, or O(1), algorithm was proposed in [Chaudhury2010] that allows one to perform space-variant filtering using Gaussian-like kernels. The algorithm was based on the observation that both isotropic and anisotropic Gaussians can be approximated using certain bivariate splines called box splines. The attractive feature of the algorithm was that it allowed one to continuously control the shape and size (covariance) of the filter, and that it had a fixed computational cost per pixel, irrespective of the size of the filter. The algorithm, however, offered only limited control over the covariance and accuracy of the Gaussian approximation. In this work, we propose some improvements by appropriately modifying the algorithm in [Chaudhury2010]. Comment: 7 figures
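
    The pre-integration trick the abstract refers to is easy to see in one concrete form: a summed-area table lets a box average of any size be evaluated with a four-tap finite difference, which is what makes the per-pixel cost constant. Below is a minimal NumPy sketch of that classical building block (not the paper's box-spline algorithm itself); the function name and the boundary-clipping behaviour are illustrative choices.

        import numpy as np

        def box_filter(img, rx, ry):
            # Pre-integration: a zero-padded summed-area table (integral image).
            sat = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
            sat[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
            h, w = img.shape
            out = np.empty((h, w))
            for y in range(h):
                for x in range(w):
                    # Clip the (2*ry+1) x (2*rx+1) window to the image boundary.
                    y0, y1 = max(y - ry, 0), min(y + ry + 1, h)
                    x0, x1 = max(x - rx, 0), min(x + rx + 1, w)
                    # Local finite difference: four table look-ups per pixel,
                    # independent of the window size -- the O(1) property.
                    s = sat[y1, x1] - sat[y0, x1] - sat[y1, x0] + sat[y0, x0]
                    out[y, x] = s / ((y1 - y0) * (x1 - x0))
            return out

    Space-variant filtering follows by letting rx and ry (and, in the paper, the orientation of the constituent box distributions) change from pixel to pixel at no extra cost.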

    Filtering and Forecasting Spot Electricity Prices in the Increasingly Deregulated Australian Electricity Market

    Modelling and forecasting the volatile spot pricing process for electricity presents a number of challenges. For increasingly deregulated electricity markets, like that in the Australian state of New South Wales, there is a need to price a range of derivative securities used for hedging. Any derivative pricing model that hopes to capture the pricing dynamics within this market must be able to cope with the extreme volatility of the observed spot prices. By applying wavelet analysis, we examine both the price and demand series at different time locations and levels of resolution to reveal and differentiate what is signal and what is noise. Further, we cleanse the data of leakage from the high-frequency, mean-reverting price spikes into the more fundamental levels of frequency resolution. As it is from these levels that we base the reconstruction of our filtered series, we need to ensure they are least contaminated by noise. Using the filtered data, we explore time series models as possible candidates for explaining the pricing process and evaluate their forecasting ability. These models include one from the threshold autoregressive (TAR) class. What we find is that models from the TAR class produce forecasts that best capture the mean and variance components of the actual data. Keywords: electricity; wavelets; time series models; forecasting
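
    As a rough illustration of the filtering step described above, the sketch below decomposes a price series with PyWavelets, zeroes the finest detail levels where the mean-reverting spikes live, and reconstructs from the coarser levels. This is a simplification: the paper additionally cleanses spike leakage out of the coarse levels, which plain zeroing of fine levels does not do, and the wavelet, depth, and cut-off chosen here are arbitrary.

        import numpy as np
        import pywt  # PyWavelets

        def filter_prices(prices, wavelet="db4", depth=6, drop_finest=3):
            coeffs = pywt.wavedec(prices, wavelet, level=depth)
            # coeffs = [approx, detail_depth, ..., detail_1]; zero the finest
            # detail levels, which carry the high-frequency price spikes.
            for i in range(len(coeffs) - drop_finest, len(coeffs)):
                coeffs[i] = np.zeros_like(coeffs[i])
            # Reconstruct the smoothed series from the surviving levels.
            return pywt.waverec(coeffs, wavelet)[: len(prices)]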

    Fast space-variant elliptical filtering using box splines

    The efficient realization of linear space-variant (non-convolution) filters is a challenging computational problem in image processing. In this paper, we demonstrate that it is possible to filter an image with a Gaussian-like elliptic window of varying size, elongation and orientation using a fixed number of computations per pixel. The associated algorithm, which is based on a family of smooth compactly supported piecewise polynomials, the radially-uniform box splines, is realized using pre-integration and local finite differences. The radially-uniform box splines are constructed through the repeated convolution of a fixed number of box distributions, which have been suitably scaled and distributed radially in a uniform fashion. The attractive features of these box splines are their asymptotic behavior, their simple covariance structure, and their quasi-separability. They converge to Gaussians as their order increases, and are used to approximate anisotropic Gaussians of varying covariance simply by controlling the scales of the constituent box distributions. Based on the second feature, we develop a technique for continuously controlling the size, elongation and orientation of these Gaussian-like functions. Finally, the quasi-separable structure, along with a certain scaling property of box distributions, is used to efficiently realize the associated space-variant elliptical filtering, which requires O(1) computations per pixel irrespective of the shape and size of the filter. Comment: 12 figures; IEEE Transactions on Image Processing, vol. 19, 2010
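
    The Central Limit Theorem behaviour the construction rests on can be checked in one dimension: repeatedly convolving a box with itself yields a spline kernel that rapidly approaches a Gaussian. The following is a 1-D analogue only; the paper's radially-uniform box splines instead convolve rotated two-dimensional box distributions.

        import numpy as np

        def iterated_box(width, order):
            # Normalized discrete box, convolved with itself (order - 1) times.
            box = np.ones(width) / width
            kernel = box
            for _ in range(order - 1):
                kernel = np.convolve(kernel, box)
            return kernel

        # A fourth-order kernel is already close to a Gaussian whose variance
        # is the sum of the box variances: order * (width**2 - 1) / 12.
        k = iterated_box(width=15, order=4)

    Scaling the individual boxes before convolving is what gives the continuous control over the covariance of the resulting Gaussian-like kernel.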

    FRESH – FRI-based single-image super-resolution algorithm

    In this paper, we consider the problem of single-image super-resolution and propose a novel algorithm that outperforms state-of-the-art methods without the need to learn patch pairs from external data sets. We achieve this by modeling images, and more precisely lines of images, as piecewise smooth functions, and propose a resolution enhancement method for this class of functions. The method makes use of the theory of sampling signals with finite rate of innovation (FRI) and combines it with traditional linear reconstruction methods. We combine the two reconstructions by leveraging the multiresolution analysis of wavelet theory and show how an FRI reconstruction and a linear reconstruction can be fused using filter banks. We then apply this method along vertical, horizontal, and diagonal directions in an image to obtain a single-image super-resolution algorithm. We also propose a further improvement of the method based on learning from the errors of our super-resolution result at lower resolution levels. Simulation results show that our method outperforms state-of-the-art algorithms under different blurring kernels.
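
    The linear half of the fusion has a compact expression: if the low-resolution image is treated as the coarse approximation band of a wavelet decomposition, one level of the synthesis filter bank doubles the resolution. The sketch below shows only that half, with zeroed detail bands standing in for the detail coefficients that the FRI step in FRESH would actually estimate; the wavelet choice is arbitrary.

        import numpy as np
        import pywt

        def linear_upscale(lowres, wavelet="db2"):
            # Treat the low-res image as the approximation band and run one
            # level of the inverse 2-D DWT with zeroed (H, V, D) detail bands.
            zeros = np.zeros_like(lowres, dtype=float)
            return pywt.idwt2((lowres.astype(float), (zeros, zeros, zeros)), wavelet)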

    Blind deconvolution of medical ultrasound images: parametric inverse filtering approach

    DOI: 10.1109/TIP.2007.910179. The problem of reconstructing ultrasound images by means of blind deconvolution has long been recognized as one of the central problems in medical ultrasound imaging. In this paper, this problem is addressed by proposing a blind deconvolution method that is innovative in several ways. In particular, the method is based on parametric inverse filtering, whose parameters are optimized using two-stage processing. At the first stage, some partial information on the point spread function is recovered. Subsequently, this information is used to explicitly constrain the spectral shape of the inverse filter. From this perspective, the proposed methodology can be viewed as a "hybridization" of two standard strategies in blind deconvolution, which are based on either concurrent or successive estimation of the point spread function and the image of interest. Moreover, evidence is provided that the "hybrid" approach can outperform the standard ones in a number of important practical cases. Additionally, the present study introduces a different approach to parameterizing the inverse filter. Specifically, we propose to model the inverse transfer function as a member of a principal shift-invariant subspace. It is shown that such a parameterization results in considerably more stable reconstructions than standard parameterization methods. Finally, it is shown how the inverse filters designed in this way can be used to deconvolve the images in a non-blind manner so as to further improve their quality. The usefulness and practicability of all the introduced innovations are proven in a series of both in silico and in vivo experiments, where the proposed deconvolution algorithms improve the resolution of ultrasound images by factors of 2.24 or 6.52 (as judged by the autocorrelation criterion), depending on the type of regularization method used.
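
    The non-blind final step has a standard frequency-domain form. The sketch below applies a Tikhonov-regularized inverse filter built from a point spread function that is assumed known; the paper instead recovers the PSF spectrum blindly and parameterizes the inverse transfer function in a principal shift-invariant subspace, neither of which is reproduced here.

        import numpy as np

        def inverse_filter(image, psf, eps=1e-2):
            # Zero-padded PSF spectrum on the image grid.
            H = np.fft.fft2(psf, s=image.shape)
            # Regularized inverse transfer function (Tikhonov / Wiener-like):
            # conj(H) / (|H|^2 + eps) rather than the unstable 1 / H.
            G = np.conj(H) / (np.abs(H) ** 2 + eps)
            return np.real(np.fft.ifft2(np.fft.fft2(image) * G))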

    Map-making in small field modulated CMB polarisation experiments: approximating the maximum-likelihood method

    Map-making presents a significant computational challenge to the next generation of kilopixel CMB polarisation experiments. Years' worth of time-ordered data (TOD) from thousands of detectors will need to be compressed into maps of the T, Q and U Stokes parameters. Fundamental to the science goal of these experiments, the observation of B-modes, is the ability to control noise and systematics. In this paper, we consider an alternative to the maximum-likelihood method, called destriping, in which the noise is modelled as a set of discrete offset functions and then subtracted from the time stream. We compare our destriping code (Descart: the DEStriping CARTographer) to a full maximum-likelihood map-maker, applying them to 200 Monte Carlo simulations of time-ordered data from a ground-based, partial-sky polarisation modulation experiment. In these simulations, the noise is dominated by either detector or atmospheric 1/f noise. Using prior information on the power spectrum of this noise, we produce destriped maps of T, Q and U which are negligibly different from optimal. The method does not filter the signal or bias the E- or B-mode power spectra. Depending on the length of the destriping baseline, the method delivers between 5 and 22 times improvement in computation time over the maximum-likelihood algorithm. We find that, for the specific case of single-detector maps, it is essential to destripe the atmospheric 1/f noise in order to detect B-modes, even though the Q and U signals are modulated by a half-wave plate spinning at 5 Hz. Comment: 18 pages, 17 figures; MNRAS accepted. v2: content added (including Table 2), typos corrected
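
    The offset model at the heart of destriping can be demonstrated on a toy single-detector, temperature-only stream: one constant offset per baseline-length chunk is estimated alternately with a binned sky map, then subtracted. This is an illustrative sketch only; Descart solves for all offsets jointly, handles the Q and U polarisation maps, and can use a noise-power prior, none of which is attempted here.

        import numpy as np

        def destripe(tod, pix, npix, baseline, iters=20):
            nb = -(-len(tod) // baseline)      # number of baseline chunks
            offsets = np.zeros(nb)
            sky = np.zeros(npix)
            for _ in range(iters):
                # Subtract current offsets and bin the cleaned TOD into a map.
                clean = tod - np.repeat(offsets, baseline)[: len(tod)]
                hits = np.maximum(np.bincount(pix, minlength=npix), 1)
                sky = np.bincount(pix, weights=clean, minlength=npix) / hits
                # Re-estimate each offset from the signal-subtracted residual.
                resid = tod - sky[pix]
                for k in range(nb):
                    offsets[k] = resid[k * baseline:(k + 1) * baseline].mean()
            return sky, offsets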

    Fast and precise map-making for massively multi-detector CMB experiments

    Future cosmic microwave background (CMB) polarisation experiments aim to measure an unprecedentedly small signal: the primordial gravitational-wave component of the polarisation field, the B-mode. To achieve this, they will analyse huge datasets, involving years' worth of time-ordered data (TOD) from massively multi-detector focal planes. This creates the need for fast and precise methods to complement the maximum-likelihood (ML) approach in analysis pipelines. In this paper, we investigate fast map-making methods as applied to long-duration, massively multi-detector, ground-based experiments, in the context of the search for B-modes. We focus on two alternative map-making approaches, destriping and TOD filtering, comparing their performance on simulated multi-detector polarisation data. We have written an optimised, parallel destriping code, the DEStriping CARTographer DESCART, that is generalised for massive focal planes, including the potential effect of cross-correlated TOD 1/f noise. We also determine the scaling of computing time for destriping as applied to a simulated full-season data set for a realistic experiment. We find that destriping can outperform filtering in estimating both the large-scale E- and B-mode angular power spectra. In particular, filtering can produce significant spurious B-mode power via E-B mixing. Whilst this can be removed, it contributes to the variance of B-mode bandpower estimates at scales near the primordial B-mode peak. For the experimental configuration we simulate, this affects the possible detection significance for primordial B-modes. Destriping is a viable fast alternative to the full ML approach that does not cause the problems associated with filtering, and is flexible enough to fit into both ML and Monte Carlo pseudo-Cl pipelines. Comment: 16 pages, 14 figures; MNRAS accepted. Typos corrected, and the order-of-magnitude computing time/memory requirements in Section 4 replaced by precise numbers
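
    For contrast, the TOD-filtering alternative in this comparison can be as simple as removing a low-order trend from each scan of the time stream, as sketched below under an assumed fixed scan length; it is this removal of genuine large-scale signal that drives the E-to-B mixing discussed above.

        import numpy as np

        def filter_tod(tod, scan_len):
            out = tod.astype(float).copy()
            for s in range(0, len(out), scan_len):
                seg = out[s:s + scan_len]
                # Crude high-pass: subtract the per-scan mean. Real pipelines
                # remove polynomial or Fourier modes, but the effect on large
                # angular scales is the same in kind.
                seg -= seg.mean()
            return out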