    Image Scaling by de la Vallée-Poussin Filtered Interpolation

    We present a new image scaling method for both downscaling and upscaling, running with any scale factor or target size. The resized image is obtained by sampling a bivariate polynomial that globally interpolates the data at the new scale. The method's particularities lie in both the sampling model and the interpolation polynomial we use. Rather than classical uniform grids, we consider an unusual sampling system based on Chebyshev zeros of the first kind. Such an optimal distribution of nodes allows us to consider near-best interpolation polynomials defined by a filter of de la Vallée-Poussin type. The action ray of this filter provides an additional parameter that can be suitably tuned to improve the approximation. The method has been tested on a significant number of different image datasets. The results are evaluated in qualitative and quantitative terms and compared with other competitive methods. The perceived quality of the scaled images is such that important details are preserved and artifacts are rare. Competitive quality measures, good visual quality, limited computational effort, and moderate memory demand make the method suitable for real-world applications.
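
    The abstract does not include an implementation. As orientation only, the following is a minimal Python sketch of separable polynomial interpolation at Chebyshev zeros of the first kind, applied row- and column-wise to resize a grayscale image. It uses plain, unfiltered Chebyshev interpolation rather than the paper's de la Vallée-Poussin filtered operator, and all function names and parameters are illustrative.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_zeros(n):
    # Chebyshev zeros of the first kind in (-1, 1): x_k = cos((2k + 1) * pi / (2n))
    k = np.arange(n)
    return np.cos((2 * k + 1) * np.pi / (2 * n))

def resample_1d(samples, new_size):
    # Treat `samples` as values taken at the n Chebyshev zeros, build the unique
    # interpolating polynomial of degree n - 1, and evaluate it at `new_size` new zeros.
    n = len(samples)
    coeffs = C.chebfit(cheb_zeros(n), samples, deg=n - 1)
    return C.chebval(cheb_zeros(new_size), coeffs)

def resample_image(img, new_h, new_w):
    # Separable resize: rows first, then columns (grayscale float image assumed).
    rows = np.apply_along_axis(resample_1d, 1, np.asarray(img, dtype=float), new_w)
    return np.apply_along_axis(resample_1d, 0, rows, new_h)
```

    In practice the least-squares fit would be replaced by a fast Chebyshev transform, and, roughly speaking, the de la Vallée-Poussin filter would taper the highest-order coefficients, with its action ray controlling the width of that taper.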

    Nonlocal Co-occurrence for Image Downscaling

    Image downscaling is one of the most widely used operations in image processing and computer graphics. It was recently demonstrated in the literature that kernel-based convolutional filters can be modified to develop efficient image downscaling algorithms. In this work, we present a new downscaling technique based on the kernel-based image filtering concept. We propose to use the pairwise co-occurrence similarity of pixel pairs as the range kernel similarity in the filtering operation. The co-occurrence of each pixel pair is learned directly from the input image, in a neighborhood-based fashion over the whole image. The proposed method preserves the high-frequency structures of the input image in the downscaled image. The resulting images retain visually important details and do not suffer from edge-blurring artifacts. We demonstrate the effectiveness of the proposed approach with extensive experiments on a large number of images downscaled with various downscaling factors. Comment: 9 pages, 8 figures.
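
    As a rough illustration of using co-occurrence statistics as a range kernel, here is a hedged Python sketch for 8-bit grayscale images: it accumulates a co-occurrence matrix of quantized intensities over local neighborhoods and uses it as the range weight when averaging each output pixel's support window. This is not the authors' algorithm; the quantization level, neighborhood radius, and box-shaped spatial support are all illustrative choices.

```python
import numpy as np

def cooccurrence(img, levels=32, radius=2):
    # Quantize an 8-bit grayscale image and count how often each intensity pair
    # co-occurs within a (2*radius + 1)^2 neighborhood, over the whole image.
    q = np.clip((img.astype(float) / 256.0 * levels).astype(int), 0, levels - 1)
    h, w = q.shape
    M = np.zeros((levels, levels))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            a = q[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            b = q[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            np.add.at(M, (a.ravel(), b.ravel()), 1)
    return M / M.sum(), q

def downscale(img, factor, levels=32):
    # Box-window downscaling where the co-occurrence of the window centre with each
    # neighbour acts as the range kernel (no spatial kernel, for brevity).
    M, q = cooccurrence(img, levels)
    h, w = img.shape
    oh, ow = h // factor, w // factor
    out = np.zeros((oh, ow))
    r = factor
    for i in range(oh):
        for j in range(ow):
            cy, cx = i * factor + factor // 2, j * factor + factor // 2
            y0, y1 = max(0, cy - r), min(h, cy + r + 1)
            x0, x1 = max(0, cx - r), min(w, cx + r + 1)
            weights = M[q[cy, cx], q[y0:y1, x0:x1]]   # co-occurrence as range similarity
            patch = img[y0:y1, x0:x1].astype(float)
            out[i, j] = (weights * patch).sum() / (weights.sum() + 1e-12)
    return out
```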

    DeepCEL0 for 2D Single Molecule Localization in Fluorescence Microscopy

    In fluorescence microscopy, Single Molecule Localization Microscopy (SMLM) techniques aim at localizing high-density fluorescent molecules with high precision by stochastically activating and imaging small subsets of blinking emitters. Super Resolution (SR) plays an important role in this field since it makes it possible to go beyond the intrinsic light diffraction limit. In this work, we propose a deep learning-based algorithm for precise molecule localization in high-density frames acquired by SMLM techniques, whose $\ell_2$-based loss function is regularized by positivity and $\ell_0$-based constraints. The $\ell_0$ term is relaxed through its Continuous Exact $\ell_0$ (CEL0) counterpart. The resulting approach, named DeepCEL0, is parameter-free, more flexible, faster, and provides more precise molecule localization maps than other state-of-the-art methods. We validate our approach on both simulated and real fluorescence microscopy data.
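
    For reference, the CEL0 penalty mentioned in the abstract is the continuous exact relaxation of the $\ell_0$ norm introduced by Soubies et al. Below is a minimal NumPy sketch of an $\ell_2$ data term combined with a soft positivity penalty and the CEL0 term, in the spirit of (but not identical to) the DeepCEL0 loss; the weights `lam`, `mu`, and the scalar operator norm `a` are illustrative.

```python
import numpy as np

def cel0_penalty(u, a, lam):
    # Continuous Exact l0 (CEL0) relaxation of the l0 norm (Soubies et al.),
    # applied element-wise; `a` stands in for the per-column norm of the
    # forward operator (a single scalar here for simplicity).
    t = np.sqrt(2.0 * lam) / a
    inside = np.abs(u) <= t
    return np.sum(lam - 0.5 * a**2 * (np.abs(u) - t) ** 2 * inside)

def deepcel0_style_loss(pred, target, a=1.0, lam=0.1, mu=1.0):
    # l2 data fidelity + soft positivity penalty + CEL0 sparsity term.
    # The weights and the positivity encoding are illustrative, not the paper's.
    data = 0.5 * np.sum((pred - target) ** 2)
    positivity = np.sum(np.minimum(pred, 0.0) ** 2)   # penalizes negative intensities
    return data + mu * positivity + cel0_penalty(pred, a, lam)
```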

    End-to-end Alternating Optimization for Real-World Blind Super Resolution

    Blind Super-Resolution (SR) usually involves two sub-problems: 1) estimating the degradation of the given low-resolution (LR) image; 2) super-resolving the LR image to its high-resolution (HR) counterpart. Both problems are ill-posed due to the information loss in the degradation process. Most previous methods try to solve the two problems independently, but they often fall into a dilemma: a good super-resolved HR result requires an accurate degradation estimate, which, however, is difficult to obtain without the help of the original HR information. To address this issue, instead of considering the two problems independently, we adopt an alternating optimization algorithm that can estimate the degradation and restore the SR image in a single model. Specifically, we design two convolutional neural modules, namely Restorer and Estimator. Restorer restores the SR image based on the estimated degradation, and Estimator estimates the degradation with the help of the restored SR image. We alternate these two modules repeatedly and unfold this process to form an end-to-end trainable network. In this way, Restorer and Estimator both benefit from each other's intermediate results, which makes each sub-problem easier. Moreover, since Restorer and Estimator are optimized in an end-to-end manner, they become more tolerant of each other's estimation deviations and cooperate better to achieve more robust and accurate final results. Extensive experiments on both synthetic datasets and real-world images show that the proposed method can largely outperform state-of-the-art methods and produce more visually favorable results. The code is released at https://github.com/greatlog/RealDAN.git. Comment: Extension of our previous NeurIPS paper. Accepted to IJC
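
    The released code linked above is the authoritative implementation. Purely as a structural illustration of the alternating/unfolding idea, here is a toy PyTorch sketch with stand-in Estimator and Restorer modules; their architectures, the degradation-code size, and the number of unfolded steps are placeholders, not the paper's design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Estimator(nn.Module):
    # Toy stand-in: predicts a small degradation code from the LR image
    # and the current SR estimate downsampled back to LR size.
    def __init__(self, dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim))
    def forward(self, lr, sr_down):
        return self.net(torch.cat([lr, sr_down], dim=1))

class Restorer(nn.Module):
    # Toy stand-in: refines the SR image conditioned on the degradation code.
    def __init__(self, dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, sr, code):
        b, _, h, w = sr.shape
        code_map = code.view(b, -1, 1, 1).expand(-1, -1, h, w)
        return sr + self.net(torch.cat([sr, code_map], dim=1))

def unfolded_sr(lr, estimator, restorer, scale=2, steps=4):
    # Unfold a few Estimator/Restorer alternations into one differentiable pass.
    sr = F.interpolate(lr, scale_factor=scale, mode='bicubic', align_corners=False)
    for _ in range(steps):
        sr_down = F.interpolate(sr, size=lr.shape[-2:], mode='bicubic', align_corners=False)
        code = estimator(lr, sr_down)
        sr = restorer(sr, code)
    return sr
```

    A call such as `unfolded_sr(torch.rand(1, 3, 32, 32), Estimator(), Restorer())` runs the whole unfolded loop in one forward pass, so both modules can be trained jointly end to end.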

    High Dimensional Statistical Models: Applications to Climate

    University of Minnesota Ph.D. dissertation, September 2015. Major: Computer Science. Advisor: Arindam Banerjee. 1 computer file (PDF); ix, 103 pages.

    Recent years have seen enormous growth in the collection and curation of datasets in various domains, often involving thousands or even millions of variables. Examples include social networking websites, geophysical sensor networks, cancer genomics, climate science, and many more. In many applications it is of prime interest to understand the dependencies between variables, so that predictive models may be designed from knowledge of those dependencies. However, traditional statistical methods, such as least squares regression, are often inapplicable for such tasks, since the available sample size is much smaller than the problem dimensionality. We therefore require new models and methods for statistical data analysis that provide provable estimation guarantees even in such high-dimensional scenarios, together with efficient implementation and optimization routines. Statistical models satisfying both criteria will be important for solving prediction problems in many scientific domains.

    High-dimensional statistical models have attracted interest from both the theoretical and applied machine learning communities in recent years. Of particular interest are parametric models, which consider estimation of coefficient vectors in the scenario where the sample size is much smaller than the dimensionality of the problem. Although most existing work focuses on analyzing sparse regression methods using L1-norm regularizers, there exist other "structured" norm regularizers that encode more interesting structure in the sparsity induced on the estimated regression coefficients. In the first part of this thesis, we conduct a theoretical study of such structured regression methods. First, we prove statistical consistency of regression with the hierarchical tree-structured norm regularizer known as hiLasso. Second, we formulate a generalization of the popular Dantzig Selector for sparse linear regression to any norm regularizer, called the Generalized Dantzig Selector, and provide statistical consistency guarantees for estimation. Further, we provide the first known results on non-asymptotic rates of consistency for the recently proposed k-support norm regularizer. Finally, we show that in the presence of measurement errors in the covariates, the tools we use for proving consistency in the noiseless setting are inadequate for establishing statistical consistency.

    In the second part of the thesis, we consider the application of regularized regression methods to statistical modeling problems in climate science. First, we apply the Sparse Group Lasso, a special case of hiLasso, to predictive modeling of land climate variables from measurements of atmospheric variables over oceans. Extensive experiments illustrate that structured sparse regression provides both better performance and more interpretable models than unregularized regression and even unstructured sparse regression methods. Second, we apply regularized regression methods to discover stable factors for predictive modeling in climate. Specifically, we consider the problem of determining the dominant factors influencing winter precipitation over the Great Lakes region of the US. Using a sparse linear regression method, followed by random permutation tests, we mine stable sets of predictive features from a pool of possible predictors. Some of the stable factors discovered through this process are shown to relate to known physical processes influencing precipitation over the Great Lakes.
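
    As context for the Generalized Dantzig Selector mentioned above, the classical Dantzig Selector and its natural norm-generalized form can be written as follows, with R an arbitrary norm and R* its dual; this is a sketch of the standard formulation, and the exact statement in the thesis may differ in details.

```latex
% Classical Dantzig Selector (l1 norm, l_infinity dual)
\hat{\theta} \;=\; \arg\min_{\theta \in \mathbb{R}^p} \|\theta\|_1
\quad \text{s.t.} \quad \bigl\| X^{\top} (y - X\theta) \bigr\|_{\infty} \le \lambda_p

% Generalized Dantzig Selector: replace the l1 norm by any norm R
% and measure the correlation of the residual in the dual norm R^*
\hat{\theta} \;=\; \arg\min_{\theta \in \mathbb{R}^p} R(\theta)
\quad \text{s.t.} \quad R^{*}\!\bigl( X^{\top} (y - X\theta) \bigr) \le \lambda_p
```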