
    A Hierarchical Bayesian Model for Frame Representation

    In many signal processing problems, it may be fruitful to represent the signal under study in a frame. If a probabilistic approach is adopted, it then becomes necessary to estimate the hyper-parameters characterizing the probability distribution of the frame coefficients. This problem is difficult since, in general, the frame synthesis operator is not bijective; consequently, the frame coefficients are not directly observable. This paper introduces a hierarchical Bayesian model for frame representation. The posterior distribution of the frame coefficients and model hyper-parameters is derived, and hybrid Markov chain Monte Carlo algorithms are subsequently proposed to sample from it. The generated samples are then exploited to estimate the hyper-parameters and the frame coefficients of the target signal. Validation experiments show that the proposed algorithms provide an accurate estimation of the frame coefficients and hyper-parameters. Application to practical image denoising problems shows the impact of the resulting Bayesian estimation on the quality of the recovered signal.
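To make the flavor of such a sampler concrete, here is a minimal sketch (not the paper's actual model): a Gibbs sampler for a toy redundant synthesis frame with a Gaussian coefficient prior and inverse-gamma hyperpriors on the noise and prior variances. The frame `F`, the dimensions, and the priors are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy redundant frame: synthesis operator F maps K coefficients to N samples (K > N),
# so F is not bijective and the coefficients are not directly observable.
N, K = 32, 48
F = rng.standard_normal((N, K)) / np.sqrt(N)
x_true = rng.standard_normal(K) * (rng.random(K) < 0.3)   # sparse-ish coefficients
y = F @ x_true + rng.normal(scale=0.1, size=N)            # noisy observation

def gibbs_frame(y, F, n_iter=500):
    """Gibbs sampler: x ~ N(0, tau2 I) prior, Gaussian likelihood with
    variance sigma2, Jeffreys-type inverse-gamma updates for both variances."""
    N, K = F.shape
    sigma2, tau2 = 1.0, 1.0
    x = np.zeros(K)
    samples = []
    for _ in range(n_iter):
        # x | rest: Gaussian with precision P = F'F / sigma2 + I / tau2
        P = F.T @ F / sigma2 + np.eye(K) / tau2
        L = np.linalg.cholesky(P)
        mean = np.linalg.solve(P, F.T @ y / sigma2)
        x = mean + np.linalg.solve(L.T, rng.standard_normal(K))
        # sigma2 | rest: inverse-gamma, sampled as 1 / Gamma
        r = y - F @ x
        sigma2 = 1.0 / rng.gamma(N / 2, 2.0 / (r @ r))
        # tau2 | rest: inverse-gamma
        tau2 = 1.0 / rng.gamma(K / 2, 2.0 / (x @ x))
        samples.append(x)
    # discard the first half as burn-in, average the rest
    return np.mean(samples[n_iter // 2:], axis=0), sigma2, tau2

x_hat, s2, t2 = gibbs_frame(y, F)
```

The posterior mean of the retained samples serves as the coefficient estimate, and the variance draws play the role of the estimated hyper-parameters.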

    A Proximal Approach for a Class of Matrix Optimization Problems

    In recent years, there has been a growing interest in mathematical models leading to the minimization, in a symmetric matrix space, of a Bregman divergence coupled with a regularization term. We address problems of this type within a general framework where the regularization term is split in two parts, one being a spectral function while the other is arbitrary. A Douglas-Rachford approach is proposed to address such problems, and a list of proximity operators is provided, allowing us to consider various choices for the fit-to-data functional and for the regularization term. Numerical experiments show the validity of this approach for solving convex optimization problems encountered in the context of sparse covariance matrix estimation. Based on our theoretical results, an algorithm is also proposed for noisy graphical lasso, where a precision matrix has to be estimated in the presence of noise. The nonconvexity of the resulting objective function is dealt with via a majorization-minimization approach, i.e. by building a sequence of convex surrogates and solving the inner optimization subproblems via the aforementioned Douglas-Rachford procedure. We establish conditions for the convergence of this iterative scheme and illustrate its good numerical performance with respect to state-of-the-art approaches.
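As a toy illustration of the Douglas-Rachford machinery involved, the sketch below minimizes a simple Frobenius fit plus an entrywise l1 penalty over symmetric matrices; the specific functional is an illustrative stand-in for the Bregman-divergence and spectral terms of the paper, chosen so the answer has a closed form to compare against.

```python
import numpy as np

def soft(V, t):
    """Proximity operator of t * ||.||_1 (entrywise soft-thresholding)."""
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

def douglas_rachford(Y, lam, gamma=1.0, n_iter=200):
    """Minimize 0.5 * ||X - Y||_F^2 + lam * ||X||_1 by Douglas-Rachford
    splitting: alternate the two proximity operators on an auxiliary Z."""
    Z = np.zeros_like(Y)
    for _ in range(n_iter):
        X = soft(Z, gamma * lam)                    # prox of the regularizer
        # prox of the quadratic fit term, evaluated at the reflected point
        U = (2 * X - Z + gamma * Y) / (1 + gamma)
        Z = Z + U - X
    return soft(Z, gamma * lam)                     # minimizer = prox_g(Z*)

rng = np.random.default_rng(1)
S = rng.standard_normal((5, 5))
Y = (S + S.T) / 2                                   # symmetric "covariance" data
X_dr = douglas_rachford(Y, lam=0.1)
X_ref = soft(Y, 0.1)    # closed form for this separable toy problem
```

Because both proximity operators preserve symmetry here, the iterates stay in the symmetric matrix space; for the paper's spectral regularizers the prox would instead act on eigenvalues.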

    Wavelet-based distributed source coding of video

    Publication in the conference proceedings of EUSIPCO, Antalya, Turkey, 200

    Graph-Cut Rate Distortion Algorithm for Contourlet-Based Image Compression

    The geometric features of images, such as edges, are difficult to represent. When a redundant transform is used for their extraction, the compression challenge is even more difficult. In this paper we present a new rate-distortion optimization algorithm based on graph theory that can efficiently encode the coefficients of a critically sampled, non-orthogonal, or even redundant transform, such as the contourlet decomposition. The basic idea is to construct a specialized graph such that its minimum cut minimizes the energy functional. We propose to apply this technique for rate-distortion Lagrangian optimization in subband image coding. The method yields good compression results compared to the state-of-the-art JPEG2000 codec, as well as a general improvement in visual quality. Index Terms: subband image coding, rate-distortion allocation
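The paper's graph-cut construction couples coefficients through the graph, which is beyond a short sketch; the snippet below only illustrates the underlying Lagrangian rate-distortion criterion, choosing for each block the operating point minimizing D + lambda * R when blocks are independent (so the minimization separates). The operating-point table is invented for illustration.

```python
# Toy Lagrangian rate-distortion allocation: for each block, choose the
# quantizer index minimizing D + lambda * R from a small table of
# (rate, distortion) operating points.

def rd_allocate(options, lam):
    """options: one list of (rate, distortion) points per block.
    Returns the chosen index per block and the total Lagrangian cost."""
    choices, total = [], 0.0
    for pts in options:
        costs = [d + lam * r for (r, d) in pts]
        k = min(range(len(pts)), key=costs.__getitem__)
        choices.append(k)
        total += costs[k]
    return choices, total

# two blocks, each with three (rate, distortion) operating points
opts = [[(8, 1.0), (4, 4.0), (2, 9.0)],
        [(8, 0.5), (4, 2.0), (2, 8.0)]]
choices, total = rd_allocate(opts, lam=0.5)   # → ([0, 1], 9.0)
```

Sweeping lambda traces out the convex hull of achievable (rate, distortion) pairs; the graph-cut formulation solves the analogous problem when the cost couples neighboring coefficients.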

    Report of AhG on Exploration in Wavelet Video Coding

    The AhG on Exploration in Wavelet Video Coding [1] was established at the 73rd MPEG meeting in Poznan, Poland, with the following mandates: 1. identify and describe new applications of wavelet video coding; 2. for such applications, define coding conditions and plan a performance comparison with other codecs; 3. conduct the exploration experiments; 4. maintain and validate the exploration reference software; 5. maintain and edit the wavelet codec reference document. All discussions took place over the reflector, [email protected], where ~100 emails were exchanged. AhG meeting: Saturday October 24th, 14:30-18:30, Nice Acropolis. AhG meeting agenda: 1. review of conducted exploration experiments on wavelet video coding; 2. review of 74th ISO/MPEG meeting input documents of interest to this AhG; 3. review of this AhG's mandates and preparation of the AhG report, including recommendations.

    Generalized Forward-Backward Splitting

    This paper introduces the generalized forward-backward splitting algorithm for minimizing convex functions of the form F + \sum_{i=1}^n G_i, where F has a Lipschitz-continuous gradient and the G_i's are simple in the sense that their Moreau proximity operators are easy to compute. While the forward-backward algorithm cannot deal with more than n = 1 non-smooth function, our method generalizes it to the case of arbitrary n. Our method makes explicit use of the regularity of F in the forward step, and the proximity operators of the G_i's are applied in parallel in the backward step. This allows the generalized forward-backward to efficiently address an important class of convex problems. We prove its convergence in infinite dimension, and its robustness to errors in the computation of the proximity operators and of the gradient of F. Examples on inverse problems in imaging demonstrate the advantage of the proposed method in comparison to other splitting algorithms. Comment: 24 pages, 4 figures.
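A minimal sketch of the scheme described above, assuming the standard form with equal weights: one auxiliary variable per non-smooth term, a gradient (forward) step on F, and the proximity operators of the G_i applied in parallel. The test problem (a nonnegative lasso, F = 0.5 * ||Ax - b||^2 with an l1 term and a nonnegativity constraint) is an illustrative choice, not the paper's.

```python
import numpy as np

def gfb(grad_F, proxes, x0, gamma, n_iter=500):
    """Generalized forward-backward for min F + sum_i G_i:
    F smooth (gradient grad_F), each G_i given as a prox prox_i(v, t)
    computing the proximity operator of t * G_i at v. Equal weights 1/n."""
    n = len(proxes)
    z = [x0.copy() for _ in range(n)]
    x = x0.copy()
    for _ in range(n_iter):
        g = grad_F(x)                                 # forward step uses F's regularity
        for i in range(n):                            # backward steps run in parallel
            z[i] = z[i] + proxes[i](2 * x - z[i] - gamma * g, n * gamma) - x
        x = sum(z) / n
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
lam = 0.5
grad = lambda x: A.T @ (A @ x - b)                    # F = 0.5 * ||Ax - b||^2
prox_l1 = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0)
prox_pos = lambda v, t: np.maximum(v, 0)              # indicator of x >= 0
L = np.linalg.norm(A, 2) ** 2                         # Lipschitz constant of grad F
x = gfb(grad, [prox_l1, prox_pos], np.zeros(10), gamma=1.0 / L)
```

With n = 2 non-smooth terms the plain forward-backward algorithm would not apply directly, which is exactly the gap the generalized scheme fills.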

    Endopolyploidy as a potential alternative adaptive strategy for Arabidopsis leaf size variation in response to UV-B

    The extent of endoreduplication in leaf growth is group- or even species-specific, and its adaptive role is still unclear. A survey of Arabidopsis accessions for variation at the level of endopolyploidy, cell number, and cell size in leaves revealed extensive genetic variation in endopolyploidy level. High endopolyploidy is associated with increased leaf size, both in natural and in genetically unstructured (mapping) populations. The underlying genes were identified as quantitative trait loci that control endopolyploidy in nature by modulating the progression of successive endocycles during organ development. This complex genetic architecture indicates an adaptive mechanism that allows differential organ growth over a broad geographic range and under stressful environmental conditions. UV-B radiation was identified as a significant positive climatic predictor for high endopolyploidy. Arabidopsis accessions carrying the increasing alleles for endopolyploidy also have enhanced tolerance to UV-B radiation. UV-absorbing secondary metabolites provide an additional protective strategy in accessions that display low endopolyploidy. Taken together, these results demonstrate that high constitutive endopolyploidy is a significant predictor for organ size in natural populations and is likely to contribute to sustaining plant growth under high incident UV radiation. Endopolyploidy may therefore form part of the range of UV-B tolerance mechanisms that exist in natural populations.

    A Convex Optimization Approach for Depth Estimation Under Illumination Variation


    Majorization-Minimization for sparse SVMs

    Several decades ago, Support Vector Machines (SVMs) were introduced for performing binary classification tasks in a supervised framework. Nowadays, they often outperform other supervised methods and remain one of the most popular approaches in the machine learning arena. In this work, we investigate the training of SVMs through the minimization of a squared hinge loss with a smooth sparsity-promoting regularizer. This choice paves the way for fast training methods built on majorization-minimization approaches, benefiting from the Lipschitz differentiability of the loss function. Moreover, the proposed approach allows us to handle sparsity-preserving regularizers that promote the selection of the most significant features, thus enhancing the performance. Numerical tests and comparisons conducted on three different datasets demonstrate the good performance of the proposed methodology in terms of qualitative metrics (accuracy, precision, recall, and F1 score) as well as computational cost.
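To sketch the majorization-minimization idea in this setting (with an l1 penalty standing in for the paper's regularizer; data and all parameters are invented for illustration): the penalty |w_j| is majorized by the quadratic w_j^2 / (2 a_j) + a_j / 2, so each outer step minimizes a smooth convex surrogate built from the Lipschitz-differentiable squared hinge loss, here by plain gradient descent.

```python
import numpy as np

def sparse_svm_mm(X, y, lam=0.1, n_outer=20, n_inner=100, eps=1e-3):
    """MM scheme for min_w sum_i max(0, 1 - y_i <x_i, w>)^2 + lam * ||w||_1.
    The l1 term is majorized at the current iterate by the quadratic bound
    |w_j| <= w_j^2 / (2 a_j) + a_j / 2 (a_j = |w_j| + eps, an epsilon-smoothed
    tangency point), and each smooth surrogate is minimized by gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    # safe step: curvature of loss is at most 2 ||X||_2^2, of the penalty lam / eps
    step = 1.0 / (2 * np.linalg.norm(X, 2) ** 2 + lam / eps)
    for _ in range(n_outer):
        a = np.abs(w) + eps                       # majorization point
        for _ in range(n_inner):
            margin = 1 - y * (X @ w)
            g_loss = -2 * X.T @ (y * np.maximum(margin, 0))
            g_pen = lam * w / a                   # gradient of the quadratic surrogate
            w = w - step * (g_loss + g_pen)
    return w

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 5))
w_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0])     # only two informative features
y = np.sign(X @ w_true + 0.1 * rng.standard_normal(100))
w = sparse_svm_mm(X, y)
```

The reweighting of the quadratic penalty at each outer iteration is what drives the uninformative coordinates toward zero, mimicking the feature-selection effect described in the abstract.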