
    Density theorems for ideal points in vector optimization.

    Several results are established concerning the density of the set of ideal points in the set of minimal solutions of positive support functionals of sets in normed linear spaces. The above ideal points are defined, and several characterizations and sufficient conditions for their existence are also stated.
    Keywords: Mathematical programming; Vector optimization; Efficient point; Density theorems; Normed linear spaces
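
    To make the terminology concrete, here is a small illustration of our own (not from the paper): for a finite set in R² under the componentwise order, the minimal (efficient) points are those not dominated by any other point, while an ideal point would have to minimize every coordinate simultaneously, so it rarely exists.

```python
import numpy as np

def minimal_points(S):
    """Points of S not dominated componentwise by another point of S."""
    return [p for p in S
            if not any(np.all(q <= p) and np.any(q < p) for q in S)]

def ideal_point(S):
    """An ideal point minimizes every coordinate at once; it exists only
    if the componentwise infimum of S is itself attained in S."""
    inf = np.min(np.vstack(S), axis=0)
    return next((p for p in S if np.array_equal(p, inf)), None)

S = [np.array(v) for v in [(1, 3), (2, 2), (3, 1), (4, 4)]]
print(minimal_points(S))  # (1,3), (2,2), (3,1): three minimal points
print(ideal_point(S))     # None: the infimum (1,1) does not belong to S
```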

    Ideal points in multiobjective programming

    The main object of this paper is to give conditions under which a minimal solution to a problem of mathematical programming can be transformed into a minimum solution in the usual sense of the order relations or, failing that, conditions under which that solution is adherent to the set of points which satisfy this last property. The interest of this problem is clear, since many of the usual properties in optimization (for instance, the sensitivity analysis of the solutions) are more easily studied for minimum solutions than for minimal ones.
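
    As a hypothetical illustration of the property at stake (our sketch, not the paper's method), a minimal solution is a minimum precisely when it dominates every feasible point componentwise:

```python
import numpy as np

def is_minimum(p, S):
    """True iff p <= q componentwise for every q in S, i.e. p is a minimum
    in the usual sense of the (componentwise) order relation."""
    return all(np.all(p <= q) for q in S)

S1 = [np.array(v) for v in [(1, 2), (2, 1), (3, 3)]]
S2 = [np.array(v) for v in [(1, 1), (2, 3), (3, 2)]]
print(is_minimum(np.array([1, 2]), S1))  # False: (1,2) is minimal, not a minimum
print(is_minimum(np.array([1, 1]), S2))  # True: (1,1) is a minimum of S2
```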

    Filterbank optimization with convex objectives and the optimality of principal component forms

    This paper proposes a general framework for the optimization of orthonormal filterbanks (FBs) for given input statistics. This includes, as special cases, many previous results on FB optimization for compression. It also solves problems that have not been considered thus far. FB optimization for coding gain maximization (for compression applications) has been well studied before. The optimum FB has been known to satisfy the principal component property, i.e., it minimizes the mean-square error caused by reconstruction after dropping the P weakest (lowest variance) subbands, for any P. We point out a much stronger connection between this property and the optimality of the FB. The main result is that a principal component FB (PCFB) is optimum whenever the minimization objective is a concave function of the subband variances produced by the FB. This result has its grounding in majorization and convex function theory and, in particular, explains the optimality of PCFBs for compression. We use the result to show various other optimality properties of PCFBs, especially for noise-suppression applications. Suppose the FB input is a signal corrupted by additive white noise, the desired output is the pure signal, and the subbands of the FB are processed to minimize the output noise. If each subband processor is a zeroth-order Wiener filter for its input, we can show that the expected mean square value of the output noise is a concave function of the subband signal variances. Hence, a PCFB is optimum in the sense of minimizing this mean square error. The above-mentioned concavity of the error and, hence, PCFB optimality continue to hold even with certain other subband processors, such as subband hard thresholds and constant multipliers, although these are not of serious practical interest. We prove certain extensions of this PCFB optimality result to cases where the input noise is colored and where the FB optimization is over a larger class that includes biorthogonal FBs. We also show that PCFBs do not exist for the classes of DFT and cosine-modulated FBs.
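
    A small numerical check of our own (the setup and constants are illustrative assumptions, not the paper's): for the transform coder class the KLT plays the role of the PCFB, its subband variance vector majorizes that of every other orthonormal transform, and so a concave objective — here the zeroth-order Wiener error under additive white noise — is minimized by it.

```python
import numpy as np

rng = np.random.default_rng(0)
n, noise_var = 4, 0.5                      # white noise variance at the FB input

A = rng.standard_normal((n, n))
R = A @ A.T                                # signal covariance (arbitrary, PSD)

def wiener_error(T):
    """Total zeroth-order Wiener error: sum of s*v/(s+v) over subbands,
    a concave (hence Schur-concave) function of the signal variances s."""
    s = np.diag(T @ R @ T.T)               # subband signal variances
    return np.sum(s * noise_var / (s + noise_var))

_, V = np.linalg.eigh(R)                   # KLT: the PCFB of the transform coder class
klt_err = wiener_error(V.T)
rand_err = min(wiener_error(np.linalg.qr(rng.standard_normal((n, n)))[0])
               for _ in range(2000))
print(klt_err <= rand_err + 1e-12)         # True: no sampled transform beats the KLT
```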

    Comparing Experiments to the Fault-Tolerance Threshold

    Achieving error rates that meet or exceed the fault-tolerance threshold is a central goal for quantum computing experiments, and measuring these error rates using randomized benchmarking is now routine. However, direct comparison between measured error rates and thresholds is complicated by the fact that benchmarking estimates average error rates, while thresholds reflect worst-case behavior when a gate is used as part of a large computation. These two measures of error can differ by orders of magnitude in the regime of interest. Here we facilitate comparison between the experimentally accessible average error rates and the worst-case quantities that arise in current threshold theorems by deriving relations between the two for a variety of physical noise sources. Our results indicate that it is coherent errors that lead to an enormous mismatch between average and worst case, and we quantify how well these errors must be controlled to ensure fair comparison between average error probabilities and fault-tolerance thresholds.
    Comment: 5 pages, 2 figures, 13-page appendix
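
    To illustrate the size of the mismatch, the sketch below uses standard single-qubit closed forms (our choice of examples, not the paper's derivations) relating the average infidelity r measured by benchmarking to the diamond-norm distance D used in threshold theorems: D is linear in r for a depolarizing (stochastic) error, but proportional to sqrt(r) for a coherent rotation.

```python
import numpy as np

def D_depolarizing(r, d=2):
    # Depolarizing channel with parameter p: r = p*(d-1)/d and
    # D = p*(d**2-1)/d**2, hence D = r*(d+1)/d -- same order as r.
    return r * (d + 1) / d

def D_coherent(r):
    # Coherent Z-rotation by theta: r = (2/3)*sin(theta/2)**2 and
    # D = |sin(theta/2)|, hence D = sqrt(3*r/2) -- much larger for small r.
    return np.sqrt(1.5 * r)

for r in (1e-2, 1e-4, 1e-6):
    print(f"r = {r:.0e}:  stochastic D = {D_depolarizing(r):.1e},"
          f"  coherent D = {D_coherent(r):.1e}")
```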

    Inference for Generalized Linear Models via Alternating Directions and Bethe Free Energy Minimization

    Generalized Linear Models (GLMs), where a random vector x is observed through a noisy, possibly nonlinear, function of a linear transform z = Ax, arise in a range of applications in nonlinear filtering and regression. Approximate Message Passing (AMP) methods, based on loopy belief propagation, are a promising class of approaches for approximate inference in these models. AMP methods are computationally simple, general, and admit precise analyses with testable conditions for optimality for large i.i.d. transforms A. However, the algorithms can easily diverge for general A. This paper presents a convergent approach to the generalized AMP (GAMP) algorithm based on direct minimization of a large-system-limit approximation of the Bethe Free Energy (LSL-BFE). The proposed method uses a double-loop procedure, where the outer loop successively linearizes the LSL-BFE and the inner loop minimizes the linearized LSL-BFE using the Alternating Direction Method of Multipliers (ADMM). The proposed method, called ADMM-GAMP, is similar in structure to the original GAMP method, but with an additional least-squares minimization. It is shown that, for strictly convex, smooth penalties, ADMM-GAMP is guaranteed to converge to a local minimum of the LSL-BFE, thus providing a convergent alternative to GAMP that is stable under arbitrary transforms. Simulations are also presented that demonstrate the robustness of the method for non-convex penalties as well.
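
    ADMM-GAMP itself is involved; as a toy stand-in (ours, not the paper's algorithm), the classic ADMM for a sparse linear-Gaussian GLM shows the same basic machinery — alternation between a least-squares step, a simple proximal step, and a dual update:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, y, lam=0.5, rho=1.0, iters=300):
    """Classic ADMM for min_x 0.5*||y - Ax||^2 + lam*||x||_1 (split x = v).
    Only an illustration of the ADMM machinery, not ADMM-GAMP itself."""
    m, n = A.shape
    x, v, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # Factor once: the x-update is a least-squares solve,
    #   x = argmin 0.5*||y - Ax||^2 + (rho/2)*||x - v + u||^2
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    for _ in range(iters):
        rhs = A.T @ y + rho * (v - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        v = soft_threshold(x + u, lam / rho)   # prox of the l1 penalty
        u = u + x - v                          # scaled dual update
    return v

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x0 = np.zeros(100); x0[:5] = 1.0
y = A @ x0 + 0.01 * rng.standard_normal(50)
print(np.round(admm_lasso(A, y)[:8], 2))  # should roughly recover the sparse support
```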

    Statistically optimum pre- and postfiltering in quantization

    We consider the optimization of pre- and postfilters surrounding a quantization system. The goal is to optimize the filters such that the mean square error is minimized under the key constraint that the quantization noise variance is directly proportional to the variance of the quantization system input. Unlike some previous work, the postfilter is not restricted to be the inverse of the prefilter. With no order constraint on the filters, we present closed-form solutions for the optimum pre- and postfilters when the quantization system is a uniform quantizer. Using these optimum solutions, we obtain a coding gain expression for the system under study. The coding gain expression clearly indicates that, at high bit rates, there is no loss in generality in restricting the postfilter to be the inverse of the prefilter. We then repeat the same analysis with first-order pre- and postfilters of the form 1 + αz⁻¹ and 1/(1 + γz⁻¹). Specifically, we study two cases: 1) FIR prefilter, IIR postfilter and 2) IIR prefilter, FIR postfilter. For each case, we obtain a mean square error expression, optimize the coefficients α and γ, and provide some examples where we compare the coding gain performance with the case α = γ. In the last section, we assume that the quantization system is an orthonormal perfect reconstruction filter bank. To apply the optimum pre- and postfilters derived earlier, the output of the filter bank must be wide-sense stationary (WSS), which, in general, is not true. We provide two theorems, each under a different set of assumptions, that guarantee the wide-sense stationarity of the filter bank output. We then propose a suboptimum procedure to increase the coding gain of the orthonormal filter bank.
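
    A simulation sketch of our own for the FIR-prefilter/IIR-postfilter case (all constants are illustrative assumptions): the quantizer is modeled, as in the abstract, by additive noise whose variance is proportional to the variance of its input, and an independently tuned pair (α, γ) is compared against the inverse choice γ = α.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)

# Unit-variance AR(1) input with correlation coefficient 0.9
N, a = 200_000, 0.9
x = lfilter([np.sqrt(1 - a**2)], [1, -a], rng.standard_normal(N))

c = 0.05  # noise-variance-to-input-variance ratio of the quantizer model

def output_mse(alpha, gamma):
    v = lfilter([1, alpha], [1], x)                        # prefilter 1 + alpha*z^-1
    q = v + np.sqrt(c * v.var()) * rng.standard_normal(N)  # quantizer noise model
    y = lfilter([1], [1, gamma], q)                        # postfilter 1/(1 + gamma*z^-1)
    return np.mean((y - x) ** 2)

grid = np.arange(-0.95, 0.0, 0.05)
best = min((output_mse(al, g), al, g) for al in grid for g in grid)
print("inverse pair (a=g=-0.9):", output_mse(-0.9, -0.9))
print("tuned (mse, alpha, gamma):", best)
```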

    Results on principal component filter banks: colored noise suppression and existence issues

    We have made explicit the precise connection between the optimization of orthonormal filter banks (FBs) and the principal component property: the principal component filter bank (PCFB) is optimal whenever the minimization objective is a concave function of the subband variances of the FB. This explains PCFB optimality for compression, progressive transmission, and various hitherto unnoticed white-noise suppression applications such as subband Wiener filtering. The present work examines the nature of the FB optimization problems for such schemes when PCFBs do not exist. Using the geometry of the optimization search spaces, we explain exactly why these problems are usually analytically intractable. We show the relation between compaction filter design (i.e., variance maximization) and optimum FBs. A sequential maximization of subband variances produces a PCFB if one exists, but is otherwise suboptimal for several concave objectives. We then study PCFB optimality for colored noise suppression. Unlike the case when the noise is white, here the minimization objective is a function of both the signal and the noise subband variances. We show that for the transform coder class, if a common signal and noise PCFB (KLT) exists, it is optimal for a large class of concave objectives. Common PCFBs for general FB classes have a considerably more restricted optimality, as we show using the class of unconstrained orthonormal FBs. For this class, we also show how to find an optimum FB when the signal and noise spectra are both piecewise constant with all discontinuities at rational multiples of π.
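
    A numerical sketch of our own for the transform coder statement: commuting signal and noise covariances share an eigenbasis (a common KLT), and for the concave subband Wiener objective this common KLT should not be beaten by other orthonormal transforms. The covariances below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
V0 = np.linalg.qr(rng.standard_normal((n, n)))[0]  # shared eigenbasis
Rs = V0 @ np.diag([4.0, 2.0, 1.0, 0.5]) @ V0.T     # signal covariance
Rn = V0 @ np.diag([0.2, 0.4, 0.8, 1.6]) @ V0.T     # colored noise covariance
# Rs and Rn commute, so a common signal/noise KLT (namely V0) exists.

def wiener_error(T):
    """Subband zeroth-order Wiener error: depends on both the signal and
    the noise subband variances, and is concave in the signal variances."""
    s = np.diag(T @ Rs @ T.T)
    v = np.diag(T @ Rn @ T.T)
    return np.sum(s * v / (s + v))

klt_err = wiener_error(V0.T)
rand_err = min(wiener_error(np.linalg.qr(rng.standard_normal((n, n)))[0])
               for _ in range(2000))
print(klt_err, rand_err)   # the common KLT error should be the smaller one
```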