
    Filterbank optimization with convex objectives and the optimality of principal component forms

    This paper proposes a general framework for the optimization of orthonormal filterbanks (FBs) for given input statistics. This includes, as special cases, many previous results on FB optimization for compression. It also solves problems that have not been considered thus far. FB optimization for coding gain maximization (for compression applications) has been well studied before. The optimum FB has been known to satisfy the principal component property, i.e., it minimizes the mean-square error caused by reconstruction after dropping the P weakest (lowest variance) subbands, for any P. We point out a much stronger connection between this property and the optimality of the FB. The main result is that a principal component FB (PCFB) is optimum whenever the minimization objective is a concave function of the subband variances produced by the FB. This result has its grounding in majorization and convex function theory and, in particular, explains the optimality of PCFBs for compression. We use the result to show various other optimality properties of PCFBs, especially for noise-suppression applications. Suppose the FB input is a signal corrupted by additive white noise, the desired output is the pure signal, and the subbands of the FB are processed to minimize the output noise. If each subband processor is a zeroth-order Wiener filter for its input, we can show that the expected mean square value of the output noise is a concave function of the subband signal variances. Hence, a PCFB is optimum in the sense of minimizing this mean square error. The above-mentioned concavity of the error, and hence PCFB optimality, continues to hold even with certain other subband processors, such as subband hard thresholds and constant multipliers, although these are not of serious practical interest. We prove certain extensions of this PCFB optimality result to cases where the input noise is colored and where the FB optimization is over a larger class that includes biorthogonal FBs. We also show that PCFBs do not exist for the classes of DFT and cosine-modulated FBs.
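    The concavity argument lends itself to a quick numerical check. Below is a minimal sketch (ours, not from the paper): the per-subband MSE of a zeroth-order Wiener filter is s·η²/(s+η²), which is concave in the signal variance s, so a variance vector that majorizes another (as a PCFB's does) yields a smaller total error. The variance vectors and noise level are made-up illustration values.

```python
import numpy as np

def wiener_mse(signal_vars, noise_var):
    """Total output noise when each subband uses a zeroth-order Wiener
    filter: per-subband MSE is s*e/(s+e), concave in the signal variance s."""
    s = np.asarray(signal_vars, dtype=float)
    return np.sum(s * noise_var / (s + noise_var))

# Two variance vectors with equal total power; v_pcfb majorizes v_other,
# mimicking the PCFB's extremal ("most spread out") subband variances.
v_pcfb  = np.array([6.0, 1.5, 0.5])   # partial sums 6, 7.5, 8
v_other = np.array([4.0, 2.5, 1.5])   # partial sums 4, 6.5, 8

noise_var = 1.0
print(wiener_mse(v_pcfb, noise_var))   # ~1.79, smaller total MSE
print(wiener_mse(v_other, noise_var))  # ~2.11, larger total MSE
```

    Since the sum of a concave function over coordinates is Schur-concave, the majorizing vector attains the smaller value, which is the mechanism behind the paper's main result.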

    Graph isomorphism and volumes of convex bodies

    We show that the nontrivial graph isomorphism problem for two undirected graphs, and more generally the permutation similarity of two given $n\times n$ matrices, is equivalent to equalities of the volumes of the three induced convex bounded polytopes intersected with a given sequence of balls centered at the origin with radii $t_i \in (0, \sqrt{n-1})$, where $\{t_i\}$ is an increasing sequence converging to $\sqrt{n-1}$. These polytopes are characterized by $n^2$ inequalities in at most $n^2$ variables. The existence of an FPRAS for computing volumes of convex bodies gives rise to a semi-FPRAS of order at most $O^*(n^{14})$ for deciding whether two given undirected graphs are isomorphic.
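    As a hedged illustration of the objects involved: polytopes with $n^2$ variables $X_{ij}$ and roughly $n^2$ constraints of this kind arise in the doubly stochastic relaxation $\{X \ge 0 \text{ doubly stochastic} : AX = XB\}$. The sketch below (our construction, not the paper's algorithm) assembles those constraints and merely checks nonemptiness with an LP; feasibility alone certifies only fractional isomorphism and is strictly weaker than the paper's volume comparison.

```python
import numpy as np
from scipy.optimize import linprog

def similarity_polytope_nonempty(A, B):
    """Feasibility check for the polytope of doubly stochastic X with AX = XB.
    Nonemptiness only certifies *fractional* isomorphism; the paper's criterion
    instead compares volumes of such polytopes intersected with balls."""
    n = A.shape[0]
    rows, rhs = [], []
    for i in range(n):                      # row sums = 1
        r = np.zeros((n, n)); r[i, :] = 1
        rows.append(r.ravel()); rhs.append(1.0)
    for j in range(n):                      # column sums = 1
        c = np.zeros((n, n)); c[:, j] = 1
        rows.append(c.ravel()); rhs.append(1.0)
    for i in range(n):                      # (AX - XB)_{ij} = 0
        for j in range(n):
            m = np.zeros((n, n))
            m[:, j] += A[i, :]              # (AX)_{ij} = sum_k A[i,k] X[k,j]
            m[i, :] -= B[:, j]              # (XB)_{ij} = sum_k X[i,k] B[k,j]
            rows.append(m.ravel()); rhs.append(0.0)
    res = linprog(np.zeros(n * n), A_eq=np.array(rows), b_eq=np.array(rhs),
                  bounds=[(0, 1)] * (n * n), method="highs")
    return res.status == 0
```

    For adjacency matrices of isomorphic graphs this LP is always feasible; what the paper's criterion actually requires is a volume oracle (e.g., an FPRAS for convex bodies) applied to these polytopes intersected with the stated balls.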

    Majorisation with applications to the calculus of variations

    This paper explores some connections between rank one convexity, multiplicative quasiconvexity, and Schur convexity. Theorem 5.1 gives simple necessary and sufficient conditions for an isotropic objective function to be rank one convex on the set of matrices with positive determinant. Theorem 6.2 describes a class of possibly non-polyconvex but multiplicative quasiconvex isotropic functions. This class is not covered by a well-known theorem of Ball (Theorem 6.3 in this paper), which gives sufficient conditions for an isotropic and objective function to be polyconvex. We show that there is a new way to prove the quasiconvexity (in the multiplicative form) directly. The relevance of Schur convexity to the description of rank one convex hulls is also explained.
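    For reference, the standard definitions behind the title terms (our notation; not quoted from the paper):

```latex
% x is majorized by y, written x \prec y, when the decreasingly
% sorted entries x_{[1]} \ge \dots \ge x_{[n]} satisfy
\[
  \sum_{i=1}^{k} x_{[i]} \le \sum_{i=1}^{k} y_{[i]}
  \quad (1 \le k \le n-1),
  \qquad
  \sum_{i=1}^{n} x_{[i]} = \sum_{i=1}^{n} y_{[i]}.
\]
% A symmetric function \Phi is Schur convex when it is monotone
% with respect to majorization:
\[
  x \prec y \;\Longrightarrow\; \Phi(x) \le \Phi(y).
\]
```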

    Constrained Consensus

    We present distributed algorithms that can be used by multiple agents to align their estimates with a particular value over a network with time-varying connectivity. Our framework is general in that this value can represent a consensus value among multiple agents or an optimal solution of an optimization problem, where the global objective function is a combination of local agent objective functions. Our main focus is on constrained problems where the estimate of each agent is restricted to lie in a different constraint set. To highlight the effects of constraints, we first consider a constrained consensus problem and present a distributed "projected consensus algorithm" in which agents combine their local averaging operation with projection onto their individual constraint sets. This algorithm can be viewed as a version of an alternating projection method with weights that vary over time and across agents. We establish convergence and convergence rate results for the projected consensus algorithm. We next study a constrained optimization problem of minimizing the sum of the agents' local objective functions subject to the intersection of their local constraint sets. We present a distributed "projected subgradient algorithm" in which each agent performs a local averaging operation, takes a subgradient step to minimize its own objective function, and projects onto its constraint set. We show that, with an appropriately selected stepsize rule, the agent estimates generated by this algorithm converge to the same optimal solution when the weights are constant and equal, and when the weights are time-varying but all agents have the same constraint set.
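    A minimal sketch of the projected consensus step, assuming scalar estimates, interval constraint sets, and a fixed doubly stochastic weight matrix (the paper treats time-varying graphs and general closed convex sets); the function and variable names are ours:

```python
import numpy as np

def projected_consensus(x0, intervals, W, num_iters=200):
    """Sketch of a projected consensus iteration: each agent averages its
    neighbors' estimates with weights W, then projects onto its own
    constraint set (here: a simple interval per agent)."""
    x = np.array(x0, dtype=float)
    lo = np.array([a for a, b in intervals])
    hi = np.array([b for a, b in intervals])
    for _ in range(num_iters):
        x = W @ x                 # local averaging step
        x = np.clip(x, lo, hi)    # each agent projects onto its interval
    return x

# Three agents with intervals intersecting in [2, 3], and a doubly
# stochastic weight matrix for a fixed, connected graph.
intervals = [(0, 3), (2, 5), (1, 4)]
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
print(projected_consensus([0.0, 5.0, 1.0], intervals, W))
```

    Each agent's update uses only its neighbors' values through W, so the same computation runs distributedly; in this toy instance all three estimates converge to a common point in the intersection [2, 3].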