
    The minimum-error discrimination via Helstrom family of ensembles and Convex Optimization

    Using the convex optimization method and the Helstrom family of ensembles introduced in Ref. [1], we discuss optimal ambiguous discrimination in qubit systems. We analyze the problem of optimally discriminating N known quantum states and obtain the maximum success probability and the optimal measurement for N known quantum states with equiprobable prior probabilities, equidistant from the center of the Bloch ball, such that not all of them lie on one half of the Bloch ball and all of the conjugate states are pure. An exact solution is also given for arbitrary three known quantum states. Examples treated with our method include: 1. N diagonal mixed states; 2. N equiprobable states equidistant from the center of the Bloch ball whose Bloch vectors are inclined at equal angles from the z axis; 3. three mirror-symmetric states; 4. states prepared with equal prior probabilities on the vertices of a Platonic solid. Keywords: minimum-error discrimination, success probability, measurement, POVM elements, Helstrom family of ensembles, convex optimization, conjugate states. PACS Nos: 03.67.Hk, 03.65.Ta. Comment: 15 pages
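    The two-state special case of this problem has a closed-form answer, the Helstrom bound: for states ρ₁, ρ₂ with priors p₁ and p₂, the maximum success probability is ½(1 + ‖p₁ρ₁ − p₂ρ₂‖₁). A minimal numerical sketch of that bound (not the paper's N-state construction; the function name and example states are illustrative):

```python
import numpy as np

def helstrom_success(rho1, rho2, p1=0.5):
    """Helstrom bound: maximum success probability for discriminating
    two quantum states rho1, rho2 with prior probabilities p1, 1 - p1."""
    gamma = p1 * rho1 - (1.0 - p1) * rho2      # Helstrom matrix
    eigs = np.linalg.eigvalsh(gamma)           # Hermitian, so real spectrum
    trace_norm = np.sum(np.abs(eigs))          # ||.||_1 = sum of |eigenvalues|
    return 0.5 * (1.0 + trace_norm)

# Equiprobable pure qubit states |0> and |+>, overlap |<0|+>|^2 = 1/2
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2.0)
p = helstrom_success(np.outer(ket0, ket0), np.outer(ketp, ketp))
# For equiprobable pure states, p = (1 + sqrt(1 - |<a|b>|^2)) / 2, here ~0.854
```

    Orthogonal states give p = 1 (perfect discrimination), recovering the expected limiting case.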

    Quantum Error Correction via Convex Optimization

    We show that the problem of designing a quantum information error correcting procedure can be cast as a bi-convex optimization problem, iterating between encoding and recovery, each being a semidefinite program. For a given encoding operator the problem is convex in the recovery operator. For a given method of recovery, the problem is convex in the encoding scheme. This allows us to derive new codes that are locally optimal. We present examples of such codes that can handle errors which are too strong for codes derived by analogy to classical error correction techniques. Comment: 16 pages
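    The alternating structure described above -- fix the encoding and solve a convex problem for the recovery, then fix the recovery and solve a convex problem for the encoding -- is the general pattern of bi-convex optimization. A minimal sketch of that iteration pattern on a toy bi-convex problem (rank-1 matrix factorization, not the paper's semidefinite programs; all names here are illustrative):

```python
import numpy as np

def alternating_ls(M, iters=50, seed=0):
    """Alternating minimization for min_{u,v} ||M - u v^T||_F^2.
    The objective is bi-convex: fixing u gives a convex least-squares
    problem in v, and vice versa -- the same alternation pattern as
    iterating between the encoding and recovery programs."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    u = rng.standard_normal(m)
    v = rng.standard_normal(n)
    for _ in range(iters):
        v = M.T @ u / (u @ u)   # optimal v for fixed u (closed-form LS)
        u = M @ v / (v @ v)     # optimal u for fixed v (closed-form LS)
    return u, v

# A rank-1 matrix is recovered exactly by the alternation
u0 = np.array([1.0, 2.0, 3.0])
v0 = np.array([2.0, -1.0])
M = np.outer(u0, v0)
u, v = alternating_ls(M)
# np.outer(u, v) reproduces M (up to the usual scale ambiguity in u, v)
```

    As with the bi-convex code-design problem, each subproblem is solved globally, but the alternation is only guaranteed to reach a locally optimal pair.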

    Bacterial vs. zooplankton control of sinking particle flux in the ocean's twilight zone

    The downward flux of particulate organic carbon (POC) decreases significantly in the ocean's mesopelagic or 'twilight' zone due both to abiotic processes and to metabolism by resident biota. Bacteria and zooplankton solubilize and consume POC to support their metabolism, but the relative importance of bacteria vs. zooplankton in the consumption of sinking particles in the twilight zone is unknown. We compared losses of sinking POC, using differences in export flux measured by neutrally buoyant sediment traps at a range of depths, with bacterial and zooplankton metabolic requirements at the Hawaii Ocean Time-series station ALOHA in the subtropical Pacific and the Japanese time-series site K2 in the subarctic Pacific. Integrated (150-1,000 m) mesopelagic bacterial C demand exceeded that of zooplankton by up to 3-fold at ALOHA, while bacteria and zooplankton required relatively equal amounts of POC at K2. However, sinking POC flux was inadequate to meet metabolic demands at either site. Mesopelagic bacterial C demand was 3- to 4-fold (ALOHA) and 10-fold (K2) greater than the loss of sinking POC flux, while zooplankton C demand was 1- to 2-fold (ALOHA) and 3- to 9-fold (K2) greater (using our 'middle' estimate conversion factors to calculate C demand). Assuming the particle flux estimates are accurate, we posit that this additional C demand must be met by diel vertical migration of zooplankton feeding at the surface and by carnivory at depth, with both processes ultimately supplying organic C to mesopelagic bacteria. These pathways need to be incorporated into biogeochemical models that predict global C sequestration in the deep sea.

    Strong duality in conic linear programming: facial reduction and extended duals

    The facial reduction algorithm of Borwein and Wolkowicz and the extended dual of Ramana provide a strong dual for the conic linear program (P) sup { <c, x> | Ax \leq_K b } in the absence of any constraint qualification. The facial reduction algorithm solves a sequence of auxiliary optimization problems to obtain such a dual. Ramana's dual is applicable when (P) is a semidefinite program (SDP) and is an explicit SDP itself. Ramana, Tuncel, and Wolkowicz showed that these approaches are closely related; in particular, they proved the correctness of Ramana's dual using certificates from a facial reduction algorithm. Here we give a clear and self-contained exposition of facial reduction and of extended duals, and generalize Ramana's dual: -- we state a simple facial reduction algorithm and prove its correctness; and -- building on this algorithm we construct a family of extended duals when K is a "nice" cone. This class of cones includes the semidefinite cone and other important cones. Comment: A previous version of this paper appeared as "A simple derivation of a facial reduction algorithm and extended dual systems", technical report, Columbia University, 2000, available from http://www.unc.edu/~pataki/papers/fr.pdf Jonfest, a conference in honor of Jonathan Borwein's 60th birthday, 201

    The quantum dynamic capacity formula of a quantum channel

    The dynamic capacity theorem characterizes the reliable communication rates of a quantum channel when combined with the noiseless resources of classical communication, quantum communication, and entanglement. In prior work, we proved the converse part of this theorem by making contact with many previous results in the quantum Shannon theory literature. In this work, we prove the theorem with an "ab initio" approach, using only the most basic tools in the quantum information theorist's toolkit: the Alicki-Fannes inequality, the chain rule for quantum mutual information, elementary properties of quantum entropy, and the quantum data processing inequality. The result is a simplified proof of the theorem that should be more accessible to those unfamiliar with the quantum Shannon theory literature. We also demonstrate that the "quantum dynamic capacity formula" characterizes the Pareto optimal trade-off surface for the full dynamic capacity region. Additivity of this formula simplifies the computation of the trade-off surface, and we prove that its additivity holds for the quantum Hadamard channels and the quantum erasure channel. We then determine exact expressions for and plot the dynamic capacity region of the quantum dephasing channel, an example from the Hadamard class, and the quantum erasure channel. Comment: 24 pages, 3 figures; v2 has improved structure and minor corrections; v3 has correction regarding the optimization

    Color confinement and dual superconductivity of the vacuum. III

    It is demonstrated that monopole condensation in the confined phase of SU(2) and SU(3) gauge theories is independent of the specific Abelian projection used to define the monopoles. Hence the dual excitations which condense in the vacuum to produce confinement must have magnetic U(1) charge in all the Abelian projections. Some physical implications of this result are discussed. Comment: 6 pages, 5 postscript figures

    Implementation of an Optimal First-Order Method for Strongly Convex Total Variation Regularization

    We present a practical implementation of an optimal first-order method, due to Nesterov, for large-scale total variation regularization in tomographic reconstruction, image deblurring, etc. The algorithm applies to μ-strongly convex objective functions with L-Lipschitz continuous gradient. In the framework of Nesterov both μ and L are assumed known -- an assumption that is seldom satisfied in practice. We propose to incorporate mechanisms that estimate locally sufficient μ and L during the iterations. The mechanisms also allow for application to non-strongly convex functions. We discuss the iteration complexity of several first-order methods, including the proposed algorithm, and we use a 3D tomography problem to compare the performance of these methods. The results show that for ill-conditioned problems solved to high accuracy, the proposed method significantly outperforms state-of-the-art first-order methods, as also suggested by theoretical results. Comment: 23 pages, 4 figures
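    For reference, a minimal sketch of Nesterov's constant-momentum method for the μ-strongly convex, L-smooth case, assuming μ and L are known exactly (the estimation mechanisms proposed in the paper are not reproduced here; function and variable names are illustrative):

```python
import numpy as np

def nesterov_strongly_convex(grad, x0, mu, L, iters=500):
    """Nesterov's optimal first-order method for a mu-strongly convex
    objective with L-Lipschitz gradient, with mu and L known a priori."""
    q = np.sqrt(mu / L)
    beta = (1 - q) / (1 + q)            # constant momentum coefficient
    x = x0.copy()
    y = x0.copy()
    for _ in range(iters):
        x_new = y - grad(y) / L         # gradient step from extrapolated point
        y = x_new + beta * (x_new - x)  # momentum extrapolation
        x = x_new
    return x

# Ill-conditioned quadratic test problem: f(x) = 0.5 x^T A x - b^T x
A = np.diag([1.0, 100.0])               # mu = 1, L = 100 (condition number 100)
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b
x_star = np.linalg.solve(A, b)          # exact minimizer for comparison
x = nesterov_strongly_convex(grad, np.zeros(2), mu=1.0, L=100.0)
# x agrees with x_star to high accuracy after 500 iterations
```

    The linear convergence rate (1 - sqrt(μ/L)) per iteration is what makes accurate knowledge of μ and L matter: a pessimistic μ slows the method, which is the gap the paper's local estimation addresses.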

    Horseplay, care and hands on hard work: gendered strategies of a project manager on a construction site

    The discourse of managerial expertise favours rational analysis and masculine ideals, but contemporary management literature also recognises the value of well-being and employee voice in the workplace. Drawing upon narrative analysis of interview data, we share unique insights into the lived experiences of Laura, a female project manager who recently managed a construction site in the Midlands in the UK. In contrast to previous research, which indicates that female managers tend to conform to quite a traditional set of gender behaviours, Laura embraces a range of workplace-appropriate gendered strategies, such as hard work and horseplay, together with sensitivity and caring. She draws from this mix of gendered strategies in negotiating between two different discourses of construction: one professional, the other tough and practical. Her behaviour both reproduces the masculine ideals (through horseplay and heroic management) and opens up possibilities for modernising construction management (by caring). It is this combination of strategies that is at the heart of tacit expertise for Laura. Theoretically, the discussion adds to the development of a more nuanced understanding of management expertise as situated and person-specific knowledge that draws on both the explicit and the tacit. Specifically, the centrality of gendered strategies beyond the masculine ideals to success on site is highlighted.

    Symmetries of a class of nonlinear fourth order partial differential equations

    In this paper we study symmetry reductions of a class of nonlinear fourth order partial differential equations $$u_{tt} = \left(\kappa u + \gamma u^2\right)_{xx} + u u_{xxxx} + \mu u_{xxtt} + \alpha u_x u_{xxx} + \beta u_{xx}^2, \qquad (1)$$ where α, β, γ, κ and μ are constants. This equation may be thought of as a fourth order analogue of a generalization of the Camassa-Holm equation, about which there has been considerable recent interest. Further, equation (1) is a "Boussinesq-type" equation which arises as a model of vibrations of an anharmonic mass-spring chain and admits both "compacton" and conventional solitons. A catalogue of symmetry reductions for equation (1) is obtained using the classical Lie method and the nonclassical method due to Bluman and Cole. In particular we obtain several reductions using the nonclassical method which are not obtainable through the classical method.

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low-complexity. These priors encompass as popular examples sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low-rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward the understanding of the theoretical properties of the so-regularized solutions. It covers a large spectrum including: (i) recovery guarantees and stability to noise, both in terms of ℓ^2-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solve the corresponding large-scale regularized optimization problem.