    Interior point methods and simulated annealing for nonsymmetric conic optimization

    This thesis explores four methods for convex optimization. The first two are an interior point method and a simulated annealing algorithm that share a theoretical foundation. This connection is due to the interior point method's use of the so-called entropic barrier, whose derivatives can be approximated through sampling. Here, the sampling is carried out with a technique known as hit-and-run. By carefully analyzing the properties of hit-and-run sampling, it is shown that both the interior point method and the simulated annealing algorithm can solve a convex optimization problem in the membership oracle setting. The number of oracle calls made by these methods is bounded by a polynomial in the input size. The third method is an analytic center cutting plane method that shows promising performance for copositive optimization. It outperforms the first two methods by a significant margin on the problem of separating a matrix from the completely positive cone. The final method is based on Mosek's algorithm for nonsymmetric conic optimization. Using the same scaling matrix, search direction, and neighborhood, we define a method that converges to a near-optimal solution in polynomial time.
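
    As an illustration of the sampling primitive the first two methods rely on, the following is a minimal hit-and-run sketch for a convex body accessed only through a membership oracle. The oracle name, the starting point, and the bisection tolerance are assumptions made for the example; they are not details taken from the thesis.

        import numpy as np

        def hit_and_run(in_body, x, n_steps=1000, tol=1e-8, rng=None):
            """Hit-and-run sampling from a bounded convex body given a membership oracle.

            in_body : callable returning True iff a point lies in the body
            x       : feasible starting point (numpy array)
            """
            rng = np.random.default_rng() if rng is None else rng
            x = np.asarray(x, dtype=float)
            for _ in range(n_steps):
                d = rng.normal(size=x.shape)
                d /= np.linalg.norm(d)                # uniform random direction
                lo, hi = -1.0, 1.0
                while in_body(x + hi * d):            # grow step until we exit the body
                    hi *= 2.0
                while in_body(x + lo * d):
                    lo *= 2.0
                # bisect both chord endpoints down to the tolerance
                hi_in, hi_out = 0.0, hi
                while hi_out - hi_in > tol:
                    mid = 0.5 * (hi_in + hi_out)
                    hi_in, hi_out = (mid, hi_out) if in_body(x + mid * d) else (hi_in, mid)
                lo_in, lo_out = 0.0, lo
                while lo_in - lo_out > tol:
                    mid = 0.5 * (lo_in + lo_out)
                    lo_in, lo_out = (mid, lo_out) if in_body(x + mid * d) else (lo_in, mid)
                # move to a uniformly chosen point on the chord through x along d
                x = x + rng.uniform(lo_in, hi_in) * d
            return x

        # Example: sample from the unit ball using only membership queries.
        sample = hit_and_run(lambda p: np.linalg.norm(p) <= 1.0, np.zeros(3))

    Replacing the uniform draw on the chord by a draw from a restricted Boltzmann density along the chord turns the same primitive into the simulated annealing step.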

    Quantum dichotomies and coherent thermodynamics beyond first-order asymptotics

    We address the problem of exact and approximate transformation of quantum dichotomies in the asymptotic regime, i.e., the existence of a quantum channel $\mathcal{E}$ mapping $\rho_1^{\otimes n}$ into $\rho_2^{\otimes R_n n}$ with an error $\epsilon_n$ (measured by trace distance) and $\sigma_1^{\otimes n}$ into $\sigma_2^{\otimes R_n n}$ exactly, for a large number $n$. We derive second-order asymptotic expressions for the optimal transformation rate $R_n$ in the small, moderate, and large deviation error regimes, as well as the zero-error regime, for an arbitrary pair $(\rho_1,\sigma_1)$ of initial states and a commuting pair $(\rho_2,\sigma_2)$ of final states. We also prove that for $\sigma_1$ and $\sigma_2$ given by thermal Gibbs states, the derived optimal transformation rates in the first three regimes can be attained by thermal operations. This allows us, for the first time, to study the second-order asymptotics of thermodynamic state interconversion with fully general initial states that may have coherence between different energy eigenspaces. Thus, we discuss the optimal performance of thermodynamic protocols with coherent inputs and describe three novel resonance phenomena allowing one to significantly reduce transformation errors induced by finite-size effects. What is more, our result on quantum dichotomies can also be used to obtain, up to second-order asymptotic terms, optimal conversion rates between pure bipartite entangled states under local operations and classical communication. Comment: 51 pages, 6 figures, comments welcome.
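
    For readers less familiar with the terminology, a second-order asymptotic expansion of a transformation rate is, schematically, of the form below. The symbols are generic (a ratio of relative entropies as the first-order term and a $1/\sqrt{n}$ correction whose coefficient depends on the allowed error); they only illustrate what "second-order" refers to and do not reproduce the paper's actual expressions.

        \[
          R_n \;=\; \frac{D(\rho_1\|\sigma_1)}{D(\rho_2\|\sigma_2)} \;+\; \frac{a(\epsilon)}{\sqrt{n}} \;+\; o\!\left(\frac{1}{\sqrt{n}}\right)
        \]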

    Applications of Lattice Codes in Communication Systems

    In the last decade, there has been explosive growth in applications of wireless technology, driven by users' increasing expectations for multimedia services. With the current trend, present systems will not be able to handle the required data traffic. Lattice codes have attracted considerable attention in recent years because they provide high-data-rate constellations. In this thesis, the applications of lattice codes in different communication systems are investigated. The thesis is divided into two major parts: the first part focuses on constellation shaping and the problem of lattice labeling, and the second part is devoted to the lattice decoding problem.

    In the constellation shaping technique, conventional constellations are replaced by lattice codes that satisfy certain geometrical properties. However, a simple algorithm, called lattice labeling, is required to map the input data to the lattice code points. The first part of this thesis considers the application of lattice codes for constellation shaping in Orthogonal Frequency Division Multiplexing (OFDM) and Multi-Input Multi-Output (MIMO) broadcast systems. In an OFDM system, a lattice code with low Peak-to-Average Power Ratio (PAPR) is desired. Here, a new lattice code with considerable PAPR reduction for OFDM systems is proposed. Due to the recursive structure of this lattice code, a simple lattice labeling method based on the Smith normal form decomposition of an integer matrix is obtained. A selective mapping method in conjunction with the proposed lattice code is also presented to further reduce the PAPR. MIMO broadcast systems are also considered: in a multiple-antenna broadcast system, the lattice labeling algorithm should allow different users to decode their data independently, and the implemented lattice code should result in a low average transmit energy. Here, a selective mapping technique provides such a lattice code.

    Lattice decoding, the focus of the second part, is the operation of finding the closest point of the lattice code to an arbitrary point in N-dimensional real space. In digital communication applications, this is known as the integer least-squares problem, which arises in many areas, e.g. the detection of symbols transmitted over a multiple-antenna wireless channel, multiuser detection in Code Division Multiple Access (CDMA) systems, and the simultaneous detection of multiple users in a Digital Subscriber Line (DSL) system affected by crosstalk. Here, an efficient lattice decoding algorithm based on Semi-Definite Programming (SDP) is introduced. The proposed algorithm is capable of handling any form of lattice constellation for an arbitrary labeling of points. In the proposed methods, the distance minimization problem is expressed as a binary quadratic minimization problem, which is solved by introducing several matrix- and vector-lifting SDP relaxation models. The new SDP models provide a range of trade-offs between the complexity and the performance of the decoding problem.
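
    To make the last step concrete, below is a minimal sketch of the standard semidefinite relaxation of a binary quadratic minimization problem of the kind the decoding is reduced to. The toy problem data (Q, b), the use of cvxpy, and the sign-based rounding are assumptions for illustration; they are not the thesis's specific lifting models.

        import cvxpy as cp
        import numpy as np

        # Binary quadratic program: minimize x^T Q x + 2 b^T x over x in {-1, +1}^n.
        n = 4
        rng = np.random.default_rng(0)
        A = rng.normal(size=(n, n))
        Q = A @ A.T                      # toy positive semidefinite cost matrix
        b = rng.normal(size=n)

        # Homogenize: z = [x; 1], cost = z^T L z with L built from Q and b.
        L = np.block([[Q, b[:, None]], [b[None, :], np.zeros((1, 1))]])

        # Lift: Z ~ z z^T, drop the rank-1 constraint, keep diag(Z) = 1 and Z PSD.
        Z = cp.Variable((n + 1, n + 1), symmetric=True)
        prob = cp.Problem(cp.Minimize(cp.trace(L @ Z)),
                          [Z >> 0, cp.diag(Z) == 1])
        prob.solve()

        # Simple rounding: read signs from the column of Z associated with the '1' entry.
        x_hat = np.sign(Z.value[:n, n])
        print("relaxation value:", prob.value, "rounded solution:", x_hat)

    The relaxation value lower-bounds the true decoding cost, and richer liftings tighten the bound at the price of larger SDPs, which is the complexity/performance trade-off mentioned above.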

    Quantum Fisher Information and its dynamical nature

    The importance of the quantum Fisher information metric is attested by the number of applications it has in very different fields, ranging from hypothesis testing to metrology, passing through thermodynamics. Still, of the rich family of possible quantum Fisher information metrics, only a handful are typically used and studied. This review collects a number of results scattered in the literature, both for readers who are beginning to study Fisher information and for those already working on it who want a more organic understanding of the topic. Moreover, we complement the review with new results about the relation between Fisher information and physical evolutions. Extending the study done in [1], we prove that all physically realisable dynamics can be defined solely in terms of their relation to the Fisher information metric. Moreover, other properties such as Markovianity, retrodiction or detailed balance can be expressed in the same formalism. These results highlight a fact that was partially overlooked in the literature, namely the inherently dynamical nature of Fisher information. Comment: 36 pages of main text, 15 of additional information, 12 of appendix, and one of index.
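
    As one concrete member of the family discussed above (a standard definition quoted here only as an illustration, not taken from the review itself), the quantum Fisher information based on the symmetric logarithmic derivative of a parametrized state $\rho_\theta$ is

        \[
          F_Q(\rho_\theta) \;=\; \operatorname{Tr}\!\left[\rho_\theta L_\theta^2\right],
          \qquad
          \partial_\theta \rho_\theta \;=\; \tfrac{1}{2}\!\left(L_\theta \rho_\theta + \rho_\theta L_\theta\right),
        \]

    where $L_\theta$ is the symmetric logarithmic derivative; other monotone metrics in the family arise from different operator orderings in the defining equation.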

    Game Theoretic Approaches to Communication over MIMO Interference Channels in the Presence of a Malicious Jammer

    Ph.D. thesis, University of Hawaiʻi at Mānoa, 2018.

    De l'apprentissage faiblement supervisé au catalogage en ligne

    Applied mathematics and machine computation have raised a lot of hope since the recent successes of supervised learning. Many practitioners in industry have been trying to switch from their old paradigms to machine learning. Interestingly, those data scientists spend more time scraping, annotating and cleaning data than fine-tuning models. This thesis is motivated by the following question: can we derive a framework more generic than supervised learning in order to learn from cluttered, heterogeneous data? This question is approached through the lens of weakly supervised learning, assuming that the bottleneck of data collection lies in annotation. We model weak supervision as giving, rather than a unique target, a set of target candidates. We argue that one should look for an "optimistic" function that matches most of the observations. This allows us to derive a principle to disambiguate partial labels. We also discuss the advantage of incorporating unsupervised learning techniques into our framework, in particular manifold regularization approached through diffusion techniques, for which we derive a new algorithm that scales better with the input dimension than the baseline method. Finally, we switch from passive to active weakly supervised learning, introducing the "active labeling" framework, in which a practitioner can query weak information about chosen data under a budget constraint. Among other results, we leverage the fact that one does not need full information to access stochastic gradients and perform stochastic gradient descent.
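
    A minimal sketch of the "optimistic" disambiguation principle described above, assuming a multiclass linear scorer, a squared loss against one-hot targets, and toy candidate sets; these modelling choices are illustrative and not the thesis's exact estimator.

        import numpy as np

        def optimistic_loss_grad(W, x, candidates, n_classes):
            """Partial-label ("optimistic") loss: score each candidate label, keep the best.

            W          : (n_classes, d) weights of a linear scorer
            x          : (d,) feature vector
            candidates : admissible class indices for this sample
            Returns the loss and its gradient with respect to W.
            """
            scores = W @ x
            best_loss, best_grad = np.inf, None
            for y in candidates:
                target = np.zeros(n_classes)
                target[y] = 1.0
                residual = scores - target              # squared loss against one-hot target
                loss = 0.5 * residual @ residual
                if loss < best_loss:                    # optimistic: keep the easiest candidate
                    best_loss = loss
                    best_grad = np.outer(residual, x)
            return best_loss, best_grad

        # Toy usage: stochastic gradient descent driven only by weak (set-valued) labels.
        rng = np.random.default_rng(0)
        d, n_classes = 5, 3
        W = np.zeros((n_classes, d))
        data = [(rng.normal(size=d), {0, 1}), (rng.normal(size=d), {2})]  # (features, candidates)
        for _ in range(100):
            x, cand = data[rng.integers(len(data))]
            loss, grad = optimistic_loss_grad(W, x, cand, n_classes)
            W -= 0.1 * grad                             # SGD step using only weak information

    The last loop also illustrates the closing remark of the abstract: a stochastic gradient can be formed from the weak information alone, without ever observing the true label.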

    A Gaussian Source Coding Perspective on Caching and Total Correlation

    Communication technology has advanced to the point where children are growing unfamiliar with the most iconic symbol in IT: the loading icon. We no longer wait for something to come on TV, nor for a download to complete. All the content we desire is available in instantaneous and personalized streams. Whereas users benefit tremendously from the increased freedom, the network suffers. Not only do personalized data streams increase the overall load, but the instantaneous aspect also concentrates traffic around peak hours: the heaviest (mostly video) applications are used predominantly during the evening. Caching is a tool to balance traffic without compromising the 'on-demand' aspect of content delivery; by sending data in advance, a server can avoid peak traffic. The challenge is, of course, that in advance the server has no clue what data the user might be interested in.

    We study this problem in a lossy source coding setting with Gaussian sources specifically, using a model based on the Gray–Wyner network. Ultimately, caching is a trade-off between anticipating the precise demand through user habits and getting 'more bang for the buck' by exploiting correlation among the files in the database. For two Gaussian sources and using Gaussian codebooks, we derive this trade-off completely. Particularly interesting is the case when the user has no a priori preference for some content; caching then becomes an application of the concepts of Wyner's common information and Watanabe's total correlation. We study these concepts in databases of more than two sources, where we show that caching all of the information shared by multiple Gaussians is easy, whereas caching only some of it is hard. We characterize the former, provide an inner bound for the latter, and conjecture for which class of Gaussians it is tight. We also study how to most efficiently capture the total correlation that exists between two sets of Gaussians. In a final chapter, we study the practical applicability of caching for discrete information sources by actually building such algorithms, using convolutional codes to 'cache and compress'. We provide a proof of concept of the practicality for doubly symmetric and circularly symmetric binary sources, and conclude with a discussion of the challenges to be overcome in generalizing such algorithms.
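
    For orientation, Wyner's common information mentioned above has a simple closed form for a pair of jointly Gaussian sources with correlation coefficient $\rho$ (a standard result quoted here as background, not a contribution of the thesis):

        \[
          C(X_1;X_2) \;=\; \frac{1}{2}\,\log\frac{1+\rho}{1-\rho},
        \]

    which vanishes for independent sources ($\rho = 0$) and grows without bound as the sources become fully correlated ($\rho \to 1$).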

    Linear Estimation in Interconnected Sensor Systems with Information Constraints

    A ubiquitous challenge in many technical applications is to estimate an unknown state by means of data stemming from several, often heterogeneous, sensor sources. In this book, information is interpreted stochastically, and techniques for the distributed processing of data are derived that minimize the error of the estimates of the unknown state. Methods for the reconstruction of dependencies are proposed and novel approaches for the distributed processing of noisy data are developed.
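
    As a small illustration of fusing estimates from interconnected sensors when their cross-dependencies are unknown, here is a covariance intersection sketch. Covariance intersection is a standard fusion rule used as a representative example, not necessarily the specific method developed in the book, and the two input estimates are made up for the demo.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def covariance_intersection(x1, P1, x2, P2):
            """Fuse two estimates with unknown cross-correlation via covariance intersection."""
            def fused(w):
                # Convex combination of the information matrices, weight w in [0, 1].
                P_inv = w * np.linalg.inv(P1) + (1.0 - w) * np.linalg.inv(P2)
                P = np.linalg.inv(P_inv)
                x = P @ (w * np.linalg.inv(P1) @ x1 + (1.0 - w) * np.linalg.inv(P2) @ x2)
                return x, P
            # Choose the weight that minimizes the trace of the fused covariance.
            res = minimize_scalar(lambda w: np.trace(fused(w)[1]),
                                  bounds=(0.0, 1.0), method="bounded")
            return fused(res.x)

        # Demo: two noisy estimates of the same 2-D state from different sensors.
        x1, P1 = np.array([1.0, 2.0]), np.diag([0.5, 2.0])
        x2, P2 = np.array([1.2, 1.8]), np.diag([2.0, 0.4])
        x_fused, P_fused = covariance_intersection(x1, P1, x2, P2)
        print(x_fused, np.trace(P_fused))

    The appeal of this rule in interconnected sensor systems is that it remains consistent without knowledge of the cross-covariance between the two sources, which is exactly the kind of dependency information that is hard to track in distributed processing.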
