
    Asymptotic Glosten-Milgrom equilibrium

    This paper studies the Glosten-Milgrom model whose risky asset value admits an arbitrary discrete distribution. In contrast to existing results on insider models, the insider's optimal strategy in this model, if it exists, is not of feedback type. Therefore, a weak formulation of equilibrium is proposed. In this weak formulation, the inconspicuous trade theorem still holds, but optimality of the insider's strategy is not enforced. However, the insider can employ a feedback strategy whose associated expected profit is close to the optimal value when the order size is small, and this discrepancy converges to zero as the order size diminishes. The existence of such a weak equilibrium is established, in which the insider's strategy converges to the Kyle optimal strategy as the order size goes to zero.
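
    The mechanics are easiest to see in the classical binary-value case of the model. The following is a minimal Python sketch of the market maker's Bayesian quote updating, assuming two asset values and a fixed fraction of informed traders; both are simplifications relative to the paper's arbitrary discrete distribution, and all parameter names and values are illustrative.

```python
# Minimal sketch of zero-profit quoting and Bayesian belief updating in a
# binary-value Glosten-Milgrom market. Illustrative only: the paper treats
# an arbitrary discrete value distribution and a weak equilibrium notion.

V_LOW, V_HIGH = 0.0, 1.0   # two possible asset values (assumed)
MU = 0.3                   # assumed fraction of informed traders

def order_likelihoods(mu):
    """P(buy | V): informed traders buy iff V is high; noise traders flip a coin."""
    p_buy_high = mu + (1.0 - mu) / 2.0
    p_buy_low = (1.0 - mu) / 2.0
    return p_buy_high, p_buy_low

def quotes(p, mu=MU):
    """Zero-expected-profit quotes: bid = E[V | sell], ask = E[V | buy]."""
    p_buy_high, p_buy_low = order_likelihoods(mu)
    p_buy = p * p_buy_high + (1.0 - p) * p_buy_low
    ask = (p * p_buy_high * V_HIGH + (1.0 - p) * p_buy_low * V_LOW) / p_buy
    bid = (p * (1.0 - p_buy_high) * V_HIGH
           + (1.0 - p) * (1.0 - p_buy_low) * V_LOW) / (1.0 - p_buy)
    return bid, ask

def update_belief(p, side, mu=MU):
    """Bayes update of P(V = V_HIGH) after observing one market order."""
    p_buy_high, p_buy_low = order_likelihoods(mu)
    if side == "buy":
        num = p * p_buy_high
        den = num + (1.0 - p) * p_buy_low
    else:
        num = p * (1.0 - p_buy_high)
        den = num + (1.0 - p) * (1.0 - p_buy_low)
    return num / den

p = 0.5  # prior belief that the value is high
for side in ["buy", "buy", "sell", "buy"]:
    bid, ask = quotes(p)
    print(f"belief={p:.3f}  bid={bid:.3f}  ask={ask:.3f}  next order: {side}")
    p = update_belief(p, side)
```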

    Properties of Noncommutative Rényi and Augustin Information

    The scaled Rényi information plays a significant role in evaluating the performance of information-processing tasks by virtue of its connection to error exponent analysis. In quantum information theory, there are three generalizations of the classical Rényi divergence (the Petz, sandwiched, and log-Euclidean versions) that possess meaningful operational interpretations. However, these scaled noncommutative Rényi informations are much less explored than their classical counterpart, and the lack of crucial properties hinders applications of these quantities to refined performance analysis. The goal of this paper is thus to analyze fundamental properties of the scaled Rényi information from a noncommutative measure-theoretic perspective. First, we prove uniform equicontinuity for all three quantum versions of the Rényi information, which yields joint continuity of these quantities in the orders and priors. Second, we establish concavity in the region $s \in (-1, 0)$ for both the Petz and sandwiched versions. This settles open questions raised by Holevo [\href{https://ieeexplore.ieee.org/document/868501/}{\textit{IEEE Trans.~Inf.~Theory}, \textbf{46}(6):2256--2261, 2000}] and by Mosonyi and Ogawa [\href{https://doi.org/10.1007/s00220-017-2928-4/}{\textit{Commun.~Math.~Phys.}, \textbf{355}(1):373--426, 2017}]. As applications, we show that the strong converse exponent in classical-quantum channel coding satisfies a minimax identity. The established concavity is further employed to prove an entropic duality between classical data compression with quantum side information and classical-quantum channel coding, and a Fenchel duality in joint source-channel coding with quantum side information in forthcoming papers.
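
    For orientation, the three noncommutative Rényi divergences mentioned above are commonly defined as follows; this is one standard normalization, and the paper's exact conventions may differ.

```latex
% Three quantum generalizations of the Rényi divergence for density
% operators \rho, \sigma and order \alpha. One common normalization;
% conventions (e.g. logarithm base) vary across the literature.
\begin{align*}
  \bar{D}_{\alpha}(\rho\|\sigma)
    &= \frac{1}{\alpha-1}\log \operatorname{Tr}\left[\rho^{\alpha}\sigma^{1-\alpha}\right]
    && \text{(Petz)} \\
  \tilde{D}_{\alpha}(\rho\|\sigma)
    &= \frac{1}{\alpha-1}\log \operatorname{Tr}\left[\left(
        \sigma^{\frac{1-\alpha}{2\alpha}}\,\rho\,
        \sigma^{\frac{1-\alpha}{2\alpha}}\right)^{\alpha}\right]
    && \text{(sandwiched)} \\
  D^{\flat}_{\alpha}(\rho\|\sigma)
    &= \frac{1}{\alpha-1}\log \operatorname{Tr}\left[
        \exp\left(\alpha\log\rho + (1-\alpha)\log\sigma\right)\right]
    && \text{(log-Euclidean)}
\end{align*}
```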

    Criticality in Translation-Invariant Parafermion Chains

    In this work we numerically study critical phases in translation-invariant $\mathbb{Z}_N$ parafermion chains with both nearest- and next-nearest-neighbor hopping terms. The model can be mapped to a $\mathbb{Z}_N$ spin model with nearest-neighbor couplings via a generalized Jordan-Wigner transformation, and translation invariance ensures that the spin model is always self-dual. We first study the low-energy spectrum of chains with only nearest-neighbor coupling, which are mapped onto standard self-dual $\mathbb{Z}_N$ clock models. For $3 \leq N \leq 6$ we match the numerical results to the known conformal field theory (CFT) identifications. We then analyze in detail the phase diagram of an $N = 3$ chain with both nearest- and next-nearest-neighbor hopping, and we find six critical phases with central charges $4/5$, $1$, or $2$. We find continuous phase transitions between the $c = 1$ and $c = 2$ phases, while the phase transition between the $c = 4/5$ and $c = 1$ phases is conjectured to be of Kosterlitz-Thouless type. (Comment: published version.)
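
    For context, the standard self-dual $\mathbb{Z}_3$ clock model that the nearest-neighbor chain maps onto can be exactly diagonalized at small sizes in a few lines. The following is a minimal Python sketch under assumed conventions; the chain length, coupling normalization, and boundary conditions are illustrative, and the paper's next-nearest-neighbor terms are omitted.

```python
# Dense exact diagonalization of the Z_3 clock chain
#   H = -sum_j (sigma_j^dag sigma_{j+1} + h.c.) - f sum_j (tau_j + tau_j^dag),
# self-dual at f = 1 in this (assumed) normalization. Illustrative sketch only.
import numpy as np

N = 3
omega = np.exp(2j * np.pi / N)
sigma = np.diag([omega**k for k in range(N)])  # clock operator
tau = np.roll(np.eye(N), 1, axis=0)            # shift operator: tau|k> = |k+1>

def site_op(op, j, L):
    """Embed a single-site operator at site j of an L-site chain."""
    out = np.array([[1.0 + 0j]])
    for site in range(L):
        out = np.kron(out, op if site == j else np.eye(N))
    return out

def clock_hamiltonian(L, f=1.0):
    """Periodic self-dual Z_3 clock Hamiltonian (dense, for small L)."""
    H = np.zeros((N**L, N**L), dtype=complex)
    for j in range(L):
        s1, s2 = site_op(sigma, j, L), site_op(sigma, (j + 1) % L, L)
        H -= s1.conj().T @ s2 + s2.conj().T @ s1   # nearest-neighbor coupling
        t = site_op(tau, j, L)
        H -= f * (t + t.conj().T)                  # transverse (dual) term
    return H

H = clock_hamiltonian(L=6, f=1.0)   # 3^6 = 729 states: fine for dense ED
energies = np.linalg.eigvalsh(H)
print("lowest gaps:", np.round(energies[:5] - energies[0], 4))
```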

    Language Models for Image Captioning: The Quirks and What Works

    Two recent approaches have achieved state-of-the-art results in image captioning. The first uses a pipelined process in which a set of candidate words is generated by a convolutional neural network (CNN) trained on images, and a maximum entropy (ME) language model then arranges these words into a coherent sentence. The second uses the penultimate activation layer of the CNN as input to a recurrent neural network (RNN) that generates the caption sequence. In this paper, we compare the merits of these different language modeling approaches for the first time by using the same state-of-the-art CNN as input. We examine issues in the different approaches, including linguistic irregularities, caption repetition, and data set overlap. By combining key aspects of the ME and RNN methods, we achieve a new record performance over previously published results on the benchmark COCO dataset. However, the gains we see in BLEU do not translate to human judgments. (Comment: See http://research.microsoft.com/en-us/projects/image_captioning for project information.)
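
    To make the second (CNN-to-RNN) pipeline concrete, here is a toy Python sketch of greedy caption decoding. Everything in it is illustrative: random weights stand in for a trained CNN and RNN, and the vocabulary is a stub, so it mirrors only the architectural shape described above, not the paper's actual models.

```python
# Toy greedy caption decoder: a CNN feature vector conditions an RNN that
# emits tokens one at a time. Random weights throughout (illustration only).
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["<s>", "</s>", "a", "dog", "on", "grass"]  # stub vocabulary
V, D, H = len(VOCAB), 8, 16            # vocab, embedding, hidden sizes

cnn_feat = rng.normal(size=D)          # stand-in for the penultimate CNN layer
W_emb = rng.normal(size=(V, D))        # token embeddings
W_xh = 0.1 * rng.normal(size=(D, H))
W_hh = 0.1 * rng.normal(size=(H, H))
W_out = 0.1 * rng.normal(size=(H, V))

def step(x, h):
    """One vanilla-RNN step; real captioning systems typically use an LSTM."""
    h = np.tanh(x @ W_xh + h @ W_hh)
    return h, h @ W_out                # new hidden state, token logits

def greedy_caption(max_len=10):
    h, _ = step(cnn_feat, np.zeros(H)) # condition the RNN on the image
    tok, words = VOCAB.index("<s>"), []
    for _ in range(max_len):
        h, logits = step(W_emb[tok], h)
        tok = int(np.argmax(logits))   # greedy choice; beam search is common
        if VOCAB[tok] == "</s>":
            break
        words.append(VOCAB[tok])
    return " ".join(words)

print(greedy_caption())                # nonsense output: weights are untrained
```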