
    Entangled coherent states: teleportation and decoherence

    When a superposition $(|\alpha\rangle - |{-\alpha}\rangle)$ of two coherent states with opposite phase falls upon a 50-50 beamsplitter, the resulting state is entangled. Remarkably, the amount of entanglement is exactly 1 ebit, irrespective of $\alpha$, as was recently discovered by O. Hirota and M. Sasaki. Here we discuss decoherence properties of such states and give a simple protocol that teleports one qubit encoded in Schrödinger cat states.
    Comment: 11 pages LaTeX, 3 eps figures. Submitted to Phys. Rev.
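
    A short worked calculation makes the $\alpha$-independence plausible; it assumes one common 50-50 beamsplitter phase convention and is a sketch, not the paper's own derivation. The beamsplitter acts on the odd cat state and a vacuum ancilla as
    \[
    (|\alpha\rangle - |{-\alpha}\rangle)\,|0\rangle \;\longrightarrow\; |\beta\rangle|\beta\rangle - |{-\beta}\rangle|{-\beta}\rangle, \qquad \beta = \alpha/\sqrt{2}.
    \]
    Expanding each output mode in the orthonormal even/odd cat basis $|\pm\rangle \propto |\beta\rangle \pm |{-\beta}\rangle$, so that $|{\pm\beta}\rangle = c_+|+\rangle \pm c_-|-\rangle$ with $c_\pm = \sqrt{(1 \pm e^{-2|\beta|^2})/2}$, gives
    \[
    |\beta\rangle|\beta\rangle - |{-\beta}\rangle|{-\beta}\rangle = 2\,c_+ c_-\,\big(|+\rangle|-\rangle + |-\rangle|+\rangle\big),
    \]
    which after normalization is a maximally entangled state of two effective qubits, hence exactly 1 ebit for any $\alpha \neq 0$.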

    XXZ scalar products and KP

    Using a Jacobi-Trudi-type identity, we show that the scalar product of a general state and a Bethe eigenstate in a finite-length XXZ spin-1/2 chain is (a restriction of) a KP tau function. This leads to a correspondence between the eigenstates and points on Sato's Grassmannian. Each of these points is a function of the rapidities of the corresponding eigenstate, the inhomogeneity variables of the spin chain and the crossing parameter.
    Comment: 14 pages, LaTeX2
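
    For orientation only: the classical Jacobi-Trudi identity writes a Schur function as a determinant of complete homogeneous symmetric functions,
    \[
    s_\lambda = \det\big(h_{\lambda_i - i + j}\big)_{1 \le i, j \le \ell(\lambda)}, \qquad h_0 = 1, \quad h_k = 0 \ \text{for } k < 0.
    \]
    The abstract refers only to a "Jacobi-Trudi-type" identity, so the determinant actually used for the XXZ scalar product may take a different form.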

    If Not Here, There. Explaining Machine Learning Models for Fault Localization in Optical Networks

    Machine Learning (ML) is being widely investigated to automate safety-critical tasks in optical-network management. However, in some cases, decisions taken by ML models are hard to interpret, motivate and trust, and this lack of explainability complicates ML adoption in network management. The rising field of Explainable Artificial Intelligence (XAI) tries to uncover the reasoning behind the decision-making of complex ML models, offering end-users a stronger sense of trust towards ML-automated decisions. In this paper we showcase an application of XAI, focusing on fault localization, and analyze the reasoning of the ML model, trained on real Optical Signal-to-Noise Ratio (OSNR) measurements, in two scenarios. In the first scenario we use measurements from a single monitor at the receiver, while in the second we also use measurements from multiple monitors along the path. With XAI, we show that additional monitors allow network operators to better understand the model's behavior, making the ML model more trustworthy and, hence, more practically adoptable.
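
    As an illustration of the kind of XAI analysis described above (not the authors' code, data or model), the following minimal Python sketch uses SHAP to attribute a fault-localization decision to individual OSNR monitors; the monitor features, the synthetic data model and the random-forest classifier are all assumptions made for the example.

    # Minimal sketch, not the paper's pipeline: explain a fault-localization
    # classifier with SHAP. Monitor names, the synthetic OSNR model and the
    # random-forest classifier are hypothetical choices for illustration.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_samples, n_monitors = 2000, 4

    # Synthetic data: a fault on span i depresses the OSNR (in dB) read by
    # monitor i and every monitor downstream of it.
    faulty_span = rng.integers(0, n_monitors, size=n_samples)
    X = rng.normal(loc=20.0, scale=1.0, size=(n_samples, n_monitors))
    for i, span in enumerate(faulty_span):
        X[i, span:] -= 5.0

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, faulty_span)

    # SHAP values quantify how much each monitor's reading pushed the model
    # toward (or away from) each candidate fault location for a given sample.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:200])
    print(np.shape(shap_values))

    With several monitors along the path, the attributions tend to concentrate on the first monitor downstream of the predicted fault, which is what makes the model's reasoning easy for an operator to inspect.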