
    A generalization of Hausdorff dimension applied to Hilbert cubes and Wasserstein spaces

    A Wasserstein space is a metric space of sufficiently concentrated probability measures over a general metric space. The main goal of this paper is to estimate the largeness of Wasserstein spaces, in a sense to be made precise. In the first part, we generalize the Hausdorff dimension by defining a family of bi-Lipschitz invariants, called critical parameters, that measure largeness for infinite-dimensional metric spaces. Basic properties of these invariants are given, and they are estimated for a natural set of spaces generalizing the usual Hilbert cube. In the second part, we estimate the value of these new invariants for some Wasserstein spaces, as well as the dynamical complexity of push-forward maps. The lower bounds rely on several embedding results; for example, we provide bi-Lipschitz embeddings of all powers of any space inside its Wasserstein space, with uniform bounds, and we prove that the Wasserstein space of a d-manifold has "power-exponential" critical parameter equal to d. Comment: v2 largely expanded version, as reflected by the change of title; all of part I on generalized Hausdorff dimension is new, as well as the embedding of Hilbert cubes into Wasserstein spaces. v3 modified according to the referee's final remarks; to appear in Journal of Topology and Analysis.

    Clustering as an example of optimizing arbitrarily chosen objective functions

    This paper is a reflection on a common practice of solving various types of learning problems by optimizing arbitrarily chosen criteria, in the hope that they are well correlated with the criterion actually used to assess the results. We investigate this issue using clustering as an example. A unified view of clustering as an optimization problem is first proposed, stemming from the belief that typical design choices in clustering, such as the number of clusters or the similarity measure, can be, and often are, suboptimal, also from the point of view of the clustering quality measures later used for algorithm comparison and ranking. To illustrate this point, we propose a generalized clustering framework and provide a proof of concept using standard benchmark datasets and two popular clustering methods for comparison.
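    The view of clustering as direct optimization of a pluggable objective can be illustrated with a small sketch. All names and the random-search strategy below are our own illustration, not the paper's framework; the point is only that the objective is a parameter, so the criterion later used for assessment can be optimized directly:

    ```python
    import random

    def sse(points, labels, k):
        """Within-cluster sum of squared distances, one possible objective."""
        total = 0.0
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if not members:
                continue
            centroid = sum(members) / len(members)
            total += sum((p - centroid) ** 2 for p in members)
        return total

    def random_search(points, k, objective, iters=500, seed=0):
        """Clustering as pure objective optimization: greedily keep the best labeling."""
        rng = random.Random(seed)
        best = [rng.randrange(k) for _ in points]
        best_score = objective(points, best, k)
        for _ in range(iters):
            cand = list(best)
            cand[rng.randrange(len(points))] = rng.randrange(k)  # local move
            score = objective(points, cand, k)
            if score < best_score:
                best, best_score = cand, score
        return best, best_score
    ```

    Swapping `objective` for the quality measure later used for ranking removes the mismatch between the optimized criterion and the assessment criterion that the paper discusses.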

    Ordered Measurements of Permutationally-Symmetric Qubit Strings

    We show that any sequence of measurements on a permutationally-symmetric (pure or mixed) multi-qubit string leaves the unmeasured qubit substring permutationally-symmetric as well. In addition, we show that the measurement probabilities for an arbitrary sequence of single-qubit measurements are independent of how many unmeasured qubits have been lost prior to the measurement. Our results are valuable for quantum information processing of indistinguishable particles by post-selection, e.g. in cases where the results of an experiment are discarded conditioned upon the occurrence of a given event such as particle loss. Furthermore, our results are important for the design of adaptive-measurement strategies, e.g. a series of measurements in which, for each measurement instance, the measurement basis is chosen depending on prior measurement results. Comment: 13 pages.

    Numerical Analysis of Boosting Scheme for Scalable NMR Quantum Computation

    Among initialization schemes for ensemble quantum computation beginning at thermal equilibrium, the scheme proposed by Schulman and Vazirani [L. J. Schulman and U. V. Vazirani, in Proceedings of the 31st ACM Symposium on Theory of Computing (STOC'99) (ACM Press, New York, 1999), pp. 322-329] is known for its simple quantum circuit for redistributing the biases (polarizations) of qubits and for its small time complexity. However, our numerical simulation shows that the number of qubits initialized by the scheme is considerably smaller than expected from the von Neumann entropy, because of an increase in the sum of the binary entropies of the individual qubits, which indicates a growth in the total classical correlation. This result, namely that there is such a significant growth in the total binary entropy, disagrees with their analysis. Comment: 14 pages, 18 figures, RevTeX4; v2, v3: typos corrected; v4: minor changes in PROGRAM 1, conforming it to the actual programs used in the simulation; v5: correction of a typographical error in the inequality sign in PROGRAM 1; v6: this version contains a new section on classical correlations; v7: correction of a wrong use of terminology; v8: Appendix A has been added; v9: published in PR
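    The von Neumann entropy bound mentioned above can be made concrete: n qubits, each with bias (polarization) eps, can yield at most n * (1 - H((1 + eps) / 2)) initialized qubits, where H is the binary entropy. A small sketch of this bound (our own illustration, not the paper's simulation):

    ```python
    import math

    def binary_entropy(p):
        """H(p) in bits, with H(0) = H(1) = 0 by convention."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def entropy_bound(n_qubits, bias):
        """Upper bound on initializable qubits implied by the von Neumann entropy."""
        return n_qubits * (1.0 - binary_entropy((1.0 + bias) / 2.0))
    ```

    The abstract's observation is that the simulated scheme falls short of this bound because the sum of per-qubit binary entropies grows during boosting.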

    FDTD Simulation of Thermal Noise in Open Cavities

    A numerical model based on the finite-difference time-domain (FDTD) method is developed to simulate thermal noise in open cavities owing to output coupling. The absorbing boundary of the FDTD grid is treated as a blackbody whose thermal radiation penetrates the cavity in the grid. The calculated amount of thermal noise in a one-dimensional dielectric cavity recovers the standard result of the quantum Langevin equation in the Markovian regime. Our FDTD simulation also demonstrates that, in the non-Markovian regime, the buildup of the intracavity noise field depends on the ratio of the cavity photon lifetime to the coherence time of the thermal radiation. The advantage of our numerical method is that the thermal noise is introduced in the time domain without prior knowledge of the cavity modes. Comment: 8 pages, 7 figures.
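    A minimal one-dimensional FDTD leapfrog loop with a stochastic source at one boundary conveys the flavor of the approach. This is a toy stand-in only: the paper's blackbody boundary statistics, dielectric cavity, and absorbing boundary conditions are all omitted here.

    ```python
    import random

    def fdtd_1d_noise(steps=500, nx=200, noise_amp=1e-3, seed=1):
        """Leapfrog update of E and H fields on a 1-D grid (Courant number 0.5),
        with Gaussian noise injected at the left boundary each time step."""
        rng = random.Random(seed)
        ez = [0.0] * nx  # electric field
        hy = [0.0] * nx  # magnetic field
        for _ in range(steps):
            for i in range(nx - 1):
                hy[i] += 0.5 * (ez[i + 1] - ez[i])
            ez[0] = rng.gauss(0.0, noise_amp)  # stochastic "thermal" source
            for i in range(1, nx):
                ez[i] += 0.5 * (hy[i] - hy[i - 1])
        return ez
    ```

    The Courant number of 0.5 keeps the update stable; the injected noise then propagates into the grid in the time domain, with no modal decomposition needed, which is the key point of the method.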

    A Number-Theoretic Error-Correcting Code

    In this paper we describe a new error-correcting code (ECC) inspired by the Naccache-Stern cryptosystem. While far less efficient than Turbo codes, the proposed ECC happens to be more efficient than some established ECCs for certain sets of parameters. The new ECC adds an appendix to the message: the modular product of small primes representing the message bits. The receiver recomputes the product and detects transmission errors using modular division and lattice reduction.
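    A heavily simplified sketch of the prime-product idea follows. This is our own toy version: realistic parameters, the modular reduction, and the lattice-reduction decoder from the paper are omitted, and only a single bit flip is located, by trial division or multiplication against the transmitted appendix.

    ```python
    # First 8 primes; prime PRIMES[i] stands for message bit i.
    PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]

    def appendix(bits):
        """Product of the primes whose corresponding message bit is set."""
        prod = 1
        for bit, p in zip(bits, PRIMES):
            if bit:
                prod *= p
        return prod

    def locate_single_flip(received_bits, sent_appendix):
        """Return the index of a single flipped bit, or None if consistent."""
        c = appendix(received_bits)
        if c == sent_appendix:
            return None
        for i, p in enumerate(PRIMES):
            if received_bits[i]:
                if c % p == 0 and c // p == sent_appendix:
                    return i  # bit i was flipped 0 -> 1 in transit
            elif c * p == sent_appendix:
                return i      # bit i was flipped 1 -> 0 in transit
        return -1  # more than one error: not locatable this way
    ```

    Because each bit maps to a distinct prime, unique factorization pins down which bit changed; the paper's actual scheme works modulo a public modulus and recovers multiple errors with lattice reduction.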

    Stacking Gravitational Wave Signals from Soft Gamma Repeater Bursts

    Soft gamma repeaters (SGRs) have unique properties that make them intriguing targets for gravitational wave (GW) searches. They are nearby, their burst emission mechanism may involve neutron star crust fractures and excitation of quasi-normal modes, and they burst repeatedly and sometimes spectacularly. A recent LIGO search for transient GW from these sources placed upper limits on a set of almost 200 individual SGR bursts. These limits were within the theoretically predicted range of some models. We present a new search strategy that builds upon the method used there by "stacking" potential GW signals from multiple SGR bursts. We assume that the variation in the time difference between burst electromagnetic emission and burst GW emission is small relative to the GW signal duration, and we time-align GW excess-power time-frequency tilings containing individual burst triggers to their corresponding electromagnetic emissions. Using Monte Carlo simulations, we confirm that gains in GW energy sensitivity of N^{1/2} are possible, where N is the number of stacked SGR bursts. Estimated sensitivities for a mock search for gravitational waves from the 2006 March 29 storm of SGR 1900+14 are also presented, for two GW emission models, "fluence-weighted" and "flat" (unweighted). Comment: 17 pages, 16 figures, submitted to PR
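    The N^{1/2} scaling has a simple statistical origin: stacking N bursts multiplies any common signal energy by N, while the standard deviation of the summed noise power grows only as sqrt(N). A Monte Carlo sketch of the noise side (our own illustration of the scaling, not the paper's search pipeline):

    ```python
    import random
    import statistics

    def noise_power(rng, n_samples=32):
        """Excess-power statistic of one pure-noise tile (unit-variance Gaussian)."""
        return sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(n_samples))

    def stacked_noise_std(n_bursts, trials=2000, seed=7):
        """Std of the sum of n_bursts independent noise-power statistics."""
        rng = random.Random(seed)
        sums = [sum(noise_power(rng) for _ in range(n_bursts))
                for _ in range(trials)]
        return statistics.pstdev(sums)
    ```

    Empirically, stacked_noise_std(16) / stacked_noise_std(1) comes out close to sqrt(16) = 4, so a common per-burst signal energy roughly four times weaker still stands out against the stacked noise, which is the sqrt(N) sensitivity gain quoted above.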

    Melkprijs en melkproduktie (Milk price and milk production)
