    Distillation protocols for Fourier states in quantum computing

    Fourier states are multi-qubit registers that facilitate phase rotations in fault-tolerant quantum computing. We propose distillation protocols for constructing the fundamental n-qubit Fourier state with error O(2^{-n}) at a cost of O(n log n) Toffoli and Clifford gates, or an arbitrary Fourier state using O(n^2) gates. We analyze these protocols with methods from digital signal processing. These results suggest that phase kickback, which uses Fourier states, could be the current lowest-overhead method for generating arbitrary phase rotations. Comment: 18 pages, 4 figures
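
    For orientation, the n-qubit Fourier state that phase kickback consumes can be written explicitly; the sign convention below is an assumption, not taken from the abstract:

        |F_n\rangle = \frac{1}{\sqrt{2^n}} \sum_{k=0}^{2^n - 1} e^{2\pi i k / 2^n} |k\rangle

        \mathrm{ADD}_m |F_n\rangle = e^{-2\pi i m / 2^n} |F_n\rangle,  where  \mathrm{ADD}_m : |k\rangle \mapsto |k + m \bmod 2^n\rangle

    A modular adder applied to |F_n> thus leaves the register unchanged up to a phase e^{-2\pi i m / 2^n} kicked back onto the control, which is why an arbitrary rotation can be obtained to n bits of precision and why an O(2^{-n}) error in the distilled state suffices.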

    Algorithmic statistics revisited

    The mission of statistics is to provide adequate statistical hypotheses (models) for observed data. But what is an "adequate" model? To answer this question, one needs to use the notions of algorithmic information theory. It turns out that for every data string x one can naturally define a "stochasticity profile", a curve that represents a trade-off between the complexity of a model and its adequacy. This curve has four different equivalent definitions in terms of (1) randomness deficiency, (2) minimal description length, (3) position in the lists of simple strings and (4) Kolmogorov complexity with decompression time bounded by the busy beaver function. We present a survey of the corresponding definitions and results relating them to each other.
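
    For readers unfamiliar with the vocabulary, the two deficiencies behind definitions (1) and (2) are standard one-liners (stated here as background, not as claims of the survey): for a finite set A containing x,

        d(x \mid A) = \log_2 |A| - C(x \mid A)              (randomness deficiency of x in the model A)
        \delta(x, A) = C(A) + \log_2 |A| - C(x)             (optimality deficiency of the two-part description "A plus index of x in A")

    where C denotes (conditional) Kolmogorov complexity. The stochasticity profile then records, for each bound \alpha on the model complexity C(A), the smallest deficiency achievable by a model A with C(A) \le \alpha.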

    Compressive Parameter Estimation for Sparse Translation-Invariant Signals Using Polar Interpolation

    We propose new compressive parameter estimation algorithms that make use of polar interpolation to improve the estimator precision. Our work extends previous approaches involving polar interpolation for compressive parameter estimation in two aspects: (i) we extend the formulation from real non-negative amplitude parameters to arbitrary complex ones, and (ii) we allow for mismatch between the manifold described by the parameters and its polar approximation. To quantify the improvements afforded by the proposed extensions, we evaluate six algorithms for estimation of parameters in sparse translation-invariant signals, exemplified with the time delay estimation problem. The evaluation is based on three performance metrics: estimator precision, sampling rate and computational complexity. We use compressive sensing with all the algorithms to lower the necessary sampling rate and show that it is still possible to attain good estimation precision and keep the computational complexity low. Our numerical experiments show that the proposed algorithms outperform existing approaches that either leverage polynomial interpolation or are based on a conversion to a frequency-estimation problem followed by a super-resolution algorithm. The algorithms studied here provide various tradeoffs between computational complexity, estimation precision, and necessary sampling rate. The work shows that compressive sensing for the class of sparse translation-invariant signals allows for a decrease in sampling rate and that the use of polar interpolation increases the estimation precision. Comment: 13 pages, 5 figures, to appear in IEEE Transactions on Signal Processing; minor edits and corrections
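
    As a point of comparison only (this is not the authors' polar-interpolation estimator), a minimal on-grid sketch of compressive time delay estimation for a single translated pulse might look as follows; all names and parameter values are illustrative assumptions:

        # Minimal on-grid sketch of compressive time-delay estimation (illustrative only).
        import numpy as np

        rng = np.random.default_rng(0)
        N, M = 256, 64                      # Nyquist-rate length vs. compressive measurements
        t = np.arange(N)

        def pulse(delay, width=4.0):
            # Gaussian pulse shifted by `delay` samples: the translation-invariant atom.
            return np.exp(-0.5 * ((t - delay) / width) ** 2)

        true_delay = 103
        amplitude = 1.5 + 0.5j              # arbitrary complex amplitude, as in extension (i)
        x = amplitude * pulse(true_delay)

        Phi = rng.standard_normal((M, N)) / np.sqrt(M)     # random measurement matrix
        y = Phi @ x + 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

        # Matched-filter estimate over a dictionary of on-grid delays.
        D = np.stack([pulse(d) for d in range(N)], axis=1)  # one atom per column
        est_delay = int(np.argmax(np.abs((Phi @ D).conj().T @ y)))
        print(est_delay, "vs true", true_delay)

    The grid limits precision to integer delays; refining such estimates off the grid without a dense dictionary is exactly what the paper's polar-interpolation machinery is for.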

    Modeling Mental Qualities

    Conscious experiences are characterized by mental qualities, such as those involved in seeing red, feeling pain, or smelling cinnamon. The standard framework for modeling mental qualities represents them via points in geometrical spaces, where distances between points inversely correspond to degrees of phenomenal similarity. This paper argues that the standard framework is structurally inadequate and develops a new framework that is more powerful and flexible. The core problem for the standard framework is that it cannot capture precision structure: for example, consider the phenomenal contrast between seeing an object as crimson in foveal vision versus merely as red in peripheral vision. The solution I favor is to model mental qualities using regions, rather than points. I explain how this seemingly simple formal innovation not only provides a natural way of modeling precision, but also yields a variety of further theoretical fruits: it enables us to formulate novel hypotheses about the space and structures of mental qualities, formally differentiate two dimensions of phenomenal similarity, generate a quantitative model of the phenomenal sorites, and define a measure of discriminatory grain. A noteworthy consequence is that the structure of the mental qualities of conscious experiences is fundamentally different from the structure of the perceptible qualities of external objects.
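
    A toy illustration (mine, not the paper's formalism) of the point-versus-region contrast, in which precision is simply the inverse size of a region in a one-dimensional quality space:

        # Illustrative sketch only: a mental quality as a region, so precision = 1 / region size.
        from dataclasses import dataclass

        @dataclass
        class QualityRegion:
            lo: float          # bounds of the region in a one-dimensional quality space
            hi: float

            def precision(self) -> float:
                # Narrower regions model more precise experiences (e.g. foveal vision).
                return 1.0 / (self.hi - self.lo)

            def overlaps(self, other: "QualityRegion") -> bool:
                # A crude stand-in for one notion of compatibility between experiences.
                return self.lo <= other.hi and other.lo <= self.hi

        foveal_crimson = QualityRegion(0.96, 0.98)   # seen as crimson: narrow, high precision
        peripheral_red = QualityRegion(0.90, 1.00)   # merely seen as red: wide, low precision
        print(foveal_crimson.precision(), peripheral_red.precision())   # ~50 vs ~10
        print(foveal_crimson.overlaps(peripheral_red))                  # True

    This only makes the formal move concrete; the paper's framework is richer, with multi-dimensional quality spaces and its own similarity and grain measures.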

    Verifying Atom Entanglement Schemes by Testing Bell's Inequality

    Recent experiments testing Bell's inequality with entangled photons and ions have aimed at probing basic quantum mechanical principles. Interesting results have been obtained and many loopholes could be closed. In this paper we point out that tests of Bell's inequality also play an important role in verifying atom entanglement schemes. As examples, we describe a scheme to prepare arbitrary entangled states of N two-level atoms using a leaky optical cavity and a scheme to entangle atoms inside a photonic crystal. During the state preparation no photons are emitted, and observing a violation of Bell's inequality is the only way to test whether a scheme works with high precision. Comment: Proceedings for the conference Garda 2000, to appear in Zeitschrift fuer Naturforschung, 7 pages, 7 figures
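
    For reference, the inequality most commonly tested in such settings is the CHSH form (whether the paper uses exactly this form is not stated in the abstract): for measurement settings a, a' and b, b' on the two subsystems,

        S = E(a, b) - E(a, b') + E(a', b) + E(a', b'),   with   |S| \le 2

    for any local hidden-variable model, while quantum mechanics allows |S| up to 2\sqrt{2}. An observed violation therefore certifies that the prepared atoms are genuinely entangled, which is what makes the test useful for verifying the preparation scheme.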

    The Cosmic Linear Anisotropy Solving System (CLASS) I: Overview

    The Cosmic Linear Anisotropy Solving System (CLASS) is a new accurate Boltzmann code, designed to offer a more user-friendly and flexible coding environment to cosmologists. CLASS is very structured, easy to modify, and offers a rigorous way to control the accuracy of output quantities. It is also incidentally a bit faster than other codes. In this overview, we present the general principles of CLASS and its basic structure. We emphasize the user-friendliness and flexibility aspects, while accuracy, physical approximations and performance are discussed in a series of companion papers. Comment: 19 pages, typos corrected. Code available at http://class-code.net
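
    As a usage illustration only: the C code is normally driven by an input file (./class explanatory.ini), and assuming your release ships the classy Python wrapper, a minimal run computing lensed CMB spectra might look like the sketch below. Parameter and key names follow common CLASS conventions but should be checked against the version you use:

        # Minimal sketch, assuming the `classy` wrapper is installed alongside CLASS.
        from classy import Class

        cosmo = Class()
        cosmo.set({
            "output": "tCl,pCl,lCl",    # temperature, polarization, lensing potential
            "lensing": "yes",
            "h": 0.67,
            "omega_b": 0.022,
            "omega_cdm": 0.12,
        })
        cosmo.compute()

        cls = cosmo.lensed_cl(2500)     # dict of C_ell arrays, e.g. cls["tt"], cls["ee"]
        cosmo.struct_cleanup()          # free the C-level structures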

    XRay: Enhancing the Web's Transparency with Differential Correlation

    Today's Web services - such as Google, Amazon, and Facebook - leverage user data for varied purposes, including personalizing recommendations, targeting advertisements, and adjusting prices. At present, users have little insight into how their data is being used. Hence, they cannot make informed decisions about the services they use. To increase transparency, we developed XRay, the first fine-grained, robust, and scalable personal data tracking system for the Web. XRay predicts which data in an arbitrary Web account (such as emails, searches, or viewed products) is being used to target which outputs (such as ads, recommended products, or prices). XRay's core functions are service agnostic and easy to instantiate for new services, and they can track data within and across services. To make predictions independent of the audited service, XRay relies on the following insight: by comparing outputs from different accounts with similar, but not identical, subsets of data, one can pinpoint targeting through correlation. We show both theoretically and through experiments on Gmail, Amazon, and YouTube that XRay achieves high precision and recall by correlating data from a surprisingly small number of extra accounts. Comment: Extended version of a paper presented at the 23rd USENIX Security Symposium (USENIX Security 14)
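
    A toy version of that insight (illustrative only; XRay's real correlation scoring and account-placement strategy are more involved) could look like this:

        # Toy differential-correlation sketch: shadow accounts hold random subsets of the
        # user's inputs, and each input is scored by how well its presence predicts the ad.
        import random

        random.seed(1)
        inputs = ["flight to Oslo", "running shoes", "mortgage rates", "sushi recipes"]
        TARGETED_BY = "mortgage rates"      # ground truth, used only to simulate the service

        def service_shows_ad(account_inputs):
            # Stand-in for the audited service: the ad targets one specific input.
            return TARGETED_BY in account_inputs

        accounts = [set(random.sample(inputs, k=len(inputs) // 2)) for _ in range(8)]
        observations = [service_shows_ad(acc) for acc in accounts]

        def score(item):
            # P(ad | item present) - P(ad | item absent)
            with_item = [ad for acc, ad in zip(accounts, observations) if item in acc]
            without   = [ad for acc, ad in zip(accounts, observations) if item not in acc]
            p = lambda xs: sum(xs) / len(xs) if xs else 0.0
            return p(with_item) - p(without)

        print(max(inputs, key=score))       # expected to recover "mortgage rates"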

    LR-CNN: Local-aware Region CNN for Vehicle Detection in Aerial Imagery

    State-of-the-art object detection approaches such as Fast/Faster R-CNN, SSD, or YOLO have difficulties detecting dense, small targets with arbitrary orientation in large aerial images. The main reason is that using interpolation to align RoI features can result in a lack of accuracy or even loss of location information. We present the Local-aware Region Convolutional Neural Network (LR-CNN), a novel two-stage approach for vehicle detection in aerial imagery. We enhance translation invariance to detect dense vehicles and address the boundary quantization issue among them by aggregating high-precision RoI features. Moreover, we resample high-level semantic pooled features so that they regain location information from the features of a shallower convolutional block. This strengthens the local feature invariance of the resampled features and enables detecting vehicles in arbitrary orientations. The local feature invariance enhances the learning ability of the focal loss function, and the focal loss further helps to focus on hard examples. Taken together, our method better addresses the challenges of aerial imagery. We evaluate our approach on several challenging datasets (VEDAI, DOTA), demonstrating a significant improvement over state-of-the-art methods. We also demonstrate the good generalization ability of our approach on the DLR 3K dataset. Comment: 8 pages
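
    Since the abstract leans on the focal loss for hard-example mining, here is the generic binary form of that loss for reference; the paper's actual alpha, gamma and multi-class variant are not specified here and are assumptions:

        # Generic binary focal loss (Lin et al.); settings below are illustrative defaults.
        import numpy as np

        def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
            """p: predicted foreground probabilities, y: binary labels (1 = vehicle)."""
            p = np.clip(p, eps, 1.0 - eps)
            p_t = np.where(y == 1, p, 1.0 - p)             # probability of the true class
            alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
            # (1 - p_t)^gamma down-weights easy examples, focusing training on hard ones.
            return np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t))

        print(focal_loss(np.array([0.9, 0.2, 0.6]), np.array([1, 0, 1])))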