
    Key Distillation and the Secret-Bit Fraction

    We consider the distillation of secret bits from partially secret noisy correlations P_ABE, shared between two honest parties and an eavesdropper. The most studied distillation scenario consists of joint operations on a large number of copies of the distribution, (P_ABE)^N, assisted by public communication. Here we consider distillation with only one copy of the distribution, and instead of rates we optimize the 'quality' of the distilled secret bits, quantified by the secret-bit fraction of the result. The secret-bit fraction of a binary distribution is the proportion that constitutes a secret bit between Alice and Bob. We find the maximal secret-bit fraction extractable from a distribution P_ABE with local operations and public communication, denoted Lambda[P_ABE]. This quantity is shown to be nonincreasing under local operations and public communication, and nondecreasing under the eavesdropper's local operations: it is a secrecy monotone. It is shown that if Lambda[P_ABE] > 1/2 then P_ABE is distillable, thus providing a sufficient condition for distillability. A simple expression for Lambda[P_ABE] is found when the eavesdropper is decoupled, and when the honest parties' information is binary and the local operations are reversible. Intriguingly, for general distributions the optimal operation requires local degradation of the data.

    Comment: 12 pages
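
    In symbols, the abstract's main claims can be summarised as follows (a paraphrase in illustrative notation, not necessarily the paper's own): writing lambda(P) for the secret-bit fraction of a binary distribution P,

        % Paraphrase of the abstract's definitions; the notation is illustrative.
        \Lambda[P_{ABE}] \;=\; \sup_{\mathcal{L}\in\mathrm{LOPC}} \lambda\bigl(\mathcal{L}(P_{ABE})\bigr)
        % Secrecy monotone: nonincreasing under honest-party LOPC maps \mathcal{L},
        % nondecreasing under the eavesdropper's local maps \mathcal{E}.
        \Lambda[\mathcal{L}(P_{ABE})] \;\le\; \Lambda[P_{ABE}], \qquad
        \Lambda[\mathcal{E}(P_{ABE})] \;\ge\; \Lambda[P_{ABE}]
        % Sufficient condition for distillability.
        \Lambda[P_{ABE}] \;>\; \tfrac{1}{2} \;\Longrightarrow\; P_{ABE}\ \text{is distillable}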

    Hall-Littlewood polynomials and characters of affine Lie algebras

    The Weyl-Kac character formula gives a beautiful closed-form expression for the characters of integrable highest-weight modules of Kac-Moody algebras. It is not, however, a formula that is combinatorial in nature, obscuring positivity. In this paper we show that the theory of Hall-Littlewood polynomials may be employed to prove Littlewood-type combinatorial formulas for the characters of certain highest-weight modules of the affine Lie algebras C_n^{(1)}, A_{2n}^{(2)} and D_{n+1}^{(2)}. Through specialisation this yields generalisations for B_n^{(1)}, C_n^{(1)}, A_{2n-1}^{(2)}, A_{2n}^{(2)} and D_{n+1}^{(2)} of Macdonald's identities for powers of the Dedekind eta-function. These generalised eta-function identities include the Rogers-Ramanujan, Andrews-Gordon and Göllnitz-Gordon q-series as special, low-rank cases.

    Comment: 33 pages; proofs of several conjectures from the earlier version have been included
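
    For reference, the Weyl-Kac character formula mentioned above reads, in standard notation (W the Weyl group, ell the length function, rho the Weyl vector, Delta_+ the positive roots of the Kac-Moody algebra):

        % Weyl-Kac character formula for an integrable highest-weight module V(\Lambda).
        \mathrm{ch}\,V(\Lambda) \;=\;
          \frac{\sum_{w\in W} (-1)^{\ell(w)}\, e^{w(\Lambda+\rho)-\rho}}
               {\prod_{\alpha\in\Delta_+} \bigl(1-e^{-\alpha}\bigr)^{\mathrm{mult}\,\alpha}}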

    Expressing Privacy Preferences in terms of Invasiveness

    Dynamic context-aware systems need highly flexible privacy-protection mechanisms. We describe an extension to an existing RBAC-based mechanism that utilises a dynamic measure of invasiveness to determine whether contextual information should be released.
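
    A minimal sketch of how such a check might look (all names and thresholds here are hypothetical illustrations, not the paper's actual mechanism): a request is granted only if the requesting role holds the relevant permission and the context item's invasiveness score stays within the owner's per-role tolerance.

        # Hypothetical sketch of an RBAC check extended with an invasiveness
        # threshold; names and scores are illustrative assumptions.
        from dataclasses import dataclass, field

        @dataclass
        class PrivacyPolicy:
            role_permissions: dict = field(default_factory=dict)    # role -> set of context types
            invasiveness_limit: dict = field(default_factory=dict)  # role -> max tolerated score

        def may_release(policy, role, context_type, invasiveness):
            """Return True if `role` may see `context_type` at this invasiveness level."""
            if context_type not in policy.role_permissions.get(role, set()):
                return False  # classic RBAC denial
            return invasiveness <= policy.invasiveness_limit.get(role, 0.0)

        # Example: a colleague may see coarse location (low invasiveness) but
        # not precise GPS coordinates (high invasiveness).
        policy = PrivacyPolicy(
            role_permissions={"colleague": {"location"}},
            invasiveness_limit={"colleague": 0.4},
        )
        print(may_release(policy, "colleague", "location", 0.2))  # True
        print(may_release(policy, "colleague", "location", 0.8))  # False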

    Evolutionary Inference for Function-valued Traits: Gaussian Process Regression on Phylogenies

    Biological data objects often have both of the following features: (i) they are functions rather than single numbers or vectors, and (ii) they are correlated due to phylogenetic relationships. In this paper we give a flexible statistical model for such data by combining assumptions from phylogenetics with Gaussian processes. We describe its use as a nonparametric Bayesian prior distribution, both for prediction (placing posterior distributions on ancestral functions) and for model selection (comparing rates of evolution across a phylogeny, or identifying the most likely phylogenies consistent with the observed data). Our work is integrative, extending the popular phylogenetic Brownian-motion and Ornstein-Uhlenbeck models to functional data and Bayesian inference, and extending Gaussian-process regression to phylogenies. We provide a brief illustration of the application of our method.

    Comment: 7 pages, 1 figure
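
    One way to realise such a model in code is with a separable covariance: a phylogenetic kernel over taxa (here an OU-style decay in patristic distance, an assumed form) multiplied by an observation kernel over the function's input. The sketch below is an illustration under those assumptions, not the authors' implementation.

        # Separable phylogenetic Gaussian-process covariance (illustrative).
        import numpy as np

        def phylo_kernel(d_phylo, sigma2=1.0, length=1.0):
            """OU-style covariance from patristic distance between taxa (assumed form)."""
            return sigma2 * np.exp(-d_phylo / length)

        def obs_kernel(x1, x2, length=1.0):
            """Squared-exponential covariance over the function's argument."""
            return np.exp(-0.5 * (x1 - x2) ** 2 / length**2)

        def gp_covariance(taxa_dists, xs):
            """Full covariance over (taxon, input) pairs: K = K_phylo (x) K_obs."""
            K_phylo = phylo_kernel(taxa_dists)
            K_obs = obs_kernel(xs[:, None], xs[None, :])
            return np.kron(K_phylo, K_obs)  # separable (Kronecker) structure

        # Toy example: 3 taxa with given pairwise patristic distances,
        # functions observed on a common grid of 5 points.
        D = np.array([[0.0, 1.0, 2.0],
                      [1.0, 0.0, 2.0],
                      [2.0, 2.0, 0.0]])
        xs = np.linspace(0.0, 1.0, 5)
        print(gp_covariance(D, xs).shape)  # (15, 15)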

    Re-Politicising Regulation: Politics, Regulatory Variation and Fuzzy Liberalisation in the Single European Energy Market

    [From the introduction] The idea that we are living in the age of the regulatory state has dominated the study of public policy in the European Union and its member states in general, and the study of the utilities sectors in particular. The European Commission's continuous drive to expand the Single Market has therefore been a free-market, rule-oriented project, driven by regulatory politics rather than policies that involve direct public expenditure. The dynamics of European integration are rooted in three central concepts: free trade, multilateral rules, and supranational cooperation. During the 1990s EU competition policy took a 'public turn' and set its sights on the public sector. EU legislation broke up national monopolies in telecommunications, electricity and gas, and set the scene for further extension of the single market into hitherto protected sectors. Both the integration-theory literature (intergovernmentalist and institutionalist alike) and the literature on the emergence of the EU as a 'regulatory state' assumed that this was primarily a matter of policy making: once agreement had been reached to liberalise the utilities markets, a relatively homogeneous process would follow. The regulatory state model fitted the original common market blueprint better than the old industrial policy approaches. On the other hand, sector-specific studies continue to reveal a less than fully homogeneous internal market. The EU has undergone momentous changes in the last two decades, which have rendered the notion of a homogeneous single market somewhat unrealistic.

    Highly comparative feature-based time-series classification

    A highly comparative, feature-based approach to time-series classification is introduced that uses an extensive database of algorithms to extract thousands of interpretable features from time series. These features are drawn from across the scientific time-series analysis literature, and include summaries of time series in terms of their correlation structure, distribution, entropy, stationarity, scaling properties, and fits to a range of time-series models. After computing thousands of features for each time series in a training set, those most informative of the class structure are selected using greedy forward feature selection with a linear classifier. The resulting feature-based classifiers automatically learn the differences between classes using a reduced number of time-series properties, and circumvent the need to calculate distances between time series. Representing time series in this way yields orders-of-magnitude dimensionality reduction, allowing the method to perform well on very large datasets containing long time series, or time series of different lengths. For many of the datasets studied, classification performance exceeded that of conventional instance-based classifiers, including one-nearest-neighbor classifiers using Euclidean distances and dynamic time warping. Most importantly, the selected features provide an understanding of the properties of the dataset, insight that can guide further scientific investigation.
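
    The pipeline can be sketched in a few lines (a toy illustration with a handful of hand-picked features, not the paper's feature database): extract interpretable summary statistics per series, then pick features greedily by cross-validated linear-classifier accuracy.

        # Toy sketch of feature-based time-series classification with greedy
        # forward selection; the four features here are illustrative choices.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        def features(ts):
            """Map one time series to interpretable summary statistics."""
            ac1 = np.corrcoef(ts[:-1], ts[1:])[0, 1]  # lag-1 autocorrelation
            return np.array([ts.mean(), ts.std(), ac1, np.abs(np.diff(ts)).mean()])

        def greedy_forward_select(X, y, n_keep=2):
            """Add features one at a time by cross-validated classifier accuracy."""
            chosen = []
            while len(chosen) < n_keep:
                scores = {}
                for j in range(X.shape[1]):
                    if j in chosen:
                        continue
                    cols = chosen + [j]
                    scores[j] = cross_val_score(
                        LogisticRegression(), X[:, cols], y, cv=3).mean()
                chosen.append(max(scores, key=scores.get))
            return chosen

        # Toy data: noisy sine waves vs. white noise.
        rng = np.random.default_rng(0)
        series = [np.sin(np.linspace(0, 8, 100)) + 0.3 * rng.standard_normal(100)
                  for _ in range(20)]
        series += [rng.standard_normal(100) for _ in range(20)]
        y = np.array([0] * 20 + [1] * 20)
        X = np.vstack([features(s) for s in series])
        print(greedy_forward_select(X, y))  # indices of the selected features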

    Alternative Archaeological Representations within Virtual Worlds

    Traditional VR methods allow the user to tour and view the virtual world from different perspectives. Increasingly, more interactive and adaptive worlds are being generated, potentially allowing the user to interact with and affect objects in the virtual world. We describe and compare four models of operation that allow the publisher to generate views, with the client manipulating and affecting specific objects in the world. We demonstrate these approaches through a problem in archaeological visualization.