
    Data protection and statistics – a dynamic and tension-filled relationship

    New statistical methods have been developed for the longer-term storage of microdata. These methods must comply, however, with the fundamental right to informational self-determination and the legal regulations imposed by the Federal Constitutional Court. Thus it is crucial to develop effective and coherent methods for protecting personal data collected for statistical purposes. Recent decisions by the Federal Constitutional Court are likely to result in the outlawing of comprehensive, permanent statistical compilations comprising microdata from a wide range of sources and updated regularly. However, aside from such comprehensive methods, there are certainly other ways of using microdata that cannot be dismissed from the outset as violating constitutional legal norms. Internet access to statistical microdata is likely to take on increased importance for scientific research in the near future. Yet this would radically change the entire landscape of data protection: the vast amount of additional information now available on the Internet makes it almost impossible to judge whether individuals can be rendered identifiable. In view of this almost unlimited information, individual data can only be offered over the Internet if the absolute anonymity of the data can be guaranteed.
    Keywords: Right to informational self-determination, census ruling of December 15, 1983, longer-term storage of microdata, primary statistics, secondary statistics, statistical confidentiality, absolute anonymisation, de facto anonymisation, additional information, pseudonymisation, personal data profiles.

    Comparisons of the execution times and memory requirements for high-speed Discrete Fourier Transforms and Fast Fourier Transforms, for the measurement of AC power harmonics

    Conventional wisdom dictates that a Fast Fourier Transform (FFT) will be a more computationally effective method for measuring multiple harmonics than a Discrete Fourier Transform (DFT) approach. However, in this paper it is shown that carefully coded discrete transforms which distribute their computational load over many frames can produce results in shorter execution times than the FFT approach, even for a large number of harmonic measurement frequencies. This is because the execution time of the presented DFT rises with N rather than with the classical N², while the execution time of the FFT rises with N log₂ N.
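The idea of distributing a DFT's load across frames can be illustrated with a sliding DFT, where each tracked harmonic bin is updated in O(1) work per incoming sample; this is a generic sketch of that technique, not the paper's exact algorithm, and the function and parameter names are illustrative.

```python
import cmath

def sliding_dft_bins(samples, window_len, bins):
    """Track selected DFT bins with O(1) work per bin per new sample.

    After exactly `window_len` samples (starting from a zero state), each
    tracked bin equals the standard DFT bin over that window.
    """
    # Precompute one twiddle factor per tracked harmonic bin.
    twiddles = {k: cmath.exp(2j * cmath.pi * k / window_len) for k in bins}
    state = {k: 0j for k in bins}
    window = [0.0] * window_len  # circular buffer of the last N samples
    pos = 0
    for x in samples:
        x_old = window[pos]      # sample leaving the window
        window[pos] = x
        pos = (pos + 1) % window_len
        for k in bins:
            # Classic sliding-DFT recurrence: shift the window by one sample.
            state[k] = (state[k] + x - x_old) * twiddles[k]
    return state
```

The per-sample cost grows linearly with the number of tracked bins, which is the sense in which such a transform scales with N rather than N².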

    Reducing the Number of Annotations in a Verification-oriented Imperative Language

    Automated software verification is a very active field of research which has made enormous progress in both theoretical and practical aspects. Recently, a substantial amount of research effort has been put into applying these techniques on top of mainstream programming languages. These languages typically provide powerful features such as reflection, aliasing and polymorphism, which are handy for practitioners but, in contrast, make verification a real challenge. In this work we present Pest, a simple experimental, while-style, multiprocedural, imperative programming language which was conceived with verifiability as one of its main goals. This language forces developers to think concurrently about both the statements needed to implement an algorithm and the assertions required to prove its correctness. In order to aid programmers, we propose several techniques to reduce the number and complexity of the annotations required to successfully verify their programs. In particular, we show that high-level iteration constructs may alleviate the need for providing complex loop annotations.
    Comment: 15 pages, 8 figures
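The point about high-level iteration constructs can be sketched without Pest's syntax (which is not reproduced here): a while-loop obliges the programmer to state a loop invariant, whereas a bounded for-each loop lets a verifier derive it mechanically. This Python sketch mimics the annotations with runtime assertions.

```python
def sum_while(xs):
    """While-style loop: the invariant must be stated explicitly,
    mimicking the annotation a verifier would require."""
    total, i = 0, 0
    while i < len(xs):
        assert total == sum(xs[:i])  # loop invariant supplied by the programmer
        total += xs[i]
        i += 1
    assert total == sum(xs)  # postcondition
    return total

def sum_foreach(xs):
    """High-level iteration: the bounded loop structure itself
    implies the invariant, so no annotation is needed."""
    total = 0
    for x in xs:
        total += x
    return total
```

Both compute the same result; the difference is how much the programmer must write down for a verifier to check it.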

    A Syntactic Model of Mutation and Aliasing

    Traditionally, semantic models of imperative languages use an auxiliary structure which mimics memory. In this way, ownership and other encapsulation properties need to be reconstructed from the graph structure of such a global memory. We present an alternative "syntactic" model where memory is encoded as part of the program rather than as a separate resource. This means that execution can be modelled by just rewriting source code terms, as in semantic models for functional programs. Formally, this is achieved by the block construct, introducing local variable declarations, which play the role of memory once their initializing expressions have been evaluated. In this way, we obtain a language semantics which directly represents aliasing constraints at the syntactic level, allowing simpler reasoning about related properties. To illustrate this advantage, we consider the issue, widely studied in the literature, of characterizing an isolated portion of memory, one that cannot be reached through external references. In the syntactic model, closed block values, called "capsules", provide a simple representation of isolated portions of memory, and capsules can be safely moved to another location in memory, without introducing sharing, by means of "affine" variables. We prove that the syntactic model can be encoded in the conventional one, and hence efficiently implemented.
    Comment: In Proceedings DCM 2018 and ITRS 2018, arXiv:1904.0956