
    Shannon Perfect Secrecy in a Discrete Hilbert Space

    The one-time pad (OTP) was mathematically proven to be perfectly secure by Shannon in 1949. We propose to extend the classical OTP from an n-bit finite field to the entire symmetric group over the finite field. Within this context, the symmetric group can be represented by a discrete Hilbert sphere (DHS) over an n-bit computational basis. Unlike the continuous Hilbert space defined over a complex field in quantum computing, a DHS is defined over the finite field GF(2). Within this DHS, the entire symmetric group can be completely described by the complete set of n-bit binary permutation matrices. A plaintext is encoded by randomly selecting a permutation matrix from the symmetric group and multiplying it with the computational basis vector associated with the state corresponding to the data to be encoded. The resulting vector is then converted to an output state as the ciphertext. Decoding follows the same procedure but uses the transpose of the pre-shared permutation matrix. We demonstrate that under this extension, the 1-to-1 mapping of the classical OTP is decoupled, with all mappings equally likely, in the discrete Hilbert space. The uncertainty relationship between permutation matrices protects the selected pad of M permutation matrices (also called a quantum permutation pad, or QPP). The QPP not only maintains the perfect secrecy feature of the classical formulation but is also reusable without invalidating the perfect secrecy property. The extended Shannon perfect secrecy is then stated as follows: the ciphertext C gives absolutely no information about the plaintext P or the pad.

    Comment: 7 pages, 1 figure; presented and published by QCE202
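    The encode/decode procedure described in the abstract can be sketched in plain Python. This is a minimal illustration of permutation-matrix encoding under the stated scheme, not the authors' implementation; all function names and the seed are made up:

```python
import random

def random_permutation_matrix(n, rng):
    """Return a random n x n binary permutation matrix as a list of rows."""
    perm = list(range(n))
    rng.shuffle(perm)
    return [[1 if perm[r] == c else 0 for c in range(n)] for r in range(n)]

def apply_matrix(m, v):
    """Multiply a binary matrix by a 0/1 column vector."""
    return [sum(m[r][c] * v[c] for c in range(len(v))) % 2 for r in range(len(m))]

def transpose(m):
    return [list(col) for col in zip(*m)]

# Encode a 3-bit symbol: map it to a computational basis vector of length 2**3,
# permute it with the pre-shared matrix, and read off the resulting state index.
rng = random.Random(42)
dim = 2 ** 3
pad = random_permutation_matrix(dim, rng)    # one matrix of a QPP

plaintext_state = 5                          # symbol to encode, in 0..7
basis = [1 if i == plaintext_state else 0 for i in range(dim)]
cipher_vec = apply_matrix(pad, basis)
ciphertext_state = cipher_vec.index(1)

# Decoding applies the transpose, which is the inverse of a permutation matrix.
recovered_vec = apply_matrix(transpose(pad), cipher_vec)
recovered_state = recovered_vec.index(1)
```

    Because a permutation matrix is orthogonal over 0/1 vectors, the transpose exactly undoes the encoding, which is why only the pad needs to be pre-shared.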

    Quantum Public Key Distribution using Randomized Glauber States

    State-of-the-art Quantum Key Distribution (QKD) is based on the uncertainty principle governing quantum measurements of qubits and is theoretically proven to be unconditionally secure. Over the past three decades, QKD has been explored with single photons as the information carrier. More recently, attention has shifted towards using weak coherent laser pulses as the information carrier. In this paper, we propose a novel quantum key distribution mechanism over a pure optical channel using randomized Glauber states. The proposed mechanism closely resembles a quantum mechanical implementation of the public key envelope idea. For the proposed solution, we explore physical countermeasures to provide path authentication and to avoid man-in-the-middle attacks. Other attack vectors can also be effectively mitigated by leveraging QPKE, the uncertainty principle, and the DPSK modulation technique.

    Comment: 6 pages, 4 figures; presented and published by QCE202

    Bayesian Hierarchical Modelling for Tailoring Metric Thresholds

    Software is highly contextual. While there are cross-cutting `global' lessons, individual software projects exhibit many `local' properties. This data heterogeneity makes drawing local conclusions from global data dangerous. A key research challenge is to construct locally accurate prediction models that are informed by global characteristics and data volumes. Previous work has tackled this problem using clustering and transfer learning approaches, which identify locally similar characteristics. This paper applies a simpler approach known as Bayesian hierarchical modeling. We show that hierarchical modeling supports cross-project comparisons while preserving local context. To demonstrate the approach, we conduct a conceptual replication of an existing study on setting software metric thresholds. Our emerging results show that our hierarchical model reduces prediction error by up to 50% compared to a global approach.

    Comment: Short paper, published at MSR '18: 15th International Conference on Mining Software Repositories, May 28--29, 2018, Gothenburg, Sweden
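    The partial-pooling intuition behind hierarchical modeling can be illustrated with a simple closed-form shrinkage estimator. The data, the variance parameters, and the function name are invented for illustration; the paper fits a full Bayesian hierarchical model rather than this sketch:

```python
def partial_pool(project_samples, tau2=1.0, sigma2=4.0):
    """Shrink each project's mean metric toward the global mean.

    tau2 plays the role of between-project variance and sigma2 the
    within-project variance; more local data means less shrinkage.
    """
    all_values = [v for vs in project_samples.values() for v in vs]
    global_mean = sum(all_values) / len(all_values)
    pooled = {}
    for name, vs in project_samples.items():
        n = len(vs)
        local_mean = sum(vs) / n
        # Weight local evidence against the global prior.
        w = (n / sigma2) / (n / sigma2 + 1.0 / tau2)
        pooled[name] = w * local_mean + (1 - w) * global_mean
    return pooled

thresholds = partial_pool({
    "small_project": [12.0, 15.0],   # few data points: pulled toward global
    "large_project": [30.0] * 50,    # ample data: stays near its own mean
})
```

    The data-poor project borrows strength from the global pool, while the data-rich project keeps its local estimate, which is the "global data, local conclusions" trade-off the paper targets.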

    Do Stack Traces Help Developers Fix Bugs?

    A widely shared belief in the software engineering community is that stack traces are much sought after by developers to support them in debugging. But limited empirical evidence is available to confirm the value of stack traces to developers. In this paper, we seek to provide such evidence by conducting an empirical study on the usage of stack traces by developers from the ECLIPSE project. Our results provide strong evidence to this effect and also throw light on some of the patterns in bug fixing using stack traces. We expect the findings of our study to further emphasize the importance of adding stack traces to bug reports, and we hope that in the future software vendors will provide more support in their products to help general users make such information available when filing bug reports.

    Assessing Code Authorship: The Case of the Linux Kernel

    Code authorship is key information in large-scale open source systems. Among other things, it allows maintainers to assess division of work and identify key collaborators. Interestingly, open-source communities lack guidelines on how to manage authorship. This could be mitigated by setting out to build an empirical body of knowledge on how authorship-related measures evolve in successful open-source communities. Toward that goal, we perform a case study on the Linux kernel. Our results show that: (a) only a small portion of developers (26%) makes significant contributions to the code base; (b) the distribution of the number of files per author is highly skewed --- a small group of top authors (3%) is responsible for hundreds of files, while most authors (75%) are responsible for at most 11 files; (c) most authors (62%) have a specialist profile; (d) authors with a high number of co-authorship connections tend to collaborate with others with fewer connections.

    Comment: Accepted at the 13th International Conference on Open Source Systems (OSS). 12 pages
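    The kind of files-per-author skew measured in the study can be computed from per-file authorship records with a few lines of Python. The records below are hypothetical stand-ins for data extracted from a version-control history:

```python
from collections import Counter

# Hypothetical (file, author) ownership pairs, as might be mined from a
# repository's history; not data from the actual Linux kernel study.
ownership = [
    ("sched.c", "alice"), ("fork.c", "alice"), ("exit.c", "alice"),
    ("tty.c", "bob"), ("vt.c", "bob"),
    ("ext4.c", "carol"),
]

files_per_author = Counter(author for _, author in ownership)
total_files = len(ownership)

# Share of the code base owned by the single most prolific author: a simple
# way to surface the concentration the study reports.
top_author, top_files = files_per_author.most_common(1)[0]
top_share = top_files / total_files
```

    Sorting `files_per_author.most_common()` gives the full skewed distribution, from which cutoffs like "top 3% of authors" can be read off directly.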

    A novel model for hourly PM2.5 concentration prediction based on CART and EELM

    Hourly PM2.5 concentrations exhibit multiple change patterns. For hourly PM2.5 concentration prediction, it is beneficial to split the whole dataset into several subsets with similar properties and to train a local prediction model for each subset. However, methods based on local models need to resolve the global-local duality. In this study, a novel prediction model based on classification and regression tree (CART) and ensemble extreme learning machine (EELM) methods is developed to split the dataset into subsets in a hierarchical fashion and to build a prediction model for each leaf. First, CART is used to split the dataset by constructing a shallow hierarchical regression tree. Then, at each node of the tree, EELM models are built using the training samples of the node, and hidden neuron numbers are selected to minimize validation errors on the leaves of the sub-tree rooted at that node. Finally, for each leaf of the tree, a global EELM and the local EELMs on the path from the root to the leaf are compared, and the one with the smallest validation error on the leaf is chosen. The meteorological data of the Yancheng urban area and the air pollutant concentration data from the City Monitoring Centre are used to evaluate the developed method. The experimental results demonstrate that the method addresses the global-local duality and performs better than global models, including random forest (RF), v-support vector regression (v-SVR), and EELM, as well as other local models based on season and k-means clustering. The new model improves the capability of handling multiple change patterns.
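    The per-leaf model selection in the final step can be sketched as follows. Constant mean predictors stand in for the EELMs, and the leaf names and numbers are made up; the point is only the selection rule, which keeps whichever candidate has the smaller validation error on each leaf:

```python
def mean_of(values):
    return sum(values) / len(values)

def val_error(pred, val_y):
    """Mean squared error of a constant prediction on validation targets."""
    return sum((pred - y) ** 2 for y in val_y) / len(val_y)

def select_per_leaf(global_train_y, leaves):
    """Pick 'local' or 'global' per leaf by validation error.

    leaves maps a leaf name to (local_train_y, val_y). Constant mean
    predictors stand in for the EELMs trained at each tree node.
    """
    global_pred = mean_of(global_train_y)
    chosen = {}
    for leaf, (local_train_y, val_y) in leaves.items():
        local_pred = mean_of(local_train_y)
        if val_error(local_pred, val_y) < val_error(global_pred, val_y):
            chosen[leaf] = "local"
        else:
            chosen[leaf] = "global"
    return chosen

choice = select_per_leaf(
    [10, 20, 30, 40],                     # global training targets (mean 25)
    {
        "winter_leaf": ([38, 42], [40, 41]),  # local pattern far from global
        "mixed_leaf": ([6, 46], [24, 26]),    # local mean close to global
    },
)
```

    A leaf whose data follow a distinct local pattern keeps its local model, while a leaf indistinguishable from the global behavior falls back to the global one, which is how the scheme resolves the global-local duality.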

    Evaluating Process Quality Based on Change Request Data – An Empirical Study of the Eclipse Project

    The information routinely collected in change request management systems contains valuable information for monitoring process quality. However, this data is currently utilized in a very limited way. This paper presents an empirical study of process quality in the product portfolio of the Eclipse project. It is based on a systematic approach for evaluating process quality characteristics using change request data. Results of the study offer insights into the development process of Eclipse. Moreover, the study allows assessing the applicability and limitations of the proposed approach for the evaluation of process quality.