POPE: Partial Order Preserving Encoding
Recently there has been much interest in performing search queries over encrypted data to enable functionality while protecting sensitive data. One particularly efficient mechanism for executing such queries is order-preserving encryption/encoding (OPE), which results in ciphertexts that preserve the relative order of the underlying plaintexts, thus allowing range and comparison queries to be performed directly on ciphertexts. In this paper, we propose an alternative approach to range queries over encrypted data that is optimized to support insert-heavy workloads, as are common in "big data" applications, while still maintaining search functionality and achieving stronger security.
Specifically, we propose a new primitive called partial order preserving
encoding (POPE) that achieves ideal OPE security with frequency hiding and also
leaves a sizable fraction of the data pairwise incomparable. Using only O(1) persistent and O(n^ε) non-persistent client storage for 0 < ε < 1, our POPE scheme provides extremely fast batch insertion consisting of a single round, and efficient search with O(1) amortized cost for up to O(n^(1-ε)) search queries. This improved security and performance makes our scheme better suited for today's insert-heavy databases.

Comment: Appears in ACM CCS 2016 Proceedings
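To make the order-preserving property concrete, the toy Python sketch below (not the POPE construction itself; all helper names are made up for illustration) builds a strictly increasing random encoding under which a range query can be answered by comparing ciphertexts alone. POPE deliberately weakens exactly this property by leaving many ciphertext pairs incomparable.

    # Toy illustration of the order-preserving property that OPE provides and
    # that POPE relaxes; hypothetical names, not the paper's scheme.
    import random

    def keygen(domain_size, seed=0):
        """Sample a random strictly increasing map {0..domain_size-1} -> ints."""
        rng = random.Random(seed)
        enc, total = [], 0
        for _ in range(domain_size):
            total += rng.randint(1, 1000)   # positive gaps keep the map increasing
            enc.append(total)
        return enc                           # enc[m] is the "ciphertext" of m

    def encrypt(key, m):
        return key[m]

    def range_query(key, encrypted_rows, lo, hi):
        """Select rows whose ciphertext lies in [Enc(lo), Enc(hi)];
        every comparison happens on ciphertexts only."""
        c_lo, c_hi = encrypt(key, lo), encrypt(key, hi)
        return [c for c in encrypted_rows if c_lo <= c <= c_hi]

    key = keygen(100)
    rows = [encrypt(key, m) for m in [5, 17, 42, 63, 90]]
    print(range_query(key, rows, 10, 70))    # ciphertexts of 17, 42, 63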
Quantum resource estimates for computing elliptic curve discrete logarithms
We give precise quantum resource estimates for Shor's algorithm to compute
discrete logarithms on elliptic curves over prime fields. The estimates are
derived from a simulation of a Toffoli gate network for controlled elliptic
curve point addition, implemented within the framework of the quantum computing
software tool suite LIQUi|⟩. We determine circuit implementations for
reversible modular arithmetic, including modular addition, multiplication and
inversion, as well as reversible elliptic curve point addition. We conclude
that elliptic curve discrete logarithms on an elliptic curve defined over an n-bit prime field can be computed on a quantum computer with at most 9n + 2⌈log₂(n)⌉ + 10 qubits using a quantum circuit of at most 448 n³ log₂(n) + 4090 n³ Toffoli gates. We are able to classically simulate the
Toffoli networks corresponding to the controlled elliptic curve point addition
as the core piece of Shor's algorithm for the NIST standard curves P-192,
P-224, P-256, P-384 and P-521. Our approach allows gate-level comparisons to
recent resource estimates for Shor's factoring algorithm. The results also
support estimates given earlier by Proos and Zalka and indicate that, for
current parameters at comparable classical security levels, the number of
qubits required to tackle elliptic curves is less than for attacking RSA,
suggesting that indeed ECC is an easier target than RSA.Comment: 24 pages, 2 tables, 11 figures. v2: typos fixed and reference added.
ASIACRYPT 201
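For reference, the arithmetic that the paper's reversible Toffoli networks implement is ordinary elliptic curve point addition over a prime field. The classical (non-reversible) Python sketch below shows affine point addition on a small toy curve; the curve parameters are illustrative and are not one of the NIST curves, and modular inverses use pow(x, -1, p) (Python 3.8+).

    # Classical affine point addition on y^2 = x^3 + ax + b over F_p.
    def ec_add(P, Q, a, p):
        """Add two affine points; None represents the point at infinity."""
        if P is None:
            return Q
        if Q is None:
            return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                                        # P + (-P) = O
        if P == Q:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
        x3 = (lam * lam - x1 - x2) % p
        y3 = (lam * (x1 - x3) - y1) % p
        return (x3, y3)

    # Toy curve y^2 = x^3 + 2x + 2 over F_17 with generator G = (5, 1).
    p, a = 17, 2
    G = (5, 1)
    P = G
    for k in range(2, 6):
        P = ec_add(P, G, a, p)
        print(k, P)        # successive multiples kG: (6,3), (10,6), ...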
Binary decision diagrams for fault tree analysis
This thesis develops a new approach to fault tree analysis, namely the Binary Decision
Diagram (BDD) method. Conventional qualitative fault tree analysis techniques such
as the "top-down" or "bottom-up" approaches are now so well developed that further
refinement is unlikely to result in vast improvements in terms of their computational
capability. The BDD method offers potential gains in speed and efficiency when determining the minimal cut sets. Further, the nature of the binary decision diagram makes it better suited to Boolean manipulation. The
BDD method has been programmed and successfully applied to a number of
benchmark fault trees.
The analysis capabilities of the technique have been extended such that all quantitative
fault tree top event parameters, which can be determined by conventional Kinetic Tree
Theory, can now be derived directly from the BDD. Parameters such as the top event
probability, frequency of occurrence and expected number of occurrences can be
calculated exactly using this method, removing the need for the approximations
previously required.
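A minimal Python sketch of this exact quantitative step, assuming a hand-built BDD for the toy fault tree TOP = A AND (B OR C) and illustrative basic-event probabilities: the top event probability follows from Shannon decomposition at each BDD node, with no approximation.

    # Terminal nodes are 0/1; an internal node is (variable, low branch, high branch).
    C_node = ('C', 0, 1)          # C occurs -> 1, otherwise -> 0
    B_node = ('B', C_node, 1)     # B occurs -> OR gate satisfied; otherwise check C
    TOP    = ('A', 0, B_node)     # A must occur for the top event

    basic_event_prob = {'A': 0.01, 'B': 0.05, 'C': 0.02}   # illustrative values

    def top_event_probability(node):
        """P(node) = (1 - q_x) * P(low) + q_x * P(high), evaluated exactly."""
        if node == 0:
            return 0.0
        if node == 1:
            return 1.0
        var, low, high = node
        q = basic_event_prob[var]
        return (1 - q) * top_event_probability(low) + q * top_event_probability(high)

    print(top_event_probability(TOP))   # 0.01 * (1 - 0.95 * 0.98) = 0.00069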
Thus the BDD method is proven to have advantages in terms of both accuracy and
efficiency. Initiator/enabler event analysis and importance measures have been
incorporated to extend this method into a full analysis procedure.
Software defect prediction: do different classifiers find the same defects?
During the last 10 years, hundreds of different defect prediction models have been published. The performance of the classifiers used in these models is reported to be similar, with models rarely performing above the predictive performance ceiling of about 80% recall. We investigate the individual defects that four classifiers predict and analyse the level of prediction uncertainty produced by these classifiers. We perform a sensitivity analysis to compare the performance of Random Forest, Naïve Bayes, RPart and SVM classifiers when predicting defects in NASA, open source and commercial datasets. The defect predictions that each classifier makes are captured in a confusion matrix and the prediction uncertainty of each classifier is compared. Despite similar predictive performance values for these four classifiers, each detects different sets of defects. Some classifiers are more consistent in predicting defects than others. Our results confirm that a unique subset of defects can be detected by specific classifiers. However, while some classifiers are consistent in the predictions they make, other classifiers vary in their predictions. Given our results, we conclude that classifier ensembles with decision-making strategies not based on majority voting are likely to perform best in defect prediction.
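A hedged sketch of the kind of per-defect comparison described above, using scikit-learn on synthetic data rather than the NASA, open source, or commercial datasets, with DecisionTreeClassifier standing in for R's RPart:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC
    from sklearn.metrics import confusion_matrix

    # Imbalanced synthetic "defect" data: class 1 = defective module.
    X, y = make_classification(n_samples=1000, n_features=20,
                               weights=[0.8, 0.2], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    classifiers = {
        'RandomForest': RandomForestClassifier(random_state=0),
        'NaiveBayes':   GaussianNB(),
        'RPart-like':   DecisionTreeClassifier(random_state=0),
        'SVM':          SVC(),
    }

    found = {}   # defective test modules (indices) each classifier correctly flags
    for name, clf in classifiers.items():
        pred = clf.fit(X_tr, y_tr).predict(X_te)
        print(name, confusion_matrix(y_te, pred).ravel())     # tn, fp, fn, tp
        found[name] = {i for i, (t, p) in enumerate(zip(y_te, pred)) if t == p == 1}

    for name, hits in found.items():
        others = set.union(*(h for n, h in found.items() if n != name))
        print(name, 'uniquely detects', len(hits - others), 'defects')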
Developing Agent Based Modeling For Logic Programming And Reverse Analysis For Hopfield Network
For higher-order programming, a higher-order network architecture is necessary, since higher-order neural networks have a faster convergence rate, greater storage capacity, stronger approximation properties, and higher fault tolerance than lower-order neural networks. This thesis therefore employs a higher-order Hopfield network, combining logic programming and reverse analysis in the Hopfield network. The goal of performing logic programming based on the energy minimization scheme is to achieve the best global minimum; however, there is no guarantee of finding that minimum in the network. Boltzmann machines and the hyperbolic tangent activation function are therefore introduced to overcome this problem. To choose the most efficient method for obtaining the global minima among the Wan Abdullah method (which uses the McCulloch-Pitts updating rule in the Hopfield net), the Boltzmann machine, and the hyperbolic tangent activation function, a comparison table is constructed in this thesis. To carry out this work, agent-based modeling (ABM) is used, with NetLogo as the platform for logic programming and reverse analysis. ABM allows rapid development of models, easy addition of features, and user-friendly handling and coding. In the logic programming system, the results are analyzed not only in terms of the global minimum but also in terms of Hamming distance and central processing unit (CPU) time. In the reverse analysis system, the inherent relationships among the data are learned by extracting common patterns that exist in the data sets, so that unknown and unexpected relations can be discovered. Finally, real-life cases are examined by using ABM to run computer simulations in this thesis.
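As a rough illustration of the energy-minimization scheme referred to above, the Python sketch below runs asynchronous McCulloch-Pitts updates in a small Hopfield network with arbitrary symmetric weights (not weights derived from a particular logic program) and prints the Lyapunov energy, which never increases; a tanh-based update would replace the hard threshold.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 6
    W = rng.normal(size=(n, n))
    W = (W + W.T) / 2            # symmetric couplings
    np.fill_diagonal(W, 0)       # no self-connections
    b = rng.normal(size=n)       # local fields / thresholds

    def energy(s):
        """Lyapunov energy E = -1/2 s^T W s - b^T s."""
        return -0.5 * s @ W @ s - b @ s

    s = rng.choice([-1.0, 1.0], size=n)      # random bipolar start state
    for sweep in range(10):
        for i in rng.permutation(n):         # asynchronous neuron updates
            h = W[i] @ s + b[i]              # local field at neuron i
            s[i] = 1.0 if h >= 0 else -1.0   # McCulloch-Pitts rule; np.tanh(h/T)
                                             # would give the smoother variant
        print(sweep, energy(s))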
A P2P Networking Simulation Framework For Blockchain Studies
Recently, blockchain has become a disruptive technology for building distributed applications (DApps). Many researchers and institutions have devoted their resources to the development of more effective blockchain technologies and innovative applications. However, with the limitation of computing power and financial resources, it is hard for researchers to deploy and test their blockchain innovations in a large-scale physical network.
Hence, in this dissertation, we propose a peer-to-peer (P2P) networking simulation framework, which allows researchers to deploy and test (simulate) a large-scale blockchain system with thousands of nodes on a single computer. We systematically review existing research and techniques for blockchain simulators and evaluate their advantages and disadvantages.
To achieve generality and flexibility, our simulation framework lays the foundation for simulating blockchain networks with different scales and protocols. We verify the framework by deploying three of the most prominent blockchain systems (Bitcoin, Ethereum, and IOTA) in it.
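The following Python sketch, with made-up topology and latency figures, illustrates the single-machine, discrete-event style of simulation such a framework relies on: simulated nodes gossip a block over random links, and the propagation delay is measured without any physical network.

    import heapq, random

    random.seed(1)
    NUM_NODES, DEGREE = 1000, 8
    peers = {n: random.sample([m for m in range(NUM_NODES) if m != n], DEGREE)
             for n in range(NUM_NODES)}

    events = [(0.0, 0, 'block-1')]     # (time, receiving node, block id): node 0 mines
    received = {}                      # node -> time it first saw the block

    while events:
        t, node, blk = heapq.heappop(events)
        if node in received:
            continue                   # duplicate delivery, ignore
        received[node] = t
        for peer in peers[node]:       # relay to neighbours with random link latency
            latency = random.uniform(0.05, 0.3)
            heapq.heappush(events, (t + latency, peer, blk))

    print('nodes reached:', len(received), 'of', NUM_NODES)
    print('propagation delay (max):', round(max(received.values()), 2), 's')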
We demonstrate the effectiveness of our simulation framework with the following three case studies: (a) improving the performance of a blockchain by changing key parameters or deploying a new directed acyclic graph (DAG) structure protocol; (b) testing and analyzing the attack response of a Tangle-based blockchain (IOTA); and (c) establishing and deploying a new smart grid bidding system for the demand side in our simulation framework.
This dissertation also points out a series of open issues for future research.
An instruction systolic array architecture for multiple neural network types
Modern electronic systems, especially sensor and imaging systems, are beginning to
incorporate their own neural network subsystems. In order for these neural systems to learn in
real-time they must be implemented using VLSI technology, with as much of the learning
processes incorporated on-chip as is possible. The majority of current VLSI implementations
directly implement a series of neural processing cells, which can be connected together in an
arbitrary fashion. Many do not perform the entire neural learning process on-chip, instead
relying on other external systems to carry out part of the computation requirements of the
algorithm.
The work presented here utilises two dimensional instruction systolic arrays in an attempt to
define a general neural architecture which is closer to the biological basis of neural networks - it
is the synapses themselves, rather than the neurons, that have dedicated processing units. A
unified architecture is described which can be programmed at the microcode level in order to
facilitate the processing of multiple neural network types.
An essential part of neural network processing is the neuron activation function, which can
range from a sequential algorithm to a discrete mathematical expression. The architecture
presented can easily carry out the sequential functions, and introduces a fast method of
mathematical approximation for the more complex functions. This can be evaluated on-chip,
thus implementing the entire neural process within a single system.
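As an example of the kind of fast on-chip approximation meant here, the Python sketch below evaluates a piecewise-linear approximation of the sigmoid using a well-known published set of breakpoints; it is not claimed to be the thesis's exact method.

    import math

    # (segment lower bound, slope, intercept) for x >= 0; mirrored for x < 0.
    SEGMENTS = [(0.0, 0.25, 0.5), (1.0, 0.125, 0.625),
                (2.375, 0.03125, 0.84375), (5.0, 0.0, 1.0)]

    def sigmoid_pwl(x):
        """Hardware-friendly piecewise-linear sigmoid approximation."""
        sign, x_abs = (1.0, x) if x >= 0 else (-1.0, -x)
        for lo, slope, icpt in reversed(SEGMENTS):
            if x_abs >= lo:
                y = slope * x_abs + icpt
                break
        return y if sign > 0 else 1.0 - y   # sigmoid(-x) = 1 - sigmoid(x)

    for x in [-6, -2, -0.5, 0, 0.5, 2, 6]:
        print(x, round(sigmoid_pwl(x), 3), round(1 / (1 + math.exp(-x)), 3))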
VHDL circuit descriptions for the chip have been generated, and the systolic processing
algorithms and associated microcode instruction set for three different neural paradigms have
been designed. A software simulator of the architecture has been written, giving results for
several common applications in the field.