
    From Known-Plaintext Security to Chosen-Plaintext Security

    We present a new encryption mode for block ciphers. The mode is efficient and is secure against chosen-plaintext attack (CPA) as soon as the underlying symmetric cipher is secure against known-plaintext attack (KPA). We prove that known (and widely used) encryption modes such as CBC mode and counter mode do not have this property. In particular, we prove that CBC mode using a KPA-secure cipher is KPA secure but need not be CPA secure, and we prove that counter mode using a KPA-secure cipher need not even be KPA secure. The analysis is carried out in a concrete security framework.
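
    To make the two modes concrete, here is a minimal Python sketch of CBC and counter mode over an abstract block cipher. The toy cipher `E` is a placeholder assumption (it is not secure); the point is only the data flow that the separation results above turn on.

```python
# Minimal sketch of CBC and counter (CTR) mode over an abstract block
# cipher E. The toy cipher below is a placeholder and is NOT secure;
# any 16-byte block cipher could be dropped in. Zero-padding is used
# for brevity.
import os

BLOCK = 16

def E(key: bytes, block: bytes) -> bytes:
    """Toy stand-in 'block cipher': XOR with the key (illustrative only)."""
    return bytes(b ^ k for b, k in zip(block, key))

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # C_i = E(K, P_i XOR C_{i-1}), with C_0 = IV: every cipher input is
    # masked by the previous (unpredictable) ciphertext block.
    iv = os.urandom(BLOCK)
    out, prev = [iv], iv
    for i in range(0, len(plaintext), BLOCK):
        block = plaintext[i:i + BLOCK].ljust(BLOCK, b"\x00")
        prev = E(key, xor(block, prev))
        out.append(prev)
    return b"".join(out)

def ctr_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # C_i = P_i XOR E(K, nonce || i): E is only ever applied to
    # predictable counter inputs, which is why even KPA security of E
    # need not carry over.
    nonce = os.urandom(BLOCK // 2)
    out = [nonce]
    for ctr, i in enumerate(range(0, len(plaintext), BLOCK)):
        keystream = E(key, nonce + ctr.to_bytes(BLOCK // 2, "big"))
        out.append(xor(plaintext[i:i + BLOCK], keystream))
    return b"".join(out)
```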

    UC and EUC Weak Bit-Commitments Using Seal-Once Tamper-Evidence

    Based on tamper-evident devices, i.e., a type of distinguishable sealed envelope, we put forward weak bit-commitment protocols which are UC-secure. These commitments are weak in the sense that one party may legitimately cheat. Unlike in several similar lines of work, the party is not obliged to cheat but has the ability to cheat if and when needed. The empowered party is the sender, i.e., the protocols are sender-strong. We motivate the construction of such primitives at both the theoretical and the practical level. Such protocols complete the picture of the existing receiver-strong weak bit-commitments based on tamper evidence. We also show that the existing receiver-strong protocols of this kind are not EUC-secure, i.e., they are only UC-secure. Further, we put forward a second formalisation of tamper-evident distinguishable envelopes which renders both those protocols and the protocols herein EUC-secure. We finally establish most of the implication relations between the tamper-evident devices, our weak sender-strong commitments, the existing weak receiver-strong commitments, and standard commitments. The mechanisms at the foundation of these primitives are lightweight, and the resulting protocols are end-to-end humanly verifiable.
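
    As a rough illustration of the underlying primitive, the following toy Python model sketches a seal-once distinguishable envelope and the commit/reveal flow built on it. The class and its interface are expository assumptions, not the paper's formalisation.

```python
# Toy model of a seal-once tamper-evident envelope and a weak bit
# commitment built on it. Illustrative abstraction only.
class SealOnceEnvelope:
    """Distinguishable envelope: sealing is one-shot, opening is evident."""
    def __init__(self, envelope_id: int):
        self.envelope_id = envelope_id   # envelopes are distinguishable
        self.content = None
        self.sealed = False
        self.opened = False              # the tamper-evidence flag

    def seal(self, bit: int) -> None:
        assert not self.sealed, "seal-once: cannot reseal"
        self.content, self.sealed = bit, True

    def open(self) -> int:
        assert self.sealed
        self.opened = True               # opening leaves evidence
        return self.content

def commit(bit: int, envelope_id: int) -> SealOnceEnvelope:
    env = SealOnceEnvelope(envelope_id)
    env.seal(bit)
    return env                           # handed to the receiver

def reveal(env: SealOnceEnvelope) -> int:
    # The receiver first checks the tamper evidence, then learns the bit.
    # The commitment is *weak* and sender-strong: the sender is not
    # obliged to cheat, but may legitimately abort or refuse to reveal.
    assert not env.opened, "tamper evidence: envelope was opened early"
    return env.open()
```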

    Sharing with Live Migration Energy Optimization Task Scheduler for Cloud Computing Datacentres

    The use of cloud computing is expanding, and it is becoming the driver of innovation for companies serving their customers around the world. Much attention has recently been drawn to the huge amount of energy consumed within datacentres, while the energy consumption of the remaining cloud components has been neglected. Energy consumption should therefore be reduced in a way that minimizes performance losses, achieves the target battery lifetime, satisfies performance requirements, minimizes CO2 emissions, maximizes profit, and maximizes resource utilization. Power consumption in cloud computing datacentres can be reduced in many ways, such as managing or utilizing the resources, controlling redundancy, relocating datacentres, improving applications, or dynamic voltage and frequency scaling. One of the most effective ways to reduce power is a scheduling technique that finds the best task execution order based on the users' demands, with minimum execution time and cloud resources. Designing an effective and efficient task scheduling technique driven by user requirements is a considerable challenge in a cloud environment. Scheduling is not an easy task because a datacentre contains dissimilar hardware with different capacities; to improve resource utilization, an efficient scheduling algorithm must be applied to the incoming tasks to achieve efficient allocation of computing resources and power optimization. The scheduler must maintain the balance between quality of service and fairness among jobs so that overall efficiency is increased.

    The aim of this project is to propose a novel method for optimizing energy usage in cloud computing environments while satisfying the Quality of Service (QoS) requirements and the regulations of the Service Level Agreement (SLA). Applying a power- and resource-optimised scheduling algorithm helps to control and improve the mapping between the datacentre servers and the incoming tasks, achieving optimal deployment of the datacentre resources, good computing efficiency, network load minimization, and reduced energy consumption in the datacentre.

    This thesis explores energy-aware cloud computing datacentre structures with diverse scheduling heuristics and proposes a novel job scheduling technique with sharing and live migration based on file locality (SLM), aiming to maximize efficiency and save the power consumed in the datacentre through bandwidth usage utilization, minimizing the processing time and the system's total makespan. The proposed SLM energy-efficient scheduling strategy has four basic algorithms: 1) job classifier, 2) SLM job scheduler, 3) dual-fold VM virtualization, and 4) VM threshold margins and consolidation. The SLM job classifier categorises the incoming set of user requests to the datacentre into two different queues based on the request type and the source file needed to process it; the processing time of each job varies with the job type and its number of instructions. The second algorithm, the SLM scheduler, dispatches jobs from both queues according to job arrival time and controls the allocation process to the most appropriate available VM based on job similarity, according to a predefined synchronized job characteristic table (SJC). The SLM scheduler uses a replicated host infrastructure to save the energy wasted by idle hosts: it maximizes the utilization of the basic hosts for as long as the system can handle the workflow, while keeping the replicated hosts switched off. The third algorithm, the dual-fold VM algorithm, divides the active VMs into top- and low-level slots so that similar jobs are allocated concurrently, which maximizes host utilization at high workload and reduces the total makespan. The VM threshold margins and consolidation algorithm sets upper and lower threshold margins as a trigger for VM consolidation and load balancing among the running VMs, and deploys a continuous provisioning scheme for detecting overloaded and underutilized VMs to maintain and control the system's workload balance. Consolidation and load balancing are achieved by performing a series of dynamic live migrations which provide auto-scaling of the servers within the datacentres.

    This thesis begins with an overview of cloud computing, then reviews conceptual cloud resource management strategies with a classification of scheduling heuristics. Following this, a competitive analysis of energy-efficient scheduling algorithms and related work is presented. The novel SLM algorithm is proposed and evaluated using the CloudSim toolkit under a number of scenarios; compared with the Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) algorithms, the results show a significant improvement in energy usage and in the total makespan, i.e., the total time needed to finish processing all the tasks.
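
    A minimal sketch, under assumed names and threshold values, of two of the four SLM components described above: the job classifier that splits requests into two queues, and the threshold-margin check that triggers consolidation via live migration.

```python
# Illustrative sketch of the SLM job classifier (Algorithm 1) and the
# VM threshold-margin trigger (Algorithm 4). Field names and the 80%/20%
# margins are assumptions for exposition, not values from the thesis.
from collections import deque
from dataclasses import dataclass

@dataclass
class Job:
    job_id: int
    job_type: str        # e.g. "compute" vs "data"
    source_file: str     # file locality drives queue placement
    instructions: int    # determines the job's processing time

class SLMClassifier:
    """Route each incoming request to one of two queues by job type."""
    def __init__(self):
        self.queue_a = deque()   # e.g. jobs bound to a shared source file
        self.queue_b = deque()

    def classify(self, job: Job) -> None:
        (self.queue_a if job.job_type == "data" else self.queue_b).append(job)

UPPER, LOWER = 0.80, 0.20        # assumed threshold margins

def consolidation_action(vm_utilization: float) -> str:
    """Threshold margins trigger live-migration decisions for a VM."""
    if vm_utilization > UPPER:
        return "migrate-out"     # overloaded: move a job to another VM
    if vm_utilization < LOWER:
        return "consolidate"     # underutilized: migrate off, power down host
    return "steady"
```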

    Modeling Quantum-Safe Authenticated Key Establishment, and an Isogeny-Based Protocol

    We propose a security model for authenticated key establishment in the quantum setting. Our model is the first for authenticated key establishment that allows for quantum superpositions of queries. The model builds on the classical Canetti-Krawczyk model but allows quantum interactions between the adversary and quantum oracles that emulate classical parties. We demonstrate that this new security definition is satisfiable by giving a generic construction from simpler cryptographic primitives, as well as a specific protocol that is secure in the quantum random oracle model under the supersingular isogeny decisional Diffie-Hellman (SIDH) assumption.
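
    The generic construction is not specified in this abstract; as a hedged illustration of the general shape of such constructions, here is a minimal sketch of a standard signed-KEM authenticated key exchange, where `kem` and `sig` are assumed abstract primitives (a post-quantum KEM such as an isogeny-based one could instantiate `kem`). This is not the paper's protocol.

```python
# Sketch of a signed-KEM authenticated key exchange. The `kem` and `sig`
# objects are assumed duck-typed primitives: kem.keygen() -> (pk, sk),
# kem.encaps(pk) -> (ct, shared), kem.decaps(sk, ct) -> shared,
# sig.sign(sk, msg) -> sig, sig.verify(pk, msg, sig) -> bool.
import hashlib

def ake_initiator_round1(kem, sig, my_sig_sk):
    """Round 1: send an ephemeral KEM public key signed under a long-term key."""
    epk, esk = kem.keygen()
    return (epk, sig.sign(my_sig_sk, epk)), esk

def ake_responder(kem, sig, peer_sig_pk, msg1):
    """Verify the peer, encapsulate to its ephemeral key, derive the session key."""
    epk, signature = msg1
    if not sig.verify(peer_sig_pk, epk, signature):
        raise ValueError("authentication failed")
    ct, shared = kem.encaps(epk)
    # Bind the session key to the transcript so sessions are pairwise distinct.
    return ct, hashlib.sha256(b"AKE" + epk + ct + shared).digest()

def ake_initiator_round2(kem, esk, epk, ct):
    """Recover the shared secret and derive the same session key."""
    shared = kem.decaps(esk, ct)
    return hashlib.sha256(b"AKE" + epk + ct + shared).digest()
```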

    Computational Indistinguishability Amplification: Tight Product Theorems for System Composition

    Computational indistinguishability amplification is the problem of strengthening cryptographic primitives whose security is defined by bounding the distinguishing advantage of an efficient distinguisher. Examples include pseudorandom generators (PRGs), pseudorandom functions (PRFs), and pseudorandom permutations (PRPs). The literature on computational indistinguishability amplification consists of only a few isolated results. Yao's XOR-lemma implies, by a hybrid argument, that no efficient distinguisher has advantage better than (roughly) $n 2^{m-1}\delta^m$ in distinguishing the XOR of $m$ independent $n$-bit PRG outputs $S_1,\ldots,S_m$ from uniform randomness if no efficient distinguisher has advantage more than $\delta$ in distinguishing $S_i$ from a uniform $n$-bit string. The factor $2^{m-1}$ allows for security amplification only if $\delta < \frac{1}{2}$: for the case of PRFs, a random-offset XOR-construction of Myers was the first result to achieve \emph{strong} security amplification, i.e., also for $\frac{1}{2} \le \delta < 1$. This paper proposes a systematic treatment of computational indistinguishability amplification. We generalize and improve the above product theorem for the XOR of PRGs along five axes. First, we prove the \emph{tight} information-theoretic bound $2^{m-1}\delta^m$ (without the factor $n$) also for the computational setting. Second, we prove results for \emph{interactive} systems (e.g. PRFs or PRPs). Third, we consider the general class of \emph{neutralizing combination constructions}, not just XOR. As an application this yields the first indistinguishability amplification results for the cascade of PRPs (i.e., block ciphers), converting a weak PRP into an arbitrarily strong PRP, both for single-sided and two-sided queries. Fourth, \emph{strong} security amplification is achieved for a subclass of neutralizing constructions which includes the construction of Myers as a special case. As an application we obtain highly practical optimal security amplification for block ciphers, simply by adding random offsets at the input and output of the cascade. Fifth, we show strong security amplification also for \emph{weakened assumptions}, such as security against random-input (as opposed to chosen-input) attacks. A key technique is a generalization of Yao's XOR-lemma to (interactive) systems, which is of independent interest.
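
    A minimal sketch of the neutralizing XOR combiner and the tight bound discussed above; the `prfs` argument stands in for any concretely keyed PRF family, and the names are illustrative.

```python
# XOR combiner for m independently keyed PRFs, plus the tight product
# bound 2^(m-1) * delta^m on the distinguishing advantage.
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def xor_combiner(prfs, keys, x: bytes) -> bytes:
    """F(x) = F_1(k_1, x) XOR ... XOR F_m(k_m, x): a neutralizing
    combination -- the output is pseudorandom if any one F_i is."""
    return reduce(xor_bytes, (f(k, x) for f, k in zip(prfs, keys)))

def amplified_advantage_bound(delta: float, m: int) -> float:
    """Tight bound 2^(m-1) * delta^m. Amplifies only for delta < 1/2:
    e.g. m=3, delta=0.4 gives 0.256 < 0.4, but delta=0.6 gives 0.864."""
    return 2 ** (m - 1) * delta ** m
```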

    Privacy and Security Assessment of Biometric Template Protection


    Short Undeniable Signatures: Design, Analysis, and Applications

    Digital signatures are one of the main achievements of public-key cryptography and constitute a fundamental tool for ensuring data authentication. Although their universal verifiability has the advantage of facilitating verification by the recipient, this property may have undesirable consequences when dealing with sensitive and private information. Motivated by such considerations, undeniable signatures, whose verification requires the cooperation of the signer in an interactive way, were invented. This thesis is mainly devoted to the design and analysis of short undeniable signatures. Exploiting their online property, we can achieve signatures whose size scales fully with the security requirements. To this end, we develop a general framework based on the interpolation of group elements by a group homomorphism, leading to the design of a generic undeniable signature scheme. On the one hand, this paradigm allows us to consider some previous undeniable signature schemes in a unified setting; on the other hand, by selecting group homomorphisms with a small range, we obtain very short signatures. After providing theoretical results related to the interpolation of group homomorphisms, we develop interactive proofs in which the prover convinces a verifier of the interpolation (resp. non-interpolation) of some given points by a group homomorphism which he keeps secret. Based on these protocols, we devise our new undeniable signature scheme and formally prove its security. We theoretically analyze the special class of group characters on $\mathbb{Z}_n^*$. After studying algorithmic aspects of homomorphism evaluation, we compare the efficiency of different homomorphisms and show that the Legendre symbol leads to the fastest signature generation. We investigate potential applications based on the specific properties of our signature scheme. Finally, in a topic closely related to undeniable signatures, we revisit the designated confirmer signature of Chaum and formally prove the security of a generalized version.
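
    Since the Legendre symbol is singled out as the fastest homomorphism to evaluate, here is a minimal sketch of the standard Jacobi-symbol algorithm (the Legendre symbol generalised to odd composite moduli), computed via quadratic reciprocity without factoring $n$.

```python
# Jacobi symbol (a/n) for odd n > 0, via the standard binary
# quadratic-reciprocity algorithm; returns -1, 0, or 1.
def jacobi(a: int, n: int) -> int:
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:               # pull out factors of two:
            a //= 2                     # (2/n) = (-1)^((n^2-1)/8)
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                     # quadratic reciprocity: the sign
        if a % 4 == 3 and n % 4 == 3:   # flips iff both are 3 mod 4
            result = -result
        a %= n
    return result if n == 1 else 0      # 0 iff gcd(a, n) > 1

# Example: jacobi(2, 3) == -1, jacobi(4, 7) == 1.
```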

    Variants of Pseudo-deterministic Algorithms and Duality in TFNP

    We introduce a new notion of "faux-deterministic" algorithms for search problems in query complexity. Roughly, for a search problem $\mathcal{S}$, a faux-deterministic algorithm is a probability distribution $\mathcal{A}$ over deterministic algorithms $A \in \mathcal{A}$ such that no computationally bounded adversary making black-box queries to a sampled algorithm $A \sim \mathcal{A}$ can find an input $x$ on which $A$ fails to solve $\mathcal{S}$, i.e., $(x, A(x)) \notin \mathcal{S}$. Faux-deterministic algorithms are a relaxation of \emph{pseudo-deterministic} algorithms, which are randomized algorithms with the guarantee that for any given input $x$, the algorithm outputs a unique output $y_x$ with high probability. Pseudo-deterministic algorithms are statistically indistinguishable from deterministic algorithms, while faux-deterministic algorithms relax this statistical indistinguishability to computational indistinguishability. We prove that in the query model, every verifiable search problem that has a randomized algorithm also has a faux-deterministic algorithm. Combined with the pseudo-deterministic lower bound of Goldwasser et al. (CCC 2021), this immediately yields an exponential gap between pseudo-deterministic and faux-deterministic query complexities. We additionally show that our faux-deterministic algorithm is secure against quantum adversaries that can make black-box queries in superposition. We highlight two reasons to study faux-deterministic algorithms. First, for practical purposes, one can use a faux-deterministic algorithm instead of a pseudo-deterministic algorithm in most cases where the latter is required. Second, since efficient faux-deterministic algorithms exist even when pseudo-deterministic ones do not, their existence demonstrates a barrier to proving pseudo-deterministic lower bounds: lower bounds on pseudo-determinism must distinguish pseudo-determinism from faux-determinism. Finally, changing perspective to the adversary's viewpoint, we introduce a notion of a "dual problem" $\mathcal{S}^*$ for search problems $\mathcal{S}$: in the dual problem $\mathcal{S}^*$, the input is an algorithm $A$ purporting to solve $\mathcal{S}$, and the goal is to find an adverse input $x$ on which $A$ fails to solve $\mathcal{S}$. We discuss several properties in the query and Turing machine models showing that the new problem $\mathcal{S}^*$ is analogous to a dual of $\mathcal{S}$.
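
    One common way to realise such a distribution over deterministic algorithms, sketched below under assumed names: draw a hidden seed once, then derive a randomized solver's per-input coins from that seed. This is an illustrative heuristic (SHA-256 standing in for a PRF), not the paper's construction.

```python
# Derandomising a randomized search algorithm with a hidden seed: the
# sampled A is fully deterministic, and finding an input on which it
# fails requires predicting coins the adversary never sees.
import hashlib
import os
import random

def sample_faux_deterministic(randomized_solver):
    """Draw A ~ 𝒜: fix a hidden seed, return a fully deterministic A."""
    seed = os.urandom(32)
    def A(x: bytes):
        # Per-input coins derived from the hidden seed; SHA-256 is an
        # assumed PRF stand-in. A(x) is deterministic given (seed, x).
        coins = hashlib.sha256(seed + x).digest()
        rng = random.Random(coins)
        return randomized_solver(x, rng)
    return A
```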