
    Group Selection and Key Management Strategies for Ciphertext-Policy Attribute-Based Encryption

    Ciphertext-Policy Attribute-Based Encryption (CPABE) was introduced by Bethencourt, Sahai, and Waters as an improvement on Identity-Based Encryption: it allows fine-grained control of access to encrypted files by restricting access to users whose attributes satisfy the monotonic access tree of the encrypted file. Encrypted files can therefore be placed on an untrusted server without fear of malicious users accessing them, while each user holds a unique key, reducing the vulnerabilities associated with sharing one key among multiple users. However, because CPABE was designed to avoid trusted servers, key management strategies such as efficient renewal and immediate key revocation are inherently prevented. This reduces the security of the entire scheme: a user could maliciously keep a key after having an attribute changed or revoked, and use the old key to decrypt files that the new key should not grant access to. Additionally, the original CPABE implementation does not discuss the selection of the underlying bilinear pairing used as the scheme's cryptographic primitive. This thesis explores improvements to CPABE, both in the choice of bilinear group and in support for key management that does not rely on proxy servers while minimizing communication overhead. It was found that non-supersingular elliptic curves can be used for CPABE; Barreto-Naehrig curves allowed the fastest encryption and key generation in CHARM, but were the slowest for decryption due to the large size of the output group. Key management was performed using a key-insulation method, which provides helper keys that allow keys to be transformed across time periods, with revocation and renewal through key update. Unfortunately, this does not allow immediate revocation: revoked keys remain valid until the end of the time period in which they are revoked. Other key management methods are discussed to show that immediate key revocation is difficult without trusted servers controlling access.
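    As a rough illustration of the workflow the thesis benchmarks, the sketch below uses the Charm-crypto library's implementation of the Bethencourt-Sahai-Waters scheme. It is a minimal sketch, not the thesis's benchmarking code: the attribute and policy strings are invented, and which curve identifiers are available (e.g. 'SS512' for a supersingular curve, 'BN254' for a Barreto-Naehrig curve) depends on the local Charm/PBC build.

```python
# Minimal CPABE round trip in Charm; the curve string is the knob the
# thesis benchmarks (supersingular vs. Barreto-Naehrig, etc.).
from charm.toolbox.pairinggroup import PairingGroup, GT
from charm.schemes.abenc.abenc_bsw07 import CPabe_BSW07

group = PairingGroup('SS512')          # swap in 'BN254' to try a BN curve
cpabe = CPabe_BSW07(group)

(pk, mk) = cpabe.setup()               # public parameters and master key
sk = cpabe.keygen(pk, mk, ['DOCTOR', 'CARDIOLOGY'])   # user attribute key

msg = group.random(GT)                 # random target-group element as "message"
ct = cpabe.encrypt(pk, msg, '(doctor and cardiology)')  # monotone policy

rec = cpabe.decrypt(pk, sk, ct)        # succeeds iff attributes satisfy policy
assert rec == msg
```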

    TRUTHFUL ACCESS CONTROL OF GRANULAR AGENTS FOR CLOUD-BASED SERVICES

    A recently proposed access control model known as attribute-based access control is a good candidate to tackle the first problem. It not only provides anonymous authentication but also defines access control policies based on attributes of the requester, the environment, or the data object. Cloud computing has a variety of applications, such as data sharing, data storage, big data management, and medical record systems. The usual account/password-based authentication is not privacy preserving, yet it is well acknowledged that privacy is an essential property to be considered in cloud computing applications. The umbrella notion of key-insulated security was introduced to keep long-term keys in a physically secure but computationally limited device, while short-term secret keys are kept by users on a powerful but insecure device where the cryptographic computations take place. In this work, we propose a fine-grained two-factor access control protocol for web-based cloud computing services, using a lightweight security device. Our protocol supports fine-grained attribute-based access, which gives the system great flexibility to set different access policies according to different scenarios. At the same time, the privacy of the user is preserved.
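    The key-insulated split described above can be sketched as follows. This is a toy illustration, not the paper's construction: the long-term helper secret stays on the secure device and is used only to derive a fresh short-term key per time period; the class, method names, and the use of HMAC-SHA256 as the derivation function are our assumptions.

```python
# Toy key-insulation sketch: the long-term helper key never leaves the
# secure device; the host only ever holds the current period's short key.
import hmac, hashlib, os

class SecureDevice:
    """Physically secure, computationally limited device holding the helper key."""
    def __init__(self):
        self._helper_key = os.urandom(32)   # long-term secret, never exported

    def derive_period_key(self, period: int) -> bytes:
        # Short-term key for one time period; compromising it does not
        # expose the keys of other periods.
        return hmac.new(self._helper_key, f"period|{period}".encode(),
                        hashlib.sha256).digest()

# Host side: powerful but insecure machine where crypto actually runs.
device = SecureDevice()
sk_epoch_7 = device.derive_period_key(7)    # refreshed at each period boundary
tag = hmac.new(sk_epoch_7, b"request payload", hashlib.sha256).hexdigest()
```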

    DELICATE ARRANGEMENT FOR BINARY-DYNAMIC ACCESS CONTROL FOR WEB-BASED CLOUD COMPUTING SERVICES

    A recently proposed access control model known as attribute-based access control is a good candidate to tackle the first problem. It not only provides anonymous authentication but also defines access control policies according to attributes of the requester, the environment, or the information object. In particular, within the framework of our 2FA access system, an attribute-based access control mechanism is implemented that requires both the user's secret key and a lightweight security device. We introduce a new fine-grained two-factor authentication (2FA) access control system for web-based cloud computing services. Because a user cannot access the system without both factors, the security of the system is enhanced, especially in scenarios where many users share the same computer for web-based cloud services. Finally, we conduct a simulation to demonstrate the feasibility of the 2FA system. Our protocol supports fine-grained attribute-based access, which gives the system great versatility to create different access policies based on different scenarios. At the same time, the privacy of the user is also preserved.
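    A toy way to see the "both factors or nothing" property is to split the access secret so that neither the user's key nor the device's share alone reveals it. The XOR split below is our illustration only, not the paper's mechanism (the paper builds on attribute-based cryptography rather than naive secret splitting).

```python
# Toy two-factor gate: the access secret is recoverable only when the
# user's key share AND the security device's share are both present.
import os

secret = os.urandom(32)                      # capability to access the service
user_share = os.urandom(32)                  # factor 1: user's secret key
device_share = bytes(a ^ b for a, b in zip(secret, user_share))  # factor 2

def recover(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))

assert recover(user_share, device_share) == secret   # both factors: access
# one factor plus a guess fails (with overwhelming probability):
assert recover(user_share, os.urandom(32)) != secret
```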

    A Constant Time, Single Round Attribute-Based Authenticated Key Exchange in Random Oracle Model

    In this paper, we present a single round two-party attribute-based authenticated key exchange (ABAKE) protocol in the framework of ciphertext-policy attribute-based systems. Since pairing is a costly operation and composite order groups must be very large to ensure security, we focus on pairing-free protocols in prime order groups. The proposed protocol is pairing-free, works in a prime order group, and has a tight reduction to the Strong Diffie-Hellman (SDH) problem under the attribute-based Canetti-Krawczyk (CK) model, a natural extension of the CK model (which is for PKI-based authenticated key exchange) to the attribute-based setting. The security proof is given in the random oracle model. Our ABAKE protocol does not depend on any underlying attribute-based encryption or signature scheme, unlike previous solutions for ABAKE; ours is the first scheme that removes this restriction. Thus, the first major advantage is that smaller key sizes suffice to achieve comparable security. Another notable feature of our construction is that it involves only a constant number of exponentiations per party, unlike state-of-the-art ABAKE protocols where the number of exponentiations performed by each party depends on the size of the linear secret sharing matrix. We achieve this by appropriate precomputation in the secret share generation; ours is the first construction that achieves this property. Our scheme has several other advantages, the major one being the capability to handle active adversaries: most previous ABAKE protocols offer security only against passive adversaries, whereas our protocol recognizes corruption by an active adversary and aborts the process. In addition, our scheme satisfies security properties not covered by the CK model, such as forward secrecy and resistance to key compromise impersonation and ephemeral key compromise impersonation attacks.
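    To fix ideas, the skeleton below shows the generic shape of a single-round, pairing-free exchange in a prime order group: each party sends one message, and the session key is hashed from a value both can compute. This is only a schematic Diffie-Hellman skeleton under our own naming, not the ABAKE protocol of the paper (which additionally binds attribute credentials and long-term keys to achieve CK-style security).

```python
# Schematic single-round key exchange in a prime-order subgroup.
# TOY parameters for readability -- real use needs a ~256-bit order group.
import hashlib, secrets

p, q, g = 23, 11, 4          # g generates the order-q subgroup of Z_p*

def ephemeral():
    x = secrets.randbelow(q - 1) + 1     # ephemeral secret in [1, q-1]
    return x, pow(g, x, p)               # (secret, value sent on the wire)

# Round 1 (the only round): A and B each send one group element.
a, A_msg = ephemeral()
b, B_msg = ephemeral()

# Both sides hash the shared DH value (a real AKE would also bind
# identities and attribute credentials into this hash).
k_A = hashlib.sha256(str(pow(B_msg, a, p)).encode()).hexdigest()
k_B = hashlib.sha256(str(pow(A_msg, b, p)).encode()).hexdigest()
assert k_A == k_B
```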

    Advances and Applications of Dezert-Smarandache Theory (DSmT) for Information Fusion (Collected works), Vol. 2

    This second volume dedicated to Dezert-Smarandache Theory (DSmT) in Information Fusion brings in new quantitative fusion rules (such as PCR1-6, where PCR5 for two sources performs the most mathematically exact redistribution of conflicting masses to the non-empty sets in the fusion literature), qualitative fusion rules, and the Belief Conditioning Rule (BCR), which is different from the classical conditioning rule used by the fusion community working with the Mathematical Theory of Evidence. Other fusion rules are constructed based on T-norm and T-conorm (hence using fuzzy logic and fuzzy sets in information fusion), or more generally on N-norm and N-conorm (hence using neutrosophic logic and neutrosophic sets in information fusion), together with an attempt to unify the fusion rules and fusion theories. The known fusion rules are extended from the power set to the hyper-power set, and comparisons between rules are made on many examples. One defines the degree of intersection of two sets, the degree of union of two sets, and the degree of inclusion of two sets, which all help in improving the existing fusion rules as well as the credibility, plausibility, and commonality functions. The book chapters are written by Frederic Dambreville, Milan Daniel, Jean Dezert, Pascal Djiknavorian, Dominic Grenier, Xinhan Huang, Pavlina Dimitrova Konstantinova, Xinde Li, Arnaud Martin, Christophe Osswald, Andrew Schumann, Tzvetan Atanasov Semerdjiev, Florentin Smarandache, Albena Tchamova, and Min Wang.
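    For concreteness, here is a small sketch of the two-source PCR5 rule mentioned above: the conjunctive combination is computed first, and each partial conflicting mass m1(A)·m2(B) with A∩B = ∅ is redistributed back to A and B proportionally to m1(A) and m2(B). The frame and mass values below are our own toy example, not taken from the book.

```python
# Two-source PCR5 (Proportional Conflict Redistribution rule no. 5).
# Focal elements are frozensets; a mass function is a dict set -> mass.
from itertools import product

def pcr5(m1, m2):
    out = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            # Conjunctive part: non-conflicting product mass goes to a∩b.
            out[inter] = out.get(inter, 0.0) + wa * wb
        else:
            # Partial conflict wa*wb is split back to a and b,
            # proportionally to m1(a) and m2(b).
            out[a] = out.get(a, 0.0) + wa * wa * wb / (wa + wb)
            out[b] = out.get(b, 0.0) + wb * wb * wa / (wa + wb)
    return out

A, B = frozenset({'A'}), frozenset({'B'})
m1 = {A: 0.6, A | B: 0.4}
m2 = {B: 0.7, A | B: 0.3}
fused = pcr5(m1, m2)                         # masses still sum to 1
assert abs(sum(fused.values()) - 1.0) < 1e-12
```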

    Toward the development and implementation of object-oriented extensions for discrete-event simulation in a strongly-typed procedural language

    The primary emphasis of this research is computer simulation. Computer simulations are used to model and analyze systems. To date, computer simulations have almost exclusively been written in procedural, strongly-typed languages such as FORTRAN or Pascal. Recent advancements in simulation research suggest an object-oriented approach to simulation languages may provide key benefits in computer simulation. The goal of this research is to combine the advantages of a simulation language written in a procedural, strongly-typed language with the benefits available through the object-oriented programming paradigm. This research presents a review of the methods of computer simulation, and a significant portion is devoted to describing the development of the object-oriented simulation software in a strongly-typed, procedural language. The software developed in this research is capable of simulating systems with multiple servers and queues. Arrival and service distributions may be selected from the uniform, exponential, and normal families of distributions. Resource usage is not supported in the simulation program.
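    As a rough sketch of the event-scheduling pattern such software implements (written here in Python rather than the thesis's strongly-typed procedural setting), a minimal single-server queue with exponential arrivals and uniform service could look like the following; all names and parameters are our own.

```python
# Minimal event-scheduled single-server queue (illustrative only).
import heapq, random

random.seed(1)
T_END = 100.0
future = [(random.expovariate(1.0), 'arrival')]   # future-event list

clock, queue_len, busy, served = 0.0, 0, False, 0
while future:
    clock, kind = heapq.heappop(future)
    if clock > T_END:
        break
    if kind == 'arrival':
        # Schedule the next arrival, then seize the server or join the queue.
        heapq.heappush(future, (clock + random.expovariate(1.0), 'arrival'))
        if busy:
            queue_len += 1
        else:
            busy = True
            heapq.heappush(future, (clock + random.uniform(0.5, 1.0), 'departure'))
    else:                                         # departure event
        served += 1
        if queue_len:
            queue_len -= 1                        # next customer starts service
            heapq.heappush(future, (clock + random.uniform(0.5, 1.0), 'departure'))
        else:
            busy = False

print(f"served {served} customers by t={T_END}")
```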

    Public Key Infrastructure


    Strategies for the intelligent selection of components

    It is becoming common to build applications as component-intensive systems: a mixture of fresh code and existing components. For application developers the selection of components to incorporate is key to overall system quality, so they want the 'best'. For each selection task, the application developer will define requirements for the ideal component and use them to select the most suitable one. While many software selection processes exist, there is a lack of repeatable, usable, flexible, automated processes with tool support. This investigation has focussed on finding and implementing strategies to enhance the selection of software components. The study was built around four research elements, targeting characterisation, process, strategies and evaluation. A post-positivist methodology was used, with the Spiral Development Model (SDM) structuring the investigation. Data for the study were generated using a range of qualitative and quantitative methods, including a survey, a range of case studies, and quasi-experiments focusing on the specific tuning of tools and techniques. Evaluation and review are integral to the SDM: a Goal-Question-Metric (GQM)-based approach was applied to every Spiral.
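    One common way to operationalise "define requirements, then select the most suitable component" is a weighted scoring matrix. The sketch below is a generic illustration of that idea; the criteria, weights, and candidate names are invented, and this is not the tooling developed in the investigation.

```python
# Generic weighted-sum scoring of candidate components against
# requirement criteria (all names and numbers are illustrative).
criteria = {'functionality': 0.4, 'license': 0.2, 'maturity': 0.2, 'docs': 0.2}

candidates = {                       # per-criterion scores in [0, 1]
    'CompA': {'functionality': 0.9, 'license': 1.0, 'maturity': 0.6, 'docs': 0.5},
    'CompB': {'functionality': 0.7, 'license': 0.5, 'maturity': 0.9, 'docs': 0.8},
}

def score(cand: str) -> float:
    return sum(w * candidates[cand][c] for c, w in criteria.items())

best = max(candidates, key=score)
print(best, round(score(best), 3))   # highest weighted score wins
```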

    An overview of artificial intelligence applications for power electronics


    An Approach to Guide Users Towards Less Revealing Internet Browsers

    When browsing the Internet, HTTP headers enable both clients and servers to send extra data in their requests or responses, such as the User-Agent string. This string contains information about the sender's device, browser, and operating system. Previous research has shown that numerous privacy and security risks result from exposing sensitive information in the User-Agent string: for example, it enables device and browser fingerprinting and user tracking and identification. Our large-scale analysis of thousands of User-Agent strings shows that browsers differ tremendously in the amount of information they include in their User-Agent strings. As such, our work aims at guiding users towards less revealing browsers. To do so, we propose to assign each browser an exposure score based on the information it exposes and its vulnerability records. Our contribution in this work is thus twofold: first, we provide a full implementation that is ready to be deployed and used; second, we conduct a user study to identify the effectiveness and limitations of our proposed approach. Our implementation is based on more than 52 thousand unique browsers. Our performance and validation analysis shows that our solution is accurate and efficient. The source code and data set are publicly available, and the solution has been deployed.
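    The abstract does not give the scoring formula, so the following is only a guessed shape of such an exposure score: count identifying fields present in a User-Agent string and weigh them against known vulnerability counts. The field names, regular expressions, and weighting are all our assumptions, not the paper's feature set.

```python
# Hypothetical exposure score for a browser's User-Agent string:
# more identifying fields exposed + more known CVEs => higher score.
import re

def exposure_score(user_agent: str, cve_count: int) -> float:
    signals = {                       # illustrative signals only
        'os_version':      r'(Windows NT [\d.]+|Mac OS X [\d_]+|Android [\d.]+)',
        'browser_version': r'(Chrome|Firefox|Safari|Edg)/[\d.]+',
        'device_model':    r'[A-Z][\w-]+ Build',
        'engine_version':  r'(AppleWebKit|Gecko)/[\d.]+',
    }
    exposed = sum(bool(re.search(p, user_agent)) for p in signals.values())
    return exposed / len(signals) + 0.1 * cve_count   # invented weighting

ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36")
print(round(exposure_score(ua, cve_count=3), 2))
```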