
    A hierarchical key pre-distribution scheme for fog networks

    Security in fog computing is multi-faceted, and one particular challenge is establishing a secure communication channel between fog nodes and end devices. This underlines the importance of designing an efficient and secure key distribution scheme that lets fog nodes and end devices establish secure communication channels. Existing key distribution schemes designed for hierarchical networks may be deployable in fog computing, but they incur high computational and communication overheads and consume significant memory. In this paper, we propose a novel hierarchical key pre-distribution scheme for fog networks based on "Residual Design". The proposed scheme is designed to minimize storage overhead and memory consumption while increasing network scalability, and to be secure against node capture attacks. We demonstrate that, for an equal-size network, our scheme achieves around an 84% improvement in node storage overhead and around a 96% improvement in network scalability. Our research paves the way for building an efficient key management framework for secure communication within the hierarchical network of fog nodes and end devices.
    KEYWORDS: Fog Computing, Key distribution, Hierarchical Networks
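
    Though the paper's exact "Residual Design" construction is not reproduced here, the general mechanics of combinatorial key pre-distribution can be sketched: each node receives a small key ring drawn from a block of a combinatorial design, so that pairs of nodes share a common key they can use to secure their channel. The minimal Python sketch below is an illustration under stated assumptions, not the paper's scheme: it uses the lines of the affine plane AG(2, p) (the residual design of a projective plane of order p) as key rings, so each node stores only p keys out of a pool of p^2, and any two nodes whose rings come from non-parallel lines share exactly one key.

```python
# Minimal sketch of combinatorial key pre-distribution (hypothetical example,
# not the paper's construction): key rings are the lines of the affine plane
# AG(2, p), i.e. the residual design of a projective plane of order p.
p = 5                                            # any prime order works here
points = [(x, y) for x in range(p) for y in range(p)]
key_id = {pt: i for i, pt in enumerate(points)}  # each point is one key in the pool

def sloped_line(m, c):
    """Key ring for the line y = m*x + c (mod p)."""
    return frozenset(key_id[(x, (m * x + c) % p)] for x in range(p))

def vertical_line(x0):
    """Key ring for the vertical line x = x0."""
    return frozenset(key_id[(x0, y)] for y in range(p))

# p^2 + p key rings, each holding only p keys out of a pool of p^2 keys.
key_rings = [sloped_line(m, c) for m in range(p) for c in range(p)]
key_rings += [vertical_line(x0) for x0 in range(p)]

node_a, node_b = key_rings[0], key_rings[7]      # two nodes with different slopes
shared = node_a & node_b                         # exactly one common key
print(f"ring size = {len(node_a)}, nodes supported = {len(key_rings)}, shared key = {sorted(shared)}")
```

    In a hierarchical fog deployment, fog nodes and end devices would be assigned rings at different levels of such a design; resilience to node capture, which the paper analyzes, is not modelled in this sketch.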

    Using combinatorial group testing to solve integrity issues

    Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Ciência da Computação, Florianópolis, 2015.
    The use of electronic documents to share information is of fundamental importance, as is guaranteeing their integrity and authenticity. To prove that someone owns or agrees with the content of a paper document, that person must sign it; if the document is modified after signing, it is usually possible to locate the modifications through visible erasures. Similar techniques exist for digital documents, known as digital signatures, but properties such as the ability to pinpoint modifications are lost. By determining which parts of a document were modified, the recipient would be able to check whether the changes occurred in important, irrelevant or even expected parts of the document. In some applications a limited number of modifications is allowed but it is necessary to keep track of where they occurred, as in electronic forms; in other applications modifications are not allowed at all, yet it is still useful to access the parts of the data whose integrity is guaranteed, or to use the location of the modifications for forensic investigation.
    We consider the problem of partially ensuring the integrity and authenticity of signed data. Two scenarios are studied: the first is locating modifications in a signed document, and the second is locating invalid signatures in a set of individually signed data. In the first scenario we propose a digital signature scheme capable of locating modifications in a document. The document to be signed is divided into n blocks, given a threshold d on the maximum number of modified blocks the scheme can locate. We propose efficient algorithms for the signing and verification steps that yield a reasonably compact signature: for fixed d, only O(log n) hashes are added to the size of a traditional signature, while still allowing the identification of up to d modified blocks. For the scenario of locating invalid signatures in a set of individually signed data we introduce the concept of levels of signature aggregation. With this method the verifier can distinguish valid data from invalid data, in contrast to traditional signature aggregation, where even a single invalid item invalidates the whole set. Moreover, the number of signatures transmitted is much smaller than in batch verification, which requires sending every signature individually. We consider an application to outsourced databases in which every stored tuple is individually signed. As the result of a query, the server returns n tuples and a set of t signatures it has aggregated (with t much smaller than n); the querier performs at most t signature verifications to check the integrity of all n tuples, and even if some tuples are invalid it can identify exactly which ones are valid. We give efficient algorithms to aggregate and verify the signatures and to identify the modified tuples. Both schemes are based on non-adaptive combinatorial group testing and cover-free matrices; we present detailed constructions of cover-free matrices from the literature, show how they are used in the proposed schemes, and report complexity analyses and experimental results confirming their efficiency.
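
    As a rough illustration of the group-testing idea behind the first scheme, the sketch below splits data into blocks, computes one digest per test of a binary test matrix (in the actual scheme these digests are what gets signed; the signature itself is omitted here), and uses cover-based decoding to point at modified blocks. The bit-of-index test matrix used here is a toy assumption that only guarantees locating a single modified block (d = 1); the dissertation relies on d-cover-free matrices to handle up to d modifications.

```python
# Hypothetical sketch of locating modified blocks with non-adaptive group
# testing.  The test matrix below (bit-of-index tests) only guarantees exact
# identification for a single modified block (d = 1); the dissertation uses
# d-cover-free matrices to handle up to d modifications.
import hashlib

def digest(parts):
    h = hashlib.sha256()
    for part in parts:
        h.update(hashlib.sha256(part).digest())   # hash-of-hashes, order-sensitive
    return h.hexdigest()

def make_tests(n):
    # Test (b, v) covers the blocks whose index has value v in bit position b.
    bits = max(1, (n - 1).bit_length())
    tests = []
    for b in range(bits):
        for v in (0, 1):
            tests.append([i for i in range(n) if (i >> b) & 1 == v])
    return tests

blocks = [f"block {i}".encode() for i in range(8)]          # signed document, split into 8 blocks
tests = make_tests(len(blocks))
signed_digests = [digest(blocks[i] for i in test) for test in tests]  # these digests get signed

blocks[5] = b"tampered"                                     # modification after signing

passing = [t for t, test in enumerate(tests)
           if digest(blocks[i] for i in test) == signed_digests[t]]
cleared = set().union(*(tests[t] for t in passing))         # blocks covered by a passing test
suspect = [i for i in range(len(blocks)) if i not in cleared]
print("modified block(s):", suspect)                        # -> [5]
```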

    Efficient implementation of Curve25519 for ARM microcontrollers

    Advisor: Diego de Freitas Aranha. Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação. Master's in Computer Science; funding: CAPES, Funcam.
    With the advent of ubiquitous computing, the Internet of Things will see countless devices connected to one another, exchanging data that is often sensitive by nature. Breaching the secrecy of this data may cause irreparable damage. This raises concerns about the security of the communication and of the devices themselves, which usually lack tamper-resistance mechanisms or physical protection and offer little to no security measures. While developing efficient and secure cryptography as a means of providing information security services is not a new problem, this new environment, with its wide attack surface, imposes new challenges on cryptographic engineering. A safe approach to the problem is to reuse well-known and thoroughly analyzed building blocks, such as the Transport Layer Security (TLS) protocol. In the latest version of this standard, the Elliptic Curve Cryptography options were expanded beyond government-backed parameters to include, among others, the Curve25519 proposal and related cryptographic protocols. This work investigates efficient and secure implementations of Curve25519 for building a key exchange protocol on an ARM Cortex-M4 microcontroller, along with the related signature scheme Ed25519 and the proposed digital signature scheme qDSA. As a result, performance-critical operations, such as the 256-bit multiplier, are heavily optimized; in particular, a 50% speedup is achieved, improving the performance of higher-level protocols.
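
    For context, the protocol-level operations the dissertation accelerates on the Cortex-M4 correspond to the X25519 key exchange and Ed25519 signatures. The sketch below shows those operations with the Python `cryptography` package purely as a functional reference; it is not the optimized embedded implementation described in the dissertation.

```python
# Protocol-level view of the primitives targeted by the dissertation,
# shown with the Python `cryptography` package (not the Cortex-M4 code).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# X25519 key exchange: both sides derive the same shared secret.
device_sk, gateway_sk = X25519PrivateKey.generate(), X25519PrivateKey.generate()
device_shared = device_sk.exchange(gateway_sk.public_key())
gateway_shared = gateway_sk.exchange(device_sk.public_key())
assert device_shared == gateway_shared

# Derive a symmetric session key from the raw shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"demo handshake").derive(device_shared)

# Ed25519 signature over a message, verified with the public key.
sign_sk = Ed25519PrivateKey.generate()
signature = sign_sk.sign(b"sensor reading")
sign_sk.public_key().verify(signature, b"sensor reading")   # raises on failure
print("key exchange and signature verified; session key:", session_key.hex())
```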

    Cloud-based homomorphic encryption for privacy-preserving machine learning in clinical decision support

    While privacy and security concerns dominate public cloud services, Homomorphic Encryption (HE) is seen as an emerging solution that ensures secure processing of sensitive data via untrusted networks in the public cloud or by third-party cloud vendors. It relies on the fact that some encryption algorithms display the property of homomorphism, which allows data to be manipulated meaningfully while still in encrypted form, although there are major stumbling blocks to overcome before the technology can be considered mature for production cloud environments. Such a framework would find particular relevance in Clinical Decision Support (CDS) applications deployed in the public cloud. CDS applications have an important computational and analytical role over confidential healthcare information, with the aim of supporting decision-making in clinical practice. Machine Learning (ML) is employed in CDS applications, which typically learn from and can personalise actions based on individual behaviour. A relatively simple-to-implement, common and consistent framework is sought that can overcome most limitations of Fully Homomorphic Encryption (FHE) in order to offer an expanded and flexible set of HE capabilities. In the absence of a significant breakthrough in FHE efficiency and practical use, a solution relying on client interactions appears to be the most practical known approach to private CDS-based computation, as long as security is not significantly compromised. A hybrid solution is introduced that intersperses limited two-party interactions among the main homomorphic computations, allowing the exchange of both numerical and logical cryptographic contexts in addition to resolving other major FHE limitations. The interactions rely on client-side decryption of ciphertexts blinded by data obfuscation techniques, so that privacy is maintained. This thesis explores the middle ground whereby HE schemes can provide improved and efficient arbitrary computational functionality over a significantly reduced two-party interaction model based on data obfuscation. This compromise allows the powerful capabilities of HE to be leveraged, providing a more uniform, flexible and general approach to privacy-preserving system integration that is suitable for cloud deployment. The proposed platform is designed to make HE more practical for mainstream clinical applications, equipped with a rich set of capabilities and support for potentially very deep sequences of HE operations. Such a solution is suitable for the long-term privacy-preserving processing requirements of a cloud-based CDS system, which would typically require complex combinatorial logic, workflow and ML capabilities.
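
    To make the interaction pattern concrete, the sketch below pairs a toy additively homomorphic Paillier instance with a blinded client-side decryption round: the server masks a ciphertext, the client decrypts and re-encrypts the masked value, and the server removes the mask homomorphically, refreshing the ciphertext without ever exposing the plaintext to the server or the raw value to the client. The parameters, names, and the refresh protocol itself are illustrative assumptions, not the thesis's actual scheme.

```python
# Toy sketch (illustrative assumptions only): additively homomorphic Paillier
# with tiny demo parameters, plus an interactive "blind, decrypt at the client,
# re-encrypt, unblind" refresh standing in for the thesis's two-party steps.
import math, secrets

p, q = 1117, 1123                     # demo primes, far too small for real use
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)          # private key (held by the client)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def enc(m):                           # public-key operation
    r = secrets.randbelow(n - 2) + 2
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 2) + 2
    return (pow(g, m % n, n2) * pow(r, n, n2)) % n2

def dec(c):                           # client-only operation
    return (pow(c, lam, n2) - 1) // n * mu % n

def add(c1, c2):                      # Enc(a) * Enc(b) = Enc(a + b mod n)
    return (c1 * c2) % n2

# Server: blind Enc(x) additively before shipping it to the client.
cx = enc(42)                          # ciphertext held by the untrusted server
mask = secrets.randbelow(n)
blinded = add(cx, enc(mask))          # Enc(x + mask) hides x from the client

# Client: decrypt the blinded value and return a freshly randomized ciphertext.
fresh = enc(dec(blinded))

# Server: strip the mask homomorphically; x never left the client in the clear.
refreshed = add(fresh, enc(n - mask))
assert dec(refreshed) == 42           # final decryption shown only as a check
```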

    Voice and Video Capacity of a Secure Wireless System

    Improving the security and availability of secure wireless multimedia systems is the purpose of this thesis. Specifically, the thesis answered research questions about the capacity of wireless multimedia systems and how three variables relate to this capacity: securing the voice signal, real-time traffic originating outside the wireless local area network, and the use of an audio-only signal compared with a combined audio-video signal. The research questions were answered through a comprehensive literature review together with an experiment in which thirty-six subjects used a secure wireless multimedia system developed as part of this thesis effort. Additionally, questions related to techniques for deploying wireless multimedia systems, including the maturity and security of the technology, were answered. The research identified weaknesses in existing analytical and computer models and the need for a concise and realistic model of wireless multimedia systems. The culmination of this effort was the integration of an audio-video system with an existing research platform that is actively collecting data for the Logistics Readiness Branch of the Air Force Research Laboratory.

    Studies in authentication

    This thesis presents advances in several areas of authentication. First, we consider cryptographic accumulators, which are compact digital objects representing arbitrarily large sets. They support efficient proofs of membership (or, alternatively, of non-membership). We give the first definition of cryptographic accumulators in the UC framework, and construct two new accumulators: one uniquely suited for use in a revokable anonymous credential scheme, and one uniquely suited for use in a distributed system such as a blockchain-based PKI. Next, we consider multi-designated verifier signatures (MDVS). An MDVS is a special kind of signature that can only be verified by parties explicitly specified by the signer; more than that, even if those designated verifiers wanted to prove to an external party (e.g. an adversary) that a certain message was signed by the signer, they should be unable to do so. This is crucial in contexts where off-the-record communication is desirable; the sender may not want to be provably linked to a possibly sensitive message, but still want the intended recipients to be able to verify the authenticity of the message. Existing literature defines and builds limited notions of MDVS, where the off-the-record property only holds when it is conceivable that all verifiers collude. We strengthen this property to support any subset of colluding verifiers, and give two constructions of our stronger notion of MDVS: one from functional encryption, and one from standard primitives (but with a slightly larger signature size). Finally, we consider fuzzy password authenticated key exchange (Fuzzy PAKE). PAKEs are protocols which enable two parties holding the same password (that is, the same potentially low-entropy, non-uniform string) to agree on a (high-entropy, uniform) secret key in a way that resists man-in-the-middle attacks and offline dictionary attacks on the password. We define Fuzzy PAKE, a special kind of PAKE where the passwords used for authentication may contain some errors. We provide the first efficient and general solutions to this problem that enable, for example, key agreement based on commonly used biometrics such as iris scans
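
    To illustrate the accumulator interface (accumulate a set, produce a membership witness, verify it against a compact value), here is a minimal hash-based Merkle-tree accumulator sketch. It is a generic textbook construction shown only for orientation, not one of the UC-secure accumulators built in the thesis.

```python
# Generic hash-based accumulator sketch (illustration only; not the thesis's
# UC-secure constructions).  The accumulator value is a Merkle root; a
# membership witness is the authentication path of the element's leaf.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def accumulate(elements):
    """Return the accumulator value (root) and one witness per element."""
    level = [H(b"\x00" + e) for e in elements]          # domain-separated leaf hashes
    paths = [[] for _ in elements]
    idx = list(range(len(elements)))                    # leaf -> current position in `level`
    while len(level) > 1:
        if len(level) % 2:                              # duplicate last node if odd
            level.append(level[-1])
        for leaf, pos in enumerate(idx):
            sibling = pos ^ 1
            paths[leaf].append((pos % 2, level[sibling]))   # (am I the right child?, sibling hash)
            idx[leaf] = pos // 2
        level = [H(b"\x01" + level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0], paths

def verify(root, element, path):
    node = H(b"\x00" + element)
    for is_right, sibling in path:
        node = H(b"\x01" + sibling + node) if is_right else H(b"\x01" + node + sibling)
    return node == root

creds = [b"alice", b"bob", b"carol"]
root, witnesses = accumulate(creds)
assert verify(root, b"bob", witnesses[1])               # membership proof checks out
assert not verify(root, b"mallory", witnesses[1])       # non-member fails to verify
```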