
    Proxy Blind Multi-signature Scheme using ECC for handheld devices

    A proxy blind signature scheme is a special form of blind signature that allows a designated person, called the proxy signer, to sign on behalf of two or more original signers without knowing the content of the message or document. It combines the advantages of the proxy signature, blind signature and multi-signature schemes. This paper describes an efficient proxy blind multi-signature scheme. The security of the proposed scheme is based on the difficulty of breaking the one-way hash function and the elliptic curve discrete logarithm problem (ECDLP). It can be implemented on low-power, small-processor handheld devices such as smart cards and PDAs. The scheme uses a trusted third party, called a certificate authority, to ensure that signatures can only be generated during the valid delegation period. It satisfies the security properties of both proxy and blind signature schemes.
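    The ECDLP assumption underlying the scheme can be illustrated with toy curve arithmetic: computing k·G by double-and-add is fast, while recovering k from k·G is the hard direction. The curve and generator below are tiny demonstration values, nothing like the standardized curves a real implementation would use:

```python
# Toy illustration of the ECDLP hardness assumption behind the scheme.
# Curve and generator are small demo values, NOT secure parameters.
p, a, b = 97, 2, 3            # curve y^2 = x^3 + 2x + 3 over F_97
G = (3, 6)                    # a point on the curve (3^3 + 2*3 + 3 = 36 = 6^2 mod 97)

def ec_add(P, Q):
    """Add two points on the curve; None represents the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None           # P + (-P) = point at infinity
    if P == Q:                # tangent slope for point doubling
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:                     # chord slope for distinct points
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    """Double-and-add scalar multiplication: the *easy* direction."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

priv = 20                     # private key: a scalar
pub = ec_mul(priv, G)         # public key: recovering priv from pub is the ECDLP
```

    At this toy size the scalar can be recovered by brute force in microseconds; on a 256-bit curve the same search is infeasible, which is what the scheme's security rests on.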

    Towards Secure Online Distribution of Multimedia Codestreams


    Security in Distributed, Grid, Mobile, and Pervasive Computing

    This book addresses the increasing demand to guarantee privacy, integrity, and availability of resources in networks and distributed systems. It first reviews security issues and challenges in content distribution networks, describes key agreement protocols based on the Diffie-Hellman key exchange and key management protocols for complex distributed systems like the Internet, and discusses securing design patterns for distributed systems. The next section focuses on security in mobile computing and wireless networks. After a section on grid computing security, the book presents an overview of security solutions for pervasive healthcare systems and surveys wireless sensor network security
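    The Diffie-Hellman exchange that the key agreement protocols build on can be sketched in a few lines; the group parameters below are textbook-sized toys, not secure values (real deployments use standardized groups with primes of 2048 bits or more):

```python
import secrets

# Toy group parameters (a classic textbook pair); insecure demo values only.
p, g = 23, 5

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)                   # Alice sends A to Bob over the open channel
B = pow(g, b, p)                   # Bob sends B to Alice over the open channel

k_alice = pow(B, a, p)             # Alice computes (g^b)^a mod p
k_bob = pow(A, b, p)               # Bob computes (g^a)^b mod p
assert k_alice == k_bob            # both sides now hold g^(ab) mod p
```

    An eavesdropper sees only p, g, A and B; recovering the shared key requires solving the discrete logarithm problem in the group.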

    A System for the Verification of Location Claims

    As location becomes an increasingly important piece of context information regarding a device, so too must the method of providing this information increase in reliability. In many situations, false location information may impact the security or objectives of the system to which it has been supplied. Research concerning localization and location verification addresses this issue. The majority of solutions, however, revolve around a trusted infrastructure to provide a certified location. This thesis presents an enhanced design for a location verification system, moving verification away from infrastructure-based approaches. Instead, an ad hoc approach is presented, employing regular local devices in the role usually reserved for trusted entities: the role of the evidence provider. We begin with an introduction to the area of localization, outlining the primary techniques employed. We summarize previous approaches, highlighting the improvements and outstanding issues of each. Following this, we outline a novel metric for use with distance bounding to increase the accuracy of evidence extracted from the distance bounding process. We show through emulation that this metric is feasible within an IEEE 802.11 wireless network. We detail the Secure Location Verification Proof Gathering Protocol (SLVPGP), a protocol designed to protect the process of evidence gathering. We employ our novel metric to confirm the presence of a device in an area. We repeatedly extend the SLVPGP's basic design to form three protocols, each with increasingly stronger security. These protocols are formally verified to confirm their specified security properties. To complete the design of our verification system, we present two approaches to judging a claim based on the evidence supplied. We demonstrate the accuracy of these approaches through simulation. We also include a brief outline of the concept of reputation and discuss an existing app
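    The physical principle behind distance bounding, which the evidence metric builds on, is that radio signals travel at most at the speed of light, so a measured challenge-response round-trip time upper-bounds the prover's distance: a prover cannot appear closer than it is without replying before receiving the challenge. A minimal sketch (the processing-delay parameter is illustrative, not the thesis's actual metric):

```python
# Back-of-the-envelope distance bound from a challenge-response round trip.
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_bound(t_roundtrip_s, t_processing_s=0.0):
    """Upper bound (metres) on prover distance: the signal covers the
    distance twice in the round trip, minus any known processing delay."""
    return C * (t_roundtrip_s - t_processing_s) / 2.0

# A 1 microsecond round trip with negligible processing delay bounds the
# prover to within roughly 150 m of the verifier.
print(distance_bound(1e-6))
```

    Over IEEE 802.11 the challenge is that software timestamping adds jitter far larger than the propagation time itself, which is why a refined metric is needed to extract usable evidence.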

    Exploiting Natural On-chip Redundancy for Energy Efficient Memory and Computing

    Power density is currently the primary design constraint across most computing segments and the main performance limiting factor. For years, industry has kept power density constant while increasing frequency and lowering transistor supply (Vdd) and threshold (Vth) voltages. However, Vth scaling has stopped because leakage current is exponentially related to it. Transistor count and integration density keep doubling every process generation (Moore's Law), but the power budget caps the amount of hardware that can be active at the same time, leading to dark silicon. With each new generation, there are more resources available, but we cannot fully exploit their performance potential. In the last years, different research trends have explored how to cope with dark silicon and unlock the energy efficiency of the chips, including Near-Threshold voltage Computing (NTC) and approximate computing. NTC aggressively lowers Vdd to values near Vth. This allows a substantial reduction in power, as dynamic power scales quadratically with supply voltage. The resultant power reduction could be used to activate more chip resources and potentially achieve performance improvements. Unfortunately, Vdd scaling is limited by the tight functionality margins of on-chip SRAM transistors. When scaling Vdd down to near-threshold values, manufacturing-induced parameter variations affect the functionality of SRAM cells, which eventually become unreliable. A large amount of emerging applications, on the other hand, features an intrinsic error-resilience property, tolerating a certain amount of noise. In this context, approximate computing takes advantage of this observation and exploits the gap between the level of accuracy required by the application and the level of accuracy given by the computation, provided that reducing the accuracy translates into an energy gain.
However, deciding which instructions and data, and which techniques, are best suited for approximation still poses a major challenge. This dissertation contributes in these two directions. First, it proposes a new approach to mitigate the impact of SRAM failures due to parameter variation for effective operation at ultra-low voltages. We identify two levels of natural on-chip redundancy: cache level and content level. The first arises because of the replication of blocks in multi-level cache hierarchies. We exploit this redundancy with a cache management policy that allocates blocks to entries taking into account the nature of the cache entry and the use pattern of the block. This policy obtains performance improvements between 2% and 34% with respect to block disabling, a technique with similar complexity, incurring no additional storage overhead. The latter (content-level redundancy) arises because of the redundancy of data in real-world applications. We exploit this redundancy by compressing cache blocks to fit them in partially functional cache entries. At the cost of a slight overhead increase, we can obtain performance within 2% of that obtained when the cache is built with fault-free cells, even if more than 90% of the cache entries have at least one faulty cell. Then, we analyze how the intrinsic noise tolerance of emerging applications can be exploited to design an approximate Instruction Set Architecture (ISA). Exploiting the ISA redundancy, we explore a set of techniques to approximate the execution of instructions across a set of emerging applications, pointing out the potential of reducing the complexity of the ISA, and the trade-offs of the approach. In a proof-of-concept implementation, the ISA is shrunk in two dimensions: Breadth (i.e., simplifying instructions) and Depth (i.e., dropping instructions). This proof-of-concept shows that energy can be reduced on average by 20.6% at around 14.9% accuracy loss.
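    The quadratic dependence of dynamic power on supply voltage that motivates NTC can be illustrated with the standard P = α·C·V²·f model (the coefficients below are normalized placeholders, not measured values):

```python
# Normalized dynamic power model: P = alpha * C_eff * Vdd^2 * f.
# alpha (activity factor) and c_eff (effective capacitance) are
# placeholder constants here; only the Vdd ratio matters for the point.
def dynamic_power(vdd, freq, c_eff=1.0, alpha=1.0):
    return alpha * c_eff * vdd ** 2 * freq

nominal = dynamic_power(vdd=1.0, freq=1.0)   # nominal operating point
ntc = dynamic_power(vdd=0.5, freq=1.0)       # halved supply voltage

# Halving Vdd cuts dynamic power to a quarter at the same frequency.
print(ntc / nominal)
```

    In practice, lowering Vdd also lowers the maximum achievable frequency, so the net energy gain is smaller than this raw power ratio suggests; it is the remaining gap that NTC designs try to spend on activating more of the dark silicon.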

    Modelling, Dimensioning and Optimization of 5G Communication Networks, Resources and Services

    This reprint collects state-of-the-art research contributions that address challenges in the design, dimensioning and optimization of emerging 5G networks. Designing, dimensioning and optimizing communication network resources and services have been an inseparable part of telecom network development. Such networks must convey a large volume of traffic, serving traffic streams with highly differentiated requirements in terms of bit rate and service time, required quality of service and quality of experience parameters. Such a communication infrastructure presents many important challenges, such as the study of necessary multi-layer cooperation, new protocols, performance evaluation of different network parts, low-layer network design, network management and security issues, and new technologies in general, which are discussed in this book

    Development of a software infrastructure for the secure distribution of documents using free cloud storage

    The 21st century belongs to the world of computing, especially as a result of the so-called cloud computing. This technology enables ubiquitous information management, so people can access all their data from any place and at any time. In this landscape, the emergence of cloud storage has played an important role over the last five years. Nowadays, several free public cloud storage services make it possible for users to keep a free backup of their assets and to manage and share them, representing a low-cost opportunity for Small and Medium Enterprises (SMEs). However, the adoption of cloud storage involves data outsourcing, so a user has no guarantee about the way her data will be processed and protected. Therefore, it seems necessary to endow public cloud storage with a set of means to protect users' confidentiality and privacy, to ensure data integrity and to guarantee a proper backup of information assets. For this reason, this work proposes Encrypted Cloud, a desktop application that runs on Windows and Ubuntu and that manages, transparently to the user, a variable number of local directories in which users can place their files in an encrypted and balanced way. In particular, the user can choose the local folders created by the Dropbox or Google Drive desktop applications as local directories for Encrypted Cloud, unifying the free storage space offered by these cloud providers. In addition, Encrypted Cloud allows users to share encrypted files with other users, using our own symmetric cryptographic key distribution protocol. Note that, among other functionalities, it also includes a service that monitors files deleted or moved by an unauthorised third party
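    The balanced placement across local sync folders described above might look like the following sketch; `pick_directory` and the least-used-bytes policy are illustrative guesses at the idea, not the application's actual algorithm, and encryption is assumed to happen before `store` is called:

```python
# Hypothetical sketch of balanced placement across backing directories
# (e.g. the Dropbox and Google Drive sync folders). Illustrative only.
import os

def pick_directory(directories):
    """Return the backing directory currently holding the fewest bytes,
    so writes spread across the unified free space."""
    def used_bytes(d):
        return sum(
            os.path.getsize(os.path.join(root, f))
            for root, _, files in os.walk(d)
            for f in files
        )
    return min(directories, key=used_bytes)

def store(ciphertext: bytes, name: str, directories):
    """Write an already-encrypted blob into the least-loaded directory."""
    target = pick_directory(directories)
    with open(os.path.join(target, name), "wb") as fh:
        fh.write(ciphertext)
    return target
```

    A least-loaded policy like this keeps the free quotas of the different providers exhausted at roughly the same rate, which matches the abstract's goal of unifying the storage space they offer.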