
    Flash-based security primitives: Evolution, challenges and future directions

    Over the last two decades, hardware security has gained increasing attention in academia and industry. Flash memory has drawn particular attention in recent years, with the question of whether it can prove useful in a security role. Because of inherent process variation in the characteristics of flash memory modules, they can provide a unique fingerprint for a device and have thus been proposed as locations for hardware security primitives. These primitives include physical unclonable functions (PUFs), true random number generators (TRNGs), and integrated circuit (IC) counterfeit detection. In this paper, we evaluate the efficacy of flash memory-based security primitives and categorize them based on the process variations they exploit, as well as other features. We also compare and evaluate flash-based security primitives in order to identify drawbacks and essential design considerations. Finally, we describe new research directions, open challenges, and possible security vulnerabilities for flash-based security primitives that we believe would benefit from further exploration.
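
    The core mechanism, deriving stable secret bits from per-cell manufacturing variation, can be illustrated with a toy Python sketch. The Gaussian per-cell program-latency model, median thresholding, and majority voting below are our own illustrative assumptions, not details taken from the paper:

        import hashlib
        import random
        import statistics

        def measure_latencies(cell_means, noise=0.05):
            # One noisy read of each cell's program latency (arbitrary units):
            # the mean is fixed by process variation, the noise by conditions.
            return [m + random.gauss(0.0, noise) for m in cell_means]

        def fingerprint_bits(cell_means, trials=15):
            # Threshold each cell against the chip-wide median latency and
            # majority-vote across repeated reads to suppress measurement noise.
            votes = [0] * len(cell_means)
            for _ in range(trials):
                sample = measure_latencies(cell_means)
                median = statistics.median(sample)
                for i, latency in enumerate(sample):
                    if latency > median:
                        votes[i] += 1
            return [1 if v > trials // 2 else 0 for v in votes]

        random.seed(1)
        chip = [random.gauss(1.0, 0.2) for _ in range(64)]  # set at manufacture
        bits = fingerprint_bits(chip)
        print("device fingerprint:", hashlib.sha256(bytes(bits)).hexdigest()[:16])

    Because the per-cell means are fixed at manufacture while read noise is averaged away, repeated evaluations of the same chip reproduce the same bits, which is the basic PUF property such primitives rely on.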

    Performance Considerations for an Embedded Implementation of OMA DRM 2

    As digital content services gain importance in the mobile world, Digital Rights Management (DRM) applications will become a key component of mobile terminals. This paper examines the effect that dedicated hardware macros for specific cryptographic functions have on the performance of a mobile terminal that supports version 2 of the open standard for Digital Rights Management defined by the Open Mobile Alliance (OMA). Following a general description of the standard, the paper contains a detailed analysis of the cryptographic operations that have to be carried out before protected content can be accessed. Combining this analysis with data on execution times for specific algorithms realized in hardware and software has made it possible to build a model which allows us to assert that hardware acceleration for specific cryptographic algorithms can significantly reduce the impact DRM has on a mobile terminal's processing performance and battery life.

    Comment: Submitted on behalf of EDAA (http://www.edaa.com/)
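
    The kind of cost model the paper builds can be sketched at a toy level: total the cryptographic work needed before protected content can be accessed, under software-only versus hardware-accelerated per-operation costs. All numbers and the operation mix below are illustrative placeholders, not the paper's measurements:

        # Hypothetical per-operation costs (ms); 'sw' = software routine,
        # 'hw' = dedicated hardware macro. Values are placeholders.
        COST_MS = {
            "rsa_verify":  {"sw": 40.0,  "hw": 4.0},
            "rsa_decrypt": {"sw": 250.0, "hw": 20.0},
            "aes_decrypt": {"sw": 1.5,   "hw": 0.1},  # per content block
            "sha1":        {"sw": 0.8,   "hw": 0.05},
        }

        # Illustrative operation mix for one protected-content access:
        # rights-object signature checks, content-key unwrap, bulk decryption.
        OPS = {"rsa_verify": 3, "rsa_decrypt": 1, "aes_decrypt": 2000, "sha1": 10}

        def total_ms(mode):
            return sum(COST_MS[op][mode] * count for op, count in OPS.items())

        for mode in ("sw", "hw"):
            print(f"{mode}: {total_ms(mode):.1f} ms")
        print(f"speedup with hardware macros: {total_ms('sw') / total_ms('hw'):.1f}x")

    Even this crude model shows why bulk symmetric decryption and the occasional RSA unwrap dominate, and hence where hardware macros pay off most.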

    Towards Automating the Construction & Maintenance of Attack Trees: a Feasibility Study

    Security risk management can be applied to well-defined or existing systems; in this case, the objective is to identify existing vulnerabilities, assess the risks and provide for adequate countermeasures. Security risk management can also be applied very early in the system's development life-cycle, when its architecture is still poorly defined; in this case, the objective is to positively influence the design work so as to produce a secure architecture from the start. The latter work is made difficult by the uncertainties about the architecture and the multiple round-trips required to keep the risk assessment study and the system architecture aligned. This is particularly true for very large projects running over many years. This paper addresses the issues raised by those risk assessment studies performed early in the system's development life-cycle. Based on industrial experience, it asserts that attack trees can help solve the human cognitive scalability issue related to securing those large, continuously changing system designs. However, large attack trees are difficult to build, and even more difficult to maintain. This paper therefore proposes a systematic approach to automate the construction and maintenance of such large attack trees, based on the system's operational and logical architectures, the system's traditional risk assessment study, and a security knowledge database.

    Comment: In Proceedings GraMSec 2014, arXiv:1404.163
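
    For readers unfamiliar with the underlying structure, here is a minimal sketch of an attack tree as AND/OR nodes with bottom-up feasibility propagation. This reflects standard attack-tree semantics, not the paper's specific construction algorithm, and the example scenario is invented:

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Node:
            name: str
            kind: str = "LEAF"          # "AND", "OR", or "LEAF"
            feasible: bool = False      # only meaningful for leaves
            children: List["Node"] = field(default_factory=list)

            def evaluate(self) -> bool:
                # OR: any child attack suffices; AND: all are required.
                if self.kind == "LEAF":
                    return self.feasible
                results = [child.evaluate() for child in self.children]
                return all(results) if self.kind == "AND" else any(results)

        root = Node("steal customer data", "OR", children=[
            Node("network path", "AND", children=[
                Node("bypass firewall", feasible=True),
                Node("crack database password", feasible=False),
            ]),
            Node("insider copies files", feasible=True),
        ])
        print(root.evaluate())  # True: the insider branch alone suffices

    The maintenance problem the paper targets amounts to regenerating and re-evaluating such trees as the architecture model and the security knowledge database evolve, rather than editing them by hand.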

    Information commons planning: Strategic and operational considerations

    This session will introduce you to planning issues and operational elements that you will need to consider when implementing an Information Commons. The planning process, service models, collaboration issues and relationship building will be discussed in the context of a case study of the University of Auckland Library’s Information Commons Group. The Information Commons Group consists of the large purpose-built Kate Edger Information Commons (established in 2003) on the City Campus, the smaller Grafton Information Commons (established in 2004) on the Medical Campus and the library-based Epsom Information Commons (established in 2006) on the Education Campus. The group comprises three models of co-location, collaboration, integration and innovation, successfully operating within the same IT, service and staffing infrastructure. These student-centered learning facilities provide proactive, integrated learning support in a collaborative, interdisciplinary physical and virtual learning environment.

    Algorithms for advance bandwidth reservation in media production networks

    Media production generally requires many geographically distributed actors (e.g., production houses, broadcasters, advertisers) to exchange huge amounts of raw video and audio data. Traditional distribution techniques, such as dedicated point-to-point optical links, are highly inefficient in terms of installation time and cost. To improve efficiency, shared media production networks that connect all involved actors over a large geographical area are currently being deployed. The traffic in such networks is often predictable, as the timing and bandwidth requirements of data transfers are generally known hours or even days in advance. As such, the use of advance bandwidth reservation (AR) can greatly increase resource utilization and cost efficiency. In this paper, we propose an Integer Linear Programming (ILP) formulation of the bandwidth scheduling problem that takes into account the specific characteristics of media production networks. Two novel optimization algorithms based on this model are thoroughly evaluated and compared by means of in-depth simulation results.
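
    To make the flavor of such a formulation concrete, here is a minimal single-link, time-slotted advance-reservation ILP using the PuLP library. The model, variable names, and numbers are our simplification and do not reproduce the paper's network-wide formulation:

        from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum

        # Requests: (bandwidth, earliest start slot, latest window slot, duration).
        requests = [(4, 0, 3, 2), (6, 1, 5, 3), (5, 0, 5, 2)]
        CAPACITY, SLOTS = 10, 6

        def starts(r):
            # Feasible start slots for request r within its booking window.
            bw, earliest, latest, dur = requests[r]
            return range(earliest, min(latest, SLOTS - dur) + 1)

        prob = LpProblem("advance_reservation", LpMaximize)
        # x[r, t] = 1 if request r is admitted and starts in slot t.
        x = {(r, t): LpVariable(f"x_{r}_{t}", cat=LpBinary)
             for r in range(len(requests)) for t in starts(r)}

        # Each request is admitted at most once.
        for r in range(len(requests)):
            prob += lpSum(x[r, t] for t in starts(r)) <= 1
        # Objective: maximize admitted bandwidth-slots.
        prob += lpSum(requests[r][0] * requests[r][3] * x[r, t] for (r, t) in x)
        # The link capacity must hold in every time slot.
        for slot in range(SLOTS):
            prob += lpSum(requests[r][0] * x[r, t] for (r, t) in x
                          if t <= slot < t + requests[r][3]) <= CAPACITY

        prob.solve()
        for (r, t), var in sorted(x.items()):
            if var.value() == 1:
                print(f"request {r} starts at slot {t}")

    The advance-knowledge aspect shows up in the booking windows: because start times are flexible within a known window, the solver can stagger transfers to keep the link within capacity.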

    Trusted Computing and Secure Virtualization in Cloud Computing

    Large-scale deployment and use of cloud computing in industry is accompanied, and at the same time hampered, by concerns regarding the protection of data handled by cloud computing providers. One of the consequences of moving data processing and storage off company premises is that organizations have less control over their infrastructure. As a result, cloud service (CS) clients must trust that the CS provider is able to protect their data and infrastructure from both external and internal attacks. Currently, however, such trust can only rely on organizational processes declared by the CS provider and cannot be remotely verified and validated by an external party. Enabling the CS client to verify the integrity of the host where the virtual machine instance will run, as well as to ensure that the virtual machine image has not been tampered with, are steps towards building trust in the CS provider. Having the tools to perform such verifications prior to the launch of the VM instance allows CS clients to decide at runtime whether certain data should be stored on, or calculations should be made on, the VM instance offered by the CS provider.

    This thesis combines three components -- trusted computing, virtualization technology and cloud computing platforms -- to address issues of trust and security in public cloud computing environments. Of the three components, virtualization technology has had the longest evolution and is a cornerstone for the realization of cloud computing. Trusted computing is a recent industry initiative that aims to implement the root of trust in a hardware component, the trusted platform module; the initiative has been formalized in a set of specifications and is currently at version 1.2. Cloud computing platforms pool virtualized computing, storage and network resources in order to serve a large number of customers, using a multi-tenant model to offer on-demand self-service over broad network access. Open-source cloud computing platforms are, like trusted computing, a fairly recent technology in active development.

    The issue of trust in public cloud environments is addressed by examining the state of the art within cloud computing security and subsequently addressing the issues of establishing trust in the launch of a generic virtual machine in a public cloud environment. As a result, the thesis proposes a trusted launch protocol that allows CS clients to verify and ensure the integrity of the VM instance at launch time, as well as the integrity of the host where the VM instance is launched. The protocol relies on the use of a Trusted Platform Module (TPM) for key generation and data protection; the TPM also plays an essential part in the integrity attestation of the VM instance host. Along with a theoretical, platform-agnostic protocol, the thesis also describes a detailed implementation design of the protocol using the OpenStack cloud computing platform. In order to verify the implementability of the proposed protocol, a prototype implementation has been built using a distributed deployment of OpenStack. While the protocol covers only the trusted launch procedure using generic virtual machine images, it represents a step towards the creation of a secure and trusted public cloud computing environment.
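
    The two integrity checks at the heart of such a launch protocol can be sketched as follows. The trust policy, digest values, and function names are simplified assumptions for illustration; a real protocol verifies a quote signed by the host TPM's attestation key, which this sketch omits, and this is not the thesis's actual message format:

        import hashlib

        # Client-side trust policy (illustrative digests, not real measurements).
        TRUSTED_HOST_MEASUREMENTS = {"a" * 64}  # known-good hypervisor stack
        TRUSTED_IMAGE_DIGESTS = {hashlib.sha256(b"vm-image-v1").hexdigest()}

        def verify_launch(host_measurement: str, vm_image: bytes) -> bool:
            # Step 1: host integrity. In a real protocol this value arrives in
            # a TPM-signed quote; signature verification is omitted here.
            if host_measurement not in TRUSTED_HOST_MEASUREMENTS:
                return False
            # Step 2: image integrity. The digest must match the client's
            # expected value, showing the image was not tampered with.
            return hashlib.sha256(vm_image).hexdigest() in TRUSTED_IMAGE_DIGESTS

        print(verify_launch("a" * 64, b"vm-image-v1"))  # True: trusted host and image
        print(verify_launch("b" * 64, b"vm-image-v1"))  # False: unknown host state

    The point of performing both checks before launch is that a correct image on a compromised host, or a tampered image on a correct host, should equally cause the client to refuse the instance.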