    Optimizing the resource utilization of enterprise content management workloads through measured performance baselines and dynamic topology adaptation

    To comply with legal requirements, organizations must retain data for a certain period of time. Because they create huge amounts of data on a daily basis, managing and storing that data under these retention requirements is difficult. This is where Enterprise Content Management (ECM) systems come into the picture. ECM is a means of organizing and storing an organization's documents and other content that relates to the organization's processes. With cloud computing, ECM can be offered as a shared service, which has several benefits; one of them is that it is a cheaper way to meet the needs of large organizations with differing requirements for ECM functionality. ECM systems use resources such as memory, central processing unit (CPU), and disk, which are shared among different clients (organizations). Each client has a service level agreement (SLA) that describes the performance criteria the provider promises to meet while delivering the ECM service. Various techniques exist to improve the performance of an ECM system by optimizing resource use while meeting the SLAs; this thesis uses a heuristic technique. Performance baselines and resource utilization are measured for the clients' different workloads, and on that basis the ECM system's resources are dynamically provisioned or assigned to the clients to optimize resource utilization and improve performance. First, a typical workload is designed that resembles the work performed by banks and insurance companies using IBM ECM systems and that consists of interactive and batch operations. Performance baselines are measured for these workloads by monitoring key performance indicators (KPIs) with a variable number of users operating on the system concurrently. Once the KPI and resource utilization results are available, resources are assigned dynamically according to their utilization, so that resource use is optimized while clients receive better service at the same time.
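    To make the utilization-driven idea concrete, here is a minimal sketch of the kind of heuristic reallocation loop the abstract describes: shrink the share of underloaded tenants and grow the share of tenants missing their SLA. All names, fields, and thresholds are illustrative assumptions, not taken from the thesis.

```python
from dataclasses import dataclass

# Hypothetical per-client measurement record; field names are illustrative.
@dataclass
class ClientStats:
    name: str
    cpu_util: float   # fraction of assigned CPU actually used (0..1)
    mem_util: float   # fraction of assigned memory actually used (0..1)
    sla_met: bool     # whether the measured KPIs stayed within the SLA

def rebalance(stats, shares, step=0.1):
    """Heuristic reallocation: take resources away from underloaded
    clients and grant more to clients that are breaching their SLA."""
    for s in stats:
        if not s.sla_met:
            # SLA breach: grow this client's resource share.
            shares[s.name] *= (1 + step)
        elif s.cpu_util < 0.5 and s.mem_util < 0.5:
            # Less than half of the share is used: shrink it, freeing
            # capacity for other tenants of the shared ECM system.
            shares[s.name] *= (1 - step)
    return shares

# Example: two tenants sharing one ECM deployment.
stats = [
    ClientStats("bank", cpu_util=0.95, mem_util=0.90, sla_met=False),
    ClientStats("insurer", cpu_util=0.30, mem_util=0.20, sla_met=True),
]
print(rebalance(stats, {"bank": 1.0, "insurer": 1.0}))  # bank grows, insurer shrinks
```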

    Deletion of content in large cloud storage systems

    This thesis discusses the practical implications and challenges of providing secure deletion of data in cloud storage systems. Secure deletion is a desirable feature for some users and a hard requirement for others. The term describes deleting data in such a way that it cannot be reconstructed later, even by forensic means. This work discusses the practice of secure deletion as well as the methods in use today. When moving from traditional on-site data storage to cloud services, these existing methods are no longer applicable. For this reason, the thesis presents the concept of cryptographic deletion and points out the challenge of implementing it in a practical way. A discussion of related work in the areas of data encryption and cryptographic deletion shows a research gap: applying cryptographic deletion to cloud storage systems in an efficient, practical way. The main contribution of this thesis, the Key-Cascade method, closes this gap by providing an efficient data structure for managing large numbers of encryption keys. Secure deletion is practiced today by individuals and organizations who need to protect the confidentiality of data after it has been deleted. It is mostly achieved by physically destroying media or overwriting data on local hard disks and large storage systems. However, these traditional methods are not suited to large, distributed, shared cloud storage systems. The known concept of cryptographic deletion stores encrypted data in an untrusted storage system while keeping the key in a trusted location. Provided the encryption is effective, secure deletion of the data can then be achieved by securely deleting the key. Whether encryption is an acceptable protection mechanism must be decided either by legislators or by the customers themselves, depending on whether cryptographic deletion is done to satisfy legal requirements or customer requirements. The main challenge in implementing cryptographic deletion lies in the granularity of the delete operation: storage encryption providers today either require deleting the master key, which deletes all stored data, or require expensive copy and re-encryption operations. A few constructions in the literature provide optimized key management. The contributions of this thesis, embodied in the Key-Cascade method, expand on those findings and describe data structures and operations for implementing efficient cryptographic deletion in a cloud object store. The thesis discusses the conceptual aspects of the Key-Cascade method as well as its mathematical properties. To enable production use of a Key-Cascade implementation, it presents multiple extensions to the concept, which improve performance and usability and enable frictionless integration into existing applications. With SDOS, the Secure Delete Object Store, a working implementation of the concepts and extensions is given. Its design as an API proxy is unique among existing cryptographic deletion systems and allows integration into existing applications without the need to modify them. Performance evaluations conducted with SDOS show that cryptographic deletion is feasible in practice. With MCM, the Micro Content Management system, the thesis also presents a larger demonstrator for SDOS; MCM shows how SDOS can be integrated into and deployed as part of a cloud data management application.
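    To illustrate the basic idea of cryptographic deletion that the abstract builds on, here is a minimal sketch using a two-level key hierarchy: per-object keys are wrapped under a master key held in trusted storage, data lives encrypted in untrusted storage, and deleting an object means discarding its wrapped key. This is only the naive baseline, not the actual Key-Cascade data structure; the object names, the flat dictionaries, and the use of AES-GCM from the `cryptography` package are all assumptions for illustration.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Master key: the only secret that must live in a trusted location.
master_key = AESGCM.generate_key(bit_length=256)

wrapped_keys = {}  # object id -> (nonce, per-object key wrapped by master key)
cloud_store = {}   # object id -> (nonce, ciphertext); may be untrusted

def put(obj_id: str, data: bytes) -> None:
    """Encrypt data under a fresh per-object key; wrap that key."""
    obj_key = AESGCM.generate_key(bit_length=256)
    n1, n2 = os.urandom(12), os.urandom(12)
    wrapped_keys[obj_id] = (n1, AESGCM(master_key).encrypt(n1, obj_key, None))
    cloud_store[obj_id] = (n2, AESGCM(obj_key).encrypt(n2, data, None))

def get(obj_id: str) -> bytes:
    """Unwrap the per-object key, then decrypt the stored ciphertext."""
    n1, wrapped = wrapped_keys[obj_id]
    obj_key = AESGCM(master_key).decrypt(n1, wrapped, None)
    n2, ct = cloud_store[obj_id]
    return AESGCM(obj_key).decrypt(n2, ct, None)

def secure_delete(obj_id: str) -> None:
    # Discarding the wrapped key makes the ciphertext unrecoverable,
    # whether or not the cloud provider ever erases the bytes.
    del wrapped_keys[obj_id]
    cloud_store.pop(obj_id, None)  # best-effort cleanup only

put("doc1", b"retention-expired record")
print(get("doc1"))
secure_delete("doc1")
```

    The flat key table above is exactly the kind of key management the thesis sets out to improve: it gives per-object delete granularity, but the trusted side must securely store and delete one wrapped key per object. As described in the abstract, the Key-Cascade method instead organizes keys in an efficient data structure so that key management scales to large numbers of objects.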