5 research outputs found

    Data security in multi-tenant environments in the cloud

    Get PDF
    While cloud computing is widely used in consumer applications, business and enterprise customers remain hesitant. The most commonly cited issues preventing the adoption of cloud computing are reliability, security, and privacy. Enterprise Software as a Service solutions offered in the cloud consist of many distinct components that are integrated into a solution consumed by the customer. The individual components communicate with and complement each other's services to form a complex solution. This communication is often not properly secured, because the components were developed for non-cloud scenarios in which the security requirements for inter-process and inter-component communication are less stringent. Preventing unauthorized access by users, processes, or components is a basic requirement for any solution. Especially in a cloud context, the integration of untrusted or less trusted components may be required, while a trustworthy overall solution is still expected. As a first line of defense, access to systems and services is secured by authentication mechanisms. This requires a system to validate user credentials as well as to prove its own identity to the user. The individual components comprising a cloud service also need to authenticate each other in order to prevent unauthorized access by compromised components or systems. Securing this communication by authentication requires the individual components to have access to certain keys. While authentication secures services against unauthorized access, encryption can often be employed to secure data in transit or in storage. In both cases similar problems arise: when keys are used for encryption and authentication, the security of the system rests on managing those keys securely. This thesis investigates technology options for authentication, encryption, and key management in a cloud-based Software as a Service solution, exemplified by the IBM SmartCloud Archive.
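
    To make the component-authentication problem concrete, here is a minimal Python sketch, not taken from the thesis, in which one component authenticates its messages to another with an HMAC over a shared key. The component names are hypothetical, and the locally generated key stands in for a real key-management service.

        import hashlib
        import hmac
        import os
        import time

        # Hypothetical shared secret; in a real deployment each pair of components
        # would obtain its key from a key-management service, not generate it locally.
        SHARED_KEY = os.urandom(32)

        def sign_request(sender: str, timestamp: int, payload: bytes, key: bytes) -> str:
            # The MAC covers sender identity, timestamp, and payload, so a compromised
            # component cannot replay or alter another component's messages unnoticed.
            message = b"|".join([sender.encode(), str(timestamp).encode(), payload])
            return hmac.new(key, message, hashlib.sha256).hexdigest()

        def verify_request(sender: str, timestamp: int, payload: bytes,
                           mac: str, key: bytes, max_age: int = 30) -> bool:
            # Reject stale messages and compare MACs in constant time.
            if abs(time.time() - timestamp) > max_age:
                return False
            expected = sign_request(sender, timestamp, payload, key)
            return hmac.compare_digest(expected, mac)

        # The hypothetical component "archive-frontend" authenticates a request
        # to "archive-store"; both hold the same shared key.
        now = int(time.time())
        tag = sign_request("archive-frontend", now, b"PUT /documents/42", SHARED_KEY)
        assert verify_request("archive-frontend", now, b"PUT /documents/42", tag, SHARED_KEY)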

    Deletion of content in large cloud storage systems

    Get PDF
    This thesis discusses the practical implications and challenges of providing secure deletion of data in cloud storage systems. Secure deletion is a desirable functionality to some users, but a requirement to others. The term secure deletion describes the practice of deleting data in such a way that it cannot be reconstructed later, even by forensic means. This work discusses the practice of secure deletion as well as existing methods in use today. When moving from traditional on-site data storage to cloud services, these existing methods are no longer applicable. For this reason, it presents the concept of cryptographic deletion and points out the challenge of implementing it in a practical way. A discussion of related work in the areas of data encryption and cryptographic deletion shows a research gap: applying cryptographic deletion in an efficient, practical way to cloud storage systems. The main contribution of this thesis, the Key-Cascade method, closes this gap by providing an efficient data structure for managing large numbers of encryption keys. Secure deletion is practiced today by individuals and organizations who need to protect the confidentiality of data after it has been deleted. It is mostly achieved by means of physical destruction or overwriting of local hard disks or large storage systems. However, these traditional methods of overwriting data or destroying media are not suited to large, distributed, and shared cloud storage systems. The known concept of cryptographic deletion describes storing encrypted data in an untrusted storage system while keeping the key in a trusted location. Given that the encryption is effective, secure deletion of the data can then be achieved by securely deleting the key. Whether encryption is an acceptable protection mechanism must be decided either by legislature or by the customers themselves, depending on whether cryptographic deletion is done to satisfy legal requirements or customer requirements. The main challenge in implementing cryptographic deletion lies in the granularity of the delete operation. Storage encryption providers today either require deleting the master key, which deletes all stored data, or require expensive copy and re-encryption operations. In the literature, a few constructions can be found that provide optimized key management. The contributions of this thesis, embodied in the Key-Cascade method, expand on those findings and describe data structures and operations for implementing efficient cryptographic deletion in a cloud object store. This thesis discusses the conceptual aspects of the Key-Cascade method as well as its mathematical properties. In order to enable production use of a Key-Cascade implementation, it presents multiple extensions to the concept. These extensions improve performance and usability and enable frictionless integration into existing applications. With SDOS, the Secure Delete Object Store, a working implementation of the concepts and extensions is given. Its design as an API proxy is unique among existing cryptographic deletion systems and allows integration into existing applications without the need to modify them. The results of performance evaluations conducted with SDOS show that cryptographic deletion is feasible in practice. With MCM, the Micro Content Management system, this thesis also presents a larger demonstrator system for SDOS. MCM provides insight into how SDOS can be integrated into, and deployed as part of, a cloud data management application.
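
    The core idea behind cryptographic deletion, which the Key-Cascade method makes efficient, can be illustrated in a few lines of Python. The sketch below keeps one key per object in a flat trusted key table; this is deliberately the naive scheme, not the thesis's Key-Cascade data structure, which exists precisely to avoid managing that many keys directly. It uses Fernet from the third-party cryptography package, and the class and method names are hypothetical.

        from cryptography.fernet import Fernet  # third-party: pip install cryptography

        class CryptoDeletingStore:
            def __init__(self):
                self._untrusted = {}     # ciphertexts; may live in a cloud object store
                self._trusted_keys = {}  # per-object keys; must stay in a trusted location

            def put(self, name: str, data: bytes) -> None:
                key = Fernet.generate_key()
                self._trusted_keys[name] = key
                self._untrusted[name] = Fernet(key).encrypt(data)

            def get(self, name: str) -> bytes:
                return Fernet(self._trusted_keys[name]).decrypt(self._untrusted[name])

            def secure_delete(self, name: str) -> None:
                # Destroying the key makes the ciphertext permanently unreadable,
                # even if the untrusted store or its backups retain copies of it.
                del self._trusted_keys[name]
                self._untrusted.pop(name, None)

        store = CryptoDeletingStore()
        store.put("report.txt", b"confidential")
        print(store.get("report.txt"))
        store.secure_delete("report.txt")  # ciphertext is now unrecoverable

    A key hierarchy in the spirit of the Key-Cascade method would replace this flat key table with a tree of keys wrapped by other keys, so that only a small root needs trusted storage while deletion can still target individual objects.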

    Optimization of the physical operators of a native RDF triple store for modern processor hardware

    No full text
    The performance of commodity CPUs has long been growing faster than memory speed. This widening gap makes the optimization of memory accesses an increasingly important tool in optimizing database systems. This student research project therefore presents an alternative implementation of the RDF-3X database engine, with the goal of improving the access patterns to the processor cache and main memory and thereby achieving efficiency gains. The architecture of database systems is explained, and an improvement to the operators of the experimental open-source database RDF-3X is proposed. Performance measurements of the implementation quantify the success of the improvements, and pointers for future work in this area are given.
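
    As an illustration of the kind of operator this work targets: RDF-3X stores its triples ID-encoded in sorted clustered indexes, so many joins can be evaluated as merge joins over sorted integer runs, an access pattern that scans memory strictly sequentially and is therefore friendly to the processor cache and prefetcher. The following Python sketch of such a merge join is illustrative only and not taken from the thesis.

        def merge_join(left: list[int], right: list[int]) -> list[int]:
            # Both inputs are sorted ID lists, as delivered by RDF-3X's ordered
            # index scans. Each list is consumed strictly sequentially, so the
            # hardware prefetcher can stream the data through the cache instead
            # of chasing pointers as a hash- or tree-based join would.
            out, i, j = [], 0, 0
            while i < len(left) and j < len(right):
                if left[i] < right[j]:
                    i += 1
                elif left[i] > right[j]:
                    j += 1
                else:
                    out.append(left[i])
                    i += 1
                    j += 1
            return out

        # Join subject IDs produced by two sorted index scans.
        print(merge_join([1, 3, 5, 7, 9], [2, 3, 5, 8, 9]))  # -> [3, 5, 9]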

    Particulate Matter Matters – The Data Science Challenge @ BTW 2019

    Get PDF
    For the second time, the Data Science Challenge took place as part of the 18th symposium “Database Systems for Business, Technology and Web” (BTW) of the Gesellschaft für Informatik (GI). The Challenge was organized by the University of Rostock and sponsored by IBM and SAP. This year, the challenge focused on the integration, analysis, and visualization of data on particulate matter pollution. After a preselection round, the accepted participants had one month to adapt the approach they had developed to a concrete problem, the actual challenge. The final presentations took place at BTW 2019 in front of the prize jury and the attending audience. In this article, we give a brief overview of the schedule and organization of the Data Science Challenge. In addition, the participants present the problem to be solved and their solutions.

    Cohesin Associates with Spindle Poles in a Mitosis-specific Manner and Functions in Spindle Assembly in Vertebrate Cells

    No full text
    Cohesin is an essential protein complex required for sister chromatid cohesion. Cohesin associates with chromosomes and establishes sister chromatid cohesion during interphase. During metaphase, a small amount of cohesin remains at the chromosome-pairing domain, mainly at the centromeres, whereas the majority of cohesin resides in the cytoplasm, where its functions remain unclear. We describe the mitosis-specific recruitment of cohesin to the spindle poles through its association with centrosomes and its interaction with the nuclear mitotic apparatus protein (NuMA). Overexpression of NuMA enhances cohesin accumulation at spindle poles. Although transient cohesin depletion did not visibly impair normal spindle formation, recovery from nocodazole-induced spindle disruption was significantly impaired. Importantly, selective blocking of cohesin localization to centromeres, which disrupts centromeric sister chromatid cohesion, had no effect on this spindle reassembly process, clearly separating the roles of cohesin at kinetochores and spindle poles. In vitro, chromosome-independent spindle assembly using mitotic extracts was compromised by cohesin depletion, and it was rescued by addition of cohesin that was isolated from mitotic, but not S phase, cells. The combined results identify a novel spindle-associated role for human cohesin during mitosis, in addition to its function at the centromere/kinetochore regions.