8 research outputs found

    An Approach towards Data Protection as a Service Intended for Cloud Masses

    The cloud is a repository of data that is stored, modified, or retrieved by clients. Through a cloud service provider, users can consume services on a pay-per-use basis. Many services, such as fast data access, scalability, data storage, and data recovery, can benefit customers. Cloud service providers are responsible for the security and protection of their customers' data, which moves across networks and vast data centers. Security features added to a single cloud platform can serve thousands of requests and benefit hundreds of millions of users. Protecting data requires a variety of sophisticated tools, mechanisms, and activities. Keywords: data protection as a service, cryptographic defenses, single cloud platform

    Data Protection as a Service in the Cloud

    Cloud computing enables highly scalable services to be easily consumed over the Internet as and when needed. A major feature of cloud services is that users' data are usually processed remotely on machines that users do not own or operate. Providing strong data protection to cloud users while enabling rich applications is a challenging task. We explore a new cloud platform architecture called Data Protection as a Service, which dramatically reduces the per-application development effort required to offer data protection, while still allowing rapid development and maintenance.

    Identification of time series components using Break for Time Series Components (BFTSC) and Group for Time Series Components (GFTSC) techniques

    Commonly in time series modelling, the four time series components, namely trend, seasonal, cyclical, and irregular, are identified manually from the time series plot. However, this manual identification approach requires the tacit knowledge of an expert forecaster. Thus, an automated identification approach is needed to bridge the gap between expert and end user. Previously, a technique known as Break for Additive Seasonal and Trend (BFAST) was developed to automatically identify only the linear trend and seasonal components, treating the other two (cyclical and irregular) as random. In this study, BFAST was therefore extended to identify all four time series components using two new techniques, termed Break for Time Series Components (BFTSC) and Group for Time Series Components (GFTSC). Both techniques were developed by adding cyclical and irregular components to the existing BFAST technique. The performance of BFTSC and GFTSC was validated through simulation and empirical studies. In the simulation study, monthly and yearly data were replicated 100 times for three sample sizes (small, medium, and large), with the four time series components embedded as the simulation conditions. The percentage of correctly identified time series components was calculated for the simulated data. In the empirical study, four data sets were used to compare the manual identification approach with BFTSC and GFTSC automatic identification. The simulation findings indicated that BFTSC and GFTSC identified the correct time series components 100% of the time when a large sample size was combined with a linear trend and the remaining time series components. The empirical findings also supported BFTSC and GFTSC, which performed as well as the manual identification approach for the two data sets exhibiting a linear trend combined with other components; both techniques did not perform well on the other two data sets, which displayed a curved trend. These findings indicate that the BFTSC and GFTSC automatic identification techniques are suitable for data with a linear trend and require future extensions for other trends. The proposed techniques help end users reduce the time needed to identify time series components.
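The BFTSC/GFTSC algorithms themselves are not reproduced in this abstract, so the following is only a minimal sketch of the general idea they automate: splitting a series into additive components (here just trend and seasonal, estimated by a centered moving average and per-position seasonal means). All function names and the example series are invented for illustration.

```python
def moving_average(xs, window):
    """Centered moving average; None where the window is incomplete."""
    half = window // 2
    return [None if i < half or i + half >= len(xs)
            else sum(xs[i - half:i + half + 1]) / window
            for i in range(len(xs))]

def decompose_additive(xs, period):
    """Additive decomposition sketch: x_t = trend_t + seasonal_t + remainder_t."""
    window = period if period % 2 else period + 1  # centered window must be odd
    trend = moving_average(xs, window)
    # Detrend only where the trend estimate exists (interior points).
    detrended = [x - t for x, t in zip(xs, trend) if t is not None]
    # Average each seasonal position; positions are relative to the first
    # complete window, not to the start of the original series.
    seasonal = [sum(detrended[i::period]) / len(detrended[i::period])
                for i in range(period)]
    return trend, seasonal

# Linear trend (slope 0.5) plus a period-3 seasonal pattern [3, -1, -2].
series = [i * 0.5 + [3, -1, -2][i % 3] for i in range(30)]
trend, seasonal = decompose_additive(series, 3)
```

With this synthetic series the recovered seasonal effects are exactly the embedded pattern (up to the phase offset noted in the comment), which is the kind of check the simulation study above performs at scale.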

    Comprehensive and Practical Policy Compliance in Data Retrieval Systems

    Data retrieval systems such as online search engines and online social networks process many data items coming from different sources, each subject to its own data use policy. Ensuring compliance with these policies in a large and fast-evolving system presents a significant technical challenge, since bugs, misconfigurations, or operator errors can cause (accidental) policy violations. To prevent such violations, researchers and practitioners develop policy compliance systems. Existing policy compliance systems, however, are either not comprehensive or not practical. To be comprehensive, a compliance system must be able to enforce users' policies regarding their personal privacy preferences, the service provider's own policies regarding data use such as auditing and personalization, and regulatory policies such as data retention and censorship. To be practical, a compliance system needs to meet stringent requirements: (1) runtime overhead must be low; (2) existing applications must run with few modifications; and (3) bugs, misconfigurations, or actions by unprivileged operators must not cause policy violations. In this thesis, we present the design and implementation of two comprehensive and practical compliance systems: Thoth and Shai. Thoth relies on pure runtime monitoring: it tracks data flows by intercepting processes' I/O, and then checks the associated policies to allow only policy-compliant flows at runtime. Shai, on the other hand, combines offline analysis and lightweight runtime monitoring: it pushes as many policy checks as possible to an offline (flow) analysis by predicting the policies that data-handling processes will be subject to at runtime, and then compiles those policies into a set of fine-grained I/O capabilities that can be enforced directly by the underlying operating system.
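The Thoth-style runtime idea above can be sketched as a reference monitor: every data item carries a policy, and each intercepted I/O operation is allowed only if the policy permits the flow. This is not Thoth's or Shai's actual policy language or implementation; the policy kinds ("owner_only", "public") and item names are invented for illustration.

```python
# Hypothetical per-item policies, keyed by data item name.
DATA_POLICIES = {
    "profile.db": {"kind": "owner_only", "owner": "alice"},
    "index.dat":  {"kind": "public"},
}

def flow_allowed(item, principal):
    """Return True iff `principal` may read `item` under its policy."""
    policy = DATA_POLICIES.get(item, {"kind": "deny"})
    if policy["kind"] == "public":
        return True
    if policy["kind"] == "owner_only":
        return principal == policy["owner"]
    return False  # default-deny: unknown items or policy kinds leak nothing

def read(item, principal):
    """Intercepted I/O: refuse, rather than leak, on a non-compliant flow."""
    if not flow_allowed(item, principal):
        raise PermissionError(f"policy violation: {principal} -> {item}")
    return f"<contents of {item}>"
```

The default-deny fallthrough mirrors the practicality requirement (3) above: a misconfigured or missing policy fails closed instead of causing a violation.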

    Dynamic collaboration and secure access of services in multi-cloud environments

    Cloud computing services have gained popularity in both public and enterprise domains, and they process a large amount of user data with varying privacy levels. The increasing demand for cloud services, including storage and computation, requires new functional elements and provisioning schemes to meet user requirements. Multi-clouds can optimise user requirements by allowing users to choose the best services from the large number of services offered by various cloud providers, as multi-clouds are massively scalable, can be dynamically configured, and are delivered on demand with large-scale infrastructure resources. A major concern with multi-cloud adoption is the lack of models for multi-clouds and for their associated security issues, which become more unpredictable in a multi-cloud environment. Moreover, to trust services in a foreign cloud, users depend on the assurances given by the cloud provider, but cloud providers offer very limited evidence or accountability, which gives them the ability to hide some behaviour of the service. In this thesis, we propose a model for multi-cloud collaboration that can securely establish dynamic collaboration between heterogeneous clouds using the cloud on-demand model. First, threat modelling for cloud services is carried out, leading to the identification of various threats to service interfaces along with the possible attackers and the mechanisms to exploit those threats. Based on these threats, a cloud provider can apply suitable mechanisms to protect services and user data. In the next phase, we present a lightweight and novel authentication mechanism, which is formally verified and provides single sign-on (SSO) to users for authentication at runtime between multi-clouds before granting them service access. Next, we provide a service scheduling mechanism to select the best services from multiple cloud providers that most closely match the user's quality of service (QoS) requirements. 
The scheduling mechanism achieves high accuracy by applying a distance-correlation weighting mechanism across a large number of service QoS parameters. In the next stage, novel service level agreement (SLA) management mechanisms are proposed to ensure secure service execution in the foreign cloud. The SLA mechanisms ensure that the user's QoS parameters for a particular service, including functional requirements (CPU, RAM, memory, etc.) and non-functional requirements (bandwidth, latency, availability, reliability, etc.), are negotiated before secure collaboration between multi-clouds is set up. The multi-cloud handling user requests is responsible for enforcing mechanisms that fulfil the QoS requirements agreed in the SLA, while the monitoring phase of the SLA involves monitoring the service execution in the foreign cloud, checking its compliance with the SLA, and reporting back to the user. Finally, we present use cases of applying the proposed model in scenarios such as the Internet of Things (IoT) and e-healthcare in multi-clouds. Moreover, the designed protocols are empirically implemented on two different clouds, OpenStack and Amazon AWS. Experiments indicate that the proposed model is scalable, that the authentication protocols incur only limited overhead compared to standard authentication protocols, that service scheduling achieves high efficiency, and that any SLA violations by a cloud provider can be recorded and reported back to the user. My research for the first three years of the PhD was funded by the College of Engineering and Technology.
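The scheduling idea above can be sketched as ranking candidate services by a weighted distance between their QoS vectors and the user's requirements. The thesis derives its weights via distance correlation; the fixed weights, service names, and QoS numbers below are invented purely for illustration, and the QoS values are assumed to be pre-normalised to [0, 1].

```python
def weighted_distance(required, offered, weights):
    """Weighted Euclidean distance over normalised QoS parameters."""
    return sum(w * (required[k] - offered[k]) ** 2
               for k, w in weights.items()) ** 0.5

def best_service(required, candidates, weights):
    """Pick the candidate whose QoS profile is closest to the request."""
    return min(candidates,
               key=lambda c: weighted_distance(required, c["qos"], weights))

# Hypothetical user requirements and per-parameter weights.
user_req = {"latency": 0.2, "availability": 0.99, "cost": 0.3}
weights  = {"latency": 0.5, "availability": 0.3, "cost": 0.2}

# Hypothetical offerings from two providers in the multi-cloud.
services = [
    {"name": "cloudA", "qos": {"latency": 0.25, "availability": 0.95, "cost": 0.4}},
    {"name": "cloudB", "qos": {"latency": 0.6,  "availability": 0.99, "cost": 0.1}},
]
choice = best_service(user_req, services, weights)
```

Here the latency weight dominates, so the low-latency provider wins even though the other matches availability and cost more closely; the distance-correlation weighting in the thesis makes exactly this trade-off data-driven rather than fixed.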

    Improving trust in cloud, enterprise, and mobile computing platforms

    Trust plays a fundamental role in the adoption of technology by society. Potential consumers tend to avoid a particular technology whenever they feel suspicious about its ability to cope with their security demands. Such a loss of trust can occur in important computing platforms, namely cloud, enterprise, and mobile platforms. In this thesis, we aim to improve trust in these platforms by (i) enhancing their security mechanisms, and (ii) giving their users guarantees that these mechanisms are in place. To realize both goals, we propose several novel systems. For cloud platforms, we present Excalibur, a system that enables building trusted cloud services. Such services give cloud customers the ability to process data privately in the cloud, and to attest that the respective data protection mechanisms are deployed. Attestation is made possible by the use of trusted computing hardware placed on the cloud nodes. For enterprise platforms, we propose an OS security model, the broker security model, aimed at providing information security against a negligent or malicious system administrator while letting them retain most of the flexibility to manage the OS. We demonstrate the effectiveness of this model by building BrokULOS, a proof-of-concept instantiation of the model for Linux. For mobile platforms, we present the Trusted Language Runtime (TLR), a software system for hosting mobile apps with stringent security needs (e.g., e-wallet). The TLR leverages ARM TrustZone technology to protect mobile apps from OS security breaches.
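The attestation approach mentioned above rests on measured boot: trusted hardware accumulates hashes of loaded components into a register, and a verifier compares the final digest against a known-good value. Real deployments use TPM or TrustZone hardware for this; the sketch below is a pure-software illustration of the extend operation, not Excalibur's actual protocol, and the component names are invented.

```python
import hashlib

def extend(register, component):
    """PCR-style extend: register' = H(register || H(component))."""
    return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

def measure(components):
    """Accumulate measurements of each loaded component, in load order."""
    reg = b"\x00" * 32  # register starts zeroed, as on a reset TPM PCR
    for c in components:
        reg = extend(reg, c)
    return reg

# A verifier holding the known-good digest can detect any swapped component.
good = measure([b"bootloader-v1", b"kernel-v1", b"agent-v1"])
tampered = measure([b"bootloader-v1", b"kernel-evil", b"agent-v1"])
```

Because the register is a hash chain, tampering with any component (or reordering them) changes the final digest, which is what lets a remote customer check that the expected protection mechanisms were actually loaded.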

    Secure cloud computing in practice: identifying relevant criteria for evaluating the practical viability of technology approaches in the cloud computing field, with a focus on data protection and data security

    This dissertation examines various requirements for secure cloud computing. In particular, it analyses existing research and solution approaches for protecting data and processes in cloud environments and assesses their practical viability. The basis for comparability is a set of specified criteria against which the examined technologies are evaluated. The main goal of this work is to show how technical research approaches can be compared in order to enable an assessment of their suitability for practice. To this end, relevant subareas of cloud computing security are first identified, their solution strategies are discussed in the context of this work, and state-of-the-art methods are evaluated. The statement on practical viability follows from the ratio of the potential benefit to the associated expected costs. The potential benefit is defined as the combination of the performance, security, and functionality offered by the examined technology. For an objective evaluation, these three quantities are composed of specified criteria whose information is taken directly from the examined research papers. The expected costs are derived from cost keys for technology, operation, and development. This work explains and evaluates in detail the specified evaluation criteria as well as the relationship between the concepts introduced above. To better estimate suitability in practice, an adapted SWOT analysis is carried out for the identified relevant subareas; alongside the definition of the practicability statement, this constitutes the second innovation of this work. The concrete goal of this analysis is to increase the comparability between the subareas and thereby improve strategy planning for the development of secure cloud computing solutions.