15 research outputs found

    PREEMPTIVE PRIORITY SCHEDULING WITH AGING TECHNIQUE FOR EFFECTIVE DISASTER MANAGEMENT IN CLOUD

    Cloud computing has drawn the attention of today's leading IT industries, offering huge potential for more flexible, readily scalable and cost-effective IT operations. There is significant anticipation of emergent innovation and expanded capabilities in cloud-based environments, as cloud services are still at an infant stage. As with any technology innovation, the biggest gains will be realized when cloud services are efficiently utilized. One of the prime contributions of the Cloud is its capacity to handle huge amounts of data for either processing or storage. The inadequacy of this model is that it is prone to disaster. Some of the popular scheduling techniques applied by researchers and leading IT industries are Round Robin, preemptive scheduling, etc. This research focuses on a novel approach to disaster management through an efficient scheduling mechanism. This paper presents Priority Preemptive Scheduling (PPS) with aging of low-priority jobs in the Cloud for disaster management. The implementation results show that jobs at any instant of time are provided with resources, thereby preventing them from entering starvation, which is one of the prime causes of disaster.
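    The aging mechanism the abstract describes (gradually boosting the priority of waiting low-priority jobs so they cannot starve) can be sketched as follows; the job data, priority scale, and aging step are illustrative assumptions, not the paper's actual PPS algorithm:

```python
def run_with_aging(jobs, aging_step=1):
    """Round-based simulation of preemptive priority scheduling with aging.
    jobs: dict mapping name -> [priority, remaining_time], where a lower
    priority value means higher priority. Hypothetical sketch."""
    timeline = []
    while any(rem > 0 for _, rem in jobs.values()):
        # pick the runnable job with the best (lowest) effective priority
        name = min((n for n, (p, r) in jobs.items() if r > 0),
                   key=lambda n: jobs[n][0])
        jobs[name][1] -= 1          # run it for one time quantum
        timeline.append(name)
        for n, (p, r) in jobs.items():  # age every job still waiting
            if n != name and r > 0:
                jobs[n][0] = max(0, p - aging_step)
    return timeline
```

    With `{"A": [2, 3], "B": [5, 3]}` and an aging step of 2, job B's priority improves while it waits, so it briefly preempts A partway through instead of starving behind it.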

    Trustworthy Cloud Computing

    Trustworthy cloud computing has been a central tenet of the European Union's cloud strategy for nearly a decade. This chapter discusses the origins of trustworthy computing and, specifically, how the goals of trustworthy computing (security and privacy, reliability, and business integrity) are represented in computer science research. We call for further inter- and multi-disciplinary research on trustworthy cloud computing that reflects a more holistic view of trust.

    Big Data in the Cloud: A Survey

    Big Data has become a hot topic across several business areas requiring the storage and processing of huge volumes of data. Cloud computing leverages Big Data by providing high storage and processing capabilities and enables corporations to consume resources in a pay-as-you-go model, making clouds an optimal environment for storing and processing huge quantities of data. By using virtualized resources, the Cloud can scale very easily, be highly available, and provide massive storage capacity and processing power. This paper surveys existing database models for storing and processing Big Data within a Cloud environment. In particular, we detail the following NoSQL databases: BigTable, Cassandra, DynamoDB, HBase, Hypertable, and MongoDB. The MapReduce framework and its developments (Apache Spark, HaLoop, Twister), as well as other alternatives such as Apache Giraph, GraphLab, Pregel, and MapD (a novel platform that uses GPU processing to accelerate Big Data processing), are also analyzed. Finally, we present two case studies that demonstrate the successful use of Big Data within Cloud environments and the challenges that must be addressed in the future.
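    As a reminder of the programming model behind MapReduce and the successors surveyed above, here is a minimal single-process word-count sketch of the map, shuffle, and reduce phases; the document strings are illustrative, and a real framework would distribute each phase across nodes:

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # map: emit a (word, 1) pair for every word in the document
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # shuffle: group all emitted values by key, as the framework
    # does between the map and reduce phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # reduce: aggregate each key's values into a final count
    return {key: sum(values) for key, values in groups.items()}

def word_count(docs):
    mapped = chain.from_iterable(map_phase(d) for d in docs)
    return reduce_phase(shuffle(mapped))
```

    Systems like Spark keep the same map/shuffle/reduce structure but hold intermediate data in memory across iterations, which is the main speedup over classic MapReduce.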

    Disaster recovery in cloud computing systems: an overview

    With the rapid growth of internet technologies, large-scale online services such as data backup and data recovery are increasingly available. Since these large-scale online services require substantial networking, processing, and storage capacities, it has become a considerable challenge to design equally large-scale computing infrastructures that support these services cost-effectively. In response to this rising demand, cloud computing has been refined over the past decade and has turned into a lucrative business for organizations that own large datacenters and offer their computing resources. Undoubtedly, cloud computing provides tremendous benefits for data storage, backup, and data accessibility at a reasonable cost. This paper surveys and analyzes previous work on disaster recovery in cloud computing. The discussion concentrates on the positive aspects and the limitations of each proposal. Also examined are the current challenges in handling data recovery in the cloud context and the impact of a data backup plan on preserving data in the event of natural disasters. A summary of the leading research work is provided, outlining its weaknesses and limitations in the area of disaster recovery in the cloud computing environment. An in-depth discussion of current and future research trends in disaster recovery in cloud computing is also offered. Several research directions that ought to be explored are pointed out as well, which may help researchers discover and further investigate those problems related to disaster recovery in the cloud environment that remain unresolved.

    Implementation and Evaluation of Disaster Recovery Performance in a Cloud-Based Data Center Environment

    The increasing growth of internet use is matched by high demand from users of application services for applications to be accessible at any time. In this millennial era, application availability is paramount: if an application cannot be accessed even for one minute, it becomes a big problem for business owners and threatens the company's reputation, especially for businesses requiring 24-hour application availability, such as banking and e-commerce applications. To ensure the availability of applications and systems, the majority of companies with traditional infrastructure or private servers have disaster planning, ranging from the simplest approach of performing regular backups to building a real-time disaster recovery system. Currently, many cloud providers offer a service called DRaaS (Disaster Recovery as a Service) to meet the needs of companies that want to maintain business continuity. The number of cloud service providers and the similarity of the services they offer leave prospective users confused about which cloud provider suits their business systems. This research introduces a design method for implementing DRaaS on two different cloud service providers in order to obtain the RPO and RTO parameters that can then be taken into consideration when choosing a DRaaS service. The result of this study is a disaster recovery system that uses Google Cloud and Amazon Web Services as secondary servers and achieves an RPO of 3 minutes. The DRaaS system has been tested for failover with an RTO of 9.6 seconds for Google Cloud and 15.4 seconds for Amazon Web Services.
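    The two metrics the study measures can be computed directly from timestamps: RPO (Recovery Point Objective) is the data-loss window between the last successful replication and the failure, and RTO (Recovery Time Objective) is the downtime until the failover completes. The helper and timestamps below are illustrative, not the thesis's actual tooling:

```python
from datetime import datetime, timedelta

def recovery_metrics(last_replication, failure_time, service_restored):
    # RPO: how much data could be lost (window since the last good copy)
    rpo = failure_time - last_replication
    # RTO: how long the service was down before failover completed
    rto = service_restored - failure_time
    return rpo, rto
```

    With a last replication at 12:00:00, a failure at 12:03:00, and service restored at 12:03:09.6, this yields the 3-minute RPO and 9.6-second RTO figures reported for the Google Cloud configuration.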

    Cloud Standby - A Method for Maintaining an Emergency System in the Cloud

    Small and medium-sized enterprises (SMEs) are regularly exposed to risks in their day-to-day operations. While 94% of SMEs in Germany perform regular data backups, just 50% protect their critical processes and the systems involved in them with an emergency system hosted at another provider. The goal of this work is therefore to develop a new method for maintaining an emergency system in the cloud.

    Cloud Standby - A Method for Maintaining an Emergency System in the Cloud

    The goal of this work is to develop a new method for maintaining an emergency system in the cloud, consisting of a disaster recovery method and a model-based deployment method. The disaster recovery method comprises a disaster recovery process, a disaster recovery protocol, and decision support for configuring that process.

    Improving Security by Developing a Model for Authentication and User File Integrity Checking in Public Cloud Web Applications

    Cloud computing is being widely adopted and has shown a high impact on the development of businesses; it enables on-demand access to a shared pool of configurable computing resources. Like any other electronic system, cloud computing faces many security problems, among them attacks on user authentication and thus on the integrity and confidentiality of data, especially in the public cloud environment. Authentication plays a major role in keeping information secure in the cloud environment, and cloud users must be able to ensure the integrity of the files they store there. The main objective of this study is to develop a model for user authentication and for checking the integrity of files stored in the public cloud, by studying and analyzing the state of the art of security models in public cloud computing, in particular models for the integrity of data or files and for user authentication. The study uses descriptive, deductive, applied, and prototype methodologies. We developed a model for user authentication and file integrity checking for files in the cloud. For user authentication, we used two-factor authentication involving a password and a digital signature with certificate-based authentication. For file integrity checking, the model uses a secure hashing algorithm: the file's hash value is calculated and encrypted before being sent to the cloud. All file and data transfers between the cloud provider and the user are encrypted using symmetric and asymmetric encryption. We used several tools and programming languages to implement the model and the experiments. Our experiments showed that the model is effective and acceptable. Among the most important results is that the model provides strong user authentication and integrity checking for cloud users and files, along with confidentiality and non-repudiation. It also increases user confidence in cloud applications by ensuring a secure connection between cloud users and cloud service providers, and it uses little computation power on user devices. Future studies should address phishing attacks on web pages, and the model can be improved to verify the integrity of files shared by multiple users and to adapt to new security algorithms.
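    The hash-before-upload integrity step described above can be sketched as follows. This is a minimal illustration using SHA-256; it omits the model's encryption of the hash value and its certificate-based authentication:

```python
import hashlib
import hmac

def file_digest(data: bytes) -> str:
    # SHA-256 digest of the file contents, computed before upload
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, stored_digest: str) -> bool:
    # recompute the digest on download and compare in constant time,
    # avoiding timing side channels in the comparison itself
    return hmac.compare_digest(file_digest(data), stored_digest)
```

    Any modification of the stored file changes the digest, so a mismatch on download signals that integrity was violated in transit or at rest.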