
    An enhancement of TOE model by investigating the influential factors of cloud adoption security objectives

    Cloud computing (CC) is a key trend in technological infrastructure development and is growing strongly as the backbone of future industrial infrastructure. While CC services have much to offer, they also have major downsides that clients cannot ignore. SMEs are the most promising candidates for CC adoption: because they often lack resources, experience, expertise and strong financial structures, CC can be especially helpful to them. However, CC faces a major issue in terms of cloud security; organizations do not fully understand the security factors involved, and data owners have doubts about the safety of their data. This paper investigates cloud security objectives to identify the influential factors for cloud adoption in SMEs by proposing an enhancement of the Technology-Organization-Environment (TOE) model with positive influential factors such as cloud security, relative advantage, cost saving, availability, SLA, capability, top management, organizational readiness, IS knowledge, malicious insiders, government regulatory support, competitive pressure, size and type, and with negative influential factors such as technological readiness, cloud trust and lack of standards in cloud security. Data were collected by questionnaire from a selected IT company using SaaS on a public cloud. A case study method was used to validate the enhanced TOE model, and IBM SPSS Statistics v22 was used for data analysis. The results support the enhancement as well as all the proposed hypotheses: all the enhancement factors were found to have a significant security-related influence on the adoption of cloud computing by SMEs

    SLA Management in a Collaborative Network of Federated Clouds: The CloudLend

    Cloud services have always promised to be available, flexible, and fast. However, no single Cloud provider can deliver such promises to their distinctly demanding customers. Cloud providers have a constrained geographical presence and are willing to invest in infrastructure only when it is profitable to them. Cloud federation is a concept that collectively combines segregated Cloud services to create an extended pool of resources from which Clouds can competently deliver their promised level of service. This dissertation is concerned with studying the governing aspects related to the federation of Clouds through collaborative networking. The main objective is to define a framework for a Cloud network that balances the trade-offs among customers' various quality of service (QoS) requirements as well as providers' resource utilization. We propose a network of federated Clouds, CloudLend, that creates a platform for Cloud providers to collaborate and for customers to expand their service selections. We also define and specify a service level agreement (SLA) management model in order to govern and administer the relationships established between different Cloud services in CloudLend. We define a multi-level SLA specification model to annotate and describe QoS terms, in addition to a game theory-based automated SLA negotiation model that supports both customers and providers in negotiating SLA terms and guides them towards signing a contract. We also define an adaptive agent-based SLA monitoring model which identifies the root causes of SLA violations and impartially distributes any updates and changes in established SLAs to all relevant entities. Formal verification showed that the proposed framework provides customers with maximally optimized guarantees for their QoS requirements, in addition to supporting Cloud providers in making informed resource utilization decisions. Additionally, simulation results demonstrate the effectiveness of our SLA management model. The proposed CloudLend network and its SLA management model pave the way for resource sharing among different Cloud providers, which allows providers' lock-in constraints to be broken and enables effortless migration of customers' applications across different providers whenever needed
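To make the automated negotiation idea above concrete, here is a minimal sketch of an alternating-offers negotiation loop with linear concession. The utility weights, term model and acceptance threshold are illustrative assumptions and are not taken from the CloudLend dissertation's actual game-theoretic model.

```python
# Illustrative sketch of automated SLA negotiation via alternating offers with
# linear concession. Names and thresholds are assumptions, not CloudLend's model.

def utility(offer, weights, ideal, reserve):
    """Weighted utility of an offer over QoS terms, clamped to [0, 1] per term."""
    score = 0.0
    for term, value in offer.items():
        span = (ideal[term] - reserve[term]) or 1e-9  # avoid division by zero
        score += weights[term] * max(0.0, min(1.0, (value - reserve[term]) / span))
    return score

def concede(ideal, reserve, step, max_steps):
    """Offer that moves linearly from a party's ideal towards its reservation value."""
    frac = step / max_steps
    return {t: ideal[t] + frac * (reserve[t] - ideal[t]) for t in ideal}

def negotiate(cust, prov, max_steps=10, accept_at=0.6):
    """Alternate offers; a party accepts when the opponent's offer meets its threshold."""
    for step in range(max_steps):
        offer = concede(prov["ideal"], prov["reserve"], step, max_steps)
        if utility(offer, cust["weights"], cust["ideal"], cust["reserve"]) >= accept_at:
            return offer  # customer accepts the provider's offer
        counter = concede(cust["ideal"], cust["reserve"], step, max_steps)
        if utility(counter, prov["weights"], prov["ideal"], prov["reserve"]) >= accept_at:
            return counter  # provider accepts the customer's counter-offer
    return None  # negotiation failed within the deadline
```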

    SHARING WITH LIVE MIGRATION ENERGY OPTIMIZATION TASK SCHEDULER FOR CLOUD COMPUTING DATACENTRES

    The use of cloud computing is expanding, and it is becoming the driver for innovation in companies serving their customers around the world. Much attention has recently been drawn to the huge amounts of energy consumed within datacentres, while the energy consumption of the remaining cloud components is neglected. Energy consumption should therefore be reduced in order to minimize performance losses, achieve the target battery lifetime, satisfy performance requirements, minimize power consumption and CO2 emissions, maximize profit, and maximize resource utilization. Reducing power consumption in cloud computing datacentres can be achieved in many ways, such as managing or utilizing the resources, controlling redundancy, relocating datacentres, improving applications, or dynamic voltage and frequency scaling. One of the most effective ways to reduce power is to use a scheduling technique that finds the best task execution order based on user demands, with the minimum execution time and cloud resources. Designing an effective and efficient task scheduling technique driven by user requirements is a considerable challenge in a cloud environment. Scheduling is not an easy task because a datacentre contains dissimilar hardware with different capacities; to improve resource utilization, an efficient scheduling algorithm must be applied to the incoming tasks to achieve efficient computing resource allocation and power optimization. The scheduler must maintain the balance between quality of service and fairness among jobs so that efficiency may be increased. The aim of this project is to propose a novel method for optimizing energy usage in cloud computing environments that satisfies the Quality of Service (QoS) requirements and the regulations of the Service Level Agreement (SLA). Applying a power- and resource-optimised scheduling algorithm helps to control and improve the mapping between datacentre servers and incoming tasks, and to deploy datacentre resources optimally so as to achieve good computing efficiency, network load minimization and reduced energy consumption in the datacentre. This thesis explores energy-aware cloud datacentre structures with diverse scheduling heuristics and proposes a novel job scheduling technique with sharing and live migration based on file locality (SLM), aiming to maximize efficiency and save the power consumed in the datacentre through better bandwidth utilization, minimized processing time and a reduced total system makespan. The proposed SLM energy-efficient scheduling strategy has four basic algorithms: 1) job classifier, 2) SLM job scheduler, 3) dual-fold VM virtualization, and 4) VM threshold margins and consolidation. The SLM job classifier categorises the incoming set of user requests to the datacentre into two different queues based on the request type and the source file needed to process them; the processing time of each job fluctuates with the job type and the number of instructions per job. The second algorithm, the SLM scheduler, dispatches jobs from both queues according to job arrival time and allocates each job to the most appropriate available VM based on job similarity, according to a predefined synchronized job characteristic table (SJC).
The SLM scheduler uses a replicated host infrastructure to save the energy wasted by idle hosts, maximizing the utilization of the basic hosts as long as the system can handle the workflow while the replicated hosts are switched off. The third SLM algorithm, the dual-fold VM algorithm, divides the active VMs into top- and low-level slots so that similar jobs can be allocated concurrently, which maximizes host utilization under high workload and reduces the total makespan. The VM threshold margins and consolidation algorithm sets upper and lower threshold margins as triggers for VM consolidation and load balancing among running VMs, and deploys a continuous scheme for detecting overloaded and underutilized VMs to maintain and control the system's workload balance. Consolidation and load balancing are achieved by performing a series of dynamic live migrations, which provide auto-scaling for the servers within the datacentres. This thesis begins with an overview of cloud computing, then reviews conceptual cloud resource management strategies and a classification of scheduling heuristics. Following this, a competitive analysis of energy-efficient scheduling algorithms and related work is presented. The novel SLM algorithm is proposed and evaluated using the CloudSim toolkit under a number of scenarios; the results, compared to the Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) algorithms, show a significant improvement in energy usage and in the total makespan, i.e. the total time needed to finish processing all the tasks
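As a rough illustration of the classification and threshold mechanics described above, the sketch below assumes a simple two-queue classifier keyed on job type plus fixed utilization margins; the thesis' actual SLM algorithms, synchronized job characteristic table and CloudSim integration are considerably richer.

```python
# Rough sketch of SLM-style job classification and VM threshold checks.
# Names (Job, queues, thresholds, VM dicts) are illustrative assumptions,
# not the thesis' actual CloudSim implementation.
from collections import deque
from dataclasses import dataclass

UPPER, LOWER = 0.80, 0.20  # assumed utilization margins triggering consolidation

@dataclass
class Job:
    job_id: int
    job_type: str      # e.g. "compute" or "data"
    source_file: str   # file locality drives queue placement
    instructions: int

queue_a, queue_b = deque(), deque()

def classify(job: Job) -> None:
    """Place a job into one of two queues by its type and required source file."""
    (queue_a if job.job_type == "compute" else queue_b).append(job)

def dispatch(vms) -> None:
    """Send the oldest job from each queue to the least-loaded available VM."""
    for q in (queue_a, queue_b):
        if q:
            job = q.popleft()
            target = min(vms, key=lambda vm: vm["util"])
            target["jobs"].append(job.job_id)

def consolidation_candidates(vms):
    """VMs outside the threshold margins are candidates for live migration."""
    overloaded = [vm for vm in vms if vm["util"] > UPPER]
    underused = [vm for vm in vms if vm["util"] < LOWER]
    return overloaded, underused
```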

    Exploring Firm-Level Cloud Adoption and Diffusion

    Cloud computing innovation adoption literature has primarily focused on individuals, small businesses, and nonprofit organizations. The functional linkage between cloud adoption and diffusion is instrumental to understanding enterprise firm-level adoption. The purpose of this qualitative collective case study was to explore strategies used by information technology (IT) executives to make advantageous enterprise cloud adoption and diffusion decisions. The study was guided by an integrated diffusion of innovation and technology, organization, and environment conceptual framework to capture and model this complex, multifaceted problem. The study's population consisted of IT executives with cloud-centric roles in 3 large (revenues greater than $5 billion) telecom-related companies headquartered in the United States. Data collection included semistructured individual interviews (n = 19) and the analysis of publicly available financial documents (n = 50) and organizational technical documents (n = 41). Data triangulation and interviewee member checking were used to increase the validity of the study's findings. Inter- and intra-case analyses, using open and axial coding as well as constant comparative methods, were leveraged to identify 5 key themes: top management support, information source bias, organizational change management, governance at scale, and service selection. An implication of this study for positive social change is that IT telecom executives might be able to optimize diffusion decisions to benefit downstream consumers in need of services

    Developing a user-centric distributed middleware for SLA monitoring in SaaS cloud computing using RESTful services

    One of the most important discussions in the cloud computing field is user satisfaction with the associated services. It is important to maintain trusted relationships between clients and providers, so that customers who pay subscriptions receive these services in a timely and accurate manner. Despite the overwhelming advantages of cloud services, clients sometimes experience service outages and resource failures. These are caused by failures in cloud servers, which interrupt the received services; for example, the failure of Microsoft Office 365 on 18 January 2016 caused an email disruption which lasted for many days. New measures are needed to ensure that the contract signed between the two parties, known as a Service Level Agreement (SLA), has been adhered to. Measuring the quality of cloud computing provision from the client's point of view is therefore essential in order to ensure that the service conforms to the level specified in the agreement; this is usually referred to as Quality of Experience. In recent years, there has been an increasing shift from the Simple Object Access Protocol (SOAP) to Representational State Transfer (REST) technology as an alternative for developing cloud application APIs. However, most cloud monitoring solutions still tend to use the SOAP protocol to manage the monitoring process. This trend has drawn attention to the need to use REST technology for transferring the monitored data between the provider side and the client side. This thesis addresses the problem of monitoring the quality of Software as a Service from the users' perspective, and the need for a lightweight middleware for delivering the monitored data in Software as a Service cloud computing. The aim of this research is to propose a user-centric approach for monitoring Software as a Service in cloud computing and to reduce the overhead caused by the monitoring process. In order to achieve this aim, a user-centric middleware capable of monitoring the Quality of Experience has been developed. The developed middleware is a service-oriented middleware which uses RESTful web services and provides the monitoring process as an add-on service. A new approach was developed for embedding the SLA parameters in REST services by extending the HTTP messages and exploiting the HEAD and OPTIONS methods to transmit the monitored data and to send notifications about any SLA violations. This reduces the need to exchange extra monitoring messages between the two parties, and hence reduces the communication overhead. Furthermore, user satisfaction is estimated by a decision-making approach that computes the Quality of Experience value and predicts the effect of the SLA parameters and the Quality of Service (QoS) on user satisfaction; fuzzy logic techniques were employed in this decision-making process. The developed middleware is called MonSLAR: Monitoring SLA for RESTful services in SaaS cloud computing environments. The middleware was implemented in the Java programming language and tested successfully in a cloud environment, demonstrating the proposed solution's ability to transmit the data using the REST methods and to provide automated, real-time feedback. MonSLAR uses a distributed monitoring architecture, which allows SLA parameters to be embedded in the requests and responses of the REST protocol.
The proposed middleware was evaluated by measuring the overhead caused by using REST technology, in terms of response time and message size, and comparing it to existing techniques. The results revealed that the message size overhead of using REST is approximately five times smaller than that caused by SOAP. Furthermore, the response time overhead of the monitoring process is comparable to the overhead caused by the available monitoring frameworks. To sum up, the proposed middleware will help to strengthen the relationship between the client and the provider by giving the client real-time notifications of any degradation in the cloud services, using a lightweight middleware
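As a loose illustration of the header-embedding idea, the sketch below shows a client issuing a HEAD request and reading SLA-related data from custom response headers; the header names, endpoint and violation rule are assumptions for illustration only and do not reproduce MonSLAR's actual wire format.

```python
# Minimal sketch of carrying monitored SLA data in HTTP headers on HEAD/OPTIONS
# requests, in the spirit of MonSLAR. The "X-SLA-..." header names and the URL
# are hypothetical.
import requests  # third-party HTTP client

SERVICE_URL = "https://example-saas-provider.test/api/orders"  # hypothetical endpoint

def fetch_sla_headers(url: str) -> dict:
    """Issue a HEAD request and pull SLA-related metadata out of the response headers."""
    resp = requests.head(url, timeout=5)
    return {k: v for k, v in resp.headers.items() if k.lower().startswith("x-sla-")}

def check_violation(sla_headers: dict, max_response_ms: float = 500.0) -> bool:
    """Flag a violation if the reported response time exceeds the agreed bound."""
    reported = float(sla_headers.get("X-SLA-Response-Time-Ms", 0.0))
    return reported > max_response_ms

if __name__ == "__main__":
    headers = fetch_sla_headers(SERVICE_URL)
    if check_violation(headers):
        print("SLA violation detected:", headers)
```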

    Service level agreement specification for IoT application workflow activity deployment, configuration and monitoring

    Currently, we see the use of the Internet of Things (IoT) within various domains such as healthcare, smart homes, smart cars, smart-x applications, and smart cities. The number of applications based on IoT and cloud computing is projected to increase rapidly over the next few years. IoT-based services must meet guaranteed levels of quality of service (QoS) to match users' expectations. Ensuring QoS by specifying QoS constraints using service level agreements (SLAs) is crucial. Also, because of the potentially highly complex nature of multi-layered IoT applications, lifecycle management (deployment, dynamic reconfiguration, and monitoring) needs to be automated. To achieve this, it is essential to be able to specify SLAs in a machine-readable format, yet currently available SLA specification languages are unable to accommodate the unique characteristics (the interdependency of its multiple layers) of the IoT domain. Therefore, in this research, we propose a grammar for the syntactical structure of an SLA specification for IoT. The grammar is based on a proposed conceptual model that covers the main concepts needed to express the requirements of the most common hardware and software components of an IoT application on an end-to-end basis. We follow the Goal Question Metric (GQM) approach to evaluate the generality and expressiveness of the proposed grammar by reviewing its concepts and their predefined vocabularies against two use cases with a number of participants whose research interests are mainly related to IoT. The results of the analysis show that the proposed grammar achieved 91.70% of its generality goal and 93.43% of its expressiveness goal. To enhance the process of specifying SLA terms, we then developed a toolkit for creating SLA specifications for IoT applications. The toolkit simplifies the process of capturing the requirements of IoT applications. We demonstrate the effectiveness of the toolkit using a remote health monitoring service (RHMS) use case, and evaluate the tool's user experience through a questionnaire-oriented approach. We discuss the applicability of our tool by including it as a core component of two different applications: 1) a context-aware recommender system for IoT configuration across layers; and 2) a tool for automatically translating an SLA from JSON to a smart contract and deploying it on different peer nodes that represent the contractual parties, where the smart contract can monitor the created SLA using Blockchain technology. These two applications are utilized within our proposed SLA management framework for IoT. Furthermore, we propose a greedy heuristic algorithm to decentralize the workflow activities of an IoT application across Edge and Cloud resources to improve response time, cost, energy consumption and network usage. We evaluated the efficiency of our proposed approach using the iFogSim simulator. The performance analysis shows that the proposed algorithm reduced cost, execution time, networking, and Cloud energy consumption compared to Cloud-only and edge-ward placement approaches
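As an informal illustration of the greedy placement idea, the sketch below assigns each workflow activity to the feasible node with the lowest weighted cost over latency, price, energy and network usage; the node model, weights and cost terms are assumptions and do not reproduce the thesis' actual algorithm or its iFogSim evaluation.

```python
# Rough sketch of a greedy Edge/Cloud placement heuristic for IoT workflow
# activities. All names, weights and the cost model are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    capacity_mi: float            # remaining processing capacity (million instructions)
    latency_ms: float             # latency from the data source
    cost_per_mi: float            # monetary cost per million instructions
    energy_per_mi: float          # energy per million instructions
    assigned: list = field(default_factory=list)

def score(node: Node, activity_mi: float, w=(0.4, 0.3, 0.2, 0.1)) -> float:
    """Weighted cost of running an activity of `activity_mi` MI on `node`."""
    w_lat, w_cost, w_energy, w_net = w
    network_penalty = 1.0 if node.name.startswith("cloud") else 0.0  # assumed penalty
    return (w_lat * node.latency_ms
            + w_cost * node.cost_per_mi * activity_mi
            + w_energy * node.energy_per_mi * activity_mi
            + w_net * network_penalty)

def place(activities, nodes):
    """Greedily place each activity (name, MI) on the cheapest feasible node."""
    for name, mi in activities:
        feasible = [n for n in nodes if n.capacity_mi >= mi]
        if not feasible:
            raise RuntimeError(f"no node can host activity {name}")
        best = min(feasible, key=lambda n: score(n, mi))
        best.assigned.append(name)
        best.capacity_mi -= mi
    return {n.name: n.assigned for n in nodes}
```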