
    Workload Analysis of Cloud Resources using Time Series and Machine Learning Prediction

    © 2019 IEEE. Most businesses nowadays have started using cloud platforms to host their software applications. A cloud platform is a shared resource that provides various services, such as software as a service (SaaS), infrastructure as a service (IaaS), or anything as a service (XaaS), required to develop and deploy a business application. These cloud services are provided as virtual machines (VMs) that handle end users' requirements. Cloud providers must ensure efficient resource-handling mechanisms across different time intervals to avoid wastage of resources. Auto-scaling mechanisms take care of using these resources appropriately while providing an excellent quality of service. Auto-scaling helps cloud service providers achieve the goal of supplying the required resources automatically. It uses methods that estimate the number of incoming requests and decide which resources to allocate or release based on the workload. The workload consists of the application programs running on a machine and, usually, the number of users connected to and communicating with those applications. Researchers have used various approaches to perform auto-scaling, a process that predicts the workload required to handle end-user requests and provides the required resources as virtual machines (VMs) without disruption. Along with receiving uninterrupted service, businesses pay only for the service they use, which has increased the popularity of cloud computing. Based on the identified workload, resources are provisioned. Resource provisioning is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, applications, and services), with resources released when no longer required. In this regard, the aim of this paper is to develop a framework that predicts the workload using deep learning and handles the provisioning of cloud resources dynamically. This framework would handle user requests efficiently and allocate the required virtual machines. As a result, an efficient dynamic method of provisioning cloud services would be implemented, supporting both cloud providers and users.
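
    To make the general workflow concrete, the sketch below illustrates the idea of forecasting the next interval's workload from a sliding window of past observations and mapping the forecast to a VM count. It is not the paper's actual model: the synthetic workload trace, the small scikit-learn MLP used as a stand-in for a deep learning forecaster, and the REQUESTS_PER_VM capacity constant are all assumptions introduced here for illustration.

```python
# Minimal illustrative sketch (not the paper's method): forecast the next
# interval's request workload from a window of past observations, then map
# the forecast to a VM count for auto-scaling.
import numpy as np
from sklearn.neural_network import MLPRegressor

WINDOW = 12             # number of past intervals used as input features
REQUESTS_PER_VM = 500   # hypothetical per-interval capacity of one VM

# Synthetic workload: a daily-like cycle plus noise (placeholder for real traces).
rng = np.random.default_rng(0)
t = np.arange(1000)
workload = 1500 + 800 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 100, t.size)

# Build supervised samples: a window of past values -> the next value.
X = np.array([workload[i:i + WINDOW] for i in range(len(workload) - WINDOW)])
y = workload[WINDOW:]

# Small neural-network regressor as a stand-in for a deep learning forecaster.
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(X[:-100], y[:-100])   # hold out the last 100 intervals

# Forecast the next interval and derive a provisioning decision.
next_load = model.predict(workload[-WINDOW:].reshape(1, -1))[0]
vms_needed = int(np.ceil(max(next_load, 0) / REQUESTS_PER_VM))
print(f"Forecast workload: {next_load:.0f} requests -> provision {vms_needed} VMs")
```

    In a real auto-scaler, the forecast would be refreshed every interval and the VM count passed to the cloud provider's scaling API instead of printed.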

    Harnessing Artificial Intelligence Capabilities Through Cloud Services: a Case Study of Inhibitors and Success Factors

    Industry and research have recognized the need to adopt and utilize artificial intelligence (AI) to automate and streamline business processes and gain competitive edges. However, developing and running AI algorithms requires a complex IT infrastructure, significant computing power, and sufficient IT expertise, making it unattainable for many organizations. Organizations attempting to build AI solutions in-house often opt to establish an AI center of excellence, accumulating huge costs and an extremely long time to value. Fortunately, this deterrent is eliminated by the availability of AI delivered through cloud computing services. The cloud service models Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) provide various AI services. IaaS delivers virtualized computing resources over the internet and supplies the raw computational power and specialized hardware for building and training AI algorithms. PaaS provides development tools and runtime environments that assist data scientists and developers in implementing code that brings out AI capabilities. Finally, SaaS offers off-the-shelf AI tools and pre-trained models provided to customers on a commercial basis. Due to the lack of customizability and control of pre-built AI solutions, this empirical investigation focuses solely on IaaS- and PaaS-related AI services. The rationale is associated with the complexity of developing, managing, and maintaining customized cloud infrastructures and AI solutions that meet a business's actual needs. By applying the Diffusion of Innovation (DOI) theory and the Critical Success Factor (CSF) method, this research explores and identifies the drivers and inhibitors of AI services adoption and the critical success factors for harnessing AI capabilities through cloud services. Based on a comprehensive review of the existing literature and a series of nine systematic interviews, this study reveals ten factors that drive, and 17 factors that inhibit, the adoption of AI developer tools and infrastructure services. To further aid practitioners and researchers in mitigating the challenges of harnessing AI capabilities, this study identifies four affinity groups of success factors: 1) organizational factors, 2) cloud management factors, 3) technical factors, and 4) the technology commercialization process. Within these categories, nine sub-affinity groups and 20 sets of CSFs are presented.