A Process Framework for Managing Quality of Service in Private Cloud
As information systems leaders tap into the global market of cloud computing-based services, they struggle to maintain consistent application performance due to the lack of a process framework for managing quality of service (QoS) in the cloud. Guided by disruptive innovation theory, the purpose of this case study was to identify a process framework for meeting the QoS requirements of private cloud service users. Private cloud implementation was explored by selecting an organization in California through purposeful sampling. Information was gathered by interviewing 23 information technology (IT) professionals, a mix of frontline engineers, managers, and leaders involved in the implementation of the private cloud. Another source of data was documents such as standard operating procedures, policies, and guidelines related to the private cloud implementation. Interview transcripts and documents were coded and sequentially analyzed. Three prominent themes emerged from the analysis: (a) end-user expectations, (b) application architecture, and (c) trending analysis. The findings of this study may help IT leaders manage QoS effectively in cloud infrastructure and deliver reliable application performance, which in turn may increase the customer base and profitability of organizations. This study may contribute to positive social change, as information systems managers and workers can learn and apply the process framework to deliver stable and reliable cloud-hosted computer applications.
Academic Cloud ERP Quality Assessment Model
In the past few decades, educational institutions have been using conventional academic ERP systems to integrate and optimize their business processes. In this delivery model, each institution is responsible for its own data, installation, and maintenance. For some institutions, this not only wastes resources but also creates management and financial problems. Cloud-based Academic ERP, a SaaS-based ERP system, has begun to emerge as a solution through its virtualization technology. It allows institutions to use only the ERP resources they need, without any specific installation, integration, or maintenance. As Cloud ERP implementations increase, the question arises of how to evaluate such systems. Current evaluation approaches assess either only the cloud computing aspects or only the software quality aspects. This paper proposes an assessment model for Cloud ERP systems that considers both software quality characteristics and cloud computing attributes to help strategic decision makers evaluate academic Cloud ERP systems.
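The abstract does not spell out the model's criteria or weighting scheme; a minimal sketch of the weighted-sum idea, with criteria names and weights that are purely illustrative assumptions, might look like this in Python:

```python
# Illustrative sketch only: a weighted-sum assessment combining software
# quality characteristics with cloud computing attributes. The criteria
# names and weights below are assumptions, not the paper's actual model.

QUALITY_WEIGHTS = {               # software quality side (ISO/IEC 25010-style)
    "functional_suitability": 0.15,
    "reliability": 0.15,
    "usability": 0.10,
    "security": 0.15,
}
CLOUD_WEIGHTS = {                 # cloud computing side
    "elasticity": 0.15,
    "availability": 0.15,
    "multi_tenancy_isolation": 0.15,
}

def assess(scores: dict[str, float]) -> float:
    """Return a 0-100 weighted score for a Cloud ERP candidate system."""
    weights = {**QUALITY_WEIGHTS, **CLOUD_WEIGHTS}
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    missing = weights.keys() - scores.keys()
    if missing:
        raise ValueError(f"unscored criteria: {sorted(missing)}")
    return sum(weights[c] * scores[c] for c in weights)

if __name__ == "__main__":
    candidate = {c: 80.0 for c in {**QUALITY_WEIGHTS, **CLOUD_WEIGHTS}}
    candidate["elasticity"] = 95.0    # e.g., a strong auto-scaling story
    print(f"overall score: {assess(candidate):.1f}/100")
```

In practice, the weights would be elicited from the institution's decision makers (for instance via pairwise comparison), and each criterion would be scored from measurements or expert judgment.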
From Bare Metal to Virtual: Lessons Learned when a Supercomputing Institute Deploys its First Cloud
As the primary provider of research computing services at the University of Minnesota, the Minnesota Supercomputing Institute (MSI) has long been responsible for serving the needs of a user base numbering in the thousands. In recent years, MSI, like many other HPC centers, has observed a growing need for self-service, on-demand, data-intensive research, as well as the emergence of many new controlled-access datasets for research purposes. In light of this, MSI constructed a new on-premise cloud service, named Stratus, which is architected from the ground up to easily satisfy data-use agreements and to fill four gaps left by traditional HPC. The resulting OpenStack cloud, built from HPC-specific compute nodes and backed by Ceph storage, is designed to fully comply with the controls set forth by the NIH Genomic Data Sharing Policy.

Herein, we present twelve lessons learned during the ambitious sprint to take Stratus from inception into production in less than 18 months. Important, and often overlooked, components of this timeline included the development of new leadership roles, staff and user training, and user support documentation. Along the way, the lessons learned extended well beyond the technical challenges often associated with acquiring, configuring, and maintaining large-scale systems.

Comment: 8 pages, 5 figures, PEARC '18: Practice and Experience in Advanced Research Computing, July 22-26, 2018, Pittsburgh, PA, USA
A Deep Reinforcement Learning based Algorithm for Time and Cost Optimized Scaling of Serverless Applications
Serverless computing has gained strong traction in the cloud computing community in recent years. Among the many benefits of this novel computing model, the rapid auto-scaling of user applications takes prominence. However, ad hoc scaling of user deployments at the function level introduces many complications to serverless systems. One prevalent shortcoming is the cold-start delay: the added latency and request failures caused by the time consumed in dynamically creating new resources to suit function workloads. Maintaining idle resource pools to alleviate this issue often wastes resources from the cloud provider's perspective. Existing solutions to this limitation mostly focus on predicting and understanding function load levels in order to create the required resources proactively. Although these solutions improve function performance, making scaling decisions without an understanding of overall system characteristics often leads to sub-optimal usage of system resources. Further, the multi-tenant nature of serverless systems requires a scalable solution adaptable to multiple co-existing applications, a limitation of most current solutions. In this paper, we introduce a novel multi-agent Deep Reinforcement Learning based intelligent solution for both horizontal and vertical scaling of function resources, grounded in a comprehensive understanding of both function and system requirements. Our solution improves function performance by reducing cold starts, while also giving service providers the flexibility to optimize resource maintenance cost. Experiments conducted under varying workload scenarios show improvements of up to 23% in application latency and 34% in request failures, while also saving up to 45% in infrastructure cost for the service providers.

Comment: 15 pages, 22 figures
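The abstract does not detail the agents' state, action, or reward design; a minimal single-agent sketch (tabular Q-learning standing in for the paper's multi-agent deep RL, with illustrative state and reward shaping) conveys the basic control loop:

```python
# Hedged sketch of RL-driven function scaling. The state, action, and
# reward shaping here are illustrative assumptions, not the paper's design.
import random
from collections import defaultdict

ACTIONS = ["scale_out", "scale_in",    # horizontal: +/- function instances
           "scale_up", "scale_down",   # vertical: +/- cpu/memory per instance
           "hold"]                     # no-op

class ScalingAgent:
    """Tabular Q-learning over a discretized (load, cold_start_rate) state."""
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)    # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:         # explore occasionally
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

def reward(latency_ms, failed_requests, idle_instances):
    # Penalize slow or failed requests (cold-start symptoms) and idle
    # capacity (provider-side cost), mirroring the time/cost trade-off.
    return -(0.01 * latency_ms + 5.0 * failed_requests + 0.5 * idle_instances)
```

In the multi-agent setting the paper describes, each function (or co-existing tenant application) would presumably run its own agent, with deep networks replacing the Q-table.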
Cloud adoption and cyber security in public organizations: an empirical investigation on Norwegian municipalities
The public sector in Norway, particularly municipalities, is currently transforming through the adoption of cloud solutions. This multiple case study investigates cloud adoption and the security challenges that come along with it. The objective is to identify the security challenges that cloud solutions present and the techniques or strategies that can be used to mitigate them. A Systematic Literature Review (SLR) provided valuable insights into the prevalent challenges and associated mitigation techniques in cloud adoption. The thesis also employs a qualitative approach, gathering insight through Semi-Structured Interviews (SSIs) into informants' experiences with cloud adoption and its security challenges. The study's empirical data is based on interviews with six different Norwegian municipalities, providing a unique and broad perspective. The analysis of the empirical findings, combined with the literature, reveals several security challenges and mitigation techniques in adopting cloud solutions. These security challenges encompass organizational, environmental, legal, and technical aspects of cloud adoption in municipalities. Based on the findings, it is recommended that Norwegian municipalities act on these issues to ensure a more secure transition to cloud solutions.
Deadline-Aware Reservation-Based Scheduling
The ever-growing need to improve return on investment (ROI) for cluster infrastructure that processes data generated at a higher rate than ever before introduces new challenges for big-data processing frameworks. Highly complex mixed workloads arriving at modern clusters, along with a growing number of time-sensitive, critical production jobs, necessitate that cluster management systems evolve. Most big-data systems are required not only to guarantee that production jobs complete before their deadlines, but also to minimize the latency of best-effort jobs to increase ROI.

This research presents DARSS, a deadline-aware reservation-based scheduling system. DARSS addresses the above-stated problem by using a reservation-based approach to scheduling that supports the temporal requirements of production jobs while keeping the latency for best-effort jobs low. Fine-grained resource allocation enables DARSS to schedule more tasks than a coarser-grained approach would. Furthermore, DARSS schedules production jobs as close to their deadlines as possible. This scheduling policy allows the system to maximize the number of low-priority tasks that can be scheduled opportunistically. DARSS is a scalable system that can be integrated with YARN.

DARSS is evaluated on a simulated cluster of 300 nodes against a workload derived from Google Borg's trace, and compared with Microsoft's Rayon and YARN's built-in scheduler. DARSS achieves a better production job acceptance rate than both YARN and Rayon, and the experiments show that all of the production jobs accepted by DARSS complete before their deadlines. Furthermore, DARSS services more best-effort jobs than Rayon, and it achieves lower latency for best-effort jobs than Rayon.
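As a rough illustration of the scheduling policy described above (not DARSS itself), the sketch below reserves production jobs as late as their deadlines allow on a discretized capacity timeline, leaving earlier slots free for best-effort work; the capacity model and slot granularity are assumptions:

```python
# Hedged sketch of deadline-aware, reservation-based placement: production
# jobs are reserved as late as their deadlines permit, so earlier capacity
# stays available for opportunistic best-effort jobs.

CAPACITY = 100                        # resource units available per time slot

def reserve_latest(timeline, duration, demand, deadline):
    """Reserve `demand` units for `duration` consecutive slots, as late as
    possible while still finishing by `deadline`. Returns start slot or None."""
    for start in range(deadline - duration, -1, -1):   # latest-first scan
        window = timeline[start:start + duration]
        if all(used + demand <= CAPACITY for used in window):
            for t in range(start, start + duration):
                timeline[t] += demand
            return start
    return None                       # no feasible reservation: reject the job

if __name__ == "__main__":
    horizon = 24
    timeline = [0] * horizon          # units in use per slot
    jobs = [("prod-A", 4, 60, 12), ("prod-B", 6, 50, 20), ("prod-C", 3, 70, 8)]
    for name, duration, demand, deadline in jobs:
        start = reserve_latest(timeline, duration, demand, deadline)
        print(name, "->",
              f"slots {start}-{start + duration - 1}" if start is not None
              else "rejected")
    # Any capacity left under CAPACITY remains free for best-effort jobs.
```

Scanning latest-first implements the "as close to their deadlines as possible" policy, and a finer slot granularity corresponds to the fine-grained allocation the abstract credits for scheduling more tasks.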