Academic Cloud Computing Research: Five Pitfalls and Five Opportunities
This discussion paper argues that five fundamental pitfalls can restrict
academics from conducting cloud computing research at the infrastructure
level, which is where the vast majority of academic research currently lies.
Instead, academics should conduct higher-risk research in order to gain
understanding and open up entirely new areas.
We call for a renewed mindset and argue that academic research should focus
less upon physical infrastructure and embrace the abstractions provided by
clouds through five opportunities: user driven research, new programming
models, PaaS environments, and improved tools to support elasticity and
large-scale debugging. The objective of this paper is to foster discussion, and
to define a roadmap forward, which will allow academia to make longer-term
impacts to the cloud computing community.
Comment: Accepted and presented at the 6th USENIX Workshop on Hot Topics in
Cloud Computing (HotCloud'14)
Resource provisioning in Science Clouds: Requirements and challenges
Cloud computing has permeated the information technology industry in the last
few years, and it is now emerging in scientific environments. Science user
communities demand a broad range of computing resources to satisfy the needs
of high-performance applications, spanning local clusters, high-performance
computing systems, and computing grids. Different computational models impose
different workloads, and the cloud is already considered a promising paradigm.
The scheduling and allocation of resources is always a challenging matter in
any form of computation, and clouds are no exception. Science applications
have unique features that differentiate their workloads; hence, their
requirements have to be taken into consideration when building a Science
Cloud. This paper discusses the main scheduling and resource allocation
challenges for any Infrastructure as a Service provider supporting scientific
applications.
Policy-based SLA storage management model for distributed data storage services
There is high demand for storage-related services supporting scientists in their research activities. Those services are expected to provide not only capacity but also features allowing for more flexible and cost-efficient usage. Such features include easy multiplatform data access, long-term data retention, and support for differentiating the performance and cost of SLA-restricted data access. The paper presents a policy-based SLA storage management model for distributed data storage services. The model allows for automated management of distributed data aimed at QoS provisioning with no strict resource reservation. The problem of providing users with the required QoS is complex, and the model therefore implements a heuristic approach to solving it. The corresponding system architecture, metrics, and methods for SLA-focused storage management are developed and tested in a real, nationwide environment.
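The abstract above describes heuristic, policy-driven placement without strict resource reservation. As a rough illustration only (the node names, fields, and selection rule below are assumptions, not the paper's algorithm), a single placement step might greedily pick the lowest-latency node that satisfies the SLA target:

```python
# Hypothetical sketch of one heuristic placement step (NOT the paper's
# algorithm): choose the node with the lowest estimated latency among
# those meeting the request's SLA, falling back to best effort.
def place(request, nodes):
    """request: {'sla_latency_ms': float};
    nodes: list of {'name': str, 'est_latency_ms': float}."""
    ok = [n for n in nodes if n["est_latency_ms"] <= request["sla_latency_ms"]]
    pool = ok if ok else nodes  # best effort when no node meets the SLA
    return min(pool, key=lambda n: n["est_latency_ms"])

nodes = [
    {"name": "fast-ssd", "est_latency_ms": 2.0},
    {"name": "bulk-hdd", "est_latency_ms": 12.0},
]
print(place({"sla_latency_ms": 5.0}, nodes)["name"])  # prints: fast-ssd
```

Because nothing is reserved, such a heuristic re-evaluates node state at each request rather than pinning capacity up front, which matches the "no strict resource reservation" goal stated in the abstract.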
A Systematic Mapping Study of Empirical Studies on Software Cloud Testing Methods
Context: Software has become more complicated, dynamic, and asynchronous than ever, making testing more challenging. With the increasing interest in the development of cloud computing, and increasing demand for cloud-based services, it has become essential to systematically review the research in the area of software testing in the context of cloud environments. Objective: The purpose of this systematic mapping study is to provide an overview of the empirical research in the area of software cloud-based testing, in order to build a classification scheme. We investigate functional and non-functional testing methods, the application of these methods, and the purpose of testing using these methods. Method: We searched for electronically available papers in order to find relevant literature and to extract and analyze data about the methods used. Result: We identified 69 primary studies reported in 75 research papers published in academic journals, conferences, and edited books. Conclusion: We found that only a minority of the studies combine rigorous statistical analysis with quantitative results. The majority of the considered studies present early results, using a single experiment to evaluate their proposed solution.
Lightweight Multilingual Software Analysis
Developer preferences, language capabilities and the persistence of older
languages contribute to the trend that large software codebases are often
multilingual, that is, written in more than one computer language. While
developers can leverage monolingual software development tools to build
software components, companies are faced with the problem of managing the
resultant large, multilingual codebases to address issues with security,
efficiency, and quality metrics. The key challenge is to address the opaque
nature of the language interoperability interface: one language calling
procedures in a second (which may call a third, or even back to the first),
resulting in a potentially tangled, inefficient and insecure codebase. An
architecture is proposed for lightweight static analysis of large multilingual
codebases: the MLSA architecture. Its modular and table-oriented structure
addresses the open-ended nature of multiple languages and language
interoperability APIs. We focus here as an application on the construction of
call-graphs that capture both inter-language and intra-language calls. The
algorithms for extracting multilingual call-graphs from codebases are
presented, and several examples of multilingual software engineering analysis
are discussed. The state of the implementation and testing of MLSA is
presented, and the implications for future work are discussed.
Comment: 15 pages
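The abstract above describes a table-oriented structure for multilingual call graphs that captures both inter-language and intra-language calls. As a conceptual sketch only (the call facts and function names below are invented for illustration; this is not the MLSA implementation), such a table and its classification could look like:

```python
# Conceptual sketch (not the MLSA implementation): a table of extracted
# call facts, each recording caller, callee, and their languages.
from collections import defaultdict

# Hypothetical call facts: (caller, caller_lang, callee, callee_lang)
CALL_TABLE = [
    ("app.main",     "python", "libfast.sort", "c"),
    ("libfast.sort", "c",      "libfast.cmp",  "c"),
    ("app.main",     "python", "app.report",   "python"),
    ("libfast.cmp",  "c",      "app.on_error", "python"),  # callback to the first language
]

def build_call_graph(table):
    """Build adjacency lists and split edges into inter-language
    and intra-language calls."""
    graph = defaultdict(list)
    inter, intra = [], []
    for caller, clang, callee, klang in table:
        graph[caller].append(callee)
        (inter if clang != klang else intra).append((caller, callee))
    return graph, inter, intra

graph, inter, intra = build_call_graph(CALL_TABLE)
print(len(inter), len(intra))  # prints: 2 2
```

A flat table like this is easy to extend when a new language or interoperability API is added (one more row schema, no graph rewrite), which is consistent with the open-ended, modular design the abstract emphasizes.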
Characterizing Issue Management in Runtime Systems
Modern programming languages like Java require runtime systems to support the
implementation and deployment of software applications in diverse computing
platforms and operating systems. These runtime systems are normally developed
in GitHub-hosted repositories based on close collaboration between large
software companies (e.g., IBM, Microsoft) and OSS developers. However, despite
their popularity and broad usage, to the best of our knowledge these
repositories have never been studied. We report an empirical study of around
118K issues from 34 runtime system repositories on GitHub. We found that
issues concerning enhancements, test failures, and bugs are the most
frequently posted on runtime system repositories, and that solution-related
discussion dominates issue threads. 82.69% of issues in the runtime system
repositories have been resolved and 0.69% of issues are ignored; the median
issue close rate, ignore rate, and addressing time in these repositories are
76.1%, 2.2%, and 58 days respectively. 82.65% of issues are tagged with
labels, while only 28.30% of issues have designated assignees and 90.65% of
issues contain at least one comment; the presence of these features in an
issue report can affect issue closure. Based on the findings, we offer six
recommendations.
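The abstract above reports repository-level metrics such as close rate, ignore rate, and median addressing time. As a hedged sketch of how such metrics could be computed (the issue records and the "ignored" criterion below are assumptions for illustration, not the study's dataset or exact definitions):

```python
# Sketch: computing issue metrics like those named in the study.
# The records and the ignore threshold are hypothetical.
from statistics import median

issues = [
    {"state": "closed", "comments": 3, "days_open": 10},
    {"state": "closed", "comments": 0, "days_open": 90},
    {"state": "closed", "comments": 5, "days_open": 58},
    {"state": "open",   "comments": 0, "days_open": 400},  # stale, no discussion
    {"state": "open",   "comments": 2, "days_open": 30},
]

def issue_metrics(issues, ignore_after_days=365):
    """Return (close rate %, ignore rate %, median days to close).
    'Ignored' here means open, uncommented, and stale -- an assumption."""
    closed = [i for i in issues if i["state"] == "closed"]
    ignored = [i for i in issues
               if i["state"] == "open"
               and i["comments"] == 0
               and i["days_open"] > ignore_after_days]
    close_rate = 100.0 * len(closed) / len(issues)
    ignore_rate = 100.0 * len(ignored) / len(issues)
    median_days = median(i["days_open"] for i in closed)
    return close_rate, ignore_rate, median_days

print(issue_metrics(issues))  # prints: (60.0, 20.0, 58)
```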
Characterizing the impact of network latency on cloud-based applications’ performance
Businesses and individuals run increasing numbers of applications in the cloud. The performance of an application running in the cloud depends on data center conditions and on the resources committed to that application. Small network delays may lead to significant performance degradation, which affects both the user's cost and the service provider's resource usage, power consumption, and data center efficiency. In this work, we quantify the effect of network latency on several typical cloud workloads, varying in complexity and use cases. Our results show that different applications are affected by network latency to differing degrees. These insights into the effect of network latency on different applications have ramifications for workload placement and physical host sharing when trying to reach performance targets.
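The abstract above notes that small network delays can cause disproportionate degradation. A minimal model (an assumption for illustration, not the paper's methodology) shows why: for a synchronous client, every operation pays one round trip, so throughput falls roughly as the inverse of RTT plus service time:

```python
# Illustrative closed-loop model (not the paper's experiments): one
# synchronous client, each operation costs one network RTT plus a
# fixed service time, so throughput ~ 1 / (rtt + service).
def throughput_ops_per_sec(rtt_ms, service_ms=1.0):
    """Operations per second for a single synchronous client."""
    return 1000.0 / (rtt_ms + service_ms)

for rtt in (0.1, 1.0, 10.0):
    print(f"RTT {rtt:5.1f} ms -> {throughput_ops_per_sec(rtt):7.1f} ops/s")
```

Under this toy model, moving from 0.1 ms to 10 ms RTT costs roughly a 10x throughput drop for chatty workloads, while batch-style workloads that amortize round trips would be far less sensitive, consistent with the abstract's finding that applications differ in latency sensitivity.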
Business Intelligence for Small and Middle-Sized Enterprises
Data warehouses are the core of decision support systems, which nowadays are
used by all kinds of enterprises around the world. Although many studies have
been conducted on the need for decision support systems (DSSs) in small
businesses, most of them adopt existing solutions and approaches that are
appropriate for large-scale enterprises but inadequate for small and
middle-sized enterprises. Small enterprises require cheap, lightweight
architectures and tools (hardware and software) providing online data
analysis. In order to ensure these features, we review web-based business
intelligence approaches. For real-time analysis, the traditional OLAP
architecture is cumbersome and storage-costly; therefore, we also review
in-memory processing. Consequently, this paper discusses the existing
approaches and tools working in main memory and/or with web interfaces
(including freeware tools), relevant for small and middle-sized enterprises
in decision making.