Towards Autonomic Service Provisioning Systems
This paper discusses our experience in building SPIRE, an autonomic system
for service provision. The architecture consists of a set of hosted Web
Services subject to QoS constraints, and a certain number of servers used to
run session-based traffic. Customers pay to have their jobs run, but in turn
require certain quality guarantees: different SLAs specify charges for running
jobs and penalties for failing to meet promised performance metrics. The system
is driven by a utility function that aims to optimize the average earned revenue
per unit time. Demand and performance statistics are
average earned revenue per unit time. Demand and performance statistics are
collected, while traffic parameters are estimated in order to make dynamic
decisions about server allocation and admission control. Several utility
functions are introduced, and experiments testing their performance are
discussed. Results show that revenues can be dramatically
improved by imposing suitable conditions for accepting incoming traffic; the
proposed system performs well under different traffic settings, and it
successfully adapts to changes in the operating environment.
Comment: 11 pages, 9 figures, http://www.wipo.int/pctdb/en/wo.jsp?WO=201002636
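To make the revenue objective concrete, here is a minimal Python sketch of a utility-driven admission rule. The SLA class, the charge and penalty values, and the decision rule are illustrative assumptions, not the SPIRE implementation:

from dataclasses import dataclass

@dataclass
class SLA:
    # Illustrative SLA: a flat charge per job that meets its promised
    # performance metric and a penalty per job that misses it.
    charge: float
    penalty: float

def expected_revenue_rate(arrival_rate: float, p_meet: float, sla: SLA) -> float:
    # Average earned revenue per unit time for one traffic class: accepted jobs
    # arrive at `arrival_rate`; a fraction `p_meet` of them meet the SLA.
    return arrival_rate * (p_meet * sla.charge - (1.0 - p_meet) * sla.penalty)

def admit(sla: SLA, p_meet_if_admitted: float) -> bool:
    # Toy admission-control rule: accept an incoming session only if its
    # expected marginal contribution to the utility is non-negative.
    return p_meet_if_admitted * sla.charge >= (1.0 - p_meet_if_admitted) * sla.penalty

# With a charge of 5 and a penalty of 20, admission pays off only while the
# estimated probability of meeting the SLA stays at or above 0.8.
gold = SLA(charge=5.0, penalty=20.0)
print(admit(gold, p_meet_if_admitted=0.85))                            # True
print(expected_revenue_rate(arrival_rate=2.0, p_meet=0.85, sla=gold))  # 2.5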
Firms' contribution to open source software and the dominant skilled user
Free/libre or open-source software (FLOSS) is nowadays produced not only by individual benevolent developers but, in a growing proportion, by firms that hire programmers for their own open-source development objectives or to contribute to open-source projects within dedicated communities. A recent literature has focused on the business models that explain how and why firms may draw benefits from such involvement and the connected activities. These can be considered the building blocks of a new modus operandi for the industry, built on an alternative approach to intellectual property management. Its prospects will depend both on firms' willingness to rally to it and on its ability to compete with the traditional “proprietary” approach. In practice, firms' involvement in FLOSS, while growing, varies considerably depending on the nature of the products and the characteristics of the markets. The aim of this paper is to emphasize that, besides factors such as the importance of software as a core competence of the firm, the role of users in the related markets - and more precisely their level of skills - may provide a major explanation of this diversity. We introduce the concept of the dominant skilled user and set up a theoretical model to better understand how it may condition the nature and outcome of the competition between a FLOSS firm and a proprietary firm. We discuss these results in the light of empirical stylized facts drawn from recent trends in the software industry.
Keywords: Software; Open Source; Intellectual Property; Competition; Users
Cloud-scale VM Deflation for Running Interactive Applications On Transient Servers
Transient computing has become popular in public cloud environments for
running delay-insensitive batch and data processing applications at low cost.
Since transient cloud servers can be revoked at any time by the cloud provider,
they are considered unsuitable for running interactive applications such as web
services. In this paper, we present VM deflation as an alternative mechanism to
server preemption for reclaiming resources from transient cloud servers under
resource pressure. Using real traces from top-tier cloud providers, we show the
feasibility of using VM deflation as a resource reclamation mechanism for
interactive applications in public clouds. We show how current hypervisor
mechanisms can be used to implement VM deflation and present cluster deflation
policies for resource management of transient and on-demand cloud VMs.
Experimental evaluation of our deflation system on a Linux cluster shows that
microservice-based applications can be deflated by up to 50% with negligible
performance overhead. Our cluster-level deflation policies allow overcommitment
levels as high as 50%, with less than a 1% decrease in application
throughput, and can enable cloud platforms to increase revenue by 30%.
Comment: To appear at ACM HPDC 202
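A minimal Python sketch of the deflation idea: under resource pressure, transient VMs are shrunk proportionally down to a floor instead of being revoked. The VM class, the per-VM floors, and the proportional policy are illustrative assumptions, not the paper's exact mechanisms or policies:

from dataclasses import dataclass

@dataclass
class VM:
    name: str
    allocated_cores: float   # cores the VM currently holds
    min_cores: float         # floor below which this VM is not deflated

def deflate_cluster(vms, cores_needed):
    # Proportionally deflate transient VMs to reclaim `cores_needed` cores
    # instead of preempting them; returns the cores actually reclaimed
    # (less than requested if every VM is already at its floor).
    reclaimable = sum(vm.allocated_cores - vm.min_cores for vm in vms)
    if reclaimable <= 0:
        return 0.0
    fraction = min(1.0, cores_needed / reclaimable)
    reclaimed = 0.0
    for vm in vms:
        cut = (vm.allocated_cores - vm.min_cores) * fraction
        vm.allocated_cores -= cut   # in practice: hypervisor resource hot-(un)plug or cgroup limits
        reclaimed += cut
    return reclaimed

# Reclaim 4 cores from two transient VMs that may each shrink by up to 50%.
vms = [VM("web-1", 8.0, 4.0), VM("web-2", 8.0, 4.0)]
print(deflate_cluster(vms, 4.0))                       # 4.0
print([(vm.name, vm.allocated_cores) for vm in vms])   # each deflated to 6.0 cores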
Cloud Storage and Bioinformatics in a private cloud deployment: Lessons for Data Intensive research
This paper describes service portability for a private cloud deployment, including a detailed case study of Cloud Storage and bioinformatics services developed as part of the Cloud Computing Adoption Framework (CCAF). Our Cloud Storage design and deployment is based on Storage Area Network (SAN) technologies; we describe its functionalities, technical implementation, architecture and user support. Experiments for data services (backup automation, data recovery and data migration) are performed, and the results confirm that backup automation completes swiftly and is reliable enough for data-intensive research. The data recovery results confirm that execution time is proportional to the quantity of recovered data, but the failure rate increases exponentially. The data migration results confirm that execution time is proportional to the disk volume of migrated data, but again the failure rate increases exponentially. In addition, the benefits of CCAF are illustrated using several bioinformatics examples such as tumour modelling, brain imaging, insulin molecules and simulations for medical training. The Cloud Storage solution described here offers cost reduction, time savings and user friendliness.
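The reported scaling behaviour (execution time roughly proportional to the volume of recovered or migrated data, failure rate growing exponentially with it) can be written down as a simple model. The coefficients in this Python sketch are placeholders for illustration, not values measured in the paper:

import math

def recovery_time_hours(data_gb, hours_per_gb=0.01):
    # Execution time grows roughly linearly with the volume of recovered data.
    return hours_per_gb * data_gb

def failure_rate(data_gb, base=0.001, growth=0.005):
    # Failure rate grows exponentially with data volume, capped at 1.0;
    # `base` and `growth` are illustrative constants only.
    return min(1.0, base * math.exp(growth * data_gb))

for gb in (100, 500, 1000):
    print(gb, round(recovery_time_hours(gb), 2), round(failure_rate(gb), 4))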
A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing
Data Grids have been adopted as the platform for scientific communities that
need to share, access, transport, process and manage large data collections
distributed worldwide. They combine high-end computing technologies with
high-performance networking and wide-area storage management techniques. In
this paper, we discuss the key concepts behind Data Grids and compare them with
other data sharing and distribution paradigms such as content delivery
networks, peer-to-peer networks and distributed databases. We then provide
comprehensive taxonomies that cover various aspects of architecture, data
transportation, data replication and resource allocation and scheduling.
Finally, we map the proposed taxonomy to various Data Grid systems not only to
validate the taxonomy but also to identify areas for future exploration.
Through this taxonomy, we aim to categorise existing systems to better
understand their goals and their methodology. This would help evaluate their
applicability for solving similar problems. This taxonomy also provides a "gap
analysis" of this area through which researchers can potentially identify new
issues for investigation. Finally, we hope that the proposed taxonomy and
mapping also helps to provide an easy way for new practitioners to understand
this complex area of research.
Comment: 46 pages, 16 figures, Technical Report
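The mapping-and-gap-analysis step can be pictured as filling in a matrix of systems against taxonomy categories, where uncovered cells point at unexplored combinations. The dimensions, categories and system names in this Python sketch are hypothetical placeholders, not the paper's actual taxonomy or mapping:

# Hypothetical taxonomy dimensions and a partial mapping of systems onto them;
# (dimension, category) pairs covered by no system expose "gaps" worth studying.
TAXONOMY = {
    "architecture": {"hierarchical", "federated", "hybrid"},
    "data_replication": {"synchronous", "asynchronous"},
    "scheduling": {"data-aware", "compute-only"},
}

SYSTEMS = {
    "GridA": {"architecture": "hierarchical", "data_replication": "asynchronous",
              "scheduling": "data-aware"},
    "GridB": {"architecture": "federated", "scheduling": "compute-only"},
}

def gap_analysis(taxonomy, mapping):
    # Return the (dimension, category) pairs not covered by any mapped system.
    covered = {(dim, cat) for props in mapping.values() for dim, cat in props.items()}
    return sorted((dim, cat) for dim, cats in taxonomy.items()
                  for cat in cats if (dim, cat) not in covered)

print(gap_analysis(TAXONOMY, SYSTEMS))
# [('architecture', 'hybrid'), ('data_replication', 'synchronous')]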