Guidance Notes for Cloud Research Users
There is a rapidly growing range of research activities that involve outsourcing computing and storage resources to public Cloud Service Providers (CSPs), who provide managed and scalable resources virtualised as a single service. For example, Amazon Elastic Compute Cloud (EC2) and Simple Storage Service (S3) are two widely adopted public cloud solutions, which aim at providing pooled computing and storage services and charge users according to their weighted resource usage. Other examples include the use of Google App Engine and Microsoft Azure as development platforms for research applications. Despite a great deal of activity and publication on cloud computing, the term itself and the technologies that underpin it are still confusing to many. This note, one of the deliverables of the TeciRes project, provides guidance to researchers who are potential end users of public CSPs for research activities. The note provides researchers with information on:
•The differences between cloud computing and current research computing models, and the relationship between them
•The considerations that have to be taken into account before moving to cloud-aided research
•The issues associated with cloud computing for research that are currently being investigated
•Tips and tricks when using cloud computing
Readers who are interested in provisioning cloud capabilities for research should also refer to our guidance notes for cloud infrastructure service providers. This guidance note focuses on technical aspects only. Readers who are interested in non-technical guidance should refer to the briefing paper produced by the “using cloud computing for research” project
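The pay-per-use charging model described above can be made concrete with a minimal sketch. All rates and usage figures below are hypothetical, chosen only to illustrate how a bill is computed from weighted resource usage; they are not actual EC2/S3 prices.

```python
# Minimal sketch of pay-per-use billing as described for EC2/S3.
# All rates and usage figures are hypothetical, for illustration only.

RATES = {
    "compute_hours": 0.10,     # $ per instance-hour (assumed rate)
    "storage_gb_month": 0.02,  # $ per GB-month of storage (assumed rate)
    "egress_gb": 0.09,         # $ per GB transferred out (assumed rate)
}

def monthly_cost(usage: dict) -> float:
    """Weighted resource usage -> total monthly charge."""
    return sum(RATES[kind] * amount for kind, amount in usage.items())

bill = monthly_cost({"compute_hours": 720, "storage_gb_month": 500, "egress_gb": 100})
print(f"${bill:.2f}")  # 720*0.10 + 500*0.02 + 100*0.09 -> $91.00
```

The point of the model is that cost tracks consumption: a researcher who releases resources after an experiment stops paying for them, in contrast to an owned cluster.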
Cloudbus Toolkit for Market-Oriented Cloud Computing
This keynote paper: (1) presents the 21st century vision of computing and
identifies various IT paradigms promising to deliver computing as a utility;
(2) defines the architecture for creating market-oriented Clouds and computing
atmosphere by leveraging technologies such as virtual machines; (3) provides
thoughts on market-based resource management strategies that encompass both
customer-driven service management and computational risk management to sustain
SLA-oriented resource allocation; (4) presents the work carried out as part of
our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a
Service software system containing SDK (Software Development Kit) for
construction of Cloud applications and deployment on private or public Clouds,
in addition to supporting market-oriented resource management; (ii)
internetworking of Clouds for dynamic creation of federated computing
environments for scaling of elastic applications; (iii) creation of 3rd party
Cloud brokering services for building content delivery networks and e-Science
applications and their deployment on capabilities of IaaS providers such as
Amazon along with Grid mashups; (iv) CloudSim supporting modelling and
simulation of Clouds for performance studies; (v) Energy Efficient Resource
Allocation Mechanisms and Techniques for creation and management of Green
Clouds; and (vi) pathways for future research. Comment: 21 pages, 6 figures, 2 tables, conference paper.
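The SLA-oriented, market-based allocation motivated above can be sketched as a toy selection rule: among provider offers, pick the cheapest one that still satisfies the customer's SLA. The offers and the selection rule below are illustrative assumptions, not the Cloudbus algorithms.

```python
# Toy sketch of SLA-oriented, market-based resource allocation.
# Providers, prices, and availability figures are hypothetical.

offers = [
    {"provider": "A", "price_per_hour": 0.12, "availability": 0.999},
    {"provider": "B", "price_per_hour": 0.08, "availability": 0.990},
    {"provider": "C", "price_per_hour": 0.10, "availability": 0.995},
]

def cheapest_meeting_sla(offers, min_availability):
    """Pick the lowest-priced offer whose availability satisfies the SLA."""
    eligible = [o for o in offers if o["availability"] >= min_availability]
    return min(eligible, key=lambda o: o["price_per_hour"]) if eligible else None

chosen = cheapest_meeting_sla(offers, min_availability=0.995)
print(chosen["provider"])  # "C": B is cheaper but misses the SLA
```

A real broker would also weigh computational risk (the chance of violating the SLA and paying a penalty) rather than treating availability as a hard filter.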
Towards distributed architecture for collaborative cloud services in community networks
Internet and communication technologies have lowered the costs for communities to collaborate, leading to new services like user-generated content and social computing, and through collaboration, collectively built infrastructures like community networks have also emerged. Community networks get formed when individuals and local organisations from a geographic area team up to create and run a community-owned IP network to satisfy the community’s demand for ICT, such as facilitating Internet access and providing services of local interest.
The consolidation of today’s cloud technologies now offers the possibility of collectively built community clouds, building upon user-generated content and user-provided networks towards an ecosystem of cloud services. To address the limitations and enhance the utility of community networks, we propose a collaborative distributed architecture for building a community cloud system that employs resources contributed by the members of the community network for provisioning infrastructure and software services. Such an architecture needs to be tailored to the specific social, economic and technical characteristics of community networks for community clouds to be successful and sustainable. Through real deployments of clouds in community networks and evaluation of application performance, we show that community clouds are feasible. Our results may encourage collaborative innovative cloud-based services made possible with the resources of a community. Peer reviewed. Postprint (author’s final draft).
High-Performance Cloud Computing: A View of Scientific Applications
Scientific computing often requires the availability of a massive number of computers for performing large-scale experiments. Traditionally, these needs have been addressed by using high-performance computing solutions and installed facilities such as clusters and supercomputers, which are difficult to set up, maintain, and operate. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, and applications can be dynamically provisioned (and integrated within the existing infrastructure) on a pay-per-use basis, and released when they are no longer needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensures the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers to users the desired QoS. Its flexible and service-based infrastructure supports multiple programming paradigms that let Aneka address a variety of different scenarios: from finance applications to computational science. As examples of scientific computing in the Cloud, we present a preliminary case study on using Aneka for the classification of gene expression data and the execution of an fMRI brain imaging workflow. Comment: 13 pages, 9 figures, conference paper.
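The provision-use-release cycle described in the abstract can be sketched as a simple scaling rule that sizes the pool of nodes to the task queue and releases nodes when the queue drains. The `Provisioner` class and its thresholds are hypothetical illustrations, not Aneka's actual API.

```python
# Sketch of dynamic provisioning: acquire nodes for pending work,
# release them when no longer needed. Names and thresholds are assumed.

class Provisioner:
    def __init__(self, max_nodes: int):
        self.max_nodes = max_nodes  # cap imposed by budget or quota
        self.active = 0

    def scale(self, pending_tasks: int, tasks_per_node: int = 10) -> int:
        """Provision enough nodes for the queue; release idle ones."""
        needed = -(-pending_tasks // tasks_per_node)  # ceiling division
        self.active = min(needed, self.max_nodes)
        return self.active

p = Provisioner(max_nodes=8)
print(p.scale(45))   # 5 nodes for 45 pending tasks
print(p.scale(100))  # capped at 8 nodes
print(p.scale(0))    # queue empty: all nodes released, billing stops
```

Under pay-per-use pricing, the release step is what distinguishes this model from a fixed cluster: capacity that is not needed costs nothing.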
A Service based Development Environment on Web 2.0 Platforms
Governments are investing in IT adoption and promoting the so-called e-economies as a way to improve competitive advantage. One of the main government actions is to provide internet access to most of the population, both people and organisations. The Internet provides the support required for connecting organisations, people and geographically distributed development teams. Software development is tightly related to the availability of the tools and platforms needed for product development, and the Internet is becoming the most widely used platform. Software forges such as SourceForge provide an integrated tools environment, gathering a set of tools suited to each development at low cost. In this paper we propose an innovative approach to software development based on Web 2.0, services and a method engineering approach. This approach represents one of the possible usages of the internet of the future
Transparency about net neutrality: A translation of the new European rules into a multi-stakeholder model
The new European framework directive contains a number of policy objectives in the area of net neutrality. In support of these objectives, the universal service directive includes a transparency obligation for ISPs. This paper proposes a multi-stakeholder model for the implementation of this transparency obligation. The model is a multi-stakeholder model in the sense that it treats the content and form of the transparent information in close connection with the parties involved in providing the information and the processes in which they take part. Another crucial property of the model is that it distinguishes between technical and user-friendly information. This distinction makes it possible to limit the obligation on ISPs to the information that they are in the best position to provide: the technical information on the traffic management measures that they apply, e.g., which traffic streams are subject to special treatment? Which measures are applied, and when? The public availability of this technical information creates the opportunity for the other parties in the model to step in and contribute to the formulation of the user-friendly information for end users: which applications and services receive special treatment? When is their effect noticeable? It is expected that the involvement of other parties will lead to multiple, complementary routes for the formulation of the user-friendly information. Thus, the user-friendly information emerges in ways driven by market players and stakeholders that would be difficult to design and lay down in advance in the transparency obligation.
Keywords: net neutrality, transparency, traffic management
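The two information layers the model distinguishes can be sketched as a small transformation: the ISP publishes technical traffic-management facts, and third parties map them into user-friendly descriptions. The field names, traffic classes, and the application mapping below are illustrative assumptions, not part of the directive or the paper's model.

```python
# Sketch of the model's two layers: ISP-published technical information,
# and user-friendly information derived from it by other parties.
# All field names and example data are hypothetical.

technical = [  # published by the ISP
    {"traffic": "UDP port 5060", "measure": "prioritised", "when": "always"},
    {"traffic": "P2P protocols", "measure": "throttled to 1 Mbit/s", "when": "18:00-24:00"},
]

# Mapping from technical traffic classes to applications, maintained
# by third parties (comparison sites, consumer organisations, etc.).
APP_MAP = {"UDP port 5060": "VoIP calls", "P2P protocols": "file sharing"}

user_friendly = [
    f"{APP_MAP[t['traffic']]}: {t['measure']} ({t['when']})" for t in technical
]
for line in user_friendly:
    print(line)  # e.g. "VoIP calls: prioritised (always)"
```

The division of labour matches the paper's argument: only the ISP can supply the technical layer accurately, while the translation into application-level terms can come from multiple competing parties.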