Survey and Analysis of Production Distributed Computing Infrastructures
This report has two objectives. First, we describe a set of the production
distributed infrastructures currently available, so that the reader has a basic
understanding of them. This includes explaining why each infrastructure was
created and made available and how it has succeeded and failed. The set is not
complete, but we believe it is representative.
Second, we describe the infrastructures in terms of their use, which is a
combination of how they were designed to be used and how users have found ways
to use them. Applications are often designed and created with specific
infrastructures in mind, with both an appreciation of the existing capabilities
provided by those infrastructures and an anticipation of their future
capabilities. Here, the infrastructures we discuss were often designed and
created with specific applications in mind, or at least specific types of
applications. The reader should understand how the interplay between the
infrastructure providers and the users leads to such usages, which we call
usage modalities. These usage modalities are really abstractions that exist
between the infrastructures and the applications; they influence the
infrastructures by representing the applications, and they influence the applications by representing the infrastructures.
DRIVE: A Distributed Economic Meta-Scheduler for the Federation of Grid and Cloud Systems
The computational landscape is littered with islands of disjoint resource providers including
commercial Clouds, private Clouds, national Grids, institutional Grids, clusters, and data centers.
These providers are independent and isolated due to a lack of communication and coordination,
and they are often proprietary, without standardised interfaces, protocols, or execution environments.
The lack of standardisation and global transparency has the effect of binding consumers
to individual providers. With the increasing ubiquity of computation providers, there is an opportunity
to create federated architectures that span both Grid and Cloud computing providers,
effectively creating a global computing infrastructure. In order to realise this vision, secure and
scalable mechanisms to coordinate resource access are required. This thesis proposes a generic
meta-scheduling architecture to facilitate federated resource allocation in which users can provision
resources from a range of heterogeneous (service) providers.
Efficient resource allocation is difficult in large scale distributed environments due to the inherent
lack of centralised control. In a Grid model, local resource managers govern access to a
pool of resources within a single administrative domain but have only a local view of the Grid
and are unable to collaborate when allocating jobs. Meta-schedulers act at a higher level, able to
submit jobs to multiple resource managers; however, they are most often deployed on a per-client
basis and are therefore concerned only with their own allocations, essentially competing against one
another. In a federated environment the widespread adoption of utility computing models seen in
commercial Cloud providers has re-motivated the need for economically aware meta-schedulers.
Economies provide a way to represent the different goals and strategies that exist in a competitive
distributed environment. The use of economic allocation principles effectively creates an
open service market that provides efficient allocation and incentives for participation.
The major contributions of this thesis are the architecture and prototype implementation of the
DRIVE meta-scheduler. DRIVE is a Virtual Organisation (VO) based distributed economic meta-scheduler
in which members of the VO collaboratively allocate services or resources. Providers
joining the VO contribute obligation services to the VO. These contributed services are in effect
membership “dues” and are used in the running of the VO's operations – for example allocation,
advertising, and general management. DRIVE is independent from a particular class of provider
(Service, Grid, or Cloud) or specific economic protocol. This independence enables allocation in
federated environments composed of heterogeneous providers in vastly different scenarios. Protocol
independence facilitates the use of arbitrary protocols based on specific requirements and
infrastructural availability. For instance, within a single organisation where internal trust exists,
users can achieve maximum allocation performance by choosing a simple economic protocol.
In a global utility Grid no such trust exists. The same meta-scheduler architecture can be used
with a secure protocol which ensures the allocation is carried out fairly in the absence of trust.
DRIVE establishes contracts between participants as the result of allocation. A contract describes
individual requirements and obligations of each party. A unique two-stage contract negotiation
protocol is used to minimise the effect of allocation latency. In addition, due to the cooperative nature of
the architecture and the use of secure, privacy-preserving protocols, DRIVE can be deployed in a
distributed environment without requiring large-scale dedicated resources.
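The abstract names a two-stage contract negotiation protocol but does not spell out its mechanics. A minimal sketch of one plausible reading, with hypothetical names and a flat price-per-unit bid model that is not taken from the thesis: stage one gathers cheap, non-binding tentative offers; stage two asks only the provisional winner to commit, falling back to the next-best offer if it cannot.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    price: float
    binding: bool = False

def stage_one(providers, job_size):
    # Stage 1: collect fast, non-binding tentative offers so matching can
    # proceed without waiting for every provider to reserve capacity.
    # 'providers' maps name -> (price per unit of work, capacity available?).
    return [Offer(name, rate * job_size) for name, (rate, _) in providers.items()]

def stage_two(offers, providers):
    # Stage 2: ask only the provisional winner to turn its tentative offer
    # into a binding contract; fall back to the next-best offer if the
    # provider's capacity has been taken in the meantime.
    for offer in sorted(offers, key=lambda o: o.price):
        _, still_has_capacity = providers[offer.provider]
        if still_has_capacity:
            offer.binding = True
            return offer
    return None  # no provider could commit; the allocation must be retried

providers = {"grid-a": (1.0, False), "cloud-b": (1.5, True), "cluster-c": (2.0, True)}
offers = stage_one(providers, job_size=10)
contract = stage_two(offers, providers)
print(contract.provider)  # cloud-b: grid-a was cheaper but could not commit
```

The point of the split is latency: the expensive, binding reservation happens only once, for the winner, rather than for every bidder.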
This thesis presents several other contributions related to meta-scheduling and open service
markets. To overcome the perceived performance limitations of economic systems, four high-utilisation
strategies have been developed and evaluated. Each strategy is shown to improve occupancy,
utilisation, and profit using synthetic workloads based on a production Grid trace. The
gRAVI service wrapping toolkit is presented to address the difficulty of web-enabling existing applications.
The gRAVI toolkit has been extended for this thesis such that it creates economically
aware (DRIVE-enabled) services that can be transparently traded in a DRIVE market without requiring
developer input. The final contribution of this thesis is the definition and architecture of
a Social Cloud – a dynamic Cloud computing infrastructure composed of virtualised resources
contributed by members of a Social network. The Social Cloud prototype is based on DRIVE
and highlights the ease with which dynamic DRIVE markets can be created and used in different
domains.
Excavations at the Gilligan's Island shelters (5FN1592), Fort Carson Military Reservation (FCMR), Fremont County, Colorado
Department Head: Kathleen A. Galvin. 2008 Summer. Includes bibliographical references (pages 379-399). This thesis examines the surface and subsurface archaeological work undertaken in 2002 at the Gilligan's Island site (5FN1592), located at the base of the Rocky Mountains on the Fort Carson Military Reservation, eastern Fremont County, Colorado. Permission was granted by Fort Carson to conduct an excavation at this site in order to determine its potential to produce significant subsurface occupational remains. Excavations focused on two connecting rock shelters at the base of a prominent cliff face. Four interconnecting grid units were positioned in a trench-like fashion through the central midline of each shelter proper. Deposition of excavated units ranges up to 1.3 meters in depth. These trenches exposed deeply stratified prehistoric materials including multiple intact features. The radiocarbon data (based on conventional uncalibrated dates) identified three prehistoric cultural components: Middle Archaic period (ca. 4240-3010 B.P.), Late Archaic period (ca. 2230-1880 B.P.), and Developmental period (ca. 1390-1070 B.P.). A historic component is also evident and is associated with probable looting activities in the shelters. Volume I. Primary report -- Volume II. Database appendixes (zip file)
Bulletin of the Massachusetts Archaeological Society, Vol. 81, No. 1-2
Editor’s Notes (Ryan Wheeler) Remembrance: Frederica Rockefeller Dimmick (1934-2019) (Tonya Baroody Largy, Ian W. Brown, John Rempelakis, William A. Griswold, William P. Burke, and Philip Graham) New Directions on Old Roads: A History of Transportation Archaeology in Massachusetts (John Rempelakis) Discovery of a Small, Isolated, High-Density Lithic Workshop in Interior Massachusetts (Alan E. Strauss) Post-Contact Upland Sites near Lake Chaubunagungamaug (Martin G. Dudek) Native Agricultural Villages in Essex County: Archaeological and Ethnohistorical Evidence (Mary Ellen Lepionka) Contributor
Enforcing CPU allocation in a heterogeneous IaaS
In an Infrastructure as a Service (IaaS), the amount of resources allocated to a virtual machine (VM) at creation time may be expressed with relative values (relative to the hardware, i.e., a fraction of the capacity of a device) or absolute values (i.e., a performance metric which is independent from the capacity of the hardware). Surprisingly, disk or network resource allocations are expressed with absolute values (bandwidth), but CPU resource allocations are expressed with relative values (a percentage of a processor). The major problem with relative CPU allocations is that they depend on the capacity of the CPU, which may vary due to different factors (server heterogeneity in a cluster, Dynamic Voltage and Frequency Scaling (DVFS)). In this paper, we analyze the side effects and drawbacks of relative allocations. We claim that CPU allocations should be expressed with absolute values. We propose such a CPU resource management system and we demonstrate and evaluate its benefits.
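The core of the argument is arithmetic: the same absolute CPU request maps to a different relative cap on each host. A minimal sketch (not the paper's system; the percent-of-one-core cap convention is an assumption, borrowed from how hypervisor schedulers such as Xen's credit scheduler express CPU caps):

```python
def cpu_cap_percent(requested_mhz, host_core_mhz, host_cores):
    # Translate an absolute CPU allocation (in MHz) into the relative cap
    # this particular host must enforce, expressed as a percentage of one
    # core (so 200.0 means two full cores). The same absolute request
    # yields different percentages on heterogeneous hosts, which is why a
    # relative value alone is ambiguous across a cluster or under DVFS.
    capacity_mhz = host_core_mhz * host_cores
    if requested_mhz > capacity_mhz:
        raise ValueError("request exceeds host capacity")
    return 100.0 * requested_mhz / host_core_mhz

# The same 1000 MHz request on a 2.0 GHz host vs a 2.5 GHz host:
print(cpu_cap_percent(1000, 2000, 4))  # 50.0 (% of one core)
print(cpu_cap_percent(1000, 2500, 4))  # 40.0
```

A VM migrated between these two hosts with a fixed 50% relative cap would silently gain or lose 500 MHz of compute; pinning the absolute value and recomputing the cap per host avoids that.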
Mobile computing in a clouded environment
Cloud Computing has started to become a viable option for computing centers and mobile consumers seeking to reduce cost overhead and power consumption and to increase the software services available within their platform. For instance, distributed, memory-constrained mobile devices can expand their ability to share real-time data by utilizing virtual memory located within the cloud. Cloud memory services can be configured to restrict read and write access to the shared memory pool on a partner-by-partner basis. Utilization of such resources in turn reduces hardware requirements on mobile devices while lessening power consumption for each physical resource.
Within the Cloud Computing paradigm, computing resources are provisioned to consumers on demand and guaranteed through service level agreements. Although the
idea of a computing utility is not new, its realization has come to pass as researchers and corporate companies embark on a journey of implementing highly scalable cloud environments. As new solutions and architectures are proposed, additional use cases and consumer concerns have been revealed. These issues range from consumer security, adequate service level agreements, and vendor interoperability to cloud technology standardization. Further, the current state of the art does not adequately address these needs for mobile consumers, where services need to be guaranteed even as consumers dynamically change locations. Due to the rapid adoption of virtualization stacks and the dramatic increase of mobile computing devices, cloud providers must be able to handle logical and physical mobility of consumers. As consumers move throughout geographical regions, there exists the probability that a consumer's new locale may hinder a producer's ability to uphold service level agreements. This inability is due to the fact that a producer may not have physical resources located relatively close to a mobile consumer's new locale. As a consequence, producers must either continue to provide degraded resource consumption or migrate workloads to third-party producers in order to ensure service level agreements are maintained. The goal of this report is to research existing architectures that provide the ability to adequately uphold service level agreements as mobile consumers move from locale to locale. Further, we propose an architecture that can be implemented along with existing solutions in order to ensure consumers receive adequate service levels regardless of locality. We believe this architecture will lead to increased cloud interoperability and decreased consumer-to-producer platform coupling.
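The migration decision described above reduces to a feasibility check: can the current producer still meet the SLA from the consumer's new locale, and if not, which third-party producer can? A minimal sketch under stated assumptions; the function, the region names, and the use of latency as the sole SLA metric are all illustrative, not taken from the report:

```python
def pick_producer(consumer_region, sla_max_latency_ms, producers):
    # Return the producer that best satisfies the latency SLA from the
    # consumer's current region, or None if no producer can meet it
    # (i.e., service would be degraded or the SLA must be renegotiated).
    # 'producers' maps producer name -> {region: measured latency in ms}.
    feasible = {name: lat[consumer_region]
                for name, lat in producers.items()
                if lat.get(consumer_region, float("inf")) <= sla_max_latency_ms}
    if not feasible:
        return None
    return min(feasible, key=feasible.get)

producers = {
    "home-dc":    {"us-east": 20, "eu-west": 140},
    "partner-eu": {"us-east": 110, "eu-west": 25},
}
# A consumer who moves from us-east to eu-west triggers a migration:
print(pick_producer("us-east", 50, producers))  # home-dc
print(pick_producer("eu-west", 50, producers))  # partner-eu
```

In a real system the feasibility test would cover more SLA dimensions than latency (throughput, availability, cost), but the shape of the decision is the same.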
Market-Based Scheduling in Distributed Computing Systems
In distributed computing systems (e.g., in cluster and Grid computing), the available resources can become scarce. Here, market mechanisms have the potential to coordinate resource demand and supply through suitable incentive mechanisms and thus to increase the economic efficiency of the overall system. Based on four specific application scenarios, this thesis addresses the question of how market mechanisms for distributed computing systems should be designed.