2,332 research outputs found
Fleets: Scalable Services in a Factored Operating System
Current monolithic operating systems are designed for uniprocessor systems, and their architecture reflects this. The rise of multicore and cloud computing is drastically changing the tradeoffs in operating system design. The culture of scarce computational resources is being replaced with one of abundant cores, where spatial layout of processes supplants time multiplexing as the primary scheduling concern. Efforts to parallelize monolithic kernels have been difficult and only marginally successful, and new approaches are needed. This paper presents fleets, a novel way of constructing scalable OS services. With fleets, traditional OS services are factored out of the kernel and moved into user space, where they are further parallelized into a distributed set of concurrent, message-passing servers. We evaluate fleets within fos, a new factored operating system designed from the ground up with scalability as the first-order design constraint. This paper details the main design principles of fleets, and how the system architecture of fos enables their construction. We describe the design and implementation of three critical fleets (network stack, page allocation, and file system) and compare with Linux. These comparisons show that fos achieves superior performance and has better scalability than Linux for large multicores; at 32 cores, fos's page allocator performs 4.5 times better than Linux, and fos's network stack performs 2.5 times better. Additionally, we demonstrate how fleets can adapt to changing resource demand, and the importance of spatial scheduling for good performance in multicores.
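As a rough illustration of the fleet idea, the sketch below (our own toy, not fos's API or message format) factors a page-allocator service into a small fleet of concurrent, message-passing servers, each owning a spatial partition of the page space:

```python
import threading
import queue

# Hypothetical sketch of a "fleet": an OS service split into several
# concurrent, message-passing servers (here, a toy page allocator).
# The member count, message format, and routing policy are illustrative.

class PageAllocatorMember(threading.Thread):
    """One server in the fleet; owns a private pool of page numbers."""

    def __init__(self, pages, requests):
        super().__init__(daemon=True)
        self.pages = list(pages)      # pages this member may hand out
        self.requests = requests      # inbound message queue

    def run(self):
        while True:
            reply_q = self.requests.get()     # a request is just a reply queue
            if reply_q is None:               # shutdown message
                break
            page = self.pages.pop() if self.pages else None
            reply_q.put(page)                 # message passing, no shared state

def spawn_fleet(total_pages, members):
    """Partition the page space across fleet members (spatial layout)."""
    fleet = []
    chunk = total_pages // members
    for i in range(members):
        q = queue.Queue()
        m = PageAllocatorMember(range(i * chunk, (i + 1) * chunk), q)
        m.start()
        fleet.append(q)
    return fleet

def alloc_page(fleet, client_id):
    """Clients are statically routed to one member, spreading load."""
    reply = queue.Queue()
    fleet[client_id % len(fleet)].put(reply)
    return reply.get()

fleet = spawn_fleet(total_pages=16, members=4)
pages = [alloc_page(fleet, c) for c in range(8)]
print(pages)
```

Because each member owns a disjoint slice of the page space, allocations proceed without any cross-server locking, which is the scalability property the fleet design targets.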
TechNews digests: Jan - Nov 2009
TechNews is a technology news and analysis service aimed at anyone in the education sector keen to stay informed about technology developments, trends and issues. TechNews focuses on emerging technologies and other technology news. The digest service ran from September 2004 until May 2010, with combined analysis pieces and news published every two to three months.
Making data centres fit for demand response: introducing GreenSDA and GreenSLA contracts
The power grid has become a critical infrastructure which modern society cannot do without. It has always been a challenge to keep power supply and demand in balance, the more so with the recent rise of intermittent renewable energy sources. Demand response schemes are one of the countermeasures, traditionally employed with large industrial plants. This paper suggests considering data centres as candidates for demand response, as they are large energy consumers and are able to adapt their power profile sufficiently well. To unlock this potential, we suggest a system of contracts that regulate collaboration and economic incentives between the data centre and its energy supplier (GreenSDA), as well as between the data centre and its customers (GreenSLA). Several presented use cases serve to validate the suitability of data centres for demand response schemes.
Workload-Aware Database Monitoring and Consolidation
In most enterprises, databases are deployed on dedicated database servers. Often, these servers are underutilized much of the time. For example, in traces from almost 200 production servers from different organizations, we see an average CPU utilization of less than 4%. This unused capacity can be potentially harnessed to consolidate multiple databases on fewer machines, reducing hardware and operational costs. Virtual machine (VM) technology is one popular way to approach this problem. However, as we demonstrate in this paper, VMs fail to adequately support database consolidation, because databases place a unique and challenging set of demands on hardware resources, which are not well-suited to the assumptions made by VM-based consolidation.
Instead, our system for database consolidation, named Kairos, uses novel techniques to measure the hardware requirements of database workloads, as well as models to predict the combined resource utilization of those workloads. We formalize the consolidation problem as a non-linear optimization program, aiming to minimize the number of servers and balance load while achieving near-zero performance degradation. We compare Kairos against virtual machines, showing up to 12× higher throughput on a TPC-C-like benchmark. We also tested the effectiveness of our approach on real-world data collected from production servers at Wikia.com, Wikipedia, Second Life, and MIT CSAIL, showing absolute consolidation ratios ranging between 5.5:1 and 17:1.
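The flavour of the optimization Kairos formalizes can be shown with a deliberately simplified sketch. The first-fit-decreasing heuristic, the utilization cap, and the additive load model below are our own illustrative assumptions, not Kairos's actual non-linear program or its statistical resource models:

```python
# Toy sketch of database consolidation as packing: assign workload CPU
# profiles (utilization per time interval) to servers so that no server's
# combined per-interval load exceeds a cap, using first-fit decreasing.
# Illustration only; Kairos's real formulation and models are richer.

def consolidate(profiles, cap=0.8):
    """profiles: one per-interval CPU utilization list per workload."""
    servers = []     # each server is a running per-interval load vector
    placement = []   # (workload index, server index) pairs
    order = sorted(range(len(profiles)), key=lambda i: max(profiles[i]),
                   reverse=True)
    for i in order:
        for s, load in enumerate(servers):
            # Combined utilization is modeled as a simple per-interval sum.
            if all(l + p <= cap for l, p in zip(load, profiles[i])):
                servers[s] = [l + p for l, p in zip(load, profiles[i])]
                placement.append((i, s))
                break
        else:
            servers.append(list(profiles[i]))
            placement.append((i, len(servers) - 1))
    return len(servers), dict(placement)

# Eight mostly idle workloads with occasional spikes consolidate onto
# far fewer servers than one machine each.
profiles = [[0.04, 0.02, 0.30], [0.03, 0.25, 0.02], [0.05, 0.05, 0.05],
            [0.02, 0.02, 0.40], [0.30, 0.03, 0.04], [0.04, 0.04, 0.04],
            [0.25, 0.02, 0.03], [0.03, 0.30, 0.02]]
n_servers, placement = consolidate(profiles)
print(n_servers)
```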
Virtualisation and Thin Client : A Survey of Virtual Desktop environments
This survey examines some of the leading commercial Virtualisation and Thin Client technologies. Reference is made to a number of academic research sources and to prominent industry specialists and commentators. A basic virtualisation laboratory model is assembled to demonstrate fundamental Thin Client operations and to clarify potential problem areas.
Cloud engineering is search based software engineering too
Many of the problems posed by the migration of computation to cloud platforms can be formulated and solved using techniques associated with Search Based Software Engineering (SBSE). Much of cloud software engineering involves problems of optimisation: performance, allocation, assignment and the dynamic balancing of resources to achieve pragmatic trade-offs between many competing technical and business objectives. SBSE is concerned with the application of computational search and optimisation to solve precisely these kinds of software engineering challenges. Interest in both cloud computing and SBSE has grown rapidly in the past five years, yet there has been little work on SBSE as a means of addressing cloud computing challenges. Like many computationally demanding activities, SBSE has the potential to benefit from the cloud; ‘SBSE in the cloud’. However, this paper focuses, instead, on the ways in which SBSE can benefit cloud computing. It thus develops the theme of ‘SBSE for the cloud’, formulating cloud computing challenges in ways that can be addressed using SBSE.
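A minimal instance of the ‘SBSE for the cloud’ framing might look like the sketch below, where task-to-machine assignment is cast as a search problem and attacked with a simple hill climber. The fitness function and move operator are illustrative placeholders of our own, not taken from the paper:

```python
import random

# Cloud resource assignment treated as computational search: a hill
# climber moves tasks between machines to minimize load imbalance.
# Real SBSE work would use richer objectives and search operators.

def imbalance(assign, demands, machines):
    loads = [0.0] * machines
    for task, m in enumerate(assign):
        loads[m] += demands[task]
    return max(loads) - min(loads)          # fitness: lower is better

def hill_climb(demands, machines, steps=2000, seed=0):
    rng = random.Random(seed)
    assign = [rng.randrange(machines) for _ in demands]
    best = imbalance(assign, demands, machines)
    for _ in range(steps):
        task = rng.randrange(len(demands))
        old = assign[task]
        assign[task] = rng.randrange(machines)   # neighbourhood move
        cost = imbalance(assign, demands, machines)
        if cost <= best:
            best = cost                           # keep non-worsening move
        else:
            assign[task] = old                    # revert worsening move
    return assign, best

demands = [5, 3, 8, 2, 7, 4, 6, 1, 9, 5]
assign, cost = hill_climb(demands, machines=5)
print(cost)
```

The same skeleton generalises to the other objectives the paper lists (performance, allocation, dynamic rebalancing) by swapping the fitness function.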
Topics in Power Usage in Network Services
The rapid advance of computing technology has created a world powered by millions of computers. Often these computers idly consume energy unnecessarily, in spite of the efforts of hardware manufacturers. This thesis examines proposals to determine when to power down computers without negatively impacting the service they are used to deliver, compares and contrasts the efficiency of virtualisation with containerisation, and investigates the energy efficiency of the popular cryptocurrency Bitcoin. We begin by examining the current corpus of literature and defining the key terms we need to proceed. We then propose a technique for reducing the energy consumption of servers by moving them into a sleep state and employing a low-powered device to act as a proxy in their place. After this we compare the energy efficiency of the two most common such means, virtualisation and containerisation. Moving on, we look at the cryptocurrency Bitcoin: we consider the energy consumption of Bitcoin mining and whether, given the value of bitcoin, mining is profitable. Finally, we conclude by summarising the results and findings of this thesis. This work increases our understanding of some of the challenges of energy-efficient computation, as well as proposing novel mechanisms to save energy.
Disaggregating and Consolidating Network Functionalities
Resource disaggregation has gained huge popularity in recent years. Existing works demonstrate how to disaggregate compute, memory, and storage resources. We, for the first time, demonstrate how to disaggregate network resources by proposing a new distributed hardware framework called SuperNIC. Each SuperNIC connects a small set of endpoints and consolidates network functionalities for these endpoints. We prototyped SuperNIC on an FPGA and demonstrate its performance and cost benefits with real network functions and customized disaggregated applications.
ACUTA Journal of Telecommunications in Higher Education
In This Issue
Abundance of Services at IU
Customer Relations and Technology: Practical Solutions from Two Campuses
FSU Converges Support to Follow Technology
Service Catalogs and the Value of Just 12 Minutes
Essential Telephone Skills
Email Services: Beginning of the End?
Institutional Excellence Award
Interview
President's Message
From the Executive Director
Master of Science thesis
Efficient movement of massive amounts of data over high-speed networks at high throughput is essential for a modern-day in-memory storage system. In response to growing throughput and latency demands at scale, a new class of database systems was developed in recent years. The development of these systems was guided by increased access to high-throughput, low-latency network fabrics and the declining cost of Dynamic Random Access Memory (DRAM). These systems were designed with On-Line Transaction Processing (OLTP) workloads in mind and, as a result, are optimized for fast dispatch and perform well under small request-response scenarios. However, massive server responses, such as those for range queries and data migration for load balancing, pose challenges for this design. This thesis analyzes the effects of large transfers on scale-out systems through the lens of a modern Network Interface Card (NIC). The present-day NIC offers new and exciting opportunities and challenges for large transfers, but using them efficiently requires smart data layout and concurrency control. We evaluated the impact of modern NICs on data layout design by measuring transmit performance, and assessed full-system impact by observing the effects of Direct Memory Access (DMA), Remote Direct Memory Access (RDMA), and caching improvements such as Intel® Data Direct I/O (DDIO). We discovered that using techniques such as zero copy, in a client-assisted design with records that are not updated in place, yields around 25% savings in CPU cycles and a 50% reduction in memory bandwidth utilization on a server. We also set up experiments that underline the bottlenecks in the current approach to data migration in RAMCloud, and we propose guidelines for a fast and efficient migration protocol for RAMCloud.
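The zero-copy, client-assisted idea behind the reported CPU and memory-bandwidth savings can be caricatured as follows; the append-only log layout and names are illustrative, not RAMCloud's actual record format:

```python
import struct

# Sketch of zero-copy reads over an append-only log: a range-query
# response is served as views into the in-memory log rather than by
# copying each record into a response buffer. Because records are never
# updated in place, a view handed out stays valid.

LOG = bytearray()
INDEX = {}  # key -> (offset, length) of the record's value in LOG

def append(key, value):
    """Append-only write; a 4-byte little-endian length header precedes
    each value (layout is our own, purely for illustration)."""
    hdr = struct.pack("<I", len(value))
    off = len(LOG) + len(hdr)
    LOG.extend(hdr + value)
    INDEX[key] = (off, len(value))

def read_zero_copy(key):
    """Return a memoryview into LOG: no value bytes are copied.
    Note: while a view is live, the bytearray cannot be resized."""
    off, length = INDEX[key]
    return memoryview(LOG)[off:off + length]

append(b"k1", b"hello")
append(b"k2", b"world")
view = read_zero_copy(b"k2")
print(bytes(view))
```

A real NIC transmit path would hand such views to scatter-gather DMA descriptors, which is where the avoided copies translate into the cycle and bandwidth savings the thesis measures.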