Physically Dense Server Architectures
Distributed, in-memory key-value stores have emerged as one of today's most important data center workloads. Because they are critical to the scalability of modern web services, vast resources are dedicated to key-value stores to ensure that quality-of-service guarantees are met. These resources include many server racks to store terabytes of key-value data, the power necessary to run all of the machines, networking equipment and bandwidth, and the data center warehouses used to house the racks.
There is, however, a mismatch between the key-value store software and the
commodity servers on which it is run, leading to inefficient use of resources.
The primary cause of inefficiency is the overhead incurred in processing individual network packets, which typically carry small payloads and require minimal compute resources. Thus, one of the key challenges as we enter the exascale era is how best to adjust to the paradigm shift from compute-centric to storage-centric data centers.
This dissertation presents a hardware/software solution that addresses the
inefficiency issues present in the modern data centers on which key-value
stores are currently deployed. First, it proposes two physical server
designs, both of which use 3D-stacking technology and low-power CPUs to improve
density and efficiency. The first 3D architecture---Mercury---consists of stacks
of low-power CPUs with 3D-stacked DRAM. The second
architecture---Iridium---replaces DRAM with 3D NAND Flash to improve density.
The second portion of this dissertation proposes an enhanced version of the
Mercury server design---called KeyVault---that incorporates integrated,
zero-copy network interfaces along with an integrated switching fabric. In order
to utilize the integrated networking hardware, as well as reduce the
response time of requests, a custom networking protocol is proposed. Unlike
prior works on accelerating key-value stores---e.g., by completely bypassing the
CPU and OS when processing requests---this work only bypasses the CPU and OS
when placing network payloads into a process' memory. The insight behind this is
that because most of the overhead comes from processing packets in the OS
kernel---and not from the request processing itself---direct placement of a packet's payload is sufficient to provide higher throughput and lower latency than prior approaches.
PhD dissertation, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/111414/1/atgutier_1.pd
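To make the direct-placement idea concrete, here is a minimal, illustrative sketch (not taken from the dissertation) of a receive ring held in the application's own memory. In a KeyVault-like design the NIC, rather than a software producer, would write payloads into such slots, so the kernel never copies them; all names here, such as rx_ring and nic_place, are hypothetical.

```c
/* Illustrative sketch only: a single-producer/single-consumer receive ring
 * kept in ordinary process memory.  In a KeyVault-like design the NIC, not a
 * software function, would write payloads into these slots, so the kernel
 * never copies or even sees them. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define RING_SLOTS 8
#define SLOT_BYTES 64

struct slot {
    volatile uint32_t ready;       /* set by the producer when payload is valid */
    uint8_t payload[SLOT_BYTES];   /* written in place, no intermediate copy    */
};

static struct slot rx_ring[RING_SLOTS];

/* Stand-in for the NIC: place a payload directly into a slot. */
static int nic_place(uint32_t idx, const void *data, size_t len)
{
    struct slot *s = &rx_ring[idx % RING_SLOTS];
    if (s->ready || len > SLOT_BYTES)
        return -1;                 /* slot still owned by the consumer, or too big */
    memcpy(s->payload, data, len);
    __sync_synchronize();          /* publish the payload before the ready flag */
    s->ready = 1;
    return 0;
}

int main(void)
{
    nic_place(0, "GET user:42", 12);

    /* Application side: poll the ring and consume payloads in place. */
    for (uint32_t i = 0; i < RING_SLOTS; i++) {
        struct slot *s = &rx_ring[i];
        if (s->ready) {
            printf("request in slot %u: %s\n", i, (const char *)s->payload);
            s->ready = 0;          /* hand the slot back to the producer */
        }
    }
    return 0;
}
```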
Proceedings of the Second International Workshop on HyperTransport Research and Applications (WHTRA2011)
Proceedings of the Second International Workshop on HyperTransport Research and Applications (WHTRA2011), held on February 9, 2011, in Mannheim, Germany. The Second International Workshop for Research on HyperTransport is an international, high-quality forum for scientists, researchers, and developers working in the area of HyperTransport. This includes not only developments and research in HyperTransport itself, but also work which is based on or enabled by HyperTransport. HyperTransport (HT) is an interconnection technology which is typically used as the system interconnect in modern computer systems, connecting the CPUs among each other and with the I/O bridges. Primarily designed as an interconnect between high-performance CPUs, it provides extremely low latency, high bandwidth, and excellent scalability. The definition of the HTX connector allows the use of HT even for add-in cards. Unlike other peripheral interconnect technologies such as PCI-Express, no protocol conversion or intermediate bridging is necessary: HT is a direct connection between device and CPU with minimal latency. Another advantage is the possibility of cache-coherent devices. Because of these properties, HT is of high interest for high-performance I/O such as networking and storage, but also for co-processing and acceleration based on ASIC or FPGA technologies. Acceleration in particular is seeing a resurgence of interest today; one reason is the possibility of reducing power consumption through the use of accelerators. In the area of parallel computing, the low-latency communication allows for fine-grained communication schemes and is well suited to scalable systems. In summary, HT technology offers key advantages and great performance for any research related to or based on interconnects. For more information please consult the workshop website (http://whtra.uni-hd.de).
Design an Object-Oriented Home Inspection Application for a Portable Device
Recent advancements in personal digital assistant (PDA) Windows application programming methodology have made it easier to develop PDA applications. The release of Microsoft® Visual Studio 2005 .NET incorporated handheld programming support, while the Microsoft® Mobile® 5.0 operating system dramatically improved the PDA's operation and hardware configuration. This paper researches and analyzes object-oriented languages, relational database, and dynamic report generation technologies for the PDA as they apply to the development of a professional home inspection application. The focus of this paper is on the implementation of the most advanced PDA technologies for a high-end database PDA application design.
Efficient and predictable high-speed storage access for real-time embedded systems
As the speed, size, reliability, and power efficiency of non-volatile storage media increase, and the data demands of many application domains grow, operating systems are being put under escalating pressure to provide high-speed access to storage. Traditional models of storage access assume that devices are slow, that there is plenty of slack time in which to process data between requests being serviced, and that all significant variations in timing are down to the storage device itself. Modern high-speed storage devices break this assumption, causing storage applications to become processor-bound, rather than I/O-bound, in an increasing number of situations. This is especially an issue in real-time embedded systems, where limited processing resources and strict timing and predictability requirements amplify any issues caused by the complexity of the software storage stack.
This thesis explores the issues related to accessing high-speed storage from real-time embedded systems, providing a thorough analysis of storage operations based on metrics relevant to the area. From this analysis, a number of alternative storage architectures are proposed and explored, showing that a simpler, more direct path from applications to storage can have a positive impact on efficiency and predictability in such systems.
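As a rough illustration of the kind of measurement such an analysis depends on (not code from the thesis), the sketch below times a single 4 KiB read through the ordinary buffered path and through O_DIRECT, which bypasses the page cache and so exposes more of the raw device and software-stack cost. The file name data.bin is a placeholder, and O_DIRECT support depends on the filesystem.

```c
/* Sketch: time one buffered read and one O_DIRECT read of the same 4 KiB block.
 * "data.bin" is a placeholder file; O_DIRECT requires an aligned buffer and a
 * filesystem that supports it. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define BLOCK 4096

static double timed_read_us(int fd, void *buf)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    if (pread(fd, buf, BLOCK, 0) != BLOCK)
        perror("pread");
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3;
}

int main(void)
{
    void *buf;
    if (posix_memalign(&buf, BLOCK, BLOCK))   /* alignment required by O_DIRECT */
        return 1;

    int buffered = open("data.bin", O_RDONLY);
    int direct   = open("data.bin", O_RDONLY | O_DIRECT);
    if (buffered < 0 || direct < 0) {
        perror("open");
        return 1;
    }

    printf("buffered read: %.1f us\n", timed_read_us(buffered, buf));
    printf("O_DIRECT read: %.1f us\n", timed_read_us(direct, buf));

    close(buffered);
    close(direct);
    free(buf);
    return 0;
}
```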
Automatically bridging the semantic gap in machine introspection
Disclosed are various embodiments that facilitate automatically bridging the semantic gap in machine introspection. It may be determined that a program executed by a first virtual machine is requested to introspect a second virtual machine. A system call execution context of the program may be determined in response to determining that the program is requested to introspect the second virtual machine. Redirectable data in a memory of the second virtual machine may be identified based at least in part on the system call execution context of the program. The program may be configured to access the redirectable data. In various embodiments, the program may be able to modify the redirectable data, thereby allowing configuration, reconfiguration, and recovery operations to be performed on the second virtual machine from within the first virtual machine.
Board of Regents, University of Texas System
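As a toy sketch of the redirection idea only (the names guest_read, other_vm_memory, and introspection_context are hypothetical and not from the disclosure), the fragment below satisfies a data read from the introspected VM's memory whenever an introspection context is active, and from local memory otherwise.

```c
/* Toy sketch: while an "introspection" syscall context is active, satisfy data
 * reads from the other VM's memory image instead of local memory.
 * guest_read() stands in for a hypervisor-provided accessor; all names here
 * are hypothetical. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint8_t other_vm_memory[4096];   /* stand-in for the second VM's memory */
static int introspection_context = 0;   /* set while the introspecting syscall runs */

/* Hypothetical hypervisor hook: copy bytes out of the introspected VM. */
static void guest_read(uint64_t addr, void *dst, size_t len)
{
    memcpy(dst, &other_vm_memory[addr], len);
}

/* Redirectable read: go to the second VM's memory only in introspection context. */
static void redirected_read(uint64_t addr, void *dst, size_t len)
{
    if (introspection_context)
        guest_read(addr, dst, len);
    else
        memcpy(dst, (const void *)(uintptr_t)addr, len);  /* normal local access */
}

int main(void)
{
    /* Pretend the guest kernel stored a task name at offset 64. */
    strcpy((char *)&other_vm_memory[64], "/sbin/init");

    char buf[32];
    introspection_context = 1;          /* the program is now introspecting VM 2 */
    redirected_read(64, buf, sizeof buf);
    printf("task name seen through introspection: %s\n", buf);
    return 0;
}
```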
Rack-Scale Memory Pooling for Datacenters
The rise of web-scale services has led to a staggering growth in user data on the Internet. To transform such vast raw data into valuable information for the user and provide quality assurances, it is important to minimize access latency and enable in-memory processing. For more than a decade, the only practical way to accommodate ever-growing data in memory has been to scale out server resources, which has led to the emergence of large-scale datacenters and distributed non-relational databases (NoSQL). Such horizontal scaling of resources translates to an increasing number of servers that participate in processing individual user requests. Typically, each user request results in hundreds of independent queries targeting different NoSQL nodes (servers), and the larger the number of servers involved, the higher the fan-out. To complete a single user request, all of the queries associated with that request have to complete first, and thus the slowest query determines the completion time. Because of skewed popularity distributions and resource contention, the more servers we have, the harder it is to achieve high throughput and good server utilization without violating service level objectives. This thesis proposes rack-scale memory pooling (RSMP), a new scaling technique for future datacenters that reduces networking overheads and improves the performance of core datacenter software. RSMP is an approach to building larger, rack-scale capacity units for datacenters through specialized fabric interconnects with support for one-sided operations, and using them, in lieu of conventional servers (e.g., 1U), to scale out. We define an RSMP unit to be a server rack connecting 10s to 100s of servers to a secondary network enabling direct, low-latency access to the global memory of the rack. We then propose a new RSMP design, Scale-Out NUMA, that leverages integration and a NUMA fabric to narrow the gap between local and remote memory to only a 5× difference in access latency. Finally, we show how RSMP impacts NoSQL data serving, a key datacenter service used by most web-scale applications today. We show that using fewer, larger data shards leads to less load imbalance and higher effective throughput without violating applications' service level objectives. For example, by using Scale-Out NUMA, RSMP improves the throughput of a key-value store by up to 8.2× over a traditional scale-out deployment.
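A back-of-the-envelope illustration (not from the thesis) of why high fan-out makes service-level objectives hard to meet: if each of N queries independently meets its latency target with probability p, the whole request meets it with probability p^N, which falls off quickly as N grows. The sketch below uses an assumed per-query probability of 0.99.

```c
/* Sketch: if each of N fan-out queries independently meets its latency target
 * with probability p, the whole request meets it with probability p^N. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double p = 0.99;                    /* per-query probability (assumed) */
    const int fanouts[] = { 1, 10, 100, 500 };
    const int n_cases = sizeof fanouts / sizeof fanouts[0];

    for (int i = 0; i < n_cases; i++) {
        int n = fanouts[i];
        printf("fan-out %3d: request meets its target %5.1f%% of the time\n",
               n, 100.0 * pow(p, n));
    }
    return 0;
}
```

With p = 0.99, a fan-out of 100 already drops the request-level success rate to roughly 37%, which is why the thesis's fewer-but-larger shards reduce tail-latency violations.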
An approach to building a secure and persistent distributed object management system
The Common Object Request Broker Architecture (CORBA) proposed by the Object Management Group (OMG) is a widely accepted standard that provides a system-level framework for the design and implementation of distributed objects. The core of the Object Management Architecture (OMA) is the Object Request Broker (ORB), which provides transparency of object location, activation, and communication. However, the specification provided by the OMG is not sufficient. For instance, there are no security specifications for handling object requests through the ORBs. The lack of such a security service prevents CORBA from being used to handle sensitive data such as personal and corporate financial information. In view of the above, this thesis identifies, explores, and provides an approach to handling secure objects in a distributed environment, along with a persistent object service, using the CORBA specification. The research specifically involves the design and implementation of a secure distributed object service. This object service requires a persistence service and object storage for storing and retrieving security-specific information. To provide a secure distributed object environment, a secure object service using the specifications provided by the OMG has been designed and implemented. In addition, to preserve the persistence of secure information, an object service has been implemented to provide a persistent data store. The secure object service can provide a framework for handling distributed objects in applications requiring security clearance, such as distributed banking, online stock trading, internet shopping, and geographic and medical information systems.