6 research outputs found

    Re-Architecting Mass Storage Input/Output for Performance and Efficiency

    The semantics and fundamental structure of modern operating system IO systems date from the mid-1960s to the mid-1970s, a period when computing power and memory capacity were a mere fraction of today's. Engineering tradeoffs made in that era enshrine the resource-availability context of computing at the time. Deconstructing the semantics of the IO infrastructure allows a re-examination of long-standing design decisions in the context of today's far greater processing and memory resources, and that re-examination enables changes to several widespread paradigms that improve efficiency and performance.

    Metacomputing on clusters augmented with reconfigurable hardware

    Network-Compute Co-Design for Distributed In-Memory Computing

    The booming popularity of online services is rapidly raising the demands placed on modern datacenters. To cope with the data deluge, growing user bases, and tight quality-of-service constraints, service providers deploy massive datacenters with tens to hundreds of thousands of servers, keeping petabytes of latency-critical data memory resident. Such data distribution and the multi-tiered nature of the software used by feature-rich services result in frequent inter-server communication and remote memory access over the network. Hence, networking takes center stage in datacenters. In response to growing internal datacenter network traffic, networking technology is rapidly evolving. Lean user-level protocols, like RDMA, and high-performance fabrics have started making their appearance, dramatically reducing datacenter-wide network latency and offering unprecedented per-server bandwidth. At the same time, the end of Dennard scaling is grinding processor performance improvements to a halt. The net result is a growing mismatch between per-server network and compute capabilities: it will soon be difficult for a server processor to utilize all of its available network bandwidth. Restoring balance between network and compute capabilities requires tighter co-design of the two. The network interface (NI) is of particular interest, as it lies on the boundary of network and compute. In this thesis, we focus on the design of an NI for a lightweight RDMA-like protocol and its full integration with modern manycore server processors. The NI's capabilities scale with both the increasing network bandwidth and the growing number of cores on modern server processors. Leveraging our architecture's integrated NI logic, we introduce new functionality at the network endpoints that yields performance improvements for distributed systems. Such additions include new network operations with stronger semantics tailored to common application requirements, and integrated logic for balancing network load across a modern processor's multiple cores. We make the case that exposing richer, end-to-end semantics to the NI is a unique enabler for optimizations that can reduce software complexity and remove significant load from the processor, contributing toward maintaining balance between the two valuable resources of network and compute. Overall, network-compute co-design addresses the challenges associated with the emerging technological mismatch of compute and networking capabilities, yielding significant performance improvements for distributed memory systems.
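
    As a toy illustration of the load-balancing idea in the abstract above, the sketch below (not from the thesis) is a minimal queueing simulation under assumed Poisson arrivals and exponential service times; the "static" and "shared" policy names are hypothetical. It contrasts an NI that hashes each request to a fixed per-core queue with an idealized NI that, with a global view of core occupancy, hands each request to the earliest-idle core.

        import random

        def simulate(num_cores, policy, num_reqs=50_000, utilization=0.9, seed=1):
            """Return 99th-percentile request latency under a dispatch policy.

            'static': each request is hashed to a fixed, randomly chosen core
            queue, with no load awareness. 'shared': a single logical queue;
            each request goes to the core that becomes idle earliest.
            Arrivals are Poisson; service times are exponential with mean 1.
            """
            rng = random.Random(seed)
            t = 0.0
            core_free_at = [0.0] * num_cores   # time at which each core goes idle
            latencies = []
            for _ in range(num_reqs):
                t += rng.expovariate(utilization * num_cores)  # next arrival
                service = rng.expovariate(1.0)                 # request's work
                if policy == "static":
                    c = rng.randrange(num_cores)               # hash-style steering
                else:  # "shared": idealized NI-side load balancing
                    c = min(range(num_cores), key=core_free_at.__getitem__)
                start = max(t, core_free_at[c])                # wait if core busy
                core_free_at[c] = start + service
                latencies.append(core_free_at[c] - t)          # queueing + service
            latencies.sort()
            return latencies[int(0.99 * len(latencies))]

        for policy in ("static", "shared"):
            print(f"{policy:>6}: p99 = {simulate(16, policy):.1f} x mean service time")

    At high utilization the shared-queue policy sharply reduces 99th-percentile latency relative to static steering, the kind of tail-latency benefit that NI-integrated, load-aware dispatch targets; the gap widens as utilization approaches one.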

    Technologies and Applications for Big Data Value

    This open access book explores cutting-edge solutions and best practices for big data and data-driven AI applications in the data-driven economy. It gives the reader a basis for understanding how technical issues can be overcome to offer real-world solutions to major industrial areas. The book starts with an introductory chapter that provides an overview by positioning the following chapters in terms of their contributions to the technology frameworks that are key elements of the Big Data Value Public-Private Partnership and the upcoming Partnership on AI, Data and Robotics. The remainder of the book is arranged in two parts. The first part, “Technologies and Methods”, contains horizontal contributions of technologies and methods that enable data value chains to be applied in any sector. The second part, “Processes and Applications”, details experience reports and lessons from using big data and data-driven approaches in processes and applications. Its chapters are co-authored with industry experts and cover domains including health, law, finance, retail, manufacturing, mobility, and smart cities. Contributions emanate from the Big Data Value Public-Private Partnership and the Big Data Value Association, which have acted as the nucleus of the European data community, bringing businesses together with leading researchers to harness the value of data for the benefit of society, business, science, and industry. The book is of interest to two primary audiences: first, undergraduate and postgraduate students and researchers in fields including big data, data science, data engineering, machine learning, and AI; and second, practitioners and industry experts engaged in data-driven systems and software design and deployment projects who are interested in employing these advanced methods to address real-world problems.

    Heart & Matter: Fermentation in a Time of Crisis

    In Heart & Matter, I explore contemporary artisan movements from the perspectives of the artisans who animate them, considering how people draw on this emergent category of alternate labor and identity to navigate crises of social, economic, and personal precariousness within the artisan industry. Moving from North Carolina to Okinawa, Tokyo to Chicago, my collaborators shared the quotidian anxiety of how to keep their crafts – and the businesses, livelihoods, and identities tied up in those crafts – relevant, viable, and even successful. Toward survival, my interlocutors engaged in practices of resilience, innovation, and collaboration, elemental threads that wove their working philosophies of craft. At the visceral intersection of ethnography and apprenticeship, I trace a working ethos of emergent artisanship that captures the hopes and anxieties, the successes and failures, and the everyday lives and works of craftspeople confronting uncertain frontiers of vocation and taste. By way of introduction, Every Scar a Lesson outlines and demonstrates my primary methodology, an itinerant series of participant observations from the perspective of formal and informal apprenticeship, or what I call a wandering apprenticeship. Storms Within, Storms Without examines the resilience crucial to meeting and overcoming the difficulties of craft livelihoods. Despite the ease many associate with the industry (even some of those within it), being a craftsperson is not easy. The ups and downs of a craft livelihood can be overwhelming, and I trace some of the strategies – ethical or otherwise – that craftspeople use to resist defeat. Fortune and Glory contemplates the fickleness of innovation. I discuss the environmentally contingent, cooperative nature of creativity and the possibilities and limitations such a nature enacts. I consider the value innovation can bring to a craft venture, as well as the potential consequences for business and craftsperson when the well of innovation runs dry. Ouroboros explores the phenomenon of collaboration, touching on the practice of collaborative production, the communal ethos among craftspeople, and the broader concerns of working with and within a community. This chapter reflects both on the creative potential of the craft community and on its pressures.

    Doctor of Philosophy