107 research outputs found

    LIPIcs, Volume 261, ICALP 2023, Complete Volume


    Performance Observability and Monitoring of High Performance Computing with Microservices

    Traditionally, High Performance Computing (HPC) software has been built and deployed as bulk-synchronous, parallel executables based on the message-passing interface (MPI) programming model. The rise of data-oriented computing paradigms and an explosion in the variety of applications that need to be supported on HPC platforms have forced a rethink of the appropriate programming and execution models to integrate this new functionality. In situ workflows mark a paradigm shift in HPC software development methodologies, enabling a range of new applications, from user-level data services to machine learning (ML) workflows that run alongside traditional scientific simulations. By tracing the evolution of HPC software development over the past 30 years, this dissertation identifies the key elements and trends responsible for the emergence of coupled, distributed, in situ workflows. This dissertation's focus is on coupled in situ workflows involving composable, high-performance microservices. After outlining the motivation to enable performance observability of these services and why existing HPC performance tools and techniques cannot be applied in this context, this dissertation proposes a solution wherein a set of techniques gathers, analyzes, and orients performance data from different sources to generate observability. By leveraging microservice components initially designed to build high-performance data services, this dissertation demonstrates their broader applicability for building and deploying performance monitoring and visualization as services within an in situ workflow.
The results from this dissertation suggest that: (1) integrating performance data from different sources is vital to understanding the performance of service components; (2) in situ (online) analysis of this performance data is needed to enable the adaptivity of distributed components and to manage monitoring data volume; (3) statistical modeling combined with performance observations can help generate better service configurations; and (4) services are a promising architectural choice for deploying in situ performance monitoring and visualization functionality. This dissertation includes previously published and co-authored material as well as unpublished co-authored material.
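Finding (2), that online analysis is needed to manage monitoring data volume, can be illustrated with a toy sketch of my own (not taken from the dissertation): a monitoring component keeps running statistics of a metric stream via Welford's algorithm instead of retaining raw samples. The class name and the latency values are illustrative assumptions.

```python
# Toy sketch: online reduction of a performance-metric stream using
# Welford's algorithm, so running statistics replace raw samples.
class OnlineStats:
    def __init__(self):
        self.n = 0        # samples seen so far
        self.mean = 0.0   # running mean
        self.m2 = 0.0     # running sum of squared deviations

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        # sample variance; 0.0 until at least two samples arrive
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = OnlineStats()
for latency_us in [120, 135, 118, 500, 122]:  # hypothetical latencies
    stats.update(latency_us)
print(round(stats.mean, 1), round(stats.variance, 1))  # -> 199.0 28357.0
```

Memory use is constant per metric regardless of sampling rate, which is the property that makes this kind of online reduction attractive for in situ monitoring.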

    Tackling Performance and Security Issues for Cloud Storage Systems

    Building data-intensive applications and supporting emerging computing paradigms (e.g., machine learning (ML), artificial intelligence (AI), and the Internet of Things (IoT)) in cloud computing environments is becoming the norm, given the many advantages in scalability, reliability, security, and performance. However, under rapid changes in applications, system middleware, and underlying storage devices, service providers face new challenges in delivering performance and security isolation in the context of resources shared among multiple tenants. The gap between the decades-old storage abstraction and modern storage devices keeps widening, calling for software/hardware co-designs to achieve more effective performance and security protocols. This dissertation rethinks the storage subsystem from the device level to the system level and proposes new designs at each level to tackle performance and security issues for cloud storage systems. In the first part, we present an event-based SSD (Solid State Drive) simulator that models modern protocols, firmware, and storage backends in detail. The proposed simulator can capture the nuances of SSD internal states under various I/O workloads, helping researchers understand the impact of various SSD designs and workload characteristics on end-to-end performance. In the second part, we study the security challenges of shared in-storage computing infrastructures. Many cloud providers offer isolation at multiple levels to secure data and instances; however, security measures in emerging in-storage computing infrastructures have not been studied. We first investigate the attacks that could be conducted by offloaded in-storage programs in a multi-tenant cloud environment. To defend against these attacks, we build a lightweight Trusted Execution Environment, IceClave, to enable security isolation between in-storage programs and internal flash management functions.
We show that while enforcing security isolation in the SSD controller with minimal hardware cost, IceClave still keeps the performance benefit of in-storage computing, delivering up to 2.4x better performance than the conventional host-based trusted computing approach. In the third part, we investigate the performance interference caused by other tenants' I/O flows. We demonstrate that I/O resource sharing can often lead to performance degradation and instability. The block device abstraction fails to expose SSD parallelism or to pass application requirements down to the device. To this end, we propose a software/hardware co-design that enforces performance isolation by bridging this semantic gap. Our design can significantly improve QoS (Quality of Service) by reducing throughput penalties and tail latency spikes. Lastly, we explore more effective I/O control to address contention in the storage software stack. We illustrate that the state-of-the-art resource control mechanism, Linux cgroups, is insufficient for controlling I/O resources: inappropriate cgroup configurations may even hurt the performance of co-located workloads under memory-intensive scenarios. We add kernel support for limiting page cache usage per cgroup and for achieving I/O proportionality.
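As a rough illustration of the event-driven simulation style the first part describes (a sketch under my own assumptions, not the dissertation's actual simulator), the following models per-channel service of read requests with an event queue; the channel count and service time are invented.

```python
import heapq

# Toy sketch of an event-driven SSD model: requests arrive at flash
# channels, and each channel serves one request at a time.
CHANNELS = 2
READ_US = 50  # hypothetical per-read service time, microseconds

def simulate(arrivals):
    """arrivals: list of (arrival_time_us, channel) read requests.
    Returns completion times in event order."""
    free_at = [0] * CHANNELS          # when each channel next goes idle
    events = list(arrivals)
    heapq.heapify(events)             # process in arrival-time order
    done = []
    while events:
        t, ch = heapq.heappop(events)
        start = max(t, free_at[ch])   # queue behind a busy channel
        free_at[ch] = start + READ_US
        done.append(free_at[ch])
    return done

print(simulate([(0, 0), (10, 0), (10, 1)]))  # -> [50, 100, 60]
```

Even this skeleton shows the effect the abstract points to: two requests on the same channel serialize (completing at 50 and 100), while a request on an idle channel overlaps with them.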

    Programming Persistent Memory

    Beginning and experienced programmers will use this comprehensive guide to persistent memory programming. You will understand how persistent memory brings together several new software/hardware requirements and offers great promise for better performance and faster application startup times—a huge leap forward in byte-addressable capacity compared with current DRAM offerings. This revolutionary new technology gives applications significant performance and capacity improvements over existing technologies. It requires a new way of thinking and developing, which makes this highly disruptive to the IT/computing industry. The full spectrum of industry sectors that will benefit from this technology includes, but is not limited to, in-memory and traditional databases, AI, analytics, HPC, virtualization, and big data. Programming Persistent Memory describes the technology and why it is exciting the industry. It covers the operating system and hardware requirements as well as how to create development environments using emulated or real persistent memory hardware. The book explains fundamental concepts; provides an introduction to persistent memory programming APIs for C, C++, JavaScript, and other languages; discusses RDMA with persistent memory; reviews security features; and presents many examples. Source code and examples that you can run on your own systems are included.
What You’ll Learn:
- Understand what persistent memory is, what it does, and the value it brings to the industry
- Become familiar with the operating system and hardware requirements to use persistent memory
- Know the fundamentals of persistent memory programming: why it is different from current programming methods, and what developers need to keep in mind when programming for persistence
- Look at persistent memory application development by example using the Persistent Memory Development Kit (PMDK)
- Design and optimize data structures for persistent memory
- Study how real-world applications are modified to leverage persistent memory
- Utilize the tools available for persistent memory programming, application performance profiling, and debugging
Who This Book Is For:
C, C++, Java, and Python developers, but the book will also be useful to software, cloud, and hardware architects across a broad spectrum of sectors, including cloud service providers, independent software vendors, high performance compute, artificial intelligence, data analytics, big data, etc.
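The emulated development environments the book mentions can be approximated in spirit (a hypothetical sketch of mine, not from the book, which uses PMDK in C) by memory-mapping an ordinary file and flushing explicitly, the rough analogue of persisting a byte-addressable store; the file name and record layout are invented.

```python
import mmap
import os
import struct

# Toy sketch: emulate a persistent-memory region with a memory-mapped
# file. Stores are byte-addressable; flush() plays the role that an
# explicit persist step plays on real persistent memory.
PATH = "pmem_demo.bin"
SIZE = 4096

with open(PATH, "wb") as f:
    f.write(b"\x00" * SIZE)             # back the region with a file

with open(PATH, "r+b") as f:
    pm = mmap.mmap(f.fileno(), SIZE)
    pm[0:8] = struct.pack("<q", 42)     # store a 64-bit value in place
    pm.flush()                          # make the store durable
    pm.close()

with open(PATH, "rb") as f:             # reopen: the value survived
    print(struct.unpack("<q", f.read(8))[0])  # -> 42
os.remove(PATH)
```

Real persistent memory removes the file-system indirection and requires care about store ordering and cache flushing, which is exactly the territory the book's PMDK examples cover.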

    Unravelling beta cell destruction in type 1 diabetes

    Type 1 diabetes (T1D) results from the immune-mediated destruction of the insulin-producing beta cells. Genetic predisposition, impaired immune regulation, and beta cell (dys)function all contribute to disease initiation and progression. A critical gap in our knowledge is what causes the break in peripheral tolerance that eventually leads to beta cell destruction. We propose that neoepitopes generated by dysfunctional beta cells activate immune surveillance, causing beta cell autoimmunity. ER stress, imposed both by intrinsic beta cell physiology and by external secondary triggers, seems to be a crucial component in this process. Understanding the molecular mechanisms underlying beta cell dysfunction and neoantigen generation is critical to identifying clinically relevant neoepitopes. This subsequently provides more insight into the disease dynamics and contributes to translational research in the development of biomarker assays and of therapeutic strategies targeting autoreactive T-cells and beta cell function. Our task will be to restore the balance between immune reactivity and beta cell function, in order to prevent, treat, or cure type 1 diabetes.

    The Decline of Democracy: How the State Uses Control of Food Production to Undermine Free Society

    This work explores the underlying dynamics of democracies in the context of underdevelopment, arguing that when society has not attained a substantial degree of economic independence from the state, democratic quality and stability are undermined. Economic underdevelopment and political oppression are mutually reinforcing, and both are rooted in the structure of the agriculture sector, the distribution of land, and the rural societies that emerge around this order. These systems produce persistent power imbalances that militate toward their continuance, encourage dependency, and foster the development of neopatrimonialism and corruption in the government, thereby weakening key pillars of democracy such as accountability and representativeness. Through historical analysis of a single case study, this dissertation demonstrates that while this is partly a result of actor choices at key points in time, it is highly influenced by structural constraints embedded in earlier time periods. I find that Ghana’s historical development from the colonial era to the present day closely follows this trajectory.

    Accelerating Network Communication and I/O in Scientific High Performance Computing Environments

    High performance computing has become one of the major drivers behind technology inventions and science discoveries. Originally driven by increasing operating frequencies and technology scaling, a recent slowdown in this evolution has led to the development of multi-core architectures, which are supported by accelerator devices such as graphics processing units (GPUs). With the upcoming exascale era, the overall power consumption and the gap between compute capabilities and I/O bandwidth have become major challenges. Nowadays, system performance is dominated by the time spent in communication and I/O, which depends strongly on the capabilities of the network interface. In order to cope with the extreme concurrency and heterogeneity of future systems, the software ecosystem of the interconnect needs to be carefully tuned to excel in reliability, programmability, and usability. This work identifies and addresses three major gaps in today's interconnect software systems. The I/O gap describes the disparity in operating speeds between compute capabilities and secondary storage tiers. The communication gap is introduced by the communication overhead needed to synchronize distributed large-scale applications and by mixed workloads. The last gap is the so-called concurrency gap, introduced by the extreme concurrency of the hardware and the steep learning curve posed to scientific application developers who must exploit its capabilities. The first contribution is the introduction of the network-attached accelerator approach, which moves accelerators into a "stand-alone" cluster connected through the Extoll interconnect. The novel communication architecture enables direct communication between accelerators without any host interaction and an optimal mapping of applications to compute resources. The effectiveness of this approach is evaluated for two classes of accelerators: Intel Xeon Phi coprocessors and NVIDIA GPUs.
The next contribution comprises the design, implementation, and evaluation of support for legacy codes and protocols over the Extoll interconnect technology. By providing TCP/IP protocol support over Extoll, it is shown that the performance benefits of the interconnect can be leveraged by a broader range of applications, including seamless support of legacy codes. The third contribution is twofold. First, a comprehensive analysis of the Lustre networking protocol semantics and interfaces is presented. These insights are then used to map the LNET protocol semantics onto the Extoll networking technology. The result is a fully functional Lustre network driver for Extoll. An initial performance evaluation demonstrates promising bandwidth and message rate results. The last contribution comprises the design, implementation, and evaluation of two easy-to-use load balancing frameworks, which transparently distribute the I/O workload across all available storage system components. The solutions maximize the parallelization and throughput of file I/O. The frameworks are evaluated on the Titan supercomputer for three I/O interfaces. For example, for large-scale application runs, POSIX I/O and MPI-IO can be improved by up to 50% on a per-job basis, while HDF5 shows performance improvements of up to 32%.
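The load-balancing idea in the last contribution, transparently spreading I/O across storage targets, can be sketched in miniature (an assumption of mine, not the dissertation's actual framework) as round-robin placement of writes; the class name and sizes are invented.

```python
from collections import defaultdict

# Toy sketch: assign each write to storage targets round-robin so no
# single target becomes a hotspot.
class RoundRobinPlacer:
    def __init__(self, n_targets):
        self.n = n_targets
        self.next = 0                  # target to use for the next write
        self.load = defaultdict(int)   # bytes assigned per target

    def place(self, nbytes):
        t = self.next
        self.next = (self.next + 1) % self.n
        self.load[t] += nbytes
        return t

placer = RoundRobinPlacer(4)
targets = [placer.place(1 << 20) for _ in range(8)]  # eight 1 MiB writes
print(targets)            # -> [0, 1, 2, 3, 0, 1, 2, 3]
print(dict(placer.load))  # each target received 2 MiB
```

A production framework would additionally account for per-target capacity and current load, but even plain round-robin captures why striping across all components raises aggregate file-I/O throughput.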

    Development of a CRISPR-based gene therapy approach to correct duplications causing Duchenne Muscular Dystrophy

    Duchenne Muscular Dystrophy is a severe neuromuscular disorder caused by deletions, duplications, or point mutations in the DMD gene, which encodes dystrophin. In the absence of dystrophin, muscle fibres degenerate and patients become wheelchair-dependent by their early teens. Cardiac and respiratory muscles are also affected, causing premature death by the third decade of life. Among the approaches currently being tested in clinical trials to treat this disease, none is suitable to permanently restore dystrophin by removing either small or large multi-exon dystrophin duplications, which account for 10-15% of DMD cases. In this thesis, I designed a genome editing approach to correct duplications in the DMD gene using a single CRISPR/Cas9 target site. First, I identified a CRISPR/Cas9 nuclease able to efficiently target DMD intron 9, which would be suitable for gene editing in patients harbouring DMD duplications in the mutational hotspot spanning exons 2-20. Then, I tested both integrating lentiviral particles and nuclear electroporation as tools to deliver and express CRISPR/Cas9 in patient-derived cells carrying different dystrophin duplications. Patient-derived myoblasts allowed me to assess dystrophin restoration at the genomic, transcriptional, and protein levels by means of the T7 assay, quantitative PCR, and Western blot, respectively. I confirmed dystrophin correction in transduced as well as electroporated cells expressing CRISPR/Cas9, and I demonstrated that both constitutive and transient nuclease expression led to a similar extent of protein restoration, of around 50%. These outcomes allowed me to conclude that the CRISPR/Cas9 editing tool is a suitable approach to remove large genomic duplications in vitro. Furthermore, the data presented in this thesis provide the basis for the design of new therapeutic approaches to be tested in vivo in Duchenne Muscular Dystrophy animal models.
These include both in vivo CRISPR/Cas9-mediated gene therapy and cell therapy based on transplantation of ex vivo corrected myoblasts expressing wild-type dystrophin.

    On The Table and Under It: Social Negotiation & Drinking Spaces in Frontier Resource Extraction Communities

    Current research on frontiers describes these spaces as zones of meeting, interaction, dynamism, and change. Further, the geographic, ecological, economic, and political processes inherent within these locales shape them, rendering them far from static. Current scholars of frontier theory have sought to fight the image of frontier spaces as locations needing civilization, which is how they used to be approached. They have also stressed the presence of frontier locales outside of the United States, which was the focus of Frederick Jackson Turner's seminal work. Leonard Thompson and Howard Lamar, two prominent figures in the New West approach to frontier theory, argue that the only effective way to study frontiers is through comparative studies. While comparative studies are common in cultural anthropological research on frontiers in North America, the extant archaeological work has not taken a comparative approach nearly as often. My study takes steps toward reintroducing a comparative approach to frontier archaeology. I examine the way that the actions of frontier inhabitants (including negotiation, conflict, and cohesion) combined with geographic and ecological factors within two specific locations: Smuttynose Island, Maine, and Highland City, Montana. To make the comparison across space and time between these two locations, I analyze them through the framework of informal economy, trade and exchange networks, and the negotiation of social capital through commensal politics. I argue that the inhabitants of frontier settlements interact with the processes at work within frontier zones in such similar ways that it materializes in the archaeological record. I explore tavern assemblages left behind by these frontier inhabitants, with a specific focus on ceramics and glass. Through an examination of the drinking spaces within both settlements, I shed light on the microeconomics of these two locales and of frontier spaces more broadly.