
    MultiLibOS: an OS architecture for cloud computing

    Cloud computing is resulting in fundamental changes to computing infrastructure, yet these changes have not resulted in corresponding changes to operating systems. In this paper we discuss some key changes we see in the computing infrastructure and applications of IaaS systems. We argue that these changes enable and demand a very different model of operating system. We then describe the MultiLibOS architecture we are exploring and how it helps exploit the scale and elasticity of integrated systems while still allowing legacy software to run on traditional OSes.

    Fast and Lean Immutable Multi-Maps on the JVM based on Heterogeneous Hash-Array Mapped Tries

    An immutable multi-map is a many-to-many thread-friendly map data structure with expected fast insert and lookup operations. This data structure is used for applications processing graphs or many-to-many relations, as applied in static analysis of object-oriented systems. When processing such big data sets, the memory overhead of the data structure encoding itself becomes a memory usage bottleneck. Motivated by reuse and type-safety, libraries for Java, Scala and Clojure typically implement immutable multi-maps by nesting sets as the values within the keys of a trie map. Based on our measurements, the expected byte overhead of this approach for a sparse multi-map adds up to around 65B per stored entry, which renders it infeasible to compute with effectively on the JVM. In this paper we propose a general framework for Hash-Array Mapped Tries on the JVM which can store type-heterogeneous keys and values: a Heterogeneous Hash-Array Mapped Trie (HHAMT). Among other applications, this allows for a highly efficient multi-map encoding by (a) not reserving space for empty value sets and (b) inlining the values of singleton sets, while maintaining a (c) type-safe API. We detail the necessary encoding and optimizations to mitigate the overhead of storing and retrieving heterogeneous data in a hash-trie. Furthermore, we evaluate HHAMT specifically for the application to multi-maps, comparing it to state-of-the-art encodings of multi-maps in Java, Scala and Clojure. We isolate key differences using microbenchmarks and validate the resulting conclusions on a real-world case in static analysis. The new encoding brings the per key-value storage overhead down to 30B: a 2x improvement. With additional inlining of primitive values it reaches a 4x improvement.
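    The optimizations (a) and (b) above can be illustrated with a minimal sketch. The following is not the HHAMT or its JVM encoding; it is a hypothetical Python class using a plain dict in place of the hash-trie, purely to show the heterogeneous value slot: a key maps either directly to a single value (inlined singleton) or to a frozenset once it holds two or more values, and empty sets are never stored.

    ```python
    # Sketch of the singleton-inlining idea behind the HHAMT multi-map
    # encoding (assumed illustration, not the paper's implementation).
    # A dict stands in for the hash-array mapped trie; the point is that
    # the value slot is type-heterogeneous: either one value, or a set.
    class InliningMultiMap:
        def __init__(self):
            self._data = {}  # key -> value (singleton) | frozenset (2+ values)

        def insert(self, key, value):
            cur = self._data.get(key)
            if cur is None:
                self._data[key] = value                 # (b) inline singleton
            elif isinstance(cur, frozenset):
                self._data[key] = cur | {value}
            elif cur != value:
                self._data[key] = frozenset({cur, value})

        def get(self, key):
            cur = self._data.get(key)
            if cur is None:
                return frozenset()                      # (a) empty sets never stored
            if isinstance(cur, frozenset):
                return cur
            return frozenset({cur})                     # re-wrap inlined singleton

    m = InliningMultiMap()
    m.insert("a", 1)
    m.insert("a", 2)
    m.insert("b", 3)
    print(m.get("a"))  # both values stored under "a"
    print(m.get("b"))  # singleton, stored without a set wrapper
    ```

    In a real heterogeneous trie the singleton/set distinction is tracked in the node's bitmap rather than by runtime type checks, which is what keeps lookups fast while saving the per-entry set overhead.
    
    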

    Stewardship of the evolving scholarly record: from the invisible hand to conscious coordination

    The scholarly record is increasingly digital and networked, while at the same time expanding in both the volume and diversity of the material it contains. The long-term future of the scholarly record cannot be effectively secured with traditional stewardship models developed for print materials. This report describes the key features of future stewardship models adapted to the characteristics of a digital, networked scholarly record, and discusses some practical implications of implementing these models. Key highlights include:
    - As the scholarly record continues to evolve, conscious coordination will become an important organizing principle for stewardship models.
    - Past stewardship models were built on an "invisible hand" approach that relied on the uncoordinated, institution-scale efforts of individual academic libraries acting autonomously to maintain local collections.
    - Future stewardship of the evolving scholarly record requires conscious coordination of context, commitments, specialization, and reciprocity.
    - With conscious coordination, local stewardship efforts leverage scale by collecting more of less.
    - Keys to conscious coordination include right-scaling consolidation, cooperation, and community mix.
    - Reducing transaction costs and building trust facilitate conscious coordination.
    - Incentives to participate in cooperative stewardship activities should be linked to broader institutional priorities.
    Traditional "invisible hand" strategies are no longer suitable for collecting, organizing, making available, and preserving the outputs of scholarly inquiry. Conscious coordination instead calls for stewardship strategies that incorporate a broader awareness of the system-wide stewardship context; declarations of explicit commitments around portions of the local collection; formal divisions of labor within cooperative arrangements; and robust networks for reciprocal access. Stewardship strategies based on conscious coordination accelerate an already perceptible transition away from relatively autonomous local collections toward ones built on networks of cooperation across many organizations, within and outside the traditional cultural heritage community.

    cphVB: A System for Automated Runtime Optimization and Parallelization of Vectorized Applications

    Modern processor architectures, in addition to having ever more cores, also require ever more attention to memory layout in order to run at full capacity. The usefulness of most high-level languages is diminishing, as their abstractions, structures or objects are hard to map onto modern processor architectures efficiently. The work in this paper introduces a new abstract machine framework, cphVB, that enables vector-oriented high-level programming languages to map onto a broad range of architectures efficiently. The idea is to close the gap between high-level languages and hardware-optimized low-level implementations. By translating high-level vector operations into an intermediate vector bytecode, cphVB enables specialized vector engines to execute the vector operations efficiently. The primary success parameters are to maintain a complete abstraction from low-level details and to provide efficient code execution across different modern processors. We evaluate the presented design through a setup that targets multi-core CPU architectures. We evaluate the performance of the implementation using Python implementations of well-known algorithms: a Jacobi solver, a kNN search, a shallow water simulation and a synthetic stencil simulation. All demonstrate good performance.
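    The programming style cphVB targets can be illustrated with a small example. The sketch below is plain NumPy, one of the Jacobi-style stencil benchmarks mentioned in the abstract; it is not cphVB itself, but it shows the kind of whole-array vector operations that such a system could intercept and translate into its intermediate vector bytecode for a hardware-specific vector engine.

    ```python
    # Jacobi iteration written with whole-array (vector) operations.
    # Each update is expressed as bulk array slicing and arithmetic --
    # exactly the operations a vector engine could capture as bytecode
    # and fuse, instead of executing them eagerly element by element.
    import numpy as np

    def jacobi(grid, iterations=100):
        g = grid.copy()
        for _ in range(iterations):
            # New interior value = average of the four neighbours;
            # the boundary rows/columns stay fixed.
            g[1:-1, 1:-1] = 0.25 * (g[:-2, 1:-1] + g[2:, 1:-1] +
                                    g[1:-1, :-2] + g[1:-1, 2:])
        return g

    grid = np.zeros((6, 6))
    grid[0, :] = 1.0               # hot top boundary, other edges cold
    result = jacobi(grid, iterations=200)
    print(result[1, 1])            # interior heat settles between 0 and 1
    ```

    Because no loop body touches individual elements, the high-level program stays fully abstracted from memory layout and parallelization, which is the separation of concerns the paper argues for.
    
    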