1,311 research outputs found

    Analysis of data processing systems

    Mathematical simulation models and software monitoring of multiprogramming computer systems

    Thoughts on finding the right computer buddy: a moveable feast.

    The burgeoning supernova of medical information is rapidly overtaking the practicing physician's envelope of comprehension. More physicians, by necessity, are turning to automated resources as a means of amplifying the information they need to know while, at the same time, reducing the volume of technical pollution. Computers are capable of being a silent partner at your side as you talk with your patient, ready to cut to the quick and retrieve the latest information for the particular clinical problem at hand. Computers can be considered an extension of the brain. In a sense, they are silicon-based "life" forms. Virtuosity is learned from them as familiarity is gained, the same as becoming acquainted with a human stranger. This article is about one physician's solution to the problem of too much information. It's unabashedly anecdotal, but we hope the reader will glean some hints while navigating through the realms of cyberspace.

    vMCA: Memory Capacity Aggregation and Management in Cloud Environments

    In cloud environments, the VMs within a computing node generate varying memory demand profiles. When a node's memory utilization reaches its limit, costly (virtual) disk accesses and/or VM migrations can occur. Since other nodes may have idle memory, some of these costly operations could be avoided by making that idle memory available to the nodes that need it. In view of this, new architectures have been introduced that provide hardware support for a shared global address space which, together with fast interconnects, allows resources to be shared across nodes; memory thus becomes a global resource. This paper presents vMCA (Virtualized Memory Capacity Aggregation), a memory capacity aggregation mechanism for cloud environments based on Xen's Transcendent Memory (Tmem). vMCA distributes the system's total memory within a single node and globally across multiple nodes using a user-space process with high-level memory management policies. We evaluate vMCA using CloudSuite 3.0 on Linux and Xen. Our results demonstrate a peak running time improvement of 76.8% when aggregating memory, and of 37.5% when aggregating memory and implementing our policies. This research has received funding from the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement number 610456 (Euroserver). The research was also supported by the Ministry of Economy and Competitiveness of Spain (TIN2012-34557 and TIN2015-65316), the HiPEAC Network of Excellence (ICT-287759 and ICT-687698), the FI-DGR Grant Program (2016FI-B-00947) of the Government of Catalonia, and the Severo Ochoa Program (SEV-2011-00067) of the Spanish Government.
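
    The following is a minimal, hypothetical Python sketch of the kind of high-level policy such a memory-capacity aggregation manager could apply: each node reports its installed capacity and current demand, and a user-space manager lends idle memory from nodes with a surplus to nodes under pressure, splitting the global surplus in proportion to each borrower's deficit. The class names, the proportional-share rule, and the megabyte granularity are illustrative assumptions, not vMCA's or Tmem's actual interface.

        from dataclasses import dataclass

        @dataclass
        class Node:
            name: str
            capacity_mb: int   # physical memory installed in the node
            demand_mb: int     # memory its VMs currently want

            @property
            def surplus_mb(self) -> int:
                return max(self.capacity_mb - self.demand_mb, 0)

            @property
            def deficit_mb(self) -> int:
                return max(self.demand_mb - self.capacity_mb, 0)

        def plan_lending(nodes):
            """Match nodes with idle memory to nodes under memory pressure.

            Returns (donor, borrower, mb) grants, splitting the global surplus
            among borrowers in proportion to their deficits (a toy policy).
            """
            donors = [n for n in nodes if n.surplus_mb > 0]
            borrowers = [n for n in nodes if n.deficit_mb > 0]
            total_surplus = sum(n.surplus_mb for n in donors)
            total_deficit = sum(n.deficit_mb for n in borrowers)
            if total_surplus == 0 or total_deficit == 0:
                return []
            grants = []
            for b in borrowers:
                # Proportional share of the surplus, capped at the actual need.
                share = min(b.deficit_mb,
                            total_surplus * b.deficit_mb // total_deficit)
                for d in donors:
                    if share == 0:
                        break
                    give = min(share, d.surplus_mb)
                    if give > 0:
                        grants.append((d.name, b.name, give))
                        d.demand_mb += give   # lent memory is no longer idle
                        share -= give
            return grants

        if __name__ == "__main__":
            cluster = [Node("node0", 64_000, 48_000),
                       Node("node1", 64_000, 80_000),
                       Node("node2", 64_000, 60_000)]
            for donor, borrower, mb in plan_lending(cluster):
                print(f"{donor} lends {mb} MB to {borrower}")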

    Scalable Storage for Digital Libraries

    I propose a storage system optimised for digital libraries. Its key features are heterogeneous scalability; the integration and exploitation of rich semantic metadata associated with digital objects; the use of a name space; and aggressive performance optimisation in the digital library domain.
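
    As a loose illustration of how a name space and per-object metadata might fit together in such a system, the hypothetical Python sketch below maps logical object names to (backend, key) pairs and uses metadata to choose between two storage tiers. All class and method names here are invented for illustration and do not describe the proposed system's actual design.

        class DictBackend:
            """Stand-in for a real storage node or object store."""
            def __init__(self):
                self._blobs = {}
                self._next_key = 0

            def put(self, data):
                self._next_key += 1
                self._blobs[self._next_key] = data
                return self._next_key

            def get(self, key):
                return self._blobs[key]

        class NameSpace:
            """Resolves logical digital-object names to backend locations."""
            def __init__(self, backends):
                self.backends = backends      # e.g. {"fast": ..., "bulk": ...}
                self.catalogue = {}           # name -> (backend_id, key, metadata)

            def place(self, name, data, metadata):
                # Toy placement policy: "hot" or small objects go to the fast
                # tier, everything else to bulk storage.
                backend_id = "fast" if metadata.get("hot") or len(data) < 4096 else "bulk"
                key = self.backends[backend_id].put(data)
                self.catalogue[name] = (backend_id, key, metadata)

            def fetch(self, name):
                backend_id, key, _ = self.catalogue[name]
                return self.backends[backend_id].get(key)

        if __name__ == "__main__":
            ns = NameSpace({"fast": DictBackend(), "bulk": DictBackend()})
            ns.place("journal/vol1/article7.pdf", b"%PDF-", {"hot": True})
            print(ns.fetch("journal/vol1/article7.pdf"))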

    Storage Coalescing

    Typically, when a program executes, it creates objects dynamically and requests storage for those objects from the underlying storage allocator. The pattern of such requests can lead to both internal and external fragmentation. Internal fragmentation occurs when the storage allocator allocates a contiguous block of storage to a program, but the program uses only a fraction of that block to satisfy a request; the unused portion of the block is wasted, since the allocator cannot use it to satisfy a subsequent allocation request. External fragmentation, on the other hand, concerns the chunks of memory that reside between allocated blocks. External fragmentation becomes problematic when these chunks are individually too small to satisfy an allocation request, so they persist as useless holes in the memory system. In this thesis, we present necessary and sufficient storage conditions for satisfying allocation and deallocation sequences of programs that run on systems using a binary-buddy allocator, and we show that these sequences can be serviced without the need for defragmentation. We also explore the effects of buddy coalescing on defragmentation and on overall program performance when using a defragmentation algorithm that implements buddy-system policies. Our approach involves experimenting with Sun's Java Virtual Machine and a buddy-system simulator that embodies our defragmentation algorithm. We examine our algorithm in the presence of two approximate collection strategies, Reference Counting and Contaminated Garbage Collection, and one complete collection strategy, Mark and Sweep Garbage Collection. We analyze how well these approaches manage storage when we alter the coalescing strategy of our simulator. Our analysis indicates that prompt coalescing minimizes defragmentation, while delayed coalescing minimizes the number of coalescing operations, across the three collection approaches.
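
    To make the fragmentation and coalescing terminology concrete, here is a small, self-contained Python sketch of a binary-buddy allocator. Rounding a request up to a power-of-two block is where internal fragmentation appears; free blocks that are individually too small for a request are the external fragmentation; and the prompt_coalescing flag controls whether a freed block is merged with its buddy immediately. This is an illustrative toy under assumed names, not the simulator or algorithm described in the thesis.

        class BuddyAllocator:
            def __init__(self, total_size, min_block=16, prompt_coalescing=True):
                assert total_size & (total_size - 1) == 0, "size must be a power of two"
                self.total_size = total_size
                self.min_block = min_block
                self.prompt = prompt_coalescing
                self.free = {total_size: {0}}   # block size -> set of free offsets
                self.allocated = {}             # offset -> block size

            def _round_up(self, n):
                size = self.min_block
                while size < n:
                    size *= 2
                return size

            def alloc(self, n):
                size = self._round_up(n)        # unused tail = internal fragmentation
                # Smallest free block that is large enough.
                candidates = [s for s in self.free if s >= size and self.free[s]]
                if not candidates:
                    raise MemoryError("no block large enough (external fragmentation)")
                block_size = min(candidates)
                offset = self.free[block_size].pop()
                # Split down to the requested size, keeping the upper halves free.
                while block_size > size:
                    block_size //= 2
                    self.free.setdefault(block_size, set()).add(offset + block_size)
                self.allocated[offset] = size
                return offset

            def free_block(self, offset):
                size = self.allocated.pop(offset)
                if self.prompt:
                    # Prompt coalescing: merge with the buddy as soon as both are free.
                    while size < self.total_size:
                        buddy = offset ^ size
                        if buddy in self.free.get(size, set()):
                            self.free[size].remove(buddy)
                            offset = min(offset, buddy)
                            size *= 2
                        else:
                            break
                # With prompt coalescing disabled, the block simply returns to its
                # free list unmerged; a delayed-coalescing scheme would merge
                # lazily, e.g. only when a larger allocation would otherwise fail.
                self.free.setdefault(size, set()).add(offset)

        if __name__ == "__main__":
            heap = BuddyAllocator(1024, prompt_coalescing=True)
            a = heap.alloc(50)    # rounded up to a 64-byte block
            b = heap.alloc(200)   # rounded up to a 256-byte block
            heap.free_block(a)
            heap.free_block(b)
            # With prompt coalescing, the whole heap merges back into one block.
            print({s: offs for s, offs in heap.free.items() if offs})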