272 research outputs found

    Object Distribution Networks for World-wide Document Circulation

    This paper presents an Object Distribution System (ODS), a distributed system inspired by the ultra-large-scale distribution models used in everyday life (e.g. food or newspaper distribution chains). Beyond traditional mechanisms for bringing information closer to readers (e.g. caching and mirroring), the system supports the publication of, classification of, and subscription to volumes of objects (e.g. documents, events). Authors submit their contents to publication agents. Classification authorities provide classification schemes used to classify objects. Readers subscribe to topics or authors and retrieve contents from their local delivery agent (akin to a kiosk or library, holding local copies of objects). Object distribution is an independent process in which objects circulate asynchronously among distribution agents. ODS is designed to perform especially well in an increasingly populated, widespread and complex Internet jungle, relying on weak-consistency replication through object distribution, asynchronous replication, and local access to objects by clients. ODS is based on two independent virtual networks: one dedicated to the distribution (replication) of objects, and the other to computing the optimised distribution chains applied by the first network.
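
    As a rough illustration of the publish/classify/subscribe flow described above, the Python sketch below models publication agents pushing objects to local delivery agents from which readers retrieve local copies; the class and method names are illustrative assumptions, not the actual ODS interfaces.

        # Illustrative sketch of the ODS publication/subscription flow; class and
        # method names are assumptions, not the actual ODS interfaces.
        from collections import defaultdict

        class DeliveryAgent:
            """Local agent (kiosk/library) holding replicated copies of objects."""
            def __init__(self):
                self.store = defaultdict(list)       # topic -> locally replicated objects
                self.subscribers = defaultdict(set)  # topic -> reader ids

            def subscribe(self, reader_id, topic):
                self.subscribers[topic].add(reader_id)

            def receive(self, topic, obj):
                # Objects arrive asynchronously from other distribution agents.
                self.store[topic].append(obj)

            def read(self, reader_id, topic):
                # Readers always work against the local copies (weak consistency).
                return list(self.store[topic]) if reader_id in self.subscribers[topic] else []

        class PublicationAgent:
            def __init__(self, delivery_agents):
                self.delivery_agents = delivery_agents

            def publish(self, topic, obj):
                # The real distribution process is asynchronous and chain-optimised;
                # it is reduced here to a direct push to every delivery agent.
                for agent in self.delivery_agents:
                    agent.receive(topic, obj)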

    Exascale Co-Design Center for Materials in Extreme Environments (ExMatEx) Annual Report - Year 2


    A Message-Passing, Thread-Migrating Operating System for a Non-Cache-Coherent Many-Core Architecture

    The difference between emerging many-core architectures and their multi-core predecessors goes beyond just the number of cores incorporated on a chip. Current technologies for maintaining cache coherency do not scale beyond a few dozen cores, and the lack of coherency presents a new paradigm for software developers to work with. While shared-memory multithreading has been a viable and popular programming technique for multi-cores, the distributed nature of many-cores is more amenable to a model of share-nothing, message-passing threads. This model places different demands on a many-core operating system, and this thesis aims to understand and accommodate those demands. We introduce Xipx, a port of the lightweight Embedded Xinu operating system to the many-core Intel Single-chip Cloud Computer (SCC). The SCC is a 48-core x86 architecture that lacks cache coherency. It features a fast mesh network-on-chip (NoC) and on-die message-passing buffers to facilitate communication between cores. Running as a separate instance per core, Xipx takes advantage of this hardware in its implementation of a message-passing device. The device multiplexes the message-passing hardware, allowing multiple concurrent threads to share it without interfering with one another. Xipx also features a limited framework for transparent thread migration, an achievement that required fundamental modifications to the kernel, including the incorporation of a new type of thread. Additionally, a minimalistic framework for bare-metal development on the SCC has been produced as a pragmatic offshoot of the work on Xipx. This thesis discusses the design and implementation of the many-core extensions described above. Xipx serves as a foundation for continued research on many-core operating systems, and test results show good performance from both message passing and thread migration, suggesting that, as it stands, Xipx is also an effective platform for exploring many-core development at the application level.
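
    The Python sketch below shows, under loose assumptions, the general idea of multiplexing one shared message buffer among many threads by tagging each message with a destination thread id and draining it into per-thread mailboxes; none of the names below reflect the Xipx or Embedded Xinu code.

        # Hypothetical sketch of multiplexing a single shared message buffer among
        # threads; this is not Xipx/Xinu code, only an illustration of the idea.
        import queue
        import threading

        class MessagePassingDevice:
            def __init__(self):
                self._shared_buffer = queue.Queue()  # stands in for the on-die buffer
                self._mailboxes = {}                 # thread id -> private inbox
                self._lock = threading.Lock()

            def register(self, tid):
                with self._lock:
                    self._mailboxes[tid] = queue.Queue()

            def send(self, dst_tid, payload):
                # Senders share the one buffer without interfering because every
                # message carries the id of the destination thread.
                self._shared_buffer.put((dst_tid, payload))

            def _dispatch(self):
                # Drain the shared buffer into per-thread mailboxes.
                while not self._shared_buffer.empty():
                    dst_tid, payload = self._shared_buffer.get()
                    self._mailboxes[dst_tid].put(payload)

            def receive(self, tid):
                self._dispatch()
                return self._mailboxes[tid].get()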

    Maintaining Mutual Consistency for Cached Web Objects

    Existing web proxy caches employ cache consistency mechanisms to ensure that locally cached data is consistent with that at the server. In this paper, we argue that techniques for maintaining consistency of individual objects are not sufficient: a proxy should employ additional mechanisms to ensure that related web objects are mutually consistent with one another. We formally define the notion of mutual consistency and the semantics that a mutual consistency mechanism provides to end-users. We then present techniques for maintaining mutual consistency in the temporal and value domains. A novel aspect of our techniques is that they can adapt to variations in the rate of change of the source data, resulting in judicious use of proxy and network resources. We evaluate our approaches using real-world web traces and show that (i) careful tuning can yield substantial savings in network overhead without appreciable loss in the fidelity of the consistency guarantees, and (ii) the incremental cost of providing mutual consistency guarantees over mechanisms that provide individual consistency guarantees is small.
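
    As a loose illustration of temporal-domain mutual consistency, the sketch below serves two related cached objects together only when their server versions lie within a time bound of each other; the bound DELTA and the fetch_from_origin helper are hypothetical placeholders, not part of the paper's mechanism.

        # Rough sketch of a temporal-domain mutual consistency check at a proxy;
        # DELTA and fetch_from_origin are hypothetical placeholders.
        DELTA = 30.0  # seconds; in practice tuned to the rate of change of the data

        cache = {}    # url -> (payload, server_timestamp)

        def get_related(url_a, url_b, fetch_from_origin):
            a, b = cache.get(url_a), cache.get(url_b)
            if a is None or b is None or abs(a[1] - b[1]) > DELTA:
                # The cached versions are missing or too far apart in time, so both
                # are revalidated to reflect (approximately) the same server state.
                a = cache[url_a] = fetch_from_origin(url_a)
                b = cache[url_b] = fetch_from_origin(url_b)
            return a[0], b[0]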

    Scalable techniques for memory-efficient CDN simulations


    Quality of Service Issues in Internet Web Services

    Editorial for the special section on "Quality of Service Issues in Internet Web Services".

    Method-based caching in multi-tiered server applications

    In recent years, application server technology has become very popular for building complex but mission-critical systems such as Web-based e-commerce applications. However, the resulting solutions tend to suffer from serious performance and scalability bottlenecks because of their distributed nature and their various software layers. This paper addresses the problem with an approach for transparently caching the results of a service interface's read-only methods on the client side. Cache consistency is provided by a descriptive cache invalidation model that may be specified by an application programmer. Because the cache layer is transparent to the server as well as to the client code, it can be integrated with relatively low effort even into systems that have already been implemented. Experimental results show that the approach is very effective in improving a server's response times and its transactional throughput. Roughly speaking, the overhead of cache maintenance is small compared to the cost of method invocations on the server side, and the cache's performance improvements are dominated by the fraction of read method invocations and the cache hit rate. Our experiments are based on a realistic e-commerce Web site scenario in which site user behaviour is emulated in an authentic way. By inserting our cache, the maximum user request throughput of the web application could be more than doubled while its response time (as perceived by a web client) was kept at a very low level. Moreover, the cache can be smoothly integrated with traditional caching strategies acting on other system tiers (e.g. caching of dynamic Web pages on a Web server). The presented approach, as well as the related implementation, is not restricted to application server scenarios but may be applied to any kind of interface-based software layer.
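
    A simplified Python sketch of the idea follows, assuming a declarative invalidation map that pairs write methods with the read-only methods whose cached results they invalidate; the names below are illustrative and do not come from the paper's implementation.

        # Simplified sketch of client-side caching of read-only method results with
        # a declarative invalidation model; illustrative only, not the paper's system.
        class CachingProxy:
            def __init__(self, service, read_methods, invalidation_map):
                self._service = service
                self._read_methods = set(read_methods)
                self._invalidation_map = invalidation_map  # write method -> read methods
                self._cache = {}                           # (method, args) -> result

            def call(self, method, *args):
                if method in self._read_methods:
                    key = (method, args)
                    if key not in self._cache:             # miss: invoke the server once
                        self._cache[key] = getattr(self._service, method)(*args)
                    return self._cache[key]
                # Write method: forward it, then drop the cached reads it is declared
                # to invalidate.
                result = getattr(self._service, method)(*args)
                for stale in self._invalidation_map.get(method, ()):
                    self._cache = {k: v for k, v in self._cache.items() if k[0] != stale}
                return result

        # Example: cached getPrice results are discarded whenever updatePrice runs.
        # proxy = CachingProxy(shop_service, ["getPrice"], {"updatePrice": ["getPrice"]})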