34 research outputs found

    7. Paging. Virtual Memory.

    Get PDF

    Predictive Caching Using the TDAG Algorithm

    Get PDF
    We describe how the TDAG algorithm for learning to predict symbol sequences can be used to design a predictive cache store. A model of a two-level mass storage system is developed and used to calculate the performance of the cache under various conditions. Experimental simulations provide good confirmation of the model.
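
    A minimal sketch of the idea in Python: since the abstract does not reproduce TDAG itself, a simple first-order Markov next-block predictor stands in for the learned sequence model, and the two-level store is reduced to a small LRU cache in front of slow mass storage. All names and the reference trace below are invented for illustration.

        from collections import Counter, defaultdict, OrderedDict

        class NextBlockPredictor:
            """First-order Markov stand-in for the TDAG sequence learner:
            it only counts which block tends to follow which."""
            def __init__(self):
                self.counts = defaultdict(Counter)
                self.prev = None

            def observe(self, block):
                if self.prev is not None:
                    self.counts[self.prev][block] += 1
                self.prev = block

            def predict(self):
                if self.prev is None or not self.counts[self.prev]:
                    return None
                return self.counts[self.prev].most_common(1)[0][0]

        class PredictiveCache:
            """Small LRU cache in front of slow mass storage; after each access
            the predictor's best guess for the next block is prefetched."""
            def __init__(self, capacity):
                self.capacity = capacity
                self.cache = OrderedDict()           # block id -> None, in LRU order
                self.predictor = NextBlockPredictor()
                self.hits = self.misses = 0

            def _load(self, block):
                if block in self.cache:
                    self.cache.move_to_end(block)
                    return
                if len(self.cache) >= self.capacity:
                    self.cache.popitem(last=False)   # evict least recently used block
                self.cache[block] = None

            def access(self, block):
                if block in self.cache:
                    self.hits += 1
                    self.cache.move_to_end(block)
                else:
                    self.misses += 1
                    self._load(block)                # demand fetch from mass storage
                self.predictor.observe(block)
                guess = self.predictor.predict()
                if guess is not None and guess != block:
                    self._load(guess)                # speculative prefetch

        # A repeating reference pattern is learned after the first pass,
        # so later accesses mostly hit in the cache.
        cache = PredictiveCache(capacity=4)
        for block in [1, 2, 3, 4, 5, 6] * 20:
            cache.access(block)
        print("hits:", cache.hits, "misses:", cache.misses)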

    The transputer virtual memory system

    Get PDF
    Thesis (MIng.)--Stellenbosch University, 1990. ENGLISH ABSTRACT: The transputer virtual memory system provides, for the transputer without memory management primitives, a viable virtual memory system. This report evaluates the architecture and its parameters. The basic software is also implemented and described. The disk subsystem, with software and hardware, is also evaluated in a single-disk environment. It is shown that the unique features of the TVM system have advantages and disadvantages when compared to conventional virtual memory systems. One of the advantages is that a conventional operating system with memory protection can now also be implemented on the transputer. The main conclusion is that this is a performance-effective implementation of a virtual memory system with unique features that should be exploited further. AFRIKAANSE OPSOMMING: The transputer virtual memory provides, for a processor without virtual memory support, an effective virtual memory system. The report evaluates the architecture and its parameters. The disk subsystem, with software and hardware, is also evaluated in a single disk-interface environment. It is shown that the unique features of the TVM (transputer virtual memory) system have advantages and disadvantages when compared to conventional virtual memory systems. One of the advantages is that a conventional operating system with memory protection can now be implemented on a transputer. The main disadvantage, due to the specific architecture, is only a 15% degradation in performance. This is, however, only experienced over a certain data size and typically does not arise when very large programs are run.

    Virtual memory

    Get PDF
    Virtual memory was conceived as a way to automate overlaying of program segments. Modern computers have very large main memories, but need automatic solutions to the relocation and protection problems. Virtual memory serves this need as well and is thus useful in computers of all sizes. The history of the idea is traced, showing how it has become a widespread, little noticed feature of computers today.

    An accurate prefetching policy for object oriented systems

    Get PDF
    PhD Thesis. In the latest high-performance computers, there is a growing requirement for accurate prefetching (AP) methodologies for advanced object management schemes in virtual memory and migration systems. The major issue in achieving this goal is finding a simple way of accurately predicting the objects that will be referenced in the near future and grouping them so that they can be fetched at the same time. The basic notion of AP involves building relationships for logically grouping related objects and prefetching them, rather than relying on physical grouping and demand fetching as is done in existing restructuring or grouping schemes. By this, AP tries to overcome some of the shortcomings posed by physical grouping methods. Prefetching also makes use of the properties of object-oriented languages to build inter- and intra-object relationships as a means of logical grouping. This thesis describes how these relationships can be established at compile time and how they can be used for accurate object prefetching in virtual memory systems. In addition, AP performs control flow and data dependency analysis to reinforce the relationships and to find the dependencies of a program. The user program is decomposed into prefetching blocks which contain all the information needed for block prefetching, such as long branches and function calls at major branch points. The proposed prefetching scheme is implemented by extending a C++ compiler and evaluated on a virtual memory simulator. The results show a significant reduction both in the number of page faults and in memory pollution. In particular, AP can suppress many page faults that occur during transition phases and that are unmanageable by other fetching policies. AP can be applied to local and distributed virtual memory systems to reduce the fault rate by fetching groups of objects at the same time and consequently lessening operating system overheads. British Council.
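
    A minimal sketch of the grouping idea, assuming (since the abstract does not spell out the compiler analysis) that the analysis ultimately yields a table mapping an entry object to its prefetching block; touching the entry object then fetches the whole group's pages in one operation instead of faulting page by page. Object names, the page layout and the grouping table below are invented for illustration.

        # Illustrative only: the thesis derives the groups from compile-time
        # analysis of a C++ program; here they are simply written down.

        PAGE_OF = {                        # object -> page it lives on (hypothetical layout)
            "order": 1, "customer": 2, "address": 2, "items": 3, "invoice": 4,
        }

        PREFETCH_GROUP = {                 # a "prefetching block", keyed by its entry object
            "order": ["customer", "address", "items"],
        }

        class VirtualMemory:
            """Counts blocking page fetches; a grouped fetch brings several
            pages in with a single operation."""
            def __init__(self):
                self.resident = set()
                self.fetches = 0

            def fetch(self, pages):
                missing = set(pages) - self.resident
                if missing:
                    self.fetches += 1      # one round trip to the backing store
                    self.resident |= missing

        def access(vm, obj, grouped):
            wanted = [obj] + (PREFETCH_GROUP.get(obj, []) if grouped else [])
            vm.fetch(PAGE_OF[o] for o in wanted)

        demand, prefetching = VirtualMemory(), VirtualMemory()
        for obj in ["order", "customer", "address", "items", "invoice"]:
            access(demand, obj, grouped=False)
            access(prefetching, obj, grouped=True)
        print("demand fetches:", demand.fetches, "grouped fetches:", prefetching.fetches)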

    A study of memory references in a data flow environment

    Get PDF

    C-MOS array design techniques: SUMC multiprocessor system study

    Get PDF
    The current capabilities of LSI techniques for speed and reliability, plus the possibilities of assembling large configurations of LSI logic and storage elements, have demanded the study of multiprocessors and multiprocessing techniques, problems, and potentialities. Evaluated are three previous systems studies for a space ultrareliable modular computer multiprocessing system, and a new multiprocessing system is proposed that is flexibly configured with up to four central processors, four I/O processors, and 16 main memory units, plus auxiliary memory and peripheral devices. This multiprocessor system features a multilevel interrupt, qualified S/360 compatibility for ground-based generation of programs, virtual memory management of a storage hierarchy through I/O processors, and multiport access to multiple and shared memory units.

    MUSTACHE: Multi-Step-Ahead Predictions for Cache Eviction

    Full text link
    In this work, we propose MUSTACHE, a new page cache replacement algorithm whose logic is learned from observed memory access requests rather than fixed like existing policies. We formulate the page request prediction problem as a categorical time series forecasting task. Then, our method queries the learned page request forecaster to obtain the next k predicted page memory references to better approximate the optimal Bélády's replacement algorithm. We implement several forecasting techniques using advanced deep learning architectures and integrate the best-performing one into an existing open-source cache simulator. Experiments run on benchmark datasets show that MUSTACHE outperforms the best page replacement heuristic (i.e., exact LRU), improving the cache hit ratio by 1.9% and reducing the number of reads/writes required to handle cache misses by 18.4% and 10.3%, respectively.
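
    A sketch of just the eviction rule described above: ask a forecaster for the next k predicted page references and evict the resident page whose predicted next use is furthest away, i.e. Bélády's rule applied to forecasted rather than true future requests. The deep-learning forecasters and the open-source simulator used in the paper are not reproduced; the oracle forecaster below is an invented stand-in for the demo.

        def belady_style_victim(resident, predicted_refs):
            """Evict the page whose next predicted use is furthest in the
            future (or is never predicted again)."""
            def next_use(page):
                try:
                    return predicted_refs.index(page)   # sooner = smaller index
                except ValueError:
                    return float("inf")
            return max(resident, key=next_use)

        class ForecastingCache:
            def __init__(self, capacity, forecaster):
                self.capacity = capacity
                self.pages = set()
                self.forecaster = forecaster    # returns the next-k predicted page ids
                self.hits = self.misses = 0

            def access(self, page):
                if page in self.pages:
                    self.hits += 1
                    return
                self.misses += 1
                if len(self.pages) >= self.capacity:
                    self.pages.discard(belady_style_victim(self.pages, self.forecaster()))
                self.pages.add(page)

        # Demo with a perfect forecaster (it reads the trace directly), so the
        # cache behaves exactly like Belady's offline algorithm would.
        trace = [1, 2, 3, 1, 2, 4, 1, 2, 5, 1, 2, 3]
        pos = 0
        cache = ForecastingCache(capacity=3, forecaster=lambda: trace[pos + 1:pos + 7])
        for pos, page in enumerate(trace):
            cache.access(page)
        print("hits:", cache.hits, "misses:", cache.misses)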

    Prefetching techniques for client server object-oriented database systems

    Get PDF
    The performance of many object-oriented database applications suffers from page fetch latency, which is determined by the expense of disk access. In this work we suggest several prefetching techniques to avoid, or at least to reduce, page fetch latency. In practice no prediction technique is perfect and no prefetching technique can entirely eliminate the delay due to page fetch latency. Therefore we are interested in the trade-off between the level of accuracy required for obtaining good results in terms of elapsed time reduction and the processing overhead needed to achieve this level of accuracy. If prefetching accuracy is high, the total elapsed time of an application can be reduced significantly; if prefetching accuracy is low, many incorrect pages are prefetched and the extra load on the client, network, server and disks degrades overall system performance. Access patterns of object-oriented databases are often complex and usually hard to predict accurately. The ..
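
    A back-of-envelope model of that trade-off: correct prefetches hide a fraction of the page fetch latency equal to the prediction accuracy, while every access pays a fixed prediction and prefetch overhead, so low accuracy ends up slower than plain demand fetching. All timing constants below are invented for illustration and are not taken from the paper.

        # Rough cost model of the accuracy/overhead trade-off; every constant
        # is illustrative, none comes from the paper.

        def elapsed_ms(page_fetches, accuracy,
                       fetch_latency_ms=10.0,     # cost of a demand page fetch
                       prefetch_cost_ms=2.0,      # client/network/server/disk load per prefetch
                       prediction_cost_ms=0.5):   # cost of running the predictor per access
            """Fetches hidden by correct prefetches cost nothing extra; the rest
            pay full latency, and every prefetch issued adds overhead."""
            demand = page_fetches * (1 - accuracy) * fetch_latency_ms
            overhead = page_fetches * (prefetch_cost_ms + prediction_cost_ms)
            return demand + overhead

        baseline = 1000 * 10.0                    # pure demand fetching, no prefetching
        for acc in (0.2, 0.5, 0.8, 0.95):
            print(f"accuracy {acc:.2f}: {elapsed_ms(1000, acc):7.0f} ms"
                  f" (demand-only baseline {baseline:.0f} ms)")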

    Major Trends in Operating Systems Development

    Get PDF
    Operating systems have changed in nature in response to demands of users, and in response to advances in hardware and software technology. The purpose of this paper is to trace the development of major themes in operating system design from their beginnings through the present. This is not an exhaustive history of operating systems, but instead is intended to give the reader the flavor of the different periods in operating systems' development. To this end, the paper will be organized by topic in approximate order of development. Each chapter will start with an introduction to the factors behind the rise of the period. This will be followed by a survey of the state-of-the-art systems, and the conditions influencing them. The chapters close with a summation of the significant hardware and software contributions from the period.