
    Analysis of data processing systems

    Mathematical simulation models and software monitoring of multiprogramming computer systems

    Finite size scaling approach to dynamic storage allocation problem

    It is demonstrated how dynamic storage allocation algorithms can be analyzed in terms of finite size scaling. The method is illustrated in the three simple cases of the first-fit, next-fit, and best-fit algorithms, with the system working at full capacity. The analysis is done from two different points of view: running speed and employed memory. In both cases, and for all three algorithms, it is shown that a simple scaling function exists, and the relevant exponents are calculated. The method can be applied to similar problems as well. Comment: 9 pages, 4 figures, will appear in Physica
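
    To make the three placement policies concrete, here is a minimal Python sketch of first-fit, next-fit, and best-fit searches over a free list, assumed here to be a list of (offset, size) holes; splitting holes on allocation and coalescing on free, which a real allocator needs, are omitted.

        def first_fit(holes, request):
            # Scan from the front; take the first hole that is big enough.
            for off, size in holes:
                if size >= request:
                    return off
            return None

        def next_fit(holes, request, start):
            # Like first-fit, but resume scanning where the last search ended.
            n = len(holes)
            for k in range(n):
                i = (start + k) % n
                off, size = holes[i]
                if size >= request:
                    return off, i      # new scan position for the next call
            return None, start

        def best_fit(holes, request):
            # Take the smallest hole that still fits the request.
            fitting = [(size, off) for off, size in holes if size >= request]
            return min(fitting)[1] if fitting else None

        holes = [(0, 4), (10, 16), (30, 8)]
        print(first_fit(holes, 8), best_fit(holes, 8))   # 10 and 30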

    An Efficient Data Structure for Dynamic Two-Dimensional Reconfiguration

    In the presence of dynamic insertions and deletions into a partially reconfigurable FPGA, fragmentation is unavoidable. This poses the challenge of developing efficient approaches to dynamic defragmentation and reallocation. One key aspect is to develop efficient algorithms and data structures that exploit the two-dimensional geometry of a chip, instead of just one dimension. We propose a new method for this task, based on the fractal structure of a quadtree, which allows dynamic segmentation of the chip area, along with dynamic adjustment of the necessary communication infrastructure. We describe a number of algorithmic aspects and present different solutions. We also provide a number of basic simulations indicating that the theoretical worst-case bound may be pessimistic. Comment: 11 pages, 12 figures; full version of extended abstract that appeared in ARCS 201
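
    As an illustration of the quadtree idea (a sketch under simplifying assumptions, not the authors' data structure), the following Python class segments a power-of-two chip area dynamically: each node is a square that is either a free leaf, an occupied leaf, or split into four quadrants. Deallocation, the merging of emptied quadrants, and the communication infrastructure are all omitted.

        class QuadNode:
            # One square region of the chip; side lengths are powers of two.
            def __init__(self, x, y, size):
                self.x, self.y, self.size = x, y, size
                self.occupied = False
                self.children = None            # None while this node is a leaf

            def allocate(self, side):
                # Place a side-by-side module; return its (x, y) corner or None.
                if self.occupied or side > self.size:
                    return None
                if self.children is None:
                    if side > self.size // 2:
                        self.occupied = True    # fits here but not in a quadrant
                        return (self.x, self.y)
                    h = self.size // 2          # split into four quadrants
                    self.children = [QuadNode(self.x,     self.y,     h),
                                     QuadNode(self.x + h, self.y,     h),
                                     QuadNode(self.x,     self.y + h, h),
                                     QuadNode(self.x + h, self.y + h, h)]
                for child in self.children:
                    pos = child.allocate(side)
                    if pos is not None:
                        return pos
                return None

        chip = QuadNode(0, 0, 64)
        print(chip.allocate(16), chip.allocate(32))   # (0, 0) then (32, 0)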

    Storage Coalescing

    Typically, when a program executes, it creates objects dynamically and requests storage for its objects from the underlying storage allocator. The patterns of such requests can potentially lead to internal fragmentation as well as external fragmentation. Internal fragmentation occurs when the storage allocator allocates a contiguous block of storage to a program, but the program uses only a fraction of that block to satisfy a request. The unused portion of that block is wasted, since the allocator cannot use it to satisfy a subsequent allocation request. External fragmentation, on the other hand, concerns chunks of memory that reside between allocated blocks. External fragmentation becomes problematic when these chunks are not large enough to satisfy an allocation request individually. Consequently, these chunks exist as useless holes in the memory system. In this thesis, we present necessary and sufficient storage conditions for satisfying allocation and deallocation sequences for programs that run on systems that use a binary-buddy allocator. We show that these sequences can be serviced without the need for defragmentation. We also explore the effects of buddy-coalescing on defragmentation and on overall program performance when using a defragmentation algorithm that implements buddy system policies. Our approach involves experimenting with Sun’s Java Virtual Machine and a buddy system simulator that embodies our defragmentation algorithm. We examine our algorithm in the presence of two approximate collection strategies, namely Reference Counting and Contaminated Garbage Collection, and one complete collection strategy, namely Mark and Sweep Garbage Collection. We analyze the effectiveness of these approaches with regard to how well they manage storage when we alter the coalescing strategy of our simulator. Our analysis indicates that prompt coalescing minimizes defragmentation and delayed coalescing minimizes the number of coalescing operations across the three collection approaches.
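
    For readers unfamiliar with the binary-buddy mechanics the thesis builds on, here is a toy Python sketch of splitting on allocation and prompt coalescing on free; the heap size and the free-list bookkeeping are illustrative assumptions, not the simulator described above.

        MAX_ORDER = 10                      # heap of 2**10 units (illustrative)
        free_lists = {k: set() for k in range(MAX_ORDER + 1)}
        free_lists[MAX_ORDER].add(0)        # initially one maximal free block

        def buddy_of(offset, order):
            # Buddies of size 2**order differ only in bit `order` of the offset.
            return offset ^ (1 << order)

        def allocate(order):
            # Pop the smallest sufficient block, splitting down to `order`.
            k = order
            while k <= MAX_ORDER and not free_lists[k]:
                k += 1
            if k > MAX_ORDER:
                return None                 # would need defragmentation
            offset = free_lists[k].pop()
            while k > order:                # split, keeping the upper half free
                k -= 1
                free_lists[k].add(offset + (1 << k))
            return offset

        def free_block(offset, order):
            # Prompt coalescing: merge with the buddy as long as it is free.
            while order < MAX_ORDER and buddy_of(offset, order) in free_lists[order]:
                free_lists[order].remove(buddy_of(offset, order))
                offset = min(offset, buddy_of(offset, order))
                order += 1
            free_lists[order].add(offset)

    Delayed coalescing would defer the merging loop in free_block until an allocation fails or a periodic pass runs, trading more defragmentation for fewer coalescing operations, which is the tradeoff the analysis above measures.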

    Heap Defragmentation in Bounded Time

    Knuth’s buddy system is an attractive algorithm for managing storage allocation, and it can be made to operate in real time. However, the issue of defragmentation for heaps that are managed by the buddy system has not been studied. In this paper, we present strong bounds on the amount of storage necessary to avoid defragmentation. We then present an algorithm for defragmenting buddy heaps and present experiments from applying that algorithm to real and synthetic benchmarks. Our algorithm is within a factor of two of optimal in terms of the time required to defragment the heap so as to respond to a single allocation request. Our experiments show our algorithm to be much more efficient than extant defragmentation algorithms.
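
    The characteristic move of a buddy-heap defragmenter can be pictured as relocation: when a free block's buddy is live, the live buddy is copied into some other free block of the same order, freeing a mergeable pair. The Python fragment below sketches that single step under invented bookkeeping; it is an illustration, not the paper's algorithm.

        def merge_by_relocation(free_at_k, live_at_k, order, move):
            # free_at_k / live_at_k: sets of block offsets at this order.
            # move(src, dst) copies a live block's contents; its cost is what
            # a real-time bound must account for. Assumes prompt coalescing
            # has already merged any free buddy pairs.
            for a in list(free_at_k):
                buddy = a ^ (1 << order)        # a's buddy at this order
                if buddy in live_at_k:
                    for b in list(free_at_k):
                        if b != a:              # any other free block will do
                            move(buddy, b)      # relocate the live data
                            live_at_k.remove(buddy)
                            live_at_k.add(b)
                            free_at_k.remove(a)
                            free_at_k.remove(b)
                            return min(a, buddy)   # new 2**(order+1) block
            return None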

    Intelligent Memory Allocation based on Fuzzy Logic

    Based on the Computerized Parkinson’s Law, “work expands so as to fill the time available for its completion” (Thimbleby, 1993), it can be deduced that regardless of the size of the memory, there will always be programs to completely fill, or even overload, that memory. An intelligent memory allocation process is therefore crucial to a system’s performance. However, due to the constant increase of processing power and the growth and spread of distributed systems, such as grid and cloud computing, memory allocation is a great challenge in memory management today. The goal of this research is to make allocation intelligent, so that memory fragmentation and response time are reduced. The research presents Fuzzy Allocator, a memory allocator based on a fuzzy inference system. The allocator sorts the incoming memory requests according to their size and the size of the free memory slots (holes). The output of the fuzzy allocator is the order in which allocation will be performed on the incoming memory requests. It reorders the incoming memory request queue so that response time is reduced and fragmentation is minimized.
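
    The paper's rule base is not reproduced here, but the flavor of fuzzy fit-scoring can be sketched in Python as follows: triangular membership functions grade how "tight" or "loose" a request/hole pairing is, and requests are served in order of their best achievable score. All membership breakpoints and rule weights below are invented for illustration.

        def tri(x, a, b, c):
            # Triangular membership function peaking at b on the interval [a, c].
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def fit_score(request, hole):
            # Degree to which `request` fits `hole`; 0 if it does not fit at all.
            if hole < request:
                return 0.0
            ratio = request / hole              # 1.0 is a perfect fit
            tight = tri(ratio, 0.5, 1.0, 1.5)   # membership in "tight fit"
            loose = tri(ratio, 0.0, 0.25, 0.6)  # membership in "loose fit"
            return 1.0 * tight + 0.3 * loose    # rule weights are invented

        def reorder(requests, holes):
            # Serve the requests with the best achievable fit first.
            return sorted(requests,
                          key=lambda r: max((fit_score(r, h) for h in holes),
                                            default=0.0),
                          reverse=True)

        print(reorder([3, 14, 6], holes=[8, 16]))   # [14, 6, 3]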

    Real-Time Memory Management: Life and Times

    As high integrity real-time systems become increasingly large and complex, forcing a static model of memory usage becomes untenable. The challenge is to provide a dynamic memory model that guarantees tight and bounded time and space requirements without overburdening the developer with memory concerns. This paper provides an analysis of memory management approaches in order to characterise the tradeoffs across three semantic domains: space, time, and memory usage information such as the lifetime of objects. A unified approach to distinguishing the merits of each memory model highlights the relationship across these three domains, thereby identifying the class of applications that benefit from targeting a particular model. Crucially, an initial investigation of this relationship identifies the direction future research must take in order to address the requirements of the next generation of complex embedded systems. Some initial suggestions are made in this regard, and the memory model proposed in the Real-Time Specification for Java is evaluated in this context.

    Storage management in Ada. Three reports. Volume 1: Storage management in Ada as a risk to the development of reliable software. Volume 2: Relevant aspects of language. Volume 3: Requirements of the language versus manifestations of current implementations

    The risk to the development of program reliability is derived from the use of a new language and from the potential use of new storage management techniques. With Ada and associated support software, there is a lack of established guidelines and procedures, drawn from experience and common usage, which assure reliable behavior. The risk is identified and clarified. In order to provide a framework for future consideration of dynamic storage management in Ada, a description of the relevant aspects of the language is presented in two sections: program data sources, and declaration and allocation in Ada. Storage-management characteristics of the Ada language and storage-management characteristics of Ada implementations are differentiated. Terms that are used are defined in a narrow and precise sense. The storage-management implications of the Ada language are described. The storage-management options available to the Ada implementor and the implications of the implementor's choice for the Ada programmer are also described.

    Efficient processor management strategies for multicomputer systems

    Multicomputers are cost-effective alternatives to conventional supercomputers. Contemporary processor management schemes tend to underutilize the processors and leave many of the processors in the system idle while jobs are waiting for execution. Instead of designing faster processors or interconnection networks, a substantial performance improvement can be obtained by implementing better processor management strategies. This dissertation studies the performance issues related to processor management schemes and proposes several ways to enhance multicomputer systems by means of processor management. The proposed schemes incorporate the concepts of size-reduction, non-contiguous allocation, and job migration. Job scheduling using a bypass-queue is also studied. All the proposed schemes are shown, through extensive simulations, to be effective in improving system performance. Each proposed scheme has different implementation costs and constraints. In order to take advantage of these schemes, judicious selection of system parameters is important and is discussed.
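
    As a point of reference for the allocation side, here is a Python sketch of the contiguous first-fit baseline that size-reduction and non-contiguous schemes improve on: the machine is modeled as a W x H busy-map, and a job requests a w x h submesh of processors. All names and sizes are illustrative assumptions.

        W, H = 8, 8
        busy = [[False] * W for _ in range(H)]

        def fits(x, y, w, h):
            # True if the w x h submesh with corner (x, y) is entirely free.
            return all(not busy[y + j][x + i] for j in range(h) for i in range(w))

        def allocate(w, h):
            # First-fit scan; returns the base (x, y) of a free w x h submesh.
            for y in range(H - h + 1):
                for x in range(W - w + 1):
                    if fits(x, y, w, h):
                        for j in range(h):
                            for i in range(w):
                                busy[y + j][x + i] = True
                        return (x, y)
            return None   # contiguous allocation failed; size-reduction or
                          # non-contiguous allocation could be attempted instead

        print(allocate(4, 2), allocate(6, 6))   # (0, 0) then (0, 2)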

    Predictability of just in time compilation

    The productivity of embedded software development is limited by the high fragmentation of hardware platforms. To alleviate this problem, virtualization has become an important tool in computer science, and virtual machines are used in a number of subdisciplines ranging from operating systems to processor architecture. Processor virtualization can be used to address the portability problem. While the traditional compilation flow consists of compiling program source code into binary objects that can be natively executed on a given processor, processor virtualization splits that flow in two parts: the first part consists of compiling the program source code into a processor-independent bytecode representation; the second part provides an execution platform that can run this bytecode on a given processor. The second part is done by a virtual machine interpreting the bytecode or by just-in-time (JIT) compiling the bytecodes of a method at run-time in order to improve the execution performance. Many applications feature real-time system requirements. The success of real-time systems relies upon their capability of producing functionally correct results within defined timing constraints. To validate these constraints, most scheduling algorithms assume that the worst-case execution time (WCET) estimation of each task is already known. The WCET of a task is the longest time it takes when it is considered in isolation. Sophisticated techniques are used in static WCET estimation (e.g. to model caches) to achieve both safe and tight estimates. Our work aims at combining the two domains, i.e. using JIT compilation in real-time systems. This is an ambitious goal which requires introducing determinism into many non-deterministic features, e.g. bounding the compilation time and the overhead caused by the dynamic management of the compiled code cache. Due to the limited time of the internship, this report represents a first attempt at such a combination. To obtain the WCET of a program, we have to add the compilation time to the execution time, because the two phases are now mixed. Therefore, one needs to know statically how many times, in the worst case, a function will be compiled. This may seem a simple job, but once we consider resource constraints such as limited memory size and the advanced techniques used in JIT compilation, things become much harder. We suppose that a function is compiled the first time it is used, and that its compiled code is cached in a software cache of limited size. Our objective is to find an appropriate cache structure and replacement policy that reduce the worst-case overhead of compilation.
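
    A toy Python model of the problem being set up: functions are compiled on first use, compiled code lives in a fixed-capacity software cache, and an eviction forces recompilation on the next call, so the replacement policy determines the worst-case number of compilations that must be charged to the WCET. The FIFO policy and sizes below are illustrative assumptions, not the report's chosen design.

        from collections import OrderedDict

        class CodeCache:
            def __init__(self, capacity):
                self.capacity = capacity
                self.cache = OrderedDict()   # function name -> compiled stub
                self.compilations = 0        # the quantity a WCET bound must cover

            def call(self, fn):
                if fn not in self.cache:
                    self.compilations += 1   # compile on miss
                    if len(self.cache) >= self.capacity:
                        self.cache.popitem(last=False)   # FIFO eviction
                    self.cache[fn] = "compiled:" + fn
                return self.cache[fn]

        # Cycling through more functions than the cache holds shows the worst
        # case: every call misses, so compile time is paid on every iteration.
        cc = CodeCache(capacity=2)
        for _ in range(3):
            for fn in ("f", "g", "h"):
                cc.call(fn)
        print(cc.compilations)   # 9: FIFO thrashes on this access pattern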