
    Approximations from Anywhere and General Rough Sets

    Not all approximations arise from information systems. The problem of fitting approximations, subject to some rules (and related data), to information systems in a rough-set scheme of things is known as the \emph{inverse problem}. The inverse problem is more general than the duality (or abstract representation) problem and was introduced by the present author in her earlier papers. From a practical perspective, a few (as opposed to one) theoretical frameworks may be suitable for formulating the problem itself. \emph{Granular operator spaces} have recently been introduced and investigated by the present author in the context of antichain-based and dialectical semantics for general rough sets. The nature of the inverse problem is examined from number-theoretic and combinatorial perspectives in a higher-order variant of granular operator spaces, and some necessary conditions are proved. The results and the novel approach should be useful in a number of unsupervised and semi-supervised learning contexts and algorithms.
    Comment: 20 pages. Scheduled to appear in IJCRS'2017 LNCS proceedings, Springer.
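    For readers unfamiliar with how approximations arise from information systems, the classical Pawlak construction can be sketched in a few lines: objects with identical attribute values form indiscernibility granules, and a target set is approximated from below and above by unions of granules. This is only a minimal illustration of the baseline setting (data and names are invented); the granular operator spaces of the paper generalize well beyond it.

    ```python
    # Classical Pawlak rough approximations from a toy information system.
    # Objects with identical attribute values are indiscernible; the lower
    # (resp. upper) approximation of a target set X is the union of granules
    # contained in (resp. meeting) X.

    def granules(objects, attrs):
        """Partition objects into indiscernibility classes."""
        classes = {}
        for obj, vals in objects.items():
            key = tuple(vals[a] for a in attrs)
            classes.setdefault(key, set()).add(obj)
        return list(classes.values())

    def approximations(objects, attrs, target):
        lower, upper = set(), set()
        for g in granules(objects, attrs):
            if g <= target:
                lower |= g      # granule lies entirely inside X
            if g & target:
                upper |= g      # granule meets X
        return lower, upper

    # Toy information system: object -> attribute values (illustrative only).
    info = {
        "o1": {"colour": "red",  "size": "big"},
        "o2": {"colour": "red",  "size": "big"},
        "o3": {"colour": "blue", "size": "big"},
    }
    lo, up = approximations(info, ["colour", "size"], {"o1", "o3"})
    # o1 and o2 are indiscernible, so o1 cannot enter the lower approximation.
    ```

    The inverse problem asks, roughly, when a given pair of approximation operators can be realized by some such information system at all.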

    An Effective Cache Replacement Policy For Multicore Processors

    Cache management is increasingly important on multicore systems, since the available cache space is shared by a growing number of cores. Optimal caching is generally impossible at the system or hardware level; the goal of cache management is therefore to maximize data reuse. In this paper, a collaborative caching system is implemented that allows a program to choose different caching methods (generally LRU or MRU) for its data. We have developed an LRU-MRU cache replacement policy for multicore processors, simulated using the Multi2Sim simulator. We compare two parameters of the multicore processor (instructions committed per cycle, IPC, and branch prediction accuracy) under different cache replacement policies: LRU, MRU, FIFO, Random, and LRU-MRU. We use two benchmark suites, SPEC2006 and Splash-2, to see how our LRU-MRU replacement behaves and whether its IPC is lower or higher than that of LRU, MRU, FIFO, and Random. We find that the Random replacement policy performs worst on the multicore processor, while LRU and MRU perform almost identically. Here, performance refers to the IPC value: the higher the IPC, the better the performance of the replacement policy used.
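    The paper's policy operates at the hardware level inside Multi2Sim, but the core idea of collaborative LRU-MRU caching can be sketched in software: each access carries a hint, and MRU-hinted (streaming) lines are sacrificed before LRU-managed ones. Class and method names below are illustrative, not the authors' implementation.

    ```python
    from collections import OrderedDict

    class LruMruCache:
        """Toy fully-associative cache where each access is hinted 'lru'
        (keep hot) or 'mru' (evict soon), mimicking collaborative caching
        in which the program marks streaming data for MRU placement."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.lines = OrderedDict()   # key -> hint; order = recency (oldest first)
            self.hits = self.misses = 0

        def access(self, key, hint="lru"):
            if key in self.lines:
                self.hits += 1
                self.lines.move_to_end(key)    # refresh recency on a hit
                self.lines[key] = hint
                return True
            self.misses += 1
            if len(self.lines) >= self.capacity:
                self._evict()
            self.lines[key] = hint
            return False

        def _evict(self):
            # Prefer evicting an MRU-hinted (streaming) line; fall back to LRU.
            for key in reversed(self.lines):
                if self.lines[key] == "mru":
                    del self.lines[key]
                    return
            self.lines.popitem(last=False)     # evict the oldest line

    cache = LruMruCache(capacity=2)
    cache.access("a")                   # miss
    cache.access("b")                   # miss
    cache.access("stream", hint="mru")  # miss; evicts the LRU line 'a'
    cache.access("a")                   # miss; the MRU-hinted 'stream' is evicted, not 'b'
    cache.access("b")                   # hit: 'b' survived the streaming access
    ```

    The point of the hybrid policy is visible in the last access: marking the streaming line MRU shields the reusable working set from eviction.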

    Single-Pass Memory System Evaluation for Multiprogramming Workloads

    Coordinated Science Laboratory was formerly known as Control Systems Laboratory
    National Science Foundation (NSF) / MIP-8809478
    NCR
    National Aeronautics and Space Administration (NASA) / NASA NAG 1-613
    Office of Naval Research / N00014-88-K-065

    On Reconfigurable On-Chip Data Caches

    Abstract: Cache memory has proven to be the most important technique for bridging the gap between processor speed and memory access time. The advent of high-speed RISC and superscalar processors, however, calls for small on-chip data caches. Due to physical limitations, these should be simply designed and yet yield good performance. In this paper, we present new cache architectures that address the problems of conflict misses and non-optimal line sizes in the context of direct-mapped caches. Our cache architectures can be reconfigured by software in a way that matches the reference pattern of array data structures. We show that the implementation cost of the reconfiguration capability is negligible. We also show simulation results that demonstrate significant performance improvements for both methods.
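    The conflict-miss problem that motivates reconfigurable mappings is easy to reproduce in a toy model: in a direct-mapped cache, two arrays whose base addresses differ by a multiple of the cache size map to the same sets, so interleaved accesses evict each other on every reference. The parameters below are illustrative, not taken from the paper.

    ```python
    # Toy direct-mapped cache showing conflict misses for an interleaved
    # two-array access pattern (illustrative constants, not the paper's).

    LINE_SIZE = 16    # bytes per cache line
    NUM_SETS  = 64    # direct-mapped: exactly one line per set

    def simulate(addresses):
        tags = [None] * NUM_SETS
        misses = 0
        for addr in addresses:
            index = (addr // LINE_SIZE) % NUM_SETS    # set selection
            tag = addr // (LINE_SIZE * NUM_SETS)
            if tags[index] != tag:
                tags[index] = tag                     # fill on miss
                misses += 1
        return misses

    cache_bytes = LINE_SIZE * NUM_SETS    # 1024-byte cache

    # Array b starts exactly one cache size after array a, so a[i] and
    # b[i] always collide on the same set: every access is a miss.
    interleaved = []
    for i in range(64):                   # 4-byte elements
        interleaved.append(i * 4)                 # a[i]
        interleaved.append(cache_bytes + i * 4)   # b[i]

    thrash_misses = simulate(interleaved)            # all 128 accesses miss
    stream_misses = simulate([i * 4 for i in range(64)])  # only 16 cold misses
    ```

    A software-reconfigurable mapping, as proposed in the paper, aims to break exactly this kind of pathological set aliasing for array reference patterns.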

    The Effect of Code Expanding Optimizations on Instruction Cache Design

    Coordinated Science Laboratory was formerly known as Control Systems Laboratory
    National Science Foundation / MIP-8809478
    NCR
    AMD 29K Advanced Processor Development Division
    National Aeronautics and Space Administration / NASA NAG 1-613
    N00014-91-J-128

    A study on cache replacement policies in level 2 cache for multicore processors

    Cache memory performance is an important factor in determining overall processor performance. In a multicore processor, concurrent processes residing in main memory use a shared cache. The shared cache reduces access time, bus overhead, and delay, and improves processor utilization. We have analyzed the effect of shared cache size and associativity on hit rate, effective access rate, and efficiency in single-, dual-, and quad-core processors using Multi2Sim with the Splash-2 benchmark, and we propose a novel cache configuration for single-, dual-, and quad-core systems. This research also suggests a new Bit set insertion replacement policy for thrashing access patterns on dual- and quad-core systems. With the Bit set insertion policy and a shared cache of size 128 KB, the miss rate is reduced by 15% for the FFT application and by 20% for LU decomposition compared with the Least Recently Used (LRU) replacement policy in a dual-core system. For a quad-core system with a shared cache of size 512 KB, the miss rate is reduced by 21% for FFT and 24% for LU decomposition over LRU, again using Multi2Sim with Splash-2.
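    The abstract does not specify the mechanics of the Bit set insertion policy, but the general principle behind insertion-position policies for thrashing workloads can be illustrated with a well-known alternative: inserting new lines at the LRU rather than the MRU position (LIP-style insertion) lets part of a cyclic working set survive in the cache. This sketch is a generic illustration, not the authors' policy.

    ```python
    # LRU vs LRU-position insertion (LIP-style) on a thrashing pattern:
    # a cyclic working set one line larger than the cache. Illustrates
    # why insertion position matters; NOT the paper's Bit set insertion.

    def run(capacity, trace, insert_at_mru=True):
        cache, hits = [], 0            # index 0 = LRU end, last index = MRU end
        for key in trace:
            if key in cache:
                hits += 1
                cache.remove(key)
                cache.append(key)      # promote to MRU on a hit
            else:
                if len(cache) >= capacity:
                    cache.pop(0)       # evict the LRU line
                if insert_at_mru:
                    cache.append(key)      # classic LRU insertion
                else:
                    cache.insert(0, key)   # LIP: insert at the LRU position
        return hits

    trace = list(range(5)) * 10        # cyclic working set of 5 lines, cache of 4
    lru_hits = run(4, trace, insert_at_mru=True)   # pure thrashing: no hits
    lip_hits = run(4, trace, insert_at_mru=False)  # part of the set is retained
    ```

    Under classic LRU the cyclic pattern evicts every line just before its reuse, while LRU-position insertion sacrifices the incoming line instead and preserves most of the working set, which is the same intuition behind thrash-resistant insertion policies in general.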